--- abstract: 'The fundamental problem of our interest here is soft-input soft-output multiple-input multiple-output (MIMO) detection. We propose a method, referred to as subspace marginalization with interference suppression (SUMIS), that yields unprecedented performance at low and fixed (deterministic) complexity. Our method provides a well-defined tradeoff between computational complexity and performance. Apart from an initial sorting step consisting of selecting channel-matrix columns, the algorithm involves no searching or algorithmic branching; hence it has a completely predictable run-time and allows for a highly parallel implementation. We numerically assess the performance of SUMIS in different practical settings: full/partial channel state information, sequential/iterative decoding, and low/high rate outer codes. We also comment on how the SUMIS method performs in systems with a large number of transmit antennas.' bibliography: - 'refs.bib' title: 'SUMIS: Near-Optimal Soft-In Soft-Out MIMO Detection With Low and Fixed Complexity' --- Introduction {#sec:intro} ============ We consider multiple-input multiple-output (MIMO) systems, which are known to have high spectral efficiency in rich scattering environments [@Telatar] and high link robustness. A major difficulty in the implementation of MIMO systems is the detection (signal separation) problem, which is generally computationally expensive to solve. This problem can be especially pronounced in large MIMO systems, see [@RusekPersson; @Srinidhi; @SomDatta; @DattaSrinidhi; @LiMurch; @DattaKumar] and the references therein. The complexity of the optimal detector, which computes the log-likelihood ratio (LLR) values exactly and therefore solves the MIMO detection problem optimally, grows exponentially with the number of transmit antennas and polynomially with the size of the signal constellation. The main difficulty in MIMO detection is the occurrence of ill-conditioned channels.
Suboptimal and fast methods, such as zero-forcing, perform well only for well-conditioned channels. With coded transmission, if the channel is ill-conditioned and there is no coding across multiple channel realizations, using the zero-forcing detector can result in large packet error probabilities. Dealing with ill-conditioned channels is difficult and requires sophisticated techniques. Many different methods have been proposed during the past two decades that aim to achieve, with reduced computational complexity, the performance of the optimal detector [@Larsson; @BaiChoiBook; @WubbenSeethaler; @Elkhazin; @LarssonJalden; @BarberoThompson; @ChoiShim; @ViterboBoutros; @GuoNilsson; @Papa; @StuderBolcskei; @ChoiSinger]. A concise exposition of some state-of-the-art MIMO detection techniques can be found in [@Larsson] and a more extensive overview is given in [@BaiChoiBook]. Detection methods based on lattice reduction are explained in some detail in [@WubbenSeethaler]. Most of today’s state-of-the-art detectors provide the possibility of trading complexity for performance via the choice of some user parameter. One important advantage of such detectors is that this parameter can be adaptively chosen depending on the channel conditions, in order to improve the overall performance [@Cirkic; @NikitopoulosAscheid]. There are two main categories of MIMO detectors. The first consists of detectors whose complexity (run-time) depends on the particular channel realization. This category includes, in particular, methods that perform a tree-search. Notable examples include the sphere-decoding (SD) aided max-log method and its many relatives [@ViterboBoutros; @GuoNilsson; @ChoiShim; @Papa; @StuderBolcskei]. A recent method in this category is the reduced dimension maximum-likelihood search (RD-MLS) [@ChoiShim; @BaiChoiBook]. Unfortunately, these methods have an exponential worst-case complexity unless a suboptimal termination criterion is used.
The other category of detectors consists of methods that have a fixed (deterministic) complexity that does not depend on the channel realization. These methods are more desirable from an implementation point of view, as they eliminate the need for data buffers and over-dimensioning (for the worst-case) of the hardware. Examples of such detectors are the reduced dimension maximum a posteriori (RDMAP) method [@Elkhazin], the partial marginalization (PM) method [@LarssonJalden], and the fixed-complexity SD (FCSD) aided max-log method [@BarberoThompson]. These fixed-complexity detectors provide a simple and well-defined tradeoff between computational complexity and performance, they have a fixed and fully predictable run-time, and they are highly parallelizable. We will discuss existing detectors in more detail in Section \[ssec:stateof\]. #### Summary of Contributions {#summary-of-contributions .unnumbered} We propose a new method that is inspired by the ideas in [@ChoiShim; @LarssonJalden; @BarberoThompson; @Elkhazin] of partitioning the original problem into smaller subproblems. As in the PM and RDMAP methods, we perform marginalization over a few of the bits when computing the LLR values. The approximate LLRs that enter the marginalization are much simpler than those in PM, and this substantially reduces the complexity of our algorithm, as will be explained in more detail in . In addition, we suppress the interference on the considered subproblems (subspaces) by performing soft interference suppression (SIS). The core idea behind SIS germinated in [@Taylor] in a different context than MIMO detection, but the use of SIS as a constituent of our proposed algorithm is mainly inspired by the work in [@Taylor; @WangPoor; @Schniter; @ChoiCheong]. 
The main differences between the SIS procedure used in our algorithm and that in [@WangPoor; @Schniter; @ChoiCheong] are: (i) we allow for the signal and interference subspaces to have varying dimensionality; (ii) we perform the SIS in a MIMO setting internally without the need for a priori information from the decoder, as opposed to [@WangPoor; @Elkhazin]; and (iii) we do not iterate the internal LLR values more than once, nor do we ignore the correlation between the interfering terms over the different receive antennas as in [@Schniter; @ChoiCheong]. We refer to our method as *subspace marginalization with interference suppression* (SUMIS). During the review of this paper, reference [@ChoiLee] was brought to our attention. Reference [@ChoiLee], published a year after SUMIS was first presented [@CirkicLarsson], discusses another variation on the theme where the interference is suppressed successively, in contrast to SUMIS, which does this in parallel. SUMIS was developed with the primary objective of not requiring iteration with the channel decoder, as this increases latency and complexity. However, SUMIS takes soft input, so the overall decoding performance can be improved by iterating with the decoder. The ideas behind SUMIS are fundamentally simple and allow for massively parallel algorithmic implementations. As demonstrated in Section \[sec:numres\], the computational complexity of SUMIS is extremely low, and the accuracy is close to or better than that of max-log. SUMIS works well for both under- and over-determined MIMO systems. This paper extends our conference paper [@CirkicLarsson] by including: a detailed complexity analysis, techniques for exploiting soft input (non-uniform a priori probabilities), and techniques for dealing with higher-order constellations and imperfect channel state information. In addition, we discuss and exemplify the applicability of SUMIS to systems with a large number of antennas.
Preliminaries {#sec:prel} ============= We consider the real-valued MIMO-channel model $$\label{eq:model} \bsy=\bsH\bss+\bse,$$ where $\bsH\in\bbR^{\Nr\times\Nt}$ is the MIMO channel matrix and $\bss\in\calS^{\Nt}$ is the transmitted vector. We assume that $\calS=\{-1,+1\}$ is a binary phase-shift keying (BPSK) constellation, hence referring to a “symbol” is equivalent to referring to a “bit”. With some extra expense of notation, as will be clear later, it is straightforward to extend all results that we present to higher order constellations. Furthermore, $\bse\in\bbR^{\Nr}\sim\calN% (\bszero,\frac{\No}{2}\bsI)$ denotes the noise vector and $\bsy\in\bbR^{\Nr}$ is the received vector. The channel is perfectly known to the receiver unless stated otherwise. Also, hereinafter we assume that $\Nr\geq\Nt$ since this is typical in practice and simplifies the mathematics in the paper. Note that unlike many competing methods, SUMIS does not require $\Nr\geq\Nt$—but this assumption is made to render the comparisons fair. Throughout we think of (\[eq:model\]) as an *effective channel model* for the MIMO transmission. In particular, if pure spatial multiplexing is used, the matrix $\bsH$ in (\[eq:model\]) just comprises the channel gains between all pairs of transmit and receive antennas. If linear precoding is used at the transmitter, then $\bsH$ represents the combined effect of the precoding and the propagation channel. In the latter case, “transmit antennas” should be interpreted as “simultaneously transmitted streams”. 
Note that with separable complex symbol constellations (quadrature amplitude modulation), every complex-valued model $$\begin{aligned} \label{eq:cxmodel} \bsy_\cx&=\bsH_\cx\bss_\cx+\bse_\cx,&\bse_\cx&\sim\calC\calN(0,\No\bsI),\end{aligned}$$ where $(\cdot)_\cx$ denotes the complex-valued counterparts of , can be posed as a real-valued model of the form by setting \[eq:cxtorlmodel\] $$\begin{aligned} \hfill\bsy&=\begin{bmatrix}\Re{\bsy_\cx}\\\Im{\bsy_\cx}\end{bmatrix},& \hfill\bss&=\begin{bmatrix}\Re{\bss_\cx}\\\Im{\bss_\cx}\end{bmatrix},& \hfill\bse&=\begin{bmatrix}\Re{\bse_\cx}\\\Im{\bse_\cx}\end{bmatrix}, \end{aligned}$$ and $$\bsH=\begin{bmatrix}\Re{\bsH_\cx}&-\Im{\bsH_\cx}\\\Im{\bsH_\cx}&\Re{\bsH_\cx}\end{bmatrix}.$$ We use the real-valued model throughout as we will later be partitioning $\bsH$ into submatrices, and then the real-valued model offers some more flexibility: selecting one column in the representation means simultaneously selecting two columns in the representation (via ), which is more restrictive. Simulation results, not presented here due to lack of space, confirmed this, showing a performance advantage in working with the real-valued model. However, the difference is not major in most relevant cases. Disregarding this technicality, it is straightforward to re-derive all results in the paper using a complex-valued model instead. That could be useful, for example, if $M$-ary phase-shift keying or some other non-separable signal constellation is used per antenna. Optimal Soft MIMO Detection --------------------------- The optimal soft information desired by the channel decoder is the a posteriori log-likelihood ratio $l(s_i|\bsy)\triangleq\log\big(\frac{P(s_i=+1|\bsy)}{P(s_i=-1|\bsy)}\big)$ where $s_i$ is the $i$th bit of the transmitted vector $\bss$. The quantity $l(s_i|\bsy)$ tells us how likely it is that the $i$th bit of $\bss$ is equal to minus or plus one, respectively.
By marginalizing out all bits except the $i$th bit in $P(\bss|\bsy)$ and using Bayes’ rule, the LLR becomes $$\label{eq:apostllr} \begin{split} l(s_i|\bsy)&=\log\left(\dfrac{\sum_{\bss:s_i(\bss)=+1}P(\bss|\bsy)}{\sum_{\bss:s_i(\bss)=-1}P(\bss|\bsy)}\right)\\ &=\log\left(\dfrac{\sum_{\bss:s_i(\bss)=+1}p(\bsy|\bss)P(\bss)}{\sum_{\bss:s_i(\bss)=-1}p(\bsy|\bss)P(\bss)}\right), \end{split}$$ where the notation $\sum_{\bss:s_i(\bss)=x}$ means the sum over all possible vectors $\bss\in\calS^\Nt$ for which the $i$th bit is equal to $x$. With uniform a priori probabilities, i.e., $P(\bss)=1/2^\Nt$, the LLR can be written as $$\label{eq:llr} \begin{split} l(s_i|\bsy)&=\log\left(\dfrac{\sum_{\bss:s_i(\bss)=+1}p(\bsy|\bss)}{\sum_{\bss:s_i(\bss)=-1}p(\bsy|\bss)}\right)\\ &=\log\left(\frac{\sum_{\bss:s_i(\bss)=+1}\exp{-\frac{1}{\No}\norm{\bsy-\bsH\bss}^2}}% {\sum_{\bss:s_i(\bss)=-1}\exp{-\frac{1}{\No}\norm{\bsy-\bsH\bss}^2}}\right).% \end{split}$$ In and , there are $2^\Nt$ terms that need to be evaluated and added. The complexity of this task is exponential in $\Nt$, and this is what makes MIMO detection difficult. Thus, many approximate methods have been proposed. One very good approximation of is the so-called max-log approximation [@Robertson], $$\label{eq:maxlog} \begin{split} l(s_i|\bsy)\approx\log\left(\frac{\max_{\bss:s_i(\bss)=+1}\exp{-\frac{1}{\No}\norm{\bsy-\bsH\bss}^2}}% {\max_{\bss:s_i(\bss)=-1}\exp{-\frac{1}{\No}\norm{\bsy-\bsH\bss}^2}}\right),% \end{split}$$ where only the largest terms in each sum of are retained, i.e., the terms for which $\norm{\bsy-\bsH\bss}$ is as small as possible. Typically, for numerical stability, sums as in (\[eq:llr\]) are evaluated by repeated use of the Jacobian logarithm: $\log(e^a+e^b)=\mbox{max}(a,b)+\log(1+e^{-|a-b|})$, where the second term can be tabulated as a function of $|a-b|$. Max-log can then be viewed as a special case where the second term in the Jacobian logarithm is neglected.
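To make the computation concrete, the exact LLR above, evaluated with the Jacobian logarithm for numerical stability, can be sketched as follows. This is an illustrative brute-force numpy implementation for BPSK and small $\Nt$ only; the function names are ours and not part of any standard library or of the paper's algorithm:

```python
import itertools

import numpy as np

def jacobian_log(a, b):
    """Stable log(e^a + e^b) = max(a, b) + log(1 + e^{-|a-b|}).
    Also correct when one argument is -inf, since exp(-inf) = 0."""
    return max(a, b) + np.log1p(np.exp(-abs(a - b)))

def exact_llr(y, H, N0):
    """Exact a posteriori LLRs under uniform priors for BPSK,
    by enumerating all 2^Nt hypotheses (exponential in Nt)."""
    Nt = H.shape[1]
    num = np.full(Nt, -np.inf)  # log-sum over hypotheses with s_i = +1
    den = np.full(Nt, -np.inf)  # log-sum over hypotheses with s_i = -1
    for s in itertools.product([-1.0, 1.0], repeat=Nt):
        s = np.array(s)
        metric = -np.linalg.norm(y - H @ s) ** 2 / N0
        for i in range(Nt):
            if s[i] > 0:
                num[i] = jacobian_log(num[i], metric)
            else:
                den[i] = jacobian_log(den[i], metric)
    return num - den
```

Dropping the `log1p` correction term, i.e., keeping only the largest metric in each of the two log-sums, yields the max-log value.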
Note that even though max-log avoids the summation, one needs to search over $2^\Nt$ terms to find the largest ones; hence the exponential complexity remains. Nevertheless, with the max-log approximation, one can make any hard decision detector, such as SD, produce soft values. This has resulted in much of the literature focusing on finding efficient hard decision methods. In this paper, we make a clean break with this philosophy and instead devise a good approximation of the LLRs and directly. In order to explain our proposed method and the competing state-of-the-art methods, for fixed $\ns\in\{1,\dots,\Nt\}$, we define the following partitioning of the model in $$\label{eq:partmodel} \bsy=\bsH\bss+\bse=\underbrace{\left[\wbH\quad\wtH\right]}_{\text{col. permut. of }\bsH}\underbrace{\left[\bar{\bss}^T\;\;\tilde{\bss}^T\right]^T}_{\text{permut. of }\bss}+\;\bse=\wbH\bar{\bss}+\wtH\tilde{\bss}+\bse,$$ where $\wbH\in\bbR^{\Nr\times{\ns}}$, $\wtH\in\bbR^{\Nr\times(\Nt-\ns)}$, $\bar{\bss}\in\calS^{\ns}$ contains the $i$th bit $s_i$ of the original vector $\bss$, and $\tilde{\bss}\in\calS^{\Nt-\ns}$. The choice of partitioning involves the choice of a permutation, and how to make this choice (for $\ns>1$) is not obvious. For each bit in $\bss$, there are $\binom{\Nt-1}{\ns-1}$ possible permutations in . How we perform this partitioning is explained in . Note that for different detectors, the choice of partitioning serves different purposes. Today’s State-of-the-Art MIMO Detectors {#ssec:stateof} --------------------------------------- Note that all of the methods explained in this subsection, except for PM, are designed to deliver hard decisions. These methods can then produce soft decisions by using the max-log approximation. #### The PM Method {#the-pm-method .unnumbered} PM [@LarssonJalden] offers a tradeoff between exact and approximate computation of , via a parameter $r=\ns-1\in\{0,\dots,\Nt-1\}$.
We present the slightly modified version in [@PerssonLarsson] of the original method in [@LarssonJalden], which is simpler than that in [@LarssonJalden] but does not compromise performance. The PM method implements a two-step approximation of . More specifically, in the first step it approximates the sums of that correspond to $\tilde{\bss}\in\calS^{\Nt-\ns}$ with a maximization, $$\label{eq:pmmaxlog} l(s_i|\bsy)\approx\log\left(\frac{\displaystyle\sum_{\bar{\bss}:s_i(\bss)=+1}\max_{\tilde{\bss}}\expp\Big(-\frac{1}{\No}\Vert\bsy-\wbH\bar{\bss}-\wtH\tilde{\bss}\Vert^2\Big)}{\displaystyle\sum_{\bar{\bss}:s_i(\bss)=-1}\max_{\tilde{\bss}}\expp\Big(-\frac{1}{\No}\Vert\bsy-\wbH\bar{\bss}-\wtH\tilde{\bss}\Vert^2\Big)}\right).$$ In the second step, the maximization in is approximated with a linear filter with quantization (clipping), such as the zero-forcing with decision-feedback (ZF-DF) detector [@LarssonJalden]. The ZF-DF method is computationally much more efficient than exact maximization, but it performs well only for well-conditioned matrices $\wtH$. However, the $\max$ problems in are generally well-conditioned since the matrices $\wtH$ are typically tall. For PM, when forming the partitioning in , the original bit-order in $\bss=[s_1,\dots,s_\Nt]^T$ is permuted in in a way such that the condition number of $\wtH$ is minimized, see [@LarssonJalden]. Notably, PM performs ZF-DF aided max-log detection in the special case of $r=0$ and computes the exact LLR values (as defined by ) for $r=\Nt-1$. #### The FCSD Method {#the-fcsd-method .unnumbered} As already noted, SD is a well-known method for computing hard decisions.
All variants of SD have a random complexity (runtime). This is a very undesirable property from a hardware implementation point of view, and this is one of the insights that stimulated the development of the FCSD method [@BarberoThompson] (as well as PM [@LarssonJalden] and our proposed SUMIS). FCSD performs essentially the same procedure as the PM method except that it introduces an additional approximation by employing max-log on the sums in the PM method, i.e., the sums over $\{\bar{\bss}\in\calS^{\ns}:s_i(\bss)=x\}$ in . Hence, instead of summing over $\{\bar{\bss}\in\calS^{\ns}:s_i(\bss)=x\}$ for each $x$ as in PM, it picks the best candidate from $\{\bar{\bss}\in\calS^{\ns}:s_i(\bss)=x\}$ for each $x$. As a result, the FCSD method offers a tradeoff between exact and approximate computation of the max-log problem in . #### The SD Method and Its Soft-Output Derivatives {#the-sd-method-and-its-soft-output-derivatives .unnumbered} The conventional SD method [@Murugan], which also constitutes the core of its many derivatives such as [@WangGiannakis; @ChoiShim; @BaiChoiBook; @StuderBolcskei; @Mennenga; @HochwaldBrink], does not use the partitioned model in . It uses a (sorted) QR-decomposition $\bsH=\bsPi\bsR$, where $\bsPi\in\bbR^{\Nr\times\Nt}$ has orthonormal columns and $\bsR\in\bbR^{\Nt\times\Nt}$ is an upper-triangular matrix, in order to write $\argmin_{\bss:s_i=x}\big\lVert\bsy-\bsH\bss\big\rVert^2=% \argmin_{\bss:s_i=x}\big\lVert\bsPi^T\bsy-\bsR\bss\big\rVert^2$ for each $x$ so that the max-log problem in can be solved by means of a tree-search. This search requires the choice of an initial sphere radius and success is only guaranteed if the initial radius is large enough and if the search is not prematurely terminated [@Murugan]. 
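The identity that licenses the tree-search, $\Vert\bsy-\bsH\bss\Vert^2=\Vert\bsPi^T\bsy-\bsR\bss\Vert^2+c$ with $c$ independent of $\bss$, is easy to check numerically. Below is a small illustrative numpy sketch (our own code, not part of any SD implementation; the plain thin QR here stands in for the sorted QR-decomposition mentioned above):

```python
import numpy as np

def tree_metric(y, H):
    """Thin QR decomposition H = Pi R, with Pi having orthonormal
    columns and R upper triangular. The returned pair (Pi^T y, R)
    defines the triangular search metric ||Pi^T y - R s||^2, which
    differs from ||y - H s||^2 only by an s-independent constant."""
    Pi, R = np.linalg.qr(H)  # 'reduced' mode: Pi is Nr x Nt
    return Pi.T @ y, R
```

Because the offset does not depend on $\bss$, minimizing the triangular metric is equivalent to minimizing the original one, and the triangular structure of $\bsR$ is what allows the metric to be accumulated layer by layer during the tree-search.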
SD has a high complexity for some channel realizations (the average complexity is known to be exponential in $\Nt$ [@JaldenOttersten]), and hence, in practice a stopping criterion is used to terminate the algorithm after a given number of operations. Particularly, SD requires more time to finish for ill-conditioned than for well-conditioned channel realizations. To reduce the negative effect of ill-conditioned matrices, matrix regularization [@Rugini; @StuderBolcskei] or lattice reduction techniques [@WubbenSeethaler] can be applied. The idea of the latter is to apply, after relaxing the boundaries of the signal constellation, a transformation that effectively reduces the condition number of the channel matrix. Using SD to solve naively would require two runs of the procedure per bit, which becomes computationally prohibitive when detecting a vector of bits. We next review some methods that tackle this issue. In the list sphere-decoder [@HochwaldBrink], a list of a fixed number of candidates is stored during the search through the tree (one single tree-search is performed). Then is solved approximately by picking out the minimum-norm candidates in the resulting candidate list instead of in the full multi-dimensional constellation. Unfortunately, this procedure does not guarantee that the signal vector candidates with the smallest norms are included in the resulting list and thus performance is compromised. The list size and the sphere radius must be very carefully chosen. A more sophisticated version of this method can be found in [@Mennenga]. In the repeated tree-search method [@WangGiannakis], the tree-search is performed multiple times but not as many times as in the naive approach. In the first run, the algorithm performs one single tree-search that finds the vector with the smallest norm $\argmin_{\bss}\big\lVert\bsPi^T\bsy-\bsR\bss\big\rVert^2$. 
This in effect finds $\argmin_{\bss:s_i=x}\big\lVert\bsPi^T\bsy-\bsR\bss\big\rVert^2$ for each bit but only for one of the bit hypotheses (either $x=+1$ or $x=-1$). To find the smallest-norm candidates for the counterhypothesis $x^\complement$, a tree-search is performed for each bit to solve $\argmin_{\bss:s_i=x^\complement}\big\lVert\bsPi^T\bsy-\bsR\bss\big\rVert^2$. This method finds the minimums in using only half the number of tree-searches required by the naive approach. In this method, the main disadvantage remains, which is that it may visit the same nodes multiple times. The single-tree-search method [@StuderBolcskei], on the other hand, traverses the multiple trees in parallel instead of in a repeated fashion as is done in [@WangGiannakis]. While doing that, it keeps track of which nodes have been already visited in one search so that they can be skipped in the rest. That way, this method can guarantee to find the minimums in with one extended single tree-search and hence provides clear advantages over methods in both [@HochwaldBrink] and [@WangGiannakis]. #### The RD-MLS Method {#the-rd-mls-method .unnumbered} RD-MLS [@ChoiShim; @BaiChoiBook] carries out the same procedure as FCSD except that it does not perform clipping after the linear filtering. Instead, it uses an SD type of algorithm to perform a reduced tree-search over $\{\bar{\bss}\in\calS^{\ns}:s_i(\bss)=x\}$ for each $x$. Although this method reduces the number of layers in the tree, it does not necessarily improve the conditioning of the reduced problem, as the PM and FCSD methods do. This is so due to the unquantized linear filtering operation that in effect results in performing a projection of the original space (column space of $\bsH$) onto the orthogonal complement of the column space of $\wtH$. Therefore, for an ill-conditioned matrix $\bsH$, even though RD-MLS searches over a reduced space (roughly half the original space dimension [@ChoiShim Sec. 
V], i.e., $\ns\approx\Nt/2$), it is unclear whether the RD-MLS algorithm would visit significantly fewer nodes in the reduced space $\bar{\bss}$ than in the original space $\bss$. The reason is that the RD-MLS will suffer from the same problem in the reduced space as the conventional SD would have in the original space, namely the slow reduction in radius and pruning of nodes that are not of interest. Proposed Soft MIMO Detector (SUMIS) {#sec:sumis} =================================== Our proposed method, SUMIS, consists of two main stages. In stage I, a first approximation to the a posteriori probability of each bit $s_i$ is computed. In stage II, these approximate LLRs are used in an interference suppression mechanism, whereafter the LLR values are calculated based on the resulting “purified” model. To keep the exposition simple, we first present the basic ideas behind SUMIS for the case where $P(\bss)$ is uniform. The extension to non-uniform $P(\bss)$ is treated in . A highly optimized version of SUMIS (which has much lower complexity but sacrifices clarity of exposition) is then presented in . A practical implementation of SUMIS should use the version in . Stage I ------- We start with the partitioned model in $$\label{eq:subsmodel} \bsy=\wbH\bar{\bss}\;+\;\underbrace{\wtH\tilde{\bss}\;+\;\bse}_{\makebox[0mm]{\scriptsize{interference+noise}}}$$ and define an approximate model $\bar{\bsy}\triangleq\wbH\bar{\bss}+\bsn$ where $\bsn$ is a Gaussian stochastic vector $\calN(\bszero,\bsQ)$ with $\bsQ\triangleq\wtH\wtPsi\wtH^T\!\!\!+\!\frac{\No}{2}\bsI$ and $\wtPsi$ is the covariance matrix of $\tilde{\bss}$. Under the assumption that the symbols are independent, $\wtPsi$ is diagonal, and since $P(\bss)$ is uniform, $\wtPsi=\bsI$.
It is important to note that $\bar{\bsy}=\wbH\bar{\bss}+\bsn$ is an approximated model of , which we will use to approximate the probability density function $p(\bsy|\bar{\bss})\approx{}% p(\bar{\bsy}|\bar{\bss})\big|_{\bar{\bsy}=\bsy}$. The approximation consists of considering the interfering terms $\wtH\tilde{\bss}$ as Gaussian. This is a reasonable approximation since each element in $\wtH\tilde{\bss}$ constitutes a sum of variates and thus generally has Gaussian behaviour, especially when the variates are many and independent. This is a consequence of the central limit theorem, see [@Papoulis sec. 8.5] and [@Papoulis fig. 8.4b]. To compute the a posteriori probability $P(s_k|\bsy)$ of a bit $s_k$, which is contained in $\bar{\bss}$, we can marginalize out the remaining bits in $P(\bar{\bss}|\bsy)$. Note that computing $P(\bar{\bss}|\bsy)$ itself requires marginalizing out $\tilde{\bss}$ from $P(\bss|\bsy)$, which is computationally very burdensome. However, with our proposed approximation, we can write $$P(\bar{\bss}|\bsy)\propto{p}(\bsy|\bar{\bss})P(\bar{\bss})\propto{}% p(\bsy|\bar{\bss})\approx{}p(\bar{\bsy}|\bar{\bss})\big|_{\bar{\bsy}=\bsy},$$ and therefore approximate the a posteriori probability function $P(s_k|\bsy)$ with the function $$\label{eq:skprob} P(s_k=s|\bar{\bsy})\big|_{\bar{\bsy}=\bsy}\propto% \sum_{\bar{\bss}:s_k=s}p(\bar{\bsy}|\bar{\bss})\big|_{\bar{\bsy}=\bsy}.$$ Note that the number of terms in the summation over $\bar{\bss}:s_k=s$ is $2^{\ns-1}$, which is significantly smaller than the number of terms that need to be added when evaluating $P(s_k|\bsy)$ exactly. 
Using the following operator $\norm{\bsx}^2_\bsQ\triangleq\bsx^T\bsQ^{-1}\bsx$ for some vector $\bsx\in\bbR^\Nr$, we write $$\label{eq:prbybarsbar} p(\bar{\bsy}|\bar{\bss})=\frac{1}{\sqrt{(2\pi)^\Nr|\bsQ|}}% \exp{-\frac{1}{2}\norm{\bar{\bsy}-\wbH\bar{\bss}}^2_\bsQ}$$ and due to the assumption that $\calS$ is BPSK, we can perform the marginalization in in the LLR domain as (after inserting $\bsy$ in place of $\bar{\bsy}$ in ) $$\label{eq:skprobllr} \lambda_k\triangleq\log\left(\frac% {\sum_{\bar{\bss}:s_k=+1}\exp{-\frac{1}{2}\norm{\bsy-\wbH\bar{\bss}}^2_\bsQ}} {\sum_{\bar{\bss}:s_k=-1}\exp{-\frac{1}{2}\norm{\bsy-\wbH\bar{\bss}}^2_\bsQ}}\right).$$ The summations in can be computed efficiently, with good numerical stability, and using low-resolution fixed-point arithmetic via repeated use of the Jacobian logarithm. The a posteriori probabilities of the remaining elements in $\bss$ are approximated analogously to - by simply choosing different partitionings (permutations) of $\bsH$ and $\bss$ such that the bit of interest is in $\bar{\bss}$. The main purpose of stage I is to reduce the impact of the interfering term $\wtH\tilde{\bss}$. For this purpose, we compute the conditional expected value of bit $s_k$ approximately using the function $P(s_k=s|\bar{\bsy})$, $$\label{eq:skexpect} \begin{split} \Exp{s_k|\bsy}&\triangleq\sum_{s\in\calS}sP(s_k=s|\bsy)\approx\sum_{s\in\calS}sP(s_k=s|\bar{\bsy})\bigg|_{\bar{\bsy}=\bsy}\\ &=\frac{-1}{1+e^{\lambda_k}}+\frac{1}{1+e^{-\lambda_k}}=\tanh\big(\textstyle\frac{\lambda_k}{2}\big). \end{split}$$ Stage I is performed for all bits $s_k$ in $\bss$, i.e., $k=1,\dots,\Nt$. For higher-order constellations, this stage would be performed symbol-wise rather than bit-wise as presented above. This is the reason for using the index $k$ in place of index $i$ in the description here.
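Stage I, for one bit and a given partitioning, thus amounts to a small $\bsQ$-weighted marginalization. The following numpy sketch (illustrative code with our own names, not the optimized algorithm; it assumes uniform priors so that $\wtPsi=\bsI$) computes $\lambda_k$ and the soft symbol $\tanh(\lambda_k/2)$:

```python
import itertools

import numpy as np

def stage1_soft_symbol(y, H_bar, H_tilde, N0, k_local):
    """Stage-I approximate LLR lambda_k and soft symbol E[s_k|y] for the
    bit at position k_local of s_bar (BPSK, uniform priors, Psi_tilde = I)."""
    Nr, ns = H_bar.shape
    Q = H_tilde @ H_tilde.T + (N0 / 2) * np.eye(Nr)  # interference-plus-noise cov.
    Q_inv = np.linalg.inv(Q)
    num, den = -np.inf, -np.inf
    for s_bar in itertools.product([-1.0, 1.0], repeat=ns):
        s_bar = np.array(s_bar)
        d = y - H_bar @ s_bar
        metric = -0.5 * d @ Q_inv @ d  # -(1/2) ||y - H_bar s_bar||_Q^2
        if s_bar[k_local] > 0:
            num = np.logaddexp(num, metric)  # Jacobian-logarithm accumulation
        else:
            den = np.logaddexp(den, metric)
    lam = num - den
    return lam, np.tanh(lam / 2.0)
```

Running this for every bit $k=1,\dots,\Nt$, each with its own partitioning, produces the soft symbols that stage II subtracts from $\bsy$.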
Note that the soft MMSE procedure computes bit-wise with $\ns$ set to one, i.e., soft MMSE essentially is a special case of SUMIS when $\ns=1$ and stage II is disabled. One can see this equivalence by fixing $\ns=1$ and comparing the approximation steps that we take prior to , starting from the exact LLR calculation, with those that soft MMSE takes. Stage II: Purification ---------------------- For each bit $s_i$, the interfering vector $\tilde{\bss}$ in is suppressed using $$\label{eq:suppmodel} \bsy'\triangleq\bsy-\wtH\Exp{\tilde{\bss}|\bsy}\!=\!\wbH\bar{\bss}+% \underbrace{\wtH(\tilde{\bss}-\Exp{\tilde{\bss}|\bsy})+\bse}_{\text{interference+noise}}\approx\!\wbH\bar{\bss}+\bsn',$$ where $\bsn'\sim\calN\big(\bszero,\bsQ'\big)$ with $\bsQ'\triangleq\wtH\wtPhi\wtH^T\!\!\!+\!\frac{\No}{2}\bsI$ and $\wtPhi$ is the conditional covariance matrix of $\tilde{\bss}$. Note that it is natural to suppress the interference by subtracting its conditional mean, since this removes the bias that the interference causes. Under the approximation that the elements in $\tilde{\bss}$ are independent conditioned on $\bsy$, we have that $$\wtPhi=\Exp{\diag(\tilde{\bss})^2\big|\bsy}-\Exp{\diag(\tilde{\bss})\big|\bsy}^2,$$ where the operator $\diag(\cdot)$ takes a vector of elements as input and returns a diagonal matrix with these elements on its diagonal, and the notation $\bsA^2$ means $\bsA\bsA$ for some square matrix $\bsA$. Since $\calS=\{-1,+1\}$, we get $$\wtPhi=\bsI-\diag(\Exp{\tilde{\bss}|\bsy})^2.$$ After the interfering vector $\tilde{\bss}$ is suppressed and the model is “purified”, we compute the LLRs. The LLRs are computed by performing a full-blown marginalization in over the corresponding $\ns$-dimensional subspace $\bar{\bss}$ using the purified approximate model in .
Hence, the LLR value we compute for the $i$th bit is $$\label{eq:sumisllr} l(s_i|\bsy)\approx\log\left(\!\frac% {\sum_{\bar{\bss}:s_i(\bss)=+1}\exp{\!-\frac{1}{2}\norm{\bsy'-\wbH\bar{\bss}}^2_{\bsQ'}}}% {\sum_{\bar{\bss}:s_i(\bss)=-1}\exp{\!-\frac{1}{2}\norm{\bsy'-\wbH\bar{\bss}}^2_{\bsQ'}}}% \!\right).$$ For higher-order constellations, this stage is performed bit-wise; hence we use the index $i$. Choosing the Permutations {#ssec:sumisperms} ------------------------- The optimal permutation that determines $\wbH$ and $\wtH$ would be the one that minimizes the probability of a decoding error. This permutation is hard to find, as it is difficult to derive tractable expressions for the probability of decoding error. There are many possible heuristic ways to choose the permutations. For instance, in the PM and FCSD methods, the aim is to find a permutation such that the condition number, i.e., the ratio between the largest and smallest singular values, of $\wtH$ is minimized. The reason for this choice is that the matrix $\wtH$ determines the conditioning of the subproblem in PM, which in turn is solved by applying a zero-forcing filter. In SUMIS, by contrast, we aim to choose the partitioning such that for a bit $s_k$ in $\bss$, the interfering vector $\tilde{\bss}$ in has as little effect on the useful signal vector $\bar{\bss}$ as possible. This in essence means that we would like the interference to lie in the null-space of the useful signal, i.e., the inner product between the columns in $\wbH$ and those in $\wtH$ should be as small as possible. In the extreme case when the column spaces of $\wbH$ and $\wtH$ are orthogonal, the marginalization over $\bar{\bss}$ and $\tilde{\bss}$ decouples and SUMIS would become optimal. In this respect, SUMIS fundamentally differs from PM and FCSD, which instead would become optimal if the permutation could be chosen so that $\wtH$ had orthogonal columns.
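For completeness of the illustration, stage II can be sketched in the same style (again our own illustrative code, not the optimized algorithm): the stage-I soft symbols for the interfering bits are subtracted from $\bsy$, the covariance is shrunk via $\wtPhi=\bsI-\diag(\Exp{\tilde{\bss}|\bsy})^2$, and the final LLR is obtained by marginalizing over $\bar{\bss}$:

```python
import itertools

import numpy as np

def stage2_llr(y, H_bar, H_tilde, e_tilde, N0, i_local):
    """Stage-II 'purified' LLR for the bit at position i_local of s_bar.
    e_tilde holds the stage-I soft symbols E[s_tilde | y] (BPSK)."""
    Nr, ns = H_bar.shape
    y_p = y - H_tilde @ e_tilde                         # subtract interference mean
    Phi = np.eye(len(e_tilde)) - np.diag(e_tilde) ** 2  # residual cov. of s_tilde
    Q_p = H_tilde @ Phi @ H_tilde.T + (N0 / 2) * np.eye(Nr)
    Q_p_inv = np.linalg.inv(Q_p)
    num, den = -np.inf, -np.inf
    for s_bar in itertools.product([-1.0, 1.0], repeat=ns):
        s_bar = np.array(s_bar)
        d = y_p - H_bar @ s_bar
        metric = -0.5 * d @ Q_p_inv @ d  # -(1/2) ||y' - H_bar s_bar||_{Q'}^2
        if s_bar[i_local] > 0:
            num = np.logaddexp(num, metric)
        else:
            den = np.logaddexp(den, metric)
    return num - den
```

The more confident stage I is (soft symbols near $\pm1$), the closer $\wtPhi$ is to zero, and the more the model behaves as if the interference had been removed exactly.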
We base our partitioning on $\bsH^T\bsH$, which has the structure $$\bsH^T\bsH=\begin{bmatrix}\sigma_1^2&\rho_{1,2}&\dots\\\rho_{1,2}&\sigma_2^2&\\\vdots&&\ddots\end{bmatrix}.$$ For the $k$th column of $\bsH^T\bsH$ (corresponding to the $k$th bit in $\bss$) we pick the $\ns-1$ indices that correspond to the largest values of $|\rho_{k,\ell}|$. Then, these indices along with the index $k$ specify the columns from $\bsH$ that are placed in $\wbH$. The rest of the columns are placed in $\wtH$. Therefore, the choice of permutation will depend on $\wbH^T\wbH$ (the “power” of $\wbH$) and the “correlation” $\wbH^T\wtH$. Note also that the matrix $\bsH^T\bsH$ is used in the other SUMIS stages, and therefore evaluating it here does not add anything to the total computational complexity. Hence, complexity-wise, computing the permutation comes nearly for free. Also, as shown in the numerical results, SUMIS with the proposed permutation performs close to optimally. Summary ------- The steps of our SUMIS method are summarized in in the form of generic pseudo-code. Via the adjustable subspace dimensionality, i.e., the parameter $\ns$, SUMIS provides a simple and well-defined tradeoff between computational complexity and detection performance. In the special case when $\ns=\Nt$, there is no interfering vector $\tilde{\bss}$ and SUMIS performs exact LLR computation. At the other extreme, if $\ns=1$, SUMIS becomes the soft MMSE method with the additional step of model purification. The complexity of SUMIS is derived in , and the complexity of some of its competitors is summarized in . The complexity is measured in terms of elementary operations bundled together: additions, subtractions, multiplications, and divisions.
Further, it is divided into two parts: one part representing calculations done once for each channel matrix (“$\bsy$-independent”) and one part to be done for each received $\bsy$-vector (“$\bsy$-dependent”). We can see that PM requires approximately $(2\Nt^3+4\Nt^2)M^\ns$ operations in the $\bsy$-dependent part only, and max-log (via SD) requires $3(\Nr+\Nt)\Nt^2$ in the initialization stage only. Thus, SUMIS provides clear complexity savings and it does so, as we will see in , at the same time as it offers significant performance gains over these competing methods.

  Det. method        $\bsy$-independent             $\bsy$-dependent
  ------------------ ------------------------------ ---------------------------------
  SUMIS (proposed)   $\Nr\Nt^2+\Nt^3+2\ns^2\Nt^2$   $\Nt^3+2\Nr\Nt+(2\ns^2+6)\Nt^2$
  SD aided max-log   $3(\Nr+\Nt)\Nt^2$              
  PM                 $\Nr\Nt^2+\Nt^3$               $(2\Nt^3+4\Nt^2)M^\ns$
  soft MMSE          $\Nr\Nt^2+\Nt^3$               $2\Nt(\Nr+\Nt)$
  max-log            $3\Nt{M}^\Nt$                  $\Nt{M}^\Nt$
  exact LLR          $3\Nt{M}^\Nt$                  $\Nt{M}^\Nt$

Non-Uniform A Priori Probabilities {#sec:nuniprob}
==================================

Algorithm \[alg:sumis\] in can be directly extended to the case of non-uniform $P(\bss)$. The details for each step are given as follows.

Stage I
-------

Since we have a priori information on the symbols, we can purify the model already in this stage and suppress the interfering subspace $\tilde{\bss}$. First, we evaluate the expected value $\Exp{s_k}\triangleq\sum_{s\in\calS}sP(s_k=s)$ and the purified received data $$\bsy-\wtH\Exp{\tilde{\bss}}=\wbH\bar{\bss}+%
\underbrace{\wtH(\tilde{\bss}-\Exp{\tilde{\bss}})+\bse}_{\text{interference+noise}},$$ where the “interference+noise” is, as in in the approximate model $\bar{\bsy}=\wbH\bar{\bss}+\bsn$, approximated to be $\calN\big(\bszero,\bsQ\big)$ where now $\wtPsi$ in $\bsQ$ is not necessarily equal to the identity matrix.
More precisely, under the restriction that $\calS=\{-1,+1\}$ and the assumption that the bits $s_k$ are independent, we get $$\wtPsi=\bsI-\diag(\Exp{\tilde{\bss}})^2.$$ We can approximate the a posteriori probability $P(s_k=s|\bsy)$, analogously to , with $$\label{eq:skprobiter}
P(s_k=s|\bar{\bsy})\propto\sum_{\bar{\bss}:s_k=s}p(\bar{\bsy}|\bar{\bss})P(\bar{\bss}).$$ Using , we can approximate the expectation of $s_k$ conditioned on $\bsy$ in the same manner as in , i.e., $$\begin{aligned}
\label{eq:skexpectiter}
&\Exp{s_k|\bsy}\approx\tanh\big(\textstyle\frac{\lambda_k}{2}\big),%
&\lambda_k=\log\left(\frac{P(s_k=+1|\bar{\bsy})}{P(s_k=-1|\bar{\bsy})}\right)%
\bigg|_{\bar{\bsy}=\bsy}.\end{aligned}$$ Similarly to stage I in for higher-order constellations, this stage is performed symbol-wise; hence we use here again the index $k$. Note that for BPSK constellations, the procedure in [@Elkhazin] is equivalent to stage I here, i.e., the method of [@Elkhazin] is a special case of SUMIS when stage II is disabled.

Stage II
--------

In this stage, exactly the same procedure is performed as in stage II in , but with two minor modifications: first, the model is purified using instead of , and second, the LLR value of the $i$th bit is computed using $$\label{eq:sumisllriter}
l(s_i|\bsy)\approx\log\left(\!\frac%
{\sum_{\bar{\bss}:s_i(\bss)=+1}\exp{\!-\frac{1}{2}\norm{\bsy'\!-\!\wbH\bar{\bss}}^2_{\bsQ'}}P(\bar{\bss})}%
{\sum_{\bar{\bss}:s_i(\bss)=-1}\exp{\!-\frac{1}{2}\norm{\bsy'\!-\!\wbH\bar{\bss}}^2_{\bsQ'}}P(\bar{\bss})}%
\!\right).$$ For higher order constellations, this stage is performed bit-wise; hence the index $i$.

Imperfect Channel State Information {#sec:icsi}
===================================

In practice, the receiver does not have perfect knowledge of $\bsH$.
Typically, the receiver then forms an estimate of the channel, based on a known transmitted pilot matrix $\bss^\Ntrvec\triangleq[\bss_1\dots\bss_\Ntr]$ and the corresponding received matrix $\bsy^\Ntrvec\triangleq[\bsy_1\dots\bsy_\Ntr]$. The so-obtained estimated channel matrix will not be perfectly accurate, and the estimation error should be taken into account when computing the LLRs. To optimally account for the imperfect knowledge of the channel, $P(\bss|\bsy)$ (or $P(\bss|\bsy,\bsH)$ if the dependence on $\bsH$ is spelled out explicitly) should be replaced with $P(\bss|\bsy,\bsy^\Ntrvec,\bss^\Ntrvec)$ in . As an approximation to the resulting optimal detector, instead of working with $P(\bss|\bsy,\bsy^\Ntrvec,\bss^\Ntrvec)$, $\bsH$ may be replaced with an estimate $\whH$ of $\bsH$ in $P(\bss|\bsy,\bsH)$. The so-obtained detector is called *mismatched*, and it is generally not optimal, except in some special cases [@TariccoBiglieri]. We next extend SUMIS to take channel estimation errors into account, i.e., to use $P(\bss|\bsy,\bsy^\Ntrvec,\bss^\Ntrvec)$ in . We model the channel estimate $\whH$, when obtained using the training data $\bss^\Ntrvec$ and $\bsy^\Ntrvec$, with $$\label{eq:chanest}
\whH=\bsH+\bsDelta,$$ where $\bsH$ is the true channel and $\bsDelta$ is the estimation error matrix whose complex-valued counterpart $\bsDelta_\cx$ (which is decomposed into $\bsDelta$ using ) has independent $\calC\calN(0,\delta^2)$ elements, where $\delta^2$ is the variance of the estimation error per complex dimension. Typically, $\delta^2$ is directly proportional to the noise variance $\No$. The assumption that the elements in $\bsDelta$ are independent holds if the pilots are orthogonal and the noise is uncorrelated. Further, we assume that the elements in $\whH$ are independent of those in $\bsDelta$; this is the case if MMSE channel estimation is used [@KailathSayed].
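This error model is straightforward to simulate. Below is a minimal sketch; `estimated_channel` is an illustrative name (not from the paper), and the sketch works with the complex-valued matrices, i.e., before the real-valued decomposition used elsewhere in this paper:

```python
import numpy as np

def estimated_channel(H_c, delta2, rng):
    """Model H_hat = H + Delta, where the complex-valued error matrix
    has i.i.d. CN(0, delta2) entries (delta2 per complex dimension)."""
    Nr, Nt = H_c.shape
    Delta = np.sqrt(delta2 / 2.0) * (rng.standard_normal((Nr, Nt))
                                     + 1j * rng.standard_normal((Nr, Nt)))
    return H_c + Delta

# Example: i.i.d. Rayleigh channel and an estimate with delta^2 = 0.1.
rng = np.random.default_rng(0)
H = (rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))) / np.sqrt(2.0)
H_hat = estimated_channel(H, delta2=0.1, rng=rng)
```

The per-entry scaling $\sqrt{\delta^2/2}$ on each real dimension gives $\mathbb{E}|\Delta_{ij}|^2=\delta^2$, matching the stated error variance per complex dimension.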
From and , we have $$\label{eq:icsimodel}
\bsy=\bsH\bss+\bse%
=\whH\bss+\overbrace{\bse-\bsDelta\bss}^{\triangleq\bsepsilon}%
=\whH\bss+\bsepsilon.$$ For constellations that have constant modulus, i.e., that satisfy, say, $\norm{\bss}^2=\Nt~\forall\bss$, we know that $\bsepsilon\sim\calN(0,\frac{\Nt\delta^2+\No}{2}\bsI)$. In this case, SUMIS can be directly applied to a modified data model where $\whH$ and $\bsepsilon$ are the channel matrix and noise vector, respectively. By contrast, for general signal constellations, $\norm{\bss}^2$ is not equal to a constant, and the situation becomes more complicated. Recall that $P(\bss|\bsy,\whH)=p(\bsy|\bss,\whH)P(\bss)/p(\bsy)$, where the goal is to approximate $p(\bsy|\bss,\whH)$ via $p(\bsy|\bar{\bss},\whH)$ using the philosophy of SUMIS. That is, we target $p(\bsy|\bar{\bss},\whH)$ by approximating $\bsepsilon$ conditioned on $\bar{\bss}$ and $\whH$ as Gaussian. Approximating $\bsepsilon\big|_{\bar{\bss},\scrwhH}$ as Gaussian is reasonable since each element in $\bsepsilon$ consists of a sum of independent variates. The covariance matrix of $\bsepsilon$ conditioned on $\bar{\bss}$ and $\whH$ is $\frac{\big(\mathbb{E}\{\norm{\tilde{\bss}}^2\}+\norm{\bar{\bss}}^2\big)\delta^2+\No}{2}\bsI$. Thus, the power of the effective noise is $\frac{\big(\mathbb{E}\{\norm{\tilde{\bss}}^2\}+\norm{\bar{\bss}}^2\big)\delta^2+\No}{2}$ instead of $\frac{\No}{2}$ as in . This power now depends on $\mathbb{E}\{\norm{\tilde{\bss}}^2\}$ and $\bar{\bss}$, causing the complexity of SUMIS to increase substantially. The reason is that the inverse of $\bsQ$ must be recomputed for each permutation in and, even more often, for each $\bar{\bss}$; the same applies in stage II of SUMIS for the $\bsQ'$ matrix. To avoid this complexity increase, we introduce further approximations.
First, instead of $\mathbb{E}\{\norm{\tilde{\bss}}^2\}$, we use $$\eta\triangleq\sum_{\substack{\text{all}~\Nt\\\text{permuts.}}}%
\frac{1}{\Nt}\mathbb{E}\{\norm{\tilde{\bss}}^2\},$$ where the sum is taken over all $\Nt$ permutations considered in SUMIS for a particular $\bsH$. This is reasonable since $\Nt\gg\ns$ and at most $\ns$ elements out of the $\Nt-\ns$ in $\tilde{\bss}$ are replaced from one permutation to another; hence, $\mathbb{E}\{\norm{\tilde{\bss}}^2\}$ will not differ much over the permutations. Second, using again that $\Nt\gg\ns$, the variations in $\norm{\bar{\bss}}^2$ will have a minor effect on the absolute power of the effective noise. Therefore, we replace $\norm{\bar{\bss}}^2$ by $\frac{1}{|\calS|^\ns}\sum_{\bar{\bss}\in\calS^\ns}\norm{\bar{\bss}}^2=\ns$. Hence, the simplified noise power becomes $\frac{(\eta+\ns)\delta^2+\No}{2}$, which is constant for each stage and results in a SUMIS complexity equivalent to that of the full-CSI approach, i.e., the complexity presented in .

Very Large MIMO Settings {#sec:vlmimo}
========================

SUMIS was developed mainly for moderately-sized MIMO systems, but it is also applicable to large MIMO channels. Previous research on detection in MIMO systems with a large number of transmit antennas has focused on hard-decision algorithms [@Srinidhi; @DattaSrinidhi; @LiMurch; @DattaKumar]. The performance of such algorithms when used in coded systems is always upper-bounded by the performance of max-log. In some cases (see the numerical results in ), the performance loss of the max-log approximation becomes larger when the number of antennas increases. Hence, the philosophy behind SUMIS—to approximate the LLR directly—seems to be beneficial. Another interesting observation in the large-MIMO setting is that even the soft MMSE method can potentially achieve the performance of the exact LLR method and thus outperform the max-log method.
This observation is motivated by the central limit theorem and the following argument. Consider the model in for $\ns=1$, which makes $\wbH$ a column vector, say $\bsh_i$, and $\bar{\bss}$ a scalar, say $s_i$; hence, $\bsy=\bsh_is_i+\wtH\tilde{\bss}+\bse$. We assume that the elements in $\bss$ are independent, which is a very common assumption in the literature and a very reasonable assumption in practice as the bits in a codeword are typically interleaved. By the central limit theorem, the distribution of $\bsy$ for a given $s_i$ will approach a Gaussian: $\calN\big(\bsh_is_i,\wtH\wtPsi\wtH^T\!\!\!+\!\frac{\No}{2}\bsI\big)$, as $\Nt$ grows [@Papoulis sec. 8.5]. The exact LLR, i.e., $L(s_i|\bsy)=\log(p(\bsy|s_i=+1))-\log(p(\bsy|s_i=-1))$, will then for sufficiently large $\Nt$ look like the LLR function based on the Gaussian distribution, i.e., $$\label{eq:llrvlmimo} L(s_i|\bsy)\approx4\bsy^T(\wtH\wtPsi\wtH^T\!\!\!+\!\frac{\No}{2}\bsI)^{-1}\bsh_i,% \quad\text{for}~\Nt\gg1.$$ This is a simple but important observation which does not seem to have been made in the existing literature on large-MIMO detection [@RusekPersson; @Srinidhi; @SomDatta; @DattaSrinidhi; @LiMurch; @DattaKumar]. However, the question remains as to how large $\Nt$ needs to be for the approximation in to be tight. According to [@Papoulis sec. 8.5], under some conditions, the approximation becomes very tight even for small values of $\Nt$. We are particularly interested in determining how tight it is in terms of frame-error rate for large but finite $\Nt$. Our investigation with $\Nt=12$ and $\Nt=26$ indicates that the performance of the soft MMSE method is much closer to that of the exact LLR method for larger $\Nt$. What is especially interesting for the SUMIS method is that the performance gap between the soft MMSE method and the exact LLR method is reduced remarkably via the procedure in stage II of SUMIS. Recall that the SUMIS method performs the soft MMSE procedure in stage I when $\ns=1$. 
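The Gaussian-approximation LLR in is simple enough to transcribe directly. The sketch below assumes the real-valued model with uniform priors (so that $\wtPsi=\bsI$) and follows the paper's expression; `linear_llr` is an illustrative name:

```python
import numpy as np

def linear_llr(y, H, i, N0):
    """Approximate LLR of s_i: treat the contribution of the other Nt-1
    columns as Gaussian interference with Psi_tilde = I (uniform priors),
    following L(s_i|y) ~ 4 y^T (H_t H_t^T + (N0/2) I)^{-1} h_i."""
    h_i = H[:, i]
    H_t = np.delete(H, i, axis=1)                      # interfering columns
    Q = H_t @ H_t.T + (N0 / 2.0) * np.eye(H.shape[0])  # interference+noise cov.
    return 4.0 * (y @ np.linalg.solve(Q, h_i))
```

Note that the matrix solve depends only on the channel and can be shared across all received vectors in a coherence block.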
Numerical Results {#sec:numres}
=================

Simulation Setup {#ssec:simsetup}
----------------

Using Monte Carlo simulations, we evaluate the performance of SUMIS and compare it to the performance of some competitors. Performance is quantified in terms of frame-error rate (FER) as a function of the normalized signal-to-noise ratio $\Eb/\No$, where $\Eb$ is the total transmitted energy per information bit. To make the results statistically reliable, we count $300$ frame errors for each simulated point. We simulate $6\times6$ and $13\times13$ complex MIMO systems with $M^2$-QAM where $M\in\{2,4\}$, which means that the detection is performed on equivalent real-valued $12\times12$ and $26\times26$ MIMO systems with $2$-PAM and $4$-PAM modulation. The channel is Rayleigh fading, where each complex-valued channel matrix element is independently drawn from $\calC\calN(0,1)$. We use three different highly optimized irregular low-density parity-check (LDPC) codes with rates $\{2/9,1/2,5/6\}$ (note that $2/9\lesssim1/4$), each having a codeword length of approximately $10000$ bits. We use the parameters of [@LeeKim] for rate $1/4$ and [@RichardsonUrbanke; @RichardsonShokrollahi] for rate $1/2$. For rate $5/6$, we use the technique in [@MacKay] to generate the code parameters. For more details, see the uploaded supplementary material to this article. Two different coherence times are used: slow fading (each codeword sees one channel realization) and fast fading (each codeword spans $40$ channel matrices). We plot the FER curves of the exact LLR (as defined by ), the max-log approximation , SUMIS for $\ns=1,3$, SUMIS stage-I-only (without the purification procedure) for $\ns=1,3$, and PM for $\ns=r+1=3$ [@LarssonJalden]. We include the approximate LLRs computed in stage I of SUMIS in order to show the performance gain of the purification step (stage II) of SUMIS. Recall that SUMIS stage-I-only, for $\ns=1$, is equivalent to the soft MMSE method.
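The LLR expressions above involve logarithms of sums of exponentials, which overflow if evaluated naively. A minimal sketch of the standard Jacobian-logarithm (max-star) recursion follows; `jacobian_log` and `log_sum_exp` are illustrative names:

```python
import numpy as np

def jacobian_log(a, b):
    """max*(a, b) = log(exp(a) + exp(b)), computed without overflow as
    max(a, b) + log(1 + exp(-|a - b|))."""
    return max(a, b) + np.log1p(np.exp(-abs(a - b)))

def log_sum_exp(values):
    """Fold max* over a list, as done repeatedly for each LLR sum."""
    acc = values[0]
    for v in values[1:]:
        acc = jacobian_log(acc, v)
    return acc
```

Dropping the `log1p` correction term recovers the max-log approximation, which is exactly the approximation the max-log detector makes.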
For reasons of numerical stability and efficiency, all log-of-sums-of-exponentials were evaluated via repeated use of the Jacobian logarithm. The convention used in the figures that follow is that dashed lines represent the proposed methods and solid lines represent the competing ones. Since the FCSD method is an approximation of the PM method, we refrain from plotting its performance curves. We also omit performance plots of RD-MLS and all other SD-based methods because they approximate the max-log method, and thus they cannot perform better. Also, the complexity of the initialization step of these algorithms alone is higher than the complexity of the complete SUMIS procedure, see . Moreover, their complexity depends on the channel realization, and they have a very high complexity for some channel realizations (the average complexity is exponential in $\Nt$ [@JaldenOttersten]). Taken together, this renders performance-complexity comparisons with SD-based methods less interesting. We have included the max-log curve as a universal indicator of the performance that can be achieved by any SD-type detector.

Results {#ssec:simres}
-------

![FER as a function of $\Eb/\No$ for the slow-fading $6\times6$ MIMO system ($\Nr=\Nt=12$ in ) with $4$-QAM ($2$-PAM in ) and with the LDPC code of rate $1/2$. The shown performance curves are: (i) dashed curves for the SUMIS stage-I-only and the complete SUMIS procedure with $\ns=1$ and $\ns=3$ spanning from right to left, and (ii) solid curves for the exact LLR method, the max-log method, and the PM method with $\ns=r+1=3$.[]{data-label="fig:6x6"}](fer-6x6.eps "fig:"){width="\columnwidth"}

In we show results for a slow-fading $6\times6$ MIMO system using $4$-QAM modulation and the LDPC code of rate $1/2$.
There is no iteration between the detector and the decoder, and the transmitted symbols are assumed by the detector to be uniformly distributed. This plot illustrates our principal comparison and the rest (Figs. \[fig:6x6rates\]–\[fig:6x6siso\]) illustrate extended comparisons that deal with different scenarios of interest: slow-/fast-fading, moderate-/large-size MIMO, low-/high-rate codes, full/partial CSI, higher-order constellations, and iterative/non-iterative receivers. includes all the above-mentioned detection methods, whereas the remaining figures include only those methods that show noteworthy variations from what is already seen in . The results in clearly show that the SUMIS detector performs close to the exact LLR (optimal soft detector) performance, and it does so at a very low complexity, see also . It outperforms the PM and max-log methods (SD and its derivatives). Note that the complexity of SUMIS with $\ns=3$ is much lower than that of PM with $\ns=r+1=3$ even though the partitioned problem in is of the same size. The reason is that the sums of the PM method consist of terms whose exponents require the evaluation of matrix-vector multiplications of much larger dimension than in SUMIS. Additionally, SUMIS (both the complete algorithm and the stage-I-only variant) offers a well-defined performance-complexity tradeoff via the choice of the parameter $\ns$.

![Same as in but with LDPC codes of **rate** $\mathbf{2/9\lesssim1/4}$ **(black curves)** and **rate** $\mathbf{5/6}$ **(gray curves)**.
Note that the left-most blended curves are five different curves: exact LLR, SUMIS for $\ns=1,3$, and SUMIS stage-I-only for $\ns=1,3$.[]{data-label="fig:6x6rates"}](fer-6x6-rates.eps "fig:"){width="\columnwidth"}

shows results for the same setup as in but with code rates $2/9\lesssim1/4$ and $5/6$ instead. This plot suggests that there is a larger, but still very small, performance gap between SUMIS and exact LLR for higher coding rates. Also, the max-log curve is much closer to the exact LLR curve for higher rates than for lower rates. The high-rate scenario clearly shows the importance of the model “purification” procedure (SUMIS stage II), as the performance gap between SUMIS and stage-I-only SUMIS is significant. For the low-rate scenario, the performance gap between SUMIS and exact LLR is negligible. Similar results have been observed for short convolutional codes with a codeword length of $100$ bits, but these plots are not included here due to space limitations. We have also conducted similar experiments with correlated MIMO channels (using a Toeplitz correlation structure) [@Vanzelst], but these results are omitted here due to space limitations. In these experiments, although there was a shift in the absolute performance of all methods, their relative performance, except for that of PM which became much worse, did not differ significantly from what is observed in Figs. \[fig:6x6\] and \[fig:6x6rates\]. One reason why PM performed differently may be its sensitivity to the condition number of $\wtH$, which in the correlated MIMO setting is typically much higher than in the uncorrelated case. This stands in contrast to SUMIS, which is much more robust in such cases.

![Same as in but for a $13\times13$ MIMO system instead of $6\times6$.
Note (in contrast to ) the increased gap between max-log and exact LLR, and the decreased gap between soft MMSE (SUMIS stage-I-only for $\ns=1$) and exact LLR. The SUMIS curve is very close to the exact LLR curve.[]{data-label="fig:13x13"}](fer-13x13.eps "fig:"){width="\columnwidth"} A large MIMO system is simulated in . The setup is the same as in except that here the size of the system is $13\times13$ complex-valued (corresponding to $26\times 26$ real-valued MIMO). This figure illustrates how close both the soft MMSE (SUMIS stage-I-only with $\ns=1$) and SUMIS are to the exact LLR performance curve. Plots that include the exact LLR curve for large MIMO systems are unavailable in the literature to our knowledge, probably because of the required enormous simulation time. We used a highly optimized version of the exact LLR detector and it took approximately $50000$ core-hours to generate the curves in . A very interesting observation is that in , the gap between the max-log curve and the exact LLR curve is larger than in . This curve represents the performance that the various tree-search algorithms, such as SD with its variants and the method in [@Srinidhi] (which is specifically designed for large MIMO systems), aim to achieve. As predicted in , we see in that the soft MMSE is much closer to the exact LLR curve than in . Also, the purification step in SUMIS yields a clear compensation for the performance loss of soft MMSE. The performance of SUMIS is impressive, both with stage-I-only and with stage II included, which suggests that approximating the exact LLR expression directly (which is the philosophy of SUMIS) is a better approach than max-log. 
![Same as in but with fast-fading, i.e., a codeword spans $40$ independent channel realizations.[]{data-label="fig:6x6fast"}](fer-6x6-fast.eps "fig:"){width="\columnwidth"}

Figs. \[fig:6x6fast\] and \[fig:snrflops\] show the results for a fast-fading scenario. The setup is the same as in , except that here each codeword spans over $40$ channel realizations. In , as expected, the relative performance of simple soft MMSE (SUMIS stage-I-only for $\ns=1$) is much better than in where the channel stays constant over a whole codeword. The reason is that the presence of ill-conditioned channel matrices, which make linear methods such as soft MMSE and soft ZF perform poorly, has less impact here. With coding over many channel realizations, only a small part of each codeword will be affected by ill-conditioned channel matrices since in i.i.d. Rayleigh fading, they do not occur often (assuming that the MIMO channel is not overly underdetermined). The axes in show the number of elementary operations bundled together (additions, subtractions, multiplications, and divisions) versus the minimum signal-to-noise ratio required to achieve 1% FER. The values on the horizontal axis are calculated for all methods, except for max-log via SD, using the expressions in . For max-log via SD, the minimum, maximum, and mean complexities of the $\bsy$-dependent part were calculated empirically over many different (and random) channel and noise realizations. More specifically, this was done for the single-tree-search algorithm [@StuderBolcskei] by calculating the number of visited nodes at different tree levels and the associated number of elementary operations at each node (disregarding the overhead of the book-keeping and the “if-and-else” statements).
This method, as presented in [@StuderBolcskei], has $k$ additions/subtractions and $k$ multiplications in each node at tree level $k$, where the root node is at level $1$. The numbers in particularly show the advantages of SUMIS and of linear methods such as soft MMSE.

![Total complexity (both $\bsy$-independent and $\bsy$-dependent parts) per vector of bits in $\bss$ for the setting in . In this setting, the channel stays the same for $21$ consecutive transmissions of $\bss$. The axes show the number of elementary operations versus the minimum signal-to-noise ratio required to achieve 1% FER. For max-log via SD, we show the empirically evaluated minimum, maximum, and mean complexities.[]{data-label="fig:snrflops"}](snrflops.eps "fig:"){width="\columnwidth"}

![Same as in but for $16$-QAM ($4$-PAM in ) instead of $4$-QAM. The exact LLR curve has been excluded due to the massive complexity required to evaluate the FER. Its complexity is of the same order of magnitude as that in .[]{data-label="fig:6x6qam16"}](fer-6x6-qam16.eps "fig:"){width="\columnwidth"}

reports results with $16$-QAM modulation (instead of $4$-QAM as used earlier). Except for the modulation, all other parameters are the same as in . The results in show that the performance of SUMIS is very good, especially in relation to its complexity.
For instance, for the $\bsy$-dependent part in (having set $\ns=3$), SUMIS, PM, and exact LLR require roughly $5.4\cdot10^3$, $2.6\cdot10^5$, and $2\cdot10^8$ operations, respectively. This suggests that the speedup of SUMIS is $50$ and $4\cdot10^4$ relative to PM and to exact LLR calculation, respectively.

![An example with imperfect CSI at the receiver. The error-matrix-element variance $\delta^2$ is directly proportional to the noise variance $\No$. The matched detectors use $P(\bss|\bsy,\whH)$ and the mismatched use $P(\bss|\bsy,\bsH)\big|_{\bsH=\scrwhH}$.[]{data-label="fig:6x6icsi"}](fer-6x6-icsi.eps "fig:"){width="\columnwidth"}

![An example with imperfect CSI at the receiver. The error-matrix-element variance $\delta^2$ is directly proportional to the noise variance $\No$. The matched detectors use $P(\bss|\bsy,\whH)$ and the mismatched use $P(\bss|\bsy,\bsH)\big|_{\bsH=\scrwhH}$.[]{data-label="fig:6x6icsi"}](fer-6x6-icsi-qam16.eps "fig:"){width="\columnwidth"}

Yet another important scenario that often occurs in practice is detection under imperfect CSI (ICSI). Figs. \[fig:6x6icsiqam4\] and \[fig:6x6icsiqam16\] present performance results for this case. The setup is the same as in and , respectively, but here the detector is provided with knowledge of $\whH$ instead of $\bsH$. The error-matrix-element variance $\delta^2$ is proportional to the noise variance $\No$, i.e., $\delta^2=\alpha\No$ where $\alpha$ is a constant.
For SUMIS, we considered both the intelligent way of handling ICSI (using $P(\bss|\bsy,\whH)$) and the crude way using mismatched detection (inserting $\whH$ into $P(\bss|\bsy,\bsH)$), see . For the other detectors, we considered only the mismatched detector, since versions of those algorithms that perform intelligent detection using $P(\bss|\bsy,\whH)$ with higher-order constellations do not seem to be available. Clearly, intelligently handling ICSI yields better performance than performing mismatched detection. The results in resemble those in with only a minor shift in signal-to-noise ratio. This comes as no surprise as the effective channel model in for BPSK per real dimension (as in ) is equivalent to (as in ) up to a scaling of the noise variance.

![Detection with iterative decoding using the same setting as in . The curves marked with gray were simulated with no iteration between the respective detector and the outer decoder. For curves marked with black, 3 iterations were used, which means 4 decoder runs. The extended SUMIS algorithm used here is presented in .[]{data-label="fig:6x6siso"}](fer-6x6-siso.eps "fig:"){width="\columnwidth"}

Finally, shows results with iterative decoding where the detector and the decoder interchange information, hence exploiting the soft-input capability of SUMIS. The iterative decoding setup is that of [@WangPoor fig. 1], and the simulation model is the same as used in . SUMIS, as presented in , shows strikingly good performance at very low complexity.
Conclusions
===========

We have proposed a novel soft-input soft-output MIMO detection method, SUMIS, that outperforms today’s state-of-the-art detectors (such as PM, FCSD, SD and its derivatives), runs at fixed complexity, provides a clear and well-defined tradeoff between computational complexity and detection performance, and is highly parallelizable. The ideas behind SUMIS are fundamentally simple and allow for very simple algorithmic implementations. The proposed method has a complexity comparable to that of linear detectors. We have conducted a thorough numerical performance evaluation of our proposed method and compared it to state-of-the-art methods. The results indicate that in many cases SUMIS (for low $\ns$) outperforms the max-log method and therefore inherently all other methods that approximate max-log, such as SD and its derivatives. This performance is achieved with a complexity that is much smaller than that of competing methods, see and , and in particular smaller than the initialization step of SD alone. In terms of hardware implementation, SUMIS has remarkable advantages over tree-search (e.g., SD) algorithms that require comparisons and branchings (if-then-else statements). More fundamentally, the results indicate that approximating the exact LLR expression directly (which is the basic philosophy of SUMIS) is a better approach than max-log followed by hard decisions. This way of thinking, similar to [@LarssonJalden], represents a profound shift in the way the problem is tackled and opens the door for further new approaches to the detection problem. The increase in performance that this philosophy offers is especially pronounced in larger MIMO systems, where the performance gap between the max-log approximation and the exact LLR seems to increase.
Optimized SUMIS and Soft MMSE and Their Complexity {#app:cmplx}
--------------------------------------------------

$$\label{eq:completesq}
\begin{split}
\norm{\bar{\bsy}-\wbH\bar{\bss}}^2_\bsQ%
&=(\bar{\bsy}-\wbH\bar{\bss})^T\bsQ^{-1}(\bar{\bsy}-\wbH\bar{\bss})%
=\bar{\bsy}^T\bsQ^{-1}\bar{\bsy}-2\bar{\bsy}^T\bsQ^{-1}\wbH\bar{\bss}+\bar{\bss}^T\wbH^T\bsQ^{-1}\wbH\bar{\bss}\\
&=((\wbH^T\bsQ^{-1}\wbH)^{-1}\wbH^T\bsQ^{-1}\bar{\bsy}-\bar{\bss})^T\wbH^T\bsQ^{-1}\wbH((\wbH^T\bsQ^{-1}\wbH)^{-1}\wbH^T\bsQ^{-1}\bar{\bsy}-\bar{\bss})\\
&\quad\quad+\bar{\bsy}^T\bsQ^{-1}\bar{\bsy}-\bar{\bsy}^T\bsQ^{-1}\wbH(\wbH^T\bsQ^{-1}\wbH)^{-1}\wbH^T\bsQ^{-1}\bar{\bsy}
\end{split}$$

\[eq:matinvident\]

$$\label{eq:matinvhq}
\begin{split}
\wbH^T\!\bsQ^{-1}&=\wbH^T\!(\NoTwo\bsI+\wtH\wtPsi\wtH^T\!)^{-1}%
=\wbPsi^{-1}\wbPsi\wbH^T(\NoTwo\bsI+\wtH\wtPsi\wtH^T)^{-1}\\
&=\wbPsi^{-1}\!\!\underbrace{\wbPsi\wbH^T\!(\NoTwo\bsI+\bsH\bsPsi\bsH^T\!\!-\wbH\wbPsi\wbH^T)^{-1}}_{\text{apply matrix inversion lemma}}\\
&=\wbPsi^{-1}\big(\wbPsi^{-1}\!\!-\wbH^T\!(\NoTwo\bsI+\bsH\bsPsi\bsH^T)^{-1}\wbH\big)^{-1}%
\wbH^T\!(\NoTwo\bsI+\bsH\bsPsi\bsH^T)^{-1}\\
&=\big(\bsI-\wbH^T\!(\NoTwo\bsI+\bsH\bsPsi\bsH^T)^{-1}\wbH\wbPsi\big)^{-1}%
\wbH^T\!(\NoTwo\bsI+\bsH\bsPsi\bsH^T)^{-1}
\end{split}$$

$$\label{eq:matinvhqh}
\begin{split}
\wbH^T\!\bsQ^{-1}\wbH&\overset{\eqref{eq:matinvhq}}{=}%
\big(\bsI-\wbH^T\!(\NoTwo\bsI+\bsH\bsPsi\bsH^T)^{-1}\wbH\wbPsi\big)^{-1}%
\wbH^T\!(\NoTwo\bsI+\bsH\bsPsi\bsH^T)^{-1}\wbH=\\
&\Big(\big(\wbH^T\!(\NoTwo\bsI+\bsH\bsPsi\bsH^T)^{-1}\wbH\big)^{-1}-\wbPsi\Big)^{-1}
\end{split}$$

\[eq:matinvh\]

$$\label{eq:matinvhqhihq}
(\wbH^T\!\bsQ^{-1}\wbH)^{-1}\wbH^T\!\bsQ^{-1}\overset{\eqref{eq:matinvident}}{=}%
\big(\wbH^T\!(\NoTwo\bsI+\bsH\bsPsi\bsH^T)^{-1}\wbH\big)^{-1}\wbH^T\!(\NoTwo\bsI+\bsH\bsPsi\bsH^T)^{-1}$$

$$\label{eq:phiplushdh}
\begin{split}
\wbH^T\!(\NoTwo\bsI+\bsH\bsPsi\bsH^T)^{-1}&=\wbP^T\!\!\bsH^T\!(\NoTwo\bsI+\bsH\bsPsi\bsH^T)^{-1}%
=\wbP^T\!\!\bsPsi^{-1}\!\underbrace{\bsPsi\bsH^T\!(\NoTwo\bsI+\bsH\bsPsi\bsH^T)^{-1}}_{\text{apply matrix inversion lemma}}\\
&=\wbP^T\!\!\bsPsi^{-1}\!(\NoTwo\bsPsi^{-1}\!\!+\bsH^T\!\!\bsH)^{-1}\!\bsH^T
\end{split}$$

We first identify the main complexity bottlenecks of SUMIS step by step and keep track of the largest order of magnitude terms. The focus will be on in , its optimization, and the number of operations required for its execution. Note that the techniques presented in what follows can also be used for an optimized implementation of the soft MMSE method [@StuderFateh]. The complexity will be measured in terms of elementary operations bundled together: additions, subtractions, multiplications, and divisions. The complexity count is divided into two parts: a received-data-independent ($\bsy$-independent) processing part and a $\bsy$-dependent processing part. We will also assume that $\ns\ll\Nt\leq\Nr$, which is the case of most practical interest. The assumption that $\Nt\leq\Nr$ is required only for the presented optimized SUMIS version, due to the requirement for various inverses to exist, but analogous complexity reductions can be made for $\Nt\geq\Nr$ and are excluded due to space limitations.

### $\bsy$-independent processing

We start with the choice of permutations in . The SUMIS algorithm uses $\Nt$ different permutations that are decided based on $\bsH^T\bsH$. This procedure evaluates $\bsH^T\bsH$, requiring $\Nt^2\Nr$ operations. There is also a small search involved that requires $\Nt(\ns-1)(\Nt-\ns)$ comparisons, which we neglect.
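The small search just mentioned, i.e., picking for symbol $k$ the $\ns-1$ columns with the largest $|\rho_{k,\ell}|$ in $\bsH^T\bsH$, can be sketched as follows; `choose_partition` is an illustrative name, and the Gram matrix is assumed precomputed:

```python
import numpy as np

def choose_partition(G, k, ns):
    """Return the ns column indices placed in H_bar for the k-th symbol:
    k itself plus the ns-1 indices l maximizing |rho_{k,l}| = |G[k, l]|,
    where G = H^T H."""
    rho = np.abs(G[k]).astype(float)
    rho[k] = -np.inf                        # exclude the diagonal entry sigma_k^2
    strongest = np.argsort(rho)[::-1][:ns - 1]
    return np.sort(np.concatenate(([k], strongest)))
```

Running this for each of the $\Nt$ symbols yields the $\Nt$ permutations; the remaining $\Nt-\ns$ indices per symbol go to $\wtH$.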
Next, by simple matrix manipulations, one can pre-process and simplify the computation of , in the $\bsy$-dependent part of the algorithm, consisting of a sum of terms in over all $\bar{\bss}$. Consider again , $$p(\bar{\bsy}|\bar{\bss})=\frac{1}{\sqrt{(2\pi)^\Nr|\bsQ|}}% \exp{-\frac{1}{2}\norm{\bar{\bsy}-\wbH\bar{\bss}}^2_{\bsQ}},$$ which includes matrix-vector multiplications of dimension $\Nr$. We can rewrite the exponent as in where the terms on the last line in do not depend on $\bar{\bss}$ and will not affect the final result in . So, from , we see that if $\wbH^T\bsQ^{-1}\wbH$ and $(\wbH^T\bsQ^{-1}\wbH)^{-1}\wbH^T\bsQ^{-1}$ are precomputed, the matrix-vector multiplications in in the $\bsy$-dependent part will be of dimension $\ns\ll\Nr$, which is evidently desirable. We need to evaluate these matrices once for each partitioning (there are $\Nt$ of them). This can be done simultaneously in one step. For this purpose, we derive the identities in and where we have defined $\bsPsi$ and $\wbPsi$ to be diagonal matrices such that $\bsH\bsPsi\bsH^T=\wbH\wbPsi\wbH^T+\wtH\wtPsi\wtH^T$, and $\wbP\in\{0,1\}^{\Nt\times\ns}$ to be a matrix that has precisely $\ns$ ones such that $\wbH=\bsH\wbP$ (a column picking matrix). Recall that $\wtPsi$ is the covariance matrix of $\tilde{\bss}$. For $\ns=1$, the identity (also mentioned in [@StuderFateh; @Studer]) is well known from the equivalence, shown in [@TseViswanath exerc. 8.18], with the MMSE filter [@KailathSayed sec. 3.2.1]. Equation was also derived in [@StuderFateh] and [@Studer], though the derivations there contain a minor error. Specifically, the assumption $\bsU\bsU^T=\bsI$ in the singular value decomposition $\bsH=\bsU\bsSigma\bsV$ in [@Studer app. A.2.2] is not valid for $\Nr>\Nt$.
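A numerical sanity check of the filter identity derived above may be instructive: the filter $(\wbH^T\bsQ^{-1}\wbH)^{-1}\wbH^T\bsQ^{-1}$ can equivalently be formed from the full covariance $\NoTwo\bsI+\bsH\bsPsi\bsH^T$, which is the same for all partitionings. The following NumPy sketch uses arbitrary test dimensions and values; all variable names are ad hoc.

```python
import numpy as np

rng = np.random.default_rng(0)
Nr, Nt, ns = 8, 6, 2                 # test dimensions (Nr x Nt channel, group size ns)
N0 = 0.4                             # test noise level
H = rng.standard_normal((Nr, Nt))
psi = rng.uniform(0.1, 1.0, Nt)      # diagonal of Psi
Hbar, Htil = H[:, :ns], H[:, ns:]    # partition H into Hbar and Htil
psit = psi[ns:]

# Q models interference plus noise, A the full signal-plus-noise covariance;
# A = Q + Hbar diag(psi[:ns]) Hbar^T since Psi is diagonal.
Q = N0 / 2 * np.eye(Nr) + Htil @ np.diag(psit) @ Htil.T
A = N0 / 2 * np.eye(Nr) + H @ np.diag(psi) @ H.T

def unbiased_filter(M):
    """(Hbar^T M^{-1} Hbar)^{-1} Hbar^T M^{-1}."""
    Mi = np.linalg.inv(M)
    return np.linalg.solve(Hbar.T @ Mi @ Hbar, Hbar.T @ Mi)

# The identity: the filter built from Q equals the one built from A.
assert np.allclose(unbiased_filter(Q), unbiased_filter(A))
```

The practical point is that $A$ does not depend on the partitioning, so one inversion serves all $\Nt$ partitionings.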
Now, since we have established and , we can immediately write $$\label{eq:hqinvh} \wbH^T\!\!\inv{\bsQ}\wbH=% \Big(\big(\wbP^T\!\!\inv{\bsPsi}\inv{(\,\text{\makebox[3mm]{\scriptsize$\frac{\No}{2}$}}\inv{\bsPsi}+% \bsH^T\!\!\bsH)}\!\bsH^T\!\!\bsH\wbP\big)^{-1}\!\!\!-\wbPsi\Big)^{-1},$$ where the innermost inverse is of dimension $\Nt$ and the two outermost inversions are of dimension $\ns$. Focusing on the innermost inverse, it has been observed in [@StuderFateh] that the matrix $(\,\text{\makebox[3mm]{\scriptsize$\frac{\No}{2}$}}\inv{\bsPsi}+\bsH^T\!\!\bsH)$ can be numerically unstable to invert. The reason is that some (diagonal) values in $\bsPsi$ can be very small. This was addressed in [@StuderFateh] by writing $\inv{\bsPsi}% \inv{(\,\text{\makebox[3mm]{\scriptsize$\frac{\No}{2}$}}\inv{\bsPsi}+\bsH^T\!\!\bsH)}=% \inv{(\,\text{\makebox[3mm]{\scriptsize$\frac{\No}{2}$}}\bsI+% \bsH^T\!\!\bsH\bsPsi)}$, which is a more stable inverse but, due to the lost symmetry, requires many more operations [@GolubVanloan]. We want to facilitate the use of efficient algorithms available for inversion of symmetric matrices [@GolubVanloan] but without having to deal with unstable inversions. Therefore, we instead write $\inv{\bsPsi}\inv{(\,\text{\makebox[3mm]{\scriptsize$\frac{\No}{2}$}}\inv{\bsPsi}+\bsH^T\!\!\bsH)}=% \bsPsi^{-\frac{1}{2}}\!\inv{(\,\text{\makebox[3mm]{\scriptsize$\frac{\No}{2}$}}\bsI+% \bsPsi^{\frac{1}{2}}\!\bsH^T\!\!\bsH\bsPsi^{\frac{1}{2}})}\bsPsi^{\frac{1}{2}}$ where $(\,\text{\makebox[3mm]{\scriptsize$\frac{\No}{2}$}}\bsI+% \bsPsi^{\frac{1}{2}}\!\!\bsH^T\!\!\bsH\bsPsi^{\frac{1}{2}})$ is symmetric and stable to invert. Note that the computation of $\bsPsi^{\frac{1}{2}}$ and $\bsPsi^{-\frac{1}{2}}$ is simple, even though it necessitates square root evaluations, since $\bsPsi$ is diagonal with positive values. There are several different approaches to inverting a positive definite matrix.
Some are more numerically stable than others and some require less operations than others. One very fast and stable approach is through the LDL-decomposition [@GolubVanloan p.139], i.e., $(\,\text{\makebox[3mm]{\scriptsize$\frac{\No}{2}$}}\bsI+% \bsPsi^{\frac{1}{2}}\!\bsH^T\!\!\bsH\bsPsi^{\frac{1}{2}})=\bsL\bsD\bsL^T$, where $\bsL$ is a lower-triangular matrix with ones on its diagonal and $\bsD$ is a diagonal matrix with positive diagonal elements. The LDL-decomposition itself requires $\Nt^3/3$ operations [@GolubVanloan p.139], and so does the inversion of $\bsL$ and $\bsD$ together. Hence, becomes $$\wbH^T\!\!\inv{\bsQ}\wbH=% \Big(\big(\wbP^T\!\!\bsPsi^{-\frac{1}{2}}\!\bsL^{-T}\!\!\inv{\bsD}% \inv{\bsL}\bsPsi^{\frac{1}{2}}\!(\bsH^T\!\!\bsH)\wbP\big)^{-1}\!\!\!-\wbPsi\Big)^{-1},$$ for which the number of operations for all partitionings can be summarized: - [LDL-decomposition ($\Nt^3/3$),]{} - [$\inv{\bsL}$ ($\Nt^3/3$),]{} - [$\bsPsi^{-\frac{1}{2}}\!\bsL^{-T}\!\inv{\bsD}\inv{\bsL}\bsPsi^{\frac{1}{2}}$ ($\Nt^3/3$),]{} - [$\big(\wbP^T\!\bsPsi^{-\frac{1}{2}}\!\!\bsL^{-T}\!\!\inv{\bsD}\inv{\bsL}% \bsPsi^{\frac{1}{2}}\!(\bsH^T\!\!\bsH)\wbP\big)$ $2\ns^2\Nt^2$.]{} The remaining evaluations consist of inverses of matrices of very small dimension $\ns$ for which there exist closed form formulas that require a negligible number of operations. Thus, the total number of operations required to compute $\wbH^T\!\!\inv{\bsQ}\wbH$ explicitly for all partitionings is $\Nt^3+2\ns^2\Nt^2$. 
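The symmetrization and the LDL-based inversion described above can be sketched as follows; this is a minimal NumPy illustration with arbitrary test data, not the optimized fixed-complexity routine of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr, N0 = 6, 8, 0.4                      # test dimensions and noise level
H = rng.standard_normal((Nr, Nt))
G = H.T @ H
psi = rng.uniform(1e-3, 1.0, Nt)            # diagonal of Psi, some entries tiny
S = np.diag(psi ** 0.5)                     # Psi^{1/2}
Si = np.diag(psi ** -0.5)                   # Psi^{-1/2}

# Symmetrized, well-conditioned inner matrix (No/2) I + Psi^{1/2} H^T H Psi^{1/2}.
A = N0 / 2 * np.eye(Nt) + S @ G @ S

def ldl(M):
    """LDL^T decomposition of a symmetric positive definite matrix M."""
    n = M.shape[0]
    L, d = np.eye(n), np.zeros(n)
    for j in range(n):
        d[j] = M[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (M[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, d

L, d = ldl(A)
assert np.allclose(L @ np.diag(d) @ L.T, A)

# A^{-1} = L^{-T} D^{-1} L^{-1}; only triangular and diagonal inverses needed.
Li = np.linalg.inv(L)
Ainv = Li.T @ np.diag(1.0 / d) @ Li

# Matches the numerically delicate direct form Psi^{-1}((No/2) Psi^{-1} + H^T H)^{-1}.
direct = np.diag(1.0 / psi) @ np.linalg.inv(N0 / 2 * np.diag(1.0 / psi) + G)
assert np.allclose(Si @ Ainv @ S, direct)
```

In a real implementation the inversion of $\bsL$ would of course exploit its triangular structure rather than call a general-purpose inverse as done here.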
### $\bsy$-dependent processing {#app:cmplx:ydep} We need to compute, for all partitionings, $$\label{eq:hqinvhy} \begin{split} \inv{(\wbH^T\!\!\inv{\bsQ}\wbH)}\!\!\wbH^T\!\!\inv{\bsQ}\bsy\overset{\eqref{eq:matinvh}}{=}% \big(&\wbP^T\!\!\bsPsi^{-\frac{1}{2}}\!\bsL^{-T}\!\!\inv{\bsD}% \inv{\bsL}\bsPsi^{\frac{1}{2}}\!\bsH^T\!\!\bsH\wbP\big)^{-1}\\ &\times\wbP^T\!\!\bsPsi^{-\frac{1}{2}}\!\bsL^{-T}\!\!\inv{\bsD}% \inv{\bsL}\bsPsi^{\frac{1}{2}}\!\bsH^T\!\!\bsy, \end{split}$$ where only $\wbP^T\!\!\bsPsi^{-\frac{1}{2}}\!\bsL^{-T}\!\!\inv{\bsD}% \inv{\bsL}\bsPsi^{\frac{1}{2}}\!\bsH^T\!\!\bsy$ needs to be evaluated since the leftmost (inverse) matrix in is of dimension $\ns$ and has already been computed in the $\bsy$-independent part. Note that the computation of $\bsH^T\!\!\bsy$ and subsequently $\bsPsi^{-\frac{1}{2}}\!\bsL^{-T}\!\!\inv{\bsD}% \inv{\bsL}\bsPsi^{\frac{1}{2}}\!\bsH^T\!\!\bsy$ requires $2\Nr\Nt$ and $2\Nt^2$ operations, respectively. Hence, to compute $(\wbH^T\!\!\inv{\bsQ}\wbH)^{-1}\!\!\wbH^T\inv{\bsQ}\bsy$ for all partitionings requires $2\Nr\Nt+2\Nt^2$ operations. To compute , for each $s_k$, requires $\ns^22^\ns$ operations since the exponents in consist of matrix-vector multiplications of dimension $\ns$. This, we can safely neglect when $2^\ns\ll\Nt^2$ (which is typically the case). If higher-order constellations with cardinality much higher than 2 are used, one can keep the cardinality small and fixed by disregarding constellation points outside an appropriately chosen ellipse centered at the mean. The remaining bottleneck is the computation of the updated covariance matrix $\wbH^T\!\!\inv{\bsQ'}\wbH$ and the update of $\bsy$ to $\bsy'$ in . The number of operations required for $\wbH^T\!\!\inv{\bsQ'}\wbH$ is $\Nt^3+2\ns^2\Nt^2$, analogously to the computation of $\wbH^T\!\!\inv{\bsQ}\wbH$. 
For the update in , we have that $\bsy'=\bsy-\wtH\Exp{\tilde{\bss}|\bsy}=% \bsy-\bsH\Exp{\bss|\bsy}+\wbH\Exp{\bar{\bss}|\bsy}$, which after the transformation in using the updated matrix $\bsQ'$ instead of $\bsQ$ becomes $$\begin{aligned} \label{eq:hqinvhyprim} \hspace{2mm}&\hspace{-2mm}(\wbH^T\!\!\inv{\bsQ'}\wbH)^{-1}\!\!\wbH^T\!\!\inv{\bsQ'}\bsy'\notag\\ &=\inv{(\wbH^T\!\!\inv{\bsQ'}\wbH)}\!\!\wbH^T\!\!\inv{\bsQ'}(\bsy-\bsH\Exp{\bss|\bsy})+\Exp{\bar{\bss}|\bsy}\notag\\ &=\Exp{\bar{\bss}|\bsy}+\Big(\big(\wbP^T\!\!\inv{\bsPhi}% (\,\text{\makebox[3mm]{\scriptsize$\frac{\No}{2}$}}\inv{\bsPhi}+% \bsH^T\!\!\bsH)^{-1}\!\bsH^T\!\!\bsH\wbP\big)^{-1}\notag\\ &\quad\quad\times\!\wbP^T\!\!\inv{\bsPhi}% (\,\text{\makebox[3mm]{\scriptsize$\frac{\No}{2}$}}\inv{\bsPhi}+% \bsH^T\!\!\bsH)^{-1}\!\big(\bsH^T\!\!\bsy-\bsH^T\!\!\bsH\Exp{\bss|\bsy}\!\big)\Big).\end{aligned}$$ The relation between the matrices $\bsPhi$ and $\tilde{\bsPhi}$ is analogous to the relation between $\bsPsi$ and $\tilde{\bsPsi}$. From the discussion after , we can conclude that requires $4\Nt^2$ operations for all partitionings. Lastly, the LLR computation of each bit requires $\ns^22^\ns$ operations, which we can safely neglect when $2^\ns\ll\Nt^2$. ### Summary Under the assumption that $\Nr\geq\Nt$ and that no a priori knowledge of $\bss$ is available, the $\bsy$-independent part of the algorithm requires roughly $\Nr\Nt^2+\Nt^3+2\ns^2\Nt^2$ operations, which is a similar number of operations as required by the soft MMSE algorithm. As for the $\bsy$-dependent part, the number of operations required is roughly $\Nt^3+2\Nr\Nt+(2\ns^2+6)\Nt^2$. Thus, the total number of operations required by the SUMIS detector to evaluate all LLRs associated with one received vector $\bsy$ is $$\Nr\Nt^2+2\Nt^3+(4\ns^2+6)\Nt^2+2\Nr\Nt.$$ The processing in SUMIS that is performed per bit can be done in parallel. 
The processing (per channel matrix) that involves matrix decompositions and inversions is not as simple to parallelize. More specifically, the LDL (or equivalently the Cholesky) decomposition has an inherent sequential structure that cannot be fully parallelized. Such sequential structures are present in most matrix algebraic operations (e.g., inversions) that are commonly used in detection algorithms. While those operations cannot be fully parallelized, they can be highly parallelized, see [@Becker] and the references therein.
--- abstract: | We present a bijection between vacillating tableaux and pairs consisting of a standard Young tableau and an orthogonal Littlewood-Richardson tableau for the special orthogonal group $\mathrm{SO}(2k+1)$. This bijection is motivated by the direct-sum-decomposition of the $r$th tensor power of the defining representation of $\mathrm{SO}(2k+1)$. To formulate it, we use Kwon’s orthogonal Littlewood-Richardson tableaux and introduce new alternative tableaux they are in bijection with. Moreover we use a suitably defined descent set for vacillating tableaux to determine the quasi-symmetric expansion of the Frobenius characters of the isotypic components. address: 'Institut für Diskrete Mathematik und Geometrie, Fakultät für Mathematik und Geoinformation, TU Wien, Austria,Supported by the Austrian science fund (FWF): P29275' author: - Judith Jagenteufel title: | A Sundaram type bijection for $\mathrm{SO}(2k+1)$:\ vacillating tableaux and pairs consisting of a standard Young tableau and an orthogonal Littlewood-Richardson tableau --- Introduction ============ We present a bijection for ${\mathrm{SO}(2k+1)}$ between vacillating tableaux and pairs consisting of a standard Young tableau and an orthogonal Littlewood-Richardson tableau. This bijection explains the direct-sum-decomposition of a tensor power ${V^{\otimes r}}$ of the defining representation $V$ of ${\mathrm{SO}(2k+1)}$ combinatorially. In particular we consider $$\begin{aligned} {V^{\otimes r}}=\bigoplus_{\mu} V(\mu)\otimes U(r,\mu)=\bigoplus_{\mu} V(\mu)\otimes\bigoplus_{\lambda} c_{\lambda}^{\mu}(\mathfrak{d}) S(\lambda)\end{aligned}$$ as an ${\mathrm{SO}(2k+1)}\times {\mathfrak{S}_r}$ representation. $V(\mu)$ is an irreducible representation of ${\mathrm{SO}(2k+1)}$ and $S(\lambda)$ is a Specht module. We concentrate on $U(r,\mu)$. A basis of $U(r,\mu)$ can be indexed by vacillating tableaux. 
The multiplicities $c^{\mu}_{\lambda}$ can be obtained by counting orthogonal Littlewood-Richardson tableaux. A basis of $S(\lambda)$ is indexed by standard Young tableaux. To formulate our bijection, we use Kwon’s orthogonal Littlewood-Richardson tableaux [@MR3814326]. Those are defined in a very general way in terms of crystal graphs. We introduce an alternative set of orthogonal Littlewood-Richardson tableaux, which is in bijection with Kwon’s set via Bijection $A$ described by Algorithm \[alg:1\]. Our alternative tableaux are described in terms of skew semistandard tableaux with a reading word that is Yamanouchi. Those are similar to Sundaram’s symplectic tableaux [@MR2941115]. However, the additional condition we obtain is far more complicated than the one she obtained. Our new set of tableaux reduces the problem to finding a bijection between vacillating tableaux and standard Young tableaux with $2k+1$ rows, all of them with lengths of the same parity. We solve this reduced problem with Bijection $B$ described by Algorithm \[alg:2\]. The question of finding such a bijection was posed by Sundaram in her 1986 thesis [@MR2941115] and has been attacked several times since; in particular by Sundaram [@MR1041447] and Proctor [@MR1043509]. A key ingredient for us to find it was Kwon’s orthogonal Littlewood-Richardson tableaux, defined recently in [@MR3814326]. Okada [@MR3604801] recently obtained the decomposition of $U(r,\mu)$ for multiplicity-free cases implicitly using representation-theoretic computations. We obtain parts of these results as a special case, which are in turn special cases of Okada’s work. In fact, Okada asks for bijective proofs of his results. One might assume that Fomin’s machinery of growth diagrams could be employed to find such a bijection. For the symplectic group this was done by Roby [@MR2716353] and Krattenthaler [@MR3534070]. However, for the special orthogonal group the situation appears to be quite different.
In particular, at least a naive application of Fomin’s ideas does not yield the desired bijection between vacillating tableaux and the set of standard Young tableaux in question, not even for dimension $3$. For ${\mathrm{SO}(3)}$ a bijection was provided in [@3erAlgo]. In dimension $3$ vacillating tableaux are Riordan paths: lattice paths with north-east, east and south-east steps, no steps below the $x$-axis and no east steps on the $x$-axis. This special combinatorial structure led to stronger results there. For dimension $3$ the results we get are essentially the same as in [@3erAlgo]. The only new result for dimension $3$ is the description of our alternative orthogonal Littlewood-Richardson tableaux. An advantage of our combinatorial, bijective approach is that we obtain additional properties and consequences such as the following. We define a suitable notion of descents for vacillating tableaux and use the classical descent set for standard Young tableaux introduced by Schützenberger. We can show that our bijection is descent-preserving. Thus we obtain the quasi-symmetric expansion of the Frobenius character of the isotypic space $U(r,\mu)$: $$\begin{aligned} \mathrm{ch} \,U(r,\mu)=\sum F_{{\mathrm{Des}}(w)},\end{aligned}$$ where $F_D$ denotes a fundamental quasi-symmetric function, the sum runs over all vacillating tableaux $w$ of length $r$ and shape $\mu$ and ${\mathrm{Des}}(w)$ denotes the descent set of $w$. Among other things, this property justifies calling our bijection “Sundaram-like”, as she described a similar bijection for the defining representation of the symplectic group in her thesis [@MR2941115]. There exists a similar (but less complicated) definition for descents in oscillating tableaux, which are used in the symplectic case instead of vacillating tableaux, and which Sundaram’s bijection preserves.
Thus there also exists a similar quasi-symmetric expansion of the Frobenius character, obtained for the symplectic group by Rubey, Sagan and Westbury in [@MR3226822]. Background ========== Schur-Weyl duality ------------------ Considering the general linear group we start with the “classical Schur-Weyl duality” $$\begin{aligned} {V^{\otimes r}}\cong \bigoplus_{\substack{\lambda \vdash r \\ \ell(\lambda) \leq n}} V^{{\mathrm{GL}}}(\lambda) \otimes S(\lambda).\end{aligned}$$ Here $V$ is a complex vector space of dimension $n$. The general linear group ${\mathrm{GL}}(V)$ acts diagonally (and on each position by matrix multiplication) and the symmetric group ${\mathfrak{S}_r}$ permutes tensor positions. Thus we consider a ${\mathrm{GL}}(V)\times {\mathfrak{S}_r}$ representation. $V^{{\mathrm{GL}}}(\lambda)$ is an irreducible representation of ${\mathrm{GL}}(V)$ and $S(\lambda)$ is a Specht module. Now we consider a vector space $V$ of odd dimension $n=2k+1$. To obtain a similar decomposition, we use the restriction from ${\mathrm{GL}}(V)$ to ${\mathrm{SO}}(V)$ $$\begin{aligned} \label{eq:BranchingRule} V(\lambda)\downarrow^{{\mathrm{GL}}(V)}_{{\mathrm{SO}}(V)} \cong \bigoplus_{\substack{\mu \text{ a partition} \\ \ell(\mu) \leq k}} c_{\lambda}^{\mu}(\mathfrak{d}) V^{SO}(\mu ),\end{aligned}$$ where ${\mathrm{c}_{\lambda}^{\mu}(\mathfrak{d})}$ is the multiplicity of the irreducible representation $V^{{\mathrm{SO}}}(\mu)$ of ${\mathrm{SO}}(V)$ in $V^{{\mathrm{GL}}}(\lambda)$. For $\ell(\lambda)\leq k$ this simplifies to the classical branching rule due to Littlewood. 
Combining Schur-Weyl duality and the branching rule stated above we obtain an isomorphism of ${\mathrm{SO}}(V)\times {\mathfrak{S}_r}$ representations $$\begin{aligned} {V^{\otimes r}}\cong \bigoplus_{\substack{\lambda \vdash r \\ \ell(\lambda) \leq n}} \Big( \bigoplus_{\substack{ \mu \text{ a partition}\\ \ell(\mu) \leq k}} c_{\lambda}^{\mu}(\mathfrak{d}) V^{SO}(\mu ) \Big) \otimes S(\lambda) = \bigoplus_{\substack{\mu \text{ a partition} \\ \ell(\mu) \leq k}} V^{SO}(\mu )\otimes U(r,\mu)\end{aligned}$$ with isotypic components of weight $\mu$ $$\begin{aligned} U(r,\mu)=\bigoplus_{\substack{\lambda \vdash r \\ \ell(\lambda) \leq n}} {\mathrm{c}_{\lambda}^{\mu}(\mathfrak{d})}S(\lambda).\end{aligned}$$ The isomorphism of ${\mathrm{SO}}(V)$ representations (e.g. Okada [@MR3604801 Cor. 3.6]), $$\begin{aligned} V^{SO}(\mu )\otimes V\cong\bigoplus_{\substack{\ell(\lambda)\leq k\\ \lambda=\mu\pm \square\\ \text{or } \lambda=\mu \text{ and } \ell(\mu)=k}} V^{SO}(\lambda)\end{aligned}$$ implies that a basis of $U(r,\mu)$ can be indexed by so-called vacillating tableaux of shape $\mu$, defined in Section \[sec:VacTab\]. Kwon defined orthogonal Littlewood-Richardson tableaux as a set that is counted by ${\mathrm{c}_{\lambda}^{\mu}(\mathfrak{d})}$. We present Kwon’s definition, as well as a new combinatorial description, in Section \[sec:KwonsOLRT\] and introduce our new alternative tableaux in Section \[sec:BijA\]. A basis of $S(\lambda)$ can be indexed by standard Young tableaux. Therefore we are interested in a bijection between vacillating tableaux and pairs that consist of a standard Young tableau and an orthogonal Littlewood-Richardson tableau. Moreover we introduce descent sets for vacillating tableaux (see Section \[sec:VacTab\]). We show that our bijection preserves these descents, and follow the approach taken by Rubey, Sagan and Westbury [@MR3226822] for the symplectic group.
This enables us to describe the quasi-symmetric expansion of the Frobenius character (see the textbook by Stanley [@MR1676282]). Recall that the Frobenius character can be defined by the requirement that it be an isometry and $$\begin{aligned} \mathrm{ch}\, S(\lambda)=s_\lambda=\sum_{Q\in {\mathrm{SYT}}(\lambda)} F_{{\mathrm{Des}}(Q)}\end{aligned}$$ where $s_\lambda$ is a Schur function, ${\mathrm{Des}}(Q)$ denotes the descent set of a standard Young tableau (see Section \[sec:Descents\]) and $F_D$ is the *fundamental quasi-symmetric function* $$\begin{aligned} F_D=\sum_{\substack{i_1\leq i_2\leq \dots\leq i_r\\ j\in D \Rightarrow i_j<i_{j+1}}} x_{i_1}x_{i_2}\dots x_{i_r}.\end{aligned}$$ Therefore we obtain the following theorem. $$\begin{aligned} \mathrm{ch} \,U(r,\mu)=\sum F_{{\mathrm{Des}}(w)},\end{aligned}$$ where the sum runs over all vacillating tableaux $w$ of length $r$ and shape $\mu$ and ${\mathrm{Des}}(w)$ is the descent set of $w$. Standard Young Tableaux and Skew Semistandard Tableaux ------------------------------------------------------ We now introduce some well-known concepts in order to clarify notation. For a textbook treatment see [@MR1676282]. A *partition $\lambda\vdash n$* of a nonnegative integer $n$ is a sequence of positive integers $(\lambda_1,\lambda_2,\dots,\lambda_k)$ such that $\lambda_1\geq\lambda_2\geq\dots\geq\lambda_k>0$ and $\lambda_1+\lambda_2+\dots+\lambda_k=n$. The length $\ell(\lambda)$ of a partition $\lambda$ is the number of integers in this sequence, namely $k$. A *Young diagram* of a partition $\lambda$ is a collection of left-adjusted cells such that each row consists of $\lambda_i$ cells. The conjugate partition $\lambda'$ is the partition belonging to the transposed Young diagram of the partition $\lambda$. Let $\mu$ and $\lambda$ be partitions such that $\mu\subseteq \lambda$ (thus $\mu_i\leq \lambda_i$).
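The notions just introduced can be made concrete in a few lines; the following sketch represents a partition as a weakly decreasing list of positive integers, with ad hoc helper names.

```python
def is_partition(seq):
    """Weakly decreasing sequence of positive integers."""
    return all(a > 0 for a in seq) and all(a >= b for a, b in zip(seq, seq[1:]))

def conjugate(la):
    """Conjugate partition: the column lengths of the Young diagram of la."""
    return [sum(1 for part in la if part >= j) for j in range(1, la[0] + 1)]

assert is_partition([4, 2, 1]) and not is_partition([2, 3])
assert conjugate([4, 2, 1]) == [3, 2, 1, 1]          # transpose the diagram
assert conjugate(conjugate([4, 2, 1])) == [4, 2, 1]  # conjugation is an involution
```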
The *skew shape $\lambda\backslash \mu$* is the Young diagram of $\lambda$ with the cells of the Young diagram of $\mu$ missing. The partition $\mu$ is the inner shape while the partition $\lambda$ is the outer shape. A *horizontal strip* is a skew shape such that no two cells are in the same column. A *semistandard Young tableau* of shape $\lambda$ is obtained by a filling of the cells (with natural numbers) of the Young diagram of shape $\lambda$ such that each row is weakly increasing and each column is strictly increasing. We also consider *skew semistandard tableaux* where we take the Young diagram of a skew shape instead. We sometimes regard the missing cells as empty cells. A *reversed (skew) semistandard tableau* is a filling such that each row is weakly decreasing and each column is strictly decreasing. The *type* of a (reversed) semistandard Young tableau is $\mu=(\mu_1,\mu_2,\dots,\mu_l)$ where $\mu_i$ is the number of $i$’s in the tableau. A *standard Young tableau* of shape $\lambda$ is a semistandard Young tableau with entries $1,2,\dots,|\lambda|$. Thus rows are also strictly increasing. We write SYT($\lambda$) for the set of standard Young tableaux of shape $\lambda$. A tableau is *column* (respectively *row*) strict if its columns (respectively rows) are strictly increasing. By abuse of notation we call a horizontal strip in a tableau a collection of entries whose cells form a horizontal strip in the Young diagram. The *Robinson-Schensted correspondence* maps a word $w_1,\dots,w_m$ with $w_i\in \mathbb{N}$ to a pair $(P,Q)$ consisting of a semistandard Young tableau $P$, the insertion tableau, and a standard Young tableau $Q$, the recording tableau. (The insertion tableau $P$ is also a standard Young tableau if and only if $w$ is a permutation.) To construct it we start with empty tableaux $P$ and $Q$. We insert the letters $w_i$ of $w$ from left to right into $P$.
We insert $w_i$ into the first row using the following procedure: Element $e$ gets inserted into row $j$ as follows: - If all elements in row $j$ are smaller than or equal to $e$ (or row $j$ is empty), place $e$ at the end of row $j$. - Otherwise search for the leftmost element $f$ in row $j$ that is larger than $e$. Put $e$ in its place and insert $f$ into row $j+1$ using the same procedure again. We say that $f$ got “bumped” into the next row. Insert $i$ into $Q$ in the position where the new cell of $P$ was added. The *reading word* of a (skew) (semi)standard Young tableau is the word obtained by concatenating the rows from bottom to top. A word $w$ with entries in the natural numbers $w_1,w_2,\dots,w_l$ is called a Yamanouchi word (or lattice permutation) if for all $i$ and any initial segment $s$ the number of $i$’s in $s$ is at least as great as the number of $(i+1)$’s in $s$. A word $w_1,w_2,\dots,w_m$ is a *reverse Yamanouchi word* if $w_m,\dots,w_2,w_1$ is Yamanouchi. For reverse Yamanouchi words the following theorem holds (see [@MR675953]): \[theo:YamanouchiWordSSYT\] A word $w$ is a reverse Yamanouchi word if and only if the insertion tableau $P$ obtained by Robinson-Schensted is a tableau in which, for every $i$, row $i$ is filled entirely with entries $i$. ### Descents of Standard Young Tableaux Let $Q\in {\mathrm{SYT}}(\lambda)$ be a standard Young tableau. An entry $j$ is a *descent* if $j+1$ is in a row below $j$. We define the *descent set* of $Q$ as: ${\mathrm{Des}}(Q)={\{j:j \text{ is a descent of } Q\}}$.
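The row insertion procedure, the Yamanouchi condition and the descent set just defined can be sketched as follows; tableaux are represented as lists of rows, and all helper names are ad hoc.

```python
def rs_insert(P, e):
    """Insert e into the tableau P (a list of rows) by row bumping."""
    for row in P:
        if not row or row[-1] <= e:
            row.append(e)                    # e goes to the end of the row
            return
        j = next(i for i, f in enumerate(row) if f > e)
        row[j], e = e, row[j]                # bump the leftmost larger entry
    P.append([e])                            # start a new row at the bottom

def insertion_tableau(word):
    P = []
    for e in word:
        rs_insert(P, e)
    return P

def is_yamanouchi(word):
    """Every prefix contains at least as many i's as (i+1)'s, for all i."""
    counts = {}
    for w in word:
        counts[w] = counts.get(w, 0) + 1
        if w > 1 and counts[w] > counts.get(w - 1, 0):
            return False
    return True

def descent_set(Q):
    """Entries j of a standard Young tableau Q with j+1 in a strictly lower row."""
    row_of = {e: r for r, row in enumerate(Q) for e in row}
    return {j for j in range(1, len(row_of)) if row_of[j + 1] > row_of[j]}

w = [2, 3, 1, 2, 1, 1]                       # w reversed is Yamanouchi
assert is_yamanouchi(w[::-1])
assert insertion_tableau(w) == [[1, 1, 1], [2, 2], [3]]   # row i holds only i's
assert descent_set([[1, 2, 5], [3, 4]]) == {2}
```

The last two assertions illustrate Theorem \[theo:YamanouchiWordSSYT\] and the descent definition on small examples.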
\[ex:DescSYT\] The following standard Young tableau of shape $(6,2,2,2,2)$ has descent set $\{2,3,5,7,12\}$; its rows, from top to bottom, are $(1,2,10,11,12,14)$, $(3,5)$, $(4,7)$, $(6,9)$ and $(8,13)$. ### Concatenation of Standard Young tableaux The *concatenation* $Q$ of two standard Young tableaux $Q_1$ and $Q_2$ is obtained as follows. First add the largest entry of $Q_1$ to each entry of $Q_2$ to obtain the tableau $\widetilde{Q_2}$. Then append row $i$ of $\widetilde{Q_2}$ to row $i$ of $Q_1$ to obtain $Q$. This procedure is associative, thus we can consider the concatenation of several standard Young tableaux. We say a standard Young tableau $Q$ is the concatenation of $m$ standard Young tableaux if we can find standard Young tableaux $Q_1,\dots,Q_m$ such that $Q$ is the concatenation of those. We will be interested only in those concatenations where each tableau has either all rows of even length, or all row lengths of the same parity.
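The concatenation procedure can be sketched in a few lines; tableaux are lists of rows and the function name is ad hoc. The test data below is the concatenation computed in the following example.

```python
def concatenate(Q1, Q2):
    """Concatenate standard Young tableaux: shift Q2 by the largest
    entry of Q1, then append its rows row-wise to Q1."""
    shift = max(max(row) for row in Q1)
    Q2s = [[e + shift for e in row] for row in Q2]
    n = max(len(Q1), len(Q2s))
    return [(Q1[i] if i < len(Q1) else []) +
            (Q2s[i] if i < len(Q2s) else []) for i in range(n)]

Q1 = [[1, 2, 5, 6, 9, 10], [3, 4], [7, 8]]
Q2 = [[1], [2], [3], [4], [5]]                   # a single column
assert concatenate(Q1, Q2) == [[1, 2, 5, 6, 9, 10, 11], [3, 4, 12],
                               [7, 8, 13], [14], [15]]
```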
\[ex:ConCatSYTDef\] We concatenate two standard Young tableaux: the tableau with rows $(1,2,5,6,9,10)$, $(3,4)$ and $(7,8)$, and the one-column tableau with entries $1,\dots,5$. The result is the tableau with rows $(1,2,5,6,9,10,11)$, $(3,4,12)$, $(7,8,13)$, $(14)$ and $(15)$. The first tableau itself is a concatenation of standard Young tableaux. The parts are the tableau containing only the numbers $1$ up to $8$ and two single cells containing $1$. If one is interested in a concatenation of tableaux with row lengths of the same parity, we can also take the tableau containing the numbers $1$ up to $8$ and, as second tableau, the one-rowed tableau containing $1$ and $2$. (Empty rows $j$ are counted as rows of even length for $j<n$.) Vacillating Tableaux {#sec:VacTab} -------------------- We define vacillating tableaux (as defined by Sundaram in [@MR1041447 Def. 4.1]) in three different ways, once as a sequence of Young diagrams, once in terms of highest weight words and once as certain $k$-tuples of lattice paths. 1.
A ($(2k+1)$-*orthogonal*) *vacillating tableau* of length $r$ is a sequence of Young diagrams $\emptyset=\mu^0, \mu^1,\dots,\mu^r=\mu$ each of at most $k$ parts, such that: - $\mu^i$ and $\mu^{i+1}$ differ in at most one cell, - $\mu^i=\mu^{i+1}$ only occurs if the $k$th row of cells is non-empty. The partition belonging to the final Young diagram $\mu$ is the *shape* of the tableau. 2. A ($(2k+1)$-*orthogonal*) *highest weight word* is a word $w$ with letters in $\{\pm 1, \pm 2, \dots, \pm k, 0\}$ of length $r$ such that for every initial segment $s$ of $w$ the following holds (we write $\#i$ for the number of $i$’s in $s$): - $\#i - \#(-i) \geq 0$, - $\#i-\#(-i)\geq \#(i+1)-\#(-i-1)$, - if the last letter is $0$ then $\#k-\#(-k)>0$. The partition $(\#1-\#(-1),\#2-\#(-2),\dots, \#k - \#(-k))$ is the *weight* of a highest weight word. The vacillating tableau corresponding to a word $w$ is the sequence of weights of the initial segments of $w$. 3. Riordan paths are Motzkin paths without horizontal steps on the $x$-axis. They consist of up (north-east) steps, down (south-east) steps, and horizontal (east) steps, such that there is no step beneath the $x$-axis and no horizontal step on the $x$-axis. A $k$-tuple of Riordan paths of length $r$ is a vacillating tableau of length $r$ if it meets the following conditions: - The first path is a Riordan path of length $r$. - Path $i$ has steps where path $i-1$ has horizontal steps. Path $i$ is never higher than path $i-1$. For a better readability we sometimes label the steps with $1,\dots,r$ in order to see which steps belong together and shift paths together. The corresponding highest weight word is described as follows: A value $i$ is an up-step in path $i$ and a horizontal step in paths $1$ up to $i-1$. Similarly a value $-i$ is a down-step in path $i$ and a horizontal step in paths $1$ up to $i-1$ and a value $0$ is a horizontal step in every path, including $k$. 
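The prefix conditions defining a highest weight word can be checked directly; the following sketch encodes a word as a list of integers in $\{\pm 1,\dots,\pm k,0\}$ and uses ad hoc function names.

```python
def is_highest_weight_word(w, k):
    """Check the prefix conditions for a (2k+1)-orthogonal highest weight word."""
    d = [0] * (k + 2)                  # d[i] = #i - #(-i); sentinel d[k+1] = 0
    for letter in w:
        if letter > 0:
            d[letter] += 1
        elif letter < 0:
            d[-letter] -= 1
        elif d[k] <= 0:                # a letter 0 requires #k - #(-k) > 0
            return False
        if any(d[i] < d[i + 1] for i in range(1, k + 1)):
            return False               # violates d[1] >= ... >= d[k] >= 0
    return True

def weight(w, k):
    """The weight of the word: the vector (#1 - #(-1), ..., #k - #(-k))."""
    return tuple(w.count(i) - w.count(-i) for i in range(1, k + 1))

w = [1, 2, 1, 0, 0, -2, -1, 2, -2, -1]       # a sample word of length 10, k = 2
assert is_highest_weight_word(w, k=2)
assert weight(w, k=2) == (0, 0)              # the shape is the empty partition
assert not is_highest_weight_word([0], k=2)  # a 0 needs #k - #(-k) > 0 first
```

The corresponding vacillating tableau is then the sequence of weights of the initial segments, as stated above.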
By abuse of terminology we refer to all three objects as *vacillating tableaux*. \[ex:VacTab\] The same object once written as a vacillating tableau, once as a highest weight word and once as a tuple of Riordan paths: the highest weight word is $1,2,1,0,0,-2,-1,2,-2,-1$, and the corresponding sequence of shapes is $\emptyset$, $(1)$, $(1,1)$, $(2,1)$, $(2,1)$, $(2,1)$, $(2)$, $(1)$, $(1,1)$, $(1)$, $\emptyset$. ### Descents of Vacillating Tableaux {#sec:Descents} We define descents for
vacillating tableaux using highest weight words. A letter $w_i$ of $w$ is a *descent* if there exists a directed path from $w_i$ to $w_{i+1}$ in the crystal graph for the defining representation of ${\mathrm{SO}(2k+1)}$ $$1\to 2\to \dots\to k \to 0 \to -k \to \dots \to -1$$ unless $w_iw_{i+1} = j(-j)$ and $\#j-\#(-j)=0$ holds for the initial segment $w_1,\dots,w_{i-1}$. We define the *descent set* of $w$ as ${\mathrm{Des}}(w)= {\{j: j \text{ is a descent of } w\}}$. In the tuple of paths a descent is a convex corner between consecutive steps, except for an up-step starting at the bottom level followed by a down-step. The following vacillating tableau, with highest weight word $1,1,2,0,2,-2,0,-1,-2,2,-2,1,-1,-1$, has descent set $\{2,3,5,7,12\}$. Note that $10$ is not a descent, as steps $10$ and $11$ are at the bottom level. (It is no coincidence that the standard Young tableau in Example \[ex:DescSYT\] has the same descents, as they are assigned to each other by Bijection $B$.)
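The descent condition just defined can be sketched directly on a highest weight word; the sample word below is the one of the preceding example, and the function names are ad hoc.

```python
def vacillating_descents(w, k):
    """Descent set of a highest weight word w for SO(2k+1)."""
    # Position of a letter along the chain 1 -> ... -> k -> 0 -> -k -> ... -> -1.
    def pos(x):
        return x if x > 0 else (k + 1 if x == 0 else 2 * k + 2 + x)
    descents = set()
    for i, (a, b) in enumerate(zip(w, w[1:])):
        if pos(b) <= pos(a):
            continue                       # no directed path from a to b
        if a > 0 and b == -a and w[:i].count(a) == w[:i].count(-a):
            continue                       # excluded pair j(-j) at level zero
        descents.add(i + 1)                # letters are numbered from 1
    return descents

w = [1, 1, 2, 0, 2, -2, 0, -1, -2, 2, -2, 1, -1, -1]
assert vacillating_descents(w, k=2) == {2, 3, 5, 7, 12}
```

Position $10$ is correctly excluded: $w_{10}w_{11}=2(-2)$ while $\#2-\#(-2)=0$ in the prefix $w_1,\dots,w_9$.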
### Concatenation of Vacillating Tableaux

The concatenation of vacillating tableaux of shape $\emptyset$ is obtained by writing them side by side. If we write them with labels, we adjust the labels so that they increase from left to right. The following vacillating tableau is the concatenation of three vacillating tableaux: first the steps $1$ to $8$, then the steps $9$ and $10$, and third the steps $11$ to $15$. (We will see that it corresponds to the standard Young tableau of Example \[ex:ConCatSYTDef\] under Bijection $B$.)

[Figure: the tuple of lattice paths of this concatenated vacillating tableau.]

Crystal Graphs {#sec:CrystalGraphs}
--------------

In this section we summarize some properties of crystal graphs. In particular, we describe a certain crystal graph that we need for defining orthogonal Littlewood-Richardson tableaux. For more information on crystals see the textbook by Hong and Kang [@MR1881971]. *Crystal graphs* are certain acyclic directed graphs in which every vertex has finite in- and out-degree and each edge is labeled by a natural number. We only use crystal graphs whose vertices are labeled with certain tableaux.
For each vertex $C$ there is at most one outgoing edge labeled with $i$. If such an edge exists, we denote its target by $f_i(C)$; otherwise $f_i(C)$ is defined to be the distinguished symbol $\varnothing$. Analogously, there is at most one incoming edge labeled with $i$, and we define $e_i(C)$ as the tableau obtained by following an incoming edge labeled with $i$; otherwise $e_i(C)=\varnothing$. We denote by $\varphi_i(C)$ (respectively $\varepsilon_i(C)$) the number of times one can apply $f_i$ (respectively $e_i$) to $C$. We consider infinite crystal graphs. However, for the crystal graphs we consider, the following holds: if we fix a natural number $\ell$ and delete all edges labeled with $\ell$ or larger, as well as all vertices that have incoming edges labeled with $\ell$ or larger, we obtain a finite crystal graph. Thus many properties proven for finite crystal graphs also hold for our infinite crystal graphs. The crystal graphs we consider are all tensor products of the following crystal graph. The crystal graph of one-column tableaux is defined as follows:

1. The vertices are column strict tableaux with a single column and positive integers as entries.

2. Suppose that $i\in \mathbb{N}$, $i>0$, is an entry in a tableau $C$ but $i+1$ is not. Then $f_i(C)$ is the tableau one obtains by replacing $i$ by $i+1$. Otherwise $f_i(C)=\varnothing$.

3. Suppose that neither $1$ nor $2$ is an entry in a tableau $C$. Then $f_0(C)$ is the tableau one obtains by adding a domino with entries $1$ and $2$ (the $1$ on top) on top of $C$. Otherwise $f_0(C)=\varnothing$.

See Figure \[fig:Crystal\] for an example.
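The operators $f_i$ on one-column tableaux can be sketched directly from this definition; a column is modeled as its strictly increasing tuple of entries, and `None` plays the role of $\varnothing$ (both conventions are our own):

```python
def f(i, C):
    """Apply the crystal operator f_i to a one-column tableau C,
    given as a strictly increasing tuple of positive entries."""
    s = set(C)
    if i == 0:
        # add a domino with entries 1 (on top) and 2, if neither occurs in C
        if 1 in s or 2 in s:
            return None
        return tuple(sorted(s | {1, 2}))
    # replace the entry i by i + 1, if i occurs in C but i + 1 does not
    if i in s and i + 1 not in s:
        return tuple(sorted((s - {i}) | {i + 1}))
    return None
```

For example, $f_0$ applied to the column with entries $3,4$ adds the domino on top and yields the column with entries $1,2,3,4$.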
[Figure \[fig:Crystal\]: a part of the crystal graph of one-column tableaux.]

We now define the tensor product of crystal graphs. We will use tensor products of the crystal graph defined above for defining orthogonal Littlewood-Richardson tableaux.
The tensor product $B_1\otimes B_2$ of two crystal graphs $B_1$ and $B_2$ is a crystal graph with vertex set $B_1 \times B_2$ and edges satisfying: $$\begin{aligned} f_i(b\otimes b')&=\left\{ \begin{array}{ll} b\otimes f_i(b') & \text{if } \varepsilon_i(b)<\varphi_i(b')\\ f_i(b)\otimes b' & \text{otherwise} \end{array} \right.\\ e_i(b\otimes b')&=\left\{ \begin{array}{ll} e_i(b)\otimes b' & \text{if } \varepsilon_i(b)>\varphi_i(b')\\ b\otimes e_i(b') & \text{otherwise} \end{array} \right.\end{aligned}$$ and $$\begin{aligned} \varphi_i(b\otimes b')&= \varphi_i(b) + \max(0,\varphi_i(b')-\varepsilon_i(b))\\ \varepsilon_i(b\otimes b')&= \varepsilon_i(b') + \max(0,\varepsilon_i(b)-\varphi_i(b')).\end{aligned}$$

Kwon’s Orthogonal Littlewood-Richardson Tableaux {#sec:KwonsOLRT}
================================================

In this section we first present Kwon’s orthogonal Littlewood-Richardson tableaux. This description is very general, so afterwards we give a new, explicit formulation of his orthogonal Littlewood-Richardson tableaux. Although Kwon considers $\mathrm{O}(n)$, for odd $n$ ($n=2k+1$) we get ${\mathrm{SO}}(n)$ as a special case. In this case $V(\lambda)\downarrow^{\mathrm{O}(2k+1)}_{{\mathrm{SO}}(2k+1)}$ is an irreducible ${\mathrm{SO}}(2k+1)$ representation, and every such representation is isomorphic to such a restriction (see for example Okada [@MR3604801 Sect. 2.4]). We start by introducing some notation. Let $T$ be a two-column skew semistandard tableau of shape $(2^b,1^m)/(1^a)$, with $b\geq a \geq 0$ and $m>0$. The *tail* of $T$ is the part where only the first column exists, that is, the lower $m$ entries of the first column. The topmost tail position is the *tail root* and the rest of the tail is the *lower tail*. The *fin* of $T$ is the largest entry in the second column. The *residuum* of $T$ is the number of positions the second column can be shifted down while maintaining semistandardness.
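The residuum can be computed by sliding the right column down one row at a time and checking semistandardness. A Python sketch, encoding a tableau of shape $(2^b,1^m)/(1^a)$ by its two columns read top to bottom together with the offset $a$ (this encoding is our own):

```python
def residuum(L, R, a):
    """Number of positions the right column R can be shifted down while
    the tableau of shape (2^b, 1^m)/(1^a) stays semistandard.
    L and R list the column entries from top to bottom."""
    b = len(R)
    m = len(L) - (b - a)

    def semistandard_after_shift(s):
        # with the right column shifted down by s, compare entries in the
        # rows in which both columns have a cell
        for row in range(max(a, s) + 1, min(a + len(L), s + b) + 1):
            if L[row - a - 1] > R[row - s - 1]:
                return False
        return True

    s = 0
    while s < min(a, m) and semistandard_after_shift(s + 1):
        s += 1
    return s
```

The loop never runs past $\min(a,m)$, since a shift by more than $a$ or more than $m$ would break the two-column skew shape.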
In particular, the residuum of $T$ is at most $\min(a,m)$. For a partition $\mu$ with at most $k$ parts we define the crystal graph $B^{\mathfrak{d}}(\mu)$ as follows. It is the subgraph of the tensor product of $n=2k+1$ one-column crystal graphs whose vertices are tuples $(T_1,T_2,\dots,T_{\ell(\mu)},S)$ of skew semistandard tableaux such that:

- Each $T_j$ has shape $(2^{b_j},1^{\mu_j})/(1^{a_j})$, with $b_j\geq a_j \geq 0$, $b_j,a_j$ even and residuum at most one.

- $S$ is of rectangular outer shape and has $n-2\ell(\mu)$ (possibly empty) columns, all of whose lengths have the same parity. We say $S$ is even if its columns have even length, and $S$ is odd otherwise.

The crystal $T^\mathfrak{d}(\mu)$ is the subgraph of $B^{\mathfrak{d}}(\mu)$ whose vertices are in the same component as one of the following highest weight elements:

- $T_j$ has its left column filled with $1,2,\dots,\mu_j$ and its right column empty.

- Either $S$ is empty or $S$ is a single row of $n-2\ell(\mu)$ entries equal to $1$.

The set of *orthogonal Littlewood-Richardson tableaux* is $${\mathrm{LR}_\lambda^{\mu}(\mathfrak{d})}={\{L\in T^{\mathfrak{d}}(\mu):\text{ $i$ occurs in $L$ exactly $\lambda'_i$ times and } \varepsilon_i(L)=0 \text{ for } i \neq 0\}}$$ with $\ell(\lambda)\leq n=2k+1$ and $\ell(\mu)\leq k$. As announced before, the set of orthogonal Littlewood-Richardson tableaux is counted by ${\mathrm{c}_{\lambda}^{\mu}(\mathfrak{d})}$; this is one of the main results of [@MR3814326], see [@MR3814326 Theorem 5.3].

${\mathrm{c}_{\lambda}^{\mu}(\mathfrak{d})}=|{\mathrm{LR}_\lambda^{\mu}(\mathfrak{d})}|$

For two-column skew semistandard tableaux we define admissibility, which tells us whether an element of $B^{\mathfrak{d}}(\mu)$ is in $T^\mathfrak{d}(\mu)$. To do so we need, for a skew semistandard tableau $T=(T^L,T^R)$ consisting of a left and a right column, the pairs $(^LT,^RT)$ and $(T^{L^*},T^{R^*})$: Let $T=(T^L,T^R)$ be a two-column skew semistandard tableau.
We define the pair $(^LT,^RT)$ of two one-column, column strict tableaux as follows. Beginning at the bottom, we slide each cell of $T^R$ down as far as possible, not beyond the bottom cell of $T^L$ and so that the entry of its left neighbor is not larger. Then ${^RT}$ consists of all entries of $T^R$, together with those in $T^L$ that have no right neighbor; ${^LT}$ consists of the remaining entries in $T^L$. If $T$ has residuum $1$, we additionally define the pair $(T^{L^*},T^{R^*})$ of two one-column, column strict tableaux as follows. Beginning at the top, we slide each cell of $T^L$ up as far as possible, not beyond the top cell of $T^R$ and so that the entry of its right neighbor is not smaller. Then $T^{L^{*}}$ consists of all entries in $T^L$, together with the largest entry in $T^R$ that has no left neighbor. Note that such an entry must exist because $T$ has residuum $1$ and $a$ is even, thus $a\geq 2$. $T^{R^{*}}$ consists of the remaining entries in $T^R$. See Figure \[fig:Admissible\] for examples. For a single column $C$, let $C(i)$ be the $i$th entry from the bottom and $\mathrm{ht}(C)$ its length. Let $T$ and $U$ be two two-column skew semistandard tableaux with tails of lengths $\mu_T$ and $\mu_U$ such that $\mu_T\geq\mu_U>0$, and with residuum $r_T\leq 1$ and $r_U\leq 1$, respectively. The pair $(T,U)$ is *admissible* if the following conditions are met: $$\begin{aligned} \mathrm{ht}(T^R) \leq \mathrm{ht}(U^L)-\mu_U+2r_Tr_U \tag{H} \label{eq:H}\end{aligned}$$ $$\begin{aligned} T^R(i)\leq {^LU(i)} & \quad \text{ if } r_T\cdot r_U=0\tag{A1} \label{eq:A1}\\ T^{R^*}(i)\leq {^LU(i)} & \quad \text{ if } r_T\cdot r_U=1\nonumber\end{aligned}$$ $$\begin{aligned} {^RT}(i+\mu_T -\mu_U)\leq U^L(i) & \quad \text{ if } r_T\cdot r_U=0\tag{A2} \label{eq:A2}\\ {^RT}(i+\mu_T -\mu_U)\leq U^{L^*}(i) & \quad \text{ if } r_T\cdot r_U=1\nonumber\end{aligned}$$ Let $T$ be a two-column skew semistandard tableau with tail of length $\mu_T>0$ and residuum $r_T\leq 1$.
Let $S$ be a skew semistandard tableau of rectangular outer shape with first column $S^L$ and columns whose lengths have the same parity. The pair $(T,S)$ is *admissible* if the following conditions are met: $$\begin{aligned} \mathrm{ht}(T^R) \leq \mathrm{ht}(S^L) & \quad \text{ if } S \text{ is even}\tag{H$'$} \label{eq:H_}\\ \mathrm{ht}(T^R) \leq \mathrm{ht}(S^L)-1+2r_T & \quad \text{ otherwise }\nonumber\end{aligned}$$ $$\begin{aligned} T^R(i)\leq {S^L(i)} & \quad \text{ if $S$ is even or } r_T=0 \tag{A1$'$} \label{eq:A1_}\\ T^{R^*}(i)\leq {S^L(i)} & \quad \text{ otherwise}\nonumber\end{aligned}$$ $$\begin{aligned} {^RT}(i+\mu_T-1)\leq S^L(i) & \quad \text{ if $S$ is odd and } r_T=0\tag{A2$'$} \label{eq:A2_}\\ {^RT}(i+\mu_T)\leq S^L(i) & \quad \text{ otherwise}\nonumber\end{aligned}$$ \[theo:KwonLRTabs\] Let $L=(T_1,T_2,\dots,T_{\ell(\mu)},S)$ be a vertex in $B^{\mathfrak{d}}(\mu)$. Then $L$ is a vertex of $T^\mathfrak{d}(\mu)$ if and only if every pair of successive tableaux in $L$ is admissible. See Figure \[fig:Admissible\] for an example.
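The admissibility conditions reduce to a height comparison and two entrywise comparisons. The following Python sketch assumes that the derived columns ${^RT}$, $T^{R^*}$, ${^LU}$ and $U^{L^*}$ have already been computed; the dictionary keys, and the convention of listing each column from the bottom so that index $i$ matches $C(i)$, are our own:

```python
def dominated(C, D, shift=0):
    # entrywise condition C(i + shift) <= D(i) wherever both entries exist;
    # columns are tuples listed from the bottom, so C(i) is C[i - 1]
    return all(C[i + shift - 1] <= D[i - 1]
               for i in range(1, len(D) + 1)
               if 1 <= i + shift <= len(C))

def admissible(T, U):
    """Conditions (H), (A1) and (A2) for a pair (T, U) of two-column
    tableaux with tail lengths mu_T >= mu_U > 0 and residuum at most 1.
    T['RT'] stands for ^RT and U['LL'] for ^LU."""
    r = T['res'] * U['res']                                    # r_T * r_U
    if len(T['R']) > len(U['L']) - U['mu'] + 2 * r:            # (H)
        return False
    if not dominated(T['Rstar'] if r else T['R'], U['LL']):    # (A1)
        return False
    return dominated(T['RT'], U['Lstar'] if r else U['L'],     # (A2)
                     T['mu'] - U['mu'])
```

The same pattern, with conditions (H$'$), (A1$'$) and (A2$'$), handles a pair $(T,S)$.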
[Figure \[fig:Admissible\]: examples of the constructions $({^LT},{^RT})$, $(T^{L^*},T^{R^*})$, $({^LU},{^RU})$, $(U^{L^*},U^{R^*})$ and of an admissible pair.]

\[rem:LRTabsSubSet\] Let $L\in{\mathrm{LR}_\lambda^{\mu}(\mathfrak{d})}$ be an orthogonal Littlewood-Richardson tableau. Moreover let $\tilde{L}=(T_j,T_{j+1},\dots,T_{\ell(\mu)},S)$ be the tableau obtained from $L$ by deleting the first $j-1$ semistandard tableaux. Due to Theorem \[theo:KwonLRTabs\], $\tilde{L}$ is an orthogonal Littlewood-Richardson tableau in $\mathrm{LR}_{\lambda}^{\tilde{\mu}}(\mathfrak{d})$, where $\tilde{\mu}=(\mu_j,\mu_{j+1},\dots,\mu_{\ell(\mu)})$.

We now give an explicit description of Kwon’s orthogonal Littlewood-Richardson tableaux. For it we need the concept of gaps and slots. Let $T$ be a semistandard tableau. A position $j>1$ of $T$ is a *gap* if $j-1$ is not in the same column as $j$.
A position $j>0$ of $T$ is a *slot* if $j+1$ is not in the same column as $j$. Note that above a gap there is either a slot or nothing, and below a slot there is either a gap or nothing. In the first tableau of Figure \[fig:Admissible\] the $3$ and the $8$ in the first column and the $3$ in the second column are slots, while the $7$ in the first column is a gap and the $6$ in the second column is both a gap and a slot.

\[theo:LRTabs\] Let $\lambda\vdash r$, $\ell(\lambda)\leq n(=2k+1)$, $\ell(\mu)\leq k$. Let $L=(T_1,T_2,\dots,T_{\ell(\mu)},S)$ be a vertex in $B^{\mathfrak{d}}(\mu)$. Then $L$ is an orthogonal Littlewood-Richardson tableau in ${\mathrm{LR}_\lambda^{\mu}(\mathfrak{d})}$ for ${\mathrm{SO}}(n)$ if and only if for all $i$ there are $\lambda_i'$ $i$’s in $L$ and the following conditions are met:

1. $b_i\leq b_{i+1}-a_{i+1}+2r_ir_{i+1}$ for $1\leq i \leq \ell(\mu)-1$.

2. $b_{\ell(\mu)}\leq \mathrm{ht}(S^L)$ if $S$ is even and $b_{\ell(\mu)}\leq \mathrm{ht}(S^L)-1+2r_{\ell(\mu)}$ if $S$ is odd.

3. $S$ contains no gap.

4. The tableaux $T_1,T_2,\dots,T_{\ell(\mu)}$ are of one of the following three types.

    1. Type 1 tableaux have residuum 0. Gaps can occur only in the tail.

    2. Type 2 tableaux have residuum 1. Gaps can occur only in the lower tail.

    3. Type 3 tableaux have residuum 1. The fin is a gap. Other gaps can occur only in the lower tail.

    If $T_i$ is of type 3 and $i<\ell(\mu)$, then $T_{i+1}$ has residuum 1 and the fin of $T_i$ is not larger than the fin of $T_{i+1}$. If $T_{\ell(\mu)}$ is of type 3, then $S$ is odd. If $T_{\ell(\mu)}$ is of type 1 and $S$ is odd, then the tail root is smaller than or equal to $S^L(1)$, the bottommost position in the first column of $S$.

5. The tails, shifted together such that they share the top line, form a semistandard Young tableau.

6. For each gap $j$ there is a slot $j-1$ in a column to the right. This can be in the same tableau $T_i$ or in another one that is to the right of $T_i$ in $L$, including $S$.
More precisely, if there are $m$ gaps $j$, there are $m$ slots $j-1$ such that gaps and slots can be matched into pairs with each slot to the right of its gap.

Properties and , as well as Properties and , are just reformulations of each other. That is why we named them identically.

\[lem:TensorEpsilonZero\] Let $L$ be in $B^{\mathfrak{d}}(\mu)$. Then $\varepsilon_i(L)=0 \text{ for } i \neq 0$ if and only if and .

A column $C$ contains a gap $j$ if and only if $\varepsilon_j(C)>0$; in this case $\varepsilon_j(C)=1$. On the other hand, a column $C$ contains a slot $j$ if and only if $\varphi_j(C)>0$; in this case $\varphi_j(C)=1$. The tensor product tells us $\varepsilon_j(b\otimes b')=\varepsilon_j(b')+\max(0,(\varepsilon_j(b)-\varphi_j(b')))$ and therefore $\varepsilon_j(b\otimes b')\geq \varepsilon_j(b')$. For a tensor product consisting of several columns to have $\varepsilon_j=0$ this means that the first column needs to contain no gap and . Because $S$ is a skew semistandard tableau and the rightmost column has no gaps, it cannot have gaps, because slots to the right are too big. This also shows that the filling of such a tableau is a partition.

\[lem:TypeProperty\] Let $L=(T_1, T_2, \dots, T_{\ell(\mu)}, S)$ be a tableau in $B^{\mathfrak{d}}(\mu)$ such that , , and hold. Then if and only if and hold also without the tail root condition for residuum zero tableaux holds.

We first show inductively that the following two statements hold if and only if and hold.

- Suppose $T_i$ has residuum 1. Then $T_i^{R^*}$ is $T_i^R$ without the fin and ${^LT_i}$ is $T_i^L$ without the lower tail. The tail root is not a gap. There is no slot smaller than the bottommost position of $T_i^{R^*}$ in $T_i^R$ or to the right. There is no slot smaller than the tail root in $T_i^L$ or to the right. If the fin is a gap, then $T_{i+1}$ also has residuum 1, or, if $i=\ell(\mu)$, $S$ is odd.

- Suppose $T_i$ has residuum 0. Then ${^LT_i}$ is $T_i^L$ without the tail.
- There is no slot smaller than the fin in $T_i^R$ or to the right. There is no slot smaller than the bottommost position of ${^LT_i}$ in $T_i^L$ or to the right.

This implies that there are no gaps at and above the positions in question, because slots to the right are too big. In the base case $L=S$ we can argue that this is equivalent to $S$ being a skew semistandard tableau. In the induction step we consider $T_1$. (Compare with Remark \[rem:LRTabsSubSet\].) If $T_1$ has residuum 1, it holds that:

- $T^{R^{*}}_1$ contains one position less than $T^R_1$. Let us call this position $l_1$. Suppose that $l_1$ is not the fin. In this case there exists a position $l_3$ directly below $l_1$. As $l_1$ is not in $T^{R^{*}}_1$, there exists a position $l_2$ in $T^{L}_1$ that is shifted next to $l_3$ when determining $T^{R^{*}}_1$. Therefore $l_1<l_2\leq l_3$. If $l_2-1$ is in $T_1^L$, it is at most one position above $l_2$, thus directly beside $l_1$, which is a contradiction. Therefore $l_2$ is a gap. If $l_2-1=l_1$, either $l_3$ is a gap or $l_1$ is not a slot. Thus either $l_2$ or $l_3$ is a gap with no slot in $T_1$. However $l_3$ is in $T^{R^{*}}_1$ and therefore smaller than or equal to the bottommost position of ${^LT_2}$ (or $S^L$ if $\ell(\mu)=1$, respectively). We have seen by induction that there are no smaller slots to the right. This is a contradiction.

- The bottommost position of $T^{R^{*}}_1$ (the position above the fin) is smaller than or equal to the bottommost position of ${^LT_2}$ (or $S^L$ if $\ell(\mu)=1$, respectively). Thus there is no slot that is small enough for this position or one above to be a gap.

- Because $T^{R^{*}}_1$ is $T^R_1$ without the fin, the tail root is shifted above the fin when calculating $T_1^{R^*}$. Therefore it is smaller than or equal to the bottommost position of $T_1^{R^*}$. By the same argument as above, neither it nor a position above is a gap and no slot is smaller than it.
- If we consider the procedure to obtain ${^LT_1}$, we see that the fin is placed beside the tail root due to residuum 1, and therefore only the lower tail is shifted right.

If $T_1$ has residuum 0, it holds that:

- The fin is smaller than or equal to the smallest slot to the right. Therefore it is not a gap and there are no gaps above. The same holds for the position above the tail root.

- Due to residuum 0, nothing is shifted beside the tail root when calculating ${^LT_1}$, thus the whole tail changes column.

On the other hand, if those statements hold, the inequalities that hold for the bottommost positions of the considered columns, the column strictness and the lack of gaps imply and .

We now prove that those statements hold if and only if without the tail root condition for residuum zero tableaux holds. The statements about the slots determine where the gaps are. On the other hand, if the gaps are where they are described in and and hold, then we also get the inequalities between the slots in question. Finally the statements about ${^LT_i}$ and $T^{R*}_i$ follow from the residuum and the places where a gap can be.

\[lem:TailProperty\] Let $L=(T_1, T_2, \dots, T_{\ell(\mu)}, S)$ be a tableau in $B^{\mathfrak{d}}(\mu)$ such that , , , and without the tail root condition for residuum zero tableaux hold. Then if and only if and hold also and the tail root condition for residuum zero tableaux of hold.

Due to what we have seen before about ${^LT_i}$ and $T^{R^*}_i$, this holds once we argue that for residuum 1 tableaux the tail root is smaller than or equal to the fin. The tail root condition for residuum zero tableaux and $S$ odd is equivalent to the second condition of .

Now Theorem \[theo:LRTabs\] follows directly from Lemmas \[lem:TensorEpsilonZero\], \[lem:TypeProperty\] and \[lem:TailProperty\]. We finish this section by proving further properties of orthogonal Littlewood-Richardson tableaux that we will use later on.
\[prop:LRtabs1\] If $T_i$ is of type 2 or 3, the tail root is a slot.

We have seen in the proof of Lemma \[lem:TypeProperty\] that the tail root is strictly smaller than the fin. Since the residuum is exactly $1$, the entry below the tail root, if it exists, is larger than the fin.

\[prop:LRtabs2\] If the fin of a tableau $T_i$ exists, it is even and not larger than the fin of $T_{i+1}$, which then also exists.

The fin of $T_i$ is even for type $1$ or $2$, as $T_i^R$ has no gap and even length. We show for these cases that the fin is smaller than or equal to the fin of $T_{i+1}$. If $T_{i}$ or $T_{i+1}$ is of type $1$, $T_{i+1}^L$ without tail is at least as long as $T_i^R$ by . Therefore, as $a_{i+1}\geq 0$, also $T_{i+1}^R$ is at least as long as $T_i^R$. If both tableaux have residuum $1$, $T_{i+1}^L$ without tail plus $2$ is at least as long as $T_i^R$ by , and $T_{i+1}^R$ is longer than $T_{i+1}^L$ by at least $2$. The claim follows as the fin of $T_i$ is equal to the length of $T_i^R$ and the fin of $T_{i+1}$ is larger than or equal to the length of $T_{i+1}^R$. If $T_i$ is of type $3$, we know by assumption that the fin of $T_i$ is not larger than the fin of $T_{i+1}$. We show for this case that the fin is even. We do so by showing that any possible slot is odd. Let $T_{j}$ be the next tableau of residuum $0$ to the right of $T_i$, if it exists, or $T_{\ell(\mu)}$ otherwise. Tableaux between $T_i$ and $T_j$ are therefore of type $3$ or $2$. Tableaux of type 3 have at least two odd slots, namely the position above the fin and the tail root. Tableaux of type 2 have at least one odd slot, namely the tail root. Other slots need to be at least as large as the fin. Therefore slots between $T_i$ and $T_j$ that are small enough for the fin of one of those tableaux or of $T_i$ to be their gap are also odd.
It remains to show that there is no even slot right of $T_j$ (and in $T_j$ if it is of type 1) that is small enough for any fin of $T_i$ or of a tableau between $T_i$ and $T_j$ to be its gap. If $T_j$ is of type $2$ or $3$, it is directly left of $S$. As $S$ contains no gap by , slots in $S$ are in the bottom line. If $S$ is odd, the slots of $S$ are also odd. If $S$ is even, $T_j$ is of type $2$ due to , and any slot of $S$ is larger than the fin of $T_2$ due to . If $T_j$ is of type $1$, slots of $T_j$ are at least as large as the fin of $T_{j-1}$ due to and because the fin of $T_{j-1}$ is not a gap, as it is of type 2 . Due to and (and because gaps are the fin or in the tail by ), this also holds for slots further to the right.

Alternative Orthogonal Littlewood-Richardson Tableaux {#sec:BijA}
=====================================================

In this section we define an alternative set of Littlewood-Richardson tableaux in terms of skew tableaux. Moreover, we define a bijection (Bijection $A$) between Kwon’s orthogonal Littlewood-Richardson tableaux and our new tableaux. We will use our new set of tableaux in the main bijection (Bijection $B$) to map pairs consisting of a standard Young tableau and a Littlewood-Richardson tableau to a vacillating tableau.

Definition and Examples
-----------------------

\[def:aoLRT\] We define the set of alternative orthogonal Littlewood-Richardson tableaux ${\mathrm{aLR}_\lambda^{\mu}}$ as follows. A tableau $L\in{\mathrm{aLR}_\lambda^{\mu}}$ is a reverse skew semistandard tableau of inner shape $\lambda$ and type $\mu$ (thus the filling consists of $\mu_j$ $j$’s, for all $j$). The outer shape has $2k+1$ possibly empty rows, whose lengths all have the same parity. The following two properties are satisfied.

1. The reading word is a Yamanouchi word. This is satisfied if and only if, for all $i>1$, the $j$th cell from the left labeled $i$ is above the $j$th cell from the left labeled $i-1$.

2.
We go through the reading word of $L$ from right to left. Let $p$ be the current position. We define a sequence $v_p$ of positions of the reading word. The first entry of $v_p$ is $p$. If $m-1$ entries of $v_p$ are defined, let $e$ be entry number $m-1$. We now search for entry number $m$. For that we consider entries whose letter is larger than the letter of $e$ and which are in exactly $m-1$ of the sequences of positions right of $p$ (thus sequences already defined). If this set is nonempty we search for the smallest letter in it and take the rightmost position with this letter as entry $m$. If it is empty, $v_p$ has no more entries. Let $r_p$ be the row $p$ is in. Now we define the value $o_p$ to be the number of entries in $v_p$ with the following properties: the entry is the rightmost occurrence of its letter and, if it is entry number $m$ in $v_p$, all $v_{\tilde{p}}$ with $\tilde{p}\neq p$ in the same row as $p$ have at most $m-1$ entries. We require $r_p\geq 2 |v_p| - o_p$.

\[ex:aoLRT\] Two alternative orthogonal Littlewood-Richardson tableaux.

(Diagrams omitted: two skew tableau drawings; their reading words are listed below.)

We write the reading word as a sequence of entries $l_p$ where $l$ is the letter and $p$ counts the position.
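As an aside, property 1 can be tested mechanically. The following sketch (in Python; the function name is ours, not from the text) checks the prefix formulation of the Yamanouchi condition, which is equivalent to the cell-position criterion above: in every prefix of the reading word, each letter $i-1$ occurs at least as often as $i$.

```python
def is_yamanouchi(word):
    """Check the lattice-word condition: in every prefix of the word,
    each letter i occurs at least as often as i + 1."""
    counts = {}  # letter -> number of occurrences seen so far
    for letter in word:
        counts[letter] = counts.get(letter, 0) + 1
        # a prefix violates the condition if some letter i > 1
        # now outnumbers the letter i - 1
        if letter > 1 and counts[letter] > counts.get(letter - 1, 0):
            return False
    return True
```

For instance, the reading word $(1,1,2,3,2,1)$ of the first tableau in Example \[ex:aoLRT\] passes this check, while $(1,2,2,1)$ fails after its third letter.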
The reading words are $(1_1,1_2,2_3,3_4,2_5,1_6)$ and $(1_1,1_2,1_3,1_4,2_5,1_6,3_7,2_8,2_9,2_{10},3_{11},3_{12})$. Then we have the following $v$’s, where rows are separated by semicolons: $(1_6)$, $(2_5)$, $(3_4)$; $(2_3,3_4)$; $(1_2,2_5,3_4)$, $(1_1,2_3)$ and $(3_{12})$, $(3_{11})$; $(2_{10},3_{12})$, $(2_9,3_{11})$, $(2_8)$, $(3_7)$; $(1_6,2_{10},3_{12})$, $(2_5,3_7)$; $(1_4,2_9,3_{11})$, $(1_3,2_8,3_7)$, $(1_2,2_5)$, $(1_1)$. The entry $1_6$ in the second tableau is in row $5$, which is fine as $3_{12}$ is counted by $o$.

\[prop:2ndPropYamanouchi\] We can obtain the sequences $v_e$ by applying Robinson-Schensted to the reversed reading word of $L$. In particular, $v_e$ can be described as the set of elements that get bumped during the insertion process of $e$.

Therefore, by Theorem \[theo:YamanouchiWordSSYT\], the first property is satisfied if and only if the tableau obtained by Robinson-Schensted from the reversed reading word is of the form described in Theorem \[theo:YamanouchiWordSSYT\]. This is satisfied if and only if every element $j$ is bumped exactly $j$ times. In terms of our $v_e$’s this means that every $j$ is in exactly $j$ of the $v_e$’s. We show inductively that a position gets bumped if and only if it is in the current $v_e$; therefore elements in the $j$th row were in $j$ of the $v_e$’s before. For the base case we consider the first element of a $v_e$. This is always the one we are inserting; thus it ends up in the first row. On the other hand, an element that ends up in the first row does so only during the insertion process of itself, thus when it is the first element of a $v_e$. Now if an element is in $j$ different $v_e$’s, by the induction hypothesis it got bumped $j$ times, thus it is now in row $j$. Now if it is element number $j+1$ in a $v_e$, it is the rightmost one of the smallest letter that is larger than the letter of element number $j$.
As elements of the same value get inserted into a row from left to right in Robinson-Schensted, this is the rightmost element in the reading word. The same observation yields the other direction.

Formulation of Bijection $A$
----------------------------

Bijection $A$ is formulated by Algorithm \[alg:1\]; its inverse is formulated by Algorithm \[alg:u1\]. It maps an orthogonal Littlewood-Richardson tableau of Kwon in ${\mathrm{LR}_\lambda^{\mu}(\mathfrak{d})}$ to an alternative orthogonal Littlewood-Richardson tableau in ${\mathrm{aLR}_\lambda^{\mu}}$.

\[alg:1\] let $\tilde{L}$ be the Young diagram of $S$, reflected on $y=x$; …; $\tilde{L}$

\[alg:u1\] reflect $\tilde{L}$ by $x=y$ and fill each column with $1,2,\dots$ to obtain $S$; let $L$ be $(T_1,T_2,\dots,T_{\ell(\mu)},S)$; …

(Only these fragments of the pseudocode survive here.)

Examples explaining Bijection $A$
---------------------------------

We consider an orthogonal Littlewood-Richardson tableau and apply Algorithm \[alg:1\].

(Diagrams omitted: the input tableaux and the intermediate states of $\tilde{L}$ after each insertion step.)

Doing so we insert first $T_2$ and then $T_1$. When inserting $T_2$, which is of type 2, we add a cell containing $2$ below the cell coming from the fin and use neither *merge* nor *shift* nor *correct parity*. When inserting $T_1$, which is of type 3, we *shift* the pair $2,1$ to the left and put the other $1$ into a row above in *correct parity*.

We consider another orthogonal Littlewood-Richardson tableau and apply Algorithm \[alg:1\] again.
(Diagrams omitted: the input tableaux and the intermediate states of $\tilde{L}$ after each step.)

Doing so we insert first $T_3$, then $T_2$, and finally $T_1$. All three of them are of type 1. When inserting $T_3$ we use only *correct parity* to put the $3$ upwards. When inserting $T_2$ we first *shift* the pair $3,2$ to the left and then put the other $2$ upwards in *correct parity*. When inserting $T_1$ we first *merge* the pair $3,1$ with the $2$ to the left. Then we *shift* the pair $2,1$ to the left, and in the end we put this $1$ upwards in *correct parity*.

The empty cells of our tableaux can be determined by the filling of Kwon’s tableaux. However, this shape is far from determining our tableaux.
As the following tableaux show, it neither determines where to add the filled cells:

(Diagrams omitted: two Kwon tableaux mapping to alternative tableaux whose filled cells are placed differently.)

nor does it determine how to fill those cells:

(Diagrams omitted: two Kwon tableaux mapping to alternative tableaux of the same shape but with different fillings.)

Properties and Proofs for Bijection $A$
---------------------------------------

\[theo:algo1WellDef\] Algorithm \[alg:1\] is well-defined and returns an alternative orthogonal Littlewood-Richardson tableau.

We will prove this theorem by induction. In particular we show that after every iteration $i$ of the outer for-loop the rules for alternative orthogonal Littlewood-Richardson tableaux are satisfied if we subtract $(i-1)$ from every entry. For the base case we get the shape of $S$ reflected, which satisfies our conditions for $\mu=\emptyset$ and $k=0$. The induction step is shown by the following lemmas. We first state some properties that follow from the formulation of the algorithm. We refer to parts or operations in the algorithms by the comments placed next to them.

\[cor:LongerColum\]

1. In Algorithm \[alg:1\] there are two types of rows that get longer during the inner for-loop. One type consists of those rows in which the new $i$’s are inserted and the rows directly above; those get longer by one for each such $i$. The other type consists of the bottommost two rows, which get longer by values of the same parity.

2.
*Correct parity* can be reformulated as follows and still leads to the same result. Go through $\tilde{L}$ from bottom to top. If the current row has a length of a different parity than $S$, put the rightmost $i$ into the next row such that it is the leftmost $i$ in this new row. (Shift other $i$’s one column to the right.)

3. Unfilled positions form a Young diagram of a partition (and not a skew shape).

1. For cells that do not come from the lower tail and the tail root / fin we consider and . Due to those, for each newly inserted empty cell there is an already inserted one coming from $T_{i+1}$. If $r_{T_i}=r_{T_{i+1}}=1$, or $S$ is odd and $r_{T_{\ell(\mu)}}=1$, this holds for the tail root and the non-tail parts except for the fin. Otherwise this holds for the non-tail parts. Adding cells for the lower tail and the tail root / fin, and adding cells with $i$ corresponding to them, extends columns by two. *Merge* or *shift* preserves this until the point where only an $i$ is *shifted*. In this case this is the last movement of this $i$, and it is still in the row below, which gets longer too.

2. Wrong parity is caused by columns getting longer by one. Therefore, if the row above the bottommost row with wrong parity has the right parity, there is also an $i$ (an odd number of $i$’s in fact). Iterating this argument completes the proof.

3. This follows directly from and .

Each step of the outer for-loop is well-defined if it adds a new $T_{i}$ to an $\tilde{L}$ as demanded. There are three steps in which this is not obvious:

- There is always an $i$ in the current row when we *merge*. Columns that get longer by inserted cells of the non-tail parts cannot cause a *merge*-situation, because then also the columns to the left get longer. Besides this, a column can only get longer by inserting an $i$, by *merge* or by *shift*. Only at insert $i$ and *merge* can a formerly inserted $l$ move to a different row. Thus only in those cases can a *merge*-situation arise.
Therefore we can show inductively that there is always an $i$ in such a column.

- There are always $i$’s to find and places to put them in *correct parity*. For rows that are not the two bottommost ones this follows from Corollary \[cor:LongerColum\]: we make rows longer in pairs, and if we make them longer by an odd number, there needs to be enough space to put an element of the upper row into the lower one for parity reasons. (If the latter got longer together with the one above, then this one also got longer; we can iterate this argument.) Now we consider the bottommost rows. We start with the case that $S$ is even and consider a newly inserted $T_i$. Suppose that the bottommost row is of odd length and contains no $i$. This means that an empty cell has been added but no $i$ has been put below it. As the non-tail parts of our $T_i$ have even length, $T_i$ needs to be of type $2$ or $3$. If $T_{i+1}$ has residuum $0$ (thus $T_i$ is of type 2), the bottommost row so far had even length and was at least as long as $T_i^R$ by . Therefore the fin of $T_i$ is placed into the second row from the bottom, with an $i$ below. This $i$ is put into the bottommost row, which is a contradiction. At this point there is no other $j$ left of or below it. If $T_{i+1}$ has residuum $1$, it is also of type $2$ or $3$. In both cases the bottommost rows (those which consist of oddly many empty cells, thus those above the fin of $T_{i+1}$) each contain one $j$, increasingly from bottom to top. Now if we insert the new fin, it is inserted into one of these spots, as it is even and smaller than or equal to the fin of $T_i$. A sequence of *merge* puts the $i$ into the bottommost row and the $i+1$ into the row above, as in each step a $j$ is put into the row of a $j-1$. Now we consider the case that $S$ is odd. We have to show that there are $i$’s in a row if this row is of even length. Even columns without $i$ cannot come from type 2 or 3 tableaux, thus we have to consider type 1.
If this is the first tableau from the right, thus directly left of $S$, the additional condition prevents this. Otherwise, we know that the tail root is smaller than or equal to the tail root to the right. Now we can argue exactly as above, but with the tail root instead of the fin.

- The while-loops stop. The first while-loop stops after finitely many steps because *merge* always works, as we have seen in the previous point, and moves $i$’s to the left; as there are finitely many columns, this has to stop at some point. The second while-loop stops, as after some steps everything is *shifted* to the left. The third while-loop stops, as after some steps all rows have the same parity, because the parity of $S$ equals the parity of the number of elements in $\tilde{L}$. This holds as the number of elements in $S$ has the same parity as $S$, and when inserting $T_i$ we insert $2b_i-a_i+\mu_{T_i}$ empty cells and $\mu_{T_i}$ $i$’s, which gives an even number of cells added to $\tilde{L}$.

After each insertion of a $T_i$ we obtain a reverse skew semistandard tableau.

Each column is sorted such that fillings decrease and empty cells are on top, as the columns get sorted after any operation that might change this. Each row is sorted the same way, as due to Corollary \[cor:LongerColum\] the empty cells form a Young diagram, thus lie to the left in each row, and unsorted filled cells get eliminated by *merge*. Therefore it suffices to show that there is at most one $j$ in each column. As we insert at most one $i$ into each column, there is at most one $i$ in each column at the beginning of an iteration. The same holds by induction for $j$ with $j>i$. We show that this also holds at the end of this iteration. We note that the only situations where an $l$ or an $i$ moves to another column are *merge*, *shift* and *correct parity*.
First we consider *merge*: there is a $j$ left of an $l$ with $j<l$ exactly if below the $l$ a new $i$ was inserted, or if there was such a situation in the column to the right too but not in the column of $j$, or if there was a *shift* of $l$ and $i$. Therefore there cannot be an $i$ in the column to the left, because otherwise entries in this column would have moved down by one too and $j<l$ would have caused a disorder before. Thus after *merge* the number of $i$’s in each column is still at most one. It remains to show that there cannot be another $l$ in the new column of $l$. For this we consider the position directly to the right of this other $l$: it needs to be smaller, as the rows above are sorted, and it needs to be larger, as columns are sorted. This is a contradiction. After *merge* the current row is sorted. Therefore *shifting* cells in the same row to the left does not increase the number of $j$’s to two for any $j$ and any column. Finally we consider *correct parity*. Suppose an $i$ is put into a column where there already is an $i$. This would mean that there is not enough space for this $i$ to be put into this row if the other $i$ were not there. However, we make rows longer in pairs, and if we make them longer by an odd number, there needs to be enough space to put an element of the upper row into the lower one for parity reasons.

After each insertion of a $T_i$ the first property of alternative orthogonal Littlewood-Richardson tableaux holds.

Due to , each element $i$ is inserted below and left of the $(i+1)$ directly to the right in the tails. We show that the operations in the outer for-loop do not change that. If $(i+1)$ is still in the column where it was inserted or, due to *correct parity*, to the right, $i$ gets inserted left of $(i+1)$. In this case neither *merge* nor *shift* can change this. Now we consider the case that $(i+1)$ has changed column in *merge* or *shift*.
If this happened and $i$ is inserted to the right of it, we show that there needs to be an $l$ above $i$, so that *merge* also takes place for $i$. (If there is such an $l$, there was a position on top of the upper *merged* or *shifted* position that now ends up to the right.) Where the empty cell belonging to $i$ gets inserted, there needs to be an $l$, or no cell and an empty cell to the left, for the empty cells to form a tableau. No cell is not possible, as $(i+1)$ was inserted into the same column (or to the right) and changed column in *merge* or *shift*, and there is an empty cell to the left. A sequence of *merge* and *shift* will be followed by a sequence of *merge* by the same argument. (We always put two elements leftwards, and the upper one will be the next candidate for *merge*, as it is smaller than the element it is *shifted* to, because it was in the same column on top of it.) What is left to consider is *correct parity*. In this case the row of $i$ has odd (respectively even, depending on the parity of $S$) length and $i$ is the rightmost position in it. Thus $i$ is in an odd (respectively even) column. If $(i+1)$ is in a column to the right of $i$, $i$ still ends up in the same column or to the left. If $i$ is in the same column as $(i+1)$, we show that *correct parity* cannot happen. The column where $(i+1)$ is now got longer when it was inserted (or *merged*/*shifted* to). As it is still there and it is an odd (respectively even) column, the column to the left also got longer such that they had the same length. Thus it too contains an $(i+1)$, but no $i$, which is a contradiction.

$l$ also changes column. If $(l-1)$ is in the same column, it was next to $j$ before this column got longer through $i$. This means by induction $l-1\leq j$, but since $j<l$ we obtain $l-1=j$. Thus the $(l-1)$ in the original column of $l$ is not its according $(l-1)$. (The $(l-1)$ in the column $l$ is moved to is its according $(l-1)$.)
After each insertion of a $T_i$ also the second property of alternative orthogonal Littlewood-Richardson tableaux holds.

We start this proof by reformulating the second property: instead of putting elements into a $v_e$ we can also mark them, using the same rules. We remember how often an element was marked and record which element was marked at which position while considering $e$. Now we observe the following:

- It only matters how often an element is marked; it is not important by which elements it was marked.

- Whenever we consider an $i$ and mark elements, if we mark an element for the $(j+1)$-th time, we mark another, smaller element for the $j$-th time. Thus the number of elements marked $j$ times never decreases.

Now we prove the statement. First we have a closer look at what happens locally when an $i$ is inserted (or *merge* happens) in one column but not in its neighbor. To do so we first consider a column together with its left neighbor. We examine four elements in a pattern as below, with $j_1<j_2$, $j_3<j_4$ and $j_2\geq j_3$. *Merge* or insert $i$ happens in the right column; thus this column is pushed down by one. If $j_1\geq j_3$ and $j_2\geq j_4$ nothing changes, while if $j_1< j_3$ and $j_2\geq j_4$, or if $j_2<j_4$, we *merge*.
$$\begin{matrix} & j_4 \\ j_2 & j_3 \\ j_1 & \end{matrix}$$

is changed into:

if $j_1\geq j_3$ and $j_2\geq j_4$: $\begin{matrix} j_2 & j_4 \\ j_1 & j_3 \end{matrix}$;

if $j_1< j_3$ and $j_2\geq j_4$: $\begin{matrix} j_2 & j_4 \\ j_3 & \\ j_1 & \end{matrix}$;

if $j_2<j_4$: $\begin{matrix} j_4 & j_3 \\ j_2 & \\ j_1 & \end{matrix}$.

Now we determine which elements get marked if no other elements interfere:

- $j_1\geq j_3$ and $j_2\geq j_4$: $\{j_4\}$, $\{j_3,j_4\}$, $\{j_2\}$, $\{j_1,j_2\}$. After the insertion process this changes to $\{j_4\}$, $\{j_2\}$, $\{j_3,j_4\}$, $\{j_1,j_2\}$, which only changes the order.

- $j_1< j_3$ and $j_2\geq j_4$: $\{j_4\}$, $\{j_3,j_4\}$, $\{j_2\}$, $\{j_1,j_3, j_4\}$. After the insertion process this changes to $\{j_4\}$, $\{j_2\}$, $\{j_3,j_4\}$, $\{j_1,j_3, j_4\}$, which also only changes the order.

- For $j_2<j_4$ we distinguish the cases $j_1<j_3$ and $j_1\geq j_3$: $\{j_4\}$, $\{j_3,j_4\}$, $\{j_2\}$, $\{j_1,j_3, j_4\}$ or $\{j_1,j_2, j_4\}$. After the insertion process this changes to $\{j_3\}$, $\{j_4\}$, $\{j_2,j_4\}$, $\{j_1,j_3, j_4\}$ or $\{j_1,j_2, j_4\}$, which is more than just a change of order but does not change what is marked afterwards. $j_3$ and $j_2$ swap their number of marked elements, which is allowed as $j_2$, whose number increases, is one row below where $j_3$ was. (The same row would have been sufficient.)
Now we consider a column together with its right neighbor. Again we examine four elements in a pattern as below, with $j_1<j_2$, $j_3<j_4$, $j_1\geq j_3$ and $j_2\geq j_4$ (and therefore $j_2>j_3$). *Merge* or insert $i$ happens in the right column; thus this column is pushed down by one. Everything ends up sorted, so no *merge* happens:

$$\begin{matrix} j_2 & j_4 \\ j_1 & j_3 \end{matrix} \quad\text{is changed into}\quad \begin{matrix} & j_4 \\ j_2 & j_3 \\ j_1 & \end{matrix}.$$

Now we consider the marked elements in the insertion process: $\{j_4\}$, $\{j_2\}$, $\{j_3,j_4\}$, $\{j_1,j_2\}$, which changes to $\{j_4\}$, $\{j_3,j_4\}$, $\{j_2\}$, $\{j_1,j_2\}$, which is again only a change of the order.

As a second step we show that there are no relevant changes in the columns to the left and to the right. Once we have shown this, we can conclude that the $j>i$ still satisfy the third property, if we ignore elements counted by $o$. To see this we can argue that anything even further to the left of a column that got changed is larger; thus it marks even larger elements. In the case that it marks elements that would have been marked by $j_1$ to $j_4$ due to their change of rows, they simply swap which elements they mark. As they used to be in the same row, the third property still holds. Everything further to the right of a column that got changed is smaller and takes smaller elements; still, the same kind of swapping could occur.

In the third step we have a closer look at what happens to $i$. If an $i$ is inserted and not changed by *correct parity*, it is two rows below where the elements one row above it are. It can mark at most one element more, which still satisfies the third property.
*Correct parity* puts $i$ one row up. If there are other $i$’s in this row we can argue as above. Otherwise we see that it can mark at most one element more than the elements one row above. Let’s call the rightmost one of them $j$. Suppose this $j$ has only one row for every rightmost element in $v_j$ and our $i$ is not the rightmost $i$. (If our $i$ is the rightmost one, it can take only rightmost elements and may take more elements, as these count towards $o$. If $j$ has more rightmost elements, $i$ can likewise take more elements counted by $o$. This is because the element left of $i$ can take at most as many elements as $j$ but is now in the row of $i$; thus the “only” condition holds.) For $j$ to be taken into $v_i$, elements that are smaller or equal but further to the right have to be taken by other $i$’s. Let’s call the leftmost such element $m$. Below $m$ there is no row without a number, because if there were one, an element of $v_j$ would be in the same row as one of $v_m$, which cannot satisfy the “only” condition. Thus when we insert another $i$ below $m$ that is not moved in *correct parity* above or besides $m$, either this $i$ or another $i$ left of it moves a position of $v_j$. Thus $v_i$ has fewer elements too and satisfies the third condition. Finally we consider elements that are counted by an $o$. Normally they behave just like every other element. When they are put one row down, it can happen that they count one time less as an $o$-element, which is fine, as they also went down one row (and with them those which mark them). This happens if only this column gains length and not the one directly to the left.

$\tilde{L}$ has at most $2(k-i+1)+1$ rows; if $S$ is odd, this number is attained.

Each column grows by adding empty cells and cells labeled $i$. *Merge* and *shift* only lead to growth of columns untouched so far. There are at most two numbers with the same value in $T_i$, thus at most two new empty cells in each column.
We show that if there are two, no $i$ is inserted into the same column. Recall that $i$’s are inserted below lower tail elements and the tail root or the fin, depending on the parity. If a tableau has residuum zero, there cannot be an element that is both in the tail and in the right column. If a tableau has residuum 1, such a tail element could only be the tail root, which produces no $i$ either. The number of the fin cannot be in both columns, as the left column without the tail is shorter by at least two. Therefore no row can grow by more than two. It remains to show the second claim. Thus we consider odd $S$ and $T_i$ that do not contain two $1$’s. Tableaux with residuum one contain a $1$ in each column, as neither the position above the fin (which exists) nor the tail root nor any position above one of them is a gap. Thus we only consider tableaux of type $1$. If only the left column contains a $1$, so the right one is empty, we add an $i$ and an empty cell in column 1. Tableaux with no $1$ in the left column consist only of the tail and a right column, if the tableau to the right had a left column larger than its tail (due to ). All those tail elements get inserted together with an $i$. Now the tail root needs to be smaller than or equal to all other tail roots by . Residuum $1$ tableaux produce two $1$’s, two $2$’s, and so on up to two such tail roots. Type $1$ tableaux right of residuum 1 tableaux (respectively $S$) also do so due to (respectively ). Thus the first tableau of type $1$, $T_j$, whose left column consists only of the tail inserts a $j$ into row $2j+1$. Now as the second property of alternative tableaux holds, $i$’s that come afterwards will end up below, and their column will grow by two. (It is not possible that they end up there by a *shift* where only one element is *shifted*, as this would need two $(i+1)$’s belonging to one $i$, which is a contradiction.) We have now proven every lemma needed to prove Theorem \[theo:algo1WellDef\].
Next we show the same for the reversed algorithm. \[theo:ualgo2WellDef\] Algorithm \[alg:u1\] is well-defined. More precisely, after each iteration $i$ of the outer for-loop we obtain an alternative orthogonal Littlewood-Richardson tableau $\tilde{L}$ if we decrease all numbers by $i-1$. By construction the algorithm is well-defined once we ensure that there are always enough cells to mark. We will see this during the proof. First we show that $\tilde{L}$ is a reverse skew semistandard tableau and that the two properties of an alternative orthogonal Littlewood-Richardson tableau hold if they held before. - Again, to show that $\tilde{L}$ is a reverse skew semistandard tableau, it suffices to show that everything is sorted and that there is at most one $j$ in each column. Again no operation puts more than one $i$ into the same column. Columns get sorted after deleting something; a violation of the row order is prevented by *merge*, because a violation occurs exactly when the condition of *merge* arises. If an empty cell is erased left of another empty cell and thus shifts a cell labeled $j_2$ to an empty cell, or deleting shifts a smaller entry $j_2$ next to a bigger one $j_3$, this column is at least three cells larger, as otherwise there would have been a *shift*. In this case we obtain a *merge* and define $j_1$ to be the largest entry between $j_2$ and $i$. - The only operation which can destroy the first property (that there is always a $j$ below a $(j+1)$) is *merge* at $j_2$. This is a problem if there is a $(j_2+1)$ in the same column and this $j_2$ is the one belonging to it. For $(j_2+1)$ not to be taken instead of $j_2$, one of the following conditions must be met: either $(j_2+1)$ is in the column to the right or $j_2<j_4$ where $j_4$ is the position right of $(j_2+1)$. In the former case $j_2$ belongs to that one, which is a contradiction. In the latter case we obtain $(j_2+1)>j_4>j_2$, which is also a contradiction as all three numbers are natural numbers.
- For the second property we can do a similar case study of local changes as we did before for Algorithm \[alg:1\]. However there are some steps we have to consider more precisely. When an $i$ gets extracted, elements move one row up. This row, however, was not necessary then, because the $i$ causing this move needed two more rows for its formula, so the moved elements needed at least two fewer. For $o$ we also argue similarly as for Algorithm \[alg:1\]. If those which count for $o$ are put upwards, but not the ones to the left, they are counted as $o$ once more than before. This needs to be the case as we might not have this “not necessary” row in the $o$ case. Now we show that the row parity is constant. We shift one $i$ for $i$’s that are in odd sequences to the left. Thus the shifted $i$’s shorten the row where they were by two (this $i$ and its empty cell), while the other $i$’s (an even number) shorten this row and the one above by an even number each. The tableau has at most $2i+1$ rows after an iteration as lower ones are taken as left and right column of the new tableau. To see that there are enough cells to mark we consider the second condition of the alternative orthogonal Littlewood-Richardson tableau. This ensures one position to mark for each position, except in some $o$ cases. In these cases *correct parity* puts it one row above. Each iteration of the outer for-loop produces a tableau of one of the three types. The shape follows from well-definedness and the last if-query once we can show that there are never one even and one odd row to be taken for the left and the right column of $T_i$. Thus we consider rows $2i+1$ and $2i$ and distinguish two cases. In the first case no $i$ is put to row $2i+1$ in *correct parity*. Therefore there is an even number of $i$’s in row $2i$. Thus the parity of row $2i$ after extracting is the same as the parity of row $2i+1$, as $i$’s in row $2i+1$ shorten both row $2i$ and row $2i+1$. *Shift* and *merge* do not change this.
In the second case an $i$ is put from row $2i$ to row $2i+1$ in *correct parity*. In the end this shortens row $2i$ by two, so parity is still preserved, by the same argument as in the first case. Because elements that were put originally to the tail are larger than other elements, the residuum of $T_i$ is $0$ before the last if-query. This also shows that a gap in the right column is the fin. If there is one, the residuum is 1. Moreover the tail root is not a gap for residuum one tableaux, as it comes from the left column without tail. $T_i$ is semistandard as row $2i+1$ cannot be longer than row $2i$. The fin of such a tableau is even. The fin is even if it is no gap. If it is a gap, rows $2i$ and $2i+1$ are of odd length before extracting them. The leftmost $i$ after *correct parity* is in an even column. (If $S$ were odd and this $i$ were in an odd column, it would change in *correct parity*. If $S$ is even, it needs to be in row $2i+1$ and was there before.) Neither *shift* nor *merge* can change this, as $i$’s that are in the same row can be *shifted* to the right together, and if a *merge* occurs, there can either be another *merge* for the $i$ to the left or they stay in the row where they are. As the parity of row lengths is constant and there is an even number of $i$’s in other rows, this is sufficient. Let $T_i$ be of type 3. If $i<\ell(\mu)$, $T_{i+1}$ cannot be of type 1. If $i=\ell(\mu)$, $S$ is odd. A tableau $T_i$ of type 3 is formed in the last if-query where the tail root becomes the fin. The fin has to come from a row strictly above row $2i$ as it is a gap. We now show that the bottommost row after extracting $T_i$, that is row $2i-1$, consists of an odd number of empty cells. As this will get into the left column of $T_{i+1}$, we ensure residuum $1$ or an odd $S$ by what we have seen before. As the fin of $T_i$ is even, if something is extracted from row $2i-1$ during extracting $T_i$, this row has an odd number of empty cells afterwards.
Suppose that row $2i-1$ has an even number of empty cells and nothing is extracted from it. Row $2i+1$ has as many $i$’s as there are $(i+1)$’s in row $2i$ and $(i+2)$’s in row $2i-1$. Otherwise $i$ would not be *shifted* and there are too few $j$’s involved for a *merge*. (If an $i$ did not end up below an $(i+1)$ that is below an $(i+2)$, something would have been extracted from row $2i-1$.) Moreover we can conclude that no $i$ was put to row $2i+1$ in *correct parity*. By the same argument $i$’s that are in row $2i$ have exactly as many $(i+1)$’s in row $2i-1$. This number is even because nothing is changed in *correct parity*. We can conclude that row $2i$ contains an odd number of empty cells and several $i$’s, while row $2i-1$ contains an even number of empty cells, the same number of $(i+1)$’s and an even number of $i$’s. Thus the parity of the rows is different. This is a contradiction. If $T_i$ is of type 1 and the tail root is a gap, then either $i\neq \ell(\mu)$ or the tail root is odd. Once we have shown , and , it follows that the tail root needs an even slot to the right if it is a gap. In the case that $S$ is odd, this would be the fin. Due to , this is smaller by at least one than $S^L(1)$, which makes the tail root smaller than or equal to it. We consider a tableau $T_i$ of type 1 such that the tail root is a gap and even and $i=\ell(\mu)$. Thus $i$’s are the last numbers in $\tilde{L}$. Therefore, at least one row ends with an odd position once $T_i$ is extracted. Thus $\tilde{L}$ had rows with odd length. Now we consider rows $2i+1$ and $2i$ before extracting $T_i$. Those get $T^L$ without tail and $T^R$. Thus they are even. The leftmost $i$ is in row $2i$ or above in order to take something from a different row, which is necessary for the gap. Thus row $2i+1$ has even length, which is a contradiction. holds between two consecutive tableaux. All $i$’s have corresponding $(i+1)$’s to the right. Thus, if this is not changed, they take smaller or equal numbers.
We show that if a *merge* occurs and $i$ ends up in a column to the right, then either another *merge* occurs for $(i+1)$, or a *shift*, or this was not the corresponding $(i+1)$. Note that *correct parity* puts $i$’s left, so we do not need to consider it. Moreover $i$’s below $(i+1)$’s are *shifted* together. A *merge* situation in question occurs if $(i+1)$ is in the same column as $i$. It makes the column of $(i+1)$ shorter by two. $(i+1)$ could not be put one row above by *correct parity* as the row above has the same length. *Merge* puts $(i+1)$ one position upwards and puts $i$ together with another $j$ into the next column. If those columns had the same length after extracting $i$, there would be another $(i+1)$ belonging to another $i$ due to the requirements on the positions right of $j$. Inductively this gives a contradiction. As those columns have different lengths, all empty cells in the column of $(i+1)$ have right neighbors (because they had them before merging with $i$). $(i+1)$ either changes column by a *shift* or by another *merge* when $T_{i+1}$ is extracted. Moreover we need to consider further applications of *merge* or *shift*. If $i$ is *shifted* after a *merge*, $(i+1)$ can follow this path. When it comes to another *merge* situation, the length of this column is not changed; thus $(i+1)$ can also introduce a *merge* situation if no $(i+1)$ is in this column (compare with above). and hold between two consecutive tableaux. This follows as empty cells form a Young diagram of a partition. If row $2i+2$ was taken for $T^L_i$, row $2i+1$ is taken for $T^R_{i+1}$. The tail cannot consist of more than one position from this row, because to take something from this row an $i$ must change row in *correct parity* or change column in the last if-query. The former cannot happen. The latter can happen only once.
The only situation where $T_i^R$ is shorter than $T_{i+1}^L$ without tail is if in both tableaux something was taken from their left column and put into the right column in the last if-query. In this case both have residuum $1$ and the column is longer by at most two, which is allowed. By construction also the following holds: for each gap there is a slot to the right. Thus holds. Moreover there are no gaps in $S$. Thus holds. Therefore it follows that: \[theo:ualgo1LRtabs\] Algorithm \[alg:u1\] returns an orthogonal Littlewood-Richardson tableau of shape $\mu$. \[theo:algo1ualgo1Inverse\] Algorithm \[alg:1\] and Algorithm \[alg:u1\] are inverse. It suffices to show that one iteration of the outer for-loop (one insertion/extraction of a $T_i$) is inverse. We insert an empty cell and one filled with $i$ below, and extract the same. Therefore what we have to show is that *merge*, *shift*, *correct parity* and dealing with the non-tail parts are inverse. We can consider those separately as they do not interfere. It is clear that the *shift* procedures are inverse. Now we consider the *merge* procedures. It is clear that they act inversely and that after a *merge* in one algorithm we also *merge* in the other one. It remains to show that we do not *merge* in any other situation. *Merge* in Algorithm \[alg:u1\] deals with an $i$. If it was not *merged* to get there, it was inserted there, or there was a *shift*. The former is not possible, as this would mean that rows were not sorted before or that the empty cells did not form a tableau. The latter is prevented by $j_2>j_1>j_3$. *Merge* in Algorithm \[alg:1\] happens if a row is not sorted. The only procedure that leaves a row unsorted in Algorithm \[alg:u1\] is *merge*. To see that *correct parity* is inverse we have to show that there cannot be an odd number of $i$’s when there was no *correct parity* during inserting. (When we do *correct parity* in Algorithm \[alg:u1\], this changes the parity.)
In Algorithm \[alg:1\] rows $2i-1$ and above get longer if and only if they contain an $i$ or the row below contains an $i$. Therefore only an odd number of $i$’s produces a different parity in rows $1,\dots,2i-1$. Row $2i$ without $i$’s has the same parity as row $2i+1$. If that is the wrong one, an $i$ changes column. If row $2i$ now has an even number of $i$’s in it, this is the wrong parity and another $i$ changes column. Therefore only an odd number of $i$’s produces a different parity and changes place in *correct parity*. For type 2 and type 3 tableaux we argue that inserting fin and lower tail is inverse to shifting the fin to its place in the last if-query. Columns (non-tail parts) are placed to rows $2i$ and $2i+1$ due to and .

Results
-------

With Theorems \[theo:algo1WellDef\], \[theo:ualgo2WellDef\], \[theo:ualgo1LRtabs\] and \[theo:algo1ualgo1Inverse\] we have proven the following theorem; this is one of our main results. Our alternative orthogonal Littlewood-Richardson tableaux ${\mathrm{aLR}_\lambda^{\mu}}$ are in bijection with Kwon’s orthogonal Littlewood-Richardson tableaux ${\mathrm{LR}_\lambda^{\mu}(\mathfrak{d})}$. Therefore they also count the multiplicities $c_{\lambda}^{\mu}$ in .

The Bijection for ${\mathrm{SO}(2k+1)}$
=======================================

Formulation of the Bijection
----------------------------

We start with a pair $(Q,L)$ consisting of a standard Young tableau $Q$ in ${\mathrm{SYT}}(\lambda)$ and an orthogonal Littlewood-Richardson tableau $L$ in ${\mathrm{LR}_\lambda^{\mu}(\mathfrak{d})}$. First we use Bijection $A$ (see Section \[sec:BijA\]) to change $L$ into an alternative orthogonal Littlewood-Richardson tableau $\tilde{L}$. Now we use $\tilde{L}$ to obtain a larger standard Young tableau $\tilde{Q}$ with row lengths of the same parity as follows.
If $e$ is the largest entry in $Q$, we add cells labeled $e+(\mu_{j+1}+\dots+\mu_{\ell(\mu)})+1$, $e+(\mu_{j+1}+\dots+\mu_{\ell(\mu)})+2$, $\dots$, $e+(\mu_{j+1}+\dots+\mu_{\ell(\mu)})+\mu_j$ to the spots where cells labeled $j$ are in $\tilde{L}$, such that the numbers in the horizontal strip belonging to $j$ increase from left to right. We obtain a new standard Young tableau $\tilde{Q}$ with the same shape as $L$. Moreover the $|\mu|$ largest entries form a $\mu$-horizontal strip (see Section \[sec:BijB\]). Now we distinguish two cases: if our resulting tableau $\tilde{Q}$ consists of rows of even length, this is the tableau we will use in Bijection $B$ (see Section \[sec:BijB\]). Otherwise, that is when $\tilde{Q}$ consists of $n$ rows of odd length, we concatenate the one-column tableau filled with $1,2,\dots,n$ from the left to $\tilde{Q}$, increasing every entry of $\tilde{Q}$ by $n$. We obtain a standard Young tableau with all rows of even length, which we will use in Bijection $B$. We continue by applying Bijection $B$ to $\tilde{Q}$ and obtain a vacillating tableau $\tilde{V}$ with shape $\emptyset$ and cut-away-shape $\mu$ (shape $\emptyset$ ending with $\mu_{\ell(\mu)}$ $(-\ell(\mu))$’s, $\dots$, $\mu_2$ $(-2)$’s and $\mu_1$ $(-1)$’s, see Section \[sec:BijB\]). Once again we distinguish the two cases from before. If we did not concatenate with a column, we do not change $\tilde{V}$. If we concatenated a column to $\tilde{Q}$, we delete the first $n$ entries of $\tilde{V}$. In this case those are always $1,2,\dots,k,0,-k,\dots,-2,-1$. Therefore we obtain once again a vacillating tableau $\tilde{V}$ with shape $\emptyset$ and cut-away-shape $\mu$. We finish our algorithm by deleting the last $|\mu|=\mu_1+\mu_2+\dots+\mu_k$ entries to obtain a vacillating tableau $V$ of shape $\mu$ and length $r=|\lambda|$. In Figures \[fig:StrategyEven\] and \[fig:Strategyodd\] we illustrate this using an even example for $r=15$, $k=2$, $n=2k+1=5$ and an odd example for $r=17$, $k=3$, $n=2k+1=7$.
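The enlargement step above (placing the $|\mu|$ new entries on the cells of $\tilde{L}$) can be sketched in code. This is a minimal illustration under our own conventions, not the paper's implementation: a tableau is a list of rows, `strip[j]` is the list of (row, column) cells of $\tilde{L}$ labeled $j$, and the name `enlarge_tableau` is hypothetical.

```python
# Sketch of building the enlarged tableau Q~ from Q and the strip cells of L~.
# Conventions (ours): a tableau is a list of rows of ints; strip[j] lists the
# (row, col) cells of L~ labeled j; mu is a tuple (mu_1, ..., mu_l).

def enlarge_tableau(Q, strip, mu):
    Qt = [row[:] for row in Q]
    e = max((x for row in Q for x in row), default=0)  # largest entry of Q
    for j in range(1, len(mu) + 1):
        # part j receives the block starting at e + mu_{j+1} + ... + mu_l
        offset = e + sum(mu[j:])
        cells = sorted(strip[j], key=lambda rc: rc[1])  # left to right
        assert len(cells) == mu[j - 1]
        for t, (r, c) in enumerate(cells, start=1):
            while len(Qt) <= r:            # grow rows as needed
                Qt.append([])
            while len(Qt[r]) <= c:         # grow cells as needed
                Qt[r].append(None)
            Qt[r][c] = offset + t          # entries increase left to right
    return Qt
```

For instance, for $Q=[[1,2],[3]]$, $\mu=(2,1)$ and strip cells $\{1:[(0,2),(1,1)],\ 2:[(2,0)]\}$ this returns $[[1,2,6],[3,5],[4]]$.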
In Table \[tab:ListOfExamples\] in the appendix we provide a list of all tableaux with $r=3$ and $n=5$.

[Figure \[fig:StrategyEven\]: the even example ($r=15$, $k=2$, $n=5$). A pair (SYT, oLRT) is mapped by Bijection $A$ to a pair (SYT, aoLRT), then rewritten as a pair (SYT with even rows, partition $\mu=(2,1)$), which Bijection $B$ turns into a vacillating tableau of shape $\emptyset$ together with the partition $\mu=(2,1)$, and finally, after deleting the last $|\mu|$ entries, into the vacillating tableau itself.]

[Figure \[fig:Strategyodd\]: the odd example ($r=17$, $k=3$, $n=7$). A pair (SYT, oLRT) is mapped by Bijection $A$ to a pair (SYT, aoLRT), then rewritten as a pair (SYT with odd rows, partition $\mu=(3,2,1)$); after concatenating the column $1,\dots,n$ we obtain a pair (SYT with even rows, partition). Bijection $B$ turns this into a vacillating tableau of even length and shape $\emptyset$; deleting the first $n$ entries gives one of odd length and shape $\emptyset$, and deleting the last $|\mu|$ entries gives the vacillating tableau itself.]

As we know the inverses of Bijection $A$ and Bijection $B$, the inverse bijection is easily defined: We start with a vacillating tableau $V$ of shape $\mu$ and length $r$, and add $\mu_k$ $(-k)$’s, $\mu_{k-1}$ $(-k+1)$’s, $\dots$ and $\mu_1$ $(-1)$’s to obtain a vacillating tableau $\tilde{V}$ of shape $\emptyset$ and cut-away-shape $\mu$. If this has odd length we furthermore add $1,2,\dots,k,0,-k,\dots,-2,-1$ in the front. Next we apply the inverse of Bijection $B$ to obtain a standard Young tableau $\tilde{Q}$. If we added $1,2,\dots,k,0,-k,\dots,-2,-1$ to $\tilde{V}$, we now cancel the smallest $n$ entries of $\tilde{Q}$. Those are in the first column in increasing order. If we did so, we furthermore reduce each entry of $\tilde{Q}$ by $n$ afterwards, to obtain a standard Young tableau again. We obtain $Q$ by deleting the $|\mu|$ largest entries in $\tilde{Q}$. $Q$ is a standard Young tableau of shape $\lambda$. Moreover we define $\tilde{L}$ to be the reverse skew semistandard tableau of the same outer shape as $\tilde{Q}$ and inner shape $\lambda$.
We fill the cells where entries belonging to $\mu_j$ are in $\tilde{Q}$ with $j$. Due to the properties of $\mu$-horizontal strips, $\tilde{L}$ is an alternative orthogonal Littlewood-Richardson tableau. Finally we apply the inverse of Bijection $A$ to obtain $L$, an orthogonal Littlewood-Richardson tableau in ${\mathrm{LR}_\lambda^{\mu}(\mathfrak{d})}$, where $\lambda$ is a partition such that $\lambda\vdash r$ and $\mu\leq \lambda$. The strategy we use is similar to the one for the case $n=3$ in [@3erAlgo]. There are two main differences. The first is that in [@3erAlgo] we do not calculate the alternative Littlewood-Richardson tableau but go directly to the $\mu$-horizontal strip. The second is that we attach numbers for odd tableaux to the left of $Q$ in order to obtain a tableau with all rows of even length. However, in the case $n=3$ we know that concatenation of standard Young tableaux in which all row lengths have the same parity corresponds to concatenation of vacillating tableaux of shape $\emptyset$. Therefore, for $n=3$ both strategies are the same. Let $\lambda\vdash r$, $\ell(\lambda)\leq n(=2k+1)$, $\ell(\mu)\leq k$. The map defined in this section maps a pair $(Q,L)$ consisting of a standard Young tableau $Q$ in ${\mathrm{SYT}}(\lambda)$ and an orthogonal Littlewood-Richardson tableau $L$ in ${\mathrm{LR}_\lambda^{\mu}(\mathfrak{d})}$ to a vacillating tableau of length $r$ and shape $\mu$. Moreover it is well-defined, bijective and descent-preserving. We prove that every algorithm we use defines a well-defined mapping in Theorems \[theo:algo1WellDef\], \[theo:ualgo2WellDef\], \[theo:algo2welldefvactab\] and \[theo:ualgo2welldefSYT\]. Those also show, together with Theorems \[theo:muhorizontal\] and \[theo:ualgo1LRtabs\], that they produce the desired objects. To see that it is bijective, we argue that the algorithms we use are inverse in Theorems \[theo:algo1ualgo1Inverse\] and \[theo:algo2ualgo2Inverse\].
Moreover the procedure we describe between alternative orthogonal Littlewood-Richardson tableaux and $\mu$-horizontal strips is inverse by definition. The procedure we describe by adding and deleting the first positions is inverse by Theorem \[theo:10-1\]. It is descent preserving as Bijection $B$ is descent-preserving (see Theorem \[theo:algo2DescPres\]). Bijection $B$ {#sec:BijB} ============= Formulation of Bijection $B$ ---------------------------- Bijection $B$ is formulated by Algorithms \[alg:2\] and \[alg:u2\] which are inverse. It maps a standard Young tableau with $n=2k+1$ possibly empty rows, whose lengths are even, containing a $\mu$-horizontal strip, to a vacillating tableau of dimension $k$, shape $\emptyset$ and cut-away-shape $\mu$. To formulate those algorithms we introduce some notation in Table \[tab:Notation\]. Note that at some points we have left to right opposites for those algorithms. When looking at weight $\emptyset$ words, which will not always be the case while executing Algorithms \[alg:2\] and \[alg:u2\], the definitions are the same. We refer to parts or operations in the algorithms by the comments placed next to them. [|p[3.3cm]{}|p[10.7cm]{}|]{} A *labeled word* $w$ with letters in $\{\pm 1, \dots, \pm k,0\}$. & A word, where each letter is labeled by an integer $1 \leq i \leq r$ strictly increasing from left to right. Each position consists of a label and an entry. We denote by $w(p)$ the entry of $w$ labeled with $p$.\ A position $q$ is on $l$-level $m$ in Algorithm \[alg:2\] (respectively Algorithm \[alg:u2\]). & The minimum of the following to sums over entries with absolute value $l$ is $-l\cdot m$ (respectively $+l\cdot m$). For the first sum we consider entries strictly to the *right* (respectively *left*) of $q$. For the second one we consider entries to the *right* (respectively *left*) including $q$. 
Illustration of positions on level $m$ (figure omitted).\ A position $q$ is a height violation in $l$. & The $l$-level of $q$ is smaller than the $(l+1)$-level of $q$. If $w(q)=\pm (l+1)$ we take the $(l+1)$-level plus one instead.\ Insert $q$ with $l$. & We insert a new position with entry $l$ and label $q$ in such a way that the labels remain sorted.\ Ignore $q$. & Act as if this position were not there, for example in level calculations.\ A position $p$ is a 3-row-position in $j$. & $p$ is either the rightmost $0$ of an odd sequence of $0$’s on $j$-level one or a $0$ that is on $j$-level two or higher.\ A position $p$ is a 2-row-position in $j$. & $p$ is either a $j$ on $j$-level one or the leftmost $0$ of a sequence of $0$’s.\ A position $p$ is in a $j$-even (respectively $j$-odd) position. & The number of positions $q$ strictly to the left of $p$ with $w(q)\in\{0,\pm j\}$ is even (respectively odd).\ \[alg:2\] Let $w$ be the word $(1,-1,\dots,1,-1)$, labeled by the first-row elements of $Q$. For $i=2,3,\dots,n$: set $j:=\lfloor i/2 \rfloor$, unmark everything and insert row $i$; if $i$ is even, the $0$-entries of $w$ are first changed into $j,-j,\dots,j,-j$. Finally, forget the labels of $w$, set $V=w$ and return $V$. Conversely, label $V$ with $1,2,\dots,r$ to obtain a labeled word $w$; for $i=n, n-1,\dots,2$: set $j:=\lfloor i/2 \rfloor$, unmark everything and extract row $i$. Put the labels still in $V$ into the first row of $Q$ and return $Q$. Examples explaining Bijection $B$ --------------------------------- We can draw labeled words in the same way as we draw vacillating tableaux, namely as tuples of paths; compare with Example \[ex:VacTab\].
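To make the notions of Table \[tab:Notation\] concrete, here is a small Python sketch (our own illustration, not part of the algorithms; the function names are ours) of the $l$-level of a position in the right-to-left reading used by Algorithm \[alg:2\], and of the $j$-even test:

```python
def l_level(word, q, l):
    """l-level of position q (0-based) in word, reading to the right
    as in Algorithm 2: the minimum of the two suffix sums over
    entries of absolute value l equals -l * m."""
    s_excl = sum(e for e in word[q + 1:] if abs(e) == l)  # strictly right of q
    s_incl = sum(e for e in word[q:] if abs(e) == l)      # right of q, including q
    m = -min(s_excl, s_incl)
    assert m >= 0 and m % l == 0  # holds for the words we consider
    return m // l

def is_j_even_position(word, p, j):
    """True if the number of positions strictly left of p whose entry
    lies in {0, +j, -j} is even."""
    return sum(1 for e in word[:p] if e == 0 or abs(e) == j) % 2 == 0

# In the word (1, -1, 1, -1) every position lies on 1-level 1.
print(l_level([1, -1, 1, -1], 0, 1))   # -> 1
print(l_level([1, 1, -1, -1], 1, 1))   # -> 2
print(is_j_even_position([1, 0, -1, 0], 2, 1))  # -> True
```

Here a labeled word is represented simply by its list of entries, since the labels play no role in level calculations.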
We consider the following tableau for $n=2k+1=7$; thus we are going to create $k=3$ paths:

$$Q=\begin{array}{cccccc}
1 & 2 & 3 & 5 & 19 & 20\\
4 & 6 & 8 & 16 & 21 & 22\\
7 & 9 & 10 & 17\\
11 & 12\\
13 & 14\\
15 & 18
\end{array}$$

- We initialize the first path with up, down, up, down, $\dots$, up, down-steps, labeled with the elements of the first row.

- We insert rows $2$ up to $2k+1$ from top to bottom. For each row we insert pairs of two elements, starting with the rightmost pair, into the topmost path.

- When we insert a pair $a,b$ into a path, we insert the new positions $a$ and $b$ with down-steps and

  - if we have not inserted into this path during the insertion process of the previous row, we change the down-step to the left of the pair $(a,b)$ into an up-step;

  - otherwise we change the next down-step to the left of each of $a$ and $b$ into a horizontal step. If there is a path beneath, we insert these new horizontal steps as a pair $a,b$ into this path according to this rule.

- Whenever we finish inserting an odd row, we initialize a new path below (with up- and down-steps) and label it with the horizontal steps of the bottommost path so far.

So the core of Algorithm \[alg:2\] is an insertion algorithm from standard Young tableaux into vacillating tableaux.
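The first step above, initializing the first path from the first row of $Q$, can be sketched in Python as follows (a toy illustration under our conventions; the function name is ours):

```python
def init_first_path(first_row):
    """Initialize the first path of Algorithm 2: the word (1, -1, ..., 1, -1),
    labeled by the (even number of) entries of the first row of Q.
    Returns a list of (label, entry) pairs."""
    assert len(first_row) % 2 == 0, "Bijection B assumes even row lengths"
    return [(label, 1 if i % 2 == 0 else -1)
            for i, label in enumerate(first_row)]

# First row of the running example: alternating up-/down-steps.
path = init_first_path([1, 2, 3, 5, 19, 20])
# -> [(1, 1), (2, -1), (3, 1), (5, -1), (19, 1), (20, -1)]
```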
Some insertions create horizontal steps, and to preserve descents these are bumped into lower paths.

[Figures omitted: the labeled paths after each insertion step, drawn as tuples of lattice paths with up-, down- and horizontal steps.]

This description covers only the main cases of our algorithm (in the algorithmic description these are the parts commented by *$a_l$, $b_l$, $i$ even $a_{j+1}$ 2, $i$ odd $b_{j+1}$, $i$ odd $a_{j+1}$*). The other cases are explained in the examples below. We consider the following tableau:

$$Q=\begin{array}{cccccccc}
1 & 6 & 8 & 9 & 11 & 13 & 16 & 18\\
2 & 7 & 10 & 15 & 17 & 20\\
3 & 12 & 14 & 21\\
4 & 19\\
5 & 22
\end{array}$$

When we insert the first row, we see that our algorithm is not descent-preserving and gives no sensible output if we follow the rules from the previous example strictly. Therefore we create another case for inserting something into a path for the first time, and change the pairs of up-down-steps between them into horizontal steps.
This refers to *$i$ even $a_{j+1}$ 1* (at $2,6$ and $17,18$) and *$i$ even connect* (at $11,12$):

[Figures omitted: the first path after initializing with row $1$, and after inserting row $2$.]

When inserting the third row we have to adjust these rules once again, in order to preserve descents and to ensure that concatenated tableaux are mapped to concatenated paths. (This property is only proven for $n=3$, but conjectured in general; see Conjecture \[con:ConCat\].) Therefore we have to introduce *$i$ odd connect* and *$i$ odd separate*. Between $a$ and $b$, two horizontal steps on level $1$ are changed into a down-step and an up-step, and a down-step and an up-step on level $0$ are changed into two horizontal steps. (In our example we do *$i$ odd separate* at $2,6$, when inserting $3$ and $12$, and at $11,13$ and $17,18$ when inserting $14$ and $21$. We do *$i$ odd connect* at $7,8$ when inserting $3$ and $12$ and at $15,16$ when inserting $14$ and $21$.) The corresponding positions are cycled. Up to this point our algorithm works exactly as in [@3erAlgo]. Now we initialize the second path. We see that where we did *separate* on our first path, there are some steps that do not observe the rules for vacillating tableaux. We will deal with those in the next insertions.
[Figure omitted: the two paths after inserting row $3$ and initializing the second path; the positions where *separate* was performed are circled.]

Now several things happen at once. As mentioned above, we have to deal with the rule violations we noticed before. However, we will see that if $a$ is inserted completely to the left of such a violation and $b$ completely to the right, the rules we have introduced so far already deal with it, and two separate paths are formed. We just mark such violations as “allowed height violations” in *mark it + connect*. We do so between $3$ and $6$. However, between $17$ and $18$ we have to intervene and use *adjust separation point* (we see this at the right dashed circles). When doing so we ignore $b_1$, that is $19$, and act as if $17$ and $18$ were still on level zero. Moreover, when inserting according to the simple rules we reach another point where the paths do not observe the rules of a vacillating tableau. Again we deal with this and use *height violation*, as this position is not marked (we see this at the left dashed circles).

[Figure omitted: the two paths after inserting row $4$; the dashed circles mark the positions where *height violation* and *adjust separation point* are applied.]

In the last insertion no new rule is introduced. However, we can see how the two separate, concatenated paths have developed according to the two separate, concatenated tableaux our tableau consists of.

[Figure omitted: the final two paths, forming two concatenated vacillating tableaux.]

Note that this example also serves as an example for Algorithm \[alg:u2\] if one reads it from bottom to top.

\[ex:SpecialCases\] We consider several different tableaux to illustrate *ignore inserted $a_l$* in *height violation* (respectively *ignore next $l$*) and the two special cases of *height violation*, *$i$ even, $a_{j+1}\neq 0$* and *$i$ odd, $w(p_j)=0$ on $j$-level $0$*.
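Since the following special cases all revolve around *height violation*, the basic test from Table \[tab:Notation\] can be sketched in Python (our own illustration; the level computation is repeated so that the block is self-contained):

```python
def l_level(word, q, l):
    # l-level of position q, reading to the right as in Algorithm 2.
    s_excl = sum(e for e in word[q + 1:] if abs(e) == l)
    s_incl = sum(e for e in word[q:] if abs(e) == l)
    return -min(s_excl, s_incl) // l

def is_height_violation(word, q, l):
    """True if the l-level of q is smaller than its (l+1)-level;
    if w(q) = +-(l+1), the (l+1)-level is increased by one first."""
    upper = l_level(word, q, l + 1)
    if abs(word[q]) == l + 1:
        upper += 1
    return l_level(word, q, l) < upper

# A word starting with a 2 violates the rules for vacillating tableaux:
print(is_height_violation([2, 1, -1, -2], 0, 1))  # -> True
print(is_height_violation([1, -1], 0, 1))         # -> False
```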
We start with the following tableau:

$$Q=\begin{array}{cccc}
1 & 2 & 7 & 8\\
3 & 4 & 9 & 10\\
5 & 6\\
11 & 12
\end{array}$$

[Figures omitted: the paths after inserting rows $1$–$4$.]

Inserting the first three rows works as before. However, when inserting the fourth row we see in which cases we need *ignore inserted $a_l$ if $a_{l+1}=0$* in *height violation*. When we insert $11,12$, we have a *height violation* at $p=8$. At $p=7$ we have again a *height violation*, as we ignore the inserted $11$.

[Figures omitted: the stages of correcting the height violations at $p=8$ and $p=7$.]

Now we consider the resulting vacillating tableau and apply Algorithm \[alg:u2\]. We get the labeled words in the opposite direction and obtain the elements of our standard Young tableau two by two. We take a closer look at the first extraction, since a special case occurs here again. We extract $5,6$ as $a_2,b_2$ and get a *height violation* at $7$. We correct it and continue. At $8$ we have again a *height violation*, as we ignore $7$. Again we correct it and continue.

[Figures omitted: the stages of the first extraction.]

We point out that this is also an example where a *height violation* of $a$ overlaps with one of $b$. If this is not the case, those special cases can also occur in just one of the two algorithms. In the following example, during the insertion of row $4$, we have to ignore $9$ at $p=7$ and get a *height violation* there.
$$Q=\begin{array}{cccc}
1 & 2 & 7 & 8\\
3 & 4\\
5 & 6\\
9 & 10
\end{array}$$

[Figures omitted: the paths after inserting rows $1$–$4$.]

In the following example, during the extraction of row $5$, we have to ignore $2$ at $p=8$ and get a *height violation* there.

$$Q=\begin{array}{cccc}
1 & 2 & 8 & 10\\
3 & 4\\
5 & 7\\
6 & 9\\
11 & 12
\end{array}$$

[Figures omitted: the paths after inserting rows $1$–$5$.]

Next we consider the following tableau:

$$Q=\begin{array}{cccc}
1 & 4 & 5 & 8\\
2 & 9\\
3 & 10\\
6 & 11\\
7 & 12
\end{array}$$

[Figures omitted: the paths after inserting rows $1$–$5$.]

Again, inserting the first three rows works as before. When inserting the fourth row, that is $6,11$, we come to the special case of *height violation* *$i$ even, $a_{j+1}\neq 0$*. The way we deal with this ensures that we can continue normally after adjusting the *height violation*. At $p=4$ we have a *height violation* that is marked, but $p<a_2$. At this point $a_{3}$ is already defined to be $2$. We set $a_2$ and $a_3$ back to $0$ and search for them anew. Later we define them to be $3$ and $2$.
(3,0)– (4,1) node\[midway, above\] [4]{}– (5,0) node\[midway, above\] [5]{}– (6,1) node\[midway, above\] [8]{}– (7,1) node\[midway, above\] [9]{}– (8,0) node\[midway, above\] [10]{}; (6,0)– (7,-1) node\[midway, below\] [9]{}; (3,-1.7); (3,0) – (4,1) node\[midway, above\] [4]{}– (5,1) node\[midway, above\] [5]{}– (6,0) node\[midway, above\] [6]{}– (7,1) node\[midway, above\] [8]{}– (8,1) node\[midway, above\] [9]{}– (9,1) node\[midway, above\] [10]{}– (10,0) node\[midway, above\] [11]{}; (4,-1) – (5,0) node\[midway, below\] [2]{}– (6,0) node\[midway, below\] [5]{}– (7,0) node\[midway, below\] [9]{}– (8,-1) node\[midway, below\] [10]{}; (2,2) – (3,1) node\[midway, above\] [3]{}– (4,1) node\[midway, above\] [4]{}– (5,1) node\[midway, above\] [5]{}– (6,0) node\[midway, above\] [6]{}– (7,1) node\[midway, above\] [8]{}– (8,1) node\[midway, above\] [9]{}– (9,1) node\[midway, above\] [10]{}– (10,0) node\[midway, above\] [11]{}; (3,0) – (4,0) node\[midway, below\] [2]{}– (5,0) node\[midway, below\] [4]{}– (6,0) node\[midway, below\] [5]{}– (7,0) node\[midway, below\] [9]{}– (8,-1) node\[midway, below\] [10]{}; (0,0) – (1,1) node\[midway, above\] [1]{}– (2,1) node\[midway, above\] [2]{}– (3,1) node\[midway, above\] [3]{}– (4,1) node\[midway, above\] [4]{}– (5,1) node\[midway, above\] [5]{}– (6,0) node\[midway, above\] [6]{}– (7,1) node\[midway, above\] [8]{}– (8,1) node\[midway, above\] [9]{}– (9,1) node\[midway, above\] [10]{}– (10,0) node\[midway, above\] [11]{}; (2,-1) – (3,0) node\[midway, below\] [2]{}– (4,0) node\[midway, below\] [3]{}– (5,0) node\[midway, below\] [4]{}– (6,0) node\[midway, below\] [5]{}– (7,0) node\[midway, below\] [9]{}– (8,-1) node\[midway, below\] [10]{}; Again we consider the resulting vacillating tableau and apply Algorithm \[alg:u2\]. Again we obtain the same sequence of labeled words but the other way around and extract elements of our standard Young tableau in pairs. We obtain our special case when extracting row $4$, thus an even row. 
$4$ is a $0$ on $1$-level $0$. We set $a_2$ back to $r$ and change $4$ into a $-1$. Later we extract $5$ as a new $a_2$.

*(Figure: lattice-path diagrams of the extraction steps.)*

Finally we consider the following tableau: the standard Young tableau with rows $(1,2,10,12)$, $(3,4,11,13)$, $(5,6)$, $(7,9)$ and $(8,14)$.

*(Figure: the tableau and the lattice-path diagrams of the labeled words during its insertion.)*

This time inserting the first four rows works as before; however, when inserting the fifth row we come to the special case of *height violation* with *$i$ odd, $w(p_j)=0$ on $j$-level $0$*. Again the way we deal with this ensures that we can continue normally after *adjusting the height violation*. This happens while inserting $8,14$. At $p=10$ we have a *height violation*, where we had *$i$ odd connect* at $6,11$. We insert $b_2$ again with $9$.

*(Figure: lattice-path diagrams of the affected part of the labeled words.)*

Again we consider the resulting vacillating tableau and apply Algorithm \[alg:u2\]. Again we obtain the same sequence of labeled words, but the other way around, and extract the elements of our standard Young tableau in pairs. We extract $7,8$ for $a_2,b_2$ and $8$ for $a_1$ and obtain a *height violation* at $10$. We set $b_2=r$ and correct it. Then we obtain a *height violation special case* at $11$. This happens during the extraction of row $5$, thus during *$i$ odd*. We set $b_3=r$ and $w(11)=2$. We continue extracting $12$ as $b_3$, $13$ as $b_2$ and $14$ as $b_1=b$. Thus we extract $8,14$ as $a,b$.

*(Figure: lattice-path diagrams of the final extraction steps.)*

*Separation points* are positions that are marked. In this example we consider a standard Young tableau $Q$ with $7$ rows and $2$ columns in different dimensions $n$.
The first column of $Q$ is filled with $1,2,\dots,7$, the second column with $8,9,\dots,14$. As the rows have even length, empty rows are allowed. We see that *separation points* (positions that get marked) make a difference here, as those parts of the algorithms are the only ones executed for $a=b=0$. We see an illustration of this example in Figure \[fig:Embedding\]. When considering $n=2k+1=7$, we have $k=3$ paths. When we consider $n=9$ or $n=11$ we see how *adjust separation points* alters the paths step by step and creates more paths. Finally we consider $n=2k+1=13$ and $n>13$. We see that when going from $13$ to $14$ we add path $7$, which is an up-step and a down-step. When considering larger $n$, the paths do not change anymore. Path $7$ is dashed. The reason for this phenomenon is that $0$'s in a vacillating tableau are only allowed when the $k$-level is at least $1$. Thus horizontal steps that are truly horizontal steps, and not some other steps in paths below, are only allowed in the bottommost path. These are the only differences when considering a tableau in different dimensions $n=2k+1$.
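For concreteness, the shape of the running example can be checked mechanically. The following Python sketch verifies that the two-column filling of $Q$ is a standard Young tableau all of whose rows have even length; the list encoding and the helper `is_standard` are illustrative choices of ours, not part of the algorithms above.

```python
# The example tableau Q: two columns, the first filled with 1..7,
# the second with 8..14 (so row i is (i, i+7)).
Q = [[i, i + 7] for i in range(1, 8)]

def is_standard(t):
    """Check that t is a standard Young tableau: entries are 1..n,
    each used once, and rows and columns strictly increase.
    (Hypothetical helper for this example only.)"""
    entries = sorted(x for row in t for x in row)
    if entries != list(range(1, len(entries) + 1)):
        return False
    for row in t:
        if any(row[i] >= row[i + 1] for i in range(len(row) - 1)):
            return False
    for r in range(len(t) - 1):
        for c in range(min(len(t[r]), len(t[r + 1]))):
            if t[r][c] >= t[r + 1][c]:
                return False
    return True

assert is_standard(Q)
assert all(len(row) % 2 == 0 for row in Q)  # rows of even length
```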
*(Figure \[fig:Embedding\]: the two-column standard Young tableau $Q$ with first column $1,\dots,7$ and second column $8,\dots,14$, together with the corresponding lattice paths for $n=2k+1=7$, $n=2k+1=9$, $n=2k+1=11$, and $n=2k+1=13$, respectively $n>13$; in the last panel path $7$ is dashed.)*

If we ignore everything not concerning $j$ in Algorithm \[alg:2\] and point out that the combination of *separating* left of $a_j$ and “change $a_{j+1}$ into $0$” corresponds to “insert $a$ case 2” while inserting the third row in [@3erAlgo], we get the following:

\[theo:3erAlgoTheSame\] For tableaux in dimension three, Algorithm \[alg:2\] and Algorithm \[alg:u2\] generate the same output as Algorithm 3 and Algorithm 4 in [@3erAlgo].
Properties and Proofs for Bijection $B$
---------------------------------------

The first goal of this subsection is to prove the following theorem:

\[theo:algo2welldefvactab\] Algorithm \[alg:2\] is well-defined and produces a vacillating tableau $V$ of length $r$ and dimension $k$, given a standard Young tableau $Q$ with $2k+1$ rows of even length and $r$ entries.

We prepare the proof by stating and proving several lemmas concerning Algorithm \[alg:2\]. We use the notation from Algorithm \[alg:2\]; variables and the like also refer to it. Moreover, we call marked positions *separation points*.

\[cor:Newjplus1\] For $j$ even, we redefine $a_{j+1}$ to be the next unchanged $j$ so far to the left of $a_j$, and $b_{j+1}$ to be the next changed position to the left during the insertion process of $a$ and $b$ so far. This is just a renaming. However, it follows that the $l$-level grows between $c_l$ and $c_{l+1}$, but nowhere else.

\[cor:SeparationInsert\] *Adjust separation point* changes the marked positions as if an $a$ were inserted in between and a $b$ were inserted to the left.

\[lem:SeparationHeight\] *Separation points* contain *height violations* exactly after initializing a new $j$ and after inserting an even row. This *height violation* is always in $j-1$. This implies that they cause no *height violation* at the end of our insertion process.

*Separation points* always start at *separate odd* between $a$ and $b_{j+1}$. As long as they are still between such newly inserted elements, they expand on $j$. After inserting an odd row there is always another *separate odd*, and thus at the last inserted odd row there is no *height violation* anymore. Once we come into the case *adjust separation point* we do the same as some inserted $a,b$ would do. However, one path after the other, beginning with the topmost, is no longer part of the *separation point*.
*(Figure: lattice-path sketches illustrating separation points before and after *adjust separation point*.)*

The *predecessor* of some number $a$ or $b$, say $c$, is the following number $d$: search for the occurrence $c=c_1$ that is inserted first during the insertion process; this number $c$ has some other number directly below it in $Q$, namely $d$. We refer to its insertions by $d_l$, as we do for $a$ or $b$ in Algorithm \[alg:2\].

\[lem:Predecessor\] If an inserted number $c_{l+1}$ equals its predecessor $d_{l}$, no new *height violation* arises.

Inserting $d_l$ made the $l$-level higher between $d_l$ and $d_{l+1}$ and did not change the $(l+1)$-level in this area. When choosing $d_l$ as $c_{l+1}$, this makes the $(l+1)$-level higher in the same area or a smaller one (and might change $\pm l$'s into $0$'s with a level growth of $1$), which can cause no *height violation*. The following sketch illustrates this for $l+1\neq j$.

*(Figure: lattice-path sketch of the levels around $d_l=c_{l+1}$.)*

\[lem:HeightInsert\]

1. If $p$ is a *height violation* in $(l-1)$, $w(p)$ is always $(l-1)$.

2. There are no *height violations* after inserting a pair $a,b$ if there were none before.
The only exceptions are *separation points*, where there is a *height violation* in $(j-1)$ before and after inserting an even row.

3. We can always find $a_{j+1}$ and $b_{j+1}$.

We show these three statements inductively. The base case (the empty case) is clear. For the inductive step we show one statement after the other.

1. If there was no *height violation* before, then to find a new one we consider in which situations the level in some paths can grow:

 - between $c_l$ and the first $c_{l+1}$;

 - between a *height violation* in $l$ and the new $c_{l+1}$;

 - between a *height violation* in $l$ and a new *height violation* in $(l-1)$. In this area there cannot be a $-l$, except a marked one or $a_l$ if $c$ is a $b$, as it would otherwise have been taken for $c_{l+1}$. We illustrate these three cases for $l+1\neq j$.

 *(Figure: lattice-path sketches of the three cases.)*

 - *$i$ odd connect* or *$i$ even connect*. This happens between $a$ and $b$ and can only create a *height violation* at a position with $j$-level $0$, thus a $j-1$. (A $-j+1$ could also create a *height violation* but needs to be left of such a $j-1$.) At *$i$ even connect* such positions are marked.

 *Height violations* in $(l-1)$ only happen in those situations. $p$ cannot be a $0$, or else the previous $p$ would be a *height violation* as well, and we see inductively that right of $p$ there are no *height violations*. The same argument holds for $l$ and $-(l-1)$. We have seen that in the area where word $l$ gets higher there is either no $-l$, or it is ignored ($a_l$), or it is to the left of an $l-1$ (which is marked). Therefore the only possible value is $l-1$.

2. Therefore at a *height violation*, the $(l-1)$-level increases and the $l$-level decreases. However, as this happens somewhere where the $l$-level was increased before (by inserting $c_l$), it sets the $l$-level back to its original height. Thus there is no *height violation* until another level growth.

 *(Figure: lattice-path sketches of the level changes at a height violation.)*

 *Height violations* can only add up in pairs of two (*connect* only happens left of $b_{j+1}$). If that happens, also the levels of those *height violations* add up, so it is sufficient to look at each situation separately. The *ignore $a_l$* step ensures that everything is considered separately. (Compare with the first tableau in Example \[ex:SpecialCases\].)

3. To conclude, we note that once we reach the predecessor no new *height violation* arises (compare with Lemma \[lem:Predecessor\]). This also implies inductively that the predecessor cannot be taken from a pair before, as their predecessors are the leftmost positions they can take.
It remains to show that the predecessor $d_l$ can indeed be taken as the new $c_{l+1}$. This holds for $l+1\neq j$, as in this case $d_{l}=-l$. For $l+1=j$ we distinguish between $i$ even and $i$ odd. If $i$ is odd we can argue the same, except for the case that $d_l=0$. In this case, however, *$i$ odd separate* changes this $d_l$ into a $-l$. If $i$ is even we can argue that in *$i$ odd* the $a$'s produce $0$'s that are in $0$-even positions and the $b$'s produce $0$'s that are in $0$-odd positions. Now the $a_{j+1}$'s are always $j$-even positions and the $b_{j+1}$'s are always $j$-odd positions, and every such position can be chosen as such. Left sides (down-steps) of *separation points* can never be inserted as $b$ and thus are never predecessors of $b$.

\[lem:Sumis0\] The sum over the labeled word $w$ is $0$ after every insertion of a pair $a,b$. In particular, the sum over $\pm l$ in $w$ is $0$ for every $l$.

The sum over all $j$'s is $0$ after initializing $j$, as there is always an even number of $0$'s. Thus we have to show that nothing we do during the insertion process changes the sum:

- If $i$ is even and we insert $a_j$, $a_{j+1}$ and $b_j$, $b_{j+1}$, we insert either two $-j$'s and change one $-j$ into a $j$, or we insert one $-j$ and one $0$ and change one $-j$ into a $0$. Otherwise we insert something with $-l$ and change another $-l$ into a $0$ or a $(-l+1)$.

- When we *$i$ even connect*, *$i$ odd connect* or *adjust separation point*, we always change an $l$ and a $-l$ into $0$'s or into $l+1$ and $-l-1$. At *$i$ odd separate* we change two $0$'s into a $-l$ and an $l$.

- At a *height violation* we do the inverse of finding a $c_{l}$, namely changing an $l-1$ into an $l$ instead of changing a $-l-1$ into a $-l$. As we insert $c_l$ anew later on, this does not change the sum.

However, there are two situations where it could be that $c_{l+1}$ is already found but there is still a *height violation*. Therefore we have to adjust the path to ensure sum $0$ in these situations.
Those are the two special cases. At *$i$ odd connect* we deal with this by changing a $0$ into a $-j=-l-1$ again. At *$i$ even $a_{j+1}$ 1* we defined and adjusted $a_{j+1}$ before and deal with this by changing it back again.

For well-definedness, we have to show that the while loop always terminates, thus that we find an $a_{j+1}$. We have seen this in Lemma \[lem:HeightInsert\]. Moreover, we have to show that the vacillating tableau properties hold for our resulting word:

1. In every initial segment the following holds:

 1. \[p:WellDefBottom\] $\#i - \#(-i) \geq 0$,

 2. \[p:WellDefHeight\] $\#i-\#(-i)\geq \#(i+1)-\#(-i-1)$,

 3. \[p:WellDefZero\] if the last position is $0$ then $\#k-\#(-k)>0$.

2. \[p:WellDefSum\] The sum over all positions is $0$.

To show that Property \[p:WellDefBottom\] is satisfied after any insertion of a pair $a,b$, we have to show that there are no steps with negative $l$-level. There are two steps in the algorithm where we decrease the level of some position. At the first one, *separate odd*, we generate $-j,j$ on $j$-level one. At the second one, *height violation*, we have seen in Lemma \[lem:HeightInsert\] that we decrease positions that have been increased before.

To show that Property \[p:WellDefHeight\] is satisfied, we have to show that there is no *height violation*. This is shown in Lemma \[lem:HeightInsert\] and Lemma \[lem:SeparationHeight\].

To show that Property \[p:WellDefZero\] is satisfied, we show that $0$'s are always at least on $k$-level one. When initializing a new $j$, $0$'s get changed into $\pm j$. New $0$'s come either from *connect*, where they are on level one, or from *$c_{j+1}$, $i$ odd*, where we change a $-j$ on level at least $0$ into a $0$ on level at least one.

Property \[p:WellDefSum\] is shown in Lemma \[lem:Sumis0\]. Finally, the number of steps is $r$, as every entry of $Q$ inserts exactly one step.
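The properties above can be restated as a small executable check. The sketch below assumes a vacillating tableau is encoded as a Python list over $\{0,\pm 1,\dots,\pm k\}$; this encoding and the function name are ours, not part of the algorithms.

```python
def is_vacillating(w, k):
    """Check the defining properties of a vacillating tableau of
    dimension k, given as a word w over {0, +-1, ..., +-k}.
    (Sketch; the integer-list encoding is an assumption.)"""
    counts = [0] * (k + 1)  # counts[i] tracks #i - #(-i) in the prefix
    for pos in w:
        if pos != 0:
            counts[abs(pos)] += 1 if pos > 0 else -1
        # (1a) #i - #(-i) >= 0 in every initial segment
        if any(c < 0 for c in counts[1:]):
            return False
        # (1b) #i - #(-i) >= #(i+1) - #(-i-1)
        if any(counts[i] < counts[i + 1] for i in range(1, k)):
            return False
        # (1c) a 0 is only allowed when #k - #(-k) > 0
        if pos == 0 and counts[k] <= 0:
            return False
    # every letter cancels, which in particular forces the sum over
    # all positions to be 0 (property 2) and an empty final shape
    return all(c == 0 for c in counts[1:])

assert is_vacillating([1, 0, -1], 1)
assert not is_vacillating([0], 1)      # violates (1c)
assert not is_vacillating([2, -2], 2)  # violates (1b)
```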
Due to what we have seen about the predecessor, the following lemma holds for paths of even length and standard Young tableaux with all rows of even length.

\[theo:Concat\] Considering Algorithm \[alg:2\], concatenation of vacillating tableaux of empty shape and even length corresponds to concatenation of standard Young tableaux whose rows have even length. In particular, the following holds:

- If a vacillating tableau is composed of two concatenated paths of empty shape and even length, its corresponding standard Young tableau can be written as a concatenation of two standard Young tableaux all of whose rows have even length.

- On the other hand, if a standard Young tableau can be written as a concatenation of two standard Young tableaux whose rows have even length, its corresponding vacillating tableau is also composed of two concatenated paths of empty shape and even length.

Now we want to show the same for Algorithm \[alg:u2\], which we will later prove to be the reverse of Algorithm \[alg:2\]. To see this, we will again first state and prove several lemmas.

\[theo:ualgo2welldefSYT\] Algorithm \[alg:u2\] is well-defined and produces a standard Young tableau with rows of even length and $r$ entries, given a vacillating tableau of even length $r$ and empty shape.

\[lem:Ualgo2SumIsZero\] The sum over the positions in the labeled word $w$ is $0$ after any extraction of a pair $a,b$. In particular, the sum over $\pm l$ in $w$ stays $0$.

For every $c_l$ that is changed from $-l$ into $-(l-1)$, we change a $-(l-1)$ into a $-(l-2)$. If we consider $c_j$, $i$ odd, we lose a $0$ in this process. If we consider $c_j$, $i$ even, we conclude that we either extract two $-j$'s and change a $j$ into a $-j$, or we extract one $0$ and one $-j$ and change a $0$ into a $-j$. If we consider $1$ we simply delete a $-1$ after we inserted a new one to the left. *Connect*, *separate*, *height violations* and *separation points* also just change values in pairs of two, always an $l$ and a $-l$.
For details compare this with the proof of Lemma \[lem:Sumis0\]. \[lem:Ualgo2HeightVio\] If $p$ is a *height violation* in $l$, then $p$ is always an $l+1$. After extracting a pair $a,b$ there is no *height violation* if there was none before, and the extraction process stops. Again the only exceptions are *separation points*. Once again we consider $a$ and $b$ separately, as combined *height violations* just add up. *Ignore positions that are corrected height violations of $a$* ensures that everything is considered separately. (Again, compare with the first tableau in Example \[ex:SpecialCases\].) For a *height violation* we have to consider where the $l$-level for $l>j$ is decreased:

- between $c_{l+1}$ and $c_l$;
- between $c_{l+1}$ and a *height violation* in $l+1$;
- when *adjusting a height violation* in $l-1$ until finding a new $c_{l}$ or a new *height violation*.

In this area there cannot be a $-l$, except for a marked one if $c$ is a $b$, as it would otherwise have been taken for $c_l$. In the proof of Lemma \[lem:HeightInsert\] those situations are illustrated for Algorithm \[alg:1\]. Here the situation is similar but the other way around. Moreover, we have to consider level increases. Those are only possible for the $j$-level by *separation odd*. However, this happens at $j$-even positions, so if there is something on $j$-level zero we change it into $j-1$ and set $b_{j+1}$ and $b_j$ to undefined in *height violations special cases*. Thus there is no *height violation* afterwards. An $l$ is not a *height violation*, as there would have been one before. The same holds for a $-(l+1)$. Moreover, it cannot be a $-l$, as we have seen in the list above; thus it is an $l+1$. At *adjust height violations* this $l+1$ is changed into an $l$; thus we increase the $l$-level and decrease the $(l+1)$-level. However, this happens at a position where the $(l+1)$-level was increased before by choosing $c_{l+1}$.
Again the illustrations in the proof of Lemma \[lem:HeightInsert\] show the same situations in Algorithm \[alg:2\]. \[lem:Ualgo2AB\] We always find an $a_l$ and a $b_l$ if we found an $a_{l+1}$ and a $b_{l+1}$. After several steps we find an $a_l$ and a $b_l$ whose extraction does not cause a new *height violation*. We start with $a_{j+1}$ and $b_{j+1}$ and distinguish the parity of $i$. If $i$ is odd, $a_{j+1}$ changes a $0$ in a 3-row-position. (If none exists anymore due to *adjust separation point*, nothing happens.) This is the only time a $0$ is changed, except for $b_{j+1}$ and *connect/separate odd*, where $0$’s are changed in pairs. Thus after finding $a_{j+1}$ there will remain an odd number of $0$’s, and therefore we will find a $b_{j+1}$. If $i$ is even, we only need to find $a_{j+1}$, as this gives us both a first $a_j$ and a first $b_j$ ($a_j$ is a $0$ or a $-j$ on $j$-level at least one, thus we find $b_j$ as a $-j$ on $j$-level at least zero). We find it left of a 2-row-position. Again, if none exists anymore, nothing happens. For $l<j+1$ we always find a first $c_l$, as $c_{l+1}$ used to be a $0$ not on level zero, so there is a $-l$ to the right. After a *height violation*, we insert an $l$ and get the old $l$-level back, so we can find another $-l$. If there were only one $-l$ for both $a$ and $b$, both $a_{l+1}$ and $b_{l+1}$ would have been $-(l+1)$ on $l$-level zero with no $-l$ in between. However, this would either cause a *height violation* in $a_{l+1}$, which is a contradiction, or there is an $(l+1)$ on $(l+1)$-level $0$ next to $a_{l+1}$, and $b_{l+1},\dots,b_j$ are right of that. Moreover, $a_{l+1},\dots,a_j$ are each on level $0$. As there is no $-l$ between $a_{l+1}$ and this $(l+1)$, and no $l$ in between either ($b_{l+1}$ is also on level $0$), $a_j$ satisfies the conditions for *adjust separation point*, which is also a contradiction.
We find a $c_l$ whose extraction does not cause a new *height violation*, as this is the case once we reach a $-l$ not followed by an $l$. This happens at some point in a path without *height violations* that ends on $l$-level zero for all $l$, due to Lemma \[lem:Ualgo2SumIsZero\] and Lemma \[lem:Ualgo2HeightVio\]. \[lem:UalgoSYT\] For each extracted number in row $i+1$ we extract at least one smaller number in row $i$. We first show that every last $d_{l+1}$ can be a last $c_{l}$ for $l\neq j$: extracting $d_{l+1}$ decreased the $l$-level by $1$ for $l<j$ between $d_{l+1}$ and $d_l$ without changing the $(l-1)$-level there. Thus we can decrease the $(l-1)$-level in this area when extracting the next row, so this could be a $c_l$ that causes no further *height violations*. (Compare with the illustrations in the proof of Lemma \[lem:Predecessor\].) Now we show that each extracted $d_j$ causes a $c_{j+1}$. Again we do so by distinguishing the parity of $i$ when extracting $c_j$. If $i$ is odd, we extracted $-(j+1)$’s before in an even process. We need to show that those produced $0$’s in 3-row-positions and that we can take those as $a_{j+1}$ and $b_{j+1}$. We distinguish two cases. If those are separated by $a_j$, they automatically produce two odd sequences of $0$’s, one of which we can take. It could be that one of them becomes even due to some former or later $c_j$ in this round; however, this is the same situation as in the next case. If those are not separated by $a_j$, there cannot be a $-j$ between them. Thus at least the right one is on $j$-level two or higher, and the other one is either on the same level or, if separated by a $j$, it is in an odd sequence. If $i$ is even, we extracted $j$’s in an $i$-odd process before. Thus, due to Corollary \[cor:SeparationInsert\], we can use Lemma 34 and its proof in [@3erAlgo]. It remains to show that those new 2- or 3-row-positions will not be changed in *adjust separation points*.
This follows as the extraction process of $a$ will leave some negative step in between or will extract a $-1$. For well-definedness we have to show that the two while loops terminate. That the inner one terminates is ensured by Lemma \[lem:Ualgo2HeightVio\] and Lemma \[lem:Ualgo2AB\]. The outer one has to terminate, as with each extraction the word gets shorter by two, so in the end there is nothing left to build a 2- or 3-row-position. Using Lemma \[lem:UalgoSYT\], we see that it produces a standard Young tableau with even row lengths. \[theo:algo2ualgo2Inverse\] Algorithms \[alg:2\] and \[alg:u2\] are inverse. We prove this by showing that every step has its inverse in the other algorithm. Those steps are named (commented) the same. We consider them separately. For the following steps it follows directly from the definition that they are inverse:

- Initialize $j$
- Insert / extract row 1
- $a$ / $b$ getting inserted or extracted as $a_1$ / $b_1$

For the following steps we have to argue a little more:

- *Height violations:* We show that if a *height violation* $h$ with $w(h)=l$ in Algorithm \[alg:2\] occurs after inserting $c^1_{l+1}$ and we correct it, and insert $c^2_{l+1}$ later, we get the same $h$ with $w(h)=l+1$ as a *height violation* when extracting this $c^2_{l+1}$, and the other way around. This is sufficient because *height violations* in $l$ are always $l$ respectively $l+1$, so the corrections act inversely. When we *adjust* a *height violation* $h$ in Algorithm \[alg:2\], we get a $w(h)=l+1$ whose $(l+1)$-level is one less than its $l$-level. When extracting $c^2_{l+1}$ in Algorithm \[alg:u2\], this decreases the $l$-level to the right, and $h$ is the first position with an $l$-level that is too large, as the other positions were no *height violations* before starting *height violation* in Algorithm \[alg:2\]. The other way around is similar.
If we find a *height violation* $w(h)=l+1$ caused by extracting $c^2_{l+1}$ in Algorithm \[alg:u2\], we change this into an $l$ and extract a $c^1_{l+1}$ to the right. When inserting $c^1_{l+1}$ in Algorithm \[alg:2\], we increase the $(l+1)$-level such that exactly at $h$ there is a new *height violation*. Again nothing earlier could have caused it, as those positions were no *height violations* in Algorithm \[alg:u2\]. We illustrated this for $l+1\neq j$ in the following sketch.

[Sketch omitted: four configurations of the paths $l$ and $l+1$, each marking the extracted step $c^2_{l+1}$, the *height violation* $h$, and the inserted step $c^1_{l+1}$.]

The three special cases are left to consider. *Ignore $a_l$* at a *height violation* happens when $a_l$ is already inserted but $a_{l+1}$ is not; thus the level of path $l$ is changed but the level of path $l+1$ is not.
When we do the inverse this is not relevant, as in this case $a$ is always extracted further than $b$. Thus we have to do the same if $b_{l+1}$ is extracted but $b_{l}$ is not. Therefore this special case changes nothing in the argumentation above. The special case *$i$ even* happens if there is a *height violation* between $a_{j+1}$ and $a_j$. Thus, together with finding a new $a_j$, it sets $-(j-1),(j-1)$ between those to $0,0$. Therefore when we extract we get the left $0$ as $a_j$ and change it into a $-(j-1)$ again. The special case changes the right $0$ into a $(j-1)$ and sets $a_j$ and $a_{j+1}$ to undefined. For an illustration see the fourth tableau of Example \[ex:SpecialCases\]. The other direction works the same way. The special case *$i$ odd* happens after *connect*. Thus it changes a $-j$ that was changed into a $0$ back into a $-j$. Then $b_j$ is inserted anew as this $-j$. When it gets extracted, it detects *height violations*. Correcting them leaves the $0$ we produced on $j$-level $0$, which we change back into a $j$. For an illustration see the fifth tableau of Example \[ex:SpecialCases\]. The other direction works the same way. We point out that Algorithm \[alg:1\] *connects* between $a_{j+1}$ and $b_{j+1}$ at $j$-level $0$ whenever $i$ is odd.

- *Separation points:* In both algorithms we *mark* and *adjust separation points* while searching for $a$’s and $b$’s. This way we *adjust separation points* before reaching the next $a$ and $b$. Thus in Algorithm \[alg:2\] this happens to the right of $a$ and $b$, and in Algorithm \[alg:u2\] to the left. This makes no difference, as we still consider everything in the same order and make an extra iteration for *separation points* at the ends not considered so far. We mark positions $\pm l$ at certain points to make certain exceptions for them. We also mark $0$’s (in a slightly different way), but those are not relevant, as they are never such exceptions.
The *separation points* we just mark are those with an $l$ between $a_l$ and $b_{j+1}$; there we mark positions $\pm l$ up to $\pm j$. Due to *$i$ odd separate* and *$i$ even connect*, in each algorithm we mark positions in such a way that after the iteration they form the same pattern as the other algorithm marks at the beginning of an iteration. Marking $a_l$ ensures that, even though $a$ might be inserted in the left part of the *separation point*, all $\pm l$ that should be marked are marked. The *separation points* we *mark* and *adjust* in Algorithm \[alg:2\] are those with no $l$ between $a_l$ and $b_{j+1}$. When we *adjust a separation point* in Algorithm \[alg:2\], it is automatically right of the current $b_l$. Thus, what we have to show is that exactly the *separation points* we *adjust* form the patterns we demand in Algorithm \[alg:u2\] *adjust separation points*. When an $a$ is in between, there are three different ways it can be so. When $a$ is inserted as such, we have a marked $-1$. When $a$ starts to be in between in the first path marked, let us call it $l$, then $a_{l-1}$ is either between the marked positions $\pm l$, so a $-(l-1)$ is between some marked positions, or it is not, so it changed a marked $-(l-1)$ into a $-l$ but not the corresponding $(l-1)$. When $a$ causes a *height violation* even though it is marked, and therefore $p<a_l$, we can argue as above. This explains why we look for $\pm l$ between the $0$’s directly to the right. We do so because in Algorithm \[alg:2\] we mark all $\pm l$ between a $-j$ and the next $j$, which become the leftmost and rightmost $0$ of their sequence of $0$’s. When we *adjust a separation point*, we shift the $\pm l$ upwards; since the $-(l-1)$ was not between $\pm(l-1)$, it is now no longer between $\pm l$. With the knowledge of those, the following gets easier:

- *$i$ even, $i$ odd*: It remains to show that everything that does not involve marking is inverse.
Due to Corollary \[cor:SeparationInsert\] we can use Lemma 36 in [@3erAlgo] to show this. We point out that the main arguments include analyzing 2- and 3-row-positions.

- $a_l$ / $b_l$: We point out that we have an index shift of one at $l$ between the formulations. Once we account for this, we see that they clearly operate in the opposite way. It remains to show that they act on the same positions. As they always take the next $-l$ and change it, there is no $-l$ that could have been taken earlier by the other algorithm.

\[theo:algo2DescPres\] Algorithm \[alg:2\] is descent preserving. This proof is very similar to the proof of Lemma 37 in [@3erAlgo]. We show that the algorithm preserves descents after every insertion of a pair $a,b$, in the sense that we consider the inserted numbers as a new total order. In the first step we show that when we insert a pair $(a,b)$, $a$ and $b$ cause a descent, except in the case that $a$ and $b$ are neighbors in the order of already inserted numbers. To see why we want this to hold, consider the partial standard Young tableau consisting only of already inserted numbers. The number smaller than $a$ needs to be in a row below $a$, as numbers in the same row to the right are larger. The same holds for $b$, except if $a$ and $b$ are neighbors in the current order. In the case that $a$ and $b$ are neighbors, they are both inserted as $-1$’s and we have to show that only $a$ is a descent. Everything else is analogous to the general case. As $a$ or $b$ is inserted as a $-1$, the only way this causes no descent is that the position to the left is a $-1$ or a $1$ on level zero. The latter is not possible, as this $1$ would have been on level $-1$ before. A $-1$ directly to the left of $a$ or $b$ would change either into a $0$, a $1$ or a $-2$, depending on $i$. All cases cause a descent. In the second step we show that we do not lose descents when inserting a pair $(a,b)$.
If an entry was a descent in the partial tableau before inserting $(a,b)$, it is still one in the new partial tableau, either with the same number above or with $a$ or $b$. In the former case neither $a$ nor $b$ is inserted between those. In the latter case either $a$ or $b$ is inserted in between. This creates a descent in the vacillating tableau and removes the other descent, as $(-1,x)$ can never be a descent. Inserting a $-l$ always creates a new descent when ignoring positions with smaller absolute values. The only such value that is not a descent left of a $-l$ is $-l$. However, a $-l$ left of an inserted $-l$ is changed into a $-(l+1)$ while inserting. (Separation points are ignored if $c=b$; however, they are adjusted beforehand if $b$ would be inserted between them, and thus changed into a $-(l+1)$ too.) It follows that the position left of our new $-l$ is a descent if and only if it was a descent before changing it into $-l$. In the third step we consider *connect* and *separate* as well as *height violation* and *adjust separation points*. We show that *separate* and *connect* neither produce nor cancel a descent. For *$i$ even connect* this is clear, as $(j,-j)$ on $l$-level zero is not a descent. For *$i$ odd separate* we consider a $0$ left or right of a position that was changed in *separate*. Those need to be either $\tilde{a}$ or $\tilde{b}$, or they were changed in *connect*, because otherwise they would have been *separated* as well. In the former case we want a descent, and in the latter too, as $(j,0)$ has changed into $(0,-j)$ or the other way around. The same holds for $j$’s or $-j$’s of *connect*. At *height violation* we change an $l$ that was a *height violation* into an $(l+1)$. If $l$ was a descent and $(l+1)$ is not, there needs to be an $(l+1)$ to the right; however, this cannot happen, as then the *height violation* would have started further to the right. If $l$ was no descent, then $(l+1)$ is none either.
In our special cases we undo a change we have just made before, thus we do not change any descents. For *separation points* we point out that a descent left of a $\pm l$ needs to be a $\pm (l+1)$; thus when *adjusting* them they become either $\pm (l+1)$ and $\pm (l+2)$ or $\pm (j-1)$ and $0$, both of which preserve the descent. \[theo:10-1\] Let $Q$ be a standard Young tableau with rows of even length and $V$ be its corresponding vacillating tableau determined by Algorithms \[alg:2\] and \[alg:u2\]. The first position in row $i$ of $Q$ is $i$ for all rows $i=1,2,\dots,2k+1$ if and only if the first $2k+1$ steps of $V$ are $1,2,\dots,k,0,-k,-k+1,\dots,-1$. This holds as Algorithm \[alg:2\] is descent preserving and Algorithms \[alg:2\] and \[alg:u2\] are inverse.

Cut-away-shapes and $\mu$-horizontal strips
-------------------------------------------

In this subsection we define a pattern on vacillating tableaux, namely “cut-away-shapes”, and an equivalent pattern on standard Young tableaux, namely “$\mu$-horizontal strips”. We will see that these are mapped to each other by Algorithms \[alg:2\] and \[alg:u2\]. The definition of the latter is strongly related to alternative Littlewood-Richardson tableaux. A vacillating tableau of shape $\emptyset$ has *cut-away-shape* $\mu=(\mu_1,\mu_2, \dots, \mu_l)$ if it ends with
$$(\underbrace{-l,\dots, -l}_{\mu_l},\ \underbrace{-(l-1),\dots, -(l-1)}_{\mu_{l-1}},\ \dots,\ \underbrace{-2,\dots, -2}_{\mu_2},\ \underbrace{-1,\dots, -1}_{\mu_1}).$$
Therefore, if we delete (“cut away”) the last $|\mu|$ positions, the vacillating tableau has shape $\mu$.
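The maximal cut-away-shape can be read off directly from the trailing runs of negative labels in the definition above. The following Python sketch is our own illustration (the function name and the encoding of the labeled word as a list of integer step labels are not from the text):

```python
def cut_away_shape(word):
    """Read off the maximal cut-away-shape of a vacillating tableau.

    `word` is the sequence of integer step labels; a cut-away-shape
    mu = (mu_1, ..., mu_l) means the word ends with mu_l copies of -l,
    then mu_{l-1} copies of -(l-1), ..., then mu_1 copies of -1.
    """
    mu = []            # collects mu_1, mu_2, ...
    i = len(word) - 1
    level = 1          # reading right to left, we expect a run of -level next
    while i >= 0:
        run = 0
        while i >= 0 and word[i] == -level:
            run += 1
            i -= 1
        if run == 0:
            break
        # cap the part so that mu stays weakly decreasing (a partition);
        # surplus steps of a longer run are simply not part of the shape
        mu.append(min(run, mu[-1]) if mu else run)
        level += 1
    return mu


# The trailing steps -3, -2, -2, -1, -1, -1 give mu = (3, 2, 1).
print(cut_away_shape([1, 2, 3, 0, -3, -2, -2, -1, -1, -1]))  # [3, 2, 1]
```

Note that a run of $-l$’s longer than the preceding part only contributes as many steps as the partition condition allows, since only the last $|\mu|$ positions of the word matter.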
The following vacillating tableau has cut-away-shape $\mu=(\color{blue} 3 \color{black}, \color{violet} 2 \color{black}, \color{red} 1\color{black})$:

[Figure omitted: a vacillating tableau of length 16, drawn as three superposed lattice paths, with the steps of $\mu_3=1$ (red, position 11), $\mu_2=2$ (violet, positions 12 and 13) and $\mu_1=3$ (blue, positions 14 to 16) highlighted.]

If a tableau has cut-away-shape $\mu=(\mu_1,\mu_2,\dots,\mu_l)$, it also has cut-away-shape $\tilde{\mu}$ for every subpartition $\tilde{\mu}\subseteq\mu$ of the form $\tilde{\mu}=(\mu_1,\mu_2,\dots,\mu_{m},\mu_{m+1}-u)$ with $0\leq m<l$ and $0\leq u <\mu_{m+1}$. \[def:MuHorizontalStrip\] Let $\mu$ be a partition with $\ell(\mu)\leq k$. Let $Q$ be a standard Young tableau with $2k+1$, possibly empty, rows, whose lengths all have the same parity, and $r$ entries. A *$\mu$-horizontal strip* is a pattern of the last $|\mu|$ numbers in the following way:

1. \[p:MuHoriHorizontalStrip\] For each $j$, the numbers $r-(\mu_1+\mu_2+\dots+\mu_{j-1})-\mu_j+1$ up to $r-(\mu_1+\mu_2+\dots+\mu_{j-1})$ form a horizontal strip filled increasingly from left to right.
By abuse of notation we say that those numbers are in $\mu_j$.

2. \[p:MuHoriUpperNeighbour\] The $i$th number in $\mu_j$ is in a row below the $i$th number of $\mu_{j+1}$, if the latter exists.

3. \[p:MuHoriMinRow\] Go through the elements of $Q$ belonging to the $|\mu|$ last numbers from top to bottom and from right to left. Let $e$ be the current element of the $\mu$-horizontal strip. We define a sequence $v_e$ of elements of the $\mu$-horizontal strip. Let $e$ be the first entry of $v_e$. If $m-1$ entries of $v_e$ are defined, let $f$ be entry number $m-1$. We now search for entry number $m$. For that we consider entries that are smaller than $f$ and lie in exactly $m-1$ of the sequences defined before $v_e$. If this set is nonempty, take its largest entry as entry $m$. If it is empty, $v_e$ has no more entries. Let $r_e$ be the row $e$ is in. Now we define the value $o_e$ to be the number of entries in $v_e$ with the following properties: the entry is the rightmost occurrence in its $\mu_j$, and, if it is number $m$ in $v_e$, all $v_{\tilde{e}}$, where $\tilde{e}\neq e$ is in the same row as $e$, have at most $m-1$ entries. We require $r_e\geq 2 |v_e| - o_e$.

\[prop:MuAndAeLRSame\] The $|\mu|$ largest elements in a standard Young tableau $Q$ form a $\mu$-horizontal strip if and only if the reverse skew semistandard tableau we obtain by deleting smaller elements and replacing elements in $\mu_j$ by $j$ is an alternative orthogonal Littlewood-Richardson tableau. This follows directly from the definitions (Definition \[def:aoLRT\] and Definition \[def:MuHorizontalStrip\]). The main difference between the definitions is that in the $\mu$-horizontal strip, in the third point of defining $v$, we only require that the entry is the largest one, not that it is the rightmost occurrence. Since entries in $\mu_j$ are increasing, this is still equivalent.
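To make the first two conditions of Definition \[def:MuHorizontalStrip\] concrete, here is a small Python sketch (our own illustration, not part of the bijection). It checks conditions 1 and 2 for a tableau given as a list of rows; the more involved third condition on the sequences $v_e$ is deliberately left out:

```python
def is_mu_horizontal_strip(rows, mu):
    """Check conditions 1 and 2 of a mu-horizontal strip (the third
    condition, involving the sequences v_e, is NOT checked here).

    `rows` is a standard Young tableau as a list of rows (top to bottom),
    `mu` a partition.  The last |mu| entries split into blocks: block j
    holds the mu_j numbers ending at r - (mu_1 + ... + mu_{j-1}).
    """
    r = sum(len(row) for row in rows)
    pos = {rows[i][c]: (i, c)
           for i in range(len(rows)) for c in range(len(rows[i]))}
    blocks, hi = [], r
    for part in mu:
        blocks.append(list(range(hi - part + 1, hi + 1)))  # numbers in mu_j
        hi -= part
    for block in blocks:
        cols = [pos[n][1] for n in block]
        # condition 1: a horizontal strip (distinct columns), filled
        # increasingly from left to right
        if len(set(cols)) != len(cols) or cols != sorted(cols):
            return False
    # condition 2: the i-th number of mu_j lies in a row strictly below
    # the i-th number of mu_{j+1}, whenever the latter exists
    for j in range(len(blocks) - 1):
        for n_low, n_high in zip(blocks[j], blocks[j + 1]):
            if pos[n_low][0] <= pos[n_high][0]:
                return False
    return True
```

For instance, this check accepts the tableau with rows $\{1,2\},\{3,4\},\{5,6\}$ for $\mu=(2,2)$, which the full definition rejects, but only via the third condition.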
We consider the following tableaux, given by their rows from top to bottom (the first and the last one correspond to those in Example \[ex:aoLRT\]):

1. $\{1,2,3,4\}$, $\{5,6,7,8\}$, $\{9,11,13,16\}$, $\{10,12\}$, $\{14,15\}$
2. $\{1,2\}$, $\{3,4\}$, $\{5,6\}$
3. $\{1,2,5,6\}$, $\{3,4,7,10\}$, $\{8,9\}$
4. $\{1,2,3,4\}$, $\{5,6,11,12\}$, $\{7,8,17,18\}$, $\{9,10\}$, $\{13,14\}$, $\{15,16\}$
5. $\{1,2,3,4,21,22,23,24\}$, $\{5,6,7,8,25,26,27,28\}$, $\{9,10,11,12,29,30,32,33\}$, $\{13,14,15,16,31,35,36,37\}$, $\{17,18,19,20,34,42\}$, $\{38,39,40,41\}$

The first tableau contains a $\mu=(3,2,1)$-horizontal strip (as well as a $(3,2)$-, $(3,1)$-, $(3)$-, $(2)$-, $(1)$-, and $\emptyset$-horizontal strip). It is the standard Young tableau corresponding to the vacillating tableau in the previous example. The $v$’s are: $(16)$, $(13)$, $(11)$, $(12,11)$, $(15,13,11)$, $(14,12)$. (Compare with Example \[ex:aoLRT\].) The second tableau contains a $(2,1)$-horizontal strip but not a $(2,2)$-, $(2,2,1)$- or $(2,2,2)$-one, due to the third condition. The $v$’s are: $(4)$, $(6,4)$, $(5)$. The third tableau contains a $(3,1)$-horizontal strip. The $v$’s are: $(10)$, $(7)$, $(9,7)$, $(8)$. The fourth tableau contains a $(4,2,1)$-horizontal strip but not a $(4,2,2)$-horizontal strip, due to the third condition. The $v$’s are: $(12)$, $(18,12)$, $(17)$, $(14)$, $(13)$, $(16,14,12)$, $(15,13)$. The last (fifth) tableau contains a $\mu=(5,4,3)$-horizontal strip.
The $v$’s are: $(33)$, $(32)$; $(37,33)$, $(36,32)$, $(35)$, $(31)$; $(42,37,33)$, $(34,31)$; $(41,36,32)$, $(40,35,31)$, $(39,34)$, $(38)$. Before we prove that $\mu$-horizontal strips are equivalent to cut-away-shapes, we state some facts about Algorithm \[alg:2\] that we will need later on. These follow directly from the formulation of the algorithm. We see that everything happens right of the rightmost up-step that is not part of the right part of a *separation point*. Therefore *height violations* play no role here. For the $|\mu|$ largest positions in $Q$ the following holds:

- A $-l$ gets a $-(l+1)$ if and only if it is chosen as some $c_{l+1}$ for $l<j$.
- A $-j$ gets a $(j+1)$ or a $-(j+1)$ when chosen as $c_{j+1}$ in *insert row $2j+1$*. (They get $0$’s first and are initialized later.)
- A $-j$ gets either a $0$ or a $j$ when chosen as $c_{j+1}$ in *insert row $2j$*. Only if there is just one position to the right of them do they become a $0$ still within the considered part of the path.
- An $l$ can only get a negative entry if it is part of a *separation point*.

\[lem:MuhoriCutaway\] We consider an element $e$ in $\mu_i$ in row $r_e$ which gets inserted.

1. For $l\leq\lfloor r_e/2 \rfloor$, element number $l$ in $v_e$ is $e_l$.
2. If $|v_e|<\lfloor r_e/2 \rfloor$, the $e_l$ with $l>|v_e|$ are left of the part of the labeled word we consider.
3. If $|v_e|>\lfloor r_e/2 \rfloor$, element number $l$ with $l>\lfloor r_e/2 \rfloor$ is part of a *separation point* directly left of our down-steps. Each time nothing is changed to the right of it, the rightmost one of those gets a $-j$.

We prove this by induction on the row $r_e$ an element is in. For the base case we consider an element of the first row. This is the only one belonging to the $\mu$-horizontal strip and the last one of the first row. Thus it gets inserted as a $-1$. One could say that it was inserted as a $0$, and thus as part of a *separation point*, but changes into a $-j=-1$ when *initializing row $j=1$*.
We show the induction step by another induction on the elements in $v_e$. The base case is clear, as $e$ gets inserted as a $-1$. Now we consider element $l$ in $v_e$. This is a $-(l-1)$ and was in $l-1$ $v$’s before. Moreover, it is left of $e_{l-1}$. Every $-(l-1)$ that is between those was in some other $v$ in the same row, or else it would have been taken instead. Thus this $-(l-1)$ is $e_l$ and gets changed into a $-l$. Therefore the first property in question holds. The second property holds: once there is no element number $l$ left in $v_e$, we know that there is no untouched $-(l-1)$ left in the part of the path in question, so $e_l$ lies further to the left. The third property is more complicated. We point out that elements that are number $l$ in $v_e$ with $l>\lfloor r_e/2 \rfloor$ are counted by $o$. Thus they are the rightmost ones of their $\mu_m$. Due to the Yamanouchi property and Propositions \[prop:2ndPropYamanouchi\] and \[prop:MuAndAeLRSame\], we can argue that in those paths in which they lie, there is no other position so far. Another crucial point for the third property is that once elements counted by $o$ occur, they also occur in the next row if there is an element that is larger. The only way they get fewer is when we correct our separation point, that is, if a smaller element is considered or there is an empty row. We now consider elements number $j+1$ up to $|v_e|$ during the insertion process of $e$.

- An element that is number $j+1$ in $v_e$ is a $-j$ and gets a $0$ that is the rightmost $0$. This is clear if $i$ is odd. If $i$ is even, this follows as then element $j$ needs to be counted by $o$ as well, and therefore it is the only element inserted into path $j$ in the area in question. In this case it becomes a $0$ on $j$-level $1$.
- The rightmost $0$ gets a $-(j+1)$ on level $0$, if $i$ is odd, just before inserting the next row.
- An element that is number $j+2$ in $v_e$ is a $0$ before and a $j$ afterwards if $i$ is odd, due to *separate odd*. If $i$ is even, it was and is a $(j-1)$.
- An element that is number $j+m$ in $v_e$ is a $j-m+1$ (respectively $j-m+2$) if $i$ is even (respectively odd).

Now if we insert a $c_l$ into the first path that contains such a $j-m+1$ (respectively $j-m+2$), there are two possible cases. In the first case, $c_l$ is inserted right of the corresponding $j-m+1$ (respectively $j-m+2$). In this case $c_l$ is larger, and $v_c$ contains all elements our $j-m+1$ (respectively $j-m+2$) had in its $v$ as well. Thus those are all counted by $o$. We do not adjust the separation point and the procedure goes on. In the second case, $c_l$ is inserted to the left. Therefore we *adjust the separation point* and the $j-m+1$ (respectively $j-m+2$) becomes a $(j+1)-m+1$ (respectively $(j+1)-m+2$). In the same step either a $j-1$ becomes a $j$, or a $j-1$ becomes a $0$ (and thus later on a $-(j+1)$), depending on the parity of $i$. The same happens if for a row there is nothing inserted in the area in question. \[lem:muhorizontal1\] A standard Young tableau $Q$ containing a $\mu$-horizontal strip is mapped to a vacillating tableau of cut-away-shape $\mu$ by Algorithm \[alg:2\]. Lemma \[lem:MuhoriCutaway\] tells us that if an element is in $j$ different $v_e$’s, it ends up as a $-j$. As elements in $\mu_j$ are in exactly $j$ different $v_e$’s (compare with Propositions \[prop:2ndPropYamanouchi\] and \[prop:MuAndAeLRSame\]), we get cut-away-shape $\mu$. \[lem:muhorizontal2\] If a vacillating tableau has cut-away-shape $\mu$, it is mapped by Algorithm \[alg:u2\] to a standard Young tableau containing a $\mu$-horizontal strip. Let $V$ be a vacillating tableau with cut-away-shape $\mu$. Let $Q$ be its corresponding standard Young tableau and let $\tilde{\mu}$ be the largest partition such that $Q$ contains a $\tilde{\mu}$-horizontal strip.
Now, by Lemma \[lem:muhorizontal1\], $V$ also contains a $\tilde{\mu}$-horizontal strip. If $\tilde{\mu} \supseteq \mu$ we are done. If $\tilde{\mu}\subsetneq \mu$, we show that we get a contradiction. In this case let $p$ be the largest position in $Q$ that is not in the $\tilde{\mu}$-horizontal strip. We add it to the $\tilde{\mu}$-horizontal strip such that $\tilde{\mu}\subseteq\mu$. We know that the result fails one of the three conditions, so we distinguish cases.

1. If the last $\tilde{\mu}_j$ is not a horizontal strip, then $p$ is a descent, which gives a contradiction, as Algorithms \[alg:2\] and \[alg:u2\] are descent preserving and $p$ is not a descent in $V$.

2. If the word does not satisfy the second condition, the reversed reading word of the corresponding alternative orthogonal Littlewood-Richardson tableau is not Yamanouchi. This contradicts Propositions \[prop:2ndPropYamanouchi\] and \[prop:MuAndAeLRSame\] and Lemma \[lem:MuhoriCutaway\].

3. If the inequality of the third property is not satisfied, there are two possible cases.

   - It could be that a $v$ got longer (this happens exactly if $p$ is in it). For it to be too long, $p$ needs to be at least number $j+1$. However, we know that $p+1$ was inserted at least as often. Therefore $p$ is inserted on level $2$. This contradicts $p$ being part of a *separation point* as element number $j+1$; compare with Lemma \[lem:MuhoriCutaway\].

   - Or it could be that a $\tilde{v}$ in the same row got longer (then $p$ is in this $\tilde{v}$). In this case there again needs to be at least one element number $j+1$. The first path with a *separation point* belonging to a position counted by $o$ also gets level-$2$ positions, which is again a contradiction.
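The Yamanouchi condition invoked in case 2 of the proof can be checked mechanically. The following is an illustrative sketch only (the function name is ours, and it assumes the standard definition of a Yamanouchi, i.e. lattice, word over the positive integers); it is not part of Algorithms \[alg:2\] or \[alg:u2\].

```python
def is_yamanouchi(word):
    """Check the Yamanouchi (lattice) property: in every prefix of the word,
    each letter l occurs at least as often as the letter l + 1."""
    counts = {}
    for letter in word:
        counts[letter] = counts.get(letter, 0) + 1
        # the prefix read so far must never contain more (l+1)'s than l's
        if letter > 1 and counts[letter] > counts.get(letter - 1, 0):
            return False
    return True

assert is_yamanouchi([1, 1, 2, 1, 2, 3])  # a lattice word
assert not is_yamanouchi([1, 2, 2, 1])    # prefix (1, 2, 2) has more 2's than 1's
```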
Thus, by Lemmas \[lem:muhorizontal1\] and \[lem:muhorizontal2\], we have proven the following theorem:

\[theo:muhorizontal\] A standard Young tableau $Q$ contains a $\mu$-horizontal strip if and only if the corresponding vacillating tableau has cut-away-shape $\mu$.

Conjectures for Bijection B
---------------------------

\[con:ConCat\] Concatenation of standard Young tableaux whose row lengths all have the same parity corresponds in general to concatenation of vacillating tableaux of shape $\emptyset$. This is proven for $k=1$ in [@3erAlgo], and for standard Young tableaux with even row lengths in Theorem \[theo:Concat\].

Evacuation (Schützenberger involution) of a standard Young tableau corresponds to reversal of the corresponding vacillating tableau.

Acknowledgements {#acknowledgements .unnumbered}
================

The author would like to thank Martin Rubey and Stephan Pfannerer for valuable discussions and helpful comments.

[10]{} Jin Hong and Seok-Jin Kang. [*Introduction to Quantum Groups and Crystal Bases*]{}, volume 42 of [*Graduate Studies in Mathematics*]{}. American Mathematical Society, Providence, RI, 2002. Judith Jagenteufel. A Sundaram type bijection for ${\rm SO}(3)$: vacillating tableaux and pairs of SYT and orthogonal LR tableaux. , 25(3):3.50, 2018. C. Krattenthaler. Bijections between oscillating tableaux and (semi)standard tableaux via growth diagrams. , 144:277–291, 2016. Jae-Hoon Kwon. Combinatorial extension of stable branching rules for classical groups. , 370(9):6125–6152, 2018. M. Lothaire. [*Combinatorics on Words*]{}, volume 17 of [*Encyclopedia of Mathematics and its Applications*]{}. Addison-Wesley Publishing Co., Reading, Mass., 1983. A collective work by Dominique Perrin, Jean Berstel, Christian Choffrut, Robert Cori, Dominique Foata, Jean Eric Pin, Guiseppe Pirillo, Christophe Reutenauer, Marcel-P. Schützenberger, Jacques Sakarovitch and Imre Simon, With a foreword by Roger Lyndon, Edited and with a preface by Perrin. Soichi Okada.
Pieri rules for classical groups and equinumeration between generalized oscillating tableaux and semistandard tableaux. , 23(4):Paper 4.43, 27, 2016. Robert A. Proctor. A [S]{}chensted algorithm which models tensor representations of the orthogonal group. , 42(1):28–49, 1990. Thomas Walton Roby, V. . ProQuest LLC, Ann Arbor, MI, 1991. Thesis (Ph.D.)–Massachusetts Institute of Technology. Martin Rubey, Bruce E. Sagan, and Bruce W. Westbury. Descent sets for symplectic groups. , 40(1):187–208, 2014. Richard P. Stanley. [*Enumerative Combinatorics, Volume 2*]{}, volume 62 of [*Cambridge Studies in Advanced Mathematics*]{}. Cambridge University Press, Cambridge, 1999. Sheila Sundaram. . ProQuest LLC, Ann Arbor, MI, 1986. Thesis (Ph.D.)–Massachusetts Institute of Technology. Sheila Sundaram. Orthogonal tableaux and an insertion algorithm for [${\rm SO}(2n+1)$]{}. , 53(2):239–256, 1990.

Appendix {#appendix .unnumbered}
========

[Table of examples, with columns $\lambda$, $\mu$, $L$, $\tilde{L}$, $Q$ and $V$; the tableau and lattice-path drawings that filled this table are omitted here.]
--- abstract: 'For a fixed compact Riemann surface $X$, of genus at least $2$, we count the number of connected components of the moduli space of maximal Higgs bundles over $X$ for the hermitian groups ${\mathrm{PSp}}(2n,{\mathbb{R}})$, ${\mathrm{PSO}}^*(2n)$, ${\mathrm{PSO}}_0(2,n)$ and $E_6^{-14}$. Hence the same result follows for the number of connected components of the moduli space of maximal representations of $\pi_1X$ in these groups. We use the Cayley correspondence proved in [@biquard-garcia-prada-rubio:2015] as our main tool.' address: - - author: - 'Oscar García-Prada' - André Oliveira date: 13 January 2016 title: Maximal Higgs bundles for adjoint forms via Cayley correspondence --- [^1] [^2]

Introduction
============

Given a real reductive Lie group $G$, the count of the connected components of the moduli spaces ${{\mathcal M}}(G)$ of $G$-Higgs bundles over a compact Riemann surface $X$ of genus $g{\geqslant}2$ has been a subject of intense study in the last two decades. The answers are known for many families of classical Lie groups and some general results are also known [@hitchin:1992; @bradlow-garcia-prada-gothen:2005; @garcia-prada-oliveira:2016], but new phenomena are still being uncovered. In this paper we compute the number of connected components of ${{\mathcal M_{\mathrm{max}}}}(G)$ when $G$ is an adjoint form of a classical, non-compact, connected and simple real Lie group of hermitian type with finite centre (to which we will refer simply as a hermitian group). Here ${{\mathcal M_{\mathrm{max}}}}(G)$ means the subspace of ${{\mathcal M}}(G)$ of those $G$-Higgs bundles with maximal Toledo invariant, which is a natural topological invariant $\tau\in{\mathbb{Q}}$ of $G$-Higgs bundles, whenever $G$ is a hermitian group.
Semistability of such Higgs bundles imposes the bound $|\tau|{\leqslant}\operatorname{rk}(G/H)(2g-2)$, where $H\subset G$ is a maximal compact subgroup and $\operatorname{rk}(G/H)$ is the rank of the corresponding symmetric space. Thus the moduli spaces ${{\mathcal M}}(G)$ are empty if $|\tau|>\operatorname{rk}(G/H)(2g-2)$ (see [@biquard-garcia-prada-rubio:2015]) and ${{\mathcal M_{\mathrm{max}}}}(G)$ corresponds to $\tau=\operatorname{rk}(G/H)(2g-2)$ (it can equally correspond to $\tau=-\operatorname{rk}(G/H)(2g-2)$, since the moduli spaces for opposite values of the Toledo invariant are isomorphic). The case of $G={\mathrm{PU}}(p,q)$ has been studied in [@bradlow-garcia-prada-gothen:2001; @bradlow-garcia-prada-gothen:2003]. So the remaining ones are $G={\mathrm{PSp}}(2n,{\mathbb{R}})$, $G={\mathrm{PSO}}^*(2n)$ and $G={\mathrm{PSO}}_0(2,n)$, and we deal with them in this paper. The paper builds mainly on the Cayley correspondence, proved in general in [@biquard-garcia-prada-rubio:2015]. It implies that if $G$ is a classical Lie group of hermitian type which is of tube type, or an associated adjoint form, there is a real reductive Lie group $G^*$ such that the variety ${{\mathcal M_{\mathrm{max}}}}(G)$ is isomorphic to the moduli space ${{\mathcal M}}^{K^2}(G^*)$ of $K^2$-twisted $G^*$-Higgs bundles over $X$. So we use this result to transfer our study of the connectedness of ${{\mathcal M_{\mathrm{max}}}}(G)$ to that of ${{\mathcal M}}^{K^2}(G^*)$. We then take advantage of the extensive literature on this subject, which helps to compute $\pi_0({{\mathcal M}}^{K^2}(G^*))$. We follow this procedure in the cases of $G={\mathrm{PSp}}(2n,{\mathbb{R}})$ and $G={\mathrm{PSO}}^*(2n)$, using the studies carried out in [@oliveira:2011] and [@garcia-prada-oliveira:2011], respectively.
The situation is slightly different in these two cases: for ${\mathrm{PSp}}(2n,{\mathbb{R}})$ the Cayley correspondence uncovers “hidden” topological invariants of maximal ${\mathrm{PSp}}(2n,{\mathbb{R}})$-Higgs bundles, while for ${\mathrm{PSO}}^*(2n)$ it does not uncover any “hidden” topological invariant, since all of them are already “visible” on the ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}^*(2n))$ side. The case of the group ${\mathrm{PSO}}_0(2,n)$ is even easier since, contrary to the other two cases, every maximal ${\mathrm{PSO}}_0(2,n)$-Higgs bundle lifts to a maximal ${\mathrm{SO}}_0(2,n)$-Higgs bundle. So we use this information together with the results of [@bradlow-garcia-prada-gothen:2005] to count the components of ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}_0(2,n))$, without needing to use the Cayley correspondence. But of course the correspondence still holds and, through it, our result gives a new proof of the main result of [@aparicio-garcia-prada:2013] on the number of connected components of ${{\mathcal M}}({\mathrm{SO}}_0(1,m))$ for $m{\geqslant}3$ odd. We then prove the following (see Theorems \[thm:main\], \[thm:mainPSO\*(2n)\] and \[thm:mainPSO(2,n)\]): Let $|\pi_0({{\mathcal M_{\mathrm{max}}}}(G))|$ be the number of non-empty connected components of ${{\mathcal M_{\mathrm{max}}}}(G)$. If $G={\mathrm{PSp}}(2n,{\mathbb{R}})$, then

- $|\pi_0({{\mathcal M_{\mathrm{max}}}}(G))|=3$ if $n{\geqslant}3$ is odd.

- $|\pi_0({{\mathcal M_{\mathrm{max}}}}(G))|=2^{2g+1}+2$ if $n{\geqslant}4$ is even.

If $G={\mathrm{PSO}}^*(2n)$, then

- $|\pi_0({{\mathcal M_{\mathrm{max}}}}(G))|=1$ if $n{\geqslant}3$ is odd.

- $|\pi_0({{\mathcal M_{\mathrm{max}}}}(G))|=2$ if $n{\geqslant}4$ is even.

If $G={\mathrm{PSO}}_0(2,n)$, then $|\pi_0({{\mathcal M_{\mathrm{max}}}}(G))|=2$ if $n{\geqslant}4$ is even. The cases of ${\mathrm{PSp}}(2,{\mathbb{R}})$ and ${\mathrm{PSp}}(4,{\mathbb{R}})$ are special and have been known for a long time.
First, ${\mathrm{PSp}}(2,{\mathbb{R}})\cong{\mathrm{PSL}}(2,{\mathbb{R}})$, and Goldman [@goldman:1988] and Hitchin [@hitchin:1987] proved that ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2,{\mathbb{R}}))$ is connected (in fact they both proved that ${{\mathcal M}}({\mathrm{PSL}}(2,{\mathbb{R}}))$ is connected for any Toledo invariant). Regarding ${\mathrm{PSp}}(4,{\mathbb{R}})$, it is isomorphic to ${\mathrm{SO}}_0(2,3)$, so it was proved in [@bradlow-garcia-prada-gothen:2005] that ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(4,{\mathbb{R}}))$ has $2^{2g+1}+4g-5$ non-empty connected components. Our theorem completely settles the case of ${\mathrm{PSp}}(2n,{\mathbb{R}})$. The cases of ${\mathrm{PSp}}(2n,{\mathbb{R}})$ and ${\mathrm{PSO}}^*(2n)$ for $n$ odd also follow easily without directly using the Cayley correspondence, since there are no obstructions to lifting to ${\mathrm{Sp}}(2n,{\mathbb{R}})$ and ${\mathrm{SO}}^*(2n)$. Indeed, the result for ${\mathrm{PSp}}(2n,{\mathbb{R}})$ with $n$ odd was already known by Theorem 8 of [@guichard-wienhard:2010]. Furthermore, in loc. cit. it was proved that, for $n{\geqslant}4$ even, ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))$ has at least $2^{2g}+2$ connected components, and our theorem shows that there are indeed $2^{2g}$ further components. The case of ${\mathrm{PSO}}^*(2n)$ with $n=1$ is also special, since ${\mathrm{SO}}^*(2)$ is compact and isomorphic to ${\mathrm{SO}}(2)$, so its adjoint form is the trivial group. We also disregard the groups ${\mathrm{PSO}}^*(4)$ and ${\mathrm{PSO}}_0(2,2)$ because they are not simple and the corresponding hermitian symmetric spaces are not irreducible. Finally, the case of ${\mathrm{PSO}}_0(2,n)$ for $n$ odd is not included since in this case ${\mathrm{PSO}}_0(2,n)={\mathrm{SO}}_0(2,n)$, so the result is known from [@bradlow-garcia-prada-gothen:2005].
As an application of the fact that ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}_0(2,8))$ has $2$ non-empty connected components, it follows immediately from the results of [@biquard-garcia-prada-rubio:2015] that we can, for the first time, count the number of maximal components of the moduli of Higgs bundles for a real exceptional group, namely $E_6^{-14}$. The moduli space ${{\mathcal M_{\mathrm{max}}}}(E_6^{-14})$ has $2$ non-empty connected components. It is important to note that everything we just said carries over to the moduli space of reductive representations of $\pi_1X$ in $G$, due to the non-abelian Hodge correspondence [@hitchin:1987; @simpson:1988; @simpson:1992; @donaldson:1987; @corlette:1988; @garcia-prada-gothen-mundet:2008].

General results
===============

Higgs bundles for adjoint forms
-------------------------------

Since several groups will come into play, we provide the general definition of a $G$-Higgs bundle for any real reductive Lie group $G$, which we assume admits a complexification $G^{\mathbb{C}}$. Let $H\subseteq G$ be a maximal compact subgroup and $H^{\mathbb{C}}$ be its complexification. Let ${\mathfrak{g}^{\mathbb{C}}}={\mathfrak{h}^{\mathbb{C}}}\oplus{\mathfrak{m}^{\mathbb{C}}}$ be the corresponding Cartan decomposition of the complexification of the Lie algebra ${\mathfrak{g}}$ of $G$. Then ${\mathfrak{m}^{\mathbb{C}}}$ is a representation of $H^{\mathbb{C}}$ through the representation $\iota:H^{\mathbb{C}}\to{\mathrm{GL}}({\mathfrak{m}^{\mathbb{C}}})$ induced by the adjoint representation $\operatorname{Ad}:G^{\mathbb{C}}\to{\mathrm{GL}}({\mathfrak{g}^{\mathbb{C}}})$. This is sometimes called the *isotropy representation*. Given an $H^{\mathbb{C}}$-principal bundle $E$ over $X$, let $E({\mathfrak{m}^{\mathbb{C}}})=E\times_\iota{\mathfrak{m}^{\mathbb{C}}}$ be the associated vector bundle. Let $L$ be a holomorphic line bundle over $X$, and let $K$ be the canonical line bundle of $X$.
\[def:definition of Higgs bundle\] An *$L$-twisted $G$-Higgs bundle* over $X$ is a pair $(E,\varphi)$ where $E$ is a holomorphic $H^{\mathbb{C}}$-principal bundle over $X$ and $\varphi$ is a holomorphic section of $E({\mathfrak{m}^{\mathbb{C}}})\otimes L$. The section $\varphi$ is called the *Higgs field*. If $L\cong K$, we simply say that $(E,\varphi)$ is a *$G$-Higgs bundle*. The general notion of (semi,poly)stability of $L$-twisted $G$-Higgs bundles, given in Definition 2.9 of [@garcia-prada-gothen-mundet:2008], is needed to define the corresponding moduli spaces ${{\mathcal M}}^L(G)$ of polystable $L$-twisted $G$-Higgs bundles. We shall not need the precise notion of (semi,poly)stability here, so we do not state it. It is however important to notice that if we have an $L$-twisted $G$-Higgs bundle $(E,\varphi)$, then the relevant subobjects to consider to check its (semi,poly)stability arise from reductions of the structure group of $E$ to parabolic subgroups $P\subset H^{\mathbb{C}}$ and from antidominant characters $\chi:{\mathfrak{p}}\to{\mathbb{C}}$ of ${\mathfrak{p}}$, the Lie algebra of $P$, which are compatible in a certain way with the Higgs field $\varphi$. We refer to [@garcia-prada-gothen-mundet:2008] for the details. Suppose $G$ is a real, connected, reductive Lie group, with $H$ as a maximal compact subgroup, and let $Z(G)$ denote its centre. Let $\hat G$ be a normal subgroup of $G$ such that $\hat G\subset Z(G)\cap H$. Then $\hat G\subset Z(H^{\mathbb{C}})\subset H^{\mathbb{C}}$. Consider the quotient group $G/\hat G$.
An $L$-twisted $G$-Higgs bundle $(\tilde E,\tilde\varphi)$ is mapped to an $L$-twisted $G/\hat G$-Higgs bundle by $$\label{eq:GtoG/Z(G)} (\tilde E,\tilde\varphi)\mapsto (E,\varphi)$$ where $E$ is the $H^{\mathbb{C}}/\hat G$-bundle associated to $\tilde E$ via $H^{\mathbb{C}}\to H^{\mathbb{C}}/\hat G$ and where $\varphi=\tilde\varphi$ (this makes sense because $\tilde E({\mathfrak{m}^{\mathbb{C}}})=E({\mathfrak{m}^{\mathbb{C}}})$, since $\hat G\subset Z(G)$ acts trivially on ${\mathfrak{m}^{\mathbb{C}}}$ via the isotropy representation). \[prop:2.5\] An $L$-twisted $G$-Higgs bundle is polystable if and only if the corresponding $L$-twisted $G/\hat G$-Higgs bundle under \[eq:GtoG/Z(G)\] is polystable. The surjective map $H^{\mathbb{C}}\to H^{\mathbb{C}}/\hat G$ gives a one-to-one correspondence between parabolic subgroups of $H^{\mathbb{C}}$ and of $H^{\mathbb{C}}/\hat G$, given by $P\mapsto P/\hat G$ (recall that $\hat G\subset Z(H^{\mathbb{C}})$, hence $\hat G\subset P$). Moreover, the reductions of an $H^{\mathbb{C}}$-bundle to a parabolic subgroup $P$ are the same as those of the associated $H^{\mathbb{C}}/\hat G$-bundle to $P/\hat G$. This says that the subobjects to consider in both cases to check polystability are the same, hence the result follows. Hence we have a morphism $$\label{eq:morphismGtoG/Z} {{\mathcal M}}^L(G)\to{{\mathcal M}}^L(G/\hat G)$$ between the moduli spaces which, in general, is neither injective nor surjective. If $G$ is semisimple with finite centre, then everything above applies with $\hat G=Z(G)$, giving the adjoint form $G/Z(G)$. The moduli space of $G$-Higgs bundles on $X$ will be denoted simply by ${{\mathcal M}}(G)$.

Hermitian type groups, Toledo invariant and Milnor-Wood inequality
------------------------------------------------------------------

If we consider only the moduli of those $G$-Higgs bundles with a fixed topological type $c$, we denote the corresponding moduli space by ${{\mathcal M}}_c(G)$.
When $G$ is connected, the possible values of $c$ are indexed by $\pi_1(G)$. Of course we have a disjoint union ${{\mathcal M}}(G)=\bigsqcup_c{{\mathcal M}}_c(G)$. Note that each ${{\mathcal M}}_c(G)$ is a union of connected components. Suppose that $G$ is a *hermitian group*. By this we mean a non-compact, real, connected, simple Lie group, with finite centre, of hermitian type. Let $H$ be a maximal compact subgroup. The *hermitian type* condition on $G$ means, by definition, that $G/H$ is a hermitian symmetric space (of non-compact type), so it admits a complex structure. The centre of $H$ is continuous, thus $\pi_1(G)=\pi_1(H)$ has a unique factor isomorphic to the integers ${\mathbb{Z}}$. So, for such $G$, the topological type of a $G$-Higgs bundle determines a unique integer $d$. For hermitian groups, $G$-Higgs bundles also have a topological invariant given by a rational number $\tau$, called the *Toledo invariant*. It can be defined by considering a special character, the Toledo character, of the complexification of the Lie algebra of $H$: $\chi_T:{\mathfrak{h}^{\mathbb{C}}}\to{\mathbb{C}}$. There is a non-zero integer $q$ such that $\chi_T^q$ lifts to a character $\tilde\chi_T^q:H^{\mathbb{C}}\to{\mathbb{C}}^*$, and the Toledo invariant $\tau\in{\mathbb{Q}}$ of a $G$-Higgs bundle is defined as $1/q$ times the degree of the line bundle associated to $\tilde\chi_T^q$. See [@biquard-garcia-prada-rubio:2015] for details. Given a $G$-Higgs bundle $(E,\varphi)$, its Toledo invariant $\tau(E,\varphi)\in{\mathbb{Q}}$ and its integer invariant $d(E,\varphi)\in{\mathbb{Z}}$ defined above are rational multiples of each other, with a ratio independent of $(E,\varphi)$. Hence $\tau$ and $d$ are essentially the same topological invariant. There is a bound for $\tau$, above which the moduli spaces are empty, since there are no semistable $G$-Higgs bundles. Precisely, we have the following result from [@biquard-garcia-prada-rubio:2015].
\[thm:MW\] Let $(E,\varphi)$ be a semistable $G$-Higgs bundle. Then its Toledo invariant $\tau(E,\varphi)$ satisfies a Milnor-Wood type inequality: $$|\tau(E,\varphi) |{\leqslant}\operatorname{rk}(G/H)(2g-2),$$ where $\operatorname{rk}(G/H)$ denotes the rank of the symmetric space $G/H$. This bound for $\tau$ yields a corresponding bound for the integer $d$. From the Cartan decomposition ${\mathfrak{g}}={\mathfrak{h}}\oplus{\mathfrak{m}}$, we see that ${\mathfrak{m}}$ is the tangent space to $G/H$ at the point $[H]$. The almost complex structure on ${\mathfrak{m}^{\mathbb{C}}}$ yields an $H^{\mathbb{C}}$-invariant decomposition ${\mathfrak{m}^{\mathbb{C}}}={\mathfrak{m}}^+ \oplus{\mathfrak{m}}^-$ into the $\pm\sqrt{-1}$-eigenspaces. For a $G$-Higgs bundle $(E,\varphi)$ over $X$, the decomposition of ${\mathfrak{m}^{\mathbb{C}}}$ yields the bundle decomposition $E({\mathfrak{m}^{\mathbb{C}}})=E({\mathfrak{m}}^+)\oplus E({\mathfrak{m}}^-)$, thus the Higgs field decomposes as $\varphi=(\beta,\gamma) \in H^0(E({\mathfrak{m}}^+)\otimes K)\oplus H^0(E({\mathfrak{m}}^-)\otimes K)$. In fact, [@biquard-garcia-prada-rubio:2015 Theorem 1.2] provides a more refined bound for $\tau$, in terms of the ranks of the sections $\beta$ and $\gamma$. However, for our purposes, the one given above suffices. \[rmk:MW-adjoint\] The Milnor-Wood inequality for $G/Z(G)$-Higgs bundles is the same as for $G$-Higgs bundles, since the associated symmetric spaces are the same. Let ${{\mathcal M_{\mathrm{max}}}}(G)$ denote the subspace of ${{\mathcal M}}(G)$ consisting of $G$-Higgs bundles with maximal Toledo invariant (hence maximal $|d|$). It is a particularly interesting subspace in the sense that special phenomena occur on it. These phenomena differ depending on whether or not the group is of tube type. Indeed, the hermitian type groups divide into two families: tube type and non-tube type.
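As a worked instance, recorded here for orientation: the ranks of the symmetric spaces for the groups studied in this paper are classical ($n$ for ${\mathrm{Sp}}(2n,{\mathbb{R}})/{\mathrm{U}}(n)$, $\lfloor n/2\rfloor$ for ${\mathrm{SO}}^*(2n)/{\mathrm{U}}(n)$ and $2$ for ${\mathrm{SO}}_0(2,n)$ with $n{\geqslant}2$), and by Remark \[rmk:MW-adjoint\] the same ranks apply to the adjoint forms, so the bound of Theorem \[thm:MW\] reads explicitly
$$|\tau|{\leqslant}\begin{cases} n(2g-2) & \text{for } G={\mathrm{PSp}}(2n,{\mathbb{R}}),\\ \lfloor n/2\rfloor(2g-2) & \text{for } G={\mathrm{PSO}}^*(2n),\\ 2(2g-2) & \text{for } G={\mathrm{PSO}}_0(2,n),\ n{\geqslant}2. \end{cases}$$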
The symmetric space $G/H$ can be geometrically realised as a bounded symmetric domain in ${\mathfrak{m}}^+$, through the Harish-Chandra embedding. The group $G$ is said to be of *tube type* if the Shilov boundary of this bounded domain is a symmetric space of compact type. If $G$ is *not* of tube type, then every polystable $G$-Higgs bundle with maximal Toledo invariant is in fact not stable but strictly polystable, and thus reduces to a certain subgroup of $G$. This rigidity phenomenon imposes strong conditions on the geometric structure of ${{\mathcal M_{\mathrm{max}}}}(G)$. See [@bradlow-garcia-prada-gothen:2005 Theorem 4.9] and [@biquard-garcia-prada-rubio:2015 Theorem 1.4]. Our main interest in this paper is in hermitian groups of tube type. For these, there is also a certain rigidity phenomenon on ${{\mathcal M_{\mathrm{max}}}}(G)$, known as the *Cayley correspondence*. To briefly explain it, recall that for such $G$, the Shilov boundary of the embedding of $G/H$ in ${\mathfrak{m}}^+$ is a symmetric space of compact type of the form $H/H'$. The bounded domain itself is biholomorphic to a ‘tube’ over the symmetric cone $G^*/H'$, where $G^*/H'$ is the non-compact dual symmetric space of $H/H'$. Of course, $H'$ is a maximal compact subgroup of $G^*$ and the Cartan decomposition of ${\mathfrak{g}}^*$, the Lie algebra of $G^*$, is ${\mathfrak{g}}^*={\mathfrak{h}}'\oplus{\mathfrak{m}}$. We refer to $G^*$ as the *Cayley partner* of $G$. With this notation, the Cayley correspondence states the following. \[thm:Cayleycorresp\] Suppose $G$ is a hermitian group of tube type and assume that it is either a classical or an adjoint group. Then there is an isomorphism of complex algebraic varieties $${{\mathcal M_{\mathrm{max}}}}(G)\xrightarrow{\cong}{{\mathcal M}}^{K^2}(G^*).$$ The statement of Theorem 1.3 of [@biquard-garcia-prada-rubio:2015] is more general, since it also applies to exceptional groups.
Furthermore, there is also a statement for other groups, namely coverings of classical or exceptional groups, under a certain topological constraint. Since we do not deal with those cases here, the above statement of the theorem is enough for our purposes. Note that in the statement of Theorem \[thm:Cayleycorresp\], when we write ${{\mathcal M}}^{K^2}(G^*)$ we are *not* fixing any topological invariant of the $K^2$-twisted $G^*$-Higgs bundles. If we have a hermitian group of tube type, then the Cayley partner of its adjoint form is the obvious one, as the next result shows. \[prop:Cayleypartneradjoint\] Let $G$ be a hermitian group of tube type and $G^*$ be its Cayley partner. Then the Cayley partner of $G/Z(G)$ is $G^*/Z(G)$. The group $G^*$ is completely determined by ${\mathfrak{m}}$ and by the group $H'\subset H$, which is such that $H'^{\mathbb{C}}\subset H^{\mathbb{C}}$ is the stabiliser subgroup of a regular element in ${\mathfrak{m}}^+$ (or ${\mathfrak{m}}^-$). The definition of a regular element is given in Definition 2.7 and Proposition 2.9 of [@biquard-garcia-prada-rubio:2015]. The maximal compact subgroup of $G/Z(G)$ is $H/Z(G)$. Moreover, $Z(G)\subset H^{\mathbb{C}}$ acts trivially on ${\mathfrak{m}^{\mathbb{C}}}$, hence on ${\mathfrak{m}}^+$, so $Z(G)\subset H'\subset G^*$ and also $Z(G)\subset H'^{\mathbb{C}}$. Now, the stabiliser in $H^{\mathbb{C}}/Z(G)$ of a regular element of ${\mathfrak{m}}^+$ is exactly $H'^{\mathbb{C}}/Z(G)$. So the Cayley partner of $G/Z(G)$ is the group with maximal compact subgroup $H'/Z(G)$ and whose Cartan decomposition is ${\mathfrak{g}}^*={\mathfrak{h}}'\oplus{\mathfrak{m}}$, that is, $G^*/Z(G)$.
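As a standard example (recorded here for orientation): the Cayley partner of ${\mathrm{Sp}}(2n,{\mathbb{R}})$ is ${\mathrm{GL}}(n,{\mathbb{R}})$, with maximal compact subgroup ${\mathrm{O}}(n)$, so the proposition combined with Theorem \[thm:Cayleycorresp\] yields
$${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))\cong{{\mathcal M}}^{K^2}\big({\mathrm{GL}}(n,{\mathbb{R}})/\{\pm I_n\}\big).$$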
Hitchin function on $K^2$-twisted $G$-Higgs bundles {#subsec:Hit-func}
---------------------------------------------------

Given any real reductive Lie group $G$, and for a fixed topological type $c$, the standard method to identify and count connected components of ${{\mathcal M}}_c(G)$ relies on the study of the *Hitchin function* $f:{{\mathcal M}}_c(G)\to{\mathbb{R}}_+$, defined by $$f(E,\varphi)=\|\varphi\|_{L^2}^2=\int_{X}B(\varphi,\tau_h(\varphi))\omega.$$ Here $\omega$ is the volume form, $B$ is a non-degenerate quadratic form on ${\mathfrak{g}}$, extending the Killing form on the derived subalgebra, and $\tau_h:\Omega^{1,0}(X,E({\mathfrak{m}^{\mathbb{C}}}))\to \Omega^{0,1}(X,E({\mathfrak{m}^{\mathbb{C}}}))$ is the involution given by combining complex conjugation on complex $1$-forms with the compact conjugation on ${\mathfrak{g}^{\mathbb{C}}}$ which determines its compact form. The map $\tau_h$ is given fibrewise by the metric $h$ solving the Hitchin equations, hence the metric which provides the Hitchin-Kobayashi correspondence between polystable $G$-Higgs bundles and solutions to the $G$-Hitchin equations. See [@garcia-prada-gothen-mundet:2008] for details. The essential feature of this function is that it is proper (and bounded below): from this property, the identification of connected components basically reduces to the identification of connected components of the subvariety of ${{\mathcal M}}_c(G)$ of local minima of $f$. Now, there are general $L$-twisted $G$-Hitchin equations, for any line bundle $L$, and an associated Hitchin-Kobayashi correspondence between polystable $L$-twisted $G$-Higgs bundles and solutions to the $L$-twisted $G$-Hitchin equations, proved in [@garcia-prada-gothen-mundet:2008]. Hence we can define the Hitchin function $f:{{\mathcal M}}^{K^2}_c(G)\to{\mathbb{R}}_+$ by precisely the same formula.
Moreover, the Uhlenbeck weak compactness theorem still applies, just as in [@hitchin:1987 Proposition 7.1], to prove the following. \[prop:Hitchfunctionalproper\] For any real reductive Lie group $G$ and any topological type $c$, the Hitchin function $f:{{\mathcal M}}^{K^2}_c(G)\to{\mathbb{R}}_+$ is proper. This proposition is in fact valid for any $L$-twisting, and not just $K^2$.

Higgs bundles for ${\mathrm{PSp}}(2n,{\mathbb{R}})$
===================================================

Definitions and topological type
--------------------------------

We now start with our first case of study: Higgs bundles for ${\mathrm{PSp}}(2n,{\mathbb{R}})$. The real projective symplectic group ${\mathrm{PSp}}(2n,{\mathbb{R}})$ is the adjoint form of the group ${\mathrm{Sp}}(2n,{\mathbb{R}})$ of automorphisms of ${\mathbb{R}}^{2n}$ preserving a symplectic form: $${\mathrm{PSp}}(2n,{\mathbb{R}})={\mathrm{Sp}}(2n,{\mathbb{R}})/\{\pm I_{2n}\}={\mathrm{Sp}}(2n,{\mathbb{R}})/{\mathbb{Z}}_2.$$ It is a real, semisimple, connected Lie group. It is a split real form of ${\mathrm{PSp}}(2n,{\mathbb{C}})$, but it is also a group of hermitian type, like ${\mathrm{Sp}}(2n,{\mathbb{R}})$, because its maximal compact subgroup ${\mathrm{U}}(n)/{\mathbb{Z}}_2$ has a continuous centre ${\mathrm{U}}(1)/{\mathbb{Z}}_2$, homeomorphic to the circle ${\mathrm{U}}(1)$. Although our main interest is for now on ${\mathrm{PSp}}(2n,{\mathbb{R}})$-Higgs bundles, we shall also need the related notion of ${\mathrm{Sp}}(2n,{\mathbb{R}})$-Higgs bundles. We now define these, following the general Definition \[def:definition of Higgs bundle\]. So an *${\mathrm{Sp}}(2n,{\mathbb{R}})$-Higgs bundle* over $X$ is a pair $(\tilde E,\tilde\varphi)$ where $\tilde E$ is a holomorphic ${\mathrm{GL}}(n,{\mathbb{C}})$-principal bundle on $X$ and $\tilde\varphi$ is a section of $\tilde E({\mathfrak{m}^{\mathbb{C}}})\otimes K$.
In this case, $\tilde E({\mathfrak{m}^{\mathbb{C}}})$ is the vector bundle associated to $\tilde E$ and to the isotropy representation ${\mathrm{GL}}(n,{\mathbb{C}})\to {\mathrm{GL}}({\mathfrak{m}^{\mathbb{C}}})$, with ${\mathfrak{m}^{\mathbb{C}}}=S^2{\mathbb{V}}\oplus S^2{\mathbb{V}}^*$ and ${\mathbb{V}}$ the standard ${\mathrm{GL}}(n,{\mathbb{C}})$-representation on ${\mathbb{C}}^n$. A *${\mathrm{PSp}}(2n,{\mathbb{R}})$-Higgs bundle* over $X$ is a pair $(E,\varphi)$ where $E$ is a holomorphic ${\mathrm{GL}}(n,{\mathbb{C}})/{\mathbb{Z}}_2$-principal bundle on $X$ and $\varphi$ is a holomorphic section of $E({\mathfrak{m}^{\mathbb{C}}})\otimes K$; this makes sense because ${\mathbb{Z}}_2=\{\pm I_n\}$ acts trivially on ${\mathfrak{m}^{\mathbb{C}}}$, so the isotropy representation factors through ${\mathrm{GL}}(n,{\mathbb{C}})/{\mathbb{Z}}_2$. We can define an ${\mathrm{Sp}}(2n,{\mathbb{R}})$-Higgs bundle over $X$ in terms of vector bundles as a triple $(V,\beta,\gamma)$ where $V$ is a holomorphic vector bundle on $X$, $\beta$ is a section of $S^2V\otimes K$ and $\gamma$ a section of $S^2V^*\otimes K$. Comparing with the pair $(\tilde E,\tilde \varphi)$ of the above definition, $V$ is the vector bundle canonically associated to $\tilde E$ and $\tilde\varphi=(\beta,\gamma)$. In contrast, in a ${\mathrm{PSp}}(2n,{\mathbb{R}})$-Higgs bundle $(E,\varphi)$, the principal bundle $E$ has structure group ${\mathrm{GL}}(n,{\mathbb{C}})/{\mathbb{Z}}_2$, hence there is no standard way to define Higgs bundles for ${\mathrm{PSp}}(2n,{\mathbb{R}})$ in terms of vector bundles. An ${\mathrm{Sp}}(2n,{\mathbb{R}})$-Higgs bundle is mapped to a ${\mathrm{PSp}}(2n,{\mathbb{R}})$-Higgs bundle as in \[eq:GtoG/Z(G)\].

Topological type of ${\mathrm{PSp}}(2n,{\mathbb{R}})$-Higgs bundles
-------------------------------------------------------------------

The adjoint group ${\mathrm{PSp}}(2n,{\mathbb{R}})$ has ${\mathrm{U}}(n)/{\mathbb{Z}}_2={\mathrm{U}}(n)/\pm I_n$ as a maximal compact subgroup.
Its fundamental group fits in the exact sequence $$\label{eq:pi1U(n)/Z2 extension} 1\to\pi_1{\mathrm{U}}(n)\to\pi_1({\mathrm{U}}(n)/{\mathbb{Z}}_2)\to{\mathbb{Z}}_2\to 0.$$ The next result is basic and well-known, but since we did not find any proof in the literature, we include it here. \[prop:pi1U(n)/Z2\] The fundamental group of ${\mathrm{U}}(n)/{\mathbb{Z}}_2$ is $$\pi_1({\mathrm{U}}(n)/{\mathbb{Z}}_2)\cong\begin{cases} {\mathbb{Z}}\times{\mathbb{Z}}_2 &\ n \text{ even}\\ {\mathbb{Z}}&\ n \text{ odd}\\ \end{cases}$$ More precisely, when $n$ is even, is the trivial extension $$\label{eq:pi1U(n)/Z2 extension n even} 1\to {\mathbb{Z}}\to{\mathbb{Z}}\times{\mathbb{Z}}_2\to{\mathbb{Z}}_2\to 0,$$ whereas when $n$ is odd, the inclusion $\pi_1{\mathrm{U}}(n)\hookrightarrow \pi_1({\mathrm{U}}(n)/{\mathbb{Z}}_2)$ is multiplication by $2$, $$\label{eq:pi1U(n)/Z2 extension n odd} 1\to {\mathbb{Z}}\xrightarrow{\times 2}{\mathbb{Z}}\to{\mathbb{Z}}_2\to 0.$$ In any case, $\pi_1({\mathrm{U}}(n))\cong{\mathbb{Z}}$ is a subgroup of index $2$. Consider the universal cover of ${\mathrm{U}}(n)$ (which of course is the same as the universal cover of ${\mathrm{U}}(n)/{\mathbb{Z}}_2$). As a manifold this is ${\mathrm{SU}}(n)\times{\mathbb{R}}$ but as a Lie group it is the semi-direct product ${\mathrm{SU}}(n)\rtimes{\mathbb{R}}$ corresponding to the ${\mathbb{R}}$-action on ${\mathrm{SU}}(n)$ given by $A\cdot t=\left(\begin{smallmatrix}e^{-2\pi i t} & 0 \\ 0 & I_{n-1}\end{smallmatrix}\right)A\left(\begin{smallmatrix}e^{2\pi i t} & 0 \\ 0 & I_{n-1}\end{smallmatrix}\right)$; see [@aguilar-socolovsky:2000] for details. 
The covering map is $$p:{\mathrm{SU}}(n)\rtimes{\mathbb{R}}\to{\mathrm{U}}(n),\ p(A,t)=\left(\begin{smallmatrix}e^{2\pi i t} & 0 \\ 0 & I_{n-1}\end{smallmatrix}\right)A,$$ thus $\pi_1({\mathrm{U}}(n)/{\mathbb{Z}}_2)\cong p^{-1}(\pm I_n)$ is the abelian group generated by $(I_n,1)$ and $(-I_n,0)$ when $n$ is even, and by $(X,1/2)$, with $X=\left(\begin{smallmatrix}1 & 0 \\ 0 & -I_{n-1}\end{smallmatrix}\right)$, when $n$ is odd. This proves that $\pi_1({\mathrm{U}}(n)/{\mathbb{Z}}_2)\cong{\mathbb{Z}}\times{\mathbb{Z}}_2$ if $n$ is even and $\pi_1({\mathrm{U}}(n)/{\mathbb{Z}}_2)\cong{\mathbb{Z}}$ if $n$ is odd. The proof of and follows because $\pi_1{\mathrm{U}}(n)\cong\ker(p)$ is the cyclic group generated by $(I_n,1)$ independently of the parity of $n$. So ${\mathrm{PSp}}(2n,{\mathbb{R}})$-Higgs bundles over $X$ are topologically classified by $$\label{eq:topinv(d,w),d} (d,w)\in{\mathbb{Z}}\times {\mathbb{Z}}_2\quad\text{ if }n\text{ even}\hspace{1cm}\text{and}\hspace{1cm}d\in{\mathbb{Z}}\quad\text{ if }n\text{ odd}.$$ A ${\mathrm{PSp}}(2n,{\mathbb{R}})$-Higgs bundle lifts to a Higgs bundle for its universal cover $\widetilde{\mathrm{PSp}}(2n,{\mathbb{R}})$ precisely when its topological type is trivial. It is however more useful to understand the lifting to the $2$-cover ${\mathrm{Sp}}(2n,{\mathbb{R}})$, and the obstruction to the existence of such lifting, via , can easily be read off from the topological invariants . \[prop:obstr-lift-PSptoSp\] Let $(E,\Phi)$ be a ${\mathrm{PSp}}(2n,{\mathbb{R}})$-Higgs bundle. - If $n$ is even and the topological type of $(E,\varphi)$ is given by $(d(E),w(E))\in{\mathbb{Z}}\times{\mathbb{Z}}_2$, then it lifts to an ${\mathrm{Sp}}(2n,{\mathbb{R}})$-Higgs bundle if and only if $w(E)=0$. Moreover, if $w(E)=0$ then any two lifts differ by a $2$-torsion line bundle on $X$. 
- If $n$ is odd and the topological type of $(E,\varphi)$ is given by $d(E)\in{\mathbb{Z}}$, then it lifts to an ${\mathrm{Sp}}(2n,{\mathbb{R}})$-Higgs bundle if and only if $d(E)$ is even. Moreover, if $d(E)$ is even then any two lifts differ by a $2$-torsion line bundle on $X$. Since the Higgs field is unchanged in , the only obstruction to lifting $(E,\Phi)$ is the obstruction to lifting the ${\mathrm{GL}}(n,{\mathbb{C}})/{\mathbb{Z}}_2$-bundle $E$ to a ${\mathrm{GL}}(n,{\mathbb{C}})$-principal bundle. Of course, ${\mathrm{U}}(n)/{\mathbb{Z}}_2$ and ${\mathrm{U}}(n)$ are maximal compact subgroups of ${\mathrm{GL}}(n,{\mathbb{C}})/{\mathbb{Z}}_2$ and ${\mathrm{GL}}(n,{\mathbb{C}})$ respectively. Let ${\mathrm{GL}}(n,{{\mathcal O}})$ and ${\mathrm{GL}}(n,{{\mathcal O}})/{\mathbb{Z}}_2$ denote the sheaves of holomorphic functions on $X$ with values in ${\mathrm{GL}}(n,{\mathbb{C}})$ and ${\mathrm{GL}}(n,{\mathbb{C}})/{\mathbb{Z}}_2$, respectively. We can see $E$ as an element of $H^1(X,{\mathrm{GL}}(n,{{\mathcal O}})/{\mathbb{Z}}_2)$ and want to lift it to an element of $H^1(X,{\mathrm{GL}}(n,{{\mathcal O}}))$. Suppose $n$ is even. Then the result follows from the following commutative diagram, using : $$\xymatrix{ H^1(X,{\mathbb{Z}}_2)\ar[r]&H^1(X,{\mathrm{GL}}(n,{{\mathcal O}}))\ar[r]\ar[d]&H^1(X,{\mathrm{GL}}(n,{{\mathcal O}})/{\mathbb{Z}}_2)\ar[r]^(.73){E\mapsto w(E)}\ar[d]^{E\mapsto(d(E),w(E))}&{\mathbb{Z}}_2\ar[r]\ar@{=}[d] &0\\ 0\ar[r]&{\mathbb{Z}}\ar[r]_{d\mapsto (d,0)}&{\mathbb{Z}}\times{\mathbb{Z}}_2\ar[r]_{(d,w)\mapsto w}&{\mathbb{Z}}_2\ar[r] &0}$$ The case $n$ odd is the same, but using .
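As an aside, the explicit generators of $p^{-1}(\pm I_n)$ appearing in the proof of Proposition \[prop:pi1U(n)/Z2\] can be sanity-checked numerically. The following Python snippet (purely illustrative, not part of the mathematical argument) verifies, for $n=2$ and $n=3$, that the claimed generators are special unitary and map to $\pm I_n$ under the covering map $p(A,t)=\left(\begin{smallmatrix}e^{2\pi i t} & 0 \\ 0 & I_{n-1}\end{smallmatrix}\right)A$.

```python
# Illustrative numerical check of the generators of p^{-1}(+-I_n).
import cmath

def diag_phase(n, t):
    """The matrix diag(e^{2*pi*i*t}, 1, ..., 1), as a list of rows."""
    D = [[(1 + 0j) if i == j else 0j for j in range(n)] for i in range(n)]
    D[0][0] = cmath.exp(2j * cmath.pi * t)
    return D

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det(A):
    # Laplace expansion along the first row; fine for tiny matrices.
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def is_pm_identity(A):
    n = len(A)
    return any(all(abs(A[i][j] - (s if i == j else 0)) < 1e-9
                   for i in range(n) for j in range(n)) for s in (1, -1))

def p(A, t):
    """Covering map p(A, t) = diag(e^{2*pi*i*t}, I_{n-1}) * A."""
    return matmul(diag_phase(len(A), t), A)

# n = 2 (even): (I_2, 1) and (-I_2, 0) lie over +-I_2, and -I_2 is in SU(2).
I2, mI2 = [[1, 0], [0, 1]], [[-1, 0], [0, -1]]
assert abs(det(mI2) - 1) < 1e-9                      # det(-I_2) = 1
assert is_pm_identity(p(I2, 1)) and is_pm_identity(p(mI2, 0))

# n = 3 (odd): det(-I_3) = -1, so -I_3 is NOT in SU(3); the generator over
# -I_3 must use t = 1/2 together with X = diag(1, -I_2) in SU(3).
mI3 = [[-1, 0, 0], [0, -1, 0], [0, 0, -1]]
X = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]
assert abs(det(mI3) + 1) < 1e-9
assert abs(det(X) - 1) < 1e-9
assert is_pm_identity(p(X, 0.5))
```

In particular the check for $n=3$ confirms why, for $n$ odd, the generator must involve $t=1/2$: $-I_n$ has determinant $-1$ and hence does not lie in ${\mathrm{SU}}(n)$.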
Maximal Toledo -------------- From we have a morphism ${{\mathcal M}}({\mathrm{Sp}}(2n,{\mathbb{R}}))\to{{\mathcal M}}({\mathrm{PSp}}(2n,{\mathbb{R}}))$, and Proposition \[prop:obstr-lift-PSptoSp\] says that ${{\mathcal M}}_d({\mathrm{Sp}}(2n,{\mathbb{R}}))$ maps onto ${{\mathcal M}}_{(d,0)}({\mathrm{PSp}}(2n,{\mathbb{R}}))$ when $n$ is even and onto ${{\mathcal M}}_{2d}({\mathrm{PSp}}(2n,{\mathbb{R}}))$ when $n$ is odd. \[prop:MW-PSp\] Let $(E,\varphi)$ be a polystable ${\mathrm{PSp}}(2n,{\mathbb{R}})$-Higgs bundle. - If $n$ is even and the topological type of $(E,\varphi)$ is $(d(E),w(E))\in{\mathbb{Z}}\times{\mathbb{Z}}_2$, then $$|d(E)|{\leqslant}n(g-1).$$ - If $n$ is odd and the topological type of $(E,\varphi)$ is $d(E)\in{\mathbb{Z}}$, then $$|d(E)|{\leqslant}n(2g-2).$$ Suppose $n$ is even. The Milnor-Wood inequality of Theorem \[thm:MW\] is independent of the torsion part of $\pi_1G$. So we can assume that $w(E)=0$ and hence that $(E,\varphi)$ lifts to a polystable ${\mathrm{Sp}}(2n,{\mathbb{R}})$-Higgs bundle $(\tilde E,\varphi)$ such that $d(\tilde E)=d(E)$. Denote the Toledo invariant for ${\mathrm{Sp}}(2n,{\mathbb{R}})$ as $\tau_{\mathrm{Sp}}$. By Theorem \[thm:MW\], $|\tau_{\mathrm{Sp}}(\tilde E,\varphi)|{\leqslant}n(2g-2)$. As $d(\tilde E)=\tau_{\mathrm{Sp}}(\tilde E,\varphi)/2$ (cf. [@biquard-garcia-prada-rubio:2015]), the result follows for $n$ even. (Note that the Toledo invariant of $(E,\varphi)$ also verifies $|\tau_{\mathrm{PSp}}(E,\varphi)|{\leqslant}n(2g-2)$, by Remark \[rmk:MW-adjoint\], so this shows that, for $n$ even, $d(E)=\tau_{\mathrm{PSp}}(E,\varphi)/2$.) Suppose now $n$ is odd. If $d(E)$ is even, $(E,\varphi)$ lifts to a polystable ${\mathrm{Sp}}(2n,{\mathbb{R}})$-Higgs bundle $(\tilde E,\tilde\varphi)$, but now with $d(\tilde E)=d(E)/2$. The same argument as above proves $|d(E)|{\leqslant}n(2g-2)$. 
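Explicitly, for $n$ odd and $d(E)$ even the chain of (in)equalities reads $$|d(E)|=2|d(\tilde E)|=|\tau_{\mathrm{Sp}}(\tilde E,\tilde\varphi)|{\leqslant}n(2g-2),$$ using $d(\tilde E)=d(E)/2$, the relation $d(\tilde E)=\tau_{\mathrm{Sp}}(\tilde E,\tilde\varphi)/2$ and Theorem \[thm:MW\].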
This also shows that $d(E)=\tau_{\mathrm{PSp}}(E,\varphi)$ for any value of $d(E)$ (possibly odd), since there is a constant rational number $q$ such that $d(E)=q\tau_{\mathrm{PSp}}(E,\varphi)$ independent of $(E,\varphi)$. So, since $|\tau_{\mathrm{PSp}}(E,\varphi)|{\leqslant}n(2g-2)$, we conclude that $|d(E)|{\leqslant}n(2g-2)$ also when $d(E)$ is odd. From now on we shall consider the subspace ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))\subset{{\mathcal M}}({\mathrm{PSp}}(2n,{\mathbb{R}}))$ with maximal positive Toledo invariant, that is $$\begin{split} &{{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))=\bigsqcup_{w\in{\mathbb{Z}}_2}{{\mathcal M}}_{(n(g-1),w)}({\mathrm{PSp}}(2n,{\mathbb{R}}))\quad\text{if }n\text{ even},\\ &{{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))={{\mathcal M}}_{n(2g-2)}({\mathrm{PSp}}(2n,{\mathbb{R}}))\quad\text{if }n\text{ odd}. \end{split}$$ The count of components of ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))$ follows immediately whenever $n$ is odd, since we know from [@garcia-prada-gothen-mundet:2013] that ${{\mathcal M_{\mathrm{max}}}}({\mathrm{Sp}}(2n,{\mathbb{R}}))$ has $3\times 2^{2g}$ non-empty connected components. These are mapped to ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))$ and Propositions \[prop:obstr-lift-PSptoSp\] and \[prop:MW-PSp\] ensure that the map ${{\mathcal M_{\mathrm{max}}}}({\mathrm{Sp}}(2n,{\mathbb{R}}))\to{{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))$ is a surjective fibration, with every fibre having $2^{2g}$ elements. Hence the $3\times 2^{2g}$ connected components collapse onto the $3$ components of ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))$. This is an alternative proof of Theorem 8 of [@guichard-wienhard:2010] for the case $n{\geqslant}3$ odd.
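The component count in the odd case can be summarised by a trivial bookkeeping check (illustrative only): the $3\times 2^{2g}$ components upstairs, grouped into fibres of $2^{2g}$ elements given by the $2$-torsion line bundles, collapse to $3$ components downstairs.

```python
# Illustrative bookkeeping for the n odd collapse: 3 * 2^(2g) components of
# M_max(Sp(2n,R)) map onto M_max(PSp(2n,R)); lifts differing by one of the
# 2^(2g) 2-torsion line bundles land in the same component downstairs.
def components_downstairs(g):
    upstairs = 3 * 2 ** (2 * g)      # components of M_max(Sp(2n,R))
    fibre = 2 ** (2 * g)             # number of 2-torsion line bundles on X
    assert upstairs % fibre == 0
    return upstairs // fibre

assert all(components_downstairs(g) == 3 for g in range(1, 10))
```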
The situation is different if $n$ is even since ${{\mathcal M_{\mathrm{max}}}}({\mathrm{Sp}}(2n,{\mathbb{R}}))\to{{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))$ is no longer surjective. Thus, *we assume $n{\geqslant}4$ is even until the end of Section \[morse quadruples\]*. In order to deal with this case, we use the Cayley correspondence. Since the Cayley partner for ${\mathrm{Sp}}(2n,{\mathbb{R}})$ is ${\mathrm{GL}}(n,{\mathbb{R}})$, Theorem \[thm:Cayleycorresp\] and Proposition \[prop:Cayleypartneradjoint\] give the following. \[thm:Cayley-corresp-PSp-GL/Z2\] The moduli spaces ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))$ and ${{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)$ are isomorphic as complex algebraic varieties. Thus we have a commutative diagram $$\label{diagram-Sp-PSp} \xymatrix{{{\mathcal M_{\mathrm{max}}}}({\mathrm{Sp}}(2n,{\mathbb{R}}))\ar[r]^(.5){\cong}\ar[d]&{{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}}))\ar[d]\\ {{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))\ar[r]^(.5){\cong}&{{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)}$$ where the vertical maps are the morphisms given by . Our goal of determining the connected components of ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))$ can then be achieved by studying the connected components of ${{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)$. This is how we shall proceed from now on. The reason we prefer to work with the latter moduli space is that it allows us to take advantage of the study done in [@oliveira:2011] for ${\mathrm{PGL}}(n,{\mathbb{R}})$-Higgs bundles, which readily adapts to our setting.
$K^2$-twisted Higgs bundles for ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$: definition and obstruction to lifting to ${\mathrm{GL}}(n,{\mathbb{R}})$
--------------------------------------------------------------------------------------------------------------------------------------------------------

Following Definition \[def:definition of Higgs bundle\], in vector bundle terms a *$K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})$-Higgs bundle* is defined as a triple $(V,Q,\varphi)$, where $(V,Q)$ is a rank $n$ holomorphic orthogonal vector bundle, and $\varphi$ is a holomorphic $K^2$-twisted endomorphism $\varphi:V\to V\otimes K^2$, symmetric with respect to $Q$. As in the case of ${\mathrm{PSp}}(2n,{\mathbb{R}})$, we cannot work out a direct definition of $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundles involving only vector bundles, since there are obstructions to lifting them to ${\mathrm{GL}}(n,{\mathbb{R}})$, because $n$ is even. So a *$K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundle* over $X$ is a pair $(E,\Phi)$ where $E$ is a holomorphic ${\mathrm{PO}}(n,{\mathbb{C}})$-principal bundle and $\Phi$ is a holomorphic section of $E({\mathfrak{m}^{\mathbb{C}}})\otimes K^2$, where $E({\mathfrak{m}^{\mathbb{C}}})$ is the vector bundle associated to $E$ and to the isotropy representation ${\mathrm{PO}}(n,{\mathbb{C}})\to{\mathrm{GL}}({\mathfrak{m}^{\mathbb{C}}})$, with ${\mathfrak{m}^{\mathbb{C}}}=S_Q^2{\mathbb{V}}$ and $({\mathbb{V}},Q)$ being the standard representation of the orthogonal group ${\mathrm{O}}(n,{\mathbb{C}})$. Again, a $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})$-Higgs bundle maps canonically to a $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundle by and this map preserves polystability.
As before, we can detect the obstruction to lifting a $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundle to ${\mathrm{GL}}(n,{\mathbb{R}})$ from the topological invariants which we now recall. Recall that $n{\geqslant}4$ is even. The topological classification of $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundles gets more complicated due to the non-connectedness of ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$. The projective orthogonal group ${\mathrm{PO}}(n)$ is a maximal compact subgroup, thus we shall use this group for the topological classification. Note that ${\mathrm{PO}}(n)$ is also a maximal compact of ${\mathrm{PGL}}(n,{\mathbb{R}})$, which was considered in [@oliveira:2011] and where all the details of the topological classification can be checked. So the topological classification of (twisted) ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$(-Higgs) bundles is the same as the one for (twisted) ${\mathrm{PGL}}(n,{\mathbb{R}})$(-Higgs) bundles. We only briefly sketch it here. There is a first invariant $$\mu_1\in H^1(X,\pi_0{\mathrm{PO}}(n))\cong({\mathbb{Z}}_2)^{2g}$$ which is the obstruction to reducing the structure group to ${\mathrm{PSO}}(n)$. Then it is important to notice that $\pi_0{\mathrm{PO}}(n)\cong{\mathbb{Z}}_2$ acts non-trivially on $$\pi_1{\mathrm{PO}}(n)=\begin{cases} {\mathbb{Z}}_2\times{\mathbb{Z}}_2 & \text{if } n=0\ \text{mod}\ 4 \\ {\mathbb{Z}}_4 & \text{if } n=2\ \text{mod}\ 4. \end{cases}$$ Precisely, the universal cover of ${\mathrm{PO}}(n)$ is ${\mathrm{Pin}}(n)$. If $p:{\mathrm{Pin}}(n)\to{\mathrm{PO}}(n)$ is the projection, then, as a set, $\pi_1{\mathrm{PO}}(n)\cong\ker(p)=\{0,1,\omega_n,-\omega_n\}$, where $\omega_n=e_1\cdots e_n$ is the oriented volume element of ${\mathrm{Pin}}(n)$ in the standard construction of this group via the Clifford algebra $\mathrm{Cl}(n)$; cf. [@lawson-michelson:1989]. 
The action of $\pi_0{\mathrm{PO}}(n)$ on $\pi_1{\mathrm{PO}}(n)$ identifies $-\omega_n$ with $\omega_n$ and fixes $0$ and $1$, so $\pi_1{\mathrm{PO}}(n)/\pi_0{\mathrm{PO}}(n)\cong\{0,1,\omega_n\}$. In [@oliveira:2011] we defined another invariant $$\mu_2\in\begin{cases} \{0,1,\omega_n\}& \text{if } \mu_1=0\\ \{0,\omega_n\}\cong{\mathbb{Z}}_2 & \text{if } \mu_1\neq 0. \end{cases}$$ The set $\{0,\omega_n\}$ is the quotient of $\{0,1,\omega_n\}$ where $0$ and $1$ are identified by a further action of ${\mathbb{Z}}_2$. It carries the group structure of ${\mathbb{Z}}_2$. The fact that the value of the invariant $\mu_2$ depends on the value of $\mu_1$ is a consequence of the non-trivial action of $\pi_0$ on $\pi_1$; see Section 3.2 of [@oliveira:2011] or, more generally, [@oliveira:2008 §2]. Hence we have the following proposition, which is a particular case of the general result [@oliveira:2008 Theorem 2.2, §2; Theorem 1.15, §3] and [@oliveira:2011 Proposition 3.1, Theorem 3.1]. Let $n{\geqslant}4$ be even. Then $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundles over $X$ are topologically classified by the invariants $(\mu_1,\mu_2)\in A$, where $$A:=\left(\{0\}\times\{0,1,\omega_n\}\right)\cup\left(\left(({\mathbb{Z}}_2)^{2g}\setminus\{0\}\right)\times{\mathbb{Z}}_2\right).$$ This gives a decomposition $$\label{eq:decomp moduli GLn/Z2 top type} {{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)=\bigsqcup_{(\mu_1,\mu_2)\in A}{{\mathcal M}}^{K^2}_{\mu_1,\mu_2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)$$ according to the $2^{2g+1}+1$ topological types. Furthermore, Proposition 4.1 of [@oliveira:2011] is also valid for ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$, showing that the spaces ${{\mathcal M}}^{K^2}_{\mu_1,\mu_2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)$ are non-empty for any choice of invariants $(\mu_1,\mu_2)\in A$.
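The count of $2^{2g+1}+1$ topological types in the proposition above is simple combinatorics: $3+2(2^{2g}-1)=2^{2g+1}+1$. As a purely illustrative check (with the values of $\mu_2$ encoded as strings), the set $A$ can be enumerated directly:

```python
from itertools import product

# Enumerate A = ({0} x {0,1,w_n}) u (((Z_2)^{2g} \ {0}) x Z_2) and compare
# with the closed form 2^(2g+1) + 1.  (Illustrative check only.)
def count_types(g):
    Z2_2g = list(product([0, 1], repeat=2 * g))       # the group (Z_2)^{2g}
    zero = (0,) * (2 * g)
    A = {(zero, m2) for m2 in ("0", "1", "w")}        # mu_1 = 0: three values
    A |= {(m1, m2) for m1 in Z2_2g if m1 != zero      # mu_1 != 0: two values
          for m2 in ("0", "w")}
    return len(A)

assert all(count_types(g) == 2 ** (2 * g + 1) + 1 for g in range(1, 5))
```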
The interpretation of the topological invariants as obstructions to lifting is the same as in the case of ${\mathrm{PGL}}(n,{\mathbb{R}})$; see Proposition 3.2 of [@oliveira:2011]. \[prop:obstruction to lifting to GL in terms of invariants\] Let $n{\geqslant}4$ be even. Then a $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundle lifts to a ${\mathrm{GL}}(n,{\mathbb{R}})$-Higgs bundle if and only if either $\mu_1=0$ and $\mu_2\in\{0,1\}$ or $\mu_1\neq 0$ and $\mu_2=0$. Moreover, any two lifts differ by a $2$-torsion line bundle. We thus see that among the $2^{2g+1}+1$ topological types of $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundles, there are $2^{2g}+1$ for which the Higgs bundles lift to ${\mathrm{GL}}(n,{\mathbb{R}})$ and $2^{2g}$ for which such lift does not exist. In order to deal with the ones that do not lift, and since we prefer to naturally work with vector bundles, we consider a group which is similar to ${\mathrm{GL}}(n,{\mathbb{R}})$ but such that its maximal compact has a continuous centre.

$K^2$-twisted Higgs bundles for ${\mathrm{EGL}}(n,{\mathbb{R}})$
----------------------------------------------------------------

Consider the “enhanced” general linear group ${\mathrm{EGL}}(n,{\mathbb{R}})$, defined as $${\mathrm{EGL}}(n,{\mathbb{R}})={\mathrm{GL}}(n,{\mathbb{R}})\times_{{\mathbb{Z}}_2}{\mathrm{U}}(1)=({\mathrm{GL}}(n,{\mathbb{R}})\times{\mathrm{U}}(1))/{\mathbb{Z}}_2,$$ where ${\mathbb{Z}}_2$ is the normal subgroup of ${\mathrm{GL}}(n,{\mathbb{R}})\times{\mathrm{U}}(1)$ generated by $(-I_n,-1)$.
From Proposition 5.2 of [@oliveira:2011], $K^2$-twisted ${\mathrm{EGL}}(n,{\mathbb{R}})$-Higgs bundles can be defined in terms of vector bundles as quadruples $(V,L,Q,\varphi)$ on $X$, where $V$ is a rank $n$ vector bundle, $L$ a line bundle, $Q$ is a nowhere degenerate symmetric $L$-valued quadratic form on $V$ and $\varphi\in H^0(X,S^2_QV\otimes K^2)$, that is $\varphi:V\to V\otimes K^2$ is symmetric with respect to $Q$. This next result is basically proved in Propositions 5.1 and 5.3 of [@oliveira:2011]. The proof in loc. cit. is for ${\mathrm{PGL}}(n,{\mathbb{R}})$-Higgs bundles, but precisely the same arguments give the proof in our case. \[prop:fixdeglift\] Every $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundle $(E,\Phi)$ lifts to a $K^2$-twisted ${\mathrm{EGL}}(n,{\mathbb{R}})$-Higgs bundle $(V,L,Q,\varphi)$. The parity of $\deg(L)$ is fixed in all the lifts of $(E,\Phi)$. Moreover, it is possible to choose the lift to a $K^2$-twisted ${\mathrm{EGL}}(n,{\mathbb{R}})$-Higgs bundle $(V,L,Q,\varphi)$ such that either $\deg(L)=0$ or $\deg(L)=1$. Note that a $K^2$-twisted ${\mathrm{EGL}}(n,{\mathbb{R}})$-Higgs bundle with $L\cong{{\mathcal O}}$ is a $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})$-Higgs bundle. \[cor:lift to GL\] A $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundle lifts to a $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})$-Higgs bundle if and only if it can be lifted to a $K^2$-twisted ${\mathrm{EGL}}(n,{\mathbb{R}})$-Higgs bundle $(V,L,Q,\varphi)$ with $\deg(L)$ even. If a $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundle lifts to a $K^2$-twisted ${\mathrm{EGL}}(n,{\mathbb{R}})$-Higgs bundle $(V,L,Q,\varphi)$ with $\deg(L)$ odd, then it is clear by Proposition \[prop:fixdeglift\] that we can never lift it to a $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})$-Higgs bundle. Suppose now that it can be lifted to $(V,L,Q,\varphi)$ with $\deg(L)$ even.
Again by the previous proposition we can assume that $\deg(L)=0$. By taking a square root $F$ of $L^{-1}$, we get $(V\otimes F,L\otimes F^2,Q\otimes\operatorname{Id}_{F^2},\varphi\otimes\operatorname{Id}_F)\cong(V\otimes F,\mathcal{O},Q\otimes\operatorname{Id}_{F^2},\varphi\otimes\operatorname{Id}_F)$ which is again a lift and now a $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})$-Higgs bundle. The upshot of Proposition \[prop:fixdeglift\] is that we can work with $K^2$-twisted ${\mathrm{EGL}}(n,{\mathbb{R}})$-Higgs bundles instead of $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundles with the advantage that in the former case the objects $(V,L,Q,\varphi)$ involve holomorphic vector bundles. That is what we will do from now on. From , there is a morphism $$\label{eq:surjectivemor M(EGL)toM(GL/Z2)} {{\mathcal M}}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))\to{{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2),$$ which is surjective by Proposition \[prop:fixdeglift\]. For $i=0,1$, let $$\label{eq:degL=i} {{\mathcal M}}_i^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))\subset{{\mathcal M}}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$$ be the subspace of ${{\mathcal M}}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ consisting of quadruples $(V,L,Q,\varphi)$, where $\deg(L)=i$. 
Proposition \[prop:fixdeglift\] also tells us that we can write $$\label{eq:decompoM(GL/Z2)lift-notlift} {{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)={{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_0\sqcup{{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_1$$ such that restricts to two surjective morphisms $$\label{eq:pi} p_i:{{\mathcal M}}_i^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))\to{{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_i,\qquad i=0,1.$$

Topological classification of $K^2$-twisted ${\mathrm{EGL}}(n,{\mathbb{R}})$-Higgs bundles
------------------------------------------------------------------------------------------

The enhanced orthogonal group ${\mathrm{EO}}(n)={\mathrm{O}}(n)\times_{{\mathbb{Z}}_2}{\mathrm{U}}(1)$ is a maximal compact of ${\mathrm{EGL}}(n,{\mathbb{R}})$ and also of its complexification ${\mathrm{EO}}(n,{\mathbb{C}})={\mathrm{O}}(n,{\mathbb{C}})\times_{{\mathbb{Z}}_2}{\mathbb{C}}^*$. So the topological classification of ${\mathrm{EGL}}(n,{\mathbb{R}})$-Higgs bundles over $X$ is the same as that of ${\mathrm{EO}}(n,{\mathbb{C}})$-principal bundles, which are just twisted orthogonal bundles $(V,L,Q)$, that is ${\mathrm{EGL}}(n,{\mathbb{R}})$-Higgs bundles with vanishing Higgs field. For such objects, we have that the determinant of $V$ verifies $(\Lambda^nV)^2\cong L^n$. Since ${\mathrm{EO}}(n,{\mathbb{C}})$ is a non-connected group (because $n$ is even) there is an obvious first topological invariant. Let ${\mathrm{ESO}}(n,{\mathbb{C}})={\mathrm{SO}}(n,{\mathbb{C}})\times_{{\mathbb{Z}}_2}{\mathbb{C}}^*$ be the identity component of ${\mathrm{EO}}(n,{\mathbb{C}})$.
Then $$\label{eq:ESO(n)toEO(n)} 1\to{\mathrm{ESO}}(n,{\mathbb{C}})\to{\mathrm{EO}}(n,{\mathbb{C}})\to{\mathbb{Z}}_2\to 0.$$ Thus the first invariant of an ${\mathrm{EO}}(n,{\mathbb{C}})$-principal bundle $E$ is $$\overline\mu_1(E)\in H^1(X,{\mathbb{Z}}_2)\cong({\mathbb{Z}}_2)^{2g}$$ given as the image of $E$ under the map $H^1(X,{\mathrm{EO}}(n,\mathcal{O}))\to H^1(X,{\mathbb{Z}}_2)$, induced from . It is the obstruction to reducing the structure group of $E$ to ${\mathrm{ESO}}(n,{\mathbb{C}})$. In terms of the twisted orthogonal bundle $(V,L,Q)$ corresponding to $E$, it is easy to see that $$\overline{\mu}_1(V,L,Q)=\Lambda^nVL^{-n/2}\in H^1(X,{\mathbb{Z}}_2)\cong({\mathbb{Z}}_2)^{2g}.$$ Thus $\overline{\mu}_1(V,L,Q)=0$ if and only if $\Lambda^nV\cong L^{n/2}$. Clearly this generalises the first Stiefel-Whitney class $w_1$ of orthogonal vector bundles, since if $\deg(L)$ is even, then $\overline{\mu}_1(V,L,Q)=w_1(V\otimes L^{-1/2},Q\otimes\operatorname{Id}_{L^{-1}})$. The value of $w_1$ is independent of the choice of the square root of $L$ because $n$ is even. Now we pass to the definition of another topological invariant $\overline\mu_2$ of a twisted orthogonal bundle $(V,L,Q)$. Again, since $\pi_0({\mathrm{EO}}(n,{\mathbb{C}}))$ acts non-trivially on $\pi_1({\mathrm{EO}}(n,{\mathbb{C}}))$, the value of $\overline\mu_2(V,L,Q)$ depends on the value of $\overline\mu_1(V,L,Q)$. Let $2{\mathbb{Z}}$ denote the set of even integers and $2{\mathbb{Z}}+1$ the odd ones. The topological invariant $\overline\mu_2(V,L,Q)$ of $(V,L,Q)$ is given as follows: - If $\overline\mu_1(V,L,Q)=0$, define $$\overline{\mu}_2(V,L,Q):=\begin{cases} (w_2(V\otimes L^{-1/2}),\deg(L))\in{\mathbb{Z}}_2\times 2{\mathbb{Z}}, & \text{ if }\, \deg(L)\text{ even}\\ \deg(L)\in 2{\mathbb{Z}}+1, & \text{ if } \, \deg(L)\text{ odd} \end{cases}$$ where $w_2(V\otimes L^{-1/2})$ is the second Stiefel-Whitney class of $V\otimes L^{-1/2}$.
- If $\overline\mu_1(V,L,Q)\neq 0$, define $$\overline\mu_2(V,L,Q):=\deg(L)\in{\mathbb{Z}}.$$ In the first item, $w_2(V\otimes L^{-1/2})$ does not depend on the choice of the square root of $L$ due to the vanishing of $\overline{\mu}_1(V,L,Q)$. The following proposition is a consequence of the study made in [@oliveira:2011]. Let $n{\geqslant}4$ be even. Then $K^2$-twisted ${\mathrm{EGL}}(n,{\mathbb{R}})$-Higgs bundles over $X$ are topologically classified by the invariants $(\overline{\mu}_1,\overline{\mu}_2)\in B$, where $$B:=\left(\{0\}\times\left(\left({\mathbb{Z}}_2\times2{\mathbb{Z}}\right)\cup(2{\mathbb{Z}}+1)\right)\right)\cup\left(\left(({\mathbb{Z}}_2)^{2g}\setminus\{0\}\right)\times{\mathbb{Z}}\right).$$ Let ${{\mathcal M}}^{K^2}_{\overline{\mu}_1,\overline{\mu}_2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ denote the subspace of ${{\mathcal M}}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ consisting of the ${\mathrm{EGL}}(n,{\mathbb{R}})$-Higgs bundles with invariants $(\overline{\mu}_1,\overline{\mu}_2)\in B$. Hence we have a decomposition $${{\mathcal M}}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))=\bigsqcup_{(\overline\mu_1,\overline\mu_2)\in B}{{\mathcal M}}^{K^2}_{\overline\mu_1,\overline\mu_2}({\mathrm{EGL}}(n,{\mathbb{R}})).$$ Recall the subspaces of ${{\mathcal M}}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ defined in .
Then they decompose according to topological types as follows: $$\label{eq:decom-topinv L=0} {{\mathcal M}}_0^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))=\bigsqcup_{w_2\in\{0,1\}}{{\mathcal M}}_{0,(w_2,0)}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))\sqcup\bigsqcup_{\overline\mu_1\in({\mathbb{Z}}_2)^{2g}\setminus\{0\}}{{\mathcal M}}_{\overline\mu_1,0}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$$ and $$\label{eq:decom-topinv L=1} {{\mathcal M}}_1^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))=\bigsqcup_{\overline\mu_1\in({\mathbb{Z}}_2)^{2g}}{{\mathcal M}}_{\overline\mu_1,1}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}})).$$ Recall now also the decomposition of ${{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)$. From Proposition \[prop:obstruction to lifting to GL in terms of invariants\], $${{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_0=\bigsqcup_{\mu_2\in\{0,1\}}{{\mathcal M}}_{0,\mu_2}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_0\sqcup\bigsqcup_{\mu_1\in({\mathbb{Z}}_2)^{2g}\setminus\{0\}}{{\mathcal M}}_{\mu_1,0}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_0$$ and $${{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_1=\bigsqcup_{\mu_1\in({\mathbb{Z}}_2)^{2g}}{{\mathcal M}}_{\mu_1,\omega_n}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_1.$$ \[prop:mapM(EGL)toM(GL/Z2)topinv\] Let $p_0$ and $p_1$ be the morphisms defined in . The following hold: - for each $w_2\in{\mathbb{Z}}_2$, $p_0$ maps ${{\mathcal M}}_{0,(w_2,0)}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ onto ${{\mathcal M}}_{0,\mu_2}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_0$, with $\mu_2=w_2$. - for each $\overline\mu_1\in({\mathbb{Z}}_2)^{2g}\setminus\{0\}$, $p_0$ maps ${{\mathcal M}}_{\overline\mu_1,0}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ onto ${{\mathcal M}}_{\mu_1,0}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_0$, with $\mu_1=\overline\mu_1$. 
- for each $\overline\mu_1\in({\mathbb{Z}}_2)^{2g}$, $p_1$ maps ${{\mathcal M}}_{\overline\mu_1,1}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ onto ${{\mathcal M}}_{\mu_1,\omega_n}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_1$, with $\mu_1=\overline\mu_1$.

Connected components {#morse quadruples}
--------------------

For each fixed topological type $(\overline\mu_1,\overline\mu_2)\in B$, the calculation of the number of connected components of the moduli space ${{\mathcal M}}_{(\overline\mu_1,\overline\mu_2)}({\mathrm{EGL}}(n,{\mathbb{R}}))$ has been carried out in [@oliveira:2011]. There we used the standard method to study the topology of the moduli spaces of Higgs bundles through the Hitchin function $f$, defined in subsection \[subsec:Hit-func\]. By Proposition \[prop:Hitchfunctionalproper\] we also have the “same” proper function on the $K^2$-twisted moduli space. Moreover, all the arguments made in [@oliveira:2011] immediately carry over to the $K^2$-twisted case. See especially Theorems 8.1 and 8.2 and Propositions 8.4 and 8.5 of [@oliveira:2011]. Therefore we have the following result. Write $z_0=(g-1)n/2\ (\mathrm{mod}\ 2)$. Recall decompositions and . \[ccq\] Let $n{\geqslant}4$ be even. 1. The moduli space ${{\mathcal M}}_0^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ has $2^{2g}+2$ connected components. More precisely, 1. if $w_2\neq z_0$, then ${{\mathcal M}}_{0,(w_2,0)}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ is non-empty and connected. 2. ${{\mathcal M}}_{0,(z_0,0)}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ has $2$ non-empty connected components, namely: - $1$ component where the Higgs bundles cannot be deformed to a $K^2$-twisted ${\mathrm{EO}}(n)$-Higgs bundle. - $1$ component containing $K^2$-twisted ${\mathrm{EO}}(n)$-Higgs bundles with the given invariants. 3. ${{\mathcal M}}_{\overline\mu_1,0}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ is non-empty and connected for each $\overline\mu_1\in({\mathbb{Z}}_2)^{2g}\setminus\{0\}$. 2.
The moduli space ${{\mathcal M}}_1^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ has $2^{2g}$ connected components. More precisely, 1. ${{\mathcal M}}_{\overline\mu_1,1}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ is non-empty and connected for each $\overline\mu_1\in({\mathbb{Z}}_2)^{2g}$. Recall now the decomposition of ${{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)$ according to the lifting property to $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})$-Higgs bundles. From Propositions \[ccq\] and \[prop:mapM(EGL)toM(GL/Z2)topinv\] and from the fact that the $2$ connected components of ${{\mathcal M}}_{0,(z_0,0)}^{K^2}({\mathrm{EGL}}(n,{\mathbb{R}}))$ are not collapsed by the morphism $p_0$ (cf. Theorem 10.1 of [@oliveira:2011]), we conclude the following. Let $n{\geqslant}4$ be even. 1. The moduli space ${{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_0$ has $2^{2g}+2$ connected components. More precisely, 1. if $\mu_2\neq z_0$, then ${{\mathcal M}}_{0,\mu_2}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_0$ is non-empty and connected; 2. ${{\mathcal M}}_{0,z_0}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_0$ has $2$ non-empty connected components, namely: - $1$ component where the Higgs bundle cannot be deformed to a $K^2$-twisted ${\mathrm{PO}}(n)$-Higgs bundle. - $1$ component containing $K^2$-twisted ${\mathrm{PO}}(n)$-Higgs bundles with the given invariants. 3. ${{\mathcal M}}_{\mu_1,0}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_0$ is non-empty and connected for each $\mu_1\in({\mathbb{Z}}_2)^{2g}\setminus\{0\}$. 2. The moduli space ${{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_1$ has $2^{2g}$ connected components. More precisely, ${{\mathcal M}}_{\mu_1,\omega_n}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_1$ is non-empty and connected for each $\mu_1\in({\mathbb{Z}}_2)^{2g}$. 
The connected component of ${{\mathcal M}}_{0,z_0}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)_0$ where the Higgs bundles do not deform to the maximal compact subgroup is the famous *Hitchin component* of the moduli space for the split form ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$; cf. [@hitchin:1992]. The next result is now immediate, using the previous proposition and Corollary \[cor:lift to GL\]. Let $n{\geqslant}4$ be even. The moduli space ${{\mathcal M}}^{K^2}({\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2)$ has $2^{2g+1}+2$ non-empty connected components. Of these, $2^{2g}+2$ contain the polystable $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundles which lift to ${\mathrm{GL}}(n,{\mathbb{R}})$ and the remaining $2^{2g}$ contain the ones that do not lift. So we achieve our first goal. \[thm:main\] Let $n{\geqslant}4$ be even. The moduli space ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSp}}(2n,{\mathbb{R}}))$ has $2^{2g+1}+2$ non-empty connected components. Of these, $2^{2g}+2$ contain the polystable ${\mathrm{PSp}}(2n,{\mathbb{R}})$-Higgs bundles which lift to ${\mathrm{Sp}}(2n,{\mathbb{R}})$ and the remaining $2^{2g}$ contain the ones that do not lift. Immediate from the previous corollary, from Proposition \[thm:Cayley-corresp-PSp-GL/Z2\] and from the fact that a ${\mathrm{PSp}}(2n,{\mathbb{R}})$-Higgs bundle lifts to an ${\mathrm{Sp}}(2n,{\mathbb{R}})$-Higgs bundle if and only if the corresponding $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})/{\mathbb{Z}}_2$-Higgs bundle (under Theorem \[thm:Cayley-corresp-PSp-GL/Z2\]) lifts to a $K^2$-twisted ${\mathrm{GL}}(n,{\mathbb{R}})$-Higgs bundle, as one can check from .
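The arithmetic behind these counts can be summarised in a short illustrative check: the $2^{2g}+2$ components of the lifting locus together with the $2^{2g}$ non-lifting ones give $2^{2g+1}+2$ in total, one more than the number $2^{2g+1}+1$ of topological types, since the type $(0,z_0)$ carries two components.

```python
# Illustrative totals for even n >= 4: components of M^{K^2}(GL(n,R)/Z2),
# equivalently of M_max(PSp(2n,R)) via the Cayley correspondence.
def total_components(g):
    lifting = 2 ** (2 * g) + 2       # lifting locus; the type (0, z_0) splits in two
    non_lifting = 2 ** (2 * g)       # non-lifting types, one component each
    return lifting + non_lifting

for g in range(1, 8):
    assert total_components(g) == 2 ** (2 * g + 1) + 2
```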
Higgs bundles for ${\mathrm{PSO}}^*(2n)$ {#Higgs bundles for PSO*}
========================================

Definitions, obstructions and Cayley correspondence
---------------------------------------------------

In this section we perform an analysis similar to the one done for ${\mathrm{PSp}}(2n,{\mathbb{R}})$, but for the projective non-compact dual of the orthogonal group. Recall that the non-compact dual ${\mathrm{SO}}^*(2n)$ of the special orthogonal group can be defined as the group of special orthogonal transformations of ${\mathbb{C}}^{2n}$ leaving invariant a non-degenerate skew-hermitian form. Assume $n>1$ (otherwise ${\mathrm{SO}}^*(2)\cong{\mathrm{SO}}(2)$ is compact). Then its centre is $\pm I_{2n}$, hence by definition $${\mathrm{PSO}}^*(2n)={\mathrm{SO}}^*(2n)/{\pm I_{2n}}={\mathrm{SO}}^*(2n)/{\mathbb{Z}}_2.$$ Both groups are of hermitian type and they are of tube type if and only if $n$ is even. The group ${\mathrm{PSO}}^*(4)$ is not simple and the associated hermitian symmetric space is not irreducible, so we do not consider it in this paper. We will be sketchier here, leaving the details to the interested reader. The case of the group ${\mathrm{SO}}^*(2n)$ has been studied in detail in [@bradlow-garcia-prada-gothen:2015]. A maximal compact subgroup of ${\mathrm{SO}}^*(2n)$ is the unitary group ${\mathrm{U}}(n)$, hence ${\mathrm{U}}(n)/{\mathbb{Z}}_2$ is a maximal compact subgroup of ${\mathrm{PSO}}^*(2n)$. The Cartan decomposition of the complexified Lie algebra is $\mathfrak{so}(2n,{\mathbb{C}})={\mathfrak{gl}}(n,{\mathbb{C}})\oplus{\mathfrak{m}^{\mathbb{C}}}$, where ${\mathfrak{m}^{\mathbb{C}}}=\Lambda^2{\mathbb{V}}\oplus\Lambda^2{\mathbb{V}}^*$ with ${\mathbb{V}}$ being the fundamental representation of ${\mathrm{GL}}(n,{\mathbb{C}})$.
So a *${\mathrm{PSO}}^*(2n)$-Higgs bundle* is a pair $(E,\varphi)$ with $E$ being a ${\mathrm{GL}}(n,{\mathbb{C}})/{\mathbb{Z}}_2$-principal bundle and the Higgs field $\varphi$ being a section of $E({\mathfrak{m}^{\mathbb{C}}})\otimes K$. There is no natural way to define ${\mathrm{PSO}}^*(2n)$-Higgs bundles in terms of vector bundles. Since the maximal compact subgroup of ${\mathrm{PSO}}^*(2n)$ is (conjugate to) ${\mathrm{U}}(n)/{\mathbb{Z}}_2$, Proposition \[prop:pi1U(n)/Z2\] tells us that ${\mathrm{PSO}}^*(2n)$-Higgs bundles are topologically classified by $$(d,w)\in{\mathbb{Z}}\times {\mathbb{Z}}_2\quad\text{ if }n\text{ even}\hspace{1cm}\text{and}\hspace{1cm}d\in{\mathbb{Z}}\quad\text{ if }n\text{ odd}.$$ Higgs bundles for ${\mathrm{SO}}^*(2n)$ can also be defined as above, by replacing ${\mathrm{GL}}(n,{\mathbb{C}})/{\mathbb{Z}}_2$ by ${\mathrm{GL}}(n,{\mathbb{C}})$. Then we can define an ${\mathrm{SO}}^*(2n)$-Higgs bundle over $X$ as a triple $(V,\beta,\gamma)$ where $V$ is a holomorphic vector bundle on $X$, $\beta$ is a section of $\Lambda^2V\otimes K$ and $\gamma$ a section of $\Lambda^2V^*\otimes K$. Their topological type is determined by the degree of $V$. An ${\mathrm{SO}}^*(2n)$-Higgs bundle is mapped to a ${\mathrm{PSO}}^*(2n)$-Higgs bundle just as before, preserving polystability by Proposition \[prop:2.5\]. The same argument as in Proposition \[prop:obstr-lift-PSptoSp\] shows the following.

\[prop:lifttoSO\*\] The ${\mathrm{PSO}}^*(2n)$-Higgs bundles which lift to an ${\mathrm{SO}}^*(2n)$-Higgs bundle are precisely the ones of topological type $(d,0)$ if $n$ is even, or $d$ even if $n$ is odd. Moreover, in both cases any two lifts differ by a $2$-torsion line bundle on $X$.
So there is a morphism ${{\mathcal M}}({\mathrm{SO}}^*(2n))\to{{\mathcal M}}({\mathrm{PSO}}^*(2n))$ between the corresponding moduli spaces, such that ${{\mathcal M}}_{\tilde d}({\mathrm{SO}}^*(2n))$ maps onto ${{\mathcal M}}_{(\tilde d,0)}({\mathrm{PSO}}^*(2n))$ when $n$ is even and onto ${{\mathcal M}}_{2\tilde d}({\mathrm{PSO}}^*(2n))$ when $n$ is odd, where $\tilde d\in{\mathbb{Z}}$ is a topological type of ${\mathrm{SO}}^*(2n)$-Higgs bundles. If $\tau_{{\mathrm{PSO}}^*}$ denotes the Toledo invariant of a semistable ${\mathrm{PSO}}^*(2n)$-Higgs bundle $(E,\Phi)$, then Theorem \[thm:MW\] says that $|\tau_{{\mathrm{PSO}}^*}(E,\Phi)|{\leqslant}[n/2](2g-2)$. The proof of the next result follows the same lines as the one of Proposition \[prop:MW-PSp\]. Let $(E,\Phi)$ be a semistable ${\mathrm{PSO}}^*(2n)$-Higgs bundle. - If $n$ is even and the topological type of $(E,\Phi)$ is $(d(E),w(E))\in{\mathbb{Z}}\times{\mathbb{Z}}_2$, then $$|d(E)|{\leqslant}n(g-1)/2.$$ - If $n$ is odd and the topological type of $(E,\Phi)$ is $d(E)\in{\mathbb{Z}}$, then $$|d(E)|{\leqslant}(n-1)(g-1).$$ Consider the subspace ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}^*(2n))\subset{{\mathcal M}}({\mathrm{PSO}}^*(2n))$ with maximal positive Toledo invariant, that is $$\begin{split} &{{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}^*(2n))=\bigsqcup_{w\in{\mathbb{Z}}_2}{{\mathcal M}}_{(n(g-1)/2,w)}({\mathrm{PSO}}^*(2n))\quad\text{if }n\text{ even},\\ &{{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}^*(2n))={{\mathcal M}}_{(n-1)(g-1)}({\mathrm{PSO}}^*(2n))\quad\text{if }n\text{ odd}. \end{split}$$ The count of components of ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}^*(2n))$ follows immediately in the case $n$ is odd, since we know from [@bradlow-garcia-prada-gothen:2015] that ${{\mathcal M_{\mathrm{max}}}}({\mathrm{SO}}^*(2n))$ is connected. 
Since the maximal Toledo invariant is even, Proposition \[prop:lifttoSO\*\] says that the map ${{\mathcal M_{\mathrm{max}}}}({\mathrm{SO}}^*(2n))\to{{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}^*(2n))$ is surjective, hence ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}^*(2n))$ is connected as well. The situation is different whenever $n$ is even, since ${{\mathcal M_{\mathrm{max}}}}({\mathrm{SO}}^*(2n))\to{{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}^*(2n))$ is no longer surjective. Hence *suppose $n{\geqslant}4$ is even until the end of Section 4*. Since ${\mathrm{PSO}}^*(2n)$ and ${\mathrm{SO}}^*(2n)$ are of tube type for $n$ even, the Cayley correspondence holds. The Cayley partner for ${\mathrm{SO}}^*(2n)$ is ${\mathrm{U}}^*(n)$, the non-compact dual of the unitary group ${\mathrm{U}}(n)$. Thus from Theorem \[thm:Cayleycorresp\] and Proposition \[prop:Cayleypartneradjoint\] we have the following.

\[thm:Cayley-corresp-PSO-U\*/Z2\] The moduli spaces ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}^*(2n))$ and ${{\mathcal M}}^{K^2}({\mathrm{U}}^*(n)/{\mathbb{Z}}_2)$ are isomorphic as complex algebraic varieties. We have a commutative diagram $$\label{diagram-SO*-PSO*} \xymatrix{{{\mathcal M_{\mathrm{max}}}}({\mathrm{SO}}^*(2n))\ar[r]^(.5){\cong}\ar[d]&{{\mathcal M}}^{K^2}({\mathrm{U}}^*(n))\ar[d]\\ {{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}^*(2n))\ar[r]^(.5){\cong}&{{\mathcal M}}^{K^2}({\mathrm{U}}^*(n)/{\mathbb{Z}}_2).}$$ where the vertical maps are the natural morphisms considered above.

$K^2$-twisted Higgs bundles for ${\mathrm{U}}^*(n)/{\mathbb{Z}}_2$
------------------------------------------------------------------

Recall that $n=2m{\geqslant}4$ is even. The group ${\mathrm{U}}^*(2m)$ admits the compact symplectic group ${\mathrm{Sp}}(2m)$ as a maximal compact subgroup.
Hence, a *$K^2$-twisted ${\mathrm{U}}^*(2m)/{\mathbb{Z}}_2$-Higgs bundle* is a pair $(E,\varphi)$, with $E$ a ${\mathrm{PSp}}(2m,{\mathbb{C}})$-principal bundle and $\varphi$ a section of $E({\mathfrak{m}^{\mathbb{C}}})$, where ${\mathfrak{m}^{\mathbb{C}}}=\Lambda_\Omega^2{\mathbb{V}}$ and $({\mathbb{V}},\Omega)$ is the fundamental representation of ${\mathrm{Sp}}(2m,{\mathbb{C}})$ in ${\mathbb{C}}^{2m}$. In the case of ${\mathrm{U}}^*(2m)$, the principal bundle has structure group ${\mathrm{Sp}}(2m,{\mathbb{C}})$, hence $K^2$-twisted ${\mathrm{U}}^*(2m)$-Higgs bundles are triples $(V,\Omega,\varphi)$ with $(V,\Omega)$ a rank $2m$ symplectic vector bundle and $\varphi:V\to V\otimes K^2$ skew-symmetric with respect to $\Omega$; cf. [@garcia-prada-oliveira:2011]. As before, not every $K^2$-twisted ${\mathrm{U}}^*(2m)/{\mathbb{Z}}_2$-Higgs bundle lifts to a $K^2$-twisted ${\mathrm{U}}^*(2m)$-Higgs bundle, and this is detected by the topological type, given by an element $c\in\pi_1({\mathrm{PSp}}(2m))\cong{\mathbb{Z}}_2$. So a $K^2$-twisted ${\mathrm{U}}^*(2m)/{\mathbb{Z}}_2$-Higgs bundle lifts to a $K^2$-twisted ${\mathrm{U}}^*(2m)$-Higgs bundle if and only if it is topologically trivial; note that ${\mathrm{U}}^*(2m)$ is the universal cover of ${\mathrm{U}}^*(2m)/{\mathbb{Z}}_2$. So we are again led to considering the group $${\mathrm{EU}}^*(2m)={\mathrm{U}}^*(2m)\times_{{\mathbb{Z}}_2}{\mathrm{U}}(1).$$ The same argument as in Proposition 5.2 of [@oliveira:2011], but replacing ${\mathrm{O}}(n,{\mathbb{C}})$ by ${\mathrm{Sp}}(2m,{\mathbb{C}})$, shows that a $K^2$-twisted ${\mathrm{EU}}^*(2m)$-Higgs bundle may be defined as a quadruple $(V,L,\Omega,\varphi)$ on $X$, where $V$ is a rank $n$ vector bundle, $L$ a line bundle, $\Omega$ an $L$-valued symplectic form on $V$ and $\varphi\in H^0(X,\Lambda^2_\Omega V\otimes K^2)$. Then we have the following analogue of Proposition \[prop:fixdeglift\].
Every $K^2$-twisted ${\mathrm{U}}^*(2m)/{\mathbb{Z}}_2$-Higgs bundle $(E,\varphi)$ lifts to a $K^2$-twisted ${\mathrm{EU}}^*(2m)$-Higgs bundle $(V,L,\Omega,\varphi)$. The parity of $\deg(L)$ is fixed in all the lifts of $(E,\varphi)$. Moreover, it is possible to choose the lift to a $K^2$-twisted ${\mathrm{EU}}^*(2m)$-Higgs bundle $(V,L,\Omega,\varphi)$ such that either $\deg(L)=0$ or $\deg(L)=1$. Although we do not need this fact here, we have indeed, as in Corollary \[cor:lift to GL\], that a $K^2$-twisted ${\mathrm{U}}^*(2m)/{\mathbb{Z}}_2$-Higgs bundle lifts to a $K^2$-twisted ${\mathrm{U}}^*(2m)$-Higgs bundle if and only if it can be lifted to a $K^2$-twisted ${\mathrm{EU}}^*(2m)$-Higgs bundle $(V,L,\Omega,\varphi)$ with $\deg(L)$ even. Hence we have a surjective morphism $${{\mathcal M}}^{K^2}({\mathrm{EU}}^*(2m))\to{{\mathcal M}}^{K^2}({\mathrm{U}}^*(2m)/{\mathbb{Z}}_2).$$ If ${{\mathcal M}}_0^{K^2}({\mathrm{EU}}^*(2m))$ (resp. ${{\mathcal M}}_1^{K^2}({\mathrm{EU}}^*(2m))$) denotes the subspace of ${{\mathcal M}}^{K^2}({\mathrm{EU}}^*(2m))$ where $\deg(L)=0$ (resp. $\deg(L)=1$), the preceding morphism restricts to two surjective morphisms $$\label{eq:p1U*} p_1:{{\mathcal M}}_0^{K^2}({\mathrm{EU}}^*(2m))\to{{\mathcal M}}_0^{K^2}({\mathrm{U}}^*(2m)/{\mathbb{Z}}_2)$$ and $$\label{eq:p2U*} p_2:{{\mathcal M}}_1^{K^2}({\mathrm{EU}}^*(2m))\to{{\mathcal M}}_1^{K^2}({\mathrm{U}}^*(2m)/{\mathbb{Z}}_2),$$ where ${{\mathcal M}}_i^{K^2}({\mathrm{U}}^*(2m)/{\mathbb{Z}}_2)$ is the subspace of ${{\mathcal M}}^{K^2}({\mathrm{U}}^*(2m)/{\mathbb{Z}}_2)$ whose Higgs bundles have topological type $c=i\in{\mathbb{Z}}_2$. Hence, we have a disjoint union $$\label{eq:union-toptypesU*/Z2} {{\mathcal M}}^{K^2}({\mathrm{U}}^*(2m)/{\mathbb{Z}}_2)={{\mathcal M}}_0^{K^2}({\mathrm{U}}^*(2m)/{\mathbb{Z}}_2)\sqcup{{\mathcal M}}_1^{K^2}({\mathrm{U}}^*(2m)/{\mathbb{Z}}_2).$$ The fundamental group of ${\mathrm{ESp}}(2m)$ (hence of ${\mathrm{EU}}^*(2m)$) is isomorphic to ${\mathbb{Z}}$.
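For the reader's convenience, the lifting criterion just described can be summarized in a single display (this merely restates the statements above):

```latex
% Lifting criterion for a K^2-twisted U*(2m)/Z_2-Higgs bundle (E,\varphi)
% of topological type c \in \pi_1(PSp(2m)) \cong Z_2:
(E,\varphi)\ \text{lifts to a $K^2$-twisted }{\mathrm{U}}^*(2m)\text{-Higgs bundle}
\;\Longleftrightarrow\; c=0
\;\Longleftrightarrow\; \deg(L)\ \text{even for some (equivalently, every) lift }(V,L,\Omega,\varphi).
```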
So ${\mathrm{EU}}^*(2m)$-Higgs bundles $(V,L,\Omega,\varphi)$ are topologically determined by an integer, which is in fact the degree of $L$. Notice that $\deg(V)=m\deg(L)$, thus the topological type of the Higgs bundles in ${{\mathcal M}}_0^{K^2}({\mathrm{EU}}^*(2m))$ or in ${{\mathcal M}}_1^{K^2}({\mathrm{EU}}^*(2m))$ is fixed. Observe that this is in contrast with the case of ${\mathrm{EGL}}(n,{\mathbb{R}})$, where we had the corresponding decompositions into several topological types.

Connected components {#connected-components}
--------------------

In [@garcia-prada-oliveira:2011], we proved that the moduli space of ${\mathrm{U}}^*(2m)$-Higgs bundles is connected. For that we used that the local minima of the proper Hitchin functional $f$ on ${{\mathcal M}}({\mathrm{U}}^*(2m))$ are exactly the ones with vanishing Higgs field. Now, we also have the proper Hitchin functional on ${{\mathcal M}}^{K^2}({\mathrm{EU}}^*(2m))$, by Proposition \[prop:Hitchfunctionalproper\], and the entire argument in loc. cit. does not depend on the twisting by $K$ or $K^2$. On the other hand, the same argument in [@garcia-prada-oliveira:2011] is also independent of the line bundle $L$ in which the symplectic form $\Omega$ takes values. Precisely, if one recalls that the study of the smooth minima of $f$ involves the study of subspaces ${\mathbb{H}}^1(C^\bullet_k)$ of weight $k>0$ of the deformation space ${\mathbb{H}}^1(C^\bullet)$ of a $K^2$-twisted ${\mathrm{EU}}^*(2m)$-Higgs bundle (representing a smooth point in the moduli space), then one can see, as in the last paragraph of page 259 of [@oliveira:2011], that the line bundle $L$ only plays a role when $k=0$. So it does not play a role in the study of smooth local minima. We conclude that:

\[thm:M\_iEU\* connected\] The spaces ${{\mathcal M}}_0^{K^2}({\mathrm{EU}}^*(2m))$ and ${{\mathcal M}}_1^{K^2}({\mathrm{EU}}^*(2m))$ are both connected and non-empty.
Thus we have the count of the connected components of ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}^*(2n))$, for $n{\geqslant}4$ even.

\[thm:mainPSO\*(2n)\] Let $n{\geqslant}4$ be even. The moduli space ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}^*(2n))$ has $2$ non-empty connected components. One of them consists of the ${\mathrm{PSO}}^*(2n)$-Higgs bundles which lift to ${\mathrm{SO}}^*(2n)$-Higgs bundles, and the other one of those which do not lift.

This follows from Theorem \[thm:Cayley-corresp-PSO-U\*/Z2\], from the decomposition into topological types, from the surjective morphisms $p_1$ and $p_2$, and from Theorem \[thm:M\_iEU\* connected\].

Higgs bundles for ${\mathrm{PSO}}_0(2,n)$
=========================================

Definitions, obstructions and Cayley correspondence
---------------------------------------------------

Now we consider the case of Higgs bundles for the identity component of the projective special orthogonal group of signature $(2,n)$, ${\mathrm{PSO}}_0(2,n)={\mathrm{SO}}_0(2,n)/Z({\mathrm{SO}}_0(2,n))$. The case of the group ${\mathrm{SO}}_0(2,n)$ has been considered in [@bradlow-garcia-prada-gothen:2005]. Both are hermitian groups of tube type, for any $n$. The special orthogonal group ${\mathrm{SO}}(2,n)$ can be defined as the group of volume preserving transformations of ${\mathbb{R}}^{2+n}$ leaving invariant a non-degenerate symmetric bilinear form of signature $(2,n)$. It has two connected components; we denote the one containing the identity by ${\mathrm{SO}}_0(2,n)$. If $n$ is odd, the centre of ${\mathrm{SO}}_0(2,n)$ is trivial, so ${\mathrm{PSO}}_0(2,n)={\mathrm{SO}}_0(2,n)$, while if $n$ is even, it is $\pm I_{2+n}$. Thus, for $n$ even, $${\mathrm{PSO}}_0(2,n)={\mathrm{SO}}_0(2,n)/\pm I_{2+n}={\mathrm{SO}}_0(2,n)/{\mathbb{Z}}_2.$$ Similarly to the case of ${\mathrm{PSO}}^*(4)$, we will not consider the group ${\mathrm{PSO}}_0(2,2)$, since it is not simple and the associated hermitian symmetric space is not irreducible.
Besides, the fundamental group of ${\mathrm{PSO}}_0(2,2)$ is different from that of ${\mathrm{PSO}}_0(2,n)$ when $n>2$. Hence we *assume henceforth that $n{\geqslant}4$ is even*. The group $H=({\mathrm{SO}}(2)\times{\mathrm{SO}}(n))/{\mathbb{Z}}_2$, with ${\mathbb{Z}}_2$ acting diagonally, is a maximal compact subgroup of ${\mathrm{PSO}}_0(2,n)$. The Cartan decomposition of the complexified Lie algebra is $\mathfrak{so}(2+n,{\mathbb{C}})={\mathbb{C}}\oplus\mathfrak{so}(n,{\mathbb{C}})\oplus{\mathfrak{m}^{\mathbb{C}}}$, where ${\mathfrak{m}^{\mathbb{C}}}=\operatorname{Hom}({\mathbb{W}},{\mathbb{L}}\oplus{\mathbb{L}}^*)$, with ${\mathbb{W}}$ being the fundamental representation of ${\mathrm{SO}}(n,{\mathbb{C}})$ and ${\mathbb{L}}$ the fundamental representation of ${\mathrm{SO}}(2,{\mathbb{C}})\cong{\mathbb{C}}^*$. So a *${\mathrm{PSO}}_0(2,n)$-Higgs bundle* is a pair $(E,\varphi)$ where $E$ is an $({\mathrm{SO}}(2,{\mathbb{C}})\times{\mathrm{SO}}(n,{\mathbb{C}}))/{\mathbb{Z}}_2$-principal bundle and $\varphi$ is a section of $E({\mathfrak{m}^{\mathbb{C}}})\otimes K$. The following result can be proved as in Proposition \[prop:pi1U(n)/Z2\], by determining the kernel of the universal cover ${\mathbb{R}}\times{\mathrm{Spin}}(n)\to({\mathrm{SO}}(2)\times{\mathrm{SO}}(n))/{\mathbb{Z}}_2$. Recall that we denote by $\omega_n$ the oriented volume element of ${\mathrm{Pin}}(n)$. It has order $2$ or $4$, depending on whether or not $n$ is a multiple of $4$; cf. [@lawson-michelson:1989]. Since $n{\geqslant}4$ is even, $\omega_n$ in fact lies in ${\mathrm{Spin}}(n)$. Recall also that, as a set, $\pi_1{\mathrm{PSO}}(n)=\{0,1,\omega_n,-\omega_n\}$ in the abelian notation (here $1$ is an element of order two).

\[prop:pi1(SO(2)xSO(n))/Z2\] Let $n{\geqslant}4$ be even.
The fundamental group of $({\mathrm{SO}}(2)\times{\mathrm{SO}}(n))/{\mathbb{Z}}_2$ is $$\pi_1(({\mathrm{SO}}(2)\times{\mathrm{SO}}(n))/{\mathbb{Z}}_2)\cong 2{\mathbb{Z}}\times{\mathbb{Z}}_2\cup(2{\mathbb{Z}}+1)\times\{\pm\omega_n\}\cong{\mathbb{Z}}\times{\mathbb{Z}}_2,$$ where in the second isomorphism ${\mathbb{Z}}\times{\mathbb{Z}}_2$ means the abelian group generated by $(1,\omega_n)$ and $(0,1)$. Moreover, the inclusion $${\mathbb{Z}}\times{\mathbb{Z}}_2\cong\pi_1({\mathrm{SO}}(2)\times{\mathrm{SO}}(n))\hookrightarrow\pi_1(({\mathrm{SO}}(2)\times{\mathrm{SO}}(n))/{\mathbb{Z}}_2)\cong{\mathbb{Z}}\times{\mathbb{Z}}_2$$ in the exact sequence $1\to{\mathbb{Z}}\times{\mathbb{Z}}_2\to{\mathbb{Z}}\times{\mathbb{Z}}_2\to{\mathbb{Z}}_2\to 0$ is given by multiplication by $2$ on the first factor and by the identity on the second one. Thus ${\mathrm{PSO}}_0(2,n)$-Higgs bundles over $X$ are topologically classified by invariants $$(d,\mu)\in 2{\mathbb{Z}}\times{\mathbb{Z}}_2\cup(2{\mathbb{Z}}+1)\times\{\pm\omega_n\}\cong{\mathbb{Z}}\times{\mathbb{Z}}_2.$$ Higgs bundles for the group ${\mathrm{SO}}_0(2,n)$ over $X$ are given by the data $(L,W,Q_W,\beta,\gamma)$ where $L$ is a holomorphic line bundle, from which we consider the rank two bundle $L\oplus L^{-1}$ with the standard orthogonal structure, $(W,Q_W)$ is a special orthogonal vector bundle of rank $n$, $\beta$ is a section of $\operatorname{Hom}(W,L)\otimes K$ and $\gamma$ a section of $\operatorname{Hom}(W,L^{-1})\otimes K$. Their topological type is determined by the degree of $L$ and by the second Stiefel-Whitney class $w_2\in{\mathbb{Z}}_2=\{0,1\}$ of $W$. An ${\mathrm{SO}}_0(2,n)$-Higgs bundle is mapped to a ${\mathrm{PSO}}_0(2,n)$-Higgs bundle as before, preserving polystability. An argument similar to Proposition \[prop:obstr-lift-PSptoSp\], but using Proposition \[prop:pi1(SO(2)xSO(n))/Z2\], shows the following.
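Concretely, the index-two inclusion above already determines which topological types can lift: a class lifts exactly when it lies in the image of $\pi_1({\mathrm{SO}}(2)\times{\mathrm{SO}}(n))$, that is,

```latex
% Image of \pi_1(SO(2) x SO(n)) under the inclusion
% (multiplication by 2 on the first factor, identity on the second):
\operatorname{Im}\big(\pi_1({\mathrm{SO}}(2)\times{\mathrm{SO}}(n))\big)
= 2{\mathbb{Z}}\times\{0,1\},
\qquad\text{so}\qquad
(d,\mu)\ \text{lifts}\iff d\in 2{\mathbb{Z}}\ \text{and}\ \mu\in\{0,1\}.
```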
\[prop:lifttoSO(2,n)\] The ${\mathrm{PSO}}_0(2,n)$-Higgs bundles which lift to an ${\mathrm{SO}}_0(2,n)$-Higgs bundle are precisely the ones of topological type $(d,\mu)$ with $d$ an even integer and $\mu=0,1$. Moreover, any two lifts differ by a $2$-torsion line bundle on $X$. So the morphism ${{\mathcal M}}({\mathrm{SO}}_0(2,n))\to{{\mathcal M}}({\mathrm{PSO}}_0(2,n))$ maps the space ${{\mathcal M}}_{(\tilde d,w_2)}({\mathrm{SO}}_0(2,n))$ onto ${{\mathcal M}}_{(2\tilde d,w_2)}({\mathrm{PSO}}_0(2,n))$. Using the fact that the Toledo invariant of a semistable ${\mathrm{SO}}_0(2,n)$-Higgs bundle satisfies $|\tau_{{\mathrm{SO}}}|{\leqslant}4g-4$ and that the corresponding degree is half of $\tau_{{\mathrm{SO}}}$, one proves the following result, analogously to the previous cases of ${\mathrm{PSp}}(2n,{\mathbb{R}})$ and ${\mathrm{PSO}}^*(2n)$. Let $(E,\varphi)$ be a semistable ${\mathrm{PSO}}_0(2,n)$-Higgs bundle, with $n{\geqslant}4$ even. Let its topological type be given by $(d(E),\mu(E))$. Then $$|d(E)|{\leqslant}4g-4.$$ Consider the subspace ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}_0(2,n))\subset{{\mathcal M}}({\mathrm{PSO}}_0(2,n))$ with maximal positive Toledo invariant, that is, $${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}_0(2,n))=\bigsqcup_{\mu\in{\mathbb{Z}}_2}{{\mathcal M}}_{(4g-4,\mu)}({\mathrm{PSO}}_0(2,n)).$$ Proposition \[prop:lifttoSO(2,n)\] tells us that the map ${{\mathcal M_{\mathrm{max}}}}({\mathrm{SO}}_0(2,n))\to{{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}_0(2,n))$ is surjective, with ${{\mathcal M}}_{(2g-2,w_2)}({\mathrm{SO}}_0(2,n))$ mapping onto ${{\mathcal M}}_{(4g-4,w_2)}({\mathrm{PSO}}_0(2,n))$. Observe that this is in contrast with the other two cases. This fact allows us to immediately calculate the connected components of ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}_0(2,n))$, in particular avoiding the use of the Cayley correspondence.
Indeed, we know from [@bradlow-garcia-prada-gothen:2005] that ${{\mathcal M}}_{(2g-2,w_2)}({\mathrm{SO}}_0(2,n))$ has $2^{2g}$ connected components, for each $w_2\in{\mathbb{Z}}_2$. Hence, from Proposition \[prop:lifttoSO(2,n)\], we have the following.

\[thm:mainPSO(2,n)\] Let $n{\geqslant}4$ be even. The moduli space ${{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}_0(2,n))$ has $2$ non-empty connected components. All the ${\mathrm{PSO}}_0(2,n)$-Higgs bundles in them lift to ${\mathrm{SO}}_0(2,n)$, but in one of them they lift to the universal cover $\widetilde{{\mathrm{SO}}_0}(2,n)$ and in the other they do not.

Although we did not make use of it, the Cayley correspondence of course still holds. Since the Cayley partner of ${\mathrm{SO}}_0(2,n)$ is ${\mathrm{SO}}_0(1,1)\times{\mathrm{SO}}(1,n-1)$, it turns out from Proposition \[prop:Cayleypartneradjoint\] that the Cayley partner of ${\mathrm{PSO}}_0(2,n)$ is ${\mathrm{SO}}_0(1,1)\times{\mathrm{SO}}_0(1,n-1)$, whose corresponding $K^2$-twisted moduli space is just the product of a vector space with the moduli space of $K^2$-twisted ${\mathrm{SO}}_0(1,n-1)$-Higgs bundles. Hence, it follows from Theorems \[thm:mainPSO(2,n)\] and \[thm:Cayleycorresp\] that the moduli space ${{\mathcal M}}^{K^2}({\mathrm{SO}}_0(1,n-1))$ has $2$ non-empty connected components, showing that this is also the case for ${{\mathcal M}}({\mathrm{SO}}_0(1,n-1))$. This provides a somewhat different proof of this result, alternative to the one given in [@aparicio-garcia-prada:2013].

An application: Higgs bundles for $E_6^{-14}$
---------------------------------------------

The exceptional group $E_6^{-14}$ is of hermitian type, but not of tube type. The rigidity phenomenon for maximal $E_6^{-14}$-Higgs bundles implies [@biquard-garcia-prada-rubio:2015 Theorem 6.2] that there is a fibration $${{\mathcal M_{\mathrm{max}}}}(E_6^{-14})\to{{\mathcal M_{\mathrm{max}}}}({\mathrm{PSO}}_0(2,8)),$$ with fibre isomorphic to the Jacobian of $X$.
Thus Theorem \[thm:mainPSO(2,n)\] immediately provides our final result. \[thm:e6-14\] The moduli space ${{\mathcal M_{\mathrm{max}}}}(E_6^{-14})$ has $2$ non-empty connected components. This is the first case where the maximal connected components of moduli spaces of Higgs bundles over $X$ are determined for an exceptional real group. [99]{} , Universal Covering Group of U(n) and Projective Representations, *Int. J. Theor. Phys.* Ser. A **39**, No. 4, (2000), 997–1013. , Higgs bundles for the Lorentz group, *Illinois J. Math.* **55**, No. 4, (2013), 1299–1326. , Higgs bundles, the Toledo invariant and the Cayley correspondence, Preprint arXiv:1511.07751. , Representations of the fundamental group of a surface in PU(p,q) and holomorphic triples, *C. R. Acad. Sci. Paris Sér. I Math.* **333** (2001), 347–352. , Surface group representations and ${\mathrm{U}}(p,q)$-Higgs bundles, *J. Diff. Geom.* **64** (2003), 111–170. , Maximal surface group representations in isometry groups of classical Hermitian symmetric spaces, *Geom. Dedicata* **122** (2006), 185–213. , Higgs bundles for the non-compact dual of the special orthogonal group, *Geom. Dedicata* **175** (2015), 1–48. , Flat $G$-bundles with canonical metrics, *J. Diff. Geom.* **28** (1988), 361–382. , Twisted harmonic maps and self-duality equations, *Proc. London Math. Soc.* (3) **55** (1987), 127–131. , The Hitchin-Kobayashi correspondence, Higgs pairs and surface group representations, Preprint arXiv:0909.4487v3. , Higgs bundles and surface group representations in the real symplectic group, *J. Topology* **6** (2013), 64–118. , Higgs bundles for the non-compact dual of the unitary group, *Illinois J. Math.* **55** (2011), 1155–1181. , Connectedness of Higgs bundle moduli for complex reductive Lie groups, *Asian J. Math.*, to appear. , Topological components of spaces of representations, *Invent. Math.* **93** (1988), 557–607. 
, Topological invariants of Anosov representations, *Journal of Topology* **3** (2010), 578–642. , The self-duality equations on a Riemann surface, *Proc. London Math. Soc.* (3) **55** (1987), 59–126. , Lie groups and Teichmüller space, *Topology* **31** (1992), 449–473. , *Spin Geometry*, Princeton Mathematical Series **38**, Princeton University Press, 1989. , *Higgs bundles, quadratic pairs and the topology of moduli spaces*, Ph.D. Thesis, Departamento de Matemática Pura, Faculdade de Ciências, Universidade do Porto, 2008. , Representations of surface groups in the projective general linear group, *Internat. J. Math.* **22** (2011), 223–279. , Constructing variations of Hodge structure using Yang-Mills theory and applications to uniformization, *J. Amer. Math. Soc.* **1** (1988), 867–918. , Higgs bundles and local systems, *Inst. Hautes Études Sci. Publ. Math.* **75** (1992), 5–95. **Oscar Garc[í]{}a-Prada**\ Instituto de Ciencias Matemáticas, CSIC-UAM-UC3M-UCM\ Nicolás Cabrera, 13–15, 28049 Madrid, Spain\ email: oscar.garcia-prada@icmat.es **André Oliveira**\ Centro de Matemática da Universidade do Porto, CMUP\ Faculdade de Ciências, Universidade do Porto\ Rua do Campo Alegre 687, 4169-007 Porto, Portugal\ [www.fc.up.pt](www.fc.up.pt)\ email: andre.oliveira@fc.up.pt *On leave from:*\ Departamento de Matemática, Universidade de Trás-os-Montes e Alto Douro, UTAD\ Quinta dos Prados, 5000-911 Vila Real, Portugal\ [www.utad.pt](www.utad.pt)\ email: agoliv@utad.pt [^1]: [^2]: The first author is partially supported by the European Commission Marie Curie IRSES MODULI Programme PIRSES-GA-2013-612534, the Ministerio de Economía y Competitividad of Spain through Project MTM2013-43963-P and Severo Ochoa Excellence Grant. Second author is partially supported by CMUP (UID/MAT/00144/2013), by the Projects EXCL/MAT-GEO/0222/2012 and PTDC/MAT-GEO/2823/2014 and also by the Post-Doctoral fellowship SFRH/BPD/100996/2014. 
These are funded by FCT (Portugal) with national (MEC) and European structural funds (FEDER), under the partnership agreement PT2020. Support from U.S. National Science Foundation grants DMS 1107452, 1107263, 1107367 “RNMS: GEometric structures And Representation varieties” (the GEAR Network) is also acknowledged.
--- address: - | Institut für Kernphysik,\ Forschungszentrum Karlsruhe,\ Postfach 3640,\ 76021 Karlsruhe, Germany\ E-mail: tim.huege@ik.fzk.de - | ASTRON,\ P.O. Box 2,\ 7990 AA Dwingeloo, The Netherlands\ E-mail: falcke@astron.nl author: - 'T. HUEGE' - 'H. FALCKE' title: Monte Carlo simulations of radio emission from cosmic ray air showers --- The simulations =============== Two main mechanisms have been considered to contribute to radio emission from cosmic ray air showers: Askaryan-type Čerenkov radiation arising from a negative charge excess moving through the atmosphere at velocities faster than the speed of light in air, and emission generated as a consequence of the deflection of charged particles in the earth’s magnetic field. A number of historical results illustrated that, while the Čerenkov emission mechanism dominates in dense media such as ice, the geomagnetic mechanism is dominant in the atmosphere. Our simulations thus focus on the latter mechanism, interpreting the radio emission as “coherent geosynchrotron radiation” arising from the geomagnetic deflection of highly relativistic electron-positron pairs generated in the air shower cascade[@FalckeGorham]. In order to gain a solid understanding of the emission characteristics, in a first step, we have performed analytical calculations of the expected radio signal[@Analytics]. In a second step, we have then improved on the analytical results with detailed Monte Carlo simulations, which we have directly compared and thus verified with the analytical results and the available historical data[@MonteCarlo]. 
While these Monte Carlo simulations still incorporate a somewhat simplified air shower model based on analytical parametrizations, they already take into account the most important air shower characteristics such as longitudinal (arrival time) and lateral particle distributions, energy and track-length distributions, the overall longitudinal development of the air shower and the geometry of the air shower and magnetic field. Our simulations have thus provided, for the first time, a prediction of the radio emission from realistically modeled air showers. Furthermore, the model is currently being enhanced by substituting the parametrized particle distributions with CORSIKA[@CORSIKA]-generated distributions. We here present only a subset of the results derived with our Monte Carlo code. A more detailed analysis, including a parametrization of the derived dependences, has been published elsewhere[@Results].

Simulation results
==================

We first consider the very simple scenario of a vertical air shower with primary particle energy of $10^{17}\,$eV developing to its maximum at $\sim630\,$g$\,$cm$^{-2}$, i.e., at $\sim4000\,$m above ground. The geomagnetic field is adopted as 70$^{\circ}$ inclined with a strength of 0.5$\,$G, corresponding approximately to the field configuration in central Europe. Figure \[vertical\] shows the ground-level total field strength emission pattern at $10\,$MHz, visualized as a contour plot, and the frequency spectra derived at various radial distances from the shower center. The emission pattern shows remarkable symmetry and is almost circular. This is not a trivial result, as the emission process itself, i.e., the deflection of electrons and positrons in the geomagnetic field, is a highly directed process. The circularity of the footprint illustrates that most of the emission stems from particles having short track lengths.
A slight north-south asymmetry, introduced by the inclination of the geomagnetic field, is also visible. The frequency spectra shown in the right panel illustrate that the field strength drops quickly towards higher observing frequencies. This is a direct consequence of diminishing coherence as the wavelength becomes shorter and thus comparable to the dimensions of the air shower pancake, in particular its thickness of a few meters. The decrease is stronger at larger distances from the shower center. When one enters the incoherent regime, the frequency spectra exhibit unphysical-seeming features such as rapidly alternating series of maxima and minima. Realistic calculations of the emission in this regime can only be performed with a better underlying air shower model taking into account inhomogeneities in the shower in very mildly thinned calculations. Figure \[inclined\] demonstrates the changes arising in the transition from a vertical to a 45$^{\circ}$ inclined air shower (coming from the south). The emission pattern becomes considerably elongated along the shower axis. This is mainly a projection effect directly associated with the inclination of the shower axis. On closer inspection, however, the emission pattern becomes wider (and less peaked) as a whole, even in the direction perpendicular to the shower axis. The reason for this is that the maximum of the inclined shower, located at the same (slant) atmospheric depth of $\sim630\,$g$\,$cm$^{-2}$, is now at a much greater geometrical distance from the observer at ground-level. This geometric effect has a direct influence on the slope of the radio emission's lateral distribution. A look at the frequency spectra in the right panel shows that coherence is also retained up to higher frequencies in the case of inclined showers.
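To illustrate the coherence argument quantitatively (a toy model of our own, not part of the simulation code), one can approximate the shower pancake by a Gaussian profile in arrival time; the coherent field is then suppressed by the Gaussian form factor $\exp(-2\pi^2\nu^2\sigma_t^2)$, with $\sigma_t=\sigma/c$ for an assumed pancake thickness $\sigma$ of a few meters:

```python
import math

# Toy estimate (not from the actual Monte Carlo code): model the pancake as a
# Gaussian in arrival time with spread sigma_t = sigma / c, so the coherent
# field is suppressed by the Fourier form factor F(nu) = exp(-2 pi^2 nu^2 sigma_t^2).
c = 3.0e8            # speed of light (m/s)
sigma = 3.0          # assumed pancake thickness (m), "a few meters"
sigma_t = sigma / c  # corresponding arrival-time spread (s)

def coherence(nu_hz):
    """Suppression factor of the coherent emission at frequency nu_hz (Hz)."""
    return math.exp(-2.0 * (math.pi * nu_hz * sigma_t) ** 2)

for nu_mhz in (10, 30, 100):
    print(f"{nu_mhz:4d} MHz: F = {coherence(nu_mhz * 1e6):.3g}")
```

With these assumed numbers, coherence is largely retained at $10\,$MHz but essentially lost at $100\,$MHz, consistent with the qualitative behaviour of the spectra described above.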
Their larger radio footprint, combined with the large solid angle associated with medium to high zenith angles, thus makes inclined showers a particularly interesting target for radio observations[@Results; @Franzosen]. At near-horizontal inclination, even neutrino-induced air showers might become observable. Figure \[energyandxmax\] illustrates two additional parameters that have a direct influence on the radio signal. The left panel shows the impact of the primary particle's energy. The electric field strength at all distances scales as a power-law with the primary particle energy. The power-law index is very close to unity, i.e., that of the linear relation expected for coherent emission. At larger distances, the slope of the power-law becomes flatter due to the effect that more energetic showers on average penetrate deeper into the atmosphere and thus have their shower maximum geometrically closer to the observer. As already discussed in the context of inclined showers and illustrated in the right panel, this directly influences the lateral distribution of the radio emission. Since the depth of the shower maximum can in turn be related to the nature of the primary particle, its influence on the radio signal's lateral distribution can potentially be used to probe the primary particle composition with radio measurements[@ICRC]. Another important result of the simulations (not shown here explicitly) is the predicted linear polarization characteristics of the radio signal[@Results]. They can be used to directly verify the geomagnetic origin of the emission. To make the simulation results available for easy comparison with experimental data, they are also available as a parametrization formula[@Results].

Conclusions
===========

We have carried out elaborate Monte Carlo simulations of geosynchrotron radio emission from cosmic ray air showers.
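The linear-scaling statement can be made concrete with a short sketch (synthetic data of our own, not actual simulation output): if the field strength grows linearly with the primary energy, a least-squares fit in log-log space recovers a power-law index of one.

```python
import math

# Synthetic illustration: assume exactly linear scaling E_field = k * E_p
# (k arbitrary) and recover the power-law index from a log-log fit.
energies = [1e16, 1e17, 1e18]                    # assumed primary energies (eV)
fields = [2.0e-6 * e / 1e17 for e in energies]   # linear scaling, arbitrary k

xs = [math.log10(e) for e in energies]
ys = [math.log10(f) for f in fields]

# least-squares slope of log10(field) against log10(energy)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"fitted power-law index: {slope:.3f}")
```

In the simulations the fitted index deviates slightly from one at larger distances, for the geometrical reason discussed in the text.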
Special care has been taken to verify the Monte Carlo results with analytical calculations and historical data, giving us a good understanding of the emission process and thus solid confidence in the predictions. The simulations predict many important characteristics of the radio emission and their relation to parameters of the associated air shower. The total field strength emission pattern is very regular and symmetric in the coherent regime. The geomagnetic origin of the emission can be directly verified with polarization measurements. The frequency spectra cut off quickly toward high frequencies, making low observing frequencies around a few tens of MHz desirable. Inclined air showers have a much wider emission pattern and are thus particularly suitable for radio observations. The slope of the lateral distribution can be directly related to the geometrical distance between observer and shower maximum. It is thus sensitive not only to the shower zenith angle, but also to the nature of the primary particle. As expected for coherent emission, the electric field strength scales approximately linearly with the primary particle energy. These predictions will make it possible to analyze and interpret experimental data such as those provided by LOPES[@LOPES] and other experiments. H. Falcke and P. Gorham, [*Astropart. Phys.*]{} [**19**]{}, 477-494 (2003). T. Huege and H. Falcke, [*Astronomy Astroph.*]{} [**412**]{}, 19-34 (2003). T. Huege and H. Falcke, [*Astronomy Astroph.*]{} [**430**]{}, 779-798 (2005). D. Heck et al., [*Forschungszentrum Karlsruhe Report*]{} [**FZKA 6019**]{} (1998). T. Huege and H. Falcke, [*Astropart. Phys.*]{} [**in press**]{} (2005), astro-ph/0505180. T. Gousset, O. Ravel and C. Roy, [*Astropart. Phys.*]{} [**22**]{}, 103-107 (2004). T. Huege et al. - LOPES collaboration, [*Proc. 29th ICRC, Pune, India*]{} (2005). H. Falcke et al. - LOPES collaboration, [*Nature*]{} [**435**]{}, 313-316 (2005).
--- abstract: 'We present a data association method for vision-based multiple pedestrian tracking, using deep convolutional features to distinguish between different people based on their appearances. These re-identification (re-ID) features are learned such that they are invariant to transformations such as rotation, translation, and changes in the background, allowing consistent identification of a pedestrian moving through a scene. We incorporate re-ID features into a general data association likelihood model for multiple person tracking, experimentally validate this model by using it to perform tracking in two evaluation video sequences, and examine the performance improvements gained as compared to several baseline approaches. Our results demonstrate that using deep person re-ID for data association greatly improves tracking robustness to challenges such as occlusions and path crossings.' author: - 'Brian H. Wang$^{1}$, Yan Wang$^{2}$, Kilian Q. Weinberger$^{2}$, and Mark Campbell$^{1}$[^1][^2][^3]' bibliography: - 'references.bib' title: '**Deep Person Re-identification for Probabilistic Data Association in Multiple Pedestrian Tracking** ' --- Introduction ============ Visually tracking the motion of people through a scene over time is a critical capability for many applications involving camera-equipped robots or sensor networks. Examples range from an autonomous car tracking nearby pedestrians, to a team of aerial robots searching for moving people in a search-and-rescue mission. This problem can be broken down into two general stages: people must be detected in video frames, and these detections must then be linked together and used to estimate tracks over time. The first of these two tasks has been extensively studied with the usage of deep convolutional neural networks for object detection [@Girshick2014; @Krizhevsky2012; @Dai2016; @Ren2015]. 
State of the art algorithms such as Mask-RCNN [@He2017] are capable of detecting people at the pixel level with near-human level accuracy, and have been found to generalize well to different scenes. However, for multiple pedestrian tracking applications, it is not enough to just detect the presence of people - it is equally important to distinguish between individuals and correctly associate detections with currently tracked people. A robot which confuses individuals with one another could assume an incorrect number of people in its environment in an autonomous navigation or search-and-rescue scenario, or an intelligent sensor network could lose track of a person of interest in a security task. ![Examples of challenging situations for a multiple pedestrian tracker: a person being occluded by an obstacle, a large crowd, and two people crossing paths. []{data-label="fig:tracking_challenges"}](figures/challenges.pdf){width="\columnwidth"} This problem becomes highly challenging in the presence of occlusions, crowds, and pedestrians crossing paths with one another. See Figure \[fig:tracking\_challenges\] for an example of some of the situations that complicate multiple-person tracking. A method for modeling appearance and distinguishing between different people visually could provide key information to robustly handle these challenges. Person re-identification (re-ID) provides a promising solution to this problem [@Zheng2015; @Cheng2016; @Li2017; @Wang2018]. Given two images that each contain a person, a re-ID system can generate a likelihood that the two images are of the same person. Similar to people detection in a scene, re-ID is extremely intuitive for humans - consider how quickly most people can spot a friend in a crowd, or pick out a cameo appearance by a favorite celebrity in a movie. However, re-ID is challenging for a computer, and is therefore an active area of research in machine learning and computer vision. 
State of the art approaches to person re-ID involve modeling the appearances of individuals in a low-dimensional feature space that is learned with a deep convolutional neural network trained on a large dataset of images of many different people. The network is trained explicitly so that images of the same person are mapped to close-by locations in feature space, while images of different people are mapped to far apart locations. Modern person re-ID methods are becoming very successful as measured by performance on publicly available re-ID benchmark datasets [@Zheng2015; @Zheng2016a; @Ahmed2015; @Cheng2016; @Li2017; @Wang2018]. However, the usefulness of deep re-ID methods for application in probabilistic tracking algorithms has not yet been extensively studied. Traditionally in pedestrian tracking, position information is used to associate a new observation with a nearby tracked person. It is proposed here to additionally use appearance, via re-ID, as a cue to aid data association decisions, as seen in the example in Figure \[fig:reid\_data\_assoc\_example\]. This paper studies how well deep learning-based person re-identification improves data association in probabilistic multiple-person tracking. ![Given a set of previous detections of a person, deep person re-ID could hypothetically be used to re-identify this person within the set of detections seen in a new video frame.[]{data-label="fig:reid_data_assoc_example"}](figures/reid_one_person.pdf){width="\columnwidth"} Related Work ============ Probabilistic Data Association ------------------------------ Data association is a difficult problem in multiple-object tracking, and several proven approaches exist that achieve respectable performance. The Rao-Blackwellized particle filter (RBPF) probabilistically evaluates multiple data association decisions at each time step, and propagates multiple hypotheses of assignments forward in time [@Sarkka; @Schulz2003; @Miller2011a].
Multiple Hypothesis Tracking (MHT) works similarly, but rather than performing probabilistic sampling, it grows a tree of possible hypotheses according to deterministic branching decisions [@Blackman2004a; @Kim2015]. Some tracking methods perform data association over long periods of time, using probabilistic graphical models where more nodes are added to the graph as the length of time considered increases [@Zhang2008; @Yang2017; @Keuper2016]. These methods are able to jointly reason about multiple objects in the scene over multiple time steps, and therefore generally achieve higher overall accuracy compared to online methods. However, such methods are not well-suited for applications in robotics or intelligent sensor networks, which generally require tracking to be performed recursively one frame at a time, as the system captures video data sequentially. Person Re-identification ------------------------ The general goal of machine learning-based person re-identification is to learn a method for mapping images of people to low-dimensional feature vectors. These feature vectors should have the property that two images which are of the same person map to nearby feature points (as measured by Euclidean distance), while images of two different people map to feature points that are far apart. This is illustrated in Figure \[fig:reid\_example\]. Note that ideally, re-ID feature representations should be robust to changes in the background, pose, or orientation relative to the camera, as well as to partial occlusions by obstacles. People are able to perform re-identification by looking at visual cues such as clothing color, facial features, body shape, and distinctive accessories, to name just a few. Deep convolutional networks have been shown [@Girshick2014] to learn and extract high quality features from natural images, which are invariant to small local transformations such as rotation and translation. 
For our re-ID task, these invariances are crucial, as they help ensure that a person’s identity remains unchanged as he or she moves across a scene. Indeed, many successful approaches to re-ID have used deep convolutional neural networks to map images to re-ID feature vectors [@Ahmed2015; @Cheng2016; @Li2014; @Li2017]. Researchers have also created a number of high-quality benchmark datasets for the purpose of training and evaluating these methods [@Ristani2016; @Zheng2015; @Zheng2016a]. ![Visualization of how images of the same person will map to nearby points in re-ID feature space. In this figure, the re-ID feature space is shown flattened down to two dimensions.[]{data-label="fig:reid_example"}](figures/reid.pdf){width="\columnwidth"} Deep Learning for Multiple Pedestrian Tracking ---------------------------------------------- Deep learning has also been applied directly to the problem of multiple pedestrian tracking. In particular, recurrent neural networks (RNNs) have been employed for tracking [@Milan2016], due to their ability to effectively process time series data. Sadeghian et al. [@Sadeghian2017] use RNNs to learn motion, appearance, and interaction-based cues that indicate similarity between new observations and previously tracked pedestrians. While such methods have been shown to be effective given sufficient training data, it remains to be seen whether data-driven approaches to data association and tracking are viable for robotic applications, which typically involve significant variations in environments and operating conditions, necessitating a substantially greater amount of training data for successful performance.
Given a set of measurements $Z(k)$ and a set of tracked objects $X(k)$, where $k$ is the current time step, the goal of data association is to determine which measurement was generated by which tracked object. Individual measurements are denoted by $z_i(k)\in Z(k)$, $i=1,\dots,m_k$, and estimated object tracks are denoted by $x_j(k)\in X(k)$, $j=1,\dots,n_k$. For the pedestrian tracking task, each time step $k$ corresponds to a distinct video frame, and the measurements $Z(k)$ are a set of bounding boxes or segmentation masks found by a computer vision person detector. Data association is then a discrete assignment problem, where each detection must be assigned to a tracked person. This problem is shown more specifically in Figure \[fig:data\_assoc\_example\]. Each member of the set of new detections (Fig. \[fig:data\_assoc\_example\] center) must either be assigned to a previous track (Fig. \[fig:data\_assoc\_example\] left) or used to initialize a new track. In order to perform data association, a likelihood value is typically defined for each possible assignment from measurement $z_i(k)$ to track $x_j(k)$, denoted as $a_{ij}(k)$. Let $\theta_{ij}(k)$ be the event that the detection $z_i(k)$ was generated by person $x_j(k)$. The likelihood values $a_{ij}(k)$ are then defined as $$\label{eq:assoc_likelihoods} a_{ij}(k) \equiv P\left(\theta_{ij}(k)\vert z_i(k), x_j(k)\right)$$ for $i=1,\dots,m_k$ and $j=1,\dots,n_k$. Note that a separate likelihood model should also be defined for the event where detection $i$ is used to initialize a new track, as this falls outside the data association problem. Equation (\[eq:assoc\_likelihoods\]) defines the probability of an assignment event only conditioned on the current detection $z_i(k)$ and track state $x_j(k)$. This is the approach used in nearest-neighbors and one-shot data association strategies [@BarShalom2009].
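To make the assignment structure concrete, the following minimal sketch (our own illustration, not the paper's implementation) arranges the per-pair likelihoods into an $m_k \times n_k$ matrix and applies a nearest-neighbors rule; the toy Gaussian likelihood merely stands in for $P\left(\theta_{ij}(k)\vert z_i(k), x_j(k)\right)$.

```python
import numpy as np

def association_matrix(detections, tracks, likelihood):
    """Build the m_k x n_k matrix of assignment likelihoods a_ij(k).

    `likelihood(z, x)` stands in for P(theta_ij | z_i, x_j); any concrete
    model (e.g., a Gaussian in position) can be plugged in.
    """
    A = np.array([[likelihood(z, x) for x in tracks] for z in detections])
    # The events theta_ij are mutually exclusive and exhaustive for each
    # detection, so each row is normalized to sum to one.
    return A / A.sum(axis=1, keepdims=True)

def nearest_neighbor_assign(A):
    """One-shot data association: assign each detection to its most likely
    track (ignoring assignment conflicts, for illustration only)."""
    return A.argmax(axis=1)

# Toy example: 1-D positions, Gaussian likelihood with unit variance.
gauss = lambda z, x: np.exp(-0.5 * (z - x) ** 2)
A = association_matrix([0.1, 4.9], [0.0, 5.0], gauss)
print(nearest_neighbor_assign(A))  # detection 0 -> track 0, detection 1 -> track 1
```

More elaborate strategies (RBPF, MHT) consume the same matrix but reason over multiple hypotheses instead of taking the row-wise maximum.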
Other approaches, such as the Rao-Blackwellized particle filter and MHT, operate over the full history of measurements, $Z(1),\dots,Z(k)$, using (\[eq:assoc\_likelihoods\]) only for assignments at the last time step. Typically, even these approaches define recursive solutions, which closely examine the most recent event in (\[eq:assoc\_likelihoods\]). In this work, we focus on the individual likelihood shown in (\[eq:assoc\_likelihoods\]), but importantly, this approach can be generalized to any data association method [@BarShalom2009; @Sarkka; @Miller2011a; @Blackman2004a]. Since one of the events $\theta_{ij}(k)$ must explain the source of detection $z_i(k)$, and a detection cannot come from two different people, these events are mutually exclusive and exhaustive, and $$\sum_{j=1}^{n_k} a_{ij}(k) = 1 \quad \forall i=1,\dots,m_k.$$ The association likelihoods $a_{ij}$ then depend on the selection of the likelihood model $P\left(\theta_{ij}(k)\vert z_i(k), x_j(k)\right)$ in (\[eq:assoc\_likelihoods\]). Since many of the sensors traditionally used for robotics and tracking applications provide location measurements, data association likelihoods typically depend only on the location of the sensor measurement relative to the expected object position. As an example, points returned by a lidar sensor can only be reliably associated with tracked objects according to the point positions [@Miller2007]. In cases such as these, the sensor likelihood model can be accurately approximated as a Gaussian centered on the expected object position. When using object detections in a video frame, the measurement $z_i(k)$ includes detection position as well as size (for a bounding box) or shape (for a segmenting mask). However, these detections also contain useful information about the appearance of detected people. Thus, camera detection measurements can be decomposed into two independent measurements: a position component $z^{pos}_{i}(k)$ and an appearance component $z^{app}_{i}(k)$.
The measurement likelihood model then becomes $$\label{eq:p_asgn_given_z} a_{ij}(k) = P\left(\theta_{ij}(k)\vert z^{pos}_i(k), z^{app}_i(k)\right).$$ Using Bayes rule, (\[eq:p\_asgn\_given\_z\]) can be rewritten as $$a_{ij}(k) = \alpha P\left(z^{pos}_i(k), z^{app}_i(k)\vert \theta_{ij}(k)\right) \times P\left(\theta_{ij}(k)\right),$$ where $\alpha$ is a normalization factor. The term $P\left(\theta_{ij}(k)\right)$ represents any prior information that may be available about the likelihood of assignments; it is common to simply use a uniform distribution over all possible assignments [@Miller2011a]. Assuming a uniform prior, and noting that the position and appearance components of the measurement are independent of one another, the assignment likelihood values are therefore $$\label{eq:assignment_model_pos_app} a_{ij}(k) = \alpha P\left(z^{pos}_i(k)\vert \theta_{ij}(k)\right)P\left(z^{app}_i(k)\vert \theta_{ij}(k)\right).$$ The likelihood of detection $z_i(k)$ being observed at a certain position $z^{pos}_i(k)$ given that it has been assigned to person $j$ can be accurately modeled as Gaussian, with the likelihood decreasing as distance to the estimated position of person $j$ increases. Deriving the detection appearance model $P\left(z^{app}_i(k)\vert \theta_{ij}(k)\right)$ requires augmenting the estimated state vector for each tracked person to include information on the person’s appearance, in the form of re-ID features. The augmented state vector for person $j$ is defined as $$\bar{x}_j(k) = \begin{bmatrix} x_j(k)\\ f_j(k) \end{bmatrix},$$ where $f_j(k)$ is a vector in re-ID feature space. For tracking using bounding box detections, the positional component $x_j(k)$ stores the tracked person’s estimated position, velocity, and bounding box size. In our data association approach, the stored feature vector $f_j(k)$ is formed from a moving average of all feature vectors from detections assigned to person $j$.
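The moving-average update of the stored feature vector $f_j(k)$ can be sketched as follows; this incremental form and the variable names are our own illustration, not code from the paper.

```python
import numpy as np

def update_reid_feature(f_j, n_assigned, g_i):
    """Incrementally update track j's reference re-ID feature f_j(k) as the
    running mean of all detection features assigned to it so far.

    f_j        -- current averaged feature vector (None if no detections yet)
    n_assigned -- number of detections previously assigned to track j
    g_i        -- re-ID feature vector of the newly assigned detection
    """
    if f_j is None:
        return np.asarray(g_i, dtype=float), 1
    f_new = (n_assigned * f_j + np.asarray(g_i, dtype=float)) / (n_assigned + 1)
    return f_new, n_assigned + 1

# After three assignments, f_j equals the mean of the three feature vectors.
f, n = update_reid_feature(None, 0, [1.0, 0.0])
f, n = update_reid_feature(f, n, [0.0, 1.0])
f, n = update_reid_feature(f, n, [0.5, 0.5])
print(f)  # [0.5 0.5]
```

Averaging over all assigned detections smooths out frame-to-frame appearance noise in the reference vector.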
Re-ID Likelihood Model ---------------------- One final question in the data association likelihood model is how to transform Euclidean distances between re-ID feature points into the appearance likelihood values $P\left(z^{app}_i(k)\vert \theta_{ij}(k)\right)$ used in (\[eq:assignment\_model\_pos\_app\]). When new detections are received, they are each converted to re-ID feature vectors using a deep re-ID model. Each tracked person has an averaged reference feature vector $f_j(k)$ associated with them. Then, given a new detection $z^{app}_i(k)$, which is converted into a re-ID feature vector $g_{i}(k)$, the likelihood of assigning it to person $j$ can be calculated using the softmin function over all tracked person reference vectors, giving $$P\left(\theta_{ij}(k)\vert z^{app}_i(k)\right) = \beta_i\exp\left(-\Vert g_i(k) - f_j(k)\Vert\right).$$ $\beta_i$ is a normalization term used to ensure that $\sum_{j=1}^{n_k}P\left(\theta_{ij}(k)\vert z^{app}_i(k)\right)=1$, and therefore $$\beta_i = \left(\sum_{j=1}^{n_k} \exp\left(-\Vert g_i(k) - f_j(k)\Vert\right)\right)^{-1}.$$ Under the uniform assignment prior, this posterior is proportional to the appearance likelihood $P\left(z^{app}_i(k)\vert \theta_{ij}(k)\right)$, with the constant of proportionality absorbed into $\alpha$. The softmax function is commonly used in machine learning to convert from feature vectors to class likelihoods in multiclass classification problems; since we are here interested in the minimal distance between pairs of feature vectors, we instead use a softmin to form a discrete probability distribution for $P\left(\theta_{ij}(k)\vert z^{app}_i(k)\right)$ over the $n_k$ possible person assignments. The full data association likelihoods shown in (\[eq:assignment\_model\_pos\_app\]) are then finally computed by multiplying these appearance similarity-based probabilities together with the Gaussian detection position likelihoods. Deep Anytime Re-ID (DaRe) ------------------------- In our multiple pedestrian tracker, we use the Deep Anytime Re-ID (DaRe) architecture from Wang et al. [@Wang2018] to perform person re-identification through transforming images of detected people to re-ID feature vectors.
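The softmin construction above can be sketched numerically; the feature vectors here are made up purely for illustration.

```python
import numpy as np

def appearance_likelihoods(g_i, reference_features):
    """Softmin over Euclidean distances in re-ID feature space.

    g_i                -- feature vector g_i(k) of a new detection
    reference_features -- averaged reference vectors f_j(k), one per track
    Returns P(theta_ij | z_app_i) for j = 1, ..., n_k (sums to one).
    """
    dists = np.array([np.linalg.norm(g_i - f_j) for f_j in reference_features])
    weights = np.exp(-dists)        # exp(-||g_i - f_j||)
    return weights / weights.sum()  # beta_i normalization

# A detection whose feature vector lies close to track 0's reference
# receives most of the probability mass.
g = np.array([1.0, 0.0])
refs = [np.array([1.1, 0.0]), np.array([-1.0, 0.0])]
p = appearance_likelihoods(g, refs)
print(p.argmax())  # 0
```

Multiplying these values with the Gaussian position likelihoods, as in the equation above, yields the full association likelihoods.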
In addition to achieving state of the art performance on re-ID benchmarks, DaRe is particularly well-suited for robotic applications, because it utilizes varying amounts of computational resources depending on the person being re-identified. As an example, a person who is wearing a distinctive outfit and is clearly visible is identified using only the first stage of the DaRe convolutional neural network; someone else who is partially occluded, blurred by camera motion, or wearing clothes that blend into the background is identified using a greater number of convolutional layers. At each stage, DaRe calculates a confidence value for the re-identification result, and stops computation when sufficiently confident. The overall effect is a significant reduction in computation time, as compared to approaches that apply computation uniformly to each person. Our implemented model is trained on the MARS dataset [@Zheng2016a], and makes use of a dense convolutional neural network (DenseNet) architecture [@Huang2016]. Experimental Validation ======================= In order to experimentally examine the effects of incorporating appearance re-ID into a data association likelihood model, we present results from implementing the data association strategy described above within a Rao-Blackwellized particle filter. The algorithm tracks multiple moving pedestrians within two video sequences showing complex street scenes. In order to understand the impact of re-ID on data association, results are presented using four data association methods: detection position only, deep re-ID likelihood only, position along with a simple appearance model, and finally position combined with deep re-ID. Results are analyzed quantitatively using various numerical metrics of tracking performance, and qualitatively by examining specific cases where the inclusion of deep re-ID improves tracker robustness to certain difficulties.
Rao-Blackwellized Particle Filter --------------------------------- There are many different algorithms for performing data association and multiple-object tracking, based on the association likelihoods defined in (\[eq:assignment\_model\_pos\_app\]). In order to consider uncertain data association decisions, our experiments utilize a Rao-Blackwellized particle filter (RBPF) [@Miller2007; @Miller2011a], which samples data association hypotheses (in the form of particles) based on the likelihood model in (\[eq:assignment\_model\_pos\_app\]). Individual objects are tracked using efficient parametric trackers, and decisions on when to initiate new tracks from detections are also made probabilistically according to the detection association likelihoods. Detailed treatments of the RBPF can be found in S[ä]{}rkk[ä]{} et al. [@Sarkka] and Miller and Campbell [@Miller2007]. Our RBPF uses separate linear motion model Kalman filters to track each individual person. Linear motion is only an approximation for the true patterns of pedestrian motion; person detectors provide frequent enough measurement updates to correct for model inaccuracy and adequately track pedestrians. The RBPF could certainly be extended with a more complex physics-based or data-driven motion model; this is outside the focus of this paper. In order to demonstrate the applicability of deep person re-ID to multiple-pedestrian tracking, the RBPF tracker was applied to a pair of challenging video sequences taken from the Multiple Object Tracking Challenge (MOTC) benchmark [@Leal-Taixe2015; @Milan2016a]. The PETS09-S2L1 and MOT17-04 sequences are used, shown in Figures \[fig:pets780\] and \[fig:mot1704\] respectively. These sequences show people walking in various patterns, with occlusions by each other and by other objects, and therefore provide a useful evaluation for tracking. 
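The per-person linear motion model mentioned above can be sketched as a constant-velocity Kalman filter. This is a generic textbook form, not the paper's exact implementation; the time step, process noise `Q`, and measurement noise `R` values are assumptions chosen for illustration.

```python
import numpy as np

dt = 1.0  # one video frame per time step
F = np.array([[1, 0, dt, 0],   # constant-velocity transition:
              [0, 1, 0, dt],   # position += velocity * dt
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # the detector measures position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)           # process noise (assumed)
R = 1.0 * np.eye(2)            # detection noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle for a single tracked pedestrian."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the detection position assigned to this track
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# A person moving right at ~1 px/frame; the filter recovers the velocity.
x, P = np.zeros(4), 10.0 * np.eye(4)
for k in range(1, 30):
    x, P = kf_step(x, P, np.array([float(k), 0.0]))
print(round(x[2], 1))  # estimated x-velocity, approximately 1.0
```

The innovation covariance `S` is exactly the Gaussian used for the position likelihood in the association model, so the same machinery serves both filtering and data association.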
Detection bounding boxes and ground truth person locations are provided for each sequence, with the ground truth data being used only for performance evaluation after running the tracker. It should also be noted that our deep re-ID architecture is trained only on the MARS dataset [@Zheng2016a], and not on images of people from either of the MOTC videos. Therefore, this experiment also studies the ability of our deep re-ID system to generalize to new images of previously unseen people. For each sequence, several data association strategies are compared in a controlled study. The data association variants we use are as follows: using position only, using deep re-ID only, using a benchmark color histogram appearance model, and finally, using position along with deep re-ID. The benchmark appearance model uses a color histogram in HSV color space to judge visual similarity between tracked people and new detections, and is included in order to evaluate the performance afforded by deep re-ID as compared to a much simpler, non-machine learning-based model. ![A frame from the PETS09-S2L1 video sequence.[]{data-label="fig:pets780"}](figures/pets784.jpg){width="\columnwidth"} ![A frame from the MOT17-04 video sequence.[]{data-label="fig:mot1704"}](figures/mot1704_282.jpg){width="\columnwidth"} Performance is measured quantitatively with the widely used CLEAR MOT metrics, consisting of multiple object tracking accuracy (MOTA) and multiple object tracking precision (MOTP) [@Bernardin2008]. MOTA decreases as the rate of false positives, false negatives, or ID switches increases. An example of an ID switch is shown in Figure \[fig:idswitch\], where the tracker mistakenly swaps the tracks of two different people. The hypothesis behind using deep re-ID for data association is that it should increase MOTA by decreasing the number of ID switches in particular, since ID switches are tracker errors that are directly caused by data association mistakes.
In cases such as that seen in Figure \[fig:idswitch\], where two people walk next to one another, data association based on detection positions only is extremely ambiguous. A small amount of detection noise or unexpected motion could easily cause an ID switch. However, incorporating re-ID into data association should intuitively protect against such occurrences, as the tracker could then use appearance information to disambiguate nearby people from one another. ![An example of an ID switch caused by an error from position-only data association. Colors represent different tracks. As two people walk side-by-side, the position-only RBPF tracker initially tracks them correctly (left), but later on switches their track IDs (right).[]{data-label="fig:idswitch"}](figures/idswitch/idswitch.pdf){width="\columnwidth"} MOTP measures the precision with which people’s exact locations are known. This is primarily influenced by the precision of the object detector; a detector that fits bounding boxes more closely around people, or avoids bounding boxes entirely in favor of segmenting masks, would attain higher precision. On the other hand, the choice of data association strategy has no direct influence on precision, and so we do not expect a large effect on MOTP.

  Method           MOTA        MOTP        FP        FN        ID Sw.
  ---------------- ----------- ----------- --------- --------- --------
  Pos. only        0.905       0.644       154       260       30
  Re-ID only       0.109       0.544       597       3434      112
  Pos.+Hist.       -0.168      0.560       4801      411       218
  **Pos.+re-ID**   **0.929**   **0.656**   **114**   **210**   **6**

  : Results on sequence PETS09-S2L1[]{data-label="table:seq_pets"}

  Method           MOTA        MOTP        FP         FN          ID Sw.
  ---------------- ----------- ----------- ---------- ----------- --------
  Pos. only        0.445       0.820       7831       18390       160
  Re-ID only       0.156       0.793       2919       36684       535
  Pos.+Hist.       0.195       0.738       21977      15437       862
  **Pos.+Re-ID**   **0.533**   **0.863**   **2253**   **19871**   **83**

  : Results on sequence MOT17-04[]{data-label="table:seq_1704"}

Quantitative Analysis --------------------- Tracking results from running the RBPF tracker on the two evaluation sequences are shown in Tables \[table:seq\_pets\] and \[table:seq\_1704\]. MOTA, MOTP, FP, FN, and ID Sw. indicate Multiple Object Tracking Accuracy, Multiple Object Tracking Precision, false positives, false negatives, and ID switches, respectively. The PETS09-S2L1 sequence includes 795 video frames showing 19 pedestrians walking, with a total of 4650 ground truth person annotations over all frames. This video sequence is fairly sparse; at any given time, 2 to 8 pedestrians are seen in the video frame simultaneously. The MOT17-04 sequence includes 1050 frames showing 83 pedestrians, with 47557 total ground truth annotations. This sequence is much more complex than the PETS09-S2L1 video; as seen in Figure \[fig:mot1704\], at times upwards of 30 people can be observed in the video frame, greatly increasing the difficulty of tracking. The most significant effect of combining re-ID with position-based data association is a reduction in the number of ID switches, validating our earlier hypothesis. Compared to data association with position only, using position as well as re-ID caused a drop in ID switches from 30 to 6 in the PETS09-S2L1 sequence (an 80% reduction), and from 160 to 83 in the MOT17-04 sequence (a 48% reduction). In order to realize the benefits of re-ID data association, it is not enough to use re-ID without position information, or to use a simple color histogram appearance model. Both approaches perform poorly. In the PETS09-S2L1 sequence, the color histogram method creates so many false tracks that its false positive count exceeds the total number of ground truth annotations, causing negative MOTA overall.
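The reported MOTA values can be reproduced from the error counts and ground truth totals above using the standard CLEAR MOT definition [@Bernardin2008], $\mathrm{MOTA} = 1 - (\mathrm{FP} + \mathrm{FN} + \mathrm{IDSW})/\mathrm{GT}$:

```python
def mota(fp, fn, id_switches, num_ground_truth):
    """CLEAR MOT accuracy: 1 - (FP + FN + IDSW) / GT."""
    return 1.0 - (fp + fn + id_switches) / num_ground_truth

# PETS09-S2L1: 4650 ground truth annotations
print(round(mota(154, 260, 30, 4650), 3))       # 0.905 (Pos. only)
print(round(mota(114, 210, 6, 4650), 3))        # 0.929 (Pos.+re-ID)

# MOT17-04: 47557 ground truth annotations
print(round(mota(7831, 18390, 160, 47557), 3))  # 0.445 (Pos. only)
print(round(mota(2253, 19871, 83, 47557), 3))   # 0.533 (Pos.+Re-ID)
```

Because all three error types enter the numerator with equal weight, a method that trades a few ID switches for thousands of false positives (as the color histogram variant does) is penalized heavily.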
The poor performance of this method could be explained by the fact that color histograms are heavily influenced by colors in the background, as well as by the color of large objects such as a person’s coat. In contrast, a re-ID feature representation can be learned such that it ignores irrelevant information. As expected, MOTP is not significantly affected by the inclusion of deep re-ID for data association as opposed to the position-only setting, though we do observe a modest increase. MOTP does however drop when MOTA is very low, as in the re-ID only and color histogram data association cases. Qualitative Analysis -------------------- We next present a qualitative analysis of a scenario from the tracking results, in order to illustrate how deep re-ID assists data association and reduces the occurrence of ID switches, as seen from our quantitative results. Figure \[fig:comparison\_pets\] shows a series of frames from the PETS09-S2L1 sequence, where a group of people walk across the scene. Tracking results with and without re-ID data association are shown, with the left-side column showing position-only data association tracking, and the right-side column showing tracking using position and re-ID. In frame 150 of the position-only case, an ID switch occurs as one person (in the yellow bounding box) walks behind a signpost while another person, who previously had been lost by the tracker due to an extended period of occlusion, walks out from behind it at nearly the same location. The yellow box can be seen to have switched from one person to the other in frame 155. As the first person walks out from the other side of the signpost in frame 160, a second ID switch from a different person occurs, again due to occlusion, as seen from the movement of the purple box. In frame 165, the yellow box again switches between people, causing one person to be missed by the tracker in frame 170 until a new track is initialized in frame 175. 
When re-ID as well as position are used for data association, these ID switches do not occur, and the tracker is able to successfully use the distinct appearances of these people to perform reliable tracking even in the presence of occlusions and path crossings. ![Frames 145 to 175 of the PETS09-S2L1 sequence, showing tracking results with position-only data association (left) and position with re-ID data association (right). Frames are cropped from the originals to better focus on this group of people.[]{data-label="fig:comparison_pets"}](figures/figure_comparison/comparison.pdf){width="\columnwidth"} Conclusion ========== A general approach to augmenting traditional sensor likelihood models with deep person re-identification is presented, for application in multiple person tracking. We describe the process of converting images of people into convolutional feature vectors, using a learned deep re-ID model, and show how these feature vectors can be used within the general data association framework. Our results indicate that person re-ID significantly increases tracking performance as compared to data association that uses detection position only, according to quantitative measures of tracking accuracy and consistency. In particular, the usage of deep re-ID for data association is seen to cause an 80% drop in ID switches for tracking in the PETS09-S2L1 video sequence, as well as a 48% reduction in the much more crowded and complex MOT17-04 sequence. The usage of deep re-ID is additionally seen qualitatively to increase tracking robustness to difficulties such as occlusions and path crossings. [^1]: This work was supported by the Office of Naval Research, under grant N00014-17-1-2175. [^2]: $^{1}$Autonomous Systems Lab, Department of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14850, USA. [{bhw45, mc288}@cornell.edu]{} [^3]: $^{2}$Department of Computer Science, Cornell University, Ithaca, NY 14850, USA. [{yw763, kqw4}@cornell.edu]{}
--- abstract: 'We experimentally investigate spin-orbit torques and spin pumping in NiFe/Pt bilayers with direct and interrupted interfaces. The damping-like and field-like torques are simultaneously measured with spin-torque ferromagnetic resonance tuned by a dc bias current, whereas spin pumping is measured electrically through the inverse spin Hall effect using a microwave cavity. Insertion of an atomically thin Cu dusting layer at the interface reduces the damping-like torque, field-like torque, and spin pumping by nearly the same factor of $\approx$1.4. This finding confirms that the observed spin-orbit torques predominantly arise from diffusive transport of spin current generated by the spin Hall effect. We also find that spin-current scattering at the NiFe/Pt interface contributes to additional enhancement in magnetization damping that is distinct from spin pumping.' author: - Tianxiang Nan - Satoru Emori - 'Carl T. Boone' - Xinjun Wang - 'Trevor M. Oxholm' - 'John G. Jones' - 'Brandon M. Howe' - 'Gail J. Brown' - 'Nian X. Sun' date: 'May 25, 2015' title: 'Comparison of spin-orbit torques and spin pumping across NiFe/Pt and NiFe/Cu/Pt interfaces' --- Introduction {#sec:intro} ============ Current-induced torques due to spin-orbit effects [@Brataas2014; @Gambardella2011a; @Haney2013a] potentially allow for more efficient control of magnetization than the conventional spin-transfer torques [@Ralph2008; @Brataas2012a]. The spin Hall effect [@Hoffmann2013] is reported to be the dominant source of spin-orbit torques in thin-film bilayers consisting of a ferromagnet (FM) interfaced with a normal metal (NM) with strong spin-orbit coupling. 
Of particular technological interest is the spin-Hall “damping-like” torque that induces magnetization switching [@Miron2011; @Liu2012; @Bhowmik2014; @Yu2014b], domain-wall motion [@Haazen2013; @Emori2014d; @Ryu2014; @Ueda2014a], and high-frequency magnetization dynamics [@Demidov2012; @Liu2012a; @Liu2013; @Duan2014a; @Ranjbar2014; @Hamadeh2014a]. While this spin-Hall torque originates from spin-current generation within the bulk of the NM layer, the magnitude of the torque depends on the transmission of spin current across the FM/NM interface [@Haney2013a]. Some FM/NM bilayers with $\sim$1-nm thick FM exhibit another spin-orbit torque that is phenomenologically identical to a torque from an external magnetic field [@Kim2013a; @Garello2013; @Fan2013; @Fan2014; @Pai2014; @Pai2014a; @Emori2014c; @Woo2014]. This “field-like” torque is also interface-dependent, because it may emerge from the Rashba effect at the FM/NM interface [@Gambardella2011a], or the nonadiabaticity [@Ralph2008] of spin-Hall-generated spin current transmitted across the interface [@Haney2013a; @Fan2013; @Fan2014; @Pai2014]. To understand the influence of the FM/NM interface on magnetization dynamics, many studies have experimentally investigated resonance-driven spin pumping from FM to NM [@Tserkovnyak2002a; @Tserkovnyak2002], detected with enhanced damping [@Ghosh2011; @Boone2013; @Boone2014; @Heinrich2011; @Sun2013a] or dc voltage due to the inverse spin Hall effect [@Azevedo2005a; @Saitoh2006; @Mosendz2010a; @Czeschka2011a; @Ando2011; @Deorani2013; @Weiler2014c; @Obstbaum2014; @Rojas-Sanchez2014; @Wang2014]. The parameter governing spin-current transmission across the FM/NM interface is the spin-mixing conductance $G_{\uparrow\downarrow}$ (Ref. [@Brataas2000]). Simultaneously investigating spin pumping and spin-orbit torques, which are theoretically reciprocal effects [@Brataas2012a], should reveal the interface dependence of the observed torques in FM/NM.
Here we investigate spin-orbit torques and magnetic resonance in in-plane magnetized NiFe/Pt bilayers with direct and interrupted interfaces. To modify the NiFe/Pt interface, we insert an atomically thin dusting layer of Cu that does not exhibit strong spin-orbit effects by itself. We use spin-torque ferromagnetic resonance (ST-FMR) [@Sankey2007; @Liu2011] combined with dc bias current to extract the damping-like and field-like torques simultaneously. We also independently measure the dc voltage generated by spin pumping across the FM/NM interface. The interfacial dusting reduces the damping-like torque, field-like torque, and spin pumping by the same factor. This finding is consistent with the diffusive spin-Hall mechanism [@Haney2013a; @Boone2013] of spin-orbit torques, where spin transfer between NM and FM depends on the interfacial spin-mixing conductance. ![image](Figure1.eps){width="1.1\columnwidth"} Experimental Details {#sec:exp} ==================== Samples {#subsec:samples} ------- The two film stacks compared in this study are *sub*/Ta(3)/Ni$_{80}$Fe$_{20}$(2.5)/Pt(4) (“NiFe/Pt”) and *sub*/Ta(3)/Ni$_{80}$Fe$_{20}$(2.5)/Cu(0.5)/Pt(4) (“NiFe/Cu/Pt”), where the numbers in parentheses are nominal layer thicknesses in nm and *sub* is a Si(001) substrate with a 50-nm thick SiO$_2$ overlayer. All layers were sputter-deposited at an Ar pressure of $3\times10^{-3}$ Torr with a background pressure of $\lesssim$1$\times10^{-7}$ Torr. The atomically thin dusting layer of Cu modifies the NiFe/Pt interface with minimal current shunting. The Ta seed layer facilitates the growth of thin NiFe with narrow resonance linewidth and near-bulk saturation magnetization [@Ghosh2011; @Boone2014]. We measured the saturation magnetization $M_s=(5.8\pm0.4)\times10^5$ A/m for both NiFe/Pt and NiFe/Cu/Pt with vibrating sample magnetometry. 
From four-point measurements on various film stacks and assuming that individual constituent layers are parallel resistors, we estimate the resistivities of Ta(3), NiFe(2.5), Cu(0.5), and Pt(4) to be 240 $\mu\Omega$cm, 90 $\mu\Omega$cm, 60 $\mu\Omega$cm, and 40 $\mu\Omega$cm, respectively. Approximately 70% of the charge current thus flows in the Pt layer. In the subsequent analysis, we also include the small damping-like torque and the Oersted field from the highly resistive Ta layer (see Appendix A). Spin-torque ferromagnetic resonance {#subsec:STFMR} ----------------------------------- We fabricated 5-$\mu$m wide, 25-$\mu$m long microstrips of NiFe/Pt and NiFe/Cu/Pt with Cr/Au ground-signal-ground electrodes using photolithography and liftoff. We probed magnetization dynamics in the microstrips using ST-FMR (Refs. [@Sankey2007; @Liu2011]) as illustrated in Fig. \[fig:STFMR\](a): an rf current drives resonant precession of magnetization in the bilayer, and the rectified anisotropic magnetoresistance voltage generates an FMR spectrum. The rf current, with an output power of +8 dBm, was modulated at a frequency of 437 Hz so that the rectified voltage could be detected with a lock-in amplifier. The ST-FMR spectrum (e.g., Fig. \[fig:STFMR\](b)) was acquired at a fixed rf driving frequency by sweeping an in-plane magnetic field $|\mu_0 H|<80$ mT applied at an angle $|\phi| = 45\degree$ from the current axis. The rectified voltage $V_{mix}$ constituting the ST-FMR spectrum is fit to a Lorentzian curve of the form $$\label{eq:lorentzian} \begin{split} V_{mix} = & S\frac{W^2}{(\mu_0H-\mu_0H_{FMR})^2+W^2}\\ &+A\frac{W(\mu_0H-\mu_0H_{FMR})}{(\mu_0H-\mu_0H_{FMR})^2+W^2}, \end{split}$$ where $W$ is the half-width-at-half-maximum resonance linewidth, $H_{FMR}$ is the resonance field, $S$ is the symmetric Lorentzian coefficient, and $A$ is the antisymmetric Lorentzian coefficient. Representative fits are shown in Fig. \[fig:STFMR\](c).
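The fit in Eq. \[eq:lorentzian\] is a standard nonlinear least-squares problem. Below is a minimal sketch using SciPy on a synthetic spectrum; the amplitudes, resonance field, linewidth, and noise level are illustrative assumptions, not measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def v_mix(H, S, A, H_fmr, W):
    """Eq. (1): symmetric plus antisymmetric Lorentzian (all fields in tesla)."""
    d = H - H_fmr
    return S * W**2 / (d**2 + W**2) + A * W * d / (d**2 + W**2)

# Synthetic ST-FMR spectrum over the |mu0 H| < 80 mT sweep used in the text
H = np.linspace(-0.08, 0.08, 400)
rng = np.random.default_rng(0)
V = v_mix(H, S=2.0e-6, A=1.5e-6, H_fmr=0.030, W=0.008) \
    + rng.normal(0.0, 5e-8, H.size)          # additive detector noise

(S_fit, A_fit, Hfmr_fit, W_fit), _ = curve_fit(
    v_mix, H, V, p0=[1e-6, 1e-6, 0.025, 0.010])
```

The fitted $W$ and $H_{FMR}$ feed the dc-tuned analysis below, while $S$ and $A$ enter the lineshape comparison in the final subsection.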
The lineshape of the ST-FMR spectrum, parameterized by the ratio of $S$ to $A$ in Eq. \[eq:lorentzian\], has been used to evaluate the ratio of the damping-like torque to the net effective field from the Oersted field and field-like torque [@Liu2011; @Kondou2012a; @Skinner2014; @Wang2014c; @Mellnik2014; @Pai2014a]. To decouple the damping-like torque from the field-like torque, the magnitude of the rf current in the bilayer would need to be known [@Liu2011; @Wang2014c]. Other contributions to $V_{mix}$ (Refs. [@Yamaguchi2007a; @Ganguly2014; @Kasai2014]) may also affect the analysis based on the ST-FMR lineshape. ![image](Figure2.eps){width="1.25\columnwidth"} We use a modified approach where an additional dc bias current $I_{dc}$ in the bilayer, illustrated in Fig. \[fig:STFMR\](a), transforms the ST-FMR spectrum as shown in Fig. \[fig:STFMR\](c). A high-impedance current source outputs $I_{dc}$, and we restrict $|I_{dc}|\leq2$ mA (equivalent to the current density in Pt $|J_{c,Pt}|~<~10^{11}$ A/m$^2$) to minimize Joule heating and nonlinear dynamics. The dependence of the resonance linewidth $W$ on $I_{dc}$ allows for quantification of the damping-like torque [@Ando2008a; @Liu2011; @Demidov2011; @Pai2012; @Ganguly2014; @Kasai2014; @Duan2014b; @Emori2015], while the change in the resonance field $H_{FMR}$ yields a direct measure of the field-like torque [@Mellnik2014]. Thus, dc-tuned ST-FMR quantifies both spin-orbit torque contributions. Electrical detection of spin pumping {#subsec:spinpumping} ------------------------------------ ![image](Figure3.eps){width="1.8\columnwidth"} The inverse spin Hall voltage $V_{ISH}$ due to spin pumping was measured in 100-$\mu$m wide, 1500-$\mu$m long strips of NiFe/Pt and NiFe/Cu/Pt with Cr/Au electrodes attached on both ends, similar to the sub-mm wide strips used in Ref. [@Emori2015]. These NiFe/(Cu/)Pt strips were fabricated on the same substrate as the ST-FMR device sets described in Sec. \[subsec:STFMR\].
The sample was placed in the center of a rectangular TE$_{102}$ microwave cavity operated at a fixed rf excitation frequency of 9.55 GHz and rf power of 100 mW. A bias field $H$ was applied within the film plane and transverse to the long axis of the strip. The dc voltage $V_{dc}$ across the sample was measured using a nanovoltmeter while sweeping the field, as illustrated in Fig. \[fig:ISHcartoon\](a). The acquired $V_{dc}$ spectrum is fit to Eq. \[eq:lorentzian\] as shown by a representative result in Fig. \[fig:ISHcartoon\](b). The inverse spin Hall voltage is defined as the amplitude of the symmetric Lorentzian coefficient $S$ in Eq. \[eq:lorentzian\] (Refs. [@Mosendz2010a; @Czeschka2011a; @Ando2011; @Deorani2013; @Rojas-Sanchez2014]). We note that the antisymmetric Lorentzian coefficient is substantially smaller, indicating that the voltage signal from the inverse spin Hall effect dominates over that from the anomalous Hall effect. ![image](Figure4.eps){width="1.4\columnwidth"} Results and Analysis ==================== Magnetic resonance properties ----------------------------- Fig. \[fig:broadband\](a) shows the ST-FMR linewidth $W$ as a function of frequency $f$ for NiFe/Pt and NiFe/Cu/Pt at $I_{dc} = 0$ and $\pm2$ mA. The Gilbert damping parameter $\alpha$ is calculated for each sample in Fig. \[fig:broadband\](a) from $$\label{eq:damping} W = W_0 + \frac{2\pi\alpha}{|\gamma|}f,$$ where $W_0$ is the inhomogeneous linewidth broadening, $f$ is the frequency, and $\gamma$ is the gyromagnetic ratio. With the Landé g-factor $g_L = 2.10$ for NiFe (Refs. [@Mizukami2002; @Ghosh2011; @Weiler2014c; @Boone2014]), $|\gamma|/2\pi = $(28.0 GHz/T)$\cdot (g_L/2) = 29.4$ GHz/T. From the slope in Fig. \[fig:broadband\](a) at $I_{dc} = 0$, $\alpha = 0.043\pm0.001$ for NiFe/Pt and $\alpha = 0.027\pm0.001$ for NiFe/Cu/Pt.
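Extracting $\alpha$ from Eq. \[eq:damping\] amounts to a linear fit of linewidth against frequency. A minimal sketch with noiseless synthetic linewidths generated from the NiFe/Pt value $\alpha = 0.043$ (the $W_0$ value and frequency grid are arbitrary assumptions):

```python
import numpy as np

gamma_over_2pi = 29.4e9              # Hz/T for g_L = 2.10, as in the text
f = np.linspace(4e9, 10e9, 7)        # drive frequencies (Hz); illustrative
alpha_true, W0_true = 0.043, 0.5e-3  # assumed inhomogeneous broadening (T)

# Eq. (2): W = W0 + (2*pi*alpha/|gamma|) f = W0 + alpha * f / (|gamma|/2pi)
W = W0_true + alpha_true * f / gamma_over_2pi

slope, W0_fit = np.polyfit(f, W, 1)
alpha = slope * gamma_over_2pi       # recovers 0.043
```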
The reduction in damping with interfacial Cu-dusting is consistent with prior studies on FM/Pt with nm-thick Cu insertion layers [@Weiler2014c; @Ghosh2011; @Boone2014; @Rojas-Sanchez2014; @Sun2013a]. A fit of $H_{FMR}$ versus frequency at $I_{dc} = 0$ to the Kittel equation $$\label{eq:kittel} \begin{split} \mu_0 H_{FMR} = &\tfrac{1}{2}\left(-\mu_0 M_{eff} +\sqrt{(\mu_0 M_{eff})^2+4(f/\gamma)^2}\right) \\ &-\mu_0 H_k+\mu_0 \Delta H_{FMR}(I_{dc}), \end{split}$$ shown in Figs. \[fig:broadband\](b),(c), gives the effective magnetization $M_{eff} = 5.6\times10^5$ A/m for NiFe/Pt and $5.9\times10^5$ A/m for NiFe/Cu/Pt, with the in-plane anisotropy field $|\mu_0 H_k|<1$ mT. $M_{eff}$ and $M_{s}$ are indistinguishable within experimental uncertainty, implying negligible perpendicular magnetic anisotropy in NiFe/(Cu/)Pt. When $I_{dc} \neq 0$, the linewidth $W$ is reduced for one current polarity and enhanced for the opposite polarity, as shown in Fig. \[fig:broadband\](a). The empirical damping parameter defined by Eq. \[eq:damping\] changes with $I_{dc}$ (see Appendix B), which indicates the presence of a current-induced damping-like torque. Similarly, $I_{dc} \neq 0$ generates an Oersted field and a spin-orbit field-like torque that together shift the resonance field $H_{FMR}$ as shown in Figs. \[fig:broadband\](b),(c). We discuss the quantification of the damping-like torque in Sec. \[subsec:DLT\] and the field-like torque in Sec. \[subsec:FLT\]. Damping-like torque {#subsec:DLT} ------------------- Fig. \[fig:dampinglike\](a) shows the linear change in $W$ as a function of $I_{dc}$ at a fixed rf frequency of 5 GHz. Reversing the external field (from $\phi = 45\degree$ to $\phi = -135\degree$) magnetizes the sample in the opposite direction and reverses the polarity of the damping-like torque. $W$ is related to the current-dependent effective damping parameter $\alpha_{eff}$ at fixed $f$ through $\alpha_{eff} = |\gamma|/(2\pi f) (W~-~W_0)$.
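At the fixed 5 GHz drive, the expected resonance field follows from inverting the Kittel relation of Eq. \[eq:kittel\] with $H_k \approx 0$ and $I_{dc} = 0$; a quick numerical check with the fitted NiFe/Pt $M_{eff}$:

```python
import numpy as np

mu0 = 4e-7 * np.pi
gamma_over_2pi = 29.4e9          # Hz/T
M_eff = 5.6e5                    # A/m, NiFe/Pt value from the Kittel fit
f = 5e9                          # fixed rf drive frequency (Hz)

# Eq. (3) with H_k ~ 0: mu0*H_FMR = (-B_eff + sqrt(B_eff^2 + 4 (f/gamma')^2))/2
B_eff = mu0 * M_eff
B_fmr = 0.5 * (-B_eff + np.sqrt(B_eff**2 + 4 * (f / gamma_over_2pi)**2))
# roughly 39 mT, comfortably inside the |mu0 H| < 80 mT sweep range
```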
The magnitude of the damping-like torque is parameterized by the effective spin Hall angle $\theta_{DL}$, proportional to the ratio of the spin current density $J_s$ crossing the FM/NM interface to the charge current density $J_c$ in Pt. $\theta_{DL}$ at each frequency, plotted in Fig. \[fig:dampinglike\](b), is calculated from the $I_{dc}$ dependence of $\alpha_{eff}$ (Refs. [@Petit2007; @Liu2011]): $$\label{eq:JsJc} |\theta_{DL}| = \frac{2|e|}{\hbar} \frac{\left( H_{FMR}+\tfrac{M_{eff}}{2} \right) \mu_0 M_s t_{F}}{|\sin\phi|} \left| \frac{\Delta\alpha_{eff}}{\Delta J_{c}} \right|,$$ where $t_F$ is the FM thickness. Assuming that the effective spin Hall angle is independent of frequency, we find $\theta_{DL} = 0.087\pm0.007$ for NiFe/Pt and $\theta_{DL} = 0.062\pm0.005$ for NiFe/Cu/Pt. These values are similar to recently reported $\theta_{DL}$ in NiFe/Pt bilayers [@Czeschka2011a; @Weiler2014c; @Liu2011; @Wang2014c; @Ganguly2014; @Kasai2014; @Ando2008a; @Duan2014b]. ![image](Figure5.eps){width="1.15\columnwidth"} $\theta_{DL}$ of NiFe/(Cu/)Pt is related to the intrinsic spin Hall angle $\theta_{SH}$ of Pt through the spin diffusion theory used in Refs. [@Boone2013; @Haney2013a]. For a Pt layer much thicker than its spin diffusion length $\lambda_{Pt}$, $\theta_{DL}$ is proportional to the real part of the effective spin-mixing conductance $G_{\uparrow\downarrow}^{eff}$, $$\label{eq:GSHE} \theta_{DL} = \frac{2\text{Re}[G_{\uparrow\downarrow}^{eff}]}{{\sigma_{Pt}}/{\lambda_{Pt}}}\theta_{SH} ,$$ where $\sigma_{Pt}$ is the conductivity of the Pt layer and $G_{\uparrow\downarrow}^{eff}~=~ G_{\uparrow\downarrow}(\sigma_{Pt}/\lambda_{Pt})/(2G_{\uparrow\downarrow}+\sigma_{Pt}/\lambda_{Pt})$ includes the spin-current backflow factor [@Tserkovnyak2002; @Boone2013]. Assuming that $\lambda_{Pt}$, $\sigma_{Pt}$, and $\theta_{SH}$ in Eq.
\[eq:GSHE\] are independent of the interfacial Cu dusting layer, $G_{\uparrow\downarrow}^{eff}$ is a factor of $1.4\pm0.2$ greater for NiFe/Pt than NiFe/Cu/Pt based on the values of $\theta_{DL}$ found above. Reciprocity of damping-like torque and spin pumping {#subsec:ISH} --------------------------------------------------- Fig. \[fig:ISH\] shows representative results of the dc inverse spin Hall voltage induced by spin pumping, each fitted to the Lorentzian curve defined by Eq. \[eq:lorentzian\]. Reversing the bias field reverses the moment orientation of the pumped spin current and thus inverts the polarity of $V_{ISH}$, consistent with the mechanism of the inverse spin Hall effect. By averaging measurements at opposite bias field polarities for different samples, we find $|V_{ISH}|=1.5\pm0.2$ $\mu$V for NiFe/Pt and $|V_{ISH}|=2.6\pm0.2$ $\mu$V for NiFe/Cu/Pt. The inverse spin Hall voltage $V_{ISH}$ is given by [@Mosendz2010a] $$\label{eq:VISH} |V_{ISH}| = \frac{h}{|e|} G_{\uparrow\downarrow}^{eff}|\theta_{SH}|\lambda_{Pt} \tanh\left(\frac{t_{Pt}}{2\lambda_{Pt}}\right)fR_sLP \left(\frac{\gamma h_{rf}}{2\alpha\omega}\right)^2,$$ where $R_s$ is the sheet resistance of the sample, $L$ is the length of the sample, $P$ is the ellipticity parameter of magnetization precession, and $h_{rf}$ is the amplitude of the microwave excitation field. The factor ${\gamma h_{rf}}/{2\alpha\omega}$ is equal to the precession cone angle at resonance in the linear (small angle) regime. By collecting all the factors in Eq. \[eq:VISH\] that are identical for NiFe/Pt and NiFe/Cu/Pt into a single coefficient $C_{ISH}$, Eq. \[eq:VISH\] is rewritten as $$\label{eq:VISHsimp} |V_{ISH}| = C_{ISH} \frac{R_sG_{\uparrow\downarrow}^{eff}}{\alpha^2}.$$ We note that the small difference in $M_{eff}$ for NiFe/Pt and NiFe/Cu/Pt yields a difference in $P$ (Eq. \[eq:VISH\]) of $\sim$1%, which we neglect here. From Eq.
\[eq:VISHsimp\], we estimate that $G_{\uparrow\downarrow}^{eff}$ of the NiFe/Pt interface is greater than that of the NiFe/Cu/Pt interface by a factor of $1.4\pm0.2$. The dc-tuned ST-FMR and dc spin-pumping voltage measurements therefore yield quantitatively consistent results, confirming the reciprocity between the damping-like torque (driven by the direct spin Hall effect) and spin pumping (detected with the inverse spin Hall effect). The fact that the diffusive model captures the observations supports the spin-Hall mechanism leading to the damping-like torque. Interfacial damping and spin-current transmission ------------------------------------------------- Provided that the enhanced damping $\alpha$ in NiFe/(Cu/)Pt (Fig. \[fig:broadband\](a)) is entirely due to spin pumping into the Pt layer, the real part of the interfacial spin-mixing conductance can be calculated by $$\label{eq:Geff} \text{Re}[G_{\uparrow\downarrow}^{eff}] = \frac{2e^2 M_s t_F}{\hbar^2 |\gamma|}(\alpha-\alpha_0).$$ Using $\alpha_0 = 0.011$ measured for a reference film stack *sub*/Ta(3)/NiFe(2.5)/Cu(2.5)/TaOx(1.5) with negligible spin pumping into the top NM layer of Cu, we obtain Re$[G_{\uparrow\downarrow}^{eff}] = (11.6\pm0.9)\times10^{14}$ $\Omega^{-1}$m$^{-2}$ for NiFe/Pt and $(5.8\pm0.5)\times10^{14}$ $\Omega^{-1}$m$^{-2}$ for NiFe/Cu/Pt. This factor of 2 difference for the two interfaces is significantly greater than the factor of $\approx$1.4 determined from dc-tuned ST-FMR (Sec. \[subsec:DLT\]) and electrically detected spin pumping (Sec. \[subsec:ISH\]). This discrepancy implies that the magnitude of Re$[G_{\uparrow\downarrow}^{eff}]$ of NiFe/Pt calculated from enhanced damping is higher than that calculated for spin injection. 
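Both routes to the spin-mixing conductance can be checked numerically: Eq. \[eq:Geff\] applied to the measured damping values, and the NiFe/Pt to NiFe/Cu/Pt ratio formed from Eq. \[eq:VISHsimp\] with the measured $V_{ISH}$ and $\alpha$. The sheet resistances below come from the parallel-resistor resistivity estimates of Sec. \[subsec:samples\], so they are estimates rather than direct measurements:

```python
import numpy as np

e, hbar, mu0 = 1.602e-19, 1.0546e-34, 4e-7 * np.pi
M_s, t_F = 5.8e5, 2.5e-9            # A/m, m
gamma = 2 * np.pi * 29.4e9          # rad s^-1 T^-1
alpha0 = 0.011                      # reference stack without a Pt spin sink

def re_G_eff(alpha):
    """Eq. (8): Re[G_mix] from the spin-pumping damping enhancement."""
    return 2 * e**2 * M_s * t_F * (alpha - alpha0) / (hbar**2 * gamma)

G_pt, G_cupt = re_G_eff(0.043), re_G_eff(0.027)   # ~11.6e14, ~5.8e14 S/m^2
ratio_damping = G_pt / G_cupt                     # factor of ~2

# Eq. (7): G_eff is proportional to V_ISH * alpha^2 / R_s; sheet resistances
# (ohm per square) estimated from the layer resistivities (assumption)
R_pt, R_cupt = 71.3, 67.3
ratio_pumping = (1.5e-6 * 0.043**2 / R_pt) / (2.6e-6 * 0.027**2 / R_cupt)
```

The two ratios, about 2.0 from damping and about 1.4 from the pumping voltage, reproduce the discrepancy discussed above.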
In addition to spin pumping, interfacial scattering effects [@Rojas-Sanchez2014; @Nguyen2014; @Park2000; @Liu2014c], e.g., due to proximity-induced magnetization in Pt [@Sun2013a; @Ryu2014; @Lim2013] or spin-orbit phenomena at the NiFe/Pt interface [@Nembach2014], may contribute to both stronger damping and lower spin injection in NiFe/Pt. Assuming that this interfacial scattering is suppressed by the Cu dusting layer, $\approx$0.010 of $\alpha$ in NiFe/Pt is not accounted for by spin pumping. The corrected Re$[G_{\uparrow\downarrow}^{eff}]$ for NiFe/Pt is $(8.1\pm1.2)\times10^{14}$ $\Omega^{-1}$m$^{-2}$, which is in excellent agreement with Re$[G_{\uparrow\downarrow}^{eff}]$ calculated from first principles [@Liu2014c]. Using $G_{\uparrow\downarrow}^{eff}$ quantified above and assuming $\lambda_{Pt}\approx1$ nm [@Boone2013; @Boone2014; @Kondou2012a; @Ganguly2014; @Kasai2014; @Pai2014a; @Obstbaum2014; @Skinner2014; @Wang2014c], the intrinsic spin Hall angle $\theta_{SH}$ of Pt and the spin-current transmissivity $T=\theta_{DL}/\theta_{SH}$ across the FM/NM interface can be estimated. We obtain $\theta_{SH}\approx0.15$, and $T \approx 0.6$ for NiFe/Pt and $T \approx 0.4$ for NiFe/Cu/Pt. These results, in line with a recent report [@Pai2014a], indicate that the damping-like torque (proportional to $\theta_{DL}$) may be increased by engineering the FM/NM interface, i.e., by increasing $G_{\uparrow\downarrow}^{eff}$. For practical applications, the threshold charge current density required for switching or self-oscillation of the magnetization is proportional to the ratio $\alpha/\theta_{DL}$. Because of the reciprocity of the damping-like torque and spin pumping, increasing $G_{\uparrow\downarrow}^{eff}$ would also increase $\alpha$ such that it would cancel the benefit of enhancing $\theta_{DL}$. 
Nevertheless, although spin pumping inevitably increases damping, optimal interfacial engineering might minimize damping from interfacial spin-current scattering while maintaining efficient spin-current transmission across the FM/NM interface. Field-like torque {#subsec:FLT} ----------------- We now quantify the field-like torque from the dc-induced shift in the resonance field $H_{FMR}$, derived from the fit to Eq. \[eq:kittel\], as shown in Figs. \[fig:broadband\](b),(c). $M_{eff}$ is fixed at its zero-current value so that $\Delta H_{FMR}$ is the only free parameter [^1]. Fig. \[fig:fieldlike\] shows the net current-induced effective field, which is equivalent to $\sqrt{2}\Delta H_{FMR}$ in our experimental geometry with the external field applied 45$^\circ$ from the current axis. The solid lines show the expected Oersted field $\mu_0 H_{Oe} \approx 0.08$ mT per mA for both NiFe/Pt and NiFe/Cu/Pt based on the estimated charge current densities in the NM layers, $H_{Oe} = \tfrac{1}{2}(J_{c,Pt}t_{Pt}+J_{c,Cu}t_{Cu}-J_{c,Ta}t_{Ta})$, where the contribution from the Pt layer dominates by a factor of $>$6. ![\[fig:fieldlike\]Net current-induced effective field, derived from resonance field shift $\Delta H_{FMR}$ normalized by the field direction angle $|\sin\phi| = 1/\sqrt{2}$. The solid lines denote the estimated Oersted field.](Figure6.eps){width="0.7\columnwidth"} ![image](Figure7.eps){width="1.2\columnwidth"} While the polarity of the shift in $H_{FMR}$ is consistent with the direction of $H_{Oe}$, the magnitude of $\sqrt{2}\Delta H_{FMR}$ exceeds $H_{Oe}$ for both samples as shown in Fig. \[fig:fieldlike\]. This indicates the presence of an additional current-induced effective field due to a field-like torque, $\mu_0 H_{FL} = 0.20\pm0.02$ mT per mA for NiFe/Pt and $\mu_0 H_{FL} = 0.10\pm0.02$ mT per mA for NiFe/Cu/Pt. 
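The Oersted-field line in Fig. \[fig:fieldlike\] can be reproduced from the layer resistivities of Sec. \[subsec:samples\] with parallel-resistor current sharing. This is a sketch under that model; the 5 $\mu$m strip width and the sign convention (Pt and Cu above the NiFe, Ta below) follow the text:

```python
import numpy as np

mu0 = 4e-7 * np.pi
width = 5e-6                      # ST-FMR microstrip width (m)

# NiFe/Cu/Pt stack: layer -> (thickness m, resistivity ohm*m)
layers = {"Ta": (3e-9, 240e-8), "NiFe": (2.5e-9, 90e-8),
          "Cu": (0.5e-9, 60e-8), "Pt": (4e-9, 40e-8)}
G = {k: t / rho for k, (t, rho) in layers.items()}    # sheet conductances
G_tot = sum(G.values())
frac = {k: g / G_tot for k, g in G.items()}           # ~0.67 flows in Pt

I = 1e-3                                              # per 1 mA of bias current
K = {k: frac[k] * I / width for k in G}               # sheet currents (A/m)
H_oe = 0.5 * (K["Pt"] + K["Cu"] - K["Ta"])            # Ta lies below the NiFe
mT_per_mA = mu0 * H_oe * 1e3                          # ~0.08 mT per mA
```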
Analogous to $\theta_{DL}$ for the damping-like torque, the field-like torque can also be parameterized by an effective spin Hall angle [@Pai2014a]: $$\label{eq:thetaFL} |\theta_{FL}| = \frac{2|e|\mu_0 M_s t_{F}}{\hbar} \left|\frac{H_{FL}}{J_{c,Pt}}\right|.$$ Eq. \[eq:thetaFL\] yields $\theta_{FL} = 0.024\pm0.003$ for NiFe/Pt and $0.013\pm0.003$ for NiFe/Cu/Pt, comparable to recently reported results in Ref. [@Fan2013]. The ultrathin Cu layer at the NiFe/Pt interface reduces the field-like torque by a factor of $1.8\pm0.5$, which agrees, within experimental uncertainty, with the reduction of the damping-like torque (Sec. \[subsec:DLT\]). This suggests that both torques predominantly originate from the spin Hall effect in Pt. Recent studies on FM/NM bilayers using low-frequency measurement techniques [@Fan2013; @Fan2014; @Pai2014] also suggest that the spin Hall effect is the dominant source of the field-like torque. Since the field-like torque scales as the imaginary component of $G_{\uparrow\downarrow}^{eff}$ (Refs. [@Haney2013a; @Ralph2008; @Brataas2012a]), the Cu dusting layer must modify Re\[$G_{\uparrow\downarrow}^{eff}$\] and Im\[$G_{\uparrow\downarrow}^{eff}$\] identically. We estimate $\text{Im}[G_{\uparrow\downarrow}^{eff}] = (\theta_{FL}/\theta_{DL})\text{Re}[G_{\uparrow\downarrow}^{eff}]$ to be $(2.2\pm0.5)\times10^{14}$ $\Omega^{-1}$m$^{-2}$ for NiFe/Pt and $(1.2\pm0.3)\times10^{14}$ $\Omega^{-1}$m$^{-2}$ for NiFe/Cu/Pt. Because of the relatively large error bar for the ratio of the field-like torque in NiFe/Pt and NiFe/Cu/Pt, our experimental results do not rule out the existence of another mechanism at the FM/NM interface, distinct from the spin Hall effect. For example, the Cu dusting layer may modify the interfacial Rashba effect that can be an additional contribution to the field-like torque [@Gambardella2011a; @Haney2013a; @Fan2014].
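Eq. \[eq:thetaFL\] can be evaluated directly from the field-like field per milliamp in Fig. \[fig:fieldlike\] and the Pt current density; the roughly 71% Pt current fraction below is the parallel-resistor estimate from Sec. \[subsec:samples\], not a measured quantity:

```python
import numpy as np

e, hbar, mu0 = 1.602e-19, 1.0546e-34, 4e-7 * np.pi
M_s, t_F = 5.8e5, 2.5e-9          # A/m, m

# NiFe/Pt: mu0*H_FL = 0.20 mT per mA of total current (Fig. 6); about 71%
# of each mA flows in the 5 um x 4 nm Pt cross section (estimate)
H_FL = 0.20e-3 / mu0                        # A/m per mA
J_c_Pt = 0.71e-3 / (5e-6 * 4e-9)            # A/m^2 per mA

theta_FL = (2 * e * mu0 * M_s * t_F / hbar) * H_FL / J_c_Pt
# close to the 0.024 quoted in the text
```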
Also, the upper bound of the field-like torque ratio is close to the factor of $\approx$2 reduction in damping with Cu insertion, possibly suggesting a correlation between the spin-orbit field-like torque and the enhancement in damping at the FM/NM interface. Elucidating the exact roles of interfacial spin-orbit effects in FM/NM requires further theoretical and experimental studies. Comparison of the dc-tuned and lineshape methods of ST-FMR ---------------------------------------------------------- Accounting for the field-like torque, we determine the effective spin Hall angle $\theta^{rf}_{DL}$ in NiFe/Pt and NiFe/Cu/Pt from the lineshape (Eq. \[eq:lorentzian\]) of the ST-FMR spectra at $I_{dc}=0$ (Refs. [@Liu2011; @Kondou2012a; @Skinner2014; @Wang2014c; @Mellnik2014; @Pai2014a]). The coefficients in Eq. \[eq:lorentzian\] are $S=V_o\hbar J_{s,rf}/2|e|\mu_0M_st_F$ and $A= V_oH_{rf}\sqrt{1+M_{eff}/H_{FMR}}$, where $V_o$ is the ST-FMR voltage prefactor [@Liu2011] and $H_{rf}\approx\beta J_{c,rf}$ is the net effective rf magnetic field generated by the rf driving current density $J_{c,rf}$ in the Pt layer. $\theta^{rf}_{DL} ={J_{s,rf}}/{J_{c,rf}} $ is calculated from the lineshape coefficients $S$ and $A$: $$\label{eq:SA} |\theta^{rf}_{DL}| =\left|\frac{S}{A}\right|\frac{2|e|\mu_0M_st_F}{\hbar}\beta \sqrt{1+\frac{M_{eff}}{H_{FMR}}}.$$ Fig. \[fig:lineshape\](a) shows $|\theta^{rf}_{DL}|$ obtained by ignoring the field-like torque contribution, i.e., $\beta = t_{Pt}/2$. This underestimates $|\theta^{rf}_{DL}|$, implying identical damping-like torques in NiFe/Pt and NiFe/Cu/Pt. Using $\beta = t_{Pt}/2+H_{FL}/J_{c,Pt}$ extracted from Fig. \[fig:fieldlike\], $\theta^{rf}_{DL}=0.091\pm0.007$ for NiFe/Pt and $0.069\pm0.005$ for NiFe/Cu/Pt plotted in Fig. \[fig:lineshape\](b) are in agreement with $\theta_{DL}$ determined from the dc-tuned ST-FMR method.
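Since $|\theta^{rf}_{DL}|$ in Eq. \[eq:SA\] scales linearly with $\beta$, the size of the lineshape correction can be sketched from the dc results alone. The Pt current fractions below are the Sec. \[subsec:samples\] parallel-resistor estimates, so the numbers are indicative rather than exact:

```python
import math

mu0 = 4e-7 * math.pi
t_Pt, width = 4e-9, 5e-6
beta_oe = t_Pt / 2                     # Oersted-only assumption of Fig. 7(a)

def beta_full(H_FL_mT_per_mA, pt_fraction):
    """beta = t_Pt/2 + H_FL/J_c,Pt, with both terms per mA of total current."""
    H_FL = H_FL_mT_per_mA * 1e-3 / mu0           # A/m per mA
    J_pt = pt_fraction * 1e-3 / (width * t_Pt)   # A/m^2 per mA
    return beta_oe + H_FL / J_pt

factor_pt = beta_full(0.20, 0.71) / beta_oe      # NiFe/Pt
factor_cupt = beta_full(0.10, 0.67) / beta_oe    # NiFe/Cu/Pt

# Ignoring the field-like term deflates theta_rf by these factors, pulling
# 0.091 and 0.069 down to nearly the same underestimate
theta_uncorr_pt = 0.091 / factor_pt
theta_uncorr_cupt = 0.069 / factor_cupt
```

This is consistent with the observation above that the uncorrected lineshape analysis implies nearly identical damping-like torques in the two stacks.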
The presence of a nonnegligible field-like torque in thin FM may account for the underestimation of $\theta^{rf}_{DL}$ based on the lineshape analysis compared to $\theta_{DL}$ from dc-tuned ST-FMR as reported in Refs. [@Ganguly2014; @Kasai2014]. Table \[tab:params\] lists, for NiFe/Pt and NiFe/Cu/Pt: $\theta_{DL}$, $\theta_{FL}$, $\text{Re}[G_{\uparrow\downarrow}^{eff}]$ ($10^{14}$ $\Omega^{-1}$m$^{-2}$), $\text{Im}[G_{\uparrow\downarrow}^{eff}]$ ($10^{14}$ $\Omega^{-1}$m$^{-2}$), $C_{ISH}\text{Re}[G_{\uparrow\downarrow}^{eff}]$ (a.u.), and $\alpha-\alpha_0$. : Parameters related to spin-orbit torques \[tab:params\] Conclusions =========== We have experimentally demonstrated that the spin-orbit damping-like and field-like torques scale with interfacial spin-current transmission. Insertion of an ultrathin Cu layer at the NiFe/Pt interface equally reduces the spin-Hall-mediated spin-orbit torques and spin pumping, consistent with diffusive transport of spin current across the FM/NM interface. Parameters relevant to spin-orbit torques in NiFe/Pt and NiFe/Cu/Pt quantified in this work are summarized in Table \[tab:params\]. We have also found an additional contribution to damping at the NiFe/Pt interface distinct from spin pumping. The dc-tuned ST-FMR technique used here permits precise quantification of spin-orbit torques directly applicable to engineering efficient spin-current-driven devices. ![image](Figure8.eps){width="1.3\columnwidth"} Acknowledgements {#acknowledgements .unnumbered} ================ T.N. and S.E. contributed equally to this work. This work was supported by the Air Force Research Laboratory through contract FA8650-14-C-5706 and in part by FA8650-14-C-5705, the W.M. Keck Foundation, and the National Natural Science Foundation of China (NSFC) 51328203. Lithography was performed in the George J.
Kostas Nanoscale Technology and Manufacturing Research Center. S.E. thanks Xin Fan and Chi-Feng Pai for helpful discussions. T.N. and S.E. thank James Zhou and Brian Chen for assistance in setting up the ST-FMR system, and Vivian Sun for assistance in graphic design. ![image](Figure9.eps){width="0.65\columnwidth"} Appendix A: Damping-like torque contribution from Tantalum {#ApA .unnumbered} ========================================================== With the same dc-tuned ST-FMR technique described in Sec. \[subsec:STFMR\], we evaluate the effective spin Hall angle $\theta_{DL}$ of Ta interfaced with NiFe. Because of the high resistivity of Ta, the signal-to-noise ratio of the ST-FMR spectrum is significantly lower than in the case of NiFe/Pt, thus making precise determination of $\theta_{DL}$ more challenging. Nevertheless, we are able to obtain an estimate of $\theta_{DL}$ from a 2-$\mu$m wide, 10-$\mu$m long strip of *sub*/Ta(6 nm)/Ni$_{80}$Fe$_{20}$(4 nm)/Al$_2$O$_3$(1.5 nm) (“Ta/NiFe”). The estimated resistivity of Ta(6 nm) is 200 $\mu\Omega$cm and that of NiFe(4 nm) is 70 $\mu\Omega$cm. Fig. \[fig:Ta\](a) shows the change in linewidth $\Delta W$ (or $\Delta \alpha_{eff}$) due to dc bias current $I_{dc}$. The polarity of $\Delta W$ against $I_{dc}$ is the same as in NiFe capped with Pt (Fig. \[fig:dampinglike\](a)). Because the Ta layer is beneath the NiFe layer, this observed polarity is consistent with the opposite signs of the spin Hall angles for Pt and Ta. Here we define the sign of $\theta_{DL}$ for Ta/NiFe to be negative. Using Eq. \[eq:JsJc\] with $M_s = M_{eff} = 7.0\times10^5$ A/m and averaging the values plotted in Fig. \[fig:Ta\](b), we arrive at $\theta_{DL} = -0.034\pm0.008$. This magnitude of $\theta_{DL}$ is substantially smaller than $\theta_{DL} \approx -0.1$ in Ta/CoFe(B) [@Liu2012; @Emori2014d] and Ta/FeGaB [@Emori2015], but similar to reported values of $\theta_{DL}$ in Ta/NiFe bilayers [@Weiler2014c; @Deorani2013].
For the analysis of the damping-like torque in Sec. \[subsec:DLT\], we take into account the $\theta_{DL}$ obtained above and the small charge current density in Ta. In the Ta/NiFe/(Cu/)Pt stacks, owing to the much higher conductivity of Pt, the spin-Hall damping-like torque from the top Pt(4) layer is an order of magnitude greater than the torque from the bottom Ta(3) seed layer. Appendix B: dc dependence of the empirical damping parameter {#ApB .unnumbered} ============================================================ Magnetization dynamics in the presence of an effective field $\mathbf{H}_{eff}$ and a damping-like spin torque is given by the Landau-Lifshitz-Gilbert-Slonczewski equation: $$\frac{\partial \mathbf{m}}{\partial t} = -|\gamma|{\mathbf{m}}\times\mathbf{H}_{eff} +\alpha\mathbf{m}\times\frac{\partial \mathbf{m}}{\partial t} +\tau_{DL}\mathbf{m}\times(\boldsymbol{\sigma}\times\mathbf{m}),$$ where $\tau_{DL}$ is a coefficient for the damping-like torque (proportional to $\theta_{DL}$) and $\boldsymbol{\sigma}$ is the orientation of the spin moment entering the FM. Within this theoretical framework, it is not possible to define a single Gilbert damping parameter as a function of bias dc current $I_{dc}$ that holds at all frequencies. However, at $I_{dc} = 0$ we empirically extract the damping parameter $\alpha$ from the linear relationship of linewidth $W$ versus frequency $f$ (Eq. \[eq:damping\]). We can take the same approach and define an empirical damping parameter $\alpha_{W/f}$ as a function of $I_{dc}$, i.e. $$\label{eq:dampingIdc} W(I_{dc}) = W_0 + \frac{2\pi\alpha_{W/f}(I_{dc})}{|\gamma|}f,$$ where we fix the inhomogeneous linewidth broadening $W_0$ at the value at $I_{dc} = 0$, which does not change systematically as a function of the small $I_{dc}$ used here. This approach of setting $\alpha_{W/f}$ as the only fitting parameter in Eq. \[eq:dampingIdc\] well describes our data (e.g., Fig. \[fig:broadband\](a)). We show in Fig.
\[fig:alphaIdc\] the resulting $\alpha_{W/f}$ versus $I_{dc}$. The change in $\alpha_{W/f}$ normalized by the charge current density in Pt is $0.0036\pm0.0001$ per $10^{11}$ A/m$^2$ for NiFe/Pt and $0.0025\pm0.0001$ per $10^{11}$ A/m$^2$ for NiFe/Cu/Pt. This empirical measure of the damping-like torque again exhibits a factor of $\approx$1.4 difference between NiFe/Pt and NiFe/Cu/Pt. [10]{} A. Brataas and K. M. D. Hals, Nat. Nanotechnol. [**9**]{}, 86 (2014). P. Gambardella and I. M. Miron, Philos. Trans. A. Math. Phys. Eng. Sci. [**369**]{}, 3175 (2011). P. M. Haney, H.-W. Lee, K.-J. Lee, A. Manchon, and M. D. Stiles, Phys. Rev. B [**87**]{}, 174411 (2013). D. Ralph and M. Stiles, J. Magn. Magn. Mater. [**320**]{}, 1190 (2008). A. Brataas, Y. Tserkovnyak, G. E. W. Bauer, and P. J. Kelly, , in [*Spin Current*]{}, chap. 8, pp. 87–135, 2012. A. Hoffmann, IEEE Trans. Magn. [**49**]{}, 5172 (2013). I. M. Miron, K. Garello, G. Gaudin, P.-J. Zermatten, M. V. Costache, S. Auffret, S. Bandiera, B. Rodmacq, A. Schuhl, and P. Gambardella, Nature [**476**]{}, 189 (2011). L. Liu, C.-F. Pai, Y. Li, H. W. Tseng, D. C. Ralph, and R. A. Buhrman, Science [**336**]{}, 555 (2012). D. Bhowmik, L. You, and S. Salahuddin, Nat. Nanotechnol. [**9**]{}, 59 (2014). G. Yu, P. Upadhyaya, Y. Fan, J. G. Alzate, W. Jiang, K. L. Wong, S. Takei, S. A. Bender, L.-T. Chang, Y. Jiang, M. Lang, J. Tang, Y. Wang, Y. Tserkovnyak, P. K. Amiri, and K. L. Wang, Nat. Nanotechnol. [**9**]{}, 548 (2014). P. P. J. Haazen, E. Murè, J. H. Franken, R. Lavrijsen, H. J. M. Swagten, and B. Koopmans, Nat. Mater. [**12**]{}, 299 (2013). S. Emori, E. Martinez, K.-J. Lee, H.-W. Lee, U. Bauer, S.-M. Ahn, P. Agrawal, D. C. Bono, and G. S. D. Beach, Phys. Rev. B [**90**]{}, 184427 (2014). K.-S. Ryu, S.-H. Yang, L. Thomas, and S. S. P. Parkin, Nat. Commun. [**5**]{}, 3910 (2014). K. Ueda, K.-J. Kim, Y. Yoshimura, R. Hiramatsu, T. Moriyama, D. Chiba, H. Tanigawa, T. Suzuki, E. Kariyada, and T. Ono, Appl. Phys. 
Express [**7**]{}, 053006 (2014). V. E. Demidov, S. Urazhdin, H. Ulrichs, V. Tiberkevich, A. Slavin, D. Baither, G. Schmitz, and S. O. Demokritov, Nat. Mater. [**11**]{}, 1028 (2012). L. Liu, C.-F. Pai, D. C. Ralph, and R. A. Buhrman, Phys. Rev. Lett. [**109**]{}, 186602 (2012). R.H. Liu, W.L. Lim, and S. Urazhdin, Phys. Rev. Lett. [**110**]{}, 147601 (2013). Z. Duan, A. Smith, L. Yang, B. Youngblood, J. Lindner, V. E. Demidov, S. O. Demokritov, and I. N. Krivorotov, Nat. Commun. [**5**]{}, 5616 (2014). M. Ranjbar, P. Durrenfeld, M. Haidar, E. Iacocca, M. Balinskiy, T. Q. Le, M. Fazlali, A. Houshang, A. Awad, R. Dumas, and J. Akerman, IEEE Magn. Lett. [**5**]{}, 1 (2014). A. Hamadeh, O. d’Allivy Kelly, C. Hahn, H. Meley, R. Bernard, A. H. Molpeceres, V. V. Naletov, M. Viret, A. Anane, V. Cros, S. O. Demokritov, J. L. Prieto, M. Muñoz, G. de Loubens, and O. Klein, Phys. Rev. Lett. [**113**]{}, 197203 (2014). J. Kim, J. Sinha, M. Hayashi, M. Yamanouchi, S. Fukami, T. Suzuki, S. Mitani, and H. Ohno, Nat. Mater. [**12**]{}, 240 (2013). K. Garello, I. M. Miron, C. O. Avci, F. Freimuth, Y. Mokrousov, S. Blügel, S. Auffret, O. Boulle, G. Gaudin, and P. Gambardella, Nat. Nanotechnol. [**8**]{}, 587 (2013). X. Fan, J. Wu, Y. Chen, M. J. Jerry, H. Zhang, and J. Q. Xiao, Nat. Commun. [**4**]{}, 1799 (2013). X. Fan, H. Celik, J. Wu, C. Ni, K.-J. Lee, V. O. Lorenz, and J. Q. Xiao, Nat. Commun. [**5**]{}, 3042 (2014). C.-F. Pai, M.-H. Nguyen, C. Belvin, L. H. Vilela-Leão, D. C. Ralph, and R. A. Buhrman, Appl. Phys. Lett. [**104**]{}, 082407 (2014). C.-F. Pai, Y. Ou, D. C. Ralph, and R. A. Buhrman, (2014), arXiv:1411.3379. S. Emori, U. Bauer, S. Woo, and G. S. D. Beach, Appl. Phys. Lett. [**105**]{}, 222401 (2014). S. Woo, M. Mann, A. J. Tan, L. Caretta, and G. S. D. Beach, Appl. Phys. Lett. [**105**]{}, 212404 (2014). Y. Tserkovnyak, A. Brataas, and G. E. W. Bauer, Phys. Rev. Lett. [**88**]{}, 117601 (2002). Y. Tserkovnyak, A. Brataas, and G. E. W. Bauer, Phys. Rev. 
B [**66**]{}, 224403 (2002). A. Ghosh, J. F. Sierra, S. Auffret, U. Ebels, and W. E. Bailey, Appl. Phys. Lett. [**98**]{}, 052508 (2011). C. T. Boone, H. T. Nembach, J. M. Shaw, and T. J. Silva, J. Appl. Phys. [**113**]{}, 153906 (2013). C. T. Boone, J. M. Shaw, H. T. Nembach, and T. J. Silva, (2014), arXiv:1408.5921. B. Heinrich, C. Burrowes, E. Montoya, B. Kardasz, E. Girt, Y.-Y. Song, Y. Sun, and M. Wu, Phys. Rev. Lett. [**107**]{}, 066604 (2011). Y. Sun, H. Chang, M. Kabatek, Y.-Y. Song, Z. Wang, M. Jantz, W. Schneider, M. Wu, E. Montoya, B. Kardasz, B. Heinrich, S. G. E. te Velthuis, H. Schultheiss, and A. Hoffmann, Phys. Rev. Lett. [**111**]{}, 106601 (2013). A. Azevedo, L. H. [Vilela Leão]{}, R. L. Rodriguez-Suarez, A. B. Oliveira, and S. M. Rezende, J. Appl. Phys. [**97**]{}, 10C715 (2005). E. Saitoh, M. Ueda, H. Miyajima, and G. Tatara, Appl. Phys. Lett. [**88**]{}, 182509 (2006). O. Mosendz, V. Vlaminck, J. E. Pearson, F. Y. Fradin, G. E. W. Bauer, S. D. Bader, and A. Hoffmann, Phys. Rev. B [**82**]{}, 214403 (2010). F. D. Czeschka, L. Dreher, M. S. Brandt, M. Weiler, M. Althammer, I.-M. Imort, G. Reiss, A. Thomas, W. Schoch, W. Limmer, H. Huebl, R. Gross, and S. T. B. Goennenwein, Phys. Rev. Lett. [**107**]{}, 046601 (2011). K. Ando, S. Takahashi, J. Ieda, Y. Kajiwara, H. Nakayama, T. Yoshino, K. Harii, Y. Fujikawa, M. Matsuo, S. Maekawa, and E. Saitoh, J. Appl. Phys. [**109**]{}, 103913 (2011). P. Deorani and H. Yang, Appl. Phys. Lett. [**103**]{}, 232408 (2013). M. Weiler, J. M. Shaw, H. T. Nembach, and T. J. Silva, IEEE Magn. Lett. [**5**]{}, 1 (2014). M. Obstbaum, M. Härtinger, H. G. Bauer, T. Meier, F. Swientek, C. H. Back, and G. Woltersdorf, Phys. Rev. B [**89**]{}, 060407 (2014). J.-C. Rojas-Sánchez, N. Reyren, P. Laczkowski, W. Savero, J.-P. Attané, C. Deranlot, M. Jamet, J.-M. George, L. Vila, and H. Jaffrès, Phys. Rev. Lett. [**112**]{}, 106602 (2014). H. L. Wang, C. H. Du, Y. Pu, R. Adur, P. C. Hammel, and F. Y. Yang, Phys. Rev. Lett. 
[**112**]{}, 197201 (2014). A. Brataas, Y. V. Nazarov, and G. E. W. Bauer, Phys. Rev. Lett. [**84**]{}, 2481 (2000). J. C. Sankey, Y.-T. Cui, J. Z. Sun, J. C. Slonczewski, R. A. Buhrman, and D. C. Ralph, Nat. Phys. [**4**]{}, 67 (2007). L. Liu, T. Moriyama, D. C. Ralph, and R. A. Buhrman, Phys. Rev. Lett. [**106**]{}, 036601 (2011). K. Kondou, H. Sukegawa, S. Mitani, K. Tsukagoshi, and S. Kasai, Appl. Phys. Express [**5**]{}, 073002 (2012). T. D. Skinner, M. Wang, A. T. Hindmarch, A. W. Rushforth, A. C. Irvine, D. Heiss, H. Kurebayashi, and A. J. Ferguson, Appl. Phys. Lett. [**104**]{}, 062401 (2014). Y. Wang, P. Deorani, X. Qiu, J. H. Kwon, and H. Yang, Appl. Phys. Lett. [**105**]{}, 152412 (2014). A. R. Mellnik, J. S. Lee, A. Richardella, J. L. Grab, P. J. Mintun, M. H. Fischer, A. Vaezi, A. Manchon, E.-A. Kim, N. Samarth, and D. C. Ralph, Nature [**511**]{}, 449 (2014). A. Yamaguchi, H. Miyajima, T. Ono, Y. Suzuki, S. Yuasa, A. Tulapurkar, and Y. Nakatani, Appl. Phys. Lett. [**90**]{}, 182507 (2007). A. Ganguly, K. Kondou, H. Sukegawa, S. Mitani, S. Kasai, Y. Niimi, Y. Otani, and A. Barman, Appl. Phys. Lett. [**104**]{}, 072405 (2014). S. Kasai, K. Kondou, H. Sukegawa, S. Mitani, K. Tsukagoshi, and Y. Otani, Appl. Phys. Lett. [**104**]{}, 092408 (2014). K. Ando, S. Takahashi, K. Harii, K. Sasage, J. Ieda, S. Maekawa, and E. Saitoh, Phys. Rev. Lett. [**101**]{}, 036601 (2008). V. E. Demidov, S. Urazhdin, E. R. J. Edwards, and S. O. Demokritov, Appl. Phys. Lett. [**99**]{}, 172501 (2011). C.-F. Pai, L. Liu, Y. Li, H. W. Tseng, D. C. Ralph, and R. A. Buhrman, Appl. Phys. Lett. [**101**]{}, 122404 (2012). Z. Duan, C. T. Boone, X. Cheng, I. N. Krivorotov, N. Reckers, S. Stienen, M. Farle, and J. Lindner, Phys. Rev. B [**90**]{}, 024427 (2014). S. Emori, T. Nan, T. M. Oxholm, C. T. Boone, J. G. Jones, B. M. Howe, G. J. Brown, D. E. Budil, and N. X. Sun, Appl. Phys. Lett. [**106**]{}, 022406 (2015). S. Mizukami, Y. Ando, and T. Miyazaki, Phys. Rev. 
B [**66**]{}, 104413 (2002). S. Petit, C. Baraduc, C. Thirion, U. Ebels, Y. Liu, M. Li, P. Wang, and B. Dieny, Phys. Rev. Lett. [**98**]{}, 077203 (2007). H. Nguyen, W. P. Pratt, and J. Bass, J. Magn. Magn. Mater. [**361**]{}, 30 (2014). W. Park, D. V. Baxter, S. Steenwyk, I. Moraru, W. P. Pratt, and J. Bass, Phys. Rev. B [**62**]{}, 1178 (2000). Y. Liu, Z. Yuan, R. J. H. Wesselink, A. A. Starikov, and P. J. Kelly, Phys. Rev. Lett. [**113**]{}, 207202 (2014). W. L. Lim, N. Ebrahim-Zadeh, J. C. Owens, H. G. E. Hentschel, and S. Urazhdin, Appl. Phys. Lett. [**102**]{}, 162404 (2013). H. T. Nembach, J. M. Shaw, M. Weiler, E. Jué, and T. J. Silva, (2014), arXiv:1410.6243.

[^1]: When $M_{eff}$ is adjustable $M_{eff}$ changes only by $\ll$1%.
---
abstract: 'We consider the Independent Chip Model (ICM) for expected value in poker tournaments. Our first result is that participating in a fair bet with one other player will always lower one’s expected value under this model. Our second result is that the expected value for players not participating in a fair bet between two players always increases. We show that neither result necessarily holds for a fair bet among three or more players.'
author:
- |
    George T. Gilbert\
    Texas Christian University\
    g.gilbert@tcu.edu
date: November 2009
title: The Independent Chip Model and Risk Aversion
---

Introduction
============

The analysis of expected value in poker tournaments is more complex than for cash games. By this, we mean that chips in a cash game are equivalent to cash. On the other hand, the expected value of chips in a tournament is related to expected value in cash in a nonlinear way. In a typical (freezeout) tournament, each player begins with the same number of chips. Once a player is out of chips, he or she is out of the tournament. Play continues until one player has all of the chips. In most tournaments, however, the winning player gets only a portion of the prize money. The last player eliminated gets second place money, the next-to-last eliminated gets third place money, and so forth. Effectively, the winner is forced to give back some of the chips he or she has won. Consequently, most players should play in a somewhat risk averse manner, the extent depending on the player’s ability.

Modeling Poker Tournaments
==========================

In models of poker tournaments where all players have “equal abilities” and “equal opportunities,” the probability a player finishes first equals the fraction of the total chips in play that he or she holds. The model where a player wins or loses a single chip with probability $1/2$ each is the standard Gambler’s Ruin or random walk problem going back to Huygens.
In fact, he further considers constant, but unequal, probabilities of winning or losing, which can be interpreted as introducing skill into the model. See also . The probability of finishing first equals the fraction of the total chips in play held by the player much more generally, for instance if a player’s expected gain in chips is zero for each hand. If the player’s proportion of all chips is $x$ and probability of winning the tournament is $f(x)$, then $$(1-f(x))(-x)+f(x)(1-x)=0,$$ from which we see $f(x)=x$. Henke tested this model on data from World Poker Tour final tables. There was reasonable agreement between theory and data. Nevertheless, the model modestly overestimated the probabilities of small stacks winning and modestly underestimated the probabilities of large stacks winning. This could be due to flaws in the model or to differences in the skill of players that led to the disparities in stack sizes. In contrast, the probability of finishing second, third, and so forth is very dependent on the model. Even in the particular model where two players are chosen at random and each wins or loses a single chip from the other with probability $1/2$, a player’s probability of finishing second will depend not just on the fraction of chips held by each player but on the actual number of chips held by the players. In a more general setting, Swan and Bruss give a recursive solution for the probability a particular player is the first to go broke in terms of Markov processes and unfolding. Unfortunately, it is not a computationally practical method for the repeated calculations needed to analyze poker tournaments. In the case of three players, the problem is a discrete Dirichlet problem on a triangle. Ferguson solved the players’ probabilities of finishing first, second, and third for Brownian motion — the limit as the number of chips increases to infinity. 
Employing a Schwarz-Christoffel transformation, he expresses the answer in terms of the inverse of the incomplete beta function, so it is not easy to actually compute these limiting probabilities. Due to the specialized techniques, even this does not generalize to more than three players. In the early to middle stages of a tournament, the primary factor in determining a player’s expected cash winnings is his or her chip count. Thus, one could model expected winnings as a function of the fraction of the total number of chips the player holds. There are two especially simple models of this. One can use the biased random walk of single steps to model expected winnings rather than the probability of finishing first. Alternatively, Chen and Ankenman propose a model to estimate the probability of finishing first by assuming the probability of doubling one’s chips before going broke is constant. Again, one can instead consider expected winnings under the same assumption. It would be appropriate to call these the small-pot model and the big-pot model. They were developed with some preliminary comparisons with data from online poker tournaments. Although skill is naturally incorporated into these models, neither would be appropriate late in a poker tournament when the number of chips held by a player’s opponents becomes a critical factor in both determining the player’s expected winnings and deciding on the optimal play in each hand of the tournament.

The Independent Chip Model (ICM)
================================

The best-known model for tournaments between players of (roughly) equal abilities is the Independent Chip Model (ICM). Although it did not arise from any model for the movement of chips from player to player, this model is often used by serious poker players for analysis of the late stages of tournaments. Under the ICM, a player’s probability of finishing first is the fraction of the total chips in play he or she holds.
Recursively, the conditional probability of finishing in $k$th position given the $k-1$ players finishing 1st through $(k-1)$st is the fraction of the chips held by the player once the chips of the $(k-1)$ players finishing higher have been taken out of play. Thus, if the fractions of chips held by players 1 through $k$ are $x_{1}, \ldots, x_{k}$, the probability they finish 1st through $k$th is $$p_{k}(x_{1},\ldots,x_{k})=\dfrac{x_{1}x_{2}\cdots x_{k}}{\left(1-x_{1}\right)\left(1-x_{1}-x_{2}\right) \cdots\left(1-x_{1}-x_{2}-\ldots-x_{k-1}\right)}.$$ Ganzfried and Sandholm have used this model to analyze the effects of position in a simulated three-player Texas hold’em tournament under “jam/fold” strategies. This author has data from online single table tournaments that, at first glance, suggest the ICM is a reasonable approximation of the probabilities in the aggregate. We mention in passing that the ICM is essentially the model used since 1990 to determine the first three picks in the National Basketball Association draft. Risk Aversion Under the ICM =========================== We introduce notation we will use in all that follows. Let $q_{k}(x;y;z_{1},..,z_{k})$ denote the probability, under the ICM, that a given player with fraction $y$ of the chips finishes first, one with fraction $x$ finishes somewhere among the top $k+2$ players, with the remaining $k$ places taken by given players with fractions $z_{1}, \ldots, z_{k}$, who finish in this relative order. Thus, by definition, $$\begin{gathered} \label{float}\hypertarget{float} q_{k}(x;y;z_{1},..,z_{k})=p_{k+2}(y,x,z_{1},\ldots,z_{k}) +p_{k+2}(y,z_{1},x,\ldots,z_{k})\\+\cdots+p_{k+2}(y,z_{1},\ldots,z_{k-1},x,z_{k}) +p_{k+2}(y,z_{1},\ldots,z_{k},x).\end{gathered}$$ We begin with two lemmas we will use in proving both of our theorems. 
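The telescoping product defining $p_k$ makes ICM equities straightforward to compute by brute-force enumeration of finish orders. The following sketch (ours, not part of the paper) does exactly that; it is exponential in the number of players, which is acceptable in the short-handed, late-tournament situations where the ICM is typically applied.

```python
from itertools import permutations

def icm_equities(stacks, prizes):
    """Expected prize money of each player under the Independent Chip Model.

    The probability of a complete finish order is the telescoping product
    p_k from the text: x1 * x2/(1-x1) * x3/(1-x1-x2) * ...
    """
    total = float(sum(stacks))
    x = [s / total for s in stacks]
    ev = [0.0] * len(stacks)
    for order in permutations(range(len(stacks))):
        prob, remaining = 1.0, 1.0
        for player in order:
            prob *= x[player] / remaining
            remaining -= x[player]
        for place, player in enumerate(order):
            if place < len(prizes):
                ev[player] += prob * prizes[place]
    return ev
```

For the four-player example tabulated later in the paper (stacks of 140, 10, 10, and 50 with three equal prizes of 1), this enumeration reproduces the chip leader's equity of 0.9952.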
\[partfrac\] For every integer $k\ge0$, $$\begin{gathered} q_{k}(x;y;z_{1},\ldots,z_{k}) =\dfrac{y}{x+y}\thinspace p_{k+1}(x+y,z_{1},\ldots,z_{k}) \\ -p_{k+2}(y,z_{1},\ldots,z_{k},1-x-y-z_{1}-\cdots-z_{k}).\end{gathered}$$ We note that the second term is the probability that players with chip fractions $y, z_{1},\ldots,z_{k}$ finish in the first $k+1$ places and that anyone other than the player with fraction $x$ finishes in $(k+2)$nd place. We prove the lemma by induction. For $k=0$, we have $$q_{0}(x;y)=\dfrac{yx}{1-y}=\dfrac{y}{x+y}(x+y)-\dfrac{y(1-x-y)}{1-y}.$$ We now assume the identity for $k$ and prove it for $k+1$. Applying the inductive hypothesis yields $$\begin{aligned} &q_{k+1}(x;y;z_{1},\ldots,z_{k+1})\\ &= q_{k}(x;y;z_{1},\ldots,z_{k})\thinspace\dfrac{z_{k+1}}{1-x-y-z_{1}-\cdots-z_{k}}+p_{k+3}(y,z_{1},\ldots,z_{k+1},x)\\ &=\left[\dfrac{y}{x+y}\thinspace p_{k+1}(x+y,z_{1},\ldots,z_{k})\right.\\ &-p_{k+2}(y,z_{1},\ldots,z_{k},1-x-y-z_{1}-\cdots-z_{k})\bigg]\\ &\times \dfrac{z_{k+1}}{1-x-y-z_{1}-\cdots-z_{k}}+p_{k+3}(y,z_{1},\ldots,z_{k+1},x)\\ &=\dfrac{y}{x+y}\thinspace p_{k+2}(x+y,z_{1},\ldots,z_{k+1}) -p_{k+2}(y,z_{1},\ldots,z_{k},z_{k+1})\\ &+p_{k+3}(y,z_{1},\ldots,z_{k+1},x)\\ &=\dfrac{y}{x+y}\thinspace p_{k+2}(x+y,z_{1},\ldots,z_{k+1})\\ &-p_{k+3}(y,z_{1},\ldots,z_{k+1},1-x-y-z_{1}-\cdots-z_{k+1}),\end{aligned}$$ as desired. \[prodrule\] Let $f>0$, $f'\ge0$, $f''\ge0$ and $g>0$, $g'> 0$, $g''>0$. Then $fg>0$, $(fg)'>0$, $(fg)''>0$. The lemma is an immediate consequence of the product rule. We define a [*fair bet*]{} for a player to be a random variable $W$ (for wager) that is not identically 0 and whose expected value in chips is 0. We may let $W$ stand for either a player’s gain or loss. In our context of poker tournaments, $W$ will be expressed as a fraction of the chips in play and can take on only finitely many values. Here we include subsequent bets in the hand as part of the expected value.
Thus we are interpreting the wager, which may be either initiated or accepted by the player, to be the possible gain or loss in chips over the course of the rest of the hand. Our first result is the following. \[main\] Suppose a tournament has prize money for $n$th place which is at least that for $(n+1)$st place and that at least one player still in the tournament will not earn as much as second place prize money. Under the Independent Chip Model, any fair bet in which only one other player can gain or lose chips in the hand being played will lower the player’s expected prize money. We will first break down the expected prize winnings under the ICM into a sum of simpler terms, each of which is either linear or concave down, allowing us to conclude Theorem \[main\] by convexity. Consider a tournament paying prize money $m_{1}\ge m_{2}\ge \ldots \ge m_{n}$ for finishing first, second, …, $n$th. Our first reduction is to view this as $n$ simultaneous sub-tournaments, the first a winner-take-all paying $m_{1}-m_{2}$ for first place, the second paying $m_{2}-m_{3}$ to the first and second place finishers, through one paying $m_{n}$ to each of the top $n$ finishers. It will suffice to prove that, by participating in a fair bet, a player’s expected winnings will not increase in any of these sub-tournaments and will strictly decrease in at least one of them. Denote the player in question by $A$ and the opponent involved in the bet as $B$. Let $A$ have fraction $x$ of the total number of chips in play, let $B$ have fraction $y$, and let $w$ denote the fraction of all chips $A$ loses on the bet (negative when $A$ wins). We will use $u_{i}$ and $z_{i}$ as needed to denote the fraction of chips held by other players. In any sub-tournament where all players get the same prize money (including those with prize 0), $A$’s expected winnings are that amount regardless of wagers.
For the winner-take-all sub-tournament, $A$’s expected value participating in the wager is $$\left(m_{1}-m_{2}\right)E[x-w]=\left(m_{1}-m_{2}\right)x,$$ i.e. $A$’s expected value hasn’t changed. All remaining sub-tournaments, of which there is at least one, satisfy the conditions of the theorem. Thus, it suffices to prove the theorem for those tournaments where each of at least two winners gets a prize of 1 and at least one nonwinner gets 0. After losing a wager $w$, the probability $A$ finishes in $m$th place behind players other than $B$ having chip fractions $u_{1},\ldots,u_{m-1}$ is $$p_{m}(u_{1},\ldots,u_{m-1},x-w).$$ This is linear in $w$, so $$E[p_{m}(u_{1},\ldots,u_{m-1},x-w)]=p_{m}(u_{1},\ldots,u_{m-1},x).$$ On the other hand, in those scenarios in which player $B$ finishes ahead of player $A$, with both among the winners, the dependence on $w$ is nonlinear. We partition all such cases by fixing the first $m$ finishers with $B$ in $m$th place and fixing the relative positions of all other finishers except player $A$. Denote the fraction of chips held by the first $m-1$ players by $u_{1},\ldots,u_{m-1}$ and those of the remaining $k$ players other than $A$ or $B$ by $z_{1},\ldots,z_{k}$, where $k=n-m-1$. Setting $\Delta=1-u_{1}-\cdots-u_{m-1}$, player $A$’s expected winnings may be written as $$p_{m-1}(u_{1},\ldots,u_{m-1})\cdot q_{k}((x-w)/\Delta;(y+w)/\Delta;z_{1}/\Delta,\ldots,z_{k}/\Delta).$$ We may drop the leading term, $p_{m-1}(u_{1},\ldots,u_{m-1}),$ and rescale units, dividing $w$, $x$, $y$, and $z_{i}$ by $\Delta$. This leaves us needing to show the concavity of the simpler expression $q_{k}(x-w;y+w;z_{1},\ldots,z_{k})$. Applying Lemma \[partfrac\], we see that $$\begin{gathered} q_{k}(x-w;y+w;z_{1},\ldots,z_{k}) =\dfrac{y+w}{x+y}\thinspace p_{k+1}(x+y,z_{1},\ldots,z_{k})\\ -p_{k+2}(y+w,z_{1},\ldots,z_{k},1-x-y-z_{1}-\cdots-z_{k}).\end{gathered}$$ The first term is linear in $w$. 
The latter expands to $$-\dfrac{(y+w)z_{1}z_{2}\cdots z_{k}(1-x-y-z_{1}-\cdots-z_{k})}{(1-y-w)(1-y-w-z_{1})\cdots(1-y-w-z_{1}-\cdots-z_{k})}.$$ Thus it suffices to show that $$\label{gk}\hypertarget{gk} g_{k}(w)=\dfrac{y+w}{(1-y-w)(1-y-w-z_{1})\cdots(1-y-w-z_{1}-\cdots-z_{k})}$$ has positive second derivative. Observe that, for the range of relevant wagers, $-(1-\min\{x,y\})\le w\le (1-\min\{x,y\})$, $1/(1-y-w-z_{1}-\cdots-z_{j})$ satisfies the conditions of Lemma \[prodrule\], as does $$g_{0}(w)=\dfrac{y+w}{1-y-w}=\dfrac1{1-y-w}-1.$$ By induction, $g_{k}>0$, $g'_{k}>0$, $g''_{k}>0$ for all $k$. Therefore, $q_{k}(x-w;y+w;z_{1},\ldots,z_{k})$ is concave down and by convexity, $$E[q_{k}(x-w;y+w;z_{1},\ldots,z_{k})]<q_{k}(x;y;z_{1},\ldots,z_{k}),$$ completing the proof of Theorem \[main\]. Theorem \[main\] is false for fair wagers among three or more players. With many players, counterexamples are unusual. On the other hand, they are easy to construct: start with a tournament paying two places and with three players, all participating in a fair wager. Barring the unlikely possibility that expected winnings for all three are unaffected by the wager, the expected winnings for at least one must increase. One could easily add one or more uninvolved players with very small chip stacks to the counterexample. We give another, explicit, counterexample following Theorem \[bystander\]. We move on to examine the impact of a fair wager on the expected winnings of players not involved in the bet. \[bystander\] Suppose a tournament has prize money for $n$th place which is at least that for $(n+1)$st place and that at least one player still in the tournament will not earn as much as second place prize money. Under the Independent Chip Model, the expected prize money of any player not involved in a fair bet between two players will increase. The proof parallels that of Theorem \[main\]. 
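Both theorems are easy to check numerically. The sketch below (illustrative only, not from the paper) compares a player's ICM equity before and after a fair coin-flip wager of $w$ chips between players 0 and 1; per Theorem \[main\] both bettors' equities drop, and per Theorem \[bystander\] the bystander's rises.

```python
from itertools import permutations

def icm_ev(stacks, prizes, player):
    """ICM expected prize of one player, by enumerating finish orders;
    an order's probability is the telescoping product from the text."""
    ev = 0.0
    for order in permutations(range(len(stacks))):
        prob, rem = 1.0, float(sum(stacks))
        for p in order:
            prob *= stacks[p] / rem
            rem -= stacks[p]
        place = order.index(player)
        if place < len(prizes):
            ev += prob * prizes[place]
    return ev

def ev_with_fair_bet(stacks, prizes, player, w):
    """Expected prize of `player` if players 0 and 1 flip a fair coin
    for w chips (zero expected chip change for both bettors)."""
    up = list(stacks); up[0] += w; up[1] -= w
    dn = list(stacks); dn[0] -= w; dn[1] += w
    return 0.5 * (icm_ev(up, prizes, player) + icm_ev(dn, prizes, player))
```

For example, with stacks of 50/30/20 and prizes of 0.5/0.3/0.2, a 10-chip flip lowers the equity of both bettors and raises that of the uninvolved third player, as the theorems predict.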
Let $A$ and $B$ denote the two players involved in the fair bet and let $C$ denote one player who is not involved. We break down $C$’s expected winnings into a sum of expected winnings from sub-tournaments paying 1 to each of its winners. The expected winnings of $C$ in a winner-take-all sub-tournament are unaffected by a fair bet. Similarly, they are unaffected in scenarios when neither $A$ nor $B$ finishes ahead of $C$. It suffices to prove that $C$’s expected winnings increase when $B$ finishes ahead of both $A$ and $C$. For sub-tournaments paying the top two finishers, $C$’s expected winnings when $B$ finishes first are $$\dfrac{(y+w)u}{1-y-w}=ug_{0}(w),$$ which we have seen is concave up. In this case we can actually conclude that $C$’s expected winnings must increase for any fair wager among two [*or more*]{} players. As in the proof of Theorem \[main\], we further break down to scenarios in which a fixed set of $m-1$ other players finish ahead of $B$, who finishes in $m$th place. Again, there is no loss of generality in assuming $B$ finishes first, with the top $k+2$ places paid, with $k\ge1$. Our final reduction is to scenarios where the order of the first $k$ finishers other than $A$, $B$, or $C$ is fixed. Let $x$, $y$, and $u$ denote the respective fractions of all chips in play held by $A$, $B$, and $C$. Let the fractions of the other relevant $k$ finishers be $z_{1},\ldots,z_{k}$. Let $w$ be the amount $B$ wins on the fair wager. When $A$ also finishes in the money, we’ll fix $C$’s position and “float” $A$ as in the proof of Theorem \[main\], summing over the different positions for $C$. To these we’ll add those cases that $A$ does not finish in the money by floating $C$.
Noting that only $k+2$ places are paid in this scenario, $C$’s expected winnings are $$\begin{gathered} \left[q_{k+1}(x-w;y+w;u,z_{1},\ldots,z_{k})-p_{k+3}(y+w,u,z_{1},\ldots,z_{k},x-w)\right]\\ +\left[q_{k+1}(x-w;y+w;z_{1},u,z_{2},\ldots,z_{k})-p_{k+3}(y+w,z_{1},u,\ldots,z_{k},x-w)\right]\\ +\cdots+\left[q_{k+1}(x-w;y+w;z_{1},\ldots,u,z_{k})-p_{k+3}(y+w,z_{1},\ldots,u,z_{k},x-w)\right]\\ +q_{k}(u;y+w;z_{1},\ldots,z_{k}).\end{gathered}$$ We apply Lemma \[partfrac\] to each of the differences. The first is $$\begin{aligned} q_{k+1}&(x-w;y+w;u,z_{1},\ldots,z_{k})-p_{k+3}(y+w,u,z_{1},\ldots,z_{k},x-w) \\ =&\dfrac{y+w}{x+y}p_{k+2}(x+y,u,z_{1},\ldots,z_{k})\\ &-p_{k+3}(y+w,u,z_{1},\ldots,z_{k},1-x-y-u-z_{1}-\cdots-z_{k})\\ &-p_{k+3}(y+w,u,z_{1},\ldots,z_{k},x-w)\\ =&\dfrac{y+w}{x+y}p_{k+2}(x+y,u,z_{1},\ldots,z_{k}) -p_{k+2}(y+w,u,z_{1},\ldots,z_{k}),\end{aligned}$$ with similar expressions for the other terms. From the definition of $q$, we can express $C$’s expected winnings as $$\begin{aligned} \dfrac{y+w}{x+y}&\left[ q_{k}(u;x+y;z_{1},\ldots,z_{k}) -p_{k+2}(x+y, z_{1},\ldots,z_{k}, u)\right]\\ &+p_{k+2}(y+w,z_{1},\ldots,z_{k},u).\end{aligned}$$ The first term is linear in $w$. The second term is, essentially, $$z_{1}\cdots z_{k}ug_{k+1}(w)$$ in the notation of equation (\[gk\]) and is thus concave up, completing the proof. As we saw in its proof, Theorem \[bystander\] holds for tournaments paying only two places for fair bets among three or more players, but is false in general. Even then, counterexamples are quite rare. In our counterexample below, the first three finishers win 1 unit. (Perhaps it’s a satellite tournament to earn entry into another tournament.) The bet has two equally likely outcomes: A wins 16 units, B and C each lose 8 units or A loses 16 units, B and C each win 8 units. The expected winnings, to four decimal places, are given in the table below. 
  -------- -------------- --------------------- ---------------------
  Player   Initial        Initial               Final
           Chip Count     Expected Winnings     Expected Winnings
  A        140            0.9952                0.9914
  B        10             0.5256                0.5316
  C        10             0.5256                0.5316
  D        50             0.9536                0.9455
  -------- -------------- --------------------- ---------------------

However, there is an interesting special case, with which we conclude this paper. \[merge\] Under the assumptions of Theorem \[bystander\], if two or more players each bet a fixed amount and the total of all bets is won by one of these players with probability proportional to the size of his or her wager, then the expected winnings of all players not involved in the bet increase. In particular, if the chips of two or more players are combined into a single player’s stack, the expected winnings of all other players increase. The case of two players is a special case of Theorem \[bystander\]. The general case can be realized as a sequence of such fair wagers between two players.

[99]{} F. Thomas Bruss, Guy Louchard, and John W. Turner, On the $N$-Tower-Problem and Related Problems, Adv. Appl. Prob. [**35:1**]{} (2003), 278–294 (MR1975514). William Chen and Jerrod Ankenman, The Theory of Doubling Up, The Intelligent Gambler [**23**]{} (Spring/Summer 2005), 3–4. Bill Chen and Jerrod Ankenman, The Mathematics of Poker, ConJelCo LLC, 2006. William Feller, An Introduction to Probability Theory and Its Applications, vol. 1, 3rd Edition, Wiley, 1968. Tom Ferguson, Gambler’s Ruin in Three Dimensions, 1995, available at <http://www.math.ucla.edu/~tom/papers/unpublished/gamblersruin.pdf>. Sam Ganzfried and Tuomas Sandholm, An approximate jam/fold equilibrium for 3-player no-limit Texas hold’em tournaments, Proc. of 7th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2008), Padgham, Parkes, Müller and Parsons (eds.), May, 12–16, 2008, Estoril, Portugal. George T. Gilbert, Racing Early in Tournaments, Two Plus Two Internet Magazine, [**2:3**]{} (March 2006).
(Available from the author: g.gilbert@tcu.edu.) Tony Henke, Is poker different from flipping coins?, Masters Thesis, Washington University in St. Louis, 2007, available at <http://economics.wustl.edu/conference/Honors/Henke_Thesis.pdf>. Mason Malmuth, Settling Up in Tournaments: Part III, in Gambling Theory and Other Topics, Two Plus Two Publishing, 1994. Eddie Shoesmith, Huygens’ Solution to the Gambler’s Ruin Problem, Historia Mathematica [**13**]{} (1986), 157–167 (MR0851874). Yvik C. Swan and F. Thomas Bruss, A Matrix-Analytic Approach to the $N$-Player Ruin Problem, J. Appl. Prob. [**43**]{} (2006), 755–766 (MR2274798).
---
abstract: 'This paper deals with two questions relative to the inverse coefficient problem of recovering the electric permittivity and conductivity of a medium from partial boundary data at a fixed frequency. The underlying model is the time-harmonic Maxwell equations in the electric field. First, an identifiability result is proved for partial boundary data without restrictive conditions on the inaccessible part of the boundary. The second issue that is addressed is the data completion problem on the inaccessible part of the boundary. The quasi-reversibility method is studied, and different mixed formulations are proposed. Well-posedness and convergence results are proved. Various two- and three-dimensional numerical simulations attest to the efficiency of the method, in particular for noisy data.'
title: 'About an inverse electromagnetic coefficient problem: uniqueness with partial boundary data and quasi-reversibility method for data completion'
---

Introduction
============

Let $\Omega$ denote a bounded and simply connected domain in $\R^d$, $d = 2,3$, with boundary $\Gamma \coloneqq \partial\Omega$. The unit outward normal to $\Omega$ is denoted by $\bfn$. Assume the medium in $\Omega$ to be inhomogeneous and isotropic, of constant magnetic permeability $\mu = \mu_0$ with $\mu_0$ the magnetic permeability in vacuum. Let $\eps, \sigma$ be non-negative functions representing the electric permittivity and conductivity respectively. The refractive index $\kappa$ of the medium in $\Omega$ is defined by $\kappa(\bfx) = \frac{1}{\eps_0}\left(\eps(\bfx) + i\frac{\sigma(\bfx)}{\omega}\right), \bfx \in \Omega$, where $\eps_0$ is the electric permittivity in vacuum. Consider the electric field intensity $\bfE$ satisfying the time-harmonic Maxwell equations at a frequency $\omega > 0$ $$\label{eq:maxwell} \curl\curl\bfE - k^2\kappa\bfE = 0, \quad \stext[r]{in} \Omega,$$ with Dirichlet or Neumann boundary condition on $\Gamma$.
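As a quick numerical illustration of these coefficients (ours, not from the paper), the sketch below evaluates $\kappa$ and the wavenumber $k = \omega\sqrt{\mu_0\eps_0}$ for hypothetical tissue-like values $\eps = 50\,\eps_0$ and $\sigma = 1$ S/m at 1 GHz.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity [F/m]
MU0 = 4e-7 * math.pi      # vacuum permeability [H/m]

def refractive_index(eps, sigma, omega):
    """kappa(x) = (eps(x) + i*sigma(x)/omega) / eps0, as defined above."""
    return (eps + 1j * sigma / omega) / EPS0

def wavenumber(omega):
    """k = omega * sqrt(mu0 * eps0), i.e. omega / c."""
    return omega * math.sqrt(MU0 * EPS0)

# Hypothetical tissue-like medium at 1 GHz: eps_r = 50, sigma = 1 S/m.
omega = 2 * math.pi * 1e9
kappa = refractive_index(50 * EPS0, 1.0, omega)  # about 50 + 18j
k = wavenumber(omega)                            # about 21 rad/m
```

The sizable imaginary part of $\kappa$ at such conductivities is what makes the conductive contribution visible in the boundary data.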
The number $k \coloneqq \omega\sqrt{\mu_0\eps_0}$ is the wavenumber. We are interested in the inverse boundary value problem of recovering the electric permittivity $\eps$ and conductivity $\sigma$ from partial boundary data on $\Gamma$ at a fixed frequency $\omega$. The study of dielectric properties of biological tissues or materials is of great interest in medical or industrial applications. Information about the characteristics and composition of a tissue or a material can be used to develop new non-invasive modalities in many practical applications of electric fields in agriculture, bioengineering, geophysical exploration, and medical diagnosis. For instance, microwave imaging (at high electromagnetic frequencies) is under investigation for cancer screening or brain stroke detection (see Tournier *et al* [@Tournier; @Tournier2]). The fundamental example of coefficient reconstruction is Calderón’s inverse conductivity problem [@Calderon]. The theoretical and numerical study of Calderón’s problem in Electrical Impedance Tomography (which consists in reconstructing the conductivity of a medium from boundary measurements of electric voltages and currents) has also been extensively addressed in the last two decades. Many works have dealt with the questions of uniqueness, stability and reconstruction. Without being exhaustive, we refer for instance to [@SylvesterUhlmann; @Alessandrini; @Borcea; @Uhlmann; @Ammari09; @Caro13; @Ammari17] and references therein. In this paper, we focus on the inverse medium problem associated with the time-harmonic Maxwell equations formulated in the electric field. In particular, we are interested in both the uniqueness question for partial data and the data completion problem. The two issues are complementary in view of the numerical coefficient reconstruction from measurements collected only on an accessible part of the boundary.
The uniqueness issue aims to answer the (theoretical) question of whether or not it is possible to recover the coefficients from the boundary data. In most configurations, a single measurement is not sufficient to identify the coefficient functions, and typical identifiability results state that identical Dirichlet-to-Neumann maps (on part of the boundary) yield identical coefficient functions. From a practical point of view, the knowledge of the complete Dirichlet-to-Neumann map is not realistic since it requires an infinite number of source terms. But the knowledge of a single pair of Dirichlet-Neumann data on only part of the boundary leads to another difficulty: whereas the boundary data are over-determined on the accessible part, they are under-determined on the inaccessible part. Hence, the numerical resolution of the direct problem, which is a crucial step in most identification or minimization algorithms, is not possible without an appropriate data completion procedure. The usual inverse boundary value problem (IBVP) for the time-harmonic (full) Maxwell equations was first proposed in [@Somersalo92]. The lack of ellipticity of Maxwell’s equations adds complexity to the problem. We refer to the introduction of [@CaroZhou14] which gives an interesting overview of the results related to the IBVP for Maxwell’s equations. With regard to our first concern, two uniqueness results for certain types of partial data are stated in [@Caro11] and [@Brownetal16]. Their assumptions can be restrictive in the practical applications we have in mind: Caro [@Caro11] imposes some geometrical conditions on the inaccessible part of the boundary, and Brown *et al* [@Brownetal16] consider a perfect conducting boundary condition on this part. We impose neither geometrical conditions nor boundary conditions on the inaccessible part. As mentioned before, the data completion problem consists in recovering data on the inaccessible part of the boundary from measured data on the accessible part.
The corresponding Cauchy problem is well-known to be ill-posed (e.g. [@Alessandrini09; @BenBelgacem]). Different regularization methods have been introduced for reconstructing the missing data for elliptic equations (see e.g. [@Andrieux; @Azaiez; @BenBelgacem05; @Kozlov]). We focus on the non-iterative quasi-reversibility approach, introduced by Lattès and Lions [@LattesLions67]. The idea is to replace the ill-posed Cauchy problem with a family of well-posed variational problems with additional unknowns, depending on a small regularization parameter. The variational setting is numerically interesting since finite element methods can be used. The quasi-reversibility method has been successfully adopted and validated for Laplace’s equation [@Klibanov91; @Bourgeois05; @BourgeoisDarde; @Darde] and Helmholtz’s equation [@BR18]. In the present paper, we study a quasi-reversibility method for the vector Maxwell equations. To the best of our knowledge, this is the first time that such an approach has been studied for solving a data completion problem in electromagnetics. The article is organized as follows. Section \[sec:uniqueness\] is devoted to the uniqueness result and its proof. Section \[sec:QR\] presents a first version of the quasi-reversibility method for solving the data completion problem for Maxwell’s equations in the electric field. Both theoretical and numerical aspects are addressed. Section \[sec:RRQR\] proposes two relaxed mixed problems with regularization in order to deal with noisy data. The relaxed problem is shown to be well posed, and convergence to the solution of the initial problem is proven under some conditions on the involved regularization and relaxation parameters. Numerical simulations confirm the theoretical results in two and three space dimensions and attest to the efficiency of the method. Finally, we give some concluding remarks. Uniqueness result {#sec:uniqueness} ================= Let $\Gamma_0$ be a non-empty open subset of $\Gamma$, called the accessible part of $\Gamma$ (see ) [@Brownetal16].
The inverse boundary value problem that we are interested in is to determine the electric permittivity $\eps$ and conductivity $\sigma$ from boundary measurements taken on $\Gamma_0$ for a given (boundary) source term, at a fixed frequency $\omega$. These measurements, together with the source term, define a Cauchy data set $C(\eps,\sigma; \Gamma_0)$ (see Definition \[Def\_Cauchyset\]) [@CaroZhou14]. The uniqueness question reads as follows: Given a frequency $\omega>0$ and two sets of non-negative coefficients $\{\eps_{j}, \sigma_{j} \}$, $j \in \{1,2\}$, does $C(\eps_1,\sigma_1;\Gamma_0) = C(\eps_2,\sigma_2; \Gamma_0)$ imply $\eps_{1} = \eps_{2}$ and $\sigma_1 = \sigma_2$ in $\Omega$? Two uniqueness results for partial data are stated in [@Caro11] and [@Brownetal16]. Caro imposes some geometrical conditions on the inaccessible part $\Gamma_1$, which is supposed to be either part of a plane or part of a sphere [@Caro11]. Brown *et al* [@Brownetal16] relax this geometrical condition on $\Gamma_1$ (the boundary of the domain is assumed $C^{1,1}$) but consider a perfect conducting boundary condition $\restriction{\bfE}{\Gamma_1} \times \bfn = 0$. This hypothesis can be restrictive in applications. Our uniqueness result can be seen as an improvement of these results: in our configuration, neither geometrical nor boundary conditions are fixed on $\Gamma_1$. The idea behind its proof is simple and new. It uses the unique continuation principle and results of Caro and Zhou [@CaroZhou14]. Let us now give the definitions needed for our uniqueness result. Let us introduce the vector space $\Hcurl[\Omega] = \set*{\bfu \in L^2(\Omega)^3}{\curl\bfu \in L^2(\Omega)^3}$. For any vector field $\bfu \in \Hcurl[\Omega]$, we define the tangential trace by continuous extension of the mapping $\gamma_t(\bfu) \coloneqq \restriction{\bfu}{\Gamma} \times \bfn$.
We introduce the trace space $Y(\Gamma) = \set*{\bff \in H^{-1/2}(\Gamma)^3}{\Exists{\bfu \in \Hcurl[\Omega]} \gamma_t(\bfu) = \bff}$ and its restriction to $\Gamma_0$ in the distributional sense $Y(\Gamma_0) = \set*{\restriction{\bff}{\Gamma_0}}{\bff \in Y(\Gamma)}$. \[Def\_accessible\] Let $\Omega$ be a non-empty, open, bounded connected domain in $\R^3$ with Lipschitz boundary $\Gamma$. Let $\Gamma_0$ be a smooth non-empty open subset of $\Gamma$ such that meas $\Gamma_0 > 0$. The part $\Gamma_0$ is called the accessible part of the boundary $\Gamma$ and $\Gamma_1 \coloneqq \Gamma \setminus \overline{\Gamma_0}$ the inaccessible part. The set of admissible coefficients $\eps$ and $\sigma$ is given in the following definition. \[Def\_admissible\] The pair of coefficients $\eps$ and $\sigma$ is admissible if $\eps, \sigma \in \mcC^1(\overline{\Omega})$ are such that $\eps(\bfx) \geq \tilde{\eps}$ and $\sigma(\bfx) \geq 0$ almost everywhere in $\Omega$ for a strictly positive constant $\tilde{\eps}$. \[Def\_Cauchyset\] Let $\Omega$ and $\Gamma_{0}$ be as in Definition \[Def\_accessible\]. For a pair of admissible coefficients $(\eps, \sigma)$ defined on $\Omega$ as in Definition \[Def\_admissible\], the corresponding Cauchy data set $C(\eps,\sigma; \Gamma_0)$ at a fixed frequency $\omega>0$ consists of pairs $(\bff, \bfg) \in Y(\Gamma_0) \times Y(\Gamma_0)$ such that there exists a solution $\bfE \in \Hcurl[\Omega]$ satisfying (\[eq:maxwell\]) for $\kappa = (\eps + i \sigma/\omega)/\eps_0$, and the boundary conditions $\restriction{\bfE}{\Gamma_0} \times \bfn = \bff$ and $\curl\restriction{\bfE}{\Gamma_0} \times \bfn = \bfg$. The definition of Cauchy data sets is used for instance in [@Ola03; @Caro11; @CaroZhou14]. The partial boundary data are also given by the admittance map $\Lambda\colon \restriction{\bfE}{\Gamma} \times \bfn \mapsto \curl\restriction{\bfE}{\Gamma} \times \bfn$ restricted to $\Gamma_0$ if $\omega$ is not a resonant frequency for (\[eq:maxwell\]). The uniqueness result reads as follows. \[thm:uniqueness\] Let $\Omega$ and $\Gamma_0$ be as in Definition \[Def\_accessible\].
Let $\omega > 0$ be a frequency. Assume that $\eps_{j}, \sigma_{j}$, $j \in \collection{1,2}$, are two pairs of admissible coefficients such that $\eps_{1} = \eps_2$ and $\sigma_{1} = \sigma_2$ in $\overline{\mcV}$ where $\mcV$ is a neighbourhood of $\Gamma$. Then, $C(\eps_1,\sigma_1; \Gamma_0) = C(\eps_2,\sigma_2; \Gamma_0)$ implies $\eps_{1} = \eps_{2}$ and $\sigma_1 = \sigma_2$ in $\Omega$. The proof is divided into two parts. ![Example of a neighbourhood $\mcV$.[]{data-label="fig:domain"}](domain.pdf){width="30.00000%"} In a first step, we prove that the Cauchy data sets coincide not only on $\Gamma_0$, but on the whole boundary $\Gamma$. To this end, consider a couple $(\bff,\bfg) \in C(\eps_1,\sigma_1; \Gamma)$, and let $\bfE_1 \in \Hcurl[\Omega]$ satisfy $$\label{eq:Cauchy_1} \left\{ \begin{array}{rcl@{\hspace{4\tabcolsep}}l} \curl\curl\bfE_1 - k^2\kappa_1\bfE_1 &=& 0, & \stext[r]{in} \Omega, \\ \bfE_1 \times \bfn = \bff, & \stext[r]{on} \Gamma, \\ \curl\bfE_1 \times \bfn = \bfg, & \stext[r]{on} \Gamma. \end{array} \right.$$ Since the boundary conditions are obviously satisfied on the accessible part $\Gamma_0$, we get $(\bff,\bfg)\in C(\eps_1,\sigma_1; \Gamma_0)$ and thus $(\bff,\bfg) \in C(\eps_2,\sigma_2; \Gamma_0) = C(\eps_1,\sigma_1; \Gamma_0)$ by assumption. Therefore, there is a field $\bfE_2 \in \Hcurl[\Omega]$ such that $$\label{eq:Cauchy_2} \left\{ \begin{array}{rcl@{\hspace{4\tabcolsep}}l} \curl\curl\bfE_2 - k^2\kappa_2\bfE_2 &=& 0, & \stext[r]{in} \Omega, \\ \bfE_2 \times \bfn = \bff, & \stext[r]{on} \Gamma_0, \\ \curl\bfE_2 \times \bfn = \bfg, & \stext[r]{on} \Gamma_0, \end{array} \right.$$ where we emphasize that the boundary conditions are only satisfied on $\Gamma_0$.
The unique continuation principle (see [@Brownetal16 Lemma 5.4], which fits the above regularity assumptions) applied to the difference $\bfE = \bfE_1 - \bfE_2$ in the neighbourhood $\mcV$ of $\Gamma$ then yields $\bfE = 0$ in $\overline{\mcV}$ since $$\label{eq:Cauchy_couronne} \left\{ \begin{array}{rcl@{\hspace{4\tabcolsep}}l} \curl\curl\bfE - k^2\kappa\bfE &=& 0, & \stext[r]{in} \mcV, \\ \bfE \times \bfn = 0, & \stext[r]{on} \Gamma_0, \\ \curl\bfE \times \bfn = 0, & \stext[r]{on} \Gamma_0, \end{array} \right.$$ where $\kappa = (\eps_1 + i \sigma_1/\omega)/\eps_0 = (\eps_2 + i \sigma_2/\omega)/\eps_0$ by assumption. Consequently, $\bfE_1 \times \bfn = \bfE_2 \times \bfn = \bff$ and $\curl\bfE_1 \times \bfn = \curl\bfE_2 \times \bfn = \bfg$ on the whole boundary $\Gamma$, and $(\bff,\bfg)$ belongs to the Cauchy data set $C(\eps_2,\sigma_2; \Gamma)$. Exchanging the roles of $(\eps_1,\sigma_1)$ and $(\eps_2,\sigma_2)$ proves that $$\label{eq:Cauchy_Gamma} C(\eps_1,\sigma_1;\Gamma) = C(\eps_2,\sigma_2;\Gamma).$$ Now, we infer from the assumptions on the coefficients that $\partial^{\alpha} \eps_1(\bfx) = \partial^{\alpha} \eps_2(\bfx)$ and $\partial^{\alpha} \sigma_1(\bfx) = \partial^{\alpha} \sigma_2(\bfx)$ on the boundary $\Gamma$ for any multi-index $\alpha \in \N^3$ such that $\abs{\alpha} \leq 1$. These properties are the assumptions of the global uniqueness theorem of Caro and Zhou (see [@CaroZhou14 Theorem 1.1]). This gives $\eps_{1} = \eps_{2}$ and $\sigma_1 = \sigma_2$ in $\Omega$ and completes the proof. Our uniqueness theorem assumes that the electric permittivity $\eps$ and conductivity $\sigma$ are known in a neighbourhood of $\Gamma$. Notice, however, that no condition on the electric field is prescribed on the inaccessible part $\Gamma_1$. This hypothesis is less restrictive than geometrical or boundary conditions on $\Gamma_1$.
Indeed, in the biomedical applications that we have in mind, the computational domain represents a head model, and the aim is to identify inner perturbations of a healthy background. In the neighbourhood of the head surface, the electromagnetic coefficients can thus be fixed to known (constant) values available from the literature [@McCan; @Tofligi], and the uniqueness theorem applies. The quasi-reversibility method for Maxwell’s equations {#sec:QR} ====================================================== In view of the numerical resolution of the inverse coefficient problem in the domain $\Omega$, it is interesting to propose methods which are able to compute, in a stable way, the electric field $\bfE$ on the domain $\overline{\mcV}$ (see ) from known electric coefficients $\eps$ and $\sigma$ (the assumptions of the uniqueness theorem) and partial boundary data, and then to solve the inverse problem in the remaining domain $U = \Omega \setminus \overline{\mcV}$ from total data $(\restriction{\bfE}{\Gamma_i} \times \bfn, \curl\restriction{\bfE}{\Gamma_i} \times \bfn)$ on the interface $\Gamma_i$ (see ). The aim is thus to map a couple of Cauchy data given on the accessible part $\Gamma_0$ onto the interior part of $\partial\mcV$, even if the data on $\Gamma_0$ are corrupted by noise. This requires the resolution of a Cauchy problem in the domain $\mcV$, which is known to be ill-posed. The rest of the paper is devoted to the solution of this data completion problem for Maxwell’s equations. Principle --------- The method of quasi-reversibility (called hereafter the QR method) provides a regularized solution of ill-posed Cauchy problems in a bounded domain. It was introduced in [@LattesLions67] for elliptic equations and later revisited in [@Klibanov91; @Bourgeois05]. In particular, Bourgeois proposed a mixed formulation of quasi-reversibility for Laplace’s equation [@Bourgeois05]. Here we adapt it to Maxwell’s equations.
The Cauchy problem reads: find $\bfE \in \Hcurl[\Omega]$ solution to $$\label{eq:Cauchy} \left\{ \begin{array}{rcl@{\hspace{4\tabcolsep}}l} \curl\curl\bfE - k^2\kappa\bfE &=& 0, & \stext[r]{in} \Omega, \\ \bfE \times \bfn &=& \bff, & \stext[r]{on} \Gamma_0, \\ \curl\bfE \times \bfn &=& \bfg, & \stext[r]{on} \Gamma_0. \end{array} \right.$$ Let us introduce the spaces $V_\bff = \set*{\bfu \in \Hcurl[\Omega]}{\gamma_t(\bfu) = \bff \stext{on} \Gamma_0}$ for any $\bff \in Y(\Gamma_0)$ and $M = \set*{\bfmu \in \Hcurl[\Omega]}{\gamma_T(\bfmu) = 0 \stext{on} \Gamma_1}$, where $\gamma_T\colon \Hcurl[\Omega] \to Y(\Gamma)^\prime$ and $\gamma_T(\bfmu) = \bfn \times (\restriction{\bfmu}{\Gamma} \times \bfn)$ for smooth vector fields $\bfmu$ (see [@Monk03] for details). We may notice that fields in the vector space $M$ satisfy $\gamma_T(\bfmu) = 0$ on the interior of $\Gamma_1$. Assume that $(\bff,\bfg) \in C(\eps,\sigma; \Gamma_0)$ (see Definition \[Def\_Cauchyset\]) where $\eps$ and $\sigma$ are admissible coefficients (see Definition \[Def\_admissible\]). We denote by $a(\cdot,\cdot)$ the sesqui-linear form corresponding to (\[eq:maxwell\]): $a(\bfu,\bfv) = \dotprod{\curl\bfu}{\curl\bfv}{} - k^2\dotprod{\kappa\bfu}{\bfv}{}$ with $\bfu, \bfv \in \Hcurl[\Omega]$ and $\dotprod{}{}{}$ the dot-product in $L^2$. On $\Hcurl[\Omega]$, we introduce the linear form $\ell(\cdot)$ defined by $\ell(\bfpsi) = \duality{\bfg}{\gamma_T(\bfpsi)}{Y(\Gamma_0),Y(\Gamma_0)'}$. For small $\delta > 0$, we consider the following weak mixed formulation: Find $(\bfE_\delta,\bfF_\delta) \in V_\bff \times M$ such that $$\label{eq:qr} \left\{ \begin{array}{rcl@{\hspace{4\tabcolsep}}l} \delta\dotprod{\bfE_\delta}{\bfphi}{\Hcurl[\Omega]} + a(\bfphi,\bfF_\delta) &=& 0, & \forall \bfphi \in V_0, \\ a(\bfE_\delta,\bfpsi) - \dotprod{\bfF_\delta}{\bfpsi}{\Hcurl[\Omega]} &=& \ell(\bfpsi), & \forall \bfpsi \in M, \end{array} \right.$$ where $\dotprod{}{}{\Hcurl[\Omega]}$ denotes the dot-product in $\Hcurl[\Omega]$.
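After discretization in finite element bases of $V_0$ and $M$, the mixed formulation leads to a saddle-point linear system. The following block form is only a sketch (the matrix names are ours, and the exact transpose/conjugation of the off-diagonal blocks depends on the chosen sesquilinearity convention): $$\begin{pmatrix} \delta\, M_V & B^{\ast} \\ B & -N_M \end{pmatrix} \begin{pmatrix} \mathbf{e}_\delta \\ \mathbf{f}_\delta \end{pmatrix} = \begin{pmatrix} \mathbf{0} \\ \boldsymbol{\ell} \end{pmatrix},$$ where $M_V$ and $N_M$ denote the Gram matrices of the $\Hcurl[\Omega]$ inner product on the bases of $V_0$ and $M$, $B_{ij} = a(\bfphi_j,\bfpsi_i)$, $\boldsymbol{\ell}_i = \ell(\bfpsi_i)$, and $\mathbf{e}_\delta$ collects the coefficients of $\bfE_\delta$ (up to right-hand side contributions coming from a lifting of the boundary data $\bff$).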
The convergence theorem below states that the regularized solution $(\bfE_\delta, \bfF_\delta)$ tends to $(\bfE, 0)$, with $\bfE$ a solution to (\[eq:Cauchy\]), when $\delta$ tends to 0. The proof needs the following preliminary lemma. \[lm:trace\] The partial trace application $\bfu \mapsto \restriction{\gamma_t(\bfu)}{\Gamma_0}$ is linear, continuous and surjective from $\Hcurl[\Omega]$ to $Y(\Gamma_0)$. Moreover, there exists a continuous lifting application: we can find a constant $r > 0$ such that, for any $\bfv \in Y(\Gamma_0)$, there exists $\bfu \in \Hcurl[\Omega]$ with $\restriction{\gamma_t(\bfu)}{\Gamma_0} = \bfv$ and $\norm{\bfu}{\Hcurl[\Omega]} \leq r \norm{\bfv}{Y(\Gamma_0)}$. The lemma is a direct consequence of the fact that the trace application $\gamma_t$ is linear, continuous and surjective from $\Hcurl[\Omega]$ to $Y(\Gamma)$ (see [@Monk03]). Then, we build a homeomorphism between $\faktor{\Hcurl[\Omega]}{V_0}$ and $Y(\Gamma_0)$. \[thm:convergence\] Let $(\bff,\bfg) \in Y(\Gamma_0) \times Y(\Gamma_0)$. For any $\delta > 0$, problem (\[eq:qr\]) admits a unique solution $(\bfE_\delta,\bfF_\delta) \in V_\bff \times M$. If, in addition, $(\bff,\bfg)$ belongs to the Cauchy data set $C(\eps, \sigma; \Gamma_0)$, then $$\label{eq:convergence} \lim_{\delta \to 0} (\bfE_\delta,\bfF_\delta) = (\bfE, 0)$$ in $V_\bff\times M$. Here, $\bfE$ is the unique solution in the set $$K = \set*{\bfv \in V_\bff}{a(\bfv,\bfpsi) = \ell(\bfpsi)\ \forall \bfpsi \in M}$$ of the minimization problem $$\label{eq:minHc} \inf_{\bfv\in K} \norm{\bfv}{\Hcurl[\Omega]}.$$ The proof relies on the arguments given in [@BR18] where an abstract setting of the quasi-reversibility method is presented (see also [@Bourgeois05]). Indeed, problem (\[eq:qr\]) can be written in closed form as a classical variational formulation in the unknowns $(\bfE_\delta,\bfF_\delta)$, involving a continuous and coercive sesqui-linear form $A_\delta(\cdot, \cdot)$ defined on $\Hcurl[\Omega] \times M$. Existence and uniqueness thus follow from Lax-Milgram’s Lemma.
In Section \[sec:RRQR\], we address convergence in a more general setting for a relaxed version of (\[eq:qr\]). Since the arguments are similar, we omit the convergence proof here and refer the reader to the proof of the corresponding result for details. Discretization by FEM and numerical results {#sec:numerical_classic} ------------------------------------------- The numerical solver for the 2D Maxwell equations has been implemented with FreeFem++ (see [@Hecht12]). We consider a regular triangulation $\mcT_h$ of $\overline{\Omega}$. For any $T \in \mcT_h$, let $h_T$ be its diameter. Then $h = \max_{T \in \mcT_h} h_T$ is the mesh parameter of $\mcT_h$. Edge finite elements of order 1 (see [@Nedelec80]) are used to approximate the solution of the regularized problem (\[eq:qr\]). Standard arguments in variational theory give an existence and uniqueness result for the associated discretized formulation of (\[eq:qr\]). Since problem (\[eq:qr\]) can be written as a variational formulation with a continuous and coercive sesqui-linear form, the discrete problem enters the framework of Céa’s lemma. The discretization error is thus proportional to the interpolation error, which is of order $\mcO(h)$ for the considered finite elements (see e.g. [@Monk03]) provided the continuous solution $(\bfE_\delta,\bfF_\delta)$ is regular. However, the coercivity constant of the involved sesqui-linear form is given by $\min\collection{\delta,1}$, and the discretization error thus behaves as $\mcO\left(\frac{h}{\delta}\right)$. The regularization parameter $\delta$ therefore has to be chosen with respect to the mesh size, and arbitrarily small values of $\delta$ are prohibited. In all the simulations, we fix $\eps_0 = \mu_0 = \omega = 1$ and $\kappa = 1 + i$. Our reference solution is the plane wave $\bfE(\bfx) = \bfeta^\perp e^{ik\sqrt{\kappa}\bfeta\cdot\bfx}$ where $\bfeta \in \R^2$ is the wave propagation vector and $\bfeta^\perp$ is a unit vector orthogonal to $\bfeta$.
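As a quick sanity check of this reference solution, the following sketch (plain Python with numpy, independent of the FreeFem++ solver; the function names are ours) evaluates the plane wave and verifies by finite differences that it satisfies $\curl\curl\bfE - k^2\kappa\bfE = 0$:

```python
import numpy as np

def plane_wave(x, y, eta, k, kappa):
    """Reference solution E(x) = eta_perp * exp(i k sqrt(kappa) eta.x).
    np.sqrt on complex arguments is the principal branch (branch-cut on
    the negative real axis), matching the convention used in the text."""
    ex, ey = eta
    px, py = -ey, ex                    # eta_perp: unit vector orthogonal to eta
    phase = np.exp(1j * k * np.sqrt(kappa) * (ex * x + ey * y))
    return px * phase, py * phase

def pde_residual(x, y, eta, k, kappa, h=1e-3):
    """Central finite-difference check of curl curl E - k^2 kappa E = 0 at a
    point.  In 2D, curl E = dE2/dx - dE1/dy is a scalar, and the curl of a
    scalar u is the vector (du/dy, -du/dx)."""
    def curl_E(x, y):
        _, E2p = plane_wave(x + h, y, eta, k, kappa)
        _, E2m = plane_wave(x - h, y, eta, k, kappa)
        E1p, _ = plane_wave(x, y + h, eta, k, kappa)
        E1m, _ = plane_wave(x, y - h, eta, k, kappa)
        return (E2p - E2m) / (2 * h) - (E1p - E1m) / (2 * h)
    cc1 = (curl_E(x, y + h) - curl_E(x, y - h)) / (2 * h)    # d(curl E)/dy
    cc2 = -(curl_E(x + h, y) - curl_E(x - h, y)) / (2 * h)   # -d(curl E)/dx
    E1, E2 = plane_wave(x, y, eta, k, kappa)
    return abs(cc1 - k**2 * kappa * E1) + abs(cc2 - k**2 * kappa * E2)

# Parameters of the simulations: eps0 = mu0 = omega = 1, hence k = 1, kappa = 1 + i.
res = pde_residual(0.3, -0.2, eta=(1.0, 0.0), k=1.0, kappa=1.0 + 1.0j)
print(res)  # small: only the finite-difference error remains
```

The residual is at the level of the finite-difference truncation error, confirming that the plane wave is an exact solution of (\[eq:maxwell\]).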
The square-root $\sqrt{\kappa}$ stands for the classical complex square-root with branch-cut along the negative real axis. ### Unit disc In the first example, $\Omega$ is the unit disc discretized with a mesh of size $h = \scnum{2.26e-2}$, ending up with triangles and edges. Two different configurations of accessible/inaccessible parts are tested (cf. ): 1. In the configuration G34, $\Gamma_0$ is the arc of circle starting at angle 0 and ending at angle $\dfrac{3\pi}{2}$. The accessible part then represents of the boundary $\Gamma$. 2. In the configuration GE37, $\Gamma_0$ represents a set of 37 equally distributed electrodes of common length $\displaystyle \frac{\pi}{25}$. Electrodes cover of the boundary $\Gamma$. ![Choice of the accessible part $\Gamma_0$ (grey line). Left: configuration G34. Right: configuration GE37.[]{data-label="fig:config_Gamma"}](Gamma34.pdf "fig:"){width="30.00000%"} ![Choice of the accessible part $\Gamma_0$ (grey line). Left: configuration G34. Right: configuration GE37.[]{data-label="fig:config_Gamma"}](GammaE37.pdf "fig:"){width="30.00000%"} In , we show the relative error in the $L^2(\Omega)$-norm between the exact solution $\bfE$ and its approximation obtained by the numerical resolution of , with respect to $\delta$. For both configurations, we indicate the parameter $\delta$ for which the error is minimal. Notice that the behaviour of the error for the configuration G34 shown in is in agreement with the error analysis which claims that the error behaves as $\mcO\left(\frac{h}{\delta}\right)$: at fixed mesh size the error increases for small values of $\delta$. In , we report different errors obtained for the value $\delta = \num{9.103e-7}$. The approximation of the electric field $\bfE$ is more accurate on the whole domain $\Omega$ than on the inaccessible part $\Gamma_1$. The configuration GE37 yields better results since the accessible part covers a larger part of the boundary $\Gamma$. 
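The relative errors reported in the figures and tables are plain discrete $L^2$ quotients $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$; as an illustration, a minimal sketch of the computation (the function name and the sampling/quadrature weights are ours, not the FreeFem++ implementation):

```python
import numpy as np

def relative_l2_error(E_exact, E_approx, w):
    """Discrete relative L2 error ||E - E_delta|| / ||E|| over sampled
    field values; w holds quadrature weights (e.g. element areas)."""
    num = np.sqrt(np.sum(w * np.abs(E_exact - E_approx) ** 2))
    den = np.sqrt(np.sum(w * np.abs(E_exact) ** 2))
    return num / den

# A uniform 1% perturbation yields a 1% relative error, independent of w.
E = np.ones(10, dtype=complex)
err = relative_l2_error(E, 1.01 * E, w=np.full(10, 0.1))
print(err)  # ~0.01
```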
In , we illustrate these results by plotting the modulus of the error $\abs{\bfE - \bfE_\delta}$ in the domain $\Omega$. The largest errors are located on $\Gamma_1$, and in particular at the intersection between $\Gamma_0$ and $\Gamma_1$. Indeed, the solution $(\bfE_\delta,\bfF_\delta)$ of (\[eq:qr\]) is the weak solution of a boundary value problem in $\Omega$ with mixed boundary conditions for $\bfE_\delta$ and $\bfF_\delta$. Mixed boundary conditions are known to induce singularities in the fields unless the data satisfy some compatibility conditions (see e.g. [@Grisvard] for the simpler case of the Laplace operator). In the present case, it is not clear whether these compatibility conditions are satisfied or not. This could explain the singular behaviour at the intersection points. ![Unit disc. QR method. Relative error $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ with respect to the regularization parameter $\delta$. Left: configuration G34. Right: configuration GE37.[]{data-label="fig:err_delta"}](err_delta_G34.pdf "fig:"){width="45.00000%"} ![Unit disc. QR method. Relative error $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ with respect to the regularization parameter $\delta$. Left: configuration G34. Right: configuration GE37.[]{data-label="fig:err_delta"}](err_delta_GE37.pdf "fig:"){width="45.00000%"} Configuration $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ $\frac{\norm{(\bfE - \bfE_\delta) \times \bfn}{0,\Gamma_1}}{\norm{\bfE \times \bfn}{0,\Gamma_1}}$ $\norm{\bfF_\delta}{0,\Omega}$ --------------- --------------------------------------------------------------------- --------------------------------------------------------------------------------------------------- -------------------------------- G34 1.1244e-02 1.7302e-01 1.8374e-04 GE37 7.6288e-04 1.5185e-02 1.8224e-04 : Unit disc. QR method. Errors for $\delta = \num{9.103e-7}$.[]{data-label="tab:err_qr"} ![Unit disc. QR method.
Modulus of the error $\abs{\bfE - \bfE_{\delta}}$. Left: configuration G34. Right: configuration GE37.[]{data-label="fig:err_qr"}](err_qr_G34.png "fig:"){width="45.00000%"} ![Unit disc. QR method. Modulus of the error $\abs{\bfE - \bfE_{\delta}}$. Left: configuration G34. Right: configuration GE37.[]{data-label="fig:err_qr"}](err_qr_GE37.png "fig:"){width="45.00000%"} ### Ring For the inverse problem we are studying, we are interested in ring-like domains which, for instance, model the different tissues of the head. The goal consists in mapping the data measured on part of the exterior boundary to an inner interface, allowing in a second step to use reconstruction methods in the interior domain. In this section, we present numerical results on a ring of internal radius 0.75, discretized with a mesh size $h = \scnum{2.05e-2}$, triangles and edges. Three measurement configurations are compared: G34 and GE37, both described in the previous section, and GExt where $\Gamma_0$ is the whole external boundary (see ). ![Choice of the accessible part $\Gamma_0$ (grey line). Left: configuration GExt ($\Gamma_0$ covers of the ring boundary). Middle: configuration G34 (). Right: configuration GE37 ().[]{data-label="fig:config_Gamma_ring"}](GammaExt-ring.pdf "fig:"){width="30.00000%"} ![Choice of the accessible part $\Gamma_0$ (grey line). Left: configuration GExt ($\Gamma_0$ covers of the ring boundary). Middle: configuration G34 (). Right: configuration GE37 ().[]{data-label="fig:config_Gamma_ring"}](Gamma34-ring.pdf "fig:"){width="30.00000%"} ![Choice of the accessible part $\Gamma_0$ (grey line). Left: configuration GExt ($\Gamma_0$ covers of the ring boundary). Middle: configuration G34 (). Right: configuration GE37 ().[]{data-label="fig:config_Gamma_ring"}](GammaE37-ring.pdf "fig:"){width="30.00000%"} The relative errors with respect to $\delta$ are shown in . 
For each configuration, we list the different errors obtained for the value $\delta$ realizing the minimum indicated in . Notice that we split the inaccessible part into an exterior part $\Gamma_1$ and the interior circle $\Gamma_i$. Furthermore, the modulus of the error $\abs{\bfE - \bfE_{\delta}}$ is reported in . We observe that the electric field $\bfE$ is well approximated in the ring by the quasi-reversibility approach when the data are available on the entire exterior boundary (configuration GExt). The transmission of the information to the inner boundary $\Gamma_i$ is very accurate. The configuration GE37 with electrodes also leads to very accurate results, with errors below $\pc{4}$ on the inaccessible part $\Gamma_1$ and the inner boundary $\Gamma_i$. The analysis of the results obtained with configuration G34 is less obvious: whereas the error of the auxiliary field $\bfF_\delta$ is very satisfactory, we observe on $\Omega$, $\Gamma_1$ and $\Gamma_i$ errors of , , and , respectively. The next section will propose a possible improvement of the quasi-reversibility method for such a configuration. ![Ring. QR method. Relative error $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ with respect to the regularization parameter $\delta$. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_delta_ring"}](err_delta_GExt_ring.pdf "fig:"){width="32.00000%"} ![Ring. QR method. Relative error $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ with respect to the regularization parameter $\delta$. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_delta_ring"}](err_delta_G34_ring.pdf "fig:"){width="32.00000%"} ![Ring. QR method. Relative error $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ with respect to the regularization parameter $\delta$. Left: configuration GExt. Middle: configuration G34.
Right: configuration GE37.[]{data-label="fig:err_delta_ring"}](err_delta_GE37_ring.pdf "fig:"){width="32.00000%"} Configuration $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ $\frac{\norm{(\bfE - \bfE_\delta) \times \bfn}{0,\Gamma_1}}{\norm{\bfE \times \bfn}{0,\Gamma_1}}$ $\frac{\norm{(\bfE - \bfE_\delta) \times \bfn}{0,\Gamma_\text{i}}}{\norm{\bfE \times \bfn}{0,\Gamma_\text{i}}}$ $\norm{\bfF_\delta}{0,\Omega}$ --------------- --------------------------------------------------------------------- --------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------- -------------------------------- GExt 3.9423e-04 6.6661e-04 6.6661e-04 1.7964e-04 G34 3.5093e-01 6.9598e-01 4.8011e-01 8.1894e-05 GE37 9.9784e-03 3.9068e-02 3.8737e-02 8.4337e-05 : Ring. QR method. Errors.[]{data-label="tab:err_qr_ring"} ![Ring. QR method. Modulus of the error $\abs{\bfE - \bfE_\delta}$. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_qr_ring"}](err_qr_GExt_ring.png "fig:"){width="32.00000%"} ![Ring. QR method. Modulus of the error $\abs{\bfE - \bfE_\delta}$. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_qr_ring"}](err_qr_G34_ring.png "fig:"){width="32.00000%"} ![Ring. QR method. Modulus of the error $\abs{\bfE - \bfE_\delta}$. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_qr_ring"}](err_qr_GE37_ring.png "fig:"){width="32.00000%"} ### Extension of the computational domain For similar configurations of the accessible part, the results obtained on the disc are more accurate than those on the ring.
This leads us to the idea of extending the computational domain to a full disc $\tilde{\Omega} = \left(\overline{\Omega} \cup \overline{\Omega_\text{int}}\right)^\circ$ before restricting the solution to the initial ring $\Omega$, through the following steps:

1. extend the coefficients $\eps$ and $\sigma$ defined on $\Omega$ to admissible coefficients $\tilde{\eps}$ and $\tilde{\sigma}$ defined on $\tilde{\Omega}$;

2. compute $(\tilde{\bfE}_\delta,\tilde{\bfF}_\delta)$ on $\tilde{\Omega}$ by the quasi-reversibility method;

3. define the final fields $(\bfE_\delta,\bfF_\delta)$ by restriction of $(\tilde{\bfE}_\delta,\tilde{\bfF}_\delta)$ to the ring $\Omega$.

From a theoretical point of view, if the data $(\bff,\bfg)$ belong to the Cauchy data set $C(\tilde{\eps},\tilde{\sigma};\Gamma_0)$ with respect to the extended domain $\tilde{\Omega}$, the sequence $(\tilde{\bfE}_\delta,\tilde{\bfF}_\delta)$ converges to $(\tilde{\bfE},0)$ where $\tilde{\bfE}$ is the solution of the minimization problem (\[eq:minHc\]) in $\tilde{\Omega}$. The restriction $\bfE$ of $\tilde{\bfE}$ to the ring $\Omega$ satisfies the Cauchy problem with data $(\bff,\bfg)$ on $\Omega$. Thanks to the unique continuation principle, $\bfE$ is the only possible solution and coincides with the limit of the sequence obtained by the quasi-reversibility method applied on $\Omega$. The error with respect to $\delta$ obtained by the extension/restriction method for the previous three configurations is shown in . The errors obtained with the values of $\delta$ realizing the minima are reported in and illustrated in . We notice a significant improvement of the approximation on the inaccessible part: instead of on $\Gamma_1$, instead of on $\Gamma_{i}$. The drawback lies in the increased computational cost, since the extended domain leads to a larger number of unknowns. The numerical results of the previous sections attest to the efficiency of the quasi-reversibility method for Maxwell’s equations in the case of compatible data belonging to the trace space.
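The extension/restriction steps can be sketched with simple radial masks (illustrative only, with hypothetical names and a constant fill value; in practice the extension and the QR solve are done on edge finite element meshes):

```python
import numpy as np

R_INT = 0.75   # inner radius of the ring Omega (the disc has radius 1)

def extend_coefficient(kappa_ring, kappa_fill, r):
    """Step 1: extend kappa from the ring to the full disc by filling the
    artificial interior with an admissible value kappa_fill (here constant;
    the names and the radial sampling are illustrative)."""
    return np.where(r >= R_INT, kappa_ring, kappa_fill)

# Step 2 (not sketched): solve the quasi-reversibility problem on the
# extended disc with the extended coefficients, e.g. with edge FEM.

def restrict_to_ring(field, r):
    """Step 3: keep the computed field only on the original ring; values in
    the artificial interior are discarded (marked NaN here)."""
    return np.where(r >= R_INT, field, np.nan)

r = np.array([0.0, 0.5, 0.8, 1.0])                     # sample radii
kappa_ext = extend_coefficient(1.0 + 1.0j, 1.0 + 0.0j, r)
E_ring = restrict_to_ring(np.ones_like(r), r)          # NaN inside r < 0.75
```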
![Ring. QR method. Extension/restriction method. Relative error $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ with respect to the regularization parameter $\delta$. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_delta_filled_ring"}](err_delta_GExt_filled_ring.pdf "fig:"){width="32.00000%"}
![Ring. QR method. Extension/restriction method. Relative error $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ with respect to the regularization parameter $\delta$. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_delta_filled_ring"}](err_delta_G34_filled_ring.pdf "fig:"){width="32.00000%"}
![Ring. QR method. Extension/restriction method. Relative error $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ with respect to the regularization parameter $\delta$. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_delta_filled_ring"}](err_delta_GE37_filled_ring.pdf "fig:"){width="32.00000%"}

  Configuration   $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$   $\frac{\norm{(\bfE - \bfE_\delta) \times \bfn}{0,\Gamma_1}}{\norm{\bfE \times \bfn}{0,\Gamma_1}}$   $\frac{\norm{(\bfE - \bfE_\delta) \times \bfn}{0,\Gamma_\text{i}}}{\norm{\bfE \times \bfn}{0,\Gamma_\text{i}}}$   $\norm{\bfF_\delta}{0,\Omega}$
  --------------- --------------------------------------------------------------------- --------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------- --------------------------------
  GExt            3.7745e-04                                                            1.5665e-04                                                                                          1.5665e-04                                                                                                        3.1125e-04
  G34             3.4345e-02                                                            1.2558e-01                                                                                          8.1477e-03                                                                                                        2.6564e-04
  GE37            1.7728e-03                                                            1.1968e-02                                                                                          2.9520e-04                                                                                                        9.0131e-05

  : Ring. QR method. Extension/restriction method. Errors.[]{data-label="tab:err_qr_filled_ring"}

![Ring. QR method. Extension/restriction method. Modulus of the error $\abs{\bfE - \bfE_\delta}$. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_qr_filled_ring"}](err_qr_GExt_filled_ring.png "fig:"){width="32.00000%"}
![Ring. QR method. Extension/restriction method. Modulus of the error $\abs{\bfE - \bfE_\delta}$. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_qr_filled_ring"}](err_qr_G34_filled_ring.png "fig:"){width="32.00000%"}
![Ring. QR method. Extension/restriction method. Modulus of the error $\abs{\bfE - \bfE_\delta}$. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_qr_filled_ring"}](err_qr_GE37_filled_ring.png "fig:"){width="32.00000%"}

A regularized relaxed mixed formulation {#sec:RRQR}
=======================================

As pointed out in [@BR18], noisy data will, in general, not belong to the trace space $Y(\Gamma_0)$. To overcome this difficulty, a relaxed version of the mixed problem has been proposed. In this section, we follow a similar idea. However, the nature of the trace space, which is a proper subspace of $H^{-1/2}(\Gamma)$, leads to a modification of the basic vector space. To this end, we will assume in the sequel that the boundary data $(\bff,\bfg)$ belong to $L^2(\Gamma_0)^3 \times L^2(\Gamma_0)^3$. Notice that this assumption does not imply that the data belong to the trace space.
A first version {#ss-relax1}
---------------

Consider the vector space $$\label{eq:V} V = \set*{\bfv \in \Hcurl[\Omega]}{\gamma_t(\bfv) \in L^2(\Gamma_0)^3}$$ with the norm $$\label{eq:normV} \norm{\bfv}{V} = \left(\norm{\bfv}{\Hcurl[\Omega]}^2 + \norm{\gamma_t\bfv}{0,\Gamma_0}^2\right)^{1/2}.$$ The space $M$ will now be defined as the following subspace of $V$, $$\label{eq:M} M = \set{\bfv \in V}{\gamma_T(\bfv) = 0 \stext{on} \Gamma_1},$$ equipped with the norm $\norm{\cdot}{V}$ of $V$. Since we cannot assume that there is a lifting of the boundary data $\bff$ in $V$, the boundary condition will be imposed weakly through a penalization term. The relaxed version of the mixed problem then reads $$\label{eq:qr-relaxed} \left\{ \begin{array}{rcl@{\hspace{4\tabcolsep}}l} \multicolumn{4}{l}{\stext[r]{Find} (\bfE_\alpha,\bfF_\alpha) \in V \times M \stext[l]{such that}} \\ \multicolumn{1}{l}{\delta\dotprod{\bfE_\alpha}{\bfphi}{V} + \eta^2\dotprod{\gamma_t \bfE_\alpha}{\gamma_t\bfphi}{0,\Gamma_0}} & & & \multirow{2}{*}{$\forall \bfphi \in V,$} \\ + a(\bfphi,\bfF_\alpha) &=& \eta^2\dotprod{\bff}{\gamma_t\bfphi}{0,\Gamma_0}, & \\ a(\bfE_\alpha,\bfpsi) - \dotprod{\bfF_\alpha}{\bfpsi}{V} &=& \ell(\bfpsi), & \forall \bfpsi \in M, \end{array} \right.$$ where the subscript $\alpha = (\delta,\eta)$ indicates that the solution of (\[eq:qr-relaxed\]) depends on the regularization parameter $\delta > 0$ and the relaxation parameter $\eta > 0$.

\[thm:convergence-relaxed\] Let $(\bff,\bfg)\in L^2(\Gamma_0)^3 \times L^2(\Gamma_0)^3$. For any $\alpha = (\delta,\eta)$ such that $\delta > 0$ and $\eta > 0$, problem (\[eq:qr-relaxed\]) admits a unique solution $(\bfE_\alpha,\bfF_\alpha) \in V \times M$. If, in addition, $(\bff,\bfg)$ belongs to the Cauchy data set $C(\eps, \sigma; \Gamma_0)$, then $$\label{eq:convergence-relaxed} \lim_{\delta \to 0} (\bfE_\alpha,\bfF_\alpha) = (\bfE, 0)$$ in $V \times M$ for any fixed $\eta > 0$.
Here, $\bfE$ is the unique solution in the set $$K_\bff = \set*{\bfv \in V}{\gamma_t\bfv = \bff \stext{on} \Gamma_0 \stext{and} a(\bfv,\bfpsi) = \ell(\bfpsi)\ \forall \bfpsi \in M}$$ of the minimization problem $$\label{eq:minV} \inf_{\bfv \in K_\bff} \norm{\bfv}{V}.$$ The following estimates hold true: $$\begin{aligned} \label{eq:estim-relaxed-1} \norm{\bfF_\alpha}{V} &\leq \sqrt{2\delta}\norm{\bfE}{V}\ \forall \eta > 0, \\ \label{eq:estim-relaxed-2} \norm{\gamma_t\bfE_\alpha - \bff}{0,\Gamma_0} &\leq \frac{\sqrt{2\delta}}{\eta} \norm{\bfE}{V}. \end{aligned}$$

The proof is based on ideas from [@BR18], where the convergence result has been proven for the classical version. In addition, we prove here that a careful analysis of the arguments allows us to obtain the estimates (\[eq:estim-relaxed-1\]) and (\[eq:estim-relaxed-2\]). This gives further insight into the behavior of the sequence $(\bfE_\alpha,\bfF_\alpha)$. Indeed, (\[eq:estim-relaxed-1\]) yields a convergence rate for the convergence of $\bfF_\alpha$ to $0$, independently of the choice of $\eta$. Estimate (\[eq:estim-relaxed-2\]) suggests that we should choose $\eta$ such that $\frac{\sqrt{2\delta}}{\eta} \to 0$. Large values of $\eta$ should accelerate the convergence of the trace of $\bfE_\alpha$ on $\Gamma_0$.
Problem (\[eq:qr-relaxed\]) can be written in the following closed form, $$\left\{ \begin{array}{l} \stext[r]{Find} (\bfE_\alpha,\bfF_\alpha) \in V \times M \stext[l]{such that} \\ A_\alpha\left((\bfE_\alpha,\bfF_\alpha), (\bfphi,\bfpsi)\right) = L_\alpha\left((\bfphi,\bfpsi)\right)\ \forall (\bfphi,\bfpsi) \in V \times M, \end{array} \right.$$ where the sesqui-linear form $A_\alpha(\cdot,\cdot)$ and the linear form $L_\alpha(\cdot)$ are defined on $V \times M$ by $$A_\alpha\left((\bfu,\bfv), (\bfphi,\bfpsi)\right) = \delta\dotprod{\bfu}{\bfphi}{V} + \eta^2\dotprod{\gamma_t\bfu}{\gamma_t\bfphi}{0,\Gamma_0} + a(\bfphi,\bfv) - a(\bfu,\bfpsi) + \dotprod{\bfv}{\bfpsi}{M}$$ and $$L_\alpha\left((\bfphi,\bfpsi)\right) = \eta^2\dotprod{\bff}{\gamma_t\bfphi}{0,\Gamma_0} - \ell(\bfpsi).$$ The continuity of $A_\alpha(\cdot,\cdot)$ and $L_\alpha(\cdot)$ is obvious, and coercivity follows since $$A_\alpha\left((\bfu,\bfv),(\bfu,\bfv)\right) = \delta\norm{\bfu}{V}^2 + \eta^2\norm{\gamma_t\bfu}{0,\Gamma_0}^2 + \norm{\bfv}{V}^2 \geq \min(\delta,1) \norm{(\bfu,\bfv)}{V \times V}^2,$$ taking into account that the scalar product in $M$ is the one of $V$. We can thus apply the Lax--Milgram lemma to prove existence and uniqueness of a solution of problem (\[eq:qr-relaxed\]). Now consider the minimization problem (\[eq:minV\]) on $K_\bff$. According to the assumption $(\bff,\bfg) \in C(\eps,\sigma; \Gamma_0)$, the set $K_\bff$ is not empty. It is obviously convex. The strictly convex functional $\bfv \mapsto \norm{\bfv}{V}$ thus admits a unique minimum $\bfE$ satisfying $\gamma_t\bfE = \bff$ and $$a(\bfE,\bfpsi) = \ell(\bfpsi)\ \forall \bfpsi \in M.$$ Together with the second equation of problem (\[eq:qr-relaxed\]), we get $$\label{eq:tmp1} a(\bfE_\alpha - \bfE,\bfpsi) - \dotprod{\bfF_\alpha}{\bfpsi}{V} = 0\ \forall \bfpsi \in M.$$ Now, take $\bfphi = \bfE_\alpha - \bfE$ in problem (\[eq:qr-relaxed\]) and $\bfpsi = \bfF_\alpha$ in (\[eq:tmp1\]), and subtract the latter from the former.
Taking into account that $\gamma_t\bfE = \bff$, this yields the fundamental relation $$\label{eq:tmp2} \delta\dotprod{\bfE_\alpha}{\bfE_\alpha - \bfE}{V} + \eta^2\norm{\gamma_t\bfE_\alpha - \bff}{0,\Gamma_0}^2 + \norm{\bfF_\alpha}{V}^2 = 0.$$ From (\[eq:tmp2\]), we see that $\real{\dotprod{\bfE_\alpha}{\bfE_\alpha - \bfE}{V}} \leq 0$ and hence $\norm{\bfE_\alpha}{V} \leq \norm{\bfE}{V}$, i.e. the sequence $(\bfE_\alpha)_\alpha$ is bounded with respect to the two parameters $\delta$ and $\eta$. We then deduce from (\[eq:tmp2\]) that $$\norm{\bfF_\alpha}{V} \leq \sqrt{2\delta}\norm{\bfE}{V},$$ which is estimate (\[eq:estim-relaxed-1\]), and thus the convergence of $(\bfF_\alpha)_\alpha$ to $0$ whenever the regularization parameter $\delta$ tends to $0$. The boundary term can be estimated in a similar way, $$\norm{\gamma_t\bfE_\alpha - \bff}{0,\Gamma_0} \leq \frac{\sqrt{2\delta}}{\eta}\norm{\bfE}{V},$$ which yields estimate (\[eq:estim-relaxed-2\]). In particular, the sequence $(\gamma_t\bfE_\alpha)_\alpha$ tends to $\bff$ if $\lim\limits_{\delta \to 0} \left(\frac{\sqrt{2\delta}}{\eta}\right) = 0$, which is, for example, the case for any fixed $\eta > 0$. It remains to prove the convergence of the sequence $(\bfE_\alpha)_\alpha$. Recall that the sequence is bounded in $V$, which is a Hilbert space. Therefore, there is a subsequence of $(\bfE_\alpha)_\alpha$ that converges weakly in $V$ to a limit field $\tilde{\bfE}$. Passing to the limit in the second equation of (\[eq:qr-relaxed\]) yields $$a(\tilde{\bfE},\bfpsi) = \ell(\bfpsi)\ \forall \bfpsi \in M$$ as $\delta \to 0$. Moreover, we get on the one hand $$\dotprod{\gamma_t\bfE_\alpha}{\xi}{0,\Gamma_0} \to \dotprod{\gamma_t\tilde{\bfE}}{\xi}{0,\Gamma_0}\ \forall \xi \in L^2(\Gamma_0)^3,\ \xi \cdot \bfn = 0 \stext{on} \Gamma_0$$ from the weak convergence of $(\bfE_\alpha)_\alpha$ in $V$ and the density of $Y(\Gamma_0) \cap L^2(\Gamma_0)^3$ in the subspace of tangential fields of $L^2(\Gamma_0)^3$, and on the other hand $$\gamma_t\bfE_\alpha \to \bff$$ strongly in $L^2(\Gamma_0)^3$ according to (\[eq:estim-relaxed-2\]) and the assumptions on the parameter set $\alpha$.
Consequently, the limit field $\tilde{\bfE}$ satisfies $\gamma_t\tilde{\bfE} = \bff$ and thus belongs to the set $K_\bff$. The uniqueness of the solution to the minimization problem thus yields $\tilde{\bfE} = \bfE$. Then, $$\begin{aligned} \norm{\bfE_\alpha-\bfE}{V}^2 &= \real{\dotprod{\bfE_\alpha}{\bfE_\alpha - \bfE}{V} - \dotprod{\bfE}{\bfE_\alpha - \bfE}{V}} \\ &= -\frac{\eta^2}{\delta}\norm{\gamma_t(\bfE_\alpha-\bfE)}{0,\Gamma_0}^2 - \frac{1}{\delta}\norm{\bfF_\alpha}{V}^2 - \real{\dotprod{\bfE}{\bfE_\alpha-\bfE}{V}} \\ &\leq -\real{\dotprod{\bfE}{\bfE_\alpha}{V}} + \norm{\bfE}{V}^2. \end{aligned}$$ Since $(\bfE_\alpha)_\alpha$ converges weakly to $\bfE$, the above inequality implies the strong convergence, at least for a subsequence. It follows from a standard argument that the whole sequence converges strongly to $\bfE$, which completes the proof.

A second version {#ss-relax2}
----------------

The choice of the space $V$ in the preceding section has been motivated by the penalization of the boundary condition $\bfE \times \bfn = \bff$, which can no longer be imposed strongly if the data do not belong to the trace space $Y(\Gamma_0)$. One may ask, however, whether it is judicious to require the trace of the fields to belong to $L^2$ on the accessible part $\Gamma_0$ only. It thus seems natural to investigate another choice for the basic vector space: let $$\label{eq:W} W \coloneqq \set*{\bfv \in \Hcurl[\Omega]}{\gamma_t(\bfv) \in L^2(\Gamma)^3}$$ with the norm $$\label{eq:normW} \norm{\bfv}{W} = \left(\norm{\bfv}{\Hcurl[\Omega]}^2 + \norm{\gamma_t\bfv}{0,\Gamma}^2\right)^{1/2}.$$ Notice that the space $M$ defined in (\[eq:M\]) remains unchanged, since the boundary condition $\bfv \times \bfn = 0$ on $\Gamma_1$ implies, together with the condition $\bfv \in V$, that $M \subset W$ and $\norm{\bfv}{V} = \norm{\bfv}{W}$ for any field in $M$. In order to obtain the associated relaxed formulation, we have several choices.
We could obviously just replace the space $V$ by $W$ in the formulation of problem (\[eq:qr-relaxed\]). The existence and uniqueness of a solution to the mixed problem on $W$ can be proved in the same way as in the proof of \[thm:convergence-relaxed\]. A slight modification occurs in the proof of the convergence of the sequence $(\bfE_\alpha,\bfF_\alpha)$. Indeed, this requires that the Cauchy problem admits a solution in the modified vector space $W$, which implies more regularity of the limit field $\bfE$ on $\Gamma_1$. The rest of the proof remains unchanged. From a numerical point of view, it seems appealing, however, to introduce a new parameter $\nu > 0$ that acts as a regularization parameter on the inaccessible part $\Gamma_1$ of the boundary. An appropriate *“tuning”* of the parameters should then make it possible to improve the numerical results. In view of the latter remark, we define the relaxed mixed problem with regularization on $\Gamma_1$ as follows: $$\label{eq:qr-relaxed-reg} \left\{ \begin{array}{rcl@{\hspace{4\tabcolsep}}l} \multicolumn{4}{l}{\stext[r]{Find} (\bfE_\beta,\bfF_\beta) \in W \times M \stext[l]{such that}} \\ \multicolumn{1}{l}{\delta\dotprod{\bfE_\beta}{\bfphi}{V} + \nu\dotprod{\gamma_t\bfE_\beta}{\gamma_t\bfphi}{0,\Gamma_1}} & & & \multirow{2}{*}{$\forall \bfphi \in W,$} \\ \qquad + \eta^2\dotprod{\gamma_t \bfE_\beta}{\gamma_t\bfphi}{0,\Gamma_0} + a(\bfphi,\bfF_\beta) &=& \eta^2\dotprod{\bff}{\gamma_t\bfphi}{0,\Gamma_0}, & \\ a(\bfE_\beta,\bfpsi) - \dotprod{\bfF_\beta}{\bfpsi}{W} &=& \ell(\bfpsi), &\forall \bfpsi \in M, \end{array} \right.$$ where the subscript $\beta = (\delta,\nu,\eta)$ indicates that the solution of (\[eq:qr-relaxed-reg\]) depends on the regularization parameters $\delta > 0$ and $\nu > 0$ as well as on the relaxation parameter $\eta > 0$. Notice that the first two terms in the first equation are well defined according to the definition of the space $W$, but that the weights of the different parts of the norm can be chosen independently.
\[thm:convergence-relaxed-reg\] Let $(\bff,\bfg) \in L^2(\Gamma_0)^3 \times L^2(\Gamma_0)^3$. For any $\beta = (\delta,\nu,\eta)$ such that $\delta > 0$, $\nu > 0$ and $\eta > 0$, problem (\[eq:qr-relaxed-reg\]) admits a unique solution $(\bfE_\beta,\bfF_\beta) \in W \times M$. If, in addition, $(\bff,\bfg)$ belongs to the Cauchy data set $C(\eps, \sigma; \Gamma_0)$ and the Cauchy problem admits a solution in the space $W$, then for any fixed $\eta > 0$, $$\label{eq:convergence-relaxed-reg} \lim_{(\delta,\nu) \to 0} (\bfE_\beta,\bfF_\beta) = (\bfE, 0)$$ in $W \times M$ whenever the parameters $\delta$ and $\nu$ satisfy the relation $$\label{condition:nu} \lim_{(\delta,\nu) \to 0} \frac{\delta}{\nu} = 1.$$ Here, $\bfE$ is the unique solution in the set $$K_\bff = \set*{\bfv \in W}{\gamma_t\bfv = \bff \stext{on} \Gamma_0 \stext{and} a(\bfv,\bfpsi) = \ell(\bfpsi)\ \forall \bfpsi \in M}$$ of the minimization problem $$\label{eq:minW} \inf_{\bfv \in K_\bff} \norm{\bfv}{W}.$$

The proof is similar to that of \[thm:convergence-relaxed\], and we only point out the influence of the parameter $\nu$ on the different steps of the proof. As before, problem (\[eq:qr-relaxed-reg\]) can be written in variational form involving a continuous and coercive sesqui-linear form on $W \times M$. The coercivity constant is now given by $\min(\delta,\nu)$. The assumptions guarantee that the set $K_\bff$ is not empty and that the minimization problem (\[eq:minW\]) admits a unique solution $\bfE$.
The orthogonality relation now reads as follows, $$\label{eq:ortho} \delta\dotprod{\bfE_\beta}{\bfE_\beta - \bfE}{V} + \nu\dotprod{\gamma_t\bfE_\beta}{\gamma_t(\bfE_\beta - \bfE)}{0,\Gamma_1} + \eta^2\norm{\gamma_t\bfE_\beta - \bff}{0,\Gamma_0}^2 + \norm{\bfF_\beta}{W}^2 = 0.$$ Developing the scalar products and applying the Cauchy--Schwarz inequality to the real parts yields the following estimate for $(\bfE_\beta)$: $$\delta\norm{\bfE_\beta}{V}^2 + \nu\norm{\gamma_t\bfE_\beta}{0,\Gamma_1}^2 \leq \delta\norm{\bfE_\beta}{V} \norm{\bfE}{V} + \nu\norm{\gamma_t\bfE_\beta}{0,\Gamma_1} \norm{\gamma_t\bfE}{0,\Gamma_1}.$$ In order to get estimates for the $W$-norm, we notice that the left-hand side is bounded from below by $\min(\delta,\nu) \norm{\bfE_\beta}{W}^2$, whereas the right-hand side is bounded from above by $\max(\delta,\nu) \norm{\bfE_\beta}{W} \norm{\bfE}{W}$. Consequently, $$\label{eq:estim-Ebeta} \norm{\bfE_\beta}{W} \leq \frac{\max(\delta,\nu)}{\min(\delta,\nu)} \norm{\bfE}{W}.$$ Under the given assumptions, the sequence $(\bfE_\beta)_\beta$ is thus bounded in $W$, and we obtain as before that $\bfF_\beta$ converges to $0$ in $M$ when $(\delta,\nu) \to 0$, since $$\begin{aligned} \norm{\bfF_\beta}{W}^2 &\leq \delta \norm{\bfE_\beta}{V} \norm{\bfE_\beta - \bfE}{V} + \nu\norm{\gamma_t\bfE_\beta}{0,\Gamma_1} \norm{\gamma_t(\bfE_\beta-\bfE)}{0,\Gamma_1} \\ &\leq \frac{1}{2}\max(\delta,\nu)\norm{\bfE_\beta}{W} \norm{\bfE_\beta - \bfE}{W} \\ &\leq \max(\delta,\nu) \norm{\bfE}{W}^2. \end{aligned}$$ In the same way, we get $$\norm{\gamma_t\bfE_\beta - \bff}{0,\Gamma_0} \leq \frac{\sqrt{\max(\delta,\nu)}}{\eta} \norm{\bfE}{W}.$$ As before, we prove that the sequence $(\bfE_\beta)_\beta$ converges weakly to the solution $\bfE$ of the minimization problem.
In order to show strong convergence, we notice that, according to (\[eq:ortho\]), $$\delta\real{\dotprod{\bfE_\beta}{\bfE_\beta - \bfE}{V}} + \nu\real{\dotprod{\gamma_t\bfE_\beta}{\gamma_t(\bfE_\beta - \bfE)}{0,\Gamma_1}} \leq 0.$$ Hence, $$\begin{aligned} \norm{\bfE_\beta - \bfE}{W}^2 &= \real{\dotprod{\bfE_\beta}{\bfE_\beta - \bfE}{W}} - \real{\dotprod{\bfE}{\bfE_\beta - \bfE}{W}} \\ &\leq \left(1 - \frac{\delta}{\nu}\right) \real{\dotprod{\bfE_\beta}{\bfE_\beta - \bfE}{V}} - \real{\dotprod{\bfE}{\bfE_\beta - \bfE}{W}} \\ &\leq \abs*{1 - \frac{\delta}{\nu}} \norm{\bfE_\beta}{V}\norm{\bfE_\beta - \bfE}{V} - \real{\dotprod{\bfE}{\bfE_\beta - \bfE}{W}}. \end{aligned}$$ Now, the first term in the last inequality tends to $0$ according to the assumptions on $\delta$ and $\nu$, since $(\bfE_\beta)_\beta$ is bounded in $W$ and thus in $V$. The second term tends to $0$ since $\bfE_\beta$ converges weakly to $\bfE$ in $W$. This completes the proof.

Link with Tikhonov regularization
---------------------------------

As mentioned in [@BR18], there is a link between the quasi-reversibility method formulated as a mixed problem and standard Tikhonov regularization. We shall make this link precise for the classical QR method. To this end, denote by $A\colon \Hcurl[\Omega] \to M$ the unique continuous linear operator defined by $$a(\bfu,\bfpsi) = \dotprod{A\bfu}{\bfpsi}{\Hcurl[\Omega]}\ \forall \bfpsi \in M$$ according to the Riesz representation theorem. In the same way, denote by $\bfG \in M$ the unique Riesz representative of the continuous linear form $\ell(\cdot)$ such that $$\dotprod{\bfG}{\bfpsi}{\Hcurl[\Omega]} = \ell(\bfpsi)\ \forall \bfpsi \in M.$$ Then, the Cauchy problem consists in finding $\bfE \in \Hcurl[\Omega]$ such that $\gamma_t\bfE = \bff$ and $A\bfE = \bfG$.
Now, for a given parameter $\delta > 0$, introduce the (real-valued) cost function $$\label{def:Jdelta} J_\delta(\bfv) = \frac{1}{2} \norm{A\bfv - \bfG}{\Hcurl[\Omega]}^2 + \frac{\delta}{2} \norm{\bfv}{\Hcurl[\Omega]}^2$$ defined on $\Hcurl[\Omega]$. The directional derivative of $J_\delta$ in the direction $\bfd \in V_0$ is given by $$\label{eq:Jprime} J_\delta^\prime(\bfv)\bfd = \real{\dotprod{A\bfv - \bfG}{A\bfd}{\Hcurl[\Omega]} + \delta\dotprod{\bfv}{\bfd}{\Hcurl[\Omega]}}.$$ Now, let $(\bfE_\delta,\bfF_\delta) \in V_f \times M$ be the solution of the classical QR problem. Then, we get from the second equation that $\bfF_\delta = A\bfE_\delta - \bfG$. Substituting $\bfF_\delta$ by this relation in the first equation and taking the real part yields $J_\delta^\prime(\bfE_\delta) = 0$, according to (\[eq:Jprime\]) and the definition of $A$ and $\bfG$. In conclusion, the field $\bfE_\delta$ of the unique solution of the QR method is a critical point of the functional $J_\delta$ defined by (\[def:Jdelta\]). In the same way, one can show that the field $\bfE_\alpha$ of the solution of the relaxed QR method (\[eq:qr-relaxed\]) is a critical point of the functional $$J_\alpha(\bfv) = \frac{1}{2}\norm{A\bfv - \bfG}{V}^2 + \frac{\eta^2}{2}\norm{\gamma_t\bfv - \bff}{0,\Gamma_0}^2 + \frac{\delta}{2}\norm{\bfv}{V}^2$$ defined on the vector space $V$ for a parameter set $\alpha = (\delta,\eta)$. Finally, if we let $$J_\beta(\bfv) = J_\alpha(\bfv) + \frac{\nu}{2}\norm{\gamma_t\bfv}{0,\Gamma_1}^2$$ for any $\bfv\in W$ and $\beta = (\delta,\eta,\nu)$, the field $\bfE_\beta$ of the solution of the regularized relaxed QR method (\[eq:qr-relaxed-reg\]) is a critical point of $J_\beta$.

Numerical results with noisy data
---------------------------------

We use the same physical parameters as before. In the sequel, we describe the generation of synthetic noisy data. To simplify the notation, we consider here that the input data $\bff$ and $\bfg$ are vectors of degrees of freedom. They are perturbed as follows.
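Concretely, a perturbation with a prescribed relative noise level $p$ can be generated as in the following numpy sketch (the vector, its size, and the seed are only illustrative stand-ins for the degrees of freedom of $\bff$):

```python
import numpy as np

def add_noise(data: np.ndarray, p: float, rng: np.random.Generator) -> np.ndarray:
    """Perturb `data` by a scaled standard normal vector so that the
    relative perturbation norm ||data_p - data|| / ||data|| is exactly p."""
    b = rng.standard_normal(data.shape)
    return data + p * (np.linalg.norm(data) / np.linalg.norm(b)) * b

rng = np.random.default_rng(7)
f = rng.standard_normal(300)     # hypothetical dof vector
f_p = add_noise(f, 0.05, rng)    # p = 5% noise

# By construction the relative perturbation equals p (up to rounding):
rel_level = np.linalg.norm(f_p - f) / np.linalg.norm(f)
```

The scaling by $\norm{\bff}{}/\norm{\bfb_\bff}{}$ is what makes the noise level exact rather than merely expected, which is convenient when comparing runs at a fixed $p$.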
First, two vectors $\bfb_\bff$ and $\bfb_\bfg$ are generated, following a standard normal distribution. Then, the perturbed data $(\bff^p,\bfg^p)$ are obtained from $$\bff^p = \bff + p\frac{\norm{\bff}{}}{\norm{\bfb_\bff}{}}\bfb_\bff, \quad \bfg^p = \bfg + p\frac{\norm{\bfg}{}}{\norm{\bfb_\bfg}{}}\bfb_\bfg,$$ where $p > 0$ is the applied level of noise. Numerical results are presented for the second version of the regularized relaxed quasi-reversibility (RR-QR) method (\[eq:qr-relaxed-reg\]) with $\nu = \delta$ and $p = \pc{5}$ noise. The parameter $\eta$ is fixed automatically through the following procedure (see [@BR18]). First, we compute the Riesz representative $\bfG^p$ of $\bfg^p$ in the Hilbert space $M$ by solving $$\dotprod{\bfG^p}{\bfpsi}{W} = \dotprod{\bfg^p}{\bfpsi}{0,\Gamma_0}, \quad \forall \bfpsi \in M.$$ Then, we define $$\eta = \frac{\norm{\bfG^p}{{W}}}{\norm{\bff^p}{L^2(\Gamma_0)}}.$$

We first let $\Omega$ be the unit disc. In \[fig:err\_delta\_relax\] we show the evolution of the error in $L^2(\Omega)$-norm with respect to $\delta$ for the two configurations of the boundary. For the specific case of $\delta$ minimizing the error, we show this error over the whole domain in \[fig:err\_qr\_relax\] and list the different errors in \[tab:err\_qr\_relax\].

![Unit disc. noisy data. RR-QR method. Relative error $\frac{\norm{\bfE - \bfE_\beta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ with respect to the regularization parameter $\delta$ at fixed $\eta$ and $\nu=\delta$. Left: configuration G34. Right: configuration GE37.[]{data-label="fig:err_delta_relax"}](err_delta_relax_G34.pdf "fig:"){width="33.00000%"}
![Unit disc. noisy data. RR-QR method. Relative error $\frac{\norm{\bfE - \bfE_\beta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ with respect to the regularization parameter $\delta$ at fixed $\eta$ and $\nu=\delta$. Left: configuration G34. Right: configuration GE37.[]{data-label="fig:err_delta_relax"}](err_delta_relax_GE37.pdf "fig:"){width="33.00000%"}

![Unit disc. noisy data. RR-QR method. Modulus of the error $\abs{\bfE - \bfE_\beta}$. Left: configuration G34. Right: configuration GE37.[]{data-label="fig:err_qr_relax"}](err_qr_relax_G34.png "fig:"){width="45.00000%"}
![Unit disc. noisy data. RR-QR method. Modulus of the error $\abs{\bfE - \bfE_\beta}$. Left: configuration G34. Right: configuration GE37.[]{data-label="fig:err_qr_relax"}](err_qr_relax_GE37.png "fig:"){width="45.00000%"}

  Configuration   $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$   $\frac{\norm{(\bfE - \bfE_\delta) \times \bfn}{0,\Gamma_0}}{\norm{\bfE \times \bfn}{0,\Gamma_0}}$   $\frac{\norm{(\bfE - \bfE_\delta) \times \bfn}{0,\Gamma_1}}{\norm{\bfE \times \bfn}{0,\Gamma_1}}$   $\norm{\bfF_\delta}{0,\Omega}$
  --------------- --------------------------------------------------------------------- --------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------- --------------------------------
  G34             3.4698e-02                                                            2.9145e-02                                                                                          3.5686e-01                                                                                          7.0667e-03
  GE37            1.8550e-02                                                            3.2848e-02                                                                                          1.7154e-01                                                                                          1.1857e-02

  : Unit disc. noisy data. RR-QR method. Errors in the approximation of $\bfE$ for $\nu = \delta$ at optimal $\delta$ and automatically fixed $\eta$.[]{data-label="tab:err_qr_relax"}

With the same settings, we now test the regularized relaxed quasi-reversibility (RR-QR) method in the ring with extension/restriction. The relative error in $\bfE$ with respect to $\delta$ is shown in \[fig:err\_delta\_relax\_filled\_ring\]. For the optimal values of $\delta$, the errors are reported in \[tab:err\_qr\_relax\_filled\_ring\] and illustrated in \[fig:err\_qr\_relax\_filled\_ring\]. One notices the good performance of the method for the three configurations, with errors below $8\%$ in all norms except for configuration G34 on $\Gamma_1$, where the error amounts to about $16\%$. We also notice that the method performs better in the ring configuration than in the unit disc.

![Ring. noisy data. RR-QR method with extension/restriction. Relative error for $\bfE$ in $L^2(\Omega)$-norm with respect to $\delta$ for $\nu = \delta$.
$\eta$ automatically fixed from noise level.[]{data-label="fig:err_delta_relax_filled_ring"}](err_delta_relax_GExt_filled_ring.pdf "fig:"){width="32.00000%"}
![Ring. noisy data. RR-QR method with extension/restriction. Relative error for $\bfE$ in $L^2(\Omega)$-norm with respect to $\delta$ for $\nu = \delta$. $\eta$ automatically fixed from noise level.[]{data-label="fig:err_delta_relax_filled_ring"}](err_delta_relax_G34_filled_ring.pdf "fig:"){width="32.00000%"}
![Ring. noisy data. RR-QR method with extension/restriction. Relative error for $\bfE$ in $L^2(\Omega)$-norm with respect to $\delta$ for $\nu = \delta$. $\eta$ automatically fixed from noise level.[]{data-label="fig:err_delta_relax_filled_ring"}](err_delta_relax_GE37_filled_ring.pdf "fig:"){width="32.00000%"}

![Ring. noisy data. RR-QR method with extension/restriction. Modulus of the error in the approximation of $\bfE$ for optimal $\delta$ and $\nu = \delta$. $\eta$ automatically fixed from noise level. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_qr_relax_filled_ring"}](err_qr_relax_GExt_filled_ring.png "fig:"){width="32.00000%"}
![Ring. noisy data. RR-QR method with extension/restriction. Modulus of the error in the approximation of $\bfE$ for optimal $\delta$ and $\nu = \delta$. $\eta$ automatically fixed from noise level. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_qr_relax_filled_ring"}](err_qr_relax_G34_filled_ring.png "fig:"){width="32.00000%"}
![Ring. noisy data. RR-QR method with extension/restriction. Modulus of the error in the approximation of $\bfE$ for optimal $\delta$ and $\nu = \delta$. $\eta$ automatically fixed from noise level. Left: configuration GExt. Middle: configuration G34. Right: configuration GE37.[]{data-label="fig:err_qr_relax_filled_ring"}](err_qr_relax_GE37_filled_ring.png "fig:"){width="32.00000%"}

  Configuration   $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$   $\frac{\norm{(\bfE - \bfE_\delta) \times \bfn}{0,\Gamma_0}}{\norm{\bfE \times \bfn}{0,\Gamma_0}}$   $\frac{\norm{(\bfE - \bfE_\delta) \times \bfn}{0,\Gamma_1}}{\norm{\bfE \times \bfn}{0,\Gamma_1}}$   $\frac{\norm{(\bfE - \bfE_\delta) \times \bfn}{0,\Gamma_\text{i}}}{\norm{\bfE \times \bfn}{0,\Gamma_\text{i}}}$   $\norm{\bfF_\delta}{0,\Omega}$
  --------------- --------------------------------------------------------------------- --------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------- --------------------------------
  GExt            9.2775e-03                                                            3.3871e-02                                                                                          4.2425e-03                                                                                          4.2425e-03                                                                                                        1.0602e-02
  G34             7.9236e-02                                                            3.0600e-02                                                                                          1.5695e-01                                                                                          4.2697e-02                                                                                                        8.7901e-03
  GE37            2.6712e-02                                                            3.3503e-02                                                                                          7.7043e-02                                                                                          1.6996e-02                                                                                                        1.3596e-02

  : Ring. noisy data. RR-QR method with extension/restriction. Errors in the approximation of $\bfE$ for optimal $\delta$ and $\nu=\delta$. $\eta$ automatically fixed from noise level.[]{data-label="tab:err_qr_relax_filled_ring"}

Numerical results in three dimensions
-------------------------------------

We finally present tests in 3D. The physical settings remain the same as in the previous section, and we generate noisy data in the same way. Here, however, the domain $\Omega$ is a subset of $\R^3$. Two domains are tested. The first one is the unit ball of $\R^3$, discretized with a mesh size $h = \scnum{1.43e-1}$, ending up with tetrahedra and edges.
The accessible part, $\Gamma_0$, is defined as the union of 128 electrodes covering part of the whole boundary. This boundary configuration is illustrated in \[fig:GammaE128\].

![Accessible part $\Gamma_0$ in 3D configurations. Set of 128 electrodes.[]{data-label="fig:GammaE128"}](GE128.pdf){width="30.00000%"}

The second domain is a ring of width 0.3, defined as the unit ball of $\R^3$ minus the ball centered at the origin and of radius 0.7. The mesh size is close to that of the ball: $h = \scnum{1.41e-1}$, ending up with tetrahedra and edges. The boundary configuration is the same as for the unit disc: the interior boundary of the ring is part of $\Gamma_1$, and $\Gamma_0$ again represents part of the whole boundary. The quasi-reversibility system will be solved in this 3D ring with the extension/restriction method introduced above. We show in \[fig:err\_delta\_3d\] the evolution of the relative error in $L^2$-norm in the whole domain with respect to $\delta$. As in the two-dimensional cases, the error decreases with $\delta$, reaches a minimum, and then increases. Table \[tab:err\_3d\] lists the errors obtained with the value of $\delta$ realizing this minimum. The error $\norm{\bfE_\delta - \bfE}{0,\Omega}$ is shown in this case in \[fig:err\_qr\_3d\].

![Relative error $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ with respect to the regularization parameter $\delta$. Left: Unit ball of $\R^3$. Right: 3D ring of internal radius 0.7.[]{data-label="fig:err_delta_3d"}](err_delta_ball3d.pdf "fig:"){width="35.00000%"}
![Relative error $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$ with respect to the regularization parameter $\delta$. Left: Unit ball of $\R^3$.
Right: 3D ring of internal radius 0.7.[]{data-label="fig:err_delta_3d"}](err_delta_ring3d.pdf "fig:"){width="35.00000%"}

  Domain   $\frac{\norm{\bfE - \bfE_\delta}{0,\Omega}}{\norm{\bfE}{0,\Omega}}$   $\frac{\norm{(\bfE - \bfE_\delta) \times \bfn}{0,\Gamma_0}}{\norm{\bfE \times \bfn}{0,\Gamma_0}}$   $\frac{\norm{(\bfE - \bfE_\delta) \times \bfn}{0,\Gamma_1}}{\norm{\bfE \times \bfn}{0,\Gamma_1}}$   $\frac{\norm{(\bfE - \bfE_\delta) \times \bfn}{0,\Gamma_\text{i}}}{\norm{\bfE \times \bfn}{0,\Gamma_\text{i}}}$   $\norm{\bfF_\delta}{0,\Omega}$
  -------- --------------------------------------------------------------------- --------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------- --------------------------------
  Ball     1.0834e-01                                                            2.3864e-02                                                                                          4.0949e-01                                                                                          $\cdot$                                                                                                           2.9336e-03
  Ring     1.2414e-01                                                            1.1567e-02                                                                                          2.7969e-01                                                                                          5.6528e-02                                                                                                        1.6497e-03

  : Errors in the unit ball of $\R^3$ and in a three-dimensional ring.[]{data-label="tab:err_3d"}

![Error in the three-dimensional ring. Left: view of the external boundary. Right: cut showing the internal boundary.[]{data-label="fig:err_qr_3d"}](err_qr_GE128.pdf "fig:"){width="48.00000%"}
![Error in the three-dimensional ring. Left: view of the external boundary. Right: cut showing the internal boundary.[]{data-label="fig:err_qr_3d"}](err_qr_GE128_int.pdf "fig:"){width="48.00000%"}

Choice of the regularization parameter. General case.
-----------------------------------------------------

From the numerical results above, we can attest that the QR method, in its classical or regularized relaxed version, is a reliable tool for data completion and data transmission problems in two- and three-dimensional configurations.
The choice of the regularization parameter $\delta$ is crucial for the quality of the approximation. In the previous tests, the parameter has been fixed to an optimal value by evaluating the error in the $L^2(\Omega)$-norm for each value in a given range. In practice, when the exact solution is not known, this procedure is not possible, but classical strategies such as Morozov’s discrepancy principle or the L-curve method can be applied to retrieve a value for $\delta$. Here, the L-curve method consists in drawing the graph of $\norm{\bfE_\delta}{0,\Omega}$ with respect to $\norm{\bfF_\delta}{0, \Omega}$ for different values of $\delta$. We then choose $\delta$ to minimize both norms as much as possible. In , we show the L-curve we obtained in the ring with noise for the configuration G37 and the three-dimensional ring. Interestingly, the minimum of the error on $\Gamma_i$, highlighted by a circle, is located at the corner of the L-curve, which can then be automatically computed with, for example, the triangle method [@CastellanosGomezGuerra02]. ![L-curve corresponding to the RR-QR method with extension/restriction in the ring. Noisy data. Left: configuration G37 (2D). Right: 3D ring. []{data-label="fig:l_curve"}](l_curve.pdf "fig:"){width="40.00000%"} ![L-curve corresponding to the RR-QR method with extension/restriction in the ring. Noisy data. Left: configuration G37 (2D). Right: 3D ring. []{data-label="fig:l_curve"}](l_curve_3d.pdf "fig:"){width="40.00000%"} The choice of the second regularization parameter $\nu$ should be made in accordance with , i.e. $\lim\limits_{(\delta,\nu) \to 0} \frac{\delta}{\nu} = 1$. Roughly speaking, this amounts to taking $\nu = \delta$. However, this condition has only been proven to be sufficient to ensure the convergence of the RR-QR method, and other choices of $\nu$ could possibly improve the results. 
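To make the corner-selection step concrete, here is a minimal Python sketch of a simplified triangle-type corner detector for a discrete L-curve. It is not the implementation used in the paper: it works in log-log coordinates and returns the interior point where the angle formed with the two curve endpoints is sharpest, whereas the full triangle method of [@CastellanosGomezGuerra02] scans sub-triangles. The synthetic curve below is purely illustrative.

```python
import numpy as np

def l_curve_corner(residual_norms, solution_norms):
    """Return the index of the corner of a discrete L-curve.

    Simplified variant of the triangle method: in log-log coordinates,
    pick the interior point where the angle formed with the two curve
    endpoints is smallest (sharpest corner).
    """
    x = np.log(np.asarray(residual_norms, dtype=float))
    y = np.log(np.asarray(solution_norms, dtype=float))
    p0 = np.array([x[0], y[0]])
    p1 = np.array([x[-1], y[-1]])
    best_idx, best_angle = None, np.inf
    for i in range(1, len(x) - 1):
        p = np.array([x[i], y[i]])
        u, v = p0 - p, p1 - p
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angle = np.arccos(np.clip(c, -1.0, 1.0))
        if angle < best_angle:
            best_angle, best_idx = angle, i
    return best_idx

# Synthetic L-curve: the residual norm grows with delta while the
# solution norm blows up as delta -> 0 (purely illustrative shapes).
deltas = np.logspace(-6, 0, 21)
res = deltas
sol = 1.0 / np.sqrt(deltas) + 1.0
corner = l_curve_corner(res, sol)
print(deltas[corner])
```

On a straight segment the angle at an interior point is close to $\pi$, so the minimum is attained near the bend of the curve, mimicking the visual corner selection.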
Finally, in all our numerical experiments with the RR-QR method, the relaxation parameter $\eta$ has been fixed according to the heuristic arguments presented above (see [@BR18]). Again, this choice is possibly not optimal and could be improved. Conclusion and future work =========================== In this paper, we have addressed two complementary questions related to the reconstruction of the dielectric coefficients of a medium from partial boundary measurements. The underlying equations are the time-harmonic Maxwell equations formulated in the electric field. First, we have proved an identifiability result which states that identical partial data sets yield identical interior coefficients, assuming that the dielectric properties are known in a neighbourhood of the boundary. This assumption is realistic in many practical applications, and is less restrictive than geometrical or boundary conditions on the inaccessible part of the boundary. Secondly, we have dealt with the issue of how to retrieve the missing data on the inaccessible part in a stable way. To this end, we have studied the quasi-reversibility method for the data completion problem. We have proposed different mixed formulations: a classical one, and two regularized relaxed formulations. We have proved their well-posedness and the convergence of the regularized solution to the exact solution under some conditions on the regularization and relaxation parameters. A large variety of two-dimensional and three-dimensional numerical results attest to the efficiency of the method, in particular for noisy data. This non-iterative data completion procedure would be an interesting initial step for numerical methods to reconstruct the dielectric properties of a tissue or a material. We have in mind, in particular, brain medical applications (e.g. 
microwave tomography) and the cortical mapping problem, which consists in reconstructing the potential on the inner cortex surface from electric potential measurements available on part of the scalp. This question of multi-layer data completion has been studied for Laplace’s equation in the context of the EEG (electroencephalography) inverse source localization [@Clerc12; @Clerc16]. To the best of our knowledge, this problem has not been treated for electromagnetic inverse medium problems and raises challenging theoretical and numerical questions. This is part of ongoing work. [99]{} Alessandrini G., Stable determination of conductivity by boundary measurements, Appl. Anal., 27:1-3 (1988), pp. 153–172. Alessandrini G., Rondi L., Rosset E., Vessella S., The stability for the Cauchy problem for elliptic equations, Inverse Problems, 25 (2009), 123004. Ammari H., Mathematical Modeling in Biomedical Imaging I: Electrical and Ultrasound Tomographies, Anomaly Detection, and Brain Imaging, Lecture Notes in Mathematics: Mathematical Biosciences subseries, Vol. 1983. Springer-Verlag. Berlin. 2009. Ammari H., Garnier J., Kang H., Nguyen L., Seppecher L., Multi-Wave Medical Imaging: Mathematical Modelling and Imaging Reconstruction, Volume 2. World Scientific. London. 2017. Andrieux S., Baranger T.N., Ben Abda A., Solving Cauchy problems by minimizing an energy-like functional, Inverse Problems, 22 (2006), 115. Azaïez M., Ben Belgacem F., El Fekih H., On Cauchy’s problem: II. Completion, regularization and approximation, Inverse Problems, 22 (2006), pp. 1307–1336. Ben Belgacem F., Why is the Cauchy problem severely ill-posed?, Inverse Problems, 23 (2007), pp. 823–836. Ben Belgacem F., El Fekih H., On Cauchy’s problem: I. A variational Steklov-Poincaré theory, Inverse Problems, 21 (2005), pp. 1915–1936. Borcea L., Electrical impedance tomography, Inverse Problems, 18(6) (2002), pp. R99–R136. 
Bourgeois L., A mixed formulation of quasi-reversibility to solve the Cauchy problem for Laplace’s equation, Inverse Problems, 21 (2005), pp. 1087–1104. Bourgeois L., Dardé J., A duality-based method of quasi-reversibility to solve the Cauchy problem in the presence of noisy data, Inverse Problems, 26(9) (2010), 095016. Bourgeois L., Recoquillay A., A mixed formulation of the Tikhonov regularization and its application to inverse PDE problems, Mathematical Modelling and Numerical Analysis, 52(1) (2018), pp. 123–145. Brown B.M., Marletta M., Reyes J.M., Uniqueness for an inverse problem in electromagnetism with partial data, J. Differential Equations, 260 (2016), pp. 6525–6547. Calderón A.P., On an inverse boundary value problem, Seminar on Numerical Analysis and its Applications to Continuum Physics, Soc. Brasileira de Matemática. Rio de Janeiro. 1980, pp. 65–73. Caro P., On an inverse problem in electromagnetism with local data: stability and uniqueness, Inverse Probl. Imaging, 5 (2011), pp. 297–322. Caro P., García A., Reyes J. M., Stability of the Calderón problem for less regular conductivities, J. Differential Equations, 254:2 (2013), pp. 469–492. Caro P., Zhou T., Global uniqueness for an IBVP for the time-harmonic Maxwell equations, Anal. PDE, 7(2) (2014), pp. 375–405. Castellanos J.L., Gómez S., Guerra V., The triangle method for finding the corner of the L-curve, Applied Numerical Mathematics, 43(4) (2002), pp. 359–373. Cimetière A., Delvare F., Jaoua M., Pons F., Solution of the Cauchy problem using iterated Tikhonov regularization, Inverse Problems, 17 (2001), 553. Clerc M., Leblond J., Marmorat J.-P., Papadopoulo T., Source localization in EEG using rational approximation on plane sections, Inverse Problems, 28 (2012), 055018. Clerc M., Leblond J., Marmorat J.-P., Papageorgakis C., Uniqueness result for an inverse conductivity recovery problem with application to EEG, Rendiconti dell’Istituto di Matematica dell’Università di Trieste. 
An International Journal of Mathematics (2016), 48. Dardé J., Iterated quasi-reversibility method applied to elliptic and parabolic data completion problems, Inverse Problems and Imaging, 10(2), (2016), pp. 379–407. Grisvard P., Singularities in Boundary Value Problems, RMA 22, Masson, Springer-Verlag, 1992. Hecht F., New Development in FreeFem++, Journal of Numerical Mathematics, 20(3-4) (2012), pp. 251–265. Ola P., Päivärinta L., Somersalo E., Inverse Problems for Time Harmonic Electrodynamics, Inside Out: Inverse Problems and Applications, Math. Sci. Res. Inst. Publ., vol. 47, Cambridge University Press, Cambridge, 2003, pp. 169–191. Monk P., Finite Element Methods for Maxwell’s Equations, Oxford University Press, 2003. Kozlov V.A., Mazya V.G., Fomin A.V., An iterative method for solving the Cauchy problem for elliptic equation, Comput. Math. Phys., 31 (1991), pp. 45–52. Klibanov M. V., Santosa F., A computational quasi-reversibility method for Cauchy problems for Laplace’s equation, SIAM J. Appl. Math., 51 (1991), pp. 1653–1675. Lattès R., Lions J.-L., Méthode de Quasi-réversibilité et Applications, Dunod, Paris, 1967. McCann H., Pisano G., Beltrachini L., Variation in Reported Human Head Tissue Electrical Conductivity Values, Brain Topography (2019). https://doi.org/10.1007/s10548-019-00710-2 Nédélec J.-C., Mixed finite elements in $\R^3$, Numer. Math., 35 (1980), pp. 315–341. Somersalo E., Isaacson D., Cheney M., A linearized inverse boundary value problem for Maxwell’s equations, J. Comput. Appl. Math., 42(1) (1992), pp. 123–136. Sylvester J., Uhlmann G., A global uniqueness theorem for an inverse boundary value problem, Ann. of Math. (2) 125:1 (1987), pp. 153–169. Tofighi M.R., Daryoush A., Measurement Techniques for the Electromagnetic Characterization of Biological Materials, Handbook of Engineering Electromagnetics, CRC Press (2004). 
Tournier P.-H., Aliferis I., Bonazzoli M., de Buhan M., Darbas M., Dolean V., Hecht F., Jolivet P., El Kanfoud I., Migliaccio C., Nataf F., Pichot C., Microwave Imaging of Cerebrovascular Accidents by Using High-Performance Computing, Parallel Computing, 85 (2019), pp. 88–97. Tournier P.-H., Bonazzoli M., Dolean V., Rapetti, Hecht F., Nataf F., Aliferis I., El Kanfoud I., Migliaccio C., de Buhan M., Darbas M., Semenov S., Pichot C., Numerical Modeling and High Speed Parallel Computing: New Perspectives for Tomographic Microwave Imaging for Brain Stroke Detection and Monitoring, IEEE Antennas and Propagation Magazine, 59(5) (2017), pp. 98–110. Uhlmann G., Electrical impedance tomography and Calderón’s problem, Inverse Problems, 25:12 (2009), 123011.
--- author: - 'G. Nakamura' - 'B. Grammaticos' - 'C. Deroulers' - 'M. Badoual' title: 'Effective epidemic model for COVID-19 using accumulated deaths' --- Introduction ============ Outbreaks of infectious diseases have been a common occurrence throughout history, often linked to or followed by disruptions in societies and human activities [@morabia2004]. There are several ways to measure the impact of outbreaks, but death tolls are the most relevant ones whenever the disease can threaten lives. For instance, 32 million persons have died between 1981 and 2019 in the ongoing HIV epidemic, 700 000 in 2018 alone [@whoreport2018]. Aside from this large scale epidemic, the world has experienced several other recent outbreaks with varying degrees of severity and scale: Zika fever, whose symptoms are mild but can produce long-lasting effects in newborns (microcephaly) [@mlakarNEJM2016; @oliveiraLancet2017]; Ebola virus disease, with a high mortality rate estimated between 20 and 75% [@ebolaNEJM2015; @kerkhoveSciData2015]; Swine flu/H1N1, which became a pandemic in 2009-2010, although with a lower mortality rate than regular flu [@simonsenPLOSMed2013]. In 2019-2020, the severe acute respiratory syndrome COVID-19 has emerged as the most recent pandemic, caused by the virus named SARS-CoV-2 [@chanLancet2020; @chenLancet2020]. Due to its novelty and the lack of previous exposure, humans have no immunity against this threat, leading to an increased number of infections. At the time of this writing, the specifics of the pathogen transmission are still being investigated, as well as the complete infection process once the virus enters the host. However, it has been shown that the main human-to-human transmission mode occurs by the spreading of contaminated droplets, similar to other flu-like diseases [@chanLancet2020]. 
In sharp contrast with H1N1, however, the mortality rate of COVID-19 is estimated to be in the range 1-4%, with worse outcomes among older persons [@world2020report; @baudLancet2020]. This situation creates a unique scenario where healthcare facilities and workers can be overwhelmed in a short period of time, ultimately leading to untreated patients with COVID-19 as well as other diseases [@imperialcollege2020]. To make matters even worse, asymptomatic patients can spread the pathogen for an extended period of time, showing no or only mild symptoms during the course of the infection. As a result, laboratory tests to detect the viral load are necessary to identify the correct number of cases outside hospital environments. Similar to the H1N1 pandemic, the required number of tests far exceeds the amount currently available in most countries. Without timely tracking of new cases, contact tracing becomes a challenging task, hindering estimates of the number of new cases per infection, summarized by the basic reproduction number, $\mathcal{R}_0$. The significance of this parameter lies in the fact that it provides a way to glimpse the values of the transmission rates. Those can then be used in compartmental models – mathematical models that describe the evolution of epidemics assuming nearly homogeneous populations [@keelingJRSoc2005; @bansalJRSoc2007]. Earlier estimates for $\mathcal{R}_0$ using epidemiological data from Wuhan, China, set $\mathcal{R}_0$ between $1.5$ and $5.7$ without additional measures to restrict the spreading [@kucharskiLancet2020; @liNEJM2020; @sancheCDC2020]. With measures in place – such as lockdown, self-isolation, and social distancing – $\mathcal{R}_0$ was estimated to be around $1.05$ [@kucharskiLancet2020]. More importantly, the insufficient number of tests, in addition to the long waiting time for lab results, affects the quality of epidemic models. 
In the absence of mass testing, the death toll can be used as an alternative metric to probe the extension of the epidemic. The medical staff can assess the cause of death from clinical reports, which may or may not contain test results, using the best of their knowledge. Additional tests may also be appended to reports to further specify the cause of death. Both the numbers of cases and deaths are publicly available as part of a global effort to tackle the pandemic. Here, we study the evolution of COVID-19 deaths in order to reduce the issues caused by the limited number of laboratory confirmed tests. We show that the accumulated deaths can be effectively described by simple functions, namely, sigmoids whose parameters are explained in terms of the SIR epidemic model. The SIR model is a compartmental model with the following health states: susceptible, infective, and removed [@kermackProcRSocA1927]. The removed state represents those who have passed away or have developed immunity, either by recovering from the disease or by any other method such as vaccination. The SIR model was chosen over epidemic models with additional health states or reinfections because it is the simplest model that addresses immunity. Among our results, we show that crude mortality rates can be computed from the parameters of sigmoids and that the rates can change by up to one order of magnitude, depending on the severity of the outbreak in a given region. The paper is organized as follows. Sec. \[sec:data\] contains the description of the data and variables used along the text. Sec. \[sec:sir\] explains how the SIR model is reduced from a system of differential equations to a single non-linear differential equation, with emphasis on the expansion around equilibrium. Data are modeled in Sec. \[sec:effective\] via sigmoidal functions, whose parameters are explained in terms of the epidemiological parameters of the SIR model. Time windows are addressed in Sec. 
\[sec:timewindows\], with special emphasis on $\mathcal{R}_0$, crude mortality rate, and quantitative effects of the confinement. Final comments and conclusions are listed in Sec. \[sec:con\]. Data {#sec:data} ==== The European Center for Disease Prevention and Control (ECDC) provides COVID-19 data updated on a daily schedule [@ecdc2020]. The daily reports portray the distribution of new confirmed cases and new deaths presented as time series. The dataset also displays the population size $N$ according to the 2018 World Bank census for each geographical region (see Table \[tab:example\]). ------------ ----- ------- ------ ------- -------- --------- ------- -------------- ---------- date day month year cases deaths country geoId country code $N$ 08/04/2020 8 4 2020 3777 1417 France FR FRA 66987244 07/04/2020 7 4 2020 3912 833 France FR FRA 66987244 06/04/2020 6 4 2020 1873 518 France FR FRA 66987244 05/04/2020 5 4 2020 4267 1053 France FR FRA 66987244 04/04/2020 4 4 2020 5233 2004 France FR FRA 66987244 ------------ ----- ------- ------ ------- -------- --------- ------- -------------- ---------- : \[tab:example\] Example of ECDC time series for daily number of new cases and new deaths in France [@ecdc2020]. ![\[fig1\] Evolution of COVID-19 deaths in France. a) Accumulated COVID-19 deaths (circles). Countrywide measures to increase social distance and enforce confinement were introduced on March 16, and progressively removed starting from May 11 on a per-region basis. b) Asymmetry of the daily deaths (dotted line) in France induced by measures to control and reduce the spreading of the virus among the population. (Solid line) Bézier curve of the 7-day moving average of daily deaths.](figure1ab.eps){width="95.00000%"} To better grasp the nature of the data, consider the number of accumulated deaths in France, as shown in Fig. \[fig1\]. 
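As an illustration, the accumulated death toll plotted in Fig. \[fig1\]a can be built from an ECDC-style daily series with a cumulative sum. The Python/pandas sketch below hard-codes the five French entries from Table \[tab:example\] instead of reading the actual CSV file, whose exact layout we do not reproduce here.

```python
import pandas as pd

# Daily deaths for France, taken from the table above
# (ECDC format, dates given as day/month/year).
df = pd.DataFrame({
    "date": ["08/04/2020", "07/04/2020", "06/04/2020",
             "05/04/2020", "04/04/2020"],
    "deaths": [1417, 833, 518, 1053, 2004],
})
df["date"] = pd.to_datetime(df["date"], format="%d/%m/%Y")

# Sort chronologically, then accumulate the daily counts.
df = df.sort_values("date").reset_index(drop=True)
df["accumulated"] = df["deaths"].cumsum()
print(df["accumulated"].tolist())  # [2004, 3057, 3575, 4408, 5825]
```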
Similar to other European countries, France was heavily affected by the pandemic, nearing the mark of 30 000 deaths, with a sharp increase in deaths of infected patients around April-May. These deaths are linked to infections which took place between 3 and 6 weeks prior. On March 16, the French government implemented measures to mitigate the propagation of the disease. The measures included the confinement of non-essential workers and temporary closures of schools and universities, as well as of commercial stores and services. The effects of said measures did not show up immediately in the data (see Fig. \[fig1\]b) but instead appeared after some time had passed, around 4 weeks, reducing the number of daily fatalities. Before diving into modeling, we look for general features that can be used to model outbreaks. These features concern the spreading regime of the disease among the population, excluding spatial effects and temporal delays. As noted above, the death toll experienced a rapid growth between March and April (see Fig. \[fig1\]). This regime is the hallmark of epidemics and denotes the exponential phase. In general, the growth rate in compartmental epidemic models is summarized by $\mathcal{R}_0$, which is the ratio of the transmission rate $\alpha$ to the removal rate $\lambda$. As the name implies, the transmission rate dictates the average number of persons a given infective individual typically infects during a fixed time interval. The inverse of the removal rate is the characteristic time during which a person remains contagious. The inflection point is another important feature. However, unlike small scale outbreaks, where the disease spreads uninterruptedly among elements of a given population, the introduction of large scale control measures forcibly modifies the transmission rate. This sudden perturbation reduces the value of the transmission rate in a short time interval, creating an artificial inflection point. 
After the exponential phase, the system relaxes toward an equilibrium state with no infective people, with a characteristic time scale, namely, the relaxation time $\tau$. We shall investigate the relationship between $\tau$ and the epidemiological parameters in the next section. Considering these three aspects, and using Fig. \[fig1\] as reference, it becomes clear that the number of accumulated deaths is a monotonic function, with an early exponential growth only to be replaced by a smooth relaxation towards equilibrium. Therefore, sigmoids appear as ideal candidates to model the data. SIR model {#sec:sir} ========== For the sake of simplicity, let us assume the spreading dynamics can be approximated by the standard SIR model in a homogeneous population of size $N$. The model comprises a population whose subjects can be classified into three distinct health states, namely, susceptible, infective, and removed. The removed state includes individuals that have either died or recovered from the disease. The latter are assumed to be no longer infective, nor susceptible to becoming sick again, because of some kind of immunity. The fraction of individuals in each compartment is, respectively, $S(t)$, $I(t)$, and $R(t)$, at time instant $t$. The dynamics go as follows. Infective subjects in the population transmit the pathogen to susceptible ones, under adequate conditions. The transmission occurs with rate $\alpha$, and we assume homogeneous mixing of the population, that is, each person in the population is statistically equivalent to any other [@bansalJRSoc2007]. Once infected, the person remains infective for an average period $1/\lambda$, where $\lambda$ is the removal rate. 
The dynamical equations that describe the model are: \[eq:sir\] $$\begin{aligned} \label{eq1} \frac{d S}{dt} &= -\alpha S(t)I(t),\\ \label{eq2} \frac{d I}{dt} &= +\alpha S(t)I(t) - \lambda I(t),\\ \label{eq3} \frac{dR}{dt} &= +\lambda I(t), \end{aligned}$$ with the constraint $S+I+R = 1$, i.e., conservation of the population size. The model certainly simplifies or neglects recent aspects of the COVID-19 pandemic such as differentiation between asymptomatic and symptomatic transmission or age-dependent rates [@premLancet2020; @giordanoNatMed2020]. In fact, research on the biological characteristics of the pandemic is still ongoing [@gandhiNEJM2020], but evidence indicates re-infections should be minimal in recovered patients [@otaNatRevImmuno2020]. Therefore, we make a case for an approximate description of the problem via the SIR model over the inclusion of extra complexities and uncertainties, aiming to capture the dominant aspects. The system of differential equations (\[eq1\]-\[eq3\]) can be further reduced to a single first-order differential equation as follows. From (\[eq3\]) and (\[eq1\]), one finds $ S(t) = S_0 \textrm{e}^{ -(\alpha / \lambda ) \, R(t) } $. The constant $S_0 = S(0) \textrm{e}^{ (\alpha / \lambda ) \, R(0) }$ depends on the initial conditions $S(0)$ and $R(0)=1 -S(0)-I(0)$. Usually, we are more interested in scenarios in which $R(0) =0$ and thus $S_0 = 1 - I(0)$, similar to the onset of an emerging disease. The complete expression for $S_0$ must be used for different initial conditions, which can become a problem whenever the ratio $\alpha/\lambda$ is unknown. 
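Equations (\[eq1\]-\[eq3\]) are straightforward to integrate numerically. The Python/SciPy sketch below uses illustrative rates chosen so that $\mathcal{R}_0 = \alpha/\lambda = 1.3$, the value used in Fig. \[fig:sir\]; the rates themselves and the initial infective fraction are hypothetical, not fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (hypothetical): R0 = alpha / lam = 1.3.
alpha, lam = 0.26, 0.2            # transmission and removal rates, 1/day

def sir(t, y):
    """Right-hand side of the SIR system (eq1)-(eq3)."""
    S, I, R = y
    return [-alpha * S * I, alpha * S * I - lam * I, lam * I]

y0 = [1.0 - 1e-4, 1e-4, 0.0]      # S(0), I(0), R(0), with S + I + R = 1
sol = solve_ivp(sir, (0.0, 400.0), y0, rtol=1e-8, atol=1e-10)

S_end, I_end, R_end = sol.y[:, -1]
print(S_end + I_end + R_end)      # conserved, equal to 1 up to solver error
print(R_end)                      # close to the final size R_inf (about 0.42)
```

Note that the right-hand sides sum to zero, so the constraint $S+I+R=1$ is preserved automatically by the dynamics.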
Ignoring constant solutions, it can be shown [@harkoApplMathComput2014] that the equation can be further reduced to $$\label{eq:r} \frac{d R}{d t} = -\lambda S_0 \textrm{e}^{-(\alpha/\lambda ) R(t)} - \lambda R + \lambda,$$ whose general solution can be obtained by quadrature $$\label{eq:t} t- t_0 =\frac{1}{\lambda} \int_{R(0)}^{R(t)} \frac{d r}{1 - r - S_0 \textrm{e}^{-(\alpha/\lambda) r } }.$$ The stationary condition in (\[eq:r\]) gives the value $R_{\infty}$ as a solution of the transcendental equation $ R_{\infty} = 1 - S_0 \textrm{e}^{-(\alpha/\lambda)R_{\infty}}$. ![\[fig:sir\] SIR model. a) Numerical solution of the SIR model (empty circles) with $\mathcal{R}_0=1.3$. A good agreement is found between the sigmoidal curve (\[eq:r\_sol\]) whose parameters were found by least-square fitting (line) and the SIR model, being less accurate for increasing values of $\mathcal{R}_0$. b) The curve that represents the infective fraction of the population in the SIR model is symmetrical, with peak at the center, as long as parameters remain constant. ](figure2ab.eps){width="95.00000%"} The solution $R(t)$ can be obtained by inverting (\[eq:t\]), for which a general method remains unknown. To circumvent the issue, one may expand (\[eq:r\]) around points of interest, for example, around $R = 0$ or $R = R_\infty$. Each expansion has advantages and issues. The linear term in the expansion around $R = 0$ dictates the exponential growth of $R(t)$, with an effective rate $\alpha S_0 - \lambda$. We use the adjective effective rather loosely here because at some point the curve should change its curvature and converge to an equilibrium value. Unfortunately, the competition between the early exponential growth and relaxation towards equilibrium is often difficult to assess near the onset of the outbreak, requiring higher order contributions in the expansion. Alternatively, we can get a better picture of the problem by expanding $R(t)$ around the equilibrium. 
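The transcendental equation for $R_\infty$ has no closed-form solution, but its nonzero root is easy to bracket numerically. A minimal sketch (Python/SciPy; the function name is ours, not from the paper):

```python
import math
from scipy.optimize import brentq

def final_size(R0, S0=1.0):
    """Nonzero root of R_inf = 1 - S0 * exp(-R0 * R_inf).

    For R0 > 1 (and S0 close to 1) the nontrivial root lies strictly
    between 0 and 1; starting the bracket just above 0 skips the
    trivial root R_inf = 0 of the S0 = 1 case.
    """
    f = lambda r: r - 1.0 + S0 * math.exp(-R0 * r)
    return brentq(f, 1e-9, 1.0)

print(round(final_size(1.3), 3))  # → 0.423
```

The final size grows quickly with $\mathcal{R}_0$: for $\mathcal{R}_0 = 2$ the same root is close to $0.80$.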
By doing so, the expansion only requires contributions up to second order, as it already carries the information regarding $R_{\infty}$. Furthermore, the expansion near the equilibrium also ensures that $\tau$ describes the dominant relaxation time, rather than combinations of several decay modes, each one with its own timescale. Define $\delta R(t) = R_{\infty} - R(t) \geqslant 0$. The expansion of (\[eq:r\]) near $\delta R \ll 1$, together with the transcendental equation for $R_\infty$, gives $$\label{eq:approx} \frac{1}{\lambda}\frac{d }{d t}\delta R = - \left[1 - (1-R_{\infty})\frac{\alpha}{\lambda}\right] \delta R + \frac{1-R_{\infty}}{2}\,\left(\frac{\alpha}{\lambda}\right)^2 \delta R^2 + o(\delta R^3).$$ Keeping terms up to $o(\delta R^2)$, one converts (\[eq:approx\]) into a Bernoulli equation with relaxation time $\tau = [\lambda - \alpha (1- R_{\infty})]^{-1}$. Solving for $\delta R$ and transforming back to $R$, we find the approximate solution (see Fig. [\[fig:sir\]]{}) $$\label{eq:r_sol} R(t) = \frac{R_{\infty}-A \, \textrm{e}^{-t/\tau}}{\;\;\;\; 1+B\, \textrm{e}^{-t/\tau}} \;,$$ with $A = R_{\infty} - (1+B)R(0) $, $B = c_0 \tau / z_0 $, $\, z_0 = [R_{\infty}-R(0)]^{-1}- c_0 \tau$, and $c_0 = (\lambda/2)(1-R_{\infty})(\alpha/\lambda)^2$. Effective model {#sec:effective} =============== The sigmoidal expression in (\[eq:r\_sol\]) satisfies the requirements listed in Sec. \[sec:data\] and it is an excellent candidate to model the data of accumulated deaths. However, the removed compartment corresponding to $R(t)$ holds both recovered and deceased fractions of the population, i.e., all the infected who are unable to spread the disease. We can simplify this issue by assuming the existence of a simple relation between $R(t)$ and the accumulated number of deaths divided by the population size, $g(t)$, at the time instant $t$. 
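The coefficients of the sigmoid (\[eq:r\_sol\]) can be assembled directly from these formulas. The Python sketch below (all numeric values are illustrative, with $R_0 = 1.3$ and $R_\infty \approx 0.423$ taken from the transcendental equation for the final size) checks that the closed form reproduces the initial condition exactly and relaxes to $R_\infty$.

```python
import math

# Illustrative parameters: alpha/lam = 1.3, R_inf from the
# transcendental equation R_inf = 1 - exp(-1.3 * R_inf).
alpha, lam = 0.26, 0.2
R_init = 0.0
R_inf = 0.423

c0 = (lam / 2.0) * (1.0 - R_inf) * (alpha / lam) ** 2
tau = 1.0 / (lam - alpha * (1.0 - R_inf))      # dominant relaxation time
z0 = 1.0 / (R_inf - R_init) - c0 * tau
B = c0 * tau / z0
A = R_inf - (1.0 + B) * R_init

def R_sigmoid(t):
    """Approximate solution (eq:r_sol) for the removed fraction."""
    e = math.exp(-t / tau)
    return (R_inf - A * e) / (1.0 + B * e)

print(R_sigmoid(0.0))     # equals R(0) = 0 by construction
print(R_sigmoid(1e4))     # relaxes to R_inf for t >> tau
```

With these numbers $\tau \approx 20$ days, consistent with the slow relaxation visible in Fig. \[fig:sir\]a.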
To keep the model as simple as possible, we neglect temporal delays and impose $$g(t) = f R(t),$$ where the crude mortality rate $f$ is the ratio between deceased and infected. As such, it can also be used as an estimator of the likelihood of dying after contracting the disease. The equilibrium value $g_{\infty}$ varies from country to country and can be used to characterize the outbreak, more specifically, to assess the impact of the outbreak on the afflicted population. The monotonic nature of cumulative quantities together with the upper bound $g_{\infty}$ restrict the possible functional forms for $g(t)$. Sigmoids are natural candidates to describe $g(t)$ since they are bounded and monotonic. Here we consider the following expression in tandem with (\[eq:r\_sol\]): $$\label{eq:eff} g_{\textrm{eff}}(t) \equiv \frac{g_{\infty} - a\, \textrm{e}^{-t/\tau} }{1+b\, \textrm{e}^{-t/\tau}}.$$ The timescale $\tau$ dictates the relaxation of $g(t)$ towards $g_{\infty}$, with inflection at $t_c=\tau \ln b$. Equation (\[eq:eff\]) has 3 or 4 parameters depending on whether $g_{\textrm{eff}}(t)$ must pass through the initial entry $g_{\textrm{eff}}(0)=g(0)=g_0$ or not. The former implies the constraint $a = g_{\infty} - g_0(1+b)$, whereas the latter implies $g_{\textrm{eff}}(0) = (g_{\infty}-a)/(1+b)$ with $a \leqslant g_{\infty}$. Since the data are noisy, we do not require the fitting curve to pass through $g_0$. 
The relationship between $g(t)$ and $R(t)$ is consistent if the following equations hold: $$\begin{aligned} \label{eq:sys1} \frac{1}{\tau} & = \lambda - \alpha \left( 1-\frac{g_\infty}{f}\right),\\ \label{eq:sys2} g_\infty &= f -f S_0 \textrm{e}^{-(\alpha/\lambda f) g_\infty},\\ \label{eq:sys3} b& =\frac{(f-g_\infty)[g_\infty -g_{\textrm{eff}}(0)]\alpha^2}{2\omega\lambda^2 f^2- (f-g_\infty)[g_\infty -g_{\textrm{eff}}(0)]\alpha^2} \end{aligned}$$ The system of equations (\[eq:sys1\]-\[eq:sys3\]) connects the epidemiological parameters $(\alpha, \lambda, f, S_0)$ with the parameters $(\tau, g_{\infty}, b)$ of the sigmoid curve (\[eq:r\_sol\]), which can be estimated by a least-square fitting procedure. However, there are more variables than equations, so at least one epidemiological variable must be fixed. There are two reasonable choices, namely, either $S_0$ or $\lambda$. The first choice should be selected if the data include the onset of the outbreak because $S_0 \approx 1$, since the majority of the population should be in the susceptible state at the early stage of the epidemic. However, if the beginning of the outbreak is unknown, the assumption $S_0 \approx 1$ is no longer valid. Alternatively, one may consider an estimate for the removal rate $\lambda$, or its probability distribution. In this case, the input parameter to solve (\[eq:sys1\]-\[eq:sys3\]) depends solely on the characteristics of the disease and local demographics. It requires neither details concerning the disease spreading nor the moment at which the outbreak begins. Therefore, evidence-based values for $\lambda$ from patient data are far more suitable for the purposes of this study, and shall be used hereafter. Time windows {#sec:timewindows} ============ Fig. \[fig:singlewindow\] shows the death toll in France from March 16 to May 25. 
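Given the fitted sigmoid parameters and a fixed removal rate $\lambda$, equations (\[eq:sys1\]) and (\[eq:sys2\]) can be inverted numerically for $\alpha$ and $f$. A sketch with Python/SciPy, where all input numbers are illustrative rather than fitted values:

```python
import numpy as np
from scipy.optimize import fsolve

# Fixed inputs (illustrative): removal rate lam, initial susceptible
# fraction S0, and sigmoid parameters tau (days) and g_inf.
lam, S0 = 0.2, 1.0 - 1e-4
tau, g_inf = 20.0, 0.0042

def equations(p):
    """Residuals of (eq:sys1) and (eq:sys2) for unknowns (alpha, f)."""
    alpha, f = p
    eq1 = 1.0 / tau - (lam - alpha * (1.0 - g_inf / f))
    eq2 = g_inf - (f - f * S0 * np.exp(-(alpha / (lam * f)) * g_inf))
    return [eq1, eq2]

alpha, f = fsolve(equations, x0=[0.25, 0.01])
print(alpha / lam)   # basic reproduction number R0, close to 1.3 here
print(f)             # crude mortality rate, close to 1% here
```

With $\alpha$ and $f$ in hand, (\[eq:sys3\]) then fixes $b$, closing the system.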
Unlike for the theoretical SIR model, the agreement between the data and the sigmoidal fit (performed with a standard non-linear least-square fitting procedure) is poor. The fitted curve becomes negative in early March and converges to a different equilibrium value. Thus, we must reject the sigmoid as a description of the death toll over the complete time interval. Alternatively, the parameter optimization of the SIR model agrees well with the input data, except for the first 20 days. However, the optimization returns $\lambda_{\textrm{opt}}= 0.103$ day$^{-1}$, which does not match the signature value for COVID-19, $\lambda = 0.2$ day$^{-1}$. If the value $\lambda=0.2$ day$^{-1}$ is held fixed during the optimization (not shown), then the optimal set of parameters becomes highly sensitive to the initial guess of the optimization algorithm. ![\[fig:singlewindow\] Issues when one tries to model the data with a single time window, using French COVID-19 deaths from March 16 to May 25. Parameter optimization of the SIR model (dashed line) agrees well with input data (circles), especially after April 5. The optimization neglects the effects of control measures to reduce the transmission rate, while reducing the removal rate to $\lambda = 0.103$ day$^{-1}$. The fit by a sigmoidal curve (solid line) underestimates $g_{\infty}$ and the curve turns negative in early March. ](figure3.eps){width="0.9\columnwidth"} The observation above raises the question of whether the SIR model describes the data at all. However, this issue can be understood by inspecting the number of daily deaths (see Fig. \[fig1\]a). An asymmetry is observed around the peak of daily deaths (see Fig. \[fig1\]b), which is in sharp contrast with the corresponding curve in the theoretical SIR model (see Fig. \[fig:sir\]b). The asymmetry can be explained by a variation in the transmission rate. 
Unlike other recent epidemics, several countries have adopted control measures such as lockdowns of non-essential workers and restrictions on flights and other travel. These efforts effectively reduce the transmission rate of COVID-19 once they are in place. Thus, the data must be divided into non-overlapping time windows, each with its own set of epidemiological parameters. ![\[fig:world\] Global evolution of COVID-19. Accumulated deaths in millions (full circles). Daily deaths in thousands (line with circles). ](figure4.eps){width="0.9\columnwidth"} ![\[fig:world2\] Time windows and the global evolution of COVID-19. The data (circles) are separated into two consecutive time windows (February 29 - April 9) and (April 10 - May 31). The fit (\[eq:eff\]) in the first time window (solid line) predicts over $2.0$ million deaths in a scenario without control measures. With control measures in place in the second time window, the death toll converges to $0.6$ million deaths (dashed line). (inset) Residue per degree of freedom $r^2$. The size of the first time window is chosen so as to maximize the adherence of the fitting curve (minimize $r^2$) while increasing the number of data points. $r^2$ forms a plateau for sizes between $29$ and $41$ days, followed by a sudden increase at $42$ days, indicating that $41$ days is the optimal size of the time window (February 29 - April 9). ](figure5.eps){width="0.9\columnwidth"} As a first example of data for which the necessity of considering several time windows is obvious, we consider the global deaths between February 29 and May 31 (see Fig. \[fig:world\]). This case is interesting because the countries afflicted by the epidemic have implemented various control measures on different schedules and with varying effectiveness. Thus, the reduction of the global transmission rate cannot be pinpointed to a single day or week a priori. Similar to Fig.
\[fig1\], the number of daily deaths exhibits an asymmetry, with center in mid-April, indicating the approximate interval in which the global transmission rate changes. The European countries with the most cases at the time (France, Italy, Spain) introduced lockdowns and other strategies in mid-March, with others following shortly. Afterwards, the daily number of deaths starts to decrease, followed by an oscillatory pattern (7-day period) likely tied to the work routines of medical staff and death case reports. In the following, we divide the observation time into two consecutive time windows. Let us explain how we determine the optimal separation time (the optimal duration of the first time window). Let $g_k$ be the fraction of the global population that dies due to complications caused by COVID-19, starting from February 29. The index $k=0,1,\ldots, m-1$ indicates the number of elapsed days in the time window with duration $m$. The data are fitted via (\[eq:eff\]) with parameters $g_{\infty}, a, b$ and $\tau$, using the trust-region reflective algorithm (Python/SciPy). It is convenient to fit the data using $\tau^{-1}$ instead of $\tau$ and a finite interval $0 \leqslant \tau^{-1} \leqslant 1$, so that the infection lasts at least one day in the mathematical model. In the fitting procedure, we use bounds for $b$ which depend on the time interval $t_c$ between the inflection point and the initial entry, more specifically, $t_c = \tau \ln b$. For $t_c > 0$ we restrict the search to $t_c/\tau < 10$ so that $b \sim o(10^{4})$; for $t_c < 0$, i.e., starting the counting after the inflection point, the parameter space of $b$ is limited between 0 and 1. Thus, we set $0 < b \leqslant 10^{4}$. The remaining parameters are restricted to $0 \leqslant g_{\infty}, a \leqslant 1$. The quality of the fit is quantified by the square residue divided by the size of the time window, namely, $r^2 = (1/m)\sum_{k=0}^{m-1}[g_k - g_{\textrm{eff}}(k)]^2$.
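The fitting step just described can be sketched with Python/SciPy. The logistic form used below for $g_{\textrm{eff}}$ is an assumption made for illustration (the actual expression is the one in (\[eq:eff\]), given earlier in the paper); the function names are ours. The bounds and the use of $\tau^{-1}$ follow the description above.

```python
import numpy as np
from scipy.optimize import curve_fit

def g_eff(t, g_inf, a, b, inv_tau):
    # Assumed logistic form of the sigmoid; fitted in terms of 1/tau so
    # that the bound 0 <= 1/tau <= 1 keeps the time scale at >= 1 day.
    return a + (g_inf - a) / (1.0 + b * np.exp(-inv_tau * t))

def fit_window(g_data):
    """Fit one time window; return parameters and the residue r^2 per point."""
    g_data = np.asarray(g_data, dtype=float)
    m = len(g_data)
    t = np.arange(m, dtype=float)
    # Bounds from the text: 0 <= g_inf, a <= 1; 0 < b <= 1e4; 0 <= 1/tau <= 1.
    lower = [0.0, 0.0, 1e-12, 0.0]
    upper = [1.0, 1.0, 1.0e4, 1.0]
    p0 = [max(g_data.max(), 1e-6), float(g_data[0]), 10.0, 0.1]
    popt, _ = curve_fit(g_eff, t, g_data, p0=p0,
                        bounds=(lower, upper), method='trf')
    r2 = np.mean((g_data - g_eff(t, *popt)) ** 2)
    return popt, r2
```

Scanning `fit_window` over increasing window sizes $m$ while monitoring $r^2$ reproduces the plateau criterion used to select the first time window.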
The first time window comprises the $m$ consecutive days that minimize $r^2$ while increasing $m$, as indicated by the inset in Fig. \[fig:world2\], from February 29 to April 10, whereas the days starting from the $(m+1)$-th day are put into the second time window. The curve fit in the first interval corresponds to a scenario in which control measures were not implemented outside China and South Korea. These two time windows are consistent with the asymmetry observed in the number of daily deaths in Fig. \[fig:world\], whose peak lies in mid-April. The fit converges to nearly $2.0$ million deaths, significantly above the equilibrium value ($0.6$ million deaths) in the second time window, from April 11 to May 31. We stress that the number of fatalities avoided ($1.4$ million) highlights the effectiveness of lockdowns and other control measures. The curve fitting in the first time window, which includes the phase with exponential growth, can be challenging, though, even more so if the fitting data contain a large number of entries around the inflection point induced by the introduction of control measures. This inflection point is not the natural inflection point which occurs in an uncontrolled epidemic with constant parameters. Due to the significance of inflection points in the curve fitting, the induced inflection point can be responsible for an artificially low estimate of the equilibrium value $g_{\infty}$ in the first time window. Also, early data often contain far fewer entries, and are thus more susceptible to fluctuations. Conversely, the presence of inflection points caused by the introduction of control measures greatly simplifies the fitting procedure in the second time window. For instance, Fig. \[fig:denmark\] depicts the death toll in Denmark after April 3, within the second time window. The fitting (\[eq:eff\]) is in excellent agreement with the data for $g_{\infty} = (1.03 \pm 0.01) \times 10^{-4}$ (approximately $600$ deaths) and $\tau = 15.84 \pm 0.65$ days.
Next, we solve the system (\[eq:sys1\]-\[eq:sys3\]) for $\lambda$ distributed according to some probability distribution, centered around $\lambda = 0.2$ day$^{-1}$. In practical terms, the width of the probability distribution provides the basis for computing the uncertainties of the transmission and crude mortality rates. However, the finer details of the distribution of $\lambda$ remain unknown, so we resort to a uniform distribution for the virus-shedding period, whose interval lies between 3 and 7 days [@wolfelNature2020; @premLancet2020]. By doing so, we overestimate the uncertainties of the remaining epidemiological parameters of the SIR model and find $\mathcal{R}_0 = 0.81 \pm 0.07$. Our estimate for the crude mortality rate, $f = (0.68 \pm 0.15) \times 10^{-3}$, is compatible with the estimate $f_{\textrm{DNK}} = 8.2 \times 10^{-4}$ (confidence interval: $[5.9 - 15.4] \times 10^{-4}$) obtained by screening antibodies of $20\,640$ blood donors below the age of 70 in Denmark [@erikstrupMedrxiv2020]. Also, the asymptotic value $R_{\infty} = g_{\infty}/f = 0.15 \pm 0.03$ places the fraction of infected well below the threshold ($60\%$) for herd immunity. Thus, a new wave of infections is likely to occur if the disease becomes seasonal, unless a vaccine becomes available or social distancing measures remain in place. ![\[fig:denmark\] COVID-19 deaths in Denmark from April 4 to May 31. The sigmoidal fit (line) of the input data (circles), followed by the resolution of (\[eq:sys1\]-\[eq:sys3\]), produces $\mathcal{R}_0 = 0.81$ and $f=6.86\times 10^{-4}$. ](figure6.eps){width="0.9\columnwidth"} ![\[fig:france\] Evolution of COVID-19 in France between March 16 and May 31. The number of data points is reduced for clarity. Data (full circles) and sigmoidal fit (solid line) between March 16 and April 3, with $\mathcal{R}_{0,1} = 3.25$ and crude mortality rate $f_1=4.23 \times 10^{-3}$.
The effects of confinement emerge shortly after April 3, with a significant reduction to $\mathcal{R}_{0,2}=0.74$, as indicated by the sigmoidal fit (dashed line) of the data in the second time window (empty squares). The mortality rate remains nearly unchanged, $f_2=4.60 \times 10^{-3}$. ](figure7.eps){width="0.9\columnwidth"} We can now move to more complicated cases, in which the data themselves contain artifacts and the fitting procedure can be tricky. That is the case of France, for instance. The lockdown was issued on March 16, and unaccounted deaths in nursing homes were added to the official statistics on April 3 and 4, producing a large fluctuation in the number of daily deaths. In this case, we separate the time windows according to the number of deaths. The first time window lies between 100 and 5000 deaths (March 16 - April 3), whereas the remaining days until May 31 comprise the second time window, as shown in Fig. \[fig:france\]. The curve fitting in the second time window is far more stable and insensitive to initial guesses, or bounds, with the equilibrium death toll nearing $29\,440 \pm 158$ and time scale $\tau = 15.12\pm 0.72$ days, which approaches the typical recovery time for mild COVID-19 infections. The solution of the system (\[eq:sys1\]-\[eq:sys3\]) returns $\mathcal{R}_{0,2} = 0.74 \pm 0.08 < 1$, in agreement with the decline of new infections, where the notation $\mathcal{R}_{0,k}$ with $k=1$ and $2$ refers to the first and second time windows, respectively. In addition, the crude mortality rate in the second time window reads $f_2 = (4.67\pm 1.16)\times 10^{-3}$, one order of magnitude higher than in Denmark or Germany (see Table \[tab:params\]).
  ------------- ----------------- ----------------- ------------------ ------------------ ----------------------------
                $\alpha$          $\mathcal{R}_0$   $f \, (10^{-3})$   $S_0$              $g_{\infty} \, (10^{-4})$
  France        $0.17 \pm 0.07$   $0.74 \pm 0.08$   $4.67 \pm 1.16$    $0.97 \pm 0.02$    $4.40 \pm 0.02$
  Italy         $0.19 \pm 0.07$   $0.84 \pm 0.07$   $4.17 \pm 0.78$    $0.96 \pm 0.02$    $5.75 \pm 0.01$
  Spain         $0.19 \pm 0.07$   $0.83 \pm 0.07$   $2.31 \pm 0.63$    $0.90 \pm 0.06$    $6.08 \pm 0.04$
  UK            $0.18 \pm 0.07$   $0.82 \pm 0.07$   $5.86 \pm 1.08$    $0.97 \pm 0.01$    $6.26 \pm 0.05$
  Germany       $0.18 \pm 0.07$   $0.83 \pm 0.07$   $0.39 \pm 0.11$    $0.90 \pm 0.06$    $1.05 \pm 0.01$
  Sweden        $0.19 \pm 0.07$   $0.85 \pm 0.06$   $4.84 \pm 0.77$    $0.98 \pm 0.01$    $5.08 \pm 0.18$
  Denmark       $0.18 \pm 0.07$   $0.81 \pm 0.07$   $0.68 \pm 0.15$    $0.95 \pm 0.02$    $1.03 \pm 0.01$
  Belgium[^1]   $0.17 \pm 0.07$   $0.73 \pm 0.08$   $9.97 \pm 2.55$    $0.97 \pm 0.02$    $8.54 \pm 0.05$
  ------------- ----------------- ----------------- ------------------ ------------------ ----------------------------

  : \[tab:params\] Parameters of the SIR model and sigmoid from April 4 to May 31 for selected countries, using a uniform distribution for $\lambda$.

The analysis in the first time window requires more care. The fitting procedure requires bounds; otherwise the fit may incorrectly converge to an equilibrium value much lower than the one obtained for the second time window. In addition, multiple solutions can be found for (\[eq:sys1\]-\[eq:sys3\]), several of which are not realistic. Instead of using brute force, it is far more convenient to approximate the solution and set $S_0 = 0.99$. In that case, (\[eq:sys3\]) is discarded and the remaining equations produce $\mathcal{R}_{0,1} = 3.25 \pm 1.71$.
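The reduced system just described (fix $S_0$ and $\lambda$, discard (\[eq:sys3\]), and solve (\[eq:sys1\])-(\[eq:sys2\]) for $\alpha$ and $f$) can be solved with a standard root finder. The sketch below is a minimal illustration under those assumptions; the function name, default values, and initial guess are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import fsolve

def solve_alpha_f(tau, g_inf, lam=0.2, S0=0.99, guess=(0.2, 1e-3)):
    """Solve Eqs. (sys1)-(sys2) for (alpha, f), with S0 and lambda held
    fixed, as in the reduced system used for the first time window."""
    def residuals(p):
        alpha, f = p
        # Eq. (sys1): 1/tau = lambda - alpha (1 - g_inf / f)
        r1 = lam - alpha * (1.0 - g_inf / f) - 1.0 / tau
        # Eq. (sys2): g_inf = f - f S0 exp(-(alpha / (lambda f)) g_inf)
        r2 = f - f * S0 * np.exp(-(alpha / (lam * f)) * g_inf) - g_inf
        return [r1, r2]
    alpha, f = fsolve(residuals, list(guess))
    return alpha, f
```

A reasonable initial guess matters here, since the full system admits multiple (partly unrealistic) solutions, as noted above.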
The crude mortality rate in the first time window, $f_1 = (4.23 \pm 0.58)\times 10^{-3}$, shares the same order of magnitude as $f_2$, indicating that, overall, the French health care system remained responsive throughout the epidemic. If control measures had not been implemented, the expected number of deaths would have soared to 240 000, with an equilibrium infected fraction of the population of $R_{\infty}=0.96$, assuming the mortality rate had remained roughly the same.

Conclusion {#sec:con}
==========

The tragic developments of the COVID-19 pandemic have exposed flawed aspects of the protocols used to assess large-scale epidemics. Despite the various improvements in the global capacity to produce laboratory tests, usually based on reverse-transcription polymerase chain reaction (rRT-PCR), the majority of afflicted countries were unable to enforce mass testing policies. This shortcoming was also experienced in 2002 with the SARS epidemic, but on a smaller scale. The lack of mass testing contributed to keeping the number of cases unknown, affecting the accuracy of disease-spreading models. In this paper, we resort to the death toll from April 4 to May 31, instead of reported cases, as our primary data source, because death certificates are mandatory and may contain other medical assessments linking the cause of death with COVID-19 outbreaks. We emphasize that this methodology is not itself immune to sub-notification, nor to unforeseen delays in death certificates. We model the death toll via the sigmoid curve in (\[eq:eff\]), which requires the parameters $g_{\infty}$ (capacity) and $\tau$ (time scale). The capacity limits the growth of the outbreak and is expected to vary with geographic region, healthcare quality, and efforts to control the spreading of the virus.
Together with the crude mortality rate $f$, which also depends on healthcare and population demographics, they give access to a far more credible estimate of the infected fraction of the population, $R_{\infty} = g_\infty / f$. To keep the model as simple as possible, we also neglect temporal delays between $g(t)$ and $R(t)$. The inclusion of such a delay may introduce additional effects, but those are not expected to be dominant here, since spatial effects are not being investigated. The advantage of our approach relies on fitting a monotonic curve (the death toll) to a sigmoid, in sharp contrast with complex optimization schemes for models with multiple health states [@premLancet2020; @giordanoNatMed2020]. Epidemiological parameters are extracted by solving the algebraic system of equations (\[eq:sys1\]-\[eq:sys3\]) for the transmission rate, crude mortality rate, and initial condition, respectively $\alpha$, $f$, and $S_0$. Alternatively, the epidemiological parameters calculated from the curve fitting can be used as educated guesses in optimization algorithms, reducing the likelihood of obtaining unrealistic optimal solutions. We stress that if $f$ is known, then the fraction of reported cases can easily be computed as $R(t) = g(t)/f$. We find that $f$ ranges from $o(10^{-4})$ to $o(10^{-3})$, depending on geographic location. Such values lie well below the recent estimate [@onderJAMA2020] using only rRT-PCR tests from hospitalized cases ($7.2\%$), whereas they are more in line with the values obtained from antibody screening with a large sample size in Denmark [@erikstrupMedrxiv2020]. Interestingly, countries lightly afflicted by the epidemic exhibit lower $f$, even though the transmission of the virus is similar to neighbouring countries. Taking Germany or Denmark as examples, we see that both have $f \sim o(10^{-4})$ while maintaining $\mathcal{R}_0$ compatible with other European countries.
The result is likely attributable to healthcare resources and services being either more accessible or more efficient, including early surveillance. The sigmoidal fitting becomes far more reliable once the outbreak has been active for some time, as the data start to move away from the exponential phase. The distance from the inflection point is another important factor affecting the quality of the fit, especially the value of $g_{\infty}$. The introduction of lockdowns and other social distancing measures reduces the transmission rate and effectively creates a new inflection point, different from the one expected without control measures. Thus, the fitting curve may converge to an incorrect equilibrium value if the fitting data include points around the induced inflection point. This can only be solved by introducing disjoint time windows for different regimes of $\alpha$, as Fig. \[fig:france\] shows. The difference between the equilibrium values of $g_{\infty}$ from each time window gives the fraction of avoided deaths and evaluates the effectiveness of control measures and policies. For instance, approximately $210\, 000$ deaths, or about seven times the current projection, have been avoided in France. This is compatible with the large decrease in the basic reproduction number, placing it below the endemic threshold. However, some care is needed in interpreting these estimates, as our analysis considers only two time windows and therefore does not anticipate second waves of infections. Concerning deviations in the parameters estimated through (\[eq:sys1\]-\[eq:sys3\]), they can be traced to the uncertainties in the removal rate. In general, the inverse of the removal rate describes the average time required for an infective person to change to the removed state. For COVID-19, virus shedding occurs most prominently between 3 and 7 days, with a peak at day 5 after the onset of symptoms [@wolfelNature2020].
The exact probability distribution of $\lambda$ remains an open issue, so our analysis assumes a uniform distribution. Finally, the SIR model rests on the random-mixing hypothesis, but deviations are expected, with stronger effects in populations with varying demographics or populations at risk. In particular, communicable respiratory diseases become major issues in correctional facilities, given the lack of adequate environmental and sanitary conditions. The combination of a higher transmission rate of pathogens, a reduced removal rate, and reduced healthcare can increase the crude mortality rate for incarcerated individuals. By a similar argument, outbreaks in nursing homes may affect estimates of epidemiological parameters, because the disease becomes disproportionately more lethal for older patients. In this study, both effects are neglected in the hope of understanding the disease spreading in the average population with the minimal number of parameters possible.

Acknowledgments {#acknowledgments .unnumbered}
===============

The group belongs to the CNRS consortium “Approches quantitatives du vivant”.

Author contributions statement {#author-contributions-statement .unnumbered}
==============================

GN wrote the paper and carried out the numerical analysis. CD, BG, and MB designed the research and edited the paper. All authors reviewed the manuscript.

Data availability {#data-availability .unnumbered}
=================

The datasets analysed and the numerical code generated during the current study are available in the Zenodo repository, <https://doi.org/10.5281/zenodo.3931666>

Additional information {#additional-information .unnumbered}
======================

**Competing interests** The authors declare no conflict of interest.

References {#references .unnumbered}
==========

- <http://dx.doi.org/10.1056/NEJMoa1600651>
- <http://www.sciencedirect.com/science/article/pii/S0140673617313685>
- <http://dx.doi.org/10.1056/NEJMc1414992>
- <http://dx.doi.org/10.1038/sdata.2015.19>
- <https://doi.org/10.1371/journal.pmed.1001558>
- <https://doi.org/10.1016/S0140-6736(20)30154-9>
- <https://doi.org/10.1016/S0140-6736(20)30211-7>
- <https://doi.org/10.1016/S1473-3099(20)30195-X>
- <http://doi.org/10.25561/77735>
- <https://doi.org/10.1016/S1473-3099(20)30144-4>
- <https://doi.org/10.1056/nejmoa2001316>
- <https://doi.org/10.3201/eid2607.200282>
- <http://rspa.royalsocietypublishing.org/content/115/772/700>
- <https://doi.org/10.1016/S2468-2667(20)30073-6>
- <https://doi.org/10.1038/s41591-020-0883-7>
- <https://doi.org/10.1056/nejmcp2009249>
- <https://doi.org/10.1038/s41577-020-0316-3>
- <http://www.sciencedirect.com/science/article/pii/S009630031400383X>
- <https://doi.org/10.1038/s41586-020-2196-x>
- <https://doi.org/10.1093/cid/ciaa849>
- <https://doi.org/10.1001/jama.2020.4683>

[^1]: From April 11 to May 31
---
abstract: 'We address the problem of finding the maximizer of a nonlinear smooth function that can only be evaluated point-wise, subject to constraints on the number of permitted function evaluations. This problem is also known as fixed-budget best arm identification in the multi-armed bandit literature. We introduce a Bayesian approach for this problem and show that it empirically outperforms both the existing frequentist counterpart and other Bayesian optimization methods. The Bayesian approach places emphasis on detailed modelling, including the modelling of correlations among the arms. As a result, it can perform well in situations where the number of arms is much larger than the number of allowed function evaluations, whereas the frequentist counterpart is inapplicable. This feature enables us to develop and deploy practical applications, such as automatic machine learning toolboxes. The paper presents comprehensive comparisons of the proposed approach, Thompson sampling, classical Bayesian optimization techniques, more recent Bayesian bandit approaches, and state-of-the-art best arm identification methods. This is the first comparison of many of these methods in the literature and allows us to examine the relative merits of their different features.'
bibliography:
- 'bayesgap.bib'
---

Introduction
============

We address the problem of finding the maximizer of a nonlinear smooth function $f: {\cal A} \mapsto \mathbb{R}$ which can only be evaluated point-wise. The function need not be convex, its derivatives may not be known, and the function evaluations will generally be corrupted by some form of noise. Importantly, we are interested in functions that are typically expensive to evaluate. Moreover, we will also assume a finite budget of $T$ function evaluations. This fixed-budget global optimization problem can be treated within the framework of sequential design.
In this context, by allowing function queries $a_t\in\A$ to depend on previous points and their corresponding function evaluations, the algorithm must adaptively construct a sequence of queries (or actions) $a_{1:T}$ and afterwards return the element of highest expected value. A typical example of this problem is that of automatic product testing [@kohavi-abtesting; @scott-bandits], where common “products” correspond to configuration options for ads, websites, mobile applications, and online games. In this scenario, a company offers different product variations to a small subset of customers, with the goal of finding the most successful product for the entire customer base. The crucial problem is how best to query the smaller subset of users in order to find the best product with high probability. A second example, analyzed later in this paper, is that of automating machine learning. Here, the goal is to automatically select the best technique (boosting, random forests, support vector machines, neural networks, etc.) and its associated hyper-parameters for solving a machine learning task with a given dataset. For big datasets, cross-validation is very expensive, and hence it is often important to find the best technique within a fixed budget of cross-validation tests (function evaluations). In order to properly attack this problem, there are three design aspects that must be considered. By taking advantage of *correlation* among different actions, it is possible to learn more about a function than just its value at a specific query. This is particularly important when the number of actions greatly exceeds the *finite query budget*. In the same vein, it is important to take into account that a recommendation must be made at time $T$ in order to properly allocate actions and explore the space of possible optima. Finally, the fact that we are interested only in the value of the recommendation made at time $T$ should be handled explicitly.
In other words, we are only interested in finding the *best action* and are concerned with the rewards obtained during learning only insofar as they inform us about this optimum. In this work, we introduce a Bayesian approach that meets the above design goals and show that it empirically outperforms the existing frequentist counterpart [@gabillon-unified]. The Bayesian approach places emphasis on detailed modelling, including the modelling of correlations among the arms. As a result, it can perform well in situations where the number of arms is much larger than the number of allowed function evaluations, whereas the frequentist counterpart is inapplicable. The paper presents comprehensive comparisons of the proposed approach, Thompson sampling, classical Bayesian optimization techniques, more recent Bayesian bandit approaches, and state-of-the-art best arm identification methods. This is the first comparison of many of these methods in the literature and allows us to examine the relative merits of their different features. The paper also shows that one can easily obtain the same theoretical guarantees for the Bayesian approach that were previously derived in the frequentist setting [@gabillon-unified].

Related work
============

Bayesian optimization has enjoyed success in a broad range of optimization tasks; see the work of [@brochu-tutorial] for a broad overview. Recently, this approach has received a great deal of attention as a black-box technique for the optimization of hyperparameters [@snoek:2012b; @Hutter:smac; @Wang:rembo]. This type of optimization combines prior knowledge about the objective function with previous observations to estimate the posterior distribution over $f$. The posterior distribution, in turn, is used to construct an *acquisition function* that determines what the next query point $a_t$ should be.
Examples of acquisition functions include the probability of improvement (PI), expected improvement (EI), Bayesian upper confidence bounds (UCB), and mixtures of these [@Mockus:1982; @Jones:2001; @Srinivas:2010; @Hoffman:2011]. One of the key strengths underlying the use of Bayesian optimization is the ability to capture complicated correlation structures via the posterior distribution. Many approaches to bandits and Bayesian optimization focus on online learning (*e.g.*, minimizing cumulative regret) as opposed to optimization [@Srinivas:2010; @Hoffman:2011]. In the realm of optimizing deterministic functions, a few works have proven exponential rates of convergence for simple regret [@zoghi-detbo; @Munos:2011]. A stochastic variant of the work of @Munos:2011 has recently been proposed by [@Valko:SSOO]; this approach adopts a tree-based structure for expanding areas of the optimization problem in question, but it requires one to evaluate each cell many times before expanding, and so may prove expensive in terms of the number of function evaluations. The problem of optimization under budget constraints has received relatively little attention in the Bayesian optimization literature, though some approaches without strong theoretical guarantees have been proposed recently [@Azimi:2011; @hennig-entropy; @snoek-oppcost; @villemonteix-iago]. In contrast, optimization under budget constraints has been studied in significant depth in the setting of multi-armed bandits [@bubeck-pure; @audibert-best; @gabillon-multi; @gabillon-unified]. Here, a decision maker must repeatedly choose query points, often discrete and known as “arms”, in order to observe their associated rewards [@cesa-bianchi-book]. However, unlike most methods in Bayesian optimization, the underlying value of each action is generally assumed to be independent of all other actions. That is, the correlation structure of the arms is often ignored.
Problem formulation {#sec:problem}
===================

In order to attack the problem of Bayesian optimization from a bandit perspective, we will consider a discrete collection of arms $\A=\{1,\dots,K\}$ such that the immediate reward of pulling arm $k\in\A$ is characterized by a distribution $\nu_k$ with mean $\mu_k$. From the Bayesian optimization perspective we can think of this as a collection of points $\{a_1,\dots,a_K\}$ where $\mu_k=f(a_k)$. Note that while we will assume the distributions $\nu_k$ are independent of past actions, this *does not* mean that the means of each arm cannot share some underlying structure—only that the act of pulling arm $k$ does not affect the future rewards of pulling this or any other arm. This distinction will be relevant later in this section. The problem of identifying the best arm in this bandit problem can now be introduced as a sequential decision problem. At each round $t$ the decision maker will select or “pull” an arm $a_t\in\A$ and observe an independent sample $y_t$ drawn from the corresponding distribution $\nu_{a_t}$. At the beginning of each round $t$, the decision maker must decide which arm to select based only on previous interactions, which we will denote with the tuple $(a_{1:t-1}, y_{1:t-1})$. For any arm $k$ we can also introduce the expected immediate regret of selecting that arm as $$R_k = \mu^* - \mu_k,$$ where $\mu^*$ denotes the expected value of the best arm. Note that while we are interested in finding the arm with the minimum regret, the exact value of this quantity is unknown to the learner. In standard bandit problems the goal is generally to minimize the cumulative sum of immediate regrets incurred by the arm selection process. Instead, in this work we consider the *pure exploration* setting [@bubeck-pure; @audibert-best], which divides the sampling process into two phases: exploration and evaluation.
The exploration phase consists of $T$ rounds wherein a decision maker interacts with the bandit process by sampling arms. After these rounds, the decision maker must make a single arm recommendation $\Omega_T\in\A$. The decision maker is then judged *only* on the performance of this recommendation. The expected performance of this single recommendation is known as the *simple regret*, and we can write this quantity as $R_{\Omega_T}$. Given a tolerance $\epsilon>0$, we can also define the *probability of error* as the probability that $R_{\Omega_T}>\epsilon$. In this work, we will consider both the empirical probability that our regret exceeds some $\epsilon$ and the actual reward obtained.

Bayesian bandits {#sec:bayesian}
================

We will now consider a bandit problem wherein the distribution of rewards for each arm is assumed to depend on unknown parameters $\theta\in\Theta$ that are shared among all arms. We will write the reward distribution for arm $k$ as $\nu_k(\cdot|\theta)$. When considering the bandit problem from a Bayesian perspective, we will assume a prior density $\theta\sim \pi_0(\cdot)$ from which the parameters are drawn. Next, after $t-1$ rounds we can write the posterior density of these parameters as $$\pi_t(\theta) \propto \pi_0(\theta) \prod_{n<t} \nu_{a_n}(y_n|\theta).$$ Here we can see the effect of choosing arm $a_n$ at each time $n$: we obtain information about $\theta$ only indirectly, by way of the likelihood of these parameters given the reward observations $y_n$. Note that this also generalizes the *uncorrelated* arms setting. If the rewards for each arm $k$ depend only on a parameter (or set of parameters) $\theta_k$, then at time $t$ the posterior for that parameter would only depend on those times in the past at which we had pulled arm $k$. We are, however, only partially interested in the posterior distribution of the parameters $\theta$.
Instead, we are primarily concerned with the expected reward for each arm under these parameters, which can be written as $\mu_k = \mathbb E[Y|\theta] = \int y \,\nu_k(y|\theta) \,dy$. The true value of $\theta$ is unknown, but we have access to the posterior distribution $\pi_t(\theta)$. This distribution induces a marginal distribution over $\mu_k$, which we will write as $\rho_{kt}(\mu_k)$. The distribution $\rho_{kt}(\mu_k)$ can then be used to define upper and lower confidence bounds that hold with high probability and, hence, to engineer acquisition functions that trade off exploration and exploitation. We will derive an analytical expression for this distribution next. We will assume that each arm $k$ is associated with a feature vector $x_k\in\R^d$ and that the rewards for pulling arm $k$ are normally distributed according to $$\label{eq:observation} \nu_k(y|\theta) = \Norm(y; x_k^T\theta, \sigma^2)$$ with variance $\sigma^2$ and unknown $\theta\in\R^d$. The rewards for each arm are independent conditioned on $\theta$, but marginally dependent when this parameter is unknown. In particular, the level of their dependence is given by the structure of the vectors $x_k$. By placing a prior $\theta\sim \Norm(0,\eta^2I)$ over the entire parameter vector we can compute a posterior distribution over this unknown quantity. One can also easily place an inverse-Gamma prior on $\sigma$ and compute the posterior analytically, but we will not describe this in order to keep the presentation simple. The above linear observation model might seem restrictive. However, because we are only considering $K$ discrete actions (arms), it includes the Gaussian process (GP) setting. More precisely, let the matrix $G\in\R^{K\times K}$ be the covariance of a GP prior. Our experiments will detail two ways of constructing this covariance in practice.
We can apply the following transformation to construct the design matrix $X = [x_1\dots x_K]^T$: $$X = VD^{\frac12},\ \text{where}\ G=VDV^T.$$ The rows of $X$ correspond to the vectors $x_k$ necessary for the construction of the observation model in Equation (\[eq:observation\]). By restricting ourselves to discrete action spaces, we can also implement strategies such as Thompson sampling with GPs. The restriction to discrete action spaces poses some scaling challenges in high dimensions, but it enables us to deploy a broad set of algorithms to attack low-dimensional problems. For this pragmatic reason, many existing popular Bayesian optimization software tools consider discrete actions only. We will now let $X_t=[x_{a_1}\dots x_{a_{t-1}}]^T$ denote the design matrix and $Y_t=[y_1 \dots y_{t-1}]^T$ the vector of observations at the beginning of round $t$. We can then write the posterior at time $t$ as $\pi_t(\theta) = \Norm(\theta; \hat\theta_t, \hat\Sigma_t)$, where $$\begin{aligned} \hat\Sigma_t^{-1} &= \sigma^{-2}X_t^T X_t + \eta^{-2}I, \text{ and}\quad \\ \hat\theta_t &= \sigma^{-2}\hat\Sigma_t X_t^TY_t.\end{aligned}$$ From this formulation we can see that the expected reward associated with arm $k$ is marginally normal $\rho_{kt}(\mu_k)=\mathcal N(\mu_k; \hat\mu_{kt}, \hat \sigma^2_{kt})$ with mean $\hat\mu_{kt}=x_k^T\hat\theta_t$ and variance $\hat\sigma_{kt}^2=x_k^T\hat\Sigma_t x_k$. Note also that the predictive distribution over rewards associated with the $k$th arm is normal as well, with mean $\hat\mu_{kt}$ and variance $\hat\sigma_{kt}^2+\sigma^2$. The previous derivations are textbook material; see for example Chapter 7 of [@Murphy:2012]. ![Example GP setting with discrete arms. The full GP is plotted with observations and confidence intervals at each of $K=10$ arms (mean and confidence intervals of $\rho_{kt}(\mu_k)$). 
Shown in green is a single sample from the GP.[]{data-label="fig:example"}](figures/example.pdf){width="\columnwidth"} Figure \[fig:example\] depicts an example of the mean and confidence intervals of $\rho_{kt}(\mu_k)$, as well as a single random sample. Here the features $x_k$ were constructed by first forming the covariance matrix with an exponential kernel $k(x,x')=\text e^{-(x-x')^2}$ over the 1-dimensional discrete domain. As with standard Bayesian optimization with GPs, the statistics of $\rho_{kt}(\mu_k)$ enable us to construct many different acquisition functions that trade-off exploration and exploitation. Thompson sampling in this setting also becomes straightforward, as we simply have to pick the maximum of the random sample from $\rho_{kt}(\mu_k)$, at one of the discrete arms, as the next point to query. Bayesian gap-based exploration {#sec:gap} ============================== In this section we will introduce a gap-based solution to the Bayesian optimization problem, which we call BayesGap. This approach builds on the work of [@gabillon-multi; @gabillon-unified], which we will refer to as UGap[^1], and offers a principled way to incorporate correlation between different arms (whereas the earlier approach assumes all arms are independent). At the beginning of round $t$ we will assume that the decision maker is equipped with high-probability upper and lower bounds $U_k(t)$ and $L_k(t)$ on the unknown mean $\mu_k$ for each arm. While this approach can encompass more general bounds, for the Gaussian-arms setting that we consider in this work we can define these quantities in terms of the mean and standard deviation, i.e.$\hat\mu_{kt}\pm \beta\hat\sigma_{kt}$. These bounds also give rise to a confidence diameter $s_k(t)=U_k(t)-L_k(t)=2\beta\hat\sigma_{kt}$. 
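The design-matrix construction and posterior statistics just described are straightforward to implement; the following numpy sketch (function names are ours) builds $X$ from a GP covariance $G$ and computes $\hat\mu_{kt}$ and $\hat\sigma_{kt}$ for every arm:

```python
import numpy as np

def design_from_covariance(G):
    """Feature vectors x_k (rows of X) from a GP covariance via G = V D V^T."""
    D, V = np.linalg.eigh(G)
    return V * np.sqrt(np.clip(D, 0.0, None))   # X = V D^{1/2}

def posterior_arm_stats(X, arms, ys, sigma2, eta2):
    """Posterior mean/std of mu_k for every arm, following the text's formulas."""
    d = X.shape[1]
    Xt = X[list(arms)]                                 # pulled-arm design matrix
    Sigma = np.linalg.inv(Xt.T @ Xt / sigma2 + np.eye(d) / eta2)  # hatSigma_t
    theta = Sigma @ Xt.T @ np.asarray(ys, float) / sigma2         # hattheta_t
    mu = X @ theta                                     # hatmu_{kt} = x_k^T hattheta_t
    sd = np.sqrt(np.einsum('kd,dj,kj->k', X, Sigma, X))  # hatsigma_{kt}
    return mu, sd
```

With no observations the induced variance reduces to $\eta^2 x_k^Tx_k=\eta^2 G_{kk}$, which provides an easy sanity check of the construction.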
Given bounds on the mean reward for each arm, we can then introduce the gap quantity $$B_k(t) = \max_{i\neq k} U_i(t) - L_k(t),$$ which involves a comparison between the lower bound of arm $k$ and the highest upper bound among all alternative arms. Ultimately this quantity provides an upper bound on the simple regret (see Lemma \[lemma:arm-regret-bound\] in the supplementary material) and will be used to define the exploration strategy. However, rather than directly finding the arm minimizing this gap, we will consider the two arms $$\begin{aligned} J(t) &= \argmin_{k\in\A} B_k(t) \text{ and} \\ j(t) &= \argmax_{k\neq J(t)} U_k(t).\end{aligned}$$ We will then define the exploration strategy as $$\begin{aligned} a_t &= \argmax_{k\in\{j(t), J(t)\}} s_k(t).\end{aligned}$$ Intuitively this strategy will select either the arm minimizing our bound on the simple regret (i.e. $J(t)$) or the best “runner up” arm. Between these two, the arm with the highest uncertainty will be selected, i.e. the one expected to give us the most information. Next, we will define the recommendation strategy as $$\begin{aligned} \Omega_T &= J\big( \argmin_{t\leq T} B_{J(t)}(t) \big), \label{eqn:budget-omega}\end{aligned}$$ i.e. the proposal arm $J(t)$ which minimizes the regret bound, over all times $t\leq T$. The reason behind this particular choice is subtle, but is necessary for the proof of the method’s simple regret bound[^2]. In Algorithm \[alg:gap\] we show the pseudo-code for BayesGap. set $J(t) = \argmin_{k\in\A} B_k(t)$ set $j(t) = \argmax_{k\neq J(t)} U_k(t)$ select arm $a_t = \argmax_{k\in\{j(t),J(t)\}}s_k(t)$ observe $y_t\sim\nu_{a_t}(\cdot)$ update posterior $\hat\mu_{kt}$ and $\hat\sigma_{kt}$ update bound on $H_\epsilon$ and re-compute $\beta$ update posterior bounds $U_k(t)$ and $L_k(t)$ $\Omega_T= J\big( \argmin_{t\leq T} B_{J(t)}(t) \big)$ We now turn to the problem of which value of $\beta$ to use. First, consider the quantity $\Delta_k=|\max_{i\neq k} \mu_i - \mu_k|$. 
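One round of these selection rules can be sketched as follows (a minimal illustration with our own function names; ties are broken arbitrarily):

```python
import numpy as np

def bayesgap_step(mu, sd, beta):
    """One BayesGap round: gap quantities B_k(t) and arms J(t), j(t), a_t."""
    U, L = mu + beta * sd, mu - beta * sd      # bounds U_k(t), L_k(t)
    s = U - L                                  # confidence diameters s_k(t)
    top2 = np.sort(U)[-2:]                     # two largest upper bounds
    max_other = np.where(U == top2[1], top2[0], top2[1])  # max_{i != k} U_i(t)
    B = max_other - L                          # B_k(t)
    J = int(np.argmin(B))                      # arm minimizing the regret bound
    U_rest = U.copy(); U_rest[J] = -np.inf
    j = int(np.argmax(U_rest))                 # best runner-up by upper bound
    a = J if s[J] >= s[j] else j               # pull the more uncertain of the two
    return a, J, j, B
```

For example, an arm with a clearly dominant lower bound gets a small (possibly negative) $B_k(t)$ and becomes $J(t)$, but it is only pulled if its uncertainty exceeds that of the runner-up.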
For the best arm this coincides with a measure of the distance to the second-best arm, whereas for all other arms it is a measure of their sub-optimality. Given this quantity let $H_{k\epsilon} = \max(\tfrac12 (\Delta_k+\epsilon), \epsilon)$ be an arm-dependent hardness quantity; essentially our goal is to reduce the uncertainty in each arm to below this level, at which point with high probability we will identify the best arm. Now, given $H_\epsilon=\sum_k H_{k\epsilon}^{-2}$ we define our exploration constant as $$\beta^2 = \big((T-K)/\sigma^2+\kappa/\eta^2\big) \big/ (4H_\epsilon) \label{eq:beta}$$ where $\kappa=\sum_k \|x_k\|^{-2}$. We have chosen $\beta$ such that with high probability we recover an $\epsilon$-best arm, as detailed in the following theorem. This theorem relies on bounding the uncertainty for each arm by a function of the number of times that arm is pulled. Roughly speaking, if this bounding function is monotonically decreasing and if the bounds $U_k$ and $L_k$ hold with high probability we can then apply Theorem \[theorem:budget-bound\] to bound the simple regret of BayesGap[^3]. \[cor:gaussian\] Consider a $K$-armed Gaussian bandit problem, horizon $T$, and upper and lower bounds defined as above. For $\epsilon>0$ and $\beta$ defined as in Equation (\[eq:beta\]), the algorithm attains simple regret satisfying $\Pr(R_{\Omega_T}\leq\epsilon) \geq 1-KTe^{-\beta^2/2}$. Using the definition of the posterior variance for arm $k$, we can write the confidence diameter as $$\begin{aligned} s_k(t) &= 2\beta\sqrt{x_k^T\hat\Sigma_t x_k} \\ &= 2\beta \sqrt{ \sigma^2 x_k^T \big(\textstyle\sum_i N_i(t-1)\, x_ix_i^T + \tfrac{\sigma^2}{\eta^2}I \big)^{-1} x_k} \\ &\leq 2\beta \sqrt{ \sigma^2 x_k^T \big(N_k(t-1)\, x_kx_k^T + \tfrac{\sigma^2}{\eta^2}I \big)^{-1} x_k}. \end{aligned}$$ In the second equality we decomposed the Gram matrix $X_t^TX_t$ in terms of a sum of outer products over the fixed vectors $x_i$. 
In the final inequality we noted that by removing samples we can only increase the variance term, i.e. here we have essentially replaced $N_i(t-1)$ with $0$ for $i\neq k$. We will let the result of this final inequality define an arm-dependent bound $g_k$. Letting $A=\tfrac1N\tfrac{\sigma^2}{\eta^2}$ we can simplify this quantity using the Sherman-Morrison formula as $$\begin{aligned} g_k(N) &= 2\beta\sqrt{ (\sigma^2/N) x_k^T \big(x_kx_k^T + AI\big)^{-1} x_k} \\ &= 2\beta \sqrt{ \frac{\sigma^2}{N} \frac{\|x_k\|^2}{A} \Big( 1 - \frac{\|x_k\|^2/A}{1+\|x_k\|^2/A} \Big)} \\ &= 2\beta \sqrt{\frac{\sigma^2\|x_k\|^2}{\tfrac{\sigma^2}{\eta^2} + N\|x_k\|^2}}, \end{aligned}$$ which is monotonically decreasing in $N$. The inverse of this function can be solved for as $$g^{-1}_k(s) = \frac{4(\beta\sigma)^2}{s^2} - \frac{\sigma^2}{\eta^2} \frac{1}{\|x_k\|^2}.$$ By setting $\sum_k g_k^{-1}(H_{k\epsilon})=T-K$ and solving for $\beta$ we then obtain the definition of this term given in the statement of the proposition. Finally, by reference to Lemma \[lemma:gaussian-deviation\] (supplementary material) we can see that for each $k$ and $t$, the upper and lower bounds must hold with probability $1-e^{-\beta^2/2}$. These last two statements satisfy the assumptions of Theorem \[theorem:budget-bound\] (supplementary material), thus concluding our proof. Here we should note that while we are using Bayesian methodology to drive the exploration of the bandit, we are analyzing this using frequentist regret bounds. This is a common practice when analyzing the regret of Bayesian bandit methods [@Srinivas:2010; @kaufmann-bayesucb]. We should also point out that implicitly Theorem \[theorem:budget-bound\] assumes that each arm is pulled at least once regardless of its bound. However, in our setting we can avoid this in practice due to the correlation between arms. 
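The closed forms for $g_k$, its inverse, and the budget constraint $\sum_k g_k^{-1}(H_{k\epsilon})=T-K$ are easy to check numerically; the sketch below (our names) implements all three:

```python
import numpy as np

def g(N, beta, sigma2, eta2, xnorm2):
    """Bound g_k(N) on the confidence diameter after N pulls of arm k."""
    return 2.0 * beta * np.sqrt(sigma2 * xnorm2 / (sigma2 / eta2 + N * xnorm2))

def g_inv(s, beta, sigma2, eta2, xnorm2):
    """g_k^{-1}(s): pulls needed to shrink the diameter bound down to s."""
    return 4.0 * (beta ** 2) * sigma2 / s ** 2 - (sigma2 / eta2) / xnorm2

def beta_from_budget(T, K, sigma2, eta2, xnorm2s, H_keps):
    """Solve sum_k g_k^{-1}(H_keps) = T - K for beta, as in Equation (eq:beta)."""
    kappa = np.sum(1.0 / np.asarray(xnorm2s))         # kappa = sum_k ||x_k||^{-2}
    H_eps = np.sum(np.asarray(H_keps, float) ** -2.0)  # H_eps = sum_k H_keps^{-2}
    return np.sqrt(((T - K) / sigma2 + kappa / eta2) / (4.0 * H_eps))
```

Plugging the resulting $\beta$ back into $\sum_k g_k^{-1}(H_{k\epsilon})$ recovers $T-K$ exactly, confirming the algebra above.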
One key thing to note is that the proof and derivation of $\beta$ given above explicitly require the hardness quantity $H_\epsilon$, which is unknown in most practical applications. Instead of requiring this quantity, our approach will be to adaptively estimate it. Intuitively, the quantity $\beta$ controls how much exploration BayesGap does (note that $\beta$ directly controls the width of the uncertainty $s_k(t)$). Further, $\beta$ is inversely proportional to $H_\epsilon$. As a result, in order to initially encourage more exploration we will lower bound the hardness quantity. In particular, we can do this by upper bounding each $\Delta_k$ by using conservative, posterior dependent upper and lower bounds on $\mu_k$. In this work we use three posterior standard deviations away from the posterior mean, i.e. $\hat\mu_k(t)\pm3\hat\sigma_{kt}$. (We emphasize that these are not the same as $L_k(t)$ and $U_k(t)$.) Then the upper bound on $\Delta_k$ is simply $$\hat\Delta_k = \max_{j\neq k} (\hat\mu_j + 3\hat\sigma_j) - (\hat\mu_k - 3\hat\sigma_k).$$ From this point we can recompute $H_\epsilon$ and in turn recompute $\beta$ (step 7 in the pseudocode). For all experiments we will use this adaptive method. **Comparison with UGap.** The method in this section provides a Bayesian version of the UGap algorithm which modifies the bounds used in this earlier algorithm’s arm selection step. By modifying step 6 of the BayesGap pseudo-code to use either Hoeffding or Bernstein bounds we can re-obtain the UGap algorithm. Note, however, that in doing so UGap assumes independent arms with bounded rewards. We can now roughly compare UGap’s probability of error, i.e.$O(KT\exp(-\frac{T-K}{H_\epsilon}))$, with that of BayesGap, $O(KT\exp(-\frac{T-K+\kappa\sigma^2/\eta^2}{H_\epsilon\sigma^2}))$. We can see that with minor differences, these bounds are of the same order. 
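This adaptive lower bound on the hardness can be sketched as follows (our names; note that a strictly positive $\epsilon$ keeps $H_{k\epsilon}$ positive even when $\hat\Delta_k$ is negative for a confidently best arm):

```python
import numpy as np

def hardness_lower_bound(mu, sd, eps):
    """Adaptive lower bound on H_eps from 3-sigma upper bounds on Delta_k."""
    hi, lo = mu + 3.0 * sd, mu - 3.0 * sd           # conservative posterior bounds
    top2 = np.sort(hi)[-2:]
    hi_other = np.where(hi == top2[1], top2[0], top2[1])  # max_{j != k} hi_j
    Delta_hat = hi_other - lo                        # upper bound on Delta_k
    H_keps = np.maximum(0.5 * (Delta_hat + eps), eps)
    return np.sum(H_keps ** -2.0)
```

Since $\beta^2$ is inversely proportional to $H_\epsilon$, underestimating the hardness in this way inflates $\beta$ and hence widens the confidence diameters early on, encouraging exploration.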
First, we can ignore the additional $\sigma^2$ term as this quantity is primarily due to the distinction between bounded and Gaussian-distributed rewards. The $\eta^2$ term corresponds to the concentration of the prior, and we can see that the more concentrated the prior is (smaller $\eta$) the faster this rate is. Note, however, that the proof of BayesGap’s simple regret relies on the true rewards for each arm being within the support of the prior, so one cannot increase the algorithm’s performance by arbitrarily adjusting the prior. Finally, the $\kappa$ term is related to the linear relationship between different arms. Additional theoretical results on improving these bounds remain for future work. Experiments {#sec:experiments} =========== In the following subsections, we benchmark the proposed algorithm against a wide variety of methods on two real-data applications. In Section \[sec:exp-traffic\], we revisit the traffic sensor network problem of [@Srinivas:2010]. In Section \[sec:exp-model\], we consider the problem of automatic model selection and algorithm configuration. Application to a traffic sensor network {#sec:exp-traffic} --------------------------------------- In this experiment, we are given data taken from traffic speed sensors deployed along highway I-880 South in California. Traffic speeds were collected at $K=357$ sensor locations for all working days between 6AM and 11AM for an entire month. Our task is to identify the single location with the highest expected speed, i.e. the least congested. This data was also used in the work of [@Srinivas:2010]. ![Probability of error on the optimization domain of traffic speed sensors. 
For this real data set, BayesGap provides considerable improvements over the Bayesian cumulative regret alternatives and the frequentist simple regret counterparts.[]{data-label="fig:traffic"}](figures/traffic-plot.pdf){width="48.00000%"} Naturally, the readings from different sensors are correlated; however, this correlation is not necessarily only due to geographical location. Therefore specifying a similarity kernel over the space of traffic sensor locations alone would be overly restrictive. Following the approach of [@Srinivas:2010], we construct the design matrix treating two-thirds of the available data as historical and use the remaining third to evaluate the policies. In more detail, the GP kernel matrix $G\in\R^{K\times K}$ is set to be the empirical covariance matrix of measurements for each of the $K$ sensor locations. As explained in Section 4, the corresponding design matrix is $X = VD^{\frac12}$, where $G=VDV^T$. Following [@Srinivas:2010], we estimate the noise level $\sigma$ of the observation model using this data. We consider the average empirical variance of each individual sensor (i.e. the signal variance corresponding to the diagonal of $G$) and set the noise variance $\sigma^2$ to 5% of this value; this corresponds to $\sigma^2=4.78$. We choose a broad prior with regularization coefficient $\eta=20$. In order to evaluate different bandit and Bayesian optimization algorithms, we use each of the remaining 840 sensor signals (the aforementioned third of the data) as the true mean vector $\mu$ for independent runs of the experiment. Note that using the model in this way enables us to evaluate the ground truth for each run (given by $\mu$, but not observed by the algorithm), and estimate the actual probability that the policies return the best arm. In this experiment, as well as in the next one, we estimate the hardness parameter $H_\epsilon$ using the adaptive procedure outlined at the end of Section 5. 
We benchmark the proposed algorithm (BayesGap) against the following methods: **(1) [UCBE]{}:** Introduced by [@audibert-best]; this is a variant of the classical UCB policy of [@auer-ucb] that replaces the $\log(t)$ exploration term of UCB with a constant of order $\log(T)$ for known horizon $T$. **(2) [UGap]{}:** A gap-based exploration approach introduced by [@gabillon-unified]. **(3) [BayesUCB]{}** and **[GPUCB]{}:** Bayesian extensions of UCB which derive their confidence bounds from the posterior. Introduced by [@kaufmann-bayesucb] and [@Srinivas:2010] respectively. **(4) [Thompson sampling]{}:** A randomized, Bayesian index strategy wherein the $k$th arm is selected with probability given by a single-sample Monte Carlo approximation to the posterior probability that the arm is the maximizer [@chapelle-thompson; @kaufmann-thompson; @Agrawal:2013]. **(5) [Probability of Improvement (PI)]{}:** A classic Bayesian optimization method which selects points based on their probability of improving upon the current incumbent. **(6) [Expected Improvement (EI)]{}:** A Bayesian optimization method, related to PI, which selects points based on the expected value of their improvement. Note that techniques (1) and (2) above attack the problem of best arm identification and use bounds which encourage more aggressive exploration. However, they do not take correlation into account. On the other hand, techniques such as (3) are designed for cumulative regret, but model the correlation among the arms. It might seem at first that we are comparing apples and oranges. However, the purpose of comparing these methods, even if their objectives are different, is to understand empirically what aspects of these algorithms matter the most in practical applications. The results, shown in Figure \[fig:traffic\], are the probabilities of error for each strategy, using a time horizon of $T=400$. 
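Of these baselines, Thompson sampling is the simplest to sketch in the correlated linear-Gaussian setting used here (a hypothetical helper, not the authors' code): draw $\theta$ once from the posterior and act greedily with respect to that single sample.

```python
import numpy as np

def thompson_pick(theta_mean, theta_cov, X, rng):
    """Thompson sampling with correlated arms: sample theta from the posterior
    once, then pull the arm whose sampled mean reward X @ theta is largest."""
    theta = rng.multivariate_normal(theta_mean, theta_cov)
    return int(np.argmax(X @ theta))
```

Sampling $\theta$ jointly (rather than each $\mu_k$ independently) is what lets Thompson sampling exploit the correlation structure encoded in the feature vectors.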
(Here we used $\epsilon=0$, but varying this quantity had little effect on the performance of each algorithm.) By looking at the results, we quickly learn that techniques that model correlation perform better than the techniques designed for best arm identification, even when they are being evaluated in a best arm identification task. The important conclusion is that one must always invest effort in modelling the correlation among the arms. The results also show that BayesGap does better than alternatives in this domain. This is not surprising because BayesGap is the only competitor that addresses budgets, best arm identification and correlation simultaneously. Automatic machine learning {#sec:exp-model} -------------------------- There exist many machine learning toolboxes, such as `Weka` and `scikit-learn`. However, for a great many data practitioners interested in finding the best technique for a predictive task, it is often hard to understand what each technique in the toolbox does. Moreover, each technique can have many free hyper-parameters that are not intuitive to most users. Bayesian optimization techniques have already been proposed to automate machine learning approaches, such as MCMC inference [@Mahendran:2012; @Hamze:2013; @Wang:2013], deep learning [@Bergstra:2011], preference learning [@Brochu:2007; @Brochu:2010], reinforcement learning and control [@martinez-cantin:2007; @Lizotte:2012], and more [@snoek:2012b]. In fact, methods to automate entire toolboxes (`Weka`) have appeared very recently [@HutHooLey12-ParallelAC], and go back to old proposals for classifier selection [@Maron94Moore]. Here, we will demonstrate BayesGap by automating regression with `scikit-learn`. Our focus will be on minimizing the cost of cross-validation in the domain of big data. In this setting, training and testing each model can take a prohibitively long time. 
If we are working under a finite budget, say if we only have three days before a conference deadline or the deployment of a product, we cannot afford to try all models in all cross-validation tests. However, it is possible to use techniques such as BayesGap and Thompson sampling to find the best model with high probability. In our setting, the action of “pulling an arm” will involve selecting a model, splitting the dataset randomly into training and test sets, training the model, and recording the test-set performance. In this bandit domain, our arms will consist of five `scikit-learn` techniques and associated parameters selected on a discrete grid. We consider the following methods for regression: *Lasso (8 models)* with regularization parameter `alpha` = (0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5), *Random Forests (64 models)* where we vary the number of trees (1, 10, 100, 1000), the minimum number of training examples required to split a node (1, 3, 5, 7), and the minimum number of training examples in a leaf (2, 6, 10, 14), *linSVM (16 models)* where we vary the penalty parameter `C` = (0.001, 0.01, 0.1, 1) and the tolerance parameter `epsilon` = (0.0001, 0.001, 0.01, 0.1), *rbfSVM (64 models)* where we use the same grid as above for `C` and `epsilon`, and we add a third parameter which is the length scale $\gamma$ of the RBF kernel used by the SVM $\text{\texttt{gamma}}= (0.025, 0.05, 0.1, 0.2)$, and *K-nearest neighbors (8 models)* where we vary the number of neighbors $\text{\texttt{n\_neighbors}}=(1,3,5,7,9,11,13,15).$ The total number of models is 160. Within a class of regressors, we model correlation using a squared exponential kernel with unit length scale, i.e., $k(x,x')=\text e^{-(x-x')^2}$. Using this kernel, we compute a kernel matrix $G$ and construct the design matrix as before. ![Boxplot of RMSE over 100 runs with a fixed budget of $T=10$. EI, PI, and GPUCB get stuck in local minima. 
Note: lower is better.[]{data-label="fig:rmse-boxplot"}](figures/wine_T10.pdf){width="36.00000%"} ![Allocations and recommendations of BayesGap (top) and EI (bottom) over 100 runs at a budget of $T=40$ training and validation tests, and for 160 models (i.e., more arms than possible observations). Histograms along the floor of the plot show the arms pulled at each round while the histogram on the far wall shows the final arm recommendation over 100 different runs. The solid black line on the far wall shows the estimated “ground truth” RMSE for each model. Note that EI quite often gets stuck in a locally optimal rbfSVM.[]{data-label="fig:arms-pulled"}](figures/wine_arms_BayesGap_T40_mod.pdf){width="40.00000%"} ![Allocations and recommendations of BayesGap (top) and EI (bottom) over 100 runs at a budget of $T=40$ training and validation tests, and for 160 models (i.e., more arms than possible observations). Histograms along the floor of the plot show the arms pulled at each round while the histogram on the far wall shows the final arm recommendation over 100 different runs. The solid black line on the far wall shows the estimated “ground truth” RMSE for each model. Note that EI quite often gets stuck in a locally optimal rbfSVM.[]{data-label="fig:arms-pulled"}](figures/wine_arms_EI_T40_mod.pdf){width="40.00000%"} When an arm is pulled we select training and test sets that are each 10% of the size of the original, and ignore the remaining 80% for this particular arm pull. We then train the selected model on the training set, and test on the test set. This specific form of cross-validation is similar to that of repeated learning-testing [@arlot-cv; @burman-rlt]. We use the `wine` dataset from the UCI Machine Learning Repository, where the task is to predict the quality score (between 0 and 10) of a wine given 11 attributes of its chemistry. We repeat the experiment 100 times. We report, for each method, an estimate of the RMSE for the recommended models on each run. 
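The “arm pull” just described — a random 10%/10% train/test split, training the chosen model, and recording its test error — can be sketched as follows (names and the reward sign convention are ours; we negate the RMSE so that larger rewards are better):

```python
import numpy as np

def pull_model_arm(fit_predict, Xdata, ydata, rng, frac=0.1):
    """One 'arm pull': draw disjoint random train/test subsets of size frac*n
    each, fit the chosen model on the train split, and return -RMSE on test."""
    n = len(ydata)
    idx = rng.permutation(n)
    m = max(1, int(frac * n))
    tr, te = idx[:m], idx[m:2 * m]                    # remaining data is ignored
    yhat = fit_predict(Xdata[tr], ydata[tr], Xdata[te])
    rmse = np.sqrt(np.mean((yhat - ydata[te]) ** 2))
    return -rmse
```

Here `fit_predict` is any closure wrapping one of the 160 configured models; each pull observes a noisy estimate of that model's generalization error, matching the repeated learning-testing scheme cited in the text.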
Unlike in the previous section, we do not have the ground truth generalization error, and in this scenario it is difficult to estimate the actual “probability of error”. Instead we report the RMSE, but remark that this is only a proxy for the error rate that we are interested in. The performance of the final recommendations for each strategy and a fixed budget of $T=10$ tests is shown in Figure \[fig:rmse-boxplot\]. The results for other budgets are almost identical. *It must be emphasized that the number of allowed function evaluations (10 tests) is much smaller than the number of arms (160 models). Hence, frequentist approaches that require pulling all arms, e.g. UGap, are inapplicable in this domain.* The results indicate that Thompson and BayesGap are the best choices for this domain. Figure \[fig:arms-pulled\] shows the individual arms pulled and recommended by BayesGap (above) and EI (bottom), over each of the 100 runs, as well as an estimate of the ground truth RMSE for each individual model. EI and PI often get trapped in local minima. Due to the randomization inherent to Thompson sampling, it explores more, but in a more uniform manner (possibly explaining its poor results in the previous experiment). Conclusion ========== We proposed a Bayesian optimization method for best arm identification with a fixed budget. The method involves modelling of the correlation structure of the arms via Gaussian process kernels. As a result of combining all these elements, the proposed method outperformed techniques that do not model correlation or that are designed for different objectives (typically cumulative regret). This strategy opens up room for greater automation in practical domains with budget constraints, such as the automatic machine learning application described in this paper. Although we focused on a Bayesian treatment of the UGap algorithm, the same approach could conceivably be applied to other techniques such as UCBE. 
As demonstrated by [@Srinivas:2010] and in this paper, it is possible to easily show that the Bayesian bandits obtain bounds similar to those of the frequentist methods. However, in our case, we conjecture that much stronger bounds should be possible if we consider all the information brought in by the priors and measurement models. [^1]: Technically this is UGapEb, denoting bounded horizon, but as we do not consider the fixed-confidence variant in this paper we simplify the acronym. [^2]: See inequality (b) in the supplementary material. [^3]: The additional Theorem is in the supplementary material and is a slight modification of that in [@gabillon-unified].
--- abstract: 'Harrison, Perkins and Scott have proposed simple charged lepton and neutrino mass matrices that lead to the tribimaximal mixing $U_{\rm TBM}$. We consider in this work an extension of the mass matrices so that the leptonic mixing matrix becomes $U_{\rm PMNS}=V_L^{\ell\dagger}U_{\rm TBM}W$, where $V_L^\ell$ is a unitary matrix needed to diagonalize the charged lepton mass matrix and $W$ measures the deviation of the neutrino mixing matrix from the bimaximal form. Hence, corrections to $U_{\rm TBM}$ arise from both charged lepton and neutrino sectors. Following our previous work to assume a Qin-Ma-like parametrization $V_{\rm QM}$ for the charged lepton mixing matrix $V_L^\ell$ in which the [*CP*]{}-odd phase is approximately maximal, we study the phenomenological implications in two different scenarios: $V_L^\ell=V_{\rm QM}^\dagger$ and $V_L^\ell=V_{\rm QM}$. We find that the latter is more preferable, though both scenarios are consistent with the data within $3\sigma$ ranges. The predicted reactor neutrino mixing angle $\theta_{13}$ in both scenarios is consistent with the recent T2K and MINOS data. The leptonic [*CP*]{} violation characterized by the Jarlskog invariant $J_{\rm CP}$ is generally of order $10^{-2}$.' author: - 'Y. H. Ahn[^1], Hai-Yang Cheng[^2], and Sechul Oh[^3]' title: | \ An extension of tribimaximal lepton mixing --- Introduction ============ The large values of the solar ($\theta_{12}$) and atmospheric ($\theta_{23}$) mixing angles may be telling us about some new symmetries of leptons not presented in the quark sector and may provide a clue to the nature of the quark-lepton physics beyond the standard model. If there exists such a flavor symmetry in Nature, the tribimaximal (TBM) [@HPS] pattern for the neutrino mixing will be a good zeroth order approximation to reality : $$\begin{aligned} \sin^{2}\theta_{12}=\frac{1}{3}~,\qquad\sin^{2}\theta_{23}=\frac{1}{2}~,\qquad\sin\theta_{13}=0~. 
\end{aligned}$$ For example, in a well-motivated extension of the standard model through the inclusion of $A_{4}$ discrete symmetry, the TBM pattern comes out in a natural way in the work of [@He:2006dk]. Even if such a flavor symmetry is realized in Nature, leading to exact TBM, in general there may be some deviations from TBM. Recent data of the T2K [@Abe:2011sj] and MINOS [@MINOS] Collaborations and the analysis based on global fits [@GonzalezGarcia:2010er; @Fogli:2011qn] of neutrino oscillations enter into a new phase of precise measurements of the neutrino mixing angles and mass-squared differences, indicating that the TBM mixing for three flavors of leptons should be modified. In the weak eigenstate basis, the Yukawa interactions in both neutrino and charged lepton sectors and the charged gauge interaction can be written as $$\begin{aligned} -{\cal L} &=& \frac{1}{2}\overline{\nu_{L}} ~{\cal M}_{\nu} ~(\nu_{L})^c +\overline{\ell_{L}}m_{\ell}\ell_{R} + \frac{g}{\sqrt{2}}W^{-}_{\mu} ~\overline{\ell_{L}}\gamma^{\mu}\nu_{L} + {\rm H.c.} ~. \label{lagrangianA} \end{aligned}$$ When diagonalizing the neutrino and charged lepton mass matrices $U^{\dag}_{\nu}{\cal M}_{\nu}U^{\ast}_{\nu}={\rm diag}(m_{1},m_{2},m_{3}),~ U^{\dag}_{L}m_{\ell}U_{R}={\rm diag}(m_{e},m_{\mu},m_{\tau})$, one can rotate the neutrino and charged lepton fields from the weak eigenstates to the mass eigenstates $\nu_{L}\rightarrow U^{\dag}_{\nu}\nu_{L},~\ell_{L(R)}\rightarrow U^{\dag}_{L(R)}\ell_{L(R)}$. Then we obtain the leptonic $3\times3$ unitary mixing matrix $U_{\rm PMNS}= U^{\dag}_{L}U_\nu$ from the charged current term in Eq. (\[lagrangianA\]). 
In the standard parametrization of the leptonic mixing matrix $U_{\rm PMNS}$, it is expressed in terms of three mixing angles and three [*CP*]{}-odd phases (one for the Dirac neutrino and two for the Majorana neutrino) [@PDG] $$\begin{aligned} U_{\rm PMNS}={\left(\begin{array}{ccc} c_{13}c_{12} & c_{13}s_{12} & s_{13}e^{-i\delta_{CP}} \\ -c_{23}s_{12}-s_{23}c_{12}s_{13}e^{i\delta_{CP}} & c_{23}c_{12}-s_{23}s_{12}s_{13}e^{i\delta_{CP}} & s_{23}c_{13} \\ s_{23}s_{12}-c_{23}c_{12}s_{13}e^{i\delta_{CP}} & -s_{23}c_{12}-c_{23}s_{12}s_{13}e^{i\delta_{CP}} & c_{23}c_{13} \end{array}\right)}P_{\nu}~, \label{PMNS} \end{aligned}$$ where $s_{ij}\equiv \sin\theta_{ij}$ and $c_{ij}\equiv \cos\theta_{ij}$, and $P_{\nu}={\rm diag}(e^{i\delta_{1}},e^{i\delta_{2}},1)$ is a diagonal phase matrix which contains two [*CP*]{}-violating Majorana phases, one (or a combination) of which can be in principle explored through the neutrinoless double beta ($0\nu2\beta$) decay [@Schechter:1981bd]. For the global fits of the available data from neutrino oscillation experiments, we quote two recent analyses: one by Gonzalez-Garcia [*et al.*]{}  [@GonzalezGarcia:2010er] $$\begin{aligned} \sin^{2}\theta_{12} &=& 0.319^{+0.016~(+0.053)}_{-0.016~(-0.046)}\ , \quad\quad \sin\theta_{13}=0.097^{+0.052}_{-0.050}~(\leq0.217)\ , \nonumber\\ \sin^{2}\theta_{23} &=& 0.462^{+0.082~(+0.185)}_{-0.050~(-0.124)} \ , \label{exp00} \end{aligned}$$ in $1\sigma$ ($3\sigma$) ranges, or equivalently $$\begin{aligned} \theta_{12}=34.4^{\circ+1.0^{\circ}~(+3.2^{\circ})}_{~-1.0^{\circ}~(-2.9^{\circ})}~, ~~~~~\theta_{23}=42.8^{\circ+4.7^{\circ}~(+10.7^{\circ})}_{~-2.9^{\circ}~(~-7.3^{\circ})}~, ~~~~~\theta_{13}=5.6^{\circ+3.0^{\circ}~(+6.9^{\circ})}_{~-2.9^{\circ}~(-5.6^{\circ})}~, \label{exp0} \end{aligned}$$ and the other given by Fogli [*et al.*]{} with new reactor neutrino fluxes  [@Fogli:2011qn]: $$\begin{aligned} \sin^{2}\theta_{12} &=& 0.312^{+0.017~(+0.052)}_{-0.006~(-0.047)}\ , \quad\quad 
\sin^2\theta_{13}=0.025^{+0.007~(+0.025)}_{-0.007~(-0.020)}\ , \nonumber \\ \sin^{2}\theta_{23}&=&0.42^{+0.08~(+0.22)}_{-0.03~(-0.08)} \ , \label{exp} \end{aligned}$$ corresponding to $$\begin{aligned} \theta_{12}=34.0^{\circ+1.0^{\circ}~(+3.2^{\circ})}_{~-1.0^{\circ}~(-3.0^{\circ})}~, ~~~~~\theta_{23}=40.4^{\circ+4.6^{\circ}~(+12.7^{\circ})}_{~-1.3^{\circ}~(~-4.7^{\circ})}~, ~~~~~\theta_{13}=9.1^{\circ+1.2^{\circ}~(+3.8^{\circ})}_{~-1.4^{\circ}~(-5.0^{\circ})}~. \label{exp1} \end{aligned}$$ The analysis by Fogli [*et al.*]{} includes the T2K [@Abe:2011sj] and MINOS [@MINOS] results. The T2K Collaboration [@Abe:2011sj] has announced that the value of $\theta_{13}$ is non-zero at $90\%$ C.L. with the ranges $$\begin{aligned} 0.03~(0.04)\leq\sin^{2}2\theta_{13}\leq0.28~(0.34) \ , \label{T2K} \end{aligned}$$ or $$\begin{aligned} 4.99^{\circ}~(5.77^{\circ})\leq\theta_{13}\leq15.97^{\circ}~(17.83^{\circ})~ \label{T2K1} \end{aligned}$$ for $\delta_{CP}=0$, $\sin^{2}2\theta_{23}=1$ and the normal (inverted) neutrino mass hierarchy. The MINOS Collaboration found $$\begin{aligned} \sin^{2}2\theta_{13}\leq0.12~(0.20) \ , \label{MINOS} \end{aligned}$$ with a best fit of $$\begin{aligned} \sin^{2}2\theta_{13}=0.041^{+0.047}_{-0.031}~(0.079^{+0.071}_{-0.053}) \ , \label{MINOS} \end{aligned}$$ for $\delta_{CP}=0$, $\sin^{2}2\theta_{23}=1$ and the normal (inverted) neutrino mass hierarchy. The experimental result of non-zero $|U_{e3}|\equiv\sin\theta_{13}$ implies that the TBM pattern should be modified. However, properties related to the leptonic [*CP*]{} violation remain completely unknown yet. The trimaximal neutrino mixing was first proposed by Cabibbo [@Cabibbo:1977nk][^4] (see also [@trimax]) $$\begin{aligned} V_{C}=\frac{1}{\sqrt{3}}{\left(\begin{array}{ccc} 1 & \omega^{2} & \omega \\ 1 & 1 & 1 \\ 1 & \omega & \omega^{2} \end{array}\right)}~, \label{Cabibbo} \end{aligned}$$ with $\omega=e^{i2\pi/3}$ being a complex cube-root of unity. 
This mixing matrix has maximal [*CP*]{} violation with the Jarlskog invariant $|J_{ CP}|=1/(6\sqrt{3})$. However, this trimaximal mixing pattern has been ruled out by current experimental data on neutrino oscillations. In their original work, Harrison, Perkins and Scott (HPS) [@HPS] proposed to consider the simple mass matrices $$\begin{aligned} M^{2}_{\ell}={\left(\begin{array}{ccc} a & b & b^{\ast} \\ b^{\ast} & a & b \\ b & b^{\ast} & a \end{array}\right)}~,\quad M^{2}_{\nu}={\left(\begin{array}{ccc} x & 0 & y \\ 0 & z & 0 \\ y & 0 & x \end{array}\right)}~, \label{mass1} \end{aligned}$$ that can lead to the tribimaximal mixing, where $a,x,y$ and $z$ are real parameters,[^5] $M^{2}_{\ell}\equiv m_{\ell}m^{\dag}_{\ell}$ and $M_\nu^2\equiv {\cal M}_{\nu}{\cal M}^{\dag}_{\nu}$. The mass matrices are diagonalized by the trimaximal matrix $V_{C}$ for charged lepton fields and the bimaximal matrix $U_{\rm BM}$ defined below for neutrino fields, that is, $V_C^\dagger M_\ell^2 V_C={\rm diag}(m_e^2,m_\mu^2,m_\tau^2)$ and $U_{\rm BM}^\dagger M_\nu^2 U_{\rm BM}={\rm diag}(m_1^2,m_2^2,m_3^2)$. The combination of trimaximal and bimaximal matrices leads to the so-called TBM mixing matrix: $$\begin{aligned} U_{\rm TBM}=V^{\dag}_{C}~U_{\rm BM}={\left(\begin{array}{ccc} \sqrt{\frac{2}{3}} & \frac{1}{\sqrt{3}} & 0 \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} & -\frac{i}{\sqrt{2}} \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} & \frac{i}{\sqrt{2}} \end{array}\right)}~\qquad {\rm with}~U_{\rm BM}={\left(\begin{array}{ccc} \frac{1}{\sqrt{2}} & 0 & -\frac{1}{\sqrt{2}} \\ 0 & 1 & 0 \\ \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \end{array}\right)}~. \label{HPS} \end{aligned}$$ It is clear by now that the tribimaximal mixing is not consistent with the recent experimental data on the reactor mixing angle $\theta_{13}$ because of the vanishing matrix element $U_{e3}$ in $U_{\rm TBM}$. 
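Both the relation $U_{\rm TBM}=V^{\dag}_{C}U_{\rm BM}$ of Eq. (\[HPS\]) and the maximal Jarlskog invariant of the trimaximal pattern can be confirmed numerically; a minimal sketch:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)  # complex cube root of unity
Vc = np.array([[1, w**2, w], [1, 1, 1], [1, w, w**2]]) / np.sqrt(3)   # Eq. (Cabibbo)
Ubm = np.array([[1, 0, -1], [0, np.sqrt(2), 0], [1, 0, 1]]) / np.sqrt(2)  # bimaximal
Utbm = np.array([[np.sqrt(2/3), 1/np.sqrt(3), 0],
                 [-1/np.sqrt(6), 1/np.sqrt(3), -1j/np.sqrt(2)],
                 [-1/np.sqrt(6), 1/np.sqrt(3),  1j/np.sqrt(2)]])      # Eq. (HPS)

# U_TBM = V_C^dagger U_BM
assert np.allclose(Vc.conj().T @ Ubm, Utbm)

# Jarlskog invariant of the trimaximal pattern: |J_CP| = 1/(6 sqrt(3))
J = np.imag(Vc[0, 0] * Vc[1, 1] * np.conj(Vc[0, 1]) * np.conj(Vc[1, 0]))
assert np.isclose(abs(J), 1 / (6 * np.sqrt(3)))
```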
In this work we consider an extension of the tribimaximal mixing by considering small perturbations to the mass matrices $M_\ell^2$ and $M_\nu^2$ which we will call $M_\ell^{\prime~ 2}$ and $M_\nu^{\prime~ 2}$, respectively (see Eq. (\[mass2\]) below) so that $U_\nu=U_{\rm BM}W$ is no longer in the bimaximal form and $U_L = V_C V_L^\ell$ deviates from the trimaximal structure, where $V_L^\ell$ is the unitary matrix needed to diagonalize the matrix $V_C^\dagger M_\ell^{\prime ~2} V_C$. As a consequence, $U_{\rm PMNS}=U_L^{\dagger} U_\nu=V_L^{\ell\dagger}U_{\rm TBM}W=U_{\rm TBM}+$ small perturbations. Hence, the corrections to the TBM pattern arise from both charged lepton and neutrino sectors. Inspired by the T2K and MINOS measurements of a sizable reactor angle $\theta_{13}$, there exist in the literature intensive studies of possible deviations from the exact TBM pattern. However, most of these investigations were focused on the modification of TBM arising from either the neutrino sector [@TBMnu] or the charged lepton part [@Ahn:2011ep; @TBMlep], but not both simultaneously. The paper is organized as follows. In Sec. II, we set up the model by making a general extension to the charged lepton and neutrino mass matrices. Then in Sec. III we study the phenomenological implications by considering two different scenarios for the charged lepton mixing matrix. Our conclusions are summarized in Sec. IV. A simple and realistic extension ================================ In order to discuss the deviation from the TBM mixing, let us consider a simple and general extension of the original proposal by HPS given in Eq. (\[mass1\]), by taking into account perturbative effects on the mass matrices $M^2_{\ell}$ and $M^2_{\nu}$. 
The generalized mass matrices $M^{\prime ~2}_f$ and $M^{\prime ~2}_{\nu}$ can be introduced as [^6] $$\begin{aligned} M^{\prime ~2}_f = {\left(\begin{array}{ccc} a+g_{3} & b+\chi_{3} & b^{\ast}+\chi^{\ast}_{2} \\ b^{\ast}+\chi^{\ast}_{3} & a+g_{2} & b+\chi_{1} \\ b+\chi_{2} & b^{\ast}+\chi^{\ast}_{1} & a+g_{1} \end{array}\right)}~,\quad M^{\prime ~2}_{\nu} = m^{2}_{0}{\left(\begin{array}{ccc} x' & 0 & y' \\ 0 & 1 & 0 \\ y' & 0 & x' + \rho \end{array}\right)}~, \label{mass2} \end{aligned}$$ where $M^{\prime ~2}_f$ and $M^{\prime ~2}_{\nu}$ are defined as the hermitian square of the mass matrices $M^{\prime ~2}_f \equiv m'_f m'^{\dagger}_f$ and $M^{\prime ~2}_{\nu} \equiv {\cal M}'_{\nu} {\cal M}'^{\dagger}_{\nu}$, respectively, with the subscript $f$ denoting charged fermion fields (charged leptons or quarks). Due to the hermiticity of $M^{\prime ~2}_f$ and $M^{\prime ~2}_{\nu}$, the parameters $a, g_{1,2,3}, m^2_0, x', y', \rho$ are real, while $b$ and $\chi_{1,2,3}$ are complex. The parameters $g_{1,2,3}$, $\chi_{1,2,3}$ and $\rho$ represent small perturbations. Note that the (11), (13), (22) elements ([*i.e.,*]{} $m_0^2 x'$, $m_0^2 y'$ and $m_0^2$) in $M^{\prime ~2}_{\nu}$ are assumed to contain any perturbative effects on the elements $x$, $y$, and $z$ in $M^2_{\nu}$, respectively. For simplicity, it is assumed that $y'$ is real just as the other elements in $M^{\prime ~2}_{\nu}$ and the vanishing off-diagonal elements in $M^2_{\nu}$ remain zeros in $M^{\prime ~2}_{\nu}$. 
The parameters $a$ and $b$ are encoded in [@HPS] as $$\begin{aligned} a=\frac{\tilde{m}^{2}_{f_{1}}}{3}+\frac{\tilde{m}^{2}_{f_{2}}}{3}+\frac{\tilde{m}^{2}_{f_{3}}}{3}~, \qquad~~\quad b=\frac{\tilde{m}^{2}_{f_{1}}}{3}+\frac{\tilde{m}^{2}_{f_{2}}\omega^{2}}{3}+\frac{\tilde{m}^{2}_{f_{3}}\omega}{3}~, \end{aligned}$$ where the subscript $f_{i}$ indicates a generation of charged fermion field, and $\tilde{m}_{f_{i}}$ represents a bare mass of $f_{i}$, for example, $\tilde{m}_{f_{1}}=\tilde{m}_{e}\ll \tilde{m}_{f_{2}}=\tilde{m}_{\mu}\ll \tilde{m}_{f_{3}}=\tilde{m}_{\tau}$ for charged lepton fields. We first discuss the hermitian square of the neutrino mass matrix, $M^{\prime ~2}_{\nu}$, in Eq. (\[mass2\]). It can be diagonalized by $$\begin{aligned} U_{\nu}={\left(\begin{array}{ccc} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{array}\right)} P_{\nu}={\left(\begin{array}{ccc} {1/\sqrt{2}} & 0 & -{1/\sqrt{2}}\\ 0 & 1 & 0 \\ {1/\sqrt{2}} & 0 & {1/\sqrt{2}} \end{array}\right)}W \ , \label{Unu} \end{aligned}$$ with $$\begin{aligned} \tan2\theta=-\frac{2y'}{\rho} \end{aligned}$$ and $$\begin{aligned} W={\left(\begin{array}{ccc} (\cos\theta+\sin\theta)/\sqrt{2} & 0 & (\cos\theta-\sin\theta)/\sqrt{2} \\ 0 & 1 & 0 \\ -(\cos\theta-\sin\theta)/\sqrt{2} & 0 & (\cos\theta+\sin\theta)/\sqrt{2} \end{array}\right)}P_{\nu} \ ,\end{aligned}$$ where the diagonal phase matrix $P_{\nu}$ contains two additional phases, which can be absorbed into the neutrino mass eigenstate fields. For a small perturbation $|\rho|~(\ll |x'|)$, the mixing parameter $\theta$ can be expressed in terms of $$\begin{aligned} \theta=\pi/4+\epsilon ~~~ {\rm with} ~~ |\epsilon|\ll1 ~. \end{aligned}$$ $W$ is then reduced to $$\begin{aligned} W= {\left(\begin{array}{ccc} \cos\epsilon & 0 & -\sin\epsilon \\ 0 & 1 & 0 \\ \sin\epsilon & 0 & \cos\epsilon \end{array}\right)}P_{\nu}~. 
\label{epsilon} \end{aligned}$$ The neutrino mass eigenvalues are obtained as $$\begin{aligned} m^{2}_{1}&=&m^{2}_{0}(x' +\rho\sin^{2}\theta +y'\sin2\theta),\quad m^{2}_{2}=m^{2}_{0},\quad m^{2}_{3}=m^{2}_{0}(x' +\rho\cos^{2}\theta -y'\sin2\theta) \end{aligned}$$ and their differences are given by $$\begin{aligned} \Delta m^{2}_{21} &\equiv& m^{2}_{2}-m^{2}_{1} =m^{2}_{0}\left(1 -x' +\rho ~\frac{1-\sin2\epsilon}{2\sin2\epsilon}\right)~,\nonumber\\ \Delta m^{2}_{31} &\equiv& m^{2}_{3}-m^{2}_{1} =m^2_0 ~\frac{2\rho}{\sin2\epsilon}~, \label{masssquare} \end{aligned}$$ from which we have a relation $\Delta m^{2}_{21}-\frac{1}{4}\Delta m^{2}_{31}\simeq m^{2}_{2}(1-x')$. It is well known that the sign of $\Delta m^{2}_{21}$ is positive due to the requirement of the Mikheyev-Smirnov-Wolfenstein resonance for solar neutrinos. The sign of $\Delta m^{2}_{31}$ depends on that of $\rho/\sin2\epsilon$: $\Delta m^{2}_{31}>0$ for the normal mass spectrum and $\Delta m^{2}_{31}<0$ for the inverted one. The quantities $m^{2}_{1},m^{2}_{2},m^{2}_{3},\theta$ (or $\epsilon$) are determined by the four parameters $m^{2}_{0},x',y',\rho$, while the Majorana phases in Eq. (\[Unu\]) are hidden in the squared mass eigenvalues. We next turn to the hermitian square of the mass matrix for charged fermions in Eq. (\[mass2\]). 
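As a cross-check of the neutrino-sector results above, the diagonalization of $M^{\prime~2}_{\nu}$ by $U_{\nu}$, the relation $\tan2\theta=-2y'/\rho$, and the quoted mass eigenvalues can all be verified numerically; the parameter values below are purely illustrative (with $m^{2}_{0}=1$ and $\theta=\pi/4+\epsilon$ fixed through $\tan2\epsilon=\rho/(2y')$):

```python
import numpy as np

# Illustrative inputs: a small perturbation rho with eps = 0.02
xp, rho, eps = 0.9, 0.004, 0.02
yp = rho / (2 * np.tan(2 * eps))   # enforces theta = pi/4 + eps
th = np.pi / 4 + eps

M2nu = np.array([[xp, 0, yp], [0, 1, 0], [yp, 0, xp + rho]])  # Eq. (mass2), m0^2 = 1
Unu = np.array([[np.cos(th), 0, -np.sin(th)], [0, 1, 0], [np.sin(th), 0, np.cos(th)]])

# U_nu diagonalizes M'^2_nu with mixing angle tan(2 theta) = -2 y'/rho ...
D = Unu.T @ M2nu @ Unu
assert np.allclose(D - np.diag(np.diag(D)), 0, atol=1e-12)
assert np.isclose(np.tan(2 * th), -2 * yp / rho)

# ... and the diagonal entries reproduce the quoted eigenvalues
m1sq = xp + rho * np.sin(th)**2 + yp * np.sin(2 * th)
m3sq = xp + rho * np.cos(th)**2 - yp * np.sin(2 * th)
assert np.allclose(D, np.diag([m1sq, 1.0, m3sq]))
```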
This modified charged fermion mass matrix is no longer diagonalized by $V_{C}$ $$\begin{aligned} V^{\dag}_{C} M^{\prime ~2}_f V_{C} = {\left(\begin{array}{ccc} m^{2}_{a}+\eta_{11} & \eta_{12} & \eta_{13} \\ \eta^{\ast}_{12} & m^{2}_{b}+\eta_{22} & \eta_{23} \\ \eta^{\ast}_{13} & \eta^{\ast}_{23} & m^{2}_{c}+\eta_{33} \end{array}\right)}~, \label{AA} \end{aligned}$$ where $$\begin{aligned} m^{2}_{a}=a+b+b^{\ast}~,\qquad m^{2}_{b}=a+b~\omega+b^{\ast}~\omega^{2}~,\qquad m^{2}_{c}=a+b~\omega^{2}+b^{\ast}~\omega~, \end{aligned}$$ corresponding to $\tilde{m}^{2}_{f_{1}},~\tilde{m}^{2}_{f_{2}},~\tilde{m}^{2}_{f_{3}}$, respectively, and $\eta_{ij}$ is composed of the combinations of $g_{1,2,3}$ and $\chi_{1,2,3}$. To diagonalize $V^{\dag}_{C} M^{\prime ~2}_f V_{C} = V^{f}_{L} ~{\rm diag}(m^{2}_{f_{1}},m^{2}_{f_{2}},m^{2}_{f_{3}})~ V^{f\dag}_{L}$, we need an additional matrix $V^{f}_{L}$ which can be, in general, parametrized in terms of three mixing angles and six phases: $$\begin{aligned} V^{f}_{L}={\left(\begin{array}{ccc} c_{2}c_{3} & c_{2}s_{3}e^{i\alpha_{3}} & s_{2}e^{i\alpha_{2}} \\ -c_{1}s_{3}e^{-i\alpha_{3}}-s_{1}s_{2}c_{3}e^{i(\alpha_{1}-\alpha_{2})} & c_{1}c_{3}-s_{1}s_{2}s_{3}e^{i(\alpha_{1}-\alpha_{2}+\alpha_{3})} & s_{1}c_{2}e^{i\alpha_{1}} \\ s_{1}s_{3}e^{-i(\alpha_{1}+\alpha_{3})}-c_{1}s_{2}c_{3}e^{-i\alpha_{2}} & -s_{1}c_{3}e^{-i\alpha_{1}}-c_{1}s_{2}s_{3}e^{i(\alpha_{3}-\alpha_{2})} & c_{1}c_{2} \end{array}\right)}P_{f}~, \label{Vl} \end{aligned}$$ where $s_{i}\equiv \sin\theta_{i}$, $c_{i}\equiv \cos\theta_{i}$ and a diagonal phase matrix $P_{f}={\rm diag}(e^{i\xi_{1}},e^{i\xi_{2}},e^{i\xi_{3}})$ which can be rotated away by the phase redefinition of left-charged fermion fields. The charged fermion mixing matrix now reads $U_L=V_CV_L^f$. 
Finally, we arrive at the general expression for the leptonic mixing matrix $$\begin{aligned} U_{\rm PMNS}= U_L^{\dagger}U_\nu =V_L^{\ell\dagger}U_{\rm TBM}W \ .\end{aligned}$$ A simple and general extension of the mass matrices given in Eq. (\[mass2\]) thus leads to two possible sources of corrections to the tribimaximal mixing: $V_L^\ell$ measures the deviation of the charged lepton mixing matrix from the trimaximal form and $W$ characterizes the departure of the neutrino mixing from the bimaximal one. The charged lepton mass matrix in Eq. (\[mass2\]) or (\[AA\]) has 12 free parameters. Three of them are replaced by the phases $\xi_{1,2,3}$ in Eq. (\[Vl\]) which can be eliminated by a redefinition of the physical charged lepton fields. The remaining 9 parameters can be expressed in terms of $m_{e},m_{\mu},m_{\tau},\theta_{1},\theta_{2},\theta_{3},\alpha_{1},\alpha_{2},\alpha_{3}$. From Eqs. (\[AA\]) and (\[Vl\]), the mixing angles and phases can be expressed as $$\begin{aligned} \theta_{1}&\simeq&\frac{|\eta_{23}|}{\tilde{m}^{2}_{\tau}}~,\qquad\quad \theta_{2}\simeq\frac{|\eta_{13}|}{\tilde{m}^{2}_{\tau}}~,\qquad\quad \theta_{3}\simeq\frac{|\eta_{12}|}{\tilde{m}^{2}_{\mu}}~, \qquad \alpha_{1}= \arg(\eta_{23}), \nonumber \\ \alpha_{2}&\simeq& \frac{1}{2}\arg(\eta_{23}) +\arg(\eta_{13}), \quad \alpha_{3}\simeq\frac{1}{2}\left[\arg(\eta_{13}) -\arg(\eta_{23})\right]+\arg(\eta_{12}) \ , \end{aligned}$$ with the condition $\tilde{m}^{2}_{f_{2}}\gg\eta_{22},\eta_{11}$. In the charged fermion sector, there is a qualitative feature that distinguishes the charged leptons and down-type quarks from the up-type quarks: the mass spectrum of the charged leptons exhibits a hierarchical pattern similar to that of the down-type quarks, whereas the up-type quarks show a much stronger hierarchy.
For example, in terms of the Cabibbo angle $\lambda \equiv \sin\theta_{\rm C} \approx |V_{us}|$, the fermion masses scale as $(m_{e},m_{\mu}) \approx (\lambda^{5},\lambda^{2}) m_{\tau}$, $(m_{d},m_{s}) \approx (\lambda^{4},\lambda^{2}) m_{b}$ and $(m_{u},m_{c}) \approx (\lambda^{8},\lambda^{4}) m_{t}$. This may lead to two implications: (i) the Cabibbo-Kobayashi-Maskawa (CKM) matrix [@CKM] is mainly governed by the down-type quark mixing matrix, and (ii) the charged lepton mixing matrix is similar to the down-type quark one. Therefore, we shall assume that (i) $V_{\rm CKM}=V^{d\dag}_{L}$ and $V^{u}_{L}=\mathbf{1}$, where $V^{d}_{L}~(V^{u}_{L})$ is associated with the diagonalization of the down-type (up-type) quark mass matrix and $\mathbf{1}$ is a $3\times3$ unit matrix, and (ii) the charged lepton mixing matrix $V^{\ell}_{L}$ has the same structure as the CKM matrix, that is, $V_L^{\ell\dagger}=V_{\rm CKM}$ or $V_{\rm CKM}^\dagger$. Recently, we have proposed a simple [*ansatz*]{} for the charged lepton mixing matrix $V_L^\ell$, namely, it has the Qin-Ma-like parametrization in which the [*CP*]{}-odd phase is approximately maximal [@Ahn:2011ep]. Armed with this [*ansatz*]{}, we notice that the 6 parameters $\theta_{1},\theta_{2},\theta_{3},\alpha_{1},\alpha_{2},\alpha_{3}$ in $V_L^\ell$ are reduced to four independent ones $f,h,\lambda,\delta$. It has the advantage that the TBM predictions of $\sin^2\theta_{23}=1/2$ and especially $\sin^2\theta_{12}=1/3$ will not be spoiled and that a sizable reactor mixing angle $\theta_{13}$ and a large Dirac [*CP*]{}-odd phase are obtained in the mixing $U_{\rm PMNS} = V_L^{\ell\dagger}U_{\rm TBM}$. The Qin-Ma (QM) parametrization of the quark CKM matrix is a Wolfenstein-like parametrization and can be expanded in terms of the small parameter $\lambda$ [@Qin:2011ub].
However, unlike the original Wolfenstein parametrization [@Wolfenstein:1983yz], the QM one has the advantage that its [*CP*]{}-odd phase $\delta$ is manifested in the parametrization and is near maximal, [*i.e.,*]{} $\delta\sim 90^\circ$. This is crucial for a viable neutrino phenomenology. It should be stressed that one can also use any parametrization for the CKM matrix as a starting point. As shown in [@Koide], one can adjust the phase differences in the diagonal phase matrix $P_f$ in Eq. (\[Vl\]) in such a way that the prediction of $\sin^2\theta_{12}$ will not be considerably affected. For $V^{\ell \dag}_{L}=V_{\rm QM}$, the QM parametrization [@Qin:2011ub; @Ahn:2011ep] can be obtained from Eq. (\[Vl\]) by the replacements $s_{1}e^{i\alpha_{1}}=-(f+he^{-i\delta})\lambda^{2}~, s_{2}=f\lambda^{3}~, s_{3}=\lambda~, \alpha_{2}=\delta~, \alpha_{3}=\delta-\pi$ : $$\begin{aligned} V^{f\dag}_{L}=P^{\ast}_{f}{\left(\begin{array}{ccc} 1-\lambda^2/2 & \lambda e^{i\delta} & h\lambda^3 \\ -\lambda e^{-i\delta} & 1-\lambda^2/2 & (f+h e^{-i\delta})\lambda^2 \\ f\lambda^3 e^{-i\delta} & -(f+h e^{i\delta})\lambda^2 & 1 \\ \end{array}\right)}+{\cal O}(\lambda^{4})~. \label{Vl1} \end{aligned}$$ On the other hand, for $V^{\ell}_{L}=V_{\rm QM}$ the QM parametrization is obtained by the replacements $s_{1}e^{i\alpha_{1}}=(f+he^{-i\delta})\lambda^{2}~, s_{2}=h\lambda^{3}~, s_{3}=\lambda~, \alpha_{2}=0~, \alpha_{3}=\delta$ : $$\begin{aligned} V^{f}_{L}={\left(\begin{array}{ccc} 1-\lambda^2/2 & \lambda e^{i\delta} & h\lambda^3 \\ -\lambda e^{-i\delta} & 1-\lambda^2/2 & (f+h e^{-i\delta})\lambda^2 \\ f\lambda^3 e^{-i\delta} & -(f+h e^{i\delta})\lambda^2 & 1 \\ \end{array}\right)}P_{f}+{\cal O}(\lambda^{4})~, \label{Vl2} \end{aligned}$$ where the superscript $f$ denotes $d$ (down-type quarks) or $\ell$ (charged leptons). 
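The replacements quoted above can be checked by inserting them into the general parametrization of Eq. (\[Vl\]) and comparing with the truncated QM form of Eq. (\[Vl1\]); a numerical sketch for the case $V^{\ell\dag}_{L}=V_{\rm QM}$, using (as an assumption for illustration) the central CKM-fit values of $f,h,\lambda,\delta$ quoted below, where the two forms should agree up to the neglected ${\cal O}(\lambda^{4})$ terms:

```python
import numpy as np

f, h, lam, delta = 0.749, 0.309, 0.22545, np.radians(89.6)

# QM-type replacements: s1 e^{i a1} = -(f + h e^{-i d}) lam^2, s2 = f lam^3,
# s3 = lam, a2 = delta, a3 = delta - pi
s1e = -(f + h * np.exp(-1j * delta)) * lam**2
s1, a1 = abs(s1e), np.angle(s1e)
s2, s3 = f * lam**3, lam
a2, a3 = delta, delta - np.pi
c1, c2, c3 = np.sqrt(1 - s1**2), np.sqrt(1 - s2**2), np.sqrt(1 - s3**2)

# General parametrization of Eq. (Vl), with P_f = 1
V = np.array([
    [c2*c3, c2*s3*np.exp(1j*a3), s2*np.exp(1j*a2)],
    [-c1*s3*np.exp(-1j*a3) - s1*s2*c3*np.exp(1j*(a1-a2)),
     c1*c3 - s1*s2*s3*np.exp(1j*(a1-a2+a3)), s1*c2*np.exp(1j*a1)],
    [s1*s3*np.exp(-1j*(a1+a3)) - c1*s2*c3*np.exp(-1j*a2),
     -s1*c3*np.exp(-1j*a1) - c1*s2*s3*np.exp(1j*(a3-a2)), c1*c2]])

# Truncated QM matrix of Eq. (Vl1); this is V_L^{f dagger}, so compare with V^dagger
Vqm = np.array([[1 - lam**2/2, lam*np.exp(1j*delta), h*lam**3],
                [-lam*np.exp(-1j*delta), 1 - lam**2/2, (f + h*np.exp(-1j*delta))*lam**2],
                [f*lam**3*np.exp(-1j*delta), -(f + h*np.exp(1j*delta))*lam**2, 1]])

assert np.allclose(V.conj().T @ V, np.eye(3), atol=1e-12)  # Eq. (Vl) is exactly unitary
assert np.max(np.abs(V - Vqm.conj().T)) < 5e-3             # agreement up to O(lambda^4)
```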
From the global fits to the quark mixing matrix given by [@CKMfitter] we obtain $$\begin{aligned} f=0.749^{+0.034}_{-0.037}\,,\quad h=0.309^{+0.017}_{-0.012}\,,\quad \lambda=0.22545\pm 0.00065\,, \quad \delta=(89.6^{+2.94}_{-0.86})^\circ\,. \label{eq:QMfh} \end{aligned}$$ Because of the freedom of the phase redefinition for the quark fields, we have shown in [@Ahn:2011it] that the QM parametrization is indeed equivalent to the Wolfenstein one in the quark sector. Finally, the leptonic mixing parameters ($\theta_{23},\theta_{12},\theta_{13},\delta_{CP}$), apart from the Majorana phases, can be expressed in terms of five parameters $\theta$ (or $\epsilon$), $\delta,f,h, \lambda$, the last four being the QM parameters in the lepton sector. If we further assume that all the QM parameters except $\delta$ have the same values in both the CKM and PMNS matrices, then the only two free parameters left in the lepton mixing matrix are $\epsilon$ and $\delta$. If $\delta$ is fixed to be the same as the CKM one, then there will be only one free parameter $\epsilon$ in our calculation. In the next section, we shall study the dependence of the mixing angles $\sin^{2}\theta_{23},~\sin^{2}\theta_{12},~\sin\theta_{13}$ and the Jarlskog invariant $J_{CP}$ on $\delta$ and $\epsilon$. To make our point clearer, let us summarize the reduction of the number of independent parameters in this work. In the leptonic sector, we start with 16 free parameters (12 from the charged lepton mass matrix $M_{\ell}^{\prime~2}$ and 4 from the neutrino mass matrix $M_{\nu}^{\prime~2}$) as shown in Eq. (\[mass2\]). Among the 12 parameters from $M_{\ell}^{\prime~2}$, three phases can be rotated away by the redefinition of the charged lepton fields. The remaining 9 parameters correspond to three charged lepton masses ($m_{e,\mu,\tau}$) and six angles in the charged lepton mixing matrix $V_L^{\ell}$ as shown in Eq.
(\[Vl\]), while the 4 parameters from $M_{\nu}^{\prime~2}$ correspond to three neutrino masses ($m_{1,2,3}$) plus one angle ($\theta$ or $\epsilon$) in the neutrino mixing matrix $U_{\nu}$ as shown in Eq. (\[Unu\]) or (\[epsilon\]). With our [*ansatz*]{} for $V_L^{\ell}$ discussed before, the 6 angles in $V_L^{\ell}$ are reduced to four QM parameters ($f, h, \lambda, \delta$). Thus, the number of parameters finally becomes five ($f, h, \lambda, \delta$ plus $\theta$ (or $\epsilon$)), except for the six lepton masses. Under the further assumption of the QM parameters $f, h, \lambda$ having the same values in both the CKM and PMNS matrices, these five parameters are reduced to just two: $\delta$ and $\epsilon$. Neutrino phenomenology ====================== We now proceed to discuss the low energy neutrino phenomenology with the neutrino mixing matrix $U_\nu$ (see Eq. (\[Unu\])) characterized by the mixing angle $\theta$ or the small parameter $\epsilon$ and the charged lepton mixing matrix $U_{L}=V_{C}V^{\ell}_{L}$, in which $V^{\ell}_{L}$ is assumed to take the same form as the QM parametrization [@Qin:2011ub; @Ahn:2011ep] given by $V^{\dag}_{\rm QM}$ or $V_{\rm QM}$ (see Eq. (\[Vl1\]) and Eq. (\[Vl2\]), respectively). The lepton mixing matrix thus has the form $$\begin{aligned} U_{\rm PMNS} =\left\{ \begin{array}{ll} V_{\rm QM}U_{\rm TBM}W & {\rm for}~ V_L^{\ell\dagger}=V_{\rm QM}, \hbox{} \\ V_{\rm QM}^{\dag}U_{\rm TBM}W & {\rm for}~ V_L^{\ell}=V_{\rm QM}. \hbox{} \end{array} \right.\end{aligned}$$ Therefore, the corrections to the TBM matrix within our framework arise from the charged lepton mixing matrix $V_L^\ell$ characterized by the parameters $f,h,\lambda,\delta$ and the matrix $W$ specified by the parameter $\epsilon$, whose size is strongly constrained by the recent T2K data. Indeed, the parameters $\lambda,~f,~h$ and $\delta$ in the lepton sector are [*a priori*]{} not necessarily the same as those in the quark sector.
Hereafter, we shall use the central values in Eq. (\[eq:QMfh\]) of the parameters $(\lambda,~f,~h)$ for our numerical calculations. In the following we consider both cases: 0.4cm [**(i)**]{} $V^{\ell\dag}_{L}=V_{\rm QM}$ 0.5cm With the help of Eqs. (\[HPS\]) and (\[Vl1\]), the leptonic mixing matrix corrected by the replacements $V_{C}\rightarrow U_{L}=V_CV^{\ell}_L=V_{C}V^{\dag}_{\rm QM}$ and $U_{\nu}(\pi/4)\rightarrow U_{\nu}(\pi/4+\epsilon)$, can be written, up to order of $\lambda^{3}$ and $\epsilon^{2}$, as $$\begin{aligned} U_{\rm PMNS}^{\rm (i)}&=&U^{\dag}_{L}~U_{\nu}(\pi/4+\epsilon)= V_{\rm QM} U_{\rm TBM}W \nonumber\\ &=&U_{\rm TBM}+\epsilon{\left(\begin{array}{ccc} -\frac{\epsilon}{2}\sqrt{\frac{2}{3}} & 0 & \sqrt{\frac{2}{3}} \\ \frac{i}{\sqrt{2}}+\frac{\epsilon}{2\sqrt{6}} & 0 & -\frac{1}{\sqrt{6}}+\frac{i\epsilon}{2\sqrt{2}} \\ -\frac{i}{\sqrt{2}}+\frac{\epsilon}{2\sqrt{6}} & 0 & -\frac{1}{\sqrt{6}}-\frac{i\epsilon^{2}}{2\sqrt{2}} \end{array}\right)} \nonumber\\ &+&\lambda{\left(\begin{array}{ccc} -\frac{e^{i\delta}+\lambda+h\lambda^{2}}{\sqrt{6}} & \frac{e^{i\delta}-\frac{\lambda}{2}-h\lambda^{2}}{\sqrt{3}} & -\frac{i(e^{i\delta}-h\lambda^{2})}{\sqrt{2}} \\ \frac{-2 e^{-i\delta}+(\frac{1}{2}-f-he^{-i\delta})\lambda}{\sqrt{6}} & -\frac{e^{-i\delta}+(\frac{1}{2}-f-he^{-i\delta})\lambda}{\sqrt{3}} & \frac{i(\frac{1}{2}+f+he^{-i\delta})\lambda}{\sqrt{2}} \\ \frac{(f+he^{i\delta})\lambda+2fe^{-i\delta}\lambda^{2}}{\sqrt{6}} & -\frac{(f+he^{i\delta})\lambda+fe^{-i\delta}\lambda^{2}}{\sqrt{3}} & \frac{i(f+he^{i\delta})\lambda}{\sqrt{2}} \end{array}\right)} \\ &-&\lambda\epsilon{\left(\begin{array}{ccc} -\frac{i(e^{i\delta}-h \lambda^{2})}{\sqrt{2}}-\epsilon\frac{e^{i\delta}+\lambda+h\lambda^{2}}{2\sqrt{6}} & 0 & \frac{e^{i\delta}+\lambda+h\lambda^{2}}{\sqrt{6}}-\epsilon \frac{i(e^{i\delta}-h\lambda^{2})}{2\sqrt{2}} \\ \frac{i(\frac{1}{2}+f+he^{-i\delta})\lambda}{\sqrt{2}}-\epsilon \frac{2e^{-i\delta}+(f+he^{-i\delta}-\frac{1}{2})\lambda}{2\sqrt{6}} & 0 & 
\frac{2e^{-i\delta}+(f+he^{-i\delta}-\frac{1}{2})\lambda}{\sqrt{6}} +\epsilon\frac{i(f+he^{-i\delta}+\frac{1}{2})\lambda}{2\sqrt{2}} \\ \frac{i(f+he^{i\delta})\lambda}{\sqrt{2}}+\epsilon\frac{(f+he^{i\delta}) \lambda+2fe^{-i\delta}\lambda^{2}}{2\sqrt{6}} & 0 & -\frac{(f+he^{i\delta})\lambda+2\lambda^{2}fe^{i\delta}}{\sqrt{6}} +\epsilon\frac{i(f+he^{i\delta})\lambda}{2\sqrt{2}} \end{array}\right)}~. \nonumber \label{leptonA} \end{aligned}$$ Note that $U_{\rm PMNS}^{\rm (i)}$ here contains five independent parameters ($\lambda,h,f,\delta$ and $\epsilon$).[^7] By rephasing the lepton and neutrino fields $e \to e \,e^{i\alpha_{1}}$, $\mu \to \mu \,e^{i\beta_{1}}$, $\tau \to \tau \,e^{i\beta_{2}}$ and $\nu_{2} \to \nu_{2} \,e^{i(\alpha_{1}-\alpha_{2})}$, the PMNS matrix is recast to $$\begin{aligned} U_{\rm PMNS}= {\left(\begin{array}{ccc} |U_{e1}| & |U_{e2}| & |U_{e3}|e^{-i(\alpha_{1}-\alpha_{3})} \\ U_{\mu1}e^{-i\beta_{1}} & U_{\mu2}e^{i(\alpha_{1}-\alpha_{2}-\beta_{1})} & |U_{\mu3}| \\ U_{\tau1}e^{-i\beta_{2}} & U_{\tau2}e^{i(\alpha_{1}-\alpha_{2}-\beta_{2})} & |U_{\tau3}| \end{array}\right) P_{\nu}}~, \label{PMNS2} \end{aligned}$$ where $U_{\alpha j}$ is an element of the PMNS matrix with $\alpha=e,\mu,\tau$ corresponding to the lepton flavors and $j=1,2,3$ to the light neutrino mass eigenstates. In Eq. 
(\[PMNS2\]) the phases defined as $\alpha_{1} = \arg(U_{e1})$, $\alpha_{2} = \arg(U_{e2})$, $\alpha_{3} = \arg(U_{e3})$, $\beta_{1} = \arg(U_{\mu3})$ and $\beta_{2} = \arg(U_{\tau3})$ have the expressions: $$\begin{aligned} \alpha_{1}&=&\tan^{-1}\left(\frac{\lambda\{\sqrt{3}(\epsilon^{2}-2)\sin\delta+6\epsilon\cos\delta -6h\epsilon\lambda^{2}\}}{\sqrt{3}(2-\epsilon^{2})(2-\lambda^{2}-h\lambda^{3}) +\sqrt{3}(\epsilon^{2}-2)\lambda\cos\delta-6\epsilon\lambda\sin\delta}\right)~\ , \nonumber\\ \alpha_{2}&=&\tan^{-1}\left(\frac{\lambda\sin\delta}{1+\lambda\cos\delta -\frac{\lambda^{2}}{2}+h\lambda^{3}}\right)~ \ , \nonumber\\ \alpha_{3}&=&\tan^{-1}\left(\frac{\lambda\{2\sqrt{3}\epsilon\sin\delta+3(2-\epsilon^{2})\cos\delta -3h(2-\epsilon^{2})\lambda^{3}\}}{3\lambda(\epsilon^{2} -2)\sin\delta-2\sqrt{3}\epsilon(2-\lambda^{2}-\lambda\cos\delta-h\lambda^{3})}\right)~\ , \nonumber\\ \beta_{1} &=&\tan^{-1}\left(\frac{3(2-\epsilon^{2})(2-\lambda^{2}-2f\lambda^{2})-6h(2-\epsilon^{2}) \lambda^{2}\cos\delta-4\sqrt{3}\epsilon\lambda(2+h\lambda)\sin\delta}{2\sqrt{3}\epsilon (2-\lambda^{2}+2f\lambda^{2})+4\sqrt{3}\epsilon\lambda(2+h\lambda)\cos\delta-6h(2-\epsilon^{2}) \lambda^{2}\sin\delta}\right)\ , \nonumber\\ \beta_{2}&=&\tan^{-1} \left(\frac{3(2-\epsilon^{2})(1+f\lambda^{2})+3h\lambda^{2}(2-\epsilon^{2})\cos\delta+2\sqrt{3} \epsilon\lambda^{2}(h-2f\lambda)\sin\delta}{2\sqrt{2}\epsilon(f\lambda^{2}-1)+2\sqrt{3}\epsilon \lambda^{2}(h+2f\lambda)\cos\delta-3h\lambda^{2}(2-\epsilon^{2})\sin\delta}\right)~. \label{mixing elements2} \end{aligned}$$ From Eq. (\[PMNS2\]), the neutrino mixing parameters can be displayed as $$\begin{aligned} \sin^{2}\theta_{12}&=&\frac{|U_{e2}|^{2}}{1-|U_{e3}|^{2}}~,\qquad\qquad\quad \sin^{2}\theta_{23}=\frac{|U_{\mu3}|^{2}}{1-|U_{e3}|^{2}}~,\nonumber\\ \sin\theta_{13}&=&|U_{e3}|~,\qquad\qquad\qquad\quad ~\delta_{CP}=\alpha_{1}-\alpha_{3}~. \label{mixing1} \end{aligned}$$ It follows from Eqs.
(\[leptonA\]) and (\[mixing1\]) that the solar neutrino mixing angle $\theta_{12}$ can be approximated, up to order $\lambda^3$ and $\epsilon^{2}$, as $$\begin{aligned} \sin^{2}\theta_{12}&\simeq&\frac{1}{3}+\frac{2\epsilon^{2}}{9}+\frac{2\lambda}{3} \left(\cos\delta+\frac{\epsilon\sin\delta}{\sqrt{3}}+\frac{\epsilon^2\cos\delta}{3}\right)\nonumber\\ &+&\frac{\lambda^{2}}{3}\left(\frac{1}{2}+\frac{2\epsilon\sin2\delta}{\sqrt{3}} -\frac{\epsilon^{2}}{3}(3+4\cos^{2}\delta)\right)~\nonumber\\ &+&\frac{\lambda^{3}}{3}\left(2h-\frac{\epsilon\sin\delta}{\sqrt{3}} +\frac{\epsilon^{2}}{3}(2h-7\cos\delta)\right) \ . \label{Sol} \end{aligned}$$ This indicates that the deviation from $\sin^{2}\theta_{12}=1/3$ becomes small when $\cos\delta$ approaches zero and the magnitude of $\epsilon$ is less than $\lambda$. Since it is the first column of $V_L^\ell$ that makes the major contribution to $\sin^{2}\theta_{12}$, this explains why we need a phase of order $90^\circ$ for the element $(V_L^\ell)_{21}$: when $|\sin\delta|\approx1$, the present data of the solar mixing angle can be accommodated even for a large $|\epsilon|$ (but less than $\lambda$). The behavior of $\sin^{2}\theta_{12}$ as a function of $\delta$ is plotted in Fig. \[Fig1\], where the horizontal dashed lines denote the upper and lower bounds of the experimental data in $3\sigma$ ranges. The allowed regions for $\delta$ (in radians) lie in the ranges $1.45\lesssim\delta\lesssim2.17$ and $4.17\lesssim\delta\lesssim4.91$, recalling that the QM phase is $\delta_{\rm QM}=1.56$.
Likewise, the atmospheric neutrino mixing angle $\theta_{23}$ comes out as $$\begin{aligned} \sin^{2}\theta_{23}&\simeq&\frac{1}{2}-\frac{\epsilon\lambda}{\sqrt{3}}\left(\sin\delta -\frac{\epsilon\cos\delta}{\sqrt{3}}\right)\nonumber\\ &-&\lambda^{2}\left(\frac{1}{4}+f+h\cos\delta+\epsilon\frac{2h\sin\delta}{\sqrt{3}} +\frac{2\epsilon^{2}}{3}(1-f-h\cos\delta-\cos2\delta)\right)~\nonumber\\ &-&\lambda^{3}\epsilon\left(\frac{\sin\delta}{2\sqrt{3}}(3+4h\cos\delta)-\epsilon \left[\frac{3-8f}{6}\cos\delta+h-2h\cos^{2}\delta\right]\right) \ . \label{Atm} \end{aligned}$$ Fig. \[Fig1\] shows a small deviation from the TBM atmospheric mixing angle with $\theta_{23}<45^\circ$ for $0<|\epsilon|<\lambda$. Owing to the absence of first-order corrections in $\lambda$ or $\epsilon$ in Eq. (\[Atm\]), the deviation from the maximal mixing of $\theta_{23}$ comes mainly from the terms associated with $\lambda^{2}$ or $\epsilon\lambda$. In particular, for $\sin\delta\approx1$ we have the approximation $\sin^{2}\theta_{23}-\frac{1}{2}\approx-\frac{\epsilon\lambda}{\sqrt{3}}-\lambda^{2}(f+\frac{1}{4})$, which implies $\sin^2\theta_{23}<1/2$ for $0<|\epsilon|<\lambda$. We see from Fig. \[Fig1\] that $\sin^2\theta_{23}$ lies in the range $0.43<\sin^{2}\theta_{23}<0.45$ for $0\leq|\epsilon|\lesssim0.1$. The reactor mixing angle $\theta_{13}$ now reads $$\begin{aligned} \sin^2\theta_{13}&=&\frac{2\epsilon\lambda\sin\delta}{\sqrt{3}}+\frac{2\epsilon^{2}}{3} (1-\lambda\cos\delta) \nonumber \\ &+&\lambda^{2}\left(\frac{1}{2}-\epsilon^{2}\right)-\lambda^{3}\epsilon \left(\frac{\sin\delta}{\sqrt{3}}+\frac{2h\epsilon}{3}-\frac{\epsilon\cos\delta}{3}\right)~. \label{Reactor} \end{aligned}$$ Evidently, $\sin\theta_{13}$ depends considerably on the parameters $\lambda$ and $\epsilon$. Thus, we have a non-vanishing $\theta_{13}$ with a central value of $\sin\theta_{13}=\lambda/\sqrt{2}$ or $\theta_{13}=9.2^\circ$ for $\epsilon=0$ [@Ahn:2011ep].
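These leading-order expressions can be cross-checked against a direct numerical evaluation of $U^{\rm (i)}_{\rm PMNS}=V_{\rm QM}U_{\rm TBM}W$; a minimal sketch at $\epsilon=0$ (so that $W$ reduces to the phase matrix $P_{\nu}$, which drops out of the mixing angles), using the central values of Eq. (\[eq:QMfh\]):

```python
import numpy as np

f, h, lam, delta = 0.749, 0.309, 0.22545, np.radians(89.6)

# Truncated QM matrix of Eq. (Vl1) (V_QM = V_L^{l dagger}, overall phases dropped)
Vqm = np.array([[1 - lam**2/2, lam*np.exp(1j*delta), h*lam**3],
                [-lam*np.exp(-1j*delta), 1 - lam**2/2, (f + h*np.exp(-1j*delta))*lam**2],
                [f*lam**3*np.exp(-1j*delta), -(f + h*np.exp(1j*delta))*lam**2, 1]])
Utbm = np.array([[np.sqrt(2/3), 1/np.sqrt(3), 0],
                 [-1/np.sqrt(6), 1/np.sqrt(3), -1j/np.sqrt(2)],
                 [-1/np.sqrt(6), 1/np.sqrt(3),  1j/np.sqrt(2)]])

U = Vqm @ Utbm  # U_PMNS^(i) at eps = 0

s13 = abs(U[0, 2])
s12sq = abs(U[0, 1])**2 / (1 - s13**2)   # Eq. (mixing1)
s23sq = abs(U[1, 2])**2 / (1 - s13**2)

print(np.degrees(np.arcsin(s13)))  # ~9.2 degrees, i.e. sin(theta13) ~ lam/sqrt(2)
print(s12sq, s23sq)                # ~0.346 and ~0.450
```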
Note that the size of the unknown parameter $\epsilon$ is constrained by the plot of $\sin\theta_{13}$ versus $\delta$ in Fig. \[Fig1\] where the horizontal dot-dashed lines represent the present T2K data for the normal neutrino mass hierarchy. For a negative value of $\epsilon$, the plot for $\sin\theta_{13}$ versus $\delta$ is flipped upside-down. Assuming $\rho>0$, we see from Eq. (\[masssquare\]) that a positive (negative) value of $\epsilon$ leads to a normal (inverted) neutrino mass spectrum. For example, we find $\frac{\lambda}{\sqrt{2}}\leq\sin\theta_{13}\lesssim0.22$ ($0.07\lesssim\sin\theta_{13}\leq\frac{\lambda}{\sqrt{2}}$) for $\delta=1.56$ and $\epsilon\leq0.08$ ($\epsilon\geq-0.11$). Leptonic [*CP*]{} violation can be detected through neutrino oscillations, which are sensitive to the Dirac [*CP*]{}-phase $\delta_{CP}$, but insensitive to the Majorana phases in $U_{\rm PMNS}$ [@Branco:2002xf]. It follows from Eqs. (\[mixing elements2\]) and (\[mixing1\]) that the Dirac phase $\delta_{CP}=\alpha_1-\alpha_3$ has the expression $$\begin{aligned} \delta_{CP}= \tan^{-1}\left( \frac{\sqrt{3}\lambda\{-2\cos\delta+\lambda\cos2\delta+\lambda^2 \cos\delta+\sqrt{3}\epsilon\lambda\sin2\delta\}}{2\sqrt{3}\lambda\sin \delta\{1-\lambda\cos\delta-\frac{\lambda^2}{2}\}-4\epsilon\{1-\lambda \cos\delta-\frac{3}{2}\lambda^2\cos^2\delta-\lambda^3(h-\frac{\cos\delta}{2})\}} \right) ~, \nonumber \\ \label{DiracCP1} \end{aligned}$$ where terms of order $\epsilon^3,\lambda^4,\epsilon^2\lambda^2$ have been neglected in both numerator and denominator. Assuming $\rho>0$, we show in Table \[DiracCP11\] the predictions for $\delta_{CP}$ and $\theta_{13}$ as a function of $\epsilon$, where we have used the central values of Eq. (\[eq:QMfh\]).
$\epsilon$ $\delta_{CP}~[{\rm deg.}]$ $\theta_{13}~[{\rm deg.}]$ ------------------- ---------------------------- ---------------------------- $-0.012\sim0.08$ $-173.6\sim-169$ $9.4\sim5.8$ $-0.11\sim-0.012$ $184.6\sim186.4$ $14.4\sim9.4$ : \[DiracCP11\] Predictions of $\delta_{CP}$ and $\theta_{13}$ as a function of $\epsilon$ in the case of $V^{\ell\dag}_{L}=V_{\rm QM}$. To see how the parameters are correlated with low energy [*CP*]{} violation measurable through neutrino oscillations, let us consider the leptonic [*CP*]{} violation parameter defined through the Jarlskog invariant $J_{CP}\equiv{\rm Im}[U_{e1}U_{\mu2}U^{\ast}_{e2}U^{\ast}_{\mu1}] =\frac{1}{8}\sin2\theta_{12}\sin2\theta_{23} \sin2\theta_{13}\cos\theta_{13} \sin\delta_{CP}$ [@Jarlskog:1985ht] which is expressed as $$\begin{aligned} \label{JCP1} J_{CP}&=&-\frac{\epsilon}{3\sqrt{3}}-\frac{\lambda}{6}\left(\sin\delta+\epsilon\frac{4\cos\delta} {\sqrt{3}}-2\epsilon^{2}\sin\delta\right) \\ &-&\frac{\lambda^{2}}{9}\left(\sin\delta(h+\cos\delta)-\epsilon\sqrt{3}(1+f-\cos2\delta+h\cos\delta) -\epsilon^{2}(h+4\cos\delta)\sin\delta\right). \nonumber \end{aligned}$$ We see from the above equation that $J_{CP}$ is strongly correlated with $\epsilon$ and $\delta$ for the fixed values of $\lambda,~h$ and $f$. As long as $\epsilon\neq0$ (associated with the neutrino part) or $\lambda\neq0$ (associated with the charged lepton part), $J_{CP}$ has a non-vanishing value, indicating a signal of [*CP*]{} violation. Eq. (\[JCP1\]) could be approximated as $J_{CP}\approx-\frac{\epsilon}{3\sqrt{3}}-\frac{\lambda}{6}\sin\delta$. The behavior of $J_{CP}$ is plotted in Fig. \[Fig1\] as a function of $\delta$. When $\sin\delta\approx1$, it is reduced to $J_{CP}\approx-\frac{\epsilon}{3\sqrt{3}}-\frac{\lambda}{6} \leq-\frac{\lambda}{6}~(\geq-\frac{\lambda}{6})$ for $\epsilon>0~(\epsilon<0)$. 
Assuming $\rho>0$, we find $-0.050\lesssim J_{CP}\lesssim-0.037$ ($-0.037\lesssim J_{CP}\lesssim-0.017$) for $\epsilon\leq0.08$ ($\epsilon\geq-0.11$) and $\delta=1.56$. 0.4cm [**(ii)**]{} $V^{\ell}_{L}=V_{\rm QM}$ 0.5cm The resulting leptonic mixing matrix in this case can be expressed, up to order of $\lambda^{3}$ and $\epsilon^{2}$, as $$\begin{aligned} \label{leptonB} U_{\rm PMNS}^{\rm (ii)}&=&U^{\dag}_{L}~U_{\nu}(\pi/4+\epsilon)= V^{\dag}_{\rm QM} U_{\rm TBM}W \nonumber\\ &=&U_{\rm TBM}+{\left(\begin{array}{ccc} -\frac{\epsilon^{2}}{2}\sqrt{\frac{2}{3}} & 0 & \epsilon\sqrt{\frac{2}{3}} \\ \frac{i\epsilon}{\sqrt{2}}+\frac{\epsilon^{2}}{2\sqrt{6}} & 0 & -\frac{\epsilon}{\sqrt{6}}+\frac{i\epsilon^{2}}{2\sqrt{2}} \\ -\frac{i\epsilon}{\sqrt{2}}+\frac{\epsilon^{2}}{2\sqrt{6}} & 0 & -\frac{\epsilon}{\sqrt{6}}-\frac{i\epsilon^{2}}{2\sqrt{2}} \end{array}\right)}\nonumber\\ &+&\lambda{\left(\begin{array}{ccc} \frac{e^{i\delta}-\lambda-fe^{i\delta}\lambda^{2}}{\sqrt{6}} & -\frac{e^{i\delta}+\frac{1}{2}\lambda-fe^{i\delta}\lambda^{2}}{\sqrt{3}} & \frac{ie^{i\delta}(1+f\lambda^{2})}{\sqrt{2}} \\ \frac{2 e^{-i\delta}+(\frac{1}{2}+f+he^{-i\delta})\lambda}{\sqrt{6}} & \frac{e^{-i\delta}-(\frac{1}{2}+f+he^{-i\delta})\lambda}{\sqrt{3}} & \frac{i(\frac{1}{2}-f-he^{-i\delta})\lambda}{\sqrt{2}} \\ -\frac{(f+he^{i\delta})\lambda-2\lambda^{2}h}{\sqrt{6}} & \frac{(f+he^{i\delta})\lambda+h\lambda^{2}}{\sqrt{3}} & -\frac{i(f+he^{i\delta})\lambda}{\sqrt{2}} \end{array}\right)} \\ &+& \lambda\epsilon{\left(\begin{array}{ccc} -\frac{ie^{i\delta}(1+f\lambda^{2})}{\sqrt{2}}+\frac{-e^{i\delta}\epsilon +\lambda\epsilon+fe^{i\delta}\lambda^{2}\epsilon}{2\sqrt{6}} & 0 & \frac{e^{i\delta}-\lambda-fe^{i\delta}\lambda^{2}}{\sqrt{6}} +\frac{-e^{i\delta}\epsilon-ife^{i\delta}\lambda^{2}\epsilon}{2\sqrt{2}} \\ \frac{i(f+he^{-i\delta}-\frac{1}{2})\lambda}{\sqrt{2}}-\frac{2e^{-i\delta} \epsilon+(\frac{1}{2}+f+he^{-i\delta})\lambda\epsilon}{2\sqrt{6}} & 0 & 
\frac{2e^{-i\delta}+(\frac{1}{2}+f+he^{-i\delta})\lambda}{\sqrt{6}} +\frac{i(f+he^{-i\delta}-\frac{1}{2})\lambda\epsilon}{2\sqrt{2}} \\ \frac{i(f+he^{i\delta})\lambda}{\sqrt{2}}+\frac{(f+he^{i\delta}) \lambda\epsilon-2h\lambda^{2}\epsilon}{2\sqrt{6}} & 0 & -\frac{(f+he^{i\delta})\lambda+2\lambda^{2}h}{\sqrt{6}}+\frac{i(f+he^{i\delta})\lambda\epsilon}{2\sqrt{2}} \end{array}\right)}~. \nonumber \end{aligned}$$ Just as in case ([i]{}), the exact TBM is recovered when both $\epsilon$ and $\lambda$ go to zero. With the help of Eqs. (\[mixing1\]) and (\[leptonB\]), the solar neutrino mixing angle $\theta_{12}$ can be approximated as $$\begin{aligned} \sin^{2}\theta_{12}&\simeq&\frac{1}{3}+\frac{2\epsilon^{2}}{9}-\frac{2\lambda}{3} \left(\cos\delta+\frac{\epsilon\sin\delta}{\sqrt{3}}+\frac{\epsilon^2\cos\delta}{3}\right)\nonumber\\ &+&\frac{\lambda^{2}}{3}\left(\frac{1}{2}+\frac{2\epsilon\sin2\delta}{\sqrt{3}} -\frac{\epsilon^{2}}{3}(3+4\cos^{2}\delta)\right)~\nonumber\\ &+&\frac{\lambda^{3}}{3}\left(2f\cos\delta+\frac{\epsilon\sin\delta}{\sqrt{3}}(1-2f) +\frac{\epsilon^{2}\cos\delta}{3}(2f+7)\right)\ , \label{Sol2} \end{aligned}$$ which leads to, as in case ([i]{}), a tiny deviation from $\sin^{2}\theta_{12}=1/3$ when $\cos\delta \to 0$ and $\lambda>|\epsilon|$. As expected, since the second column related to $\epsilon$ in the matrix Eq. (\[leptonB\]) is zero, the solar mixing angle is not affected to the first order of $\epsilon$. Because of a minus sign in front of the $\lambda\cos\delta$ term, which constitutes the major correction to $\sin\theta_{12}$, the plot of $\sin^{2}\theta_{12}$ versus $\delta$ (see Fig. \[Fig2\]) is turned upside-down, contrary to case (i). When $\sin\delta \approx 1$, the present data of the solar mixing angle are well accommodated even for a large $|\epsilon|$ (but less than $\lambda$). The allowed regions for $\delta$ lie in the ranges of $1.0<\delta<1.7$ and $4.5<\delta<5.3$. 
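The leading behavior of Eq. (\[Sol2\]) near $\cos\delta\to0$ can be checked numerically. The sketch below keeps only the terms through $O(\lambda\epsilon^{2})$ (higher orders in $\lambda$ dropped) and assumes $\lambda\simeq0.2253$, a value not restated in this excerpt:

```python
import math

LAM = 0.2253  # assumed Wolfenstein-like parameter

def sin2_theta12(eps, delta, lam=LAM):
    # Truncation of the sin^2(theta12) expansion at O(lam * eps^2)
    return (1.0 / 3.0 + 2.0 * eps**2 / 9.0
            - (2.0 * lam / 3.0) * (math.cos(delta)
                                   + eps * math.sin(delta) / math.sqrt(3.0)
                                   + eps**2 * math.cos(delta) / 3.0))

print(sin2_theta12(0.0, math.pi / 2))  # ~ 1/3: the lambda correction vanishes
print(sin2_theta12(0.08, 1.56))        # small downward shift from 1/3
```

At $\delta$ near $\pi/2$ the result stays close to $1/3$ even for $|\epsilon|$ as large as the quoted ranges, in line with the statement that the solar-angle data are well accommodated.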
This indicates that when the [*CP*]{}-odd phase $\delta$ is near maximal, the data of $\sin^2\theta_{12}$ can be easily accommodated in case (ii) but only marginally in case (i). Hence, precise measurements of the solar mixing angle in future experiments will tell which scenario is preferable. From Eqs. (\[mixing1\]) and (\[leptonB\]), the atmospheric neutrino mixing angle $\theta_{23}$ comes out as $$\begin{aligned}
\sin^{2}\theta_{23}&\simeq&\frac{1}{2}+\frac{\epsilon\lambda}{\sqrt{3}} \left(\sin\delta+\frac{\epsilon\cos\delta}{\sqrt{3}}(1-2\sqrt{3})\right)\nonumber\\
&-&\lambda^{2}\left(\frac{1}{4}-f-h\cos\delta-\epsilon\frac{2h\sin\delta}{\sqrt{3}} +\frac{\epsilon^{2}}{3}(2\sin^{2}\delta+f+h\cos\delta)\right)~\nonumber\\
&+&\frac{\lambda^{3}\epsilon}{2}\Bigg(\frac{\sqrt{3}}{3}\sin\delta(3-2f-4h\cos\delta) \nonumber \\
&-& \frac{\epsilon}{3}\left[(1+2\sqrt{3}-6f-4h\cos\delta)\cos\delta+8h\sin^{2}\delta-4h\right]\Bigg) \ . \label{Atm2}
\end{aligned}$$ Fig. \[Fig2\] shows a small deviation from the TBM atmospheric mixing angle with $\theta_{23}>45^\circ$, recalling that $\theta_{23}<45^\circ$ in case (i). It is thus crucial to have precise measurements of the atmospheric mixing angle in the future to see whether $\theta_{23}\leq45^{\circ}$ or $\theta_{23}\geq45^{\circ}$, in order to test the different scenarios. For $\sin\delta\approx1$ the deviation from maximal mixing of $\theta_{23}$ is approximated as $\sin^{2}\theta_{23}-\frac{1}{2}\approx\frac{\epsilon\lambda}{\sqrt{3}}+\lambda^{2}(f-\frac{1}{4})$, which leads to $\sin^2\theta_{23}>1/2$ for $0<|\epsilon|<\lambda$. The behavior of $\sin^{2}\theta_{23}$ is plotted in Fig. \[Fig2\] as a function of $\delta$.
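The sign of the deviation from maximal atmospheric mixing can be illustrated numerically with the $\sin\delta\approx1$ approximation $\sin^{2}\theta_{23}-\frac{1}{2}\approx\frac{\epsilon\lambda}{\sqrt{3}}+\lambda^{2}(f-\frac{1}{4})$. Both input values below are assumptions of this sketch: $\lambda\simeq0.2253$, and $f=0.75$ is only an illustrative placeholder for the central value of Eq. (\[eq:QMfh\]), which is not restated in this excerpt:

```python
import math

LAM = 0.2253  # assumed Wolfenstein-like parameter
F = 0.75      # illustrative placeholder for f (actual central value not given here)

def theta23_deviation(eps, lam=LAM, f=F):
    # sin^2(theta23) - 1/2 at sin(delta) ~ 1, keeping only the leading terms
    return eps * lam / math.sqrt(3.0) + lam**2 * (f - 0.25)

for eps in (-0.07, 0.0, 0.12):
    print(f"eps = {eps:+.2f}  ->  sin^2(theta23) ~ {0.5 + theta23_deviation(eps):.3f}")
```

With these (assumed) inputs the deviation is positive throughout, i.e. $\theta_{23}>45^{\circ}$, which is the qualitative behavior claimed for case (ii).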
Likewise, the reactor mixing angle $\theta_{13}$ can be written as $$\begin{aligned}
\sin^2\theta_{13}&=&\frac{2\epsilon^{2}}{3}+\frac{2\epsilon\lambda\sin\delta}{3} (\epsilon\cos\delta-\sqrt{3}\sin\delta)+\lambda^{2}\left(\frac{1}{2}-\epsilon^{2}\right) \nonumber \\
&+& \lambda^{3}\epsilon\left(\frac{\sin\delta(1-2f)}{\sqrt{3}}-\frac{\epsilon\cos\delta(1+2f)}{3}\right)~. \label{Reactor2}
\end{aligned}$$ We find $0.07 \lesssim\sin\theta_{13}\leq\frac{\lambda}{\sqrt{2}}$ ($\frac{\lambda}{\sqrt{2}}\leq\sin\theta_{13}\lesssim 0.22$) for $\delta=1.56$ and $\epsilon\leq0.12$ ($\epsilon\geq-0.07$). The Dirac phase $\delta_{CP}$ has the expression $$\begin{aligned}
\delta_{CP}&=&\tan^{-1}\left(\frac{\lambda\{\sqrt{3}(2-\epsilon^{2})(1-f\lambda^{2}) \sin\delta-6\epsilon(1+f\lambda^{2})\cos\delta\}}{\sqrt{3}(2-\epsilon^{2})\{2-\lambda^{2} +(1-f\lambda^{2})\lambda\cos\delta\}+6\epsilon\lambda(1+f\lambda^{2})\sin\delta}\right) \nonumber\\
&-&\tan^{-1}\left(\frac{\lambda\{2\sqrt{3}\epsilon(1-f\lambda^{2})\sin\delta+3(2-\epsilon^{2}) (1+f\lambda^{2})\cos\delta\}}{3\lambda(\epsilon^{2} -2)(1+f\lambda^{2})\sin\delta+2\sqrt{3}\epsilon(2-\lambda^{2}+(1-f\lambda^{2})\cos\delta)}\right)~. \label{Diracphase2}
\end{aligned}$$ Assuming $\rho>0$, we show in Table \[DiracCP21\] the predictions for $\delta_{\rm CP}$ and $\theta_{13}$ as a function of $\epsilon$, where we have focused on the central values of Eq. (\[eq:QMfh\]).

  $\epsilon$     $\delta_{CP}~[{\rm deg.}]$   $\theta_{13}~[{\rm deg.}]$
  -------------- ---------------------------- ----------------------------
  $0\sim0.08$    $-7.1\sim-5.2$               $9.1\sim12.6$
  $-0.11\sim0$   $-15.6\sim-7.1$              $4.1\sim9.1$

  : \[DiracCP21\] Predictions of $\delta_{CP}$ and $\theta_{13}$ as a function of $\epsilon$ in the case of $V^{\ell}_{L}=V_{\rm QM}$.

The strength of [*CP*]{} violation $J_{CP}$ can be expressed in a similar way to Eq.
(\[JCP1\]) $$\begin{aligned}
\label{JCP2}
J_{CP}&=&-\frac{\epsilon}{3\sqrt{3}}+\frac{\lambda}{6}\left(\sin\delta+\epsilon\frac{4\cos\delta}{\sqrt{3}}-2\epsilon^{2}\sin\delta\right) \\
&+&\frac{\lambda^{2}}{9}\left(\sin\delta(h-\cos\delta)+\epsilon\sqrt{3}(1-f-\cos2\delta-h\cos\delta)-\epsilon^{2}(h-4\cos\delta)\sin\delta\right), \nonumber
\end{aligned}$$ which can be approximated as $J_{CP}\approx-\frac{\epsilon}{3\sqrt{3}}+\frac{\lambda}{6}\sin\delta$. When $\sin\delta\approx1$, it is further reduced to $J_{CP}\approx-\frac{\epsilon}{3\sqrt{3}}+\frac{\lambda}{6}\leq\frac{\lambda}{6}~(\geq\frac{\lambda}{6})$ for $\epsilon>0~(\epsilon<0)$. Assuming $\rho>0$, we see from Fig. \[Fig2\] that $0.014\lesssim J_{CP}\lesssim0.037$ ($0.037\lesssim J_{CP}\lesssim0.05$) for $\epsilon\leq0.12$ ($\epsilon\geq-0.07$) and $\delta=1.56$.

Conclusion
==========

In their original work, Harrison, Perkins and Scott proposed simple charged lepton and neutrino mass matrices that lead to the tribimaximal mixing $U_{\rm TBM}$. In this paper we considered a general extension of the mass matrices so that the lepton mixing matrix becomes $U_{\rm PMNS}=V_L^{\ell\dagger}U_{\rm TBM}W$. Hence, corrections to the tribimaximal mixing arise from both the charged lepton and neutrino sectors: the charged lepton mixing matrix $V_L^\ell$ measures the deviation from the trimaximal form, and the $W$ matrix characterizes the departure of the neutrino mixing from the bimaximal one. Following our previous work, we assume a Qin-Ma-like parametrization $V_{\rm QM}$ for $V_L^\ell$ in which the [*CP*]{}-odd phase is approximately maximal, and study the phenomenological implications in two different scenarios: $V_L^\ell=V_{\rm QM}^\dagger$ and $V_L^\ell=V_{\rm QM}$. We found that both scenarios are consistent with the data within the $3\sigma$ ranges. In particular, the predicted central value of the reactor neutrino mixing angle, $\theta_{13}=9.2^\circ$, is in good agreement with the recent T2K data.
However, the data of $\sin^2\theta_{12}$ can be easily accommodated in the second scenario but only marginally in the first one. Hence, precise measurements of the solar mixing angle in future experiments will test which scenario is preferable. The leptonic [*CP*]{} violation characterized by the Jarlskog invariant $J_{\rm CP}$ is generally of order $10^{-2}$.

[**Acknowledgments**]{}

This work was supported in part by the National Science Council of R.O.C. under Grant Numbers NSC-97-2112-M-008-002-MY3, NSC-100-2112-M-001-009-MY3 and NSC-99-2811-M-001-038.

[99]{} P. F. Harrison, D. H. Perkins and W. G. Scott, Phys. Lett. B [**530**]{}, 167 (2002) \[arXiv:hep-ph/0202074\]; P. F. Harrison and W. G. Scott, Phys. Lett. B [**535**]{}, 163 (2002) \[arXiv:hep-ph/0203209\]. X. G. He, Y. Y. Keum and R. R. Volkas, JHEP [**0604**]{}, 039 (2006) \[arXiv:hep-ph/0601001\]. K. Abe [*et al.*]{} \[T2K Collaboration\], Phys. Rev. Lett. [**107**]{}, 041801 (2011) \[arXiv:1106.2822 \[hep-ex\]\]. P. Adamson [*et al.*]{} \[MINOS Collaboration\], arXiv:1108.0015 \[hep-ex\]. M. C. Gonzalez-Garcia, M. Maltoni and J. Salvado, JHEP [**1004**]{}, 056 (2010) \[arXiv:1001.4524v3 \[hep-ph\]\]. G. L. Fogli, E. Lisi, A. Marrone, A. Palazzo and A. M. Rotunno, Phys. Rev. D [**84**]{}, 053007 (2011) \[arXiv:1106.6028 \[hep-ph\]\]. K.
Nakamura [*et al*]{}. (Particle Data Group), J. Phys. G [**37**]{}, 075021 (2010). J. Schechter and J. W. F. Valle, Phys. Rev.  D [**25**]{}, 2951 (1982). N. Cabibbo, Phys. Lett.  B [**72**]{}, 333 (1978). L. Wolfenstein, Phys. Rev. D [**18**]{}, 958 (1978); P. F. Harrison and W. G. Scott, Phys. Lett. B [**333**]{}, 471 (1994); R. N. Mohapatra and S. Nussinov, Phys. Lett. B [**346**]{}, 75 (1995). Z. Z. Xing, Phys. Lett.  B [**533**]{}, 85 (2002) \[arXiv:hep-ph/0204049\]. Y. Shimizu, M. Tanimoto and A. Watanabe, Prog. Theor. Phys.  [**126**]{}, 81 (2011) \[arXiv:1105.2929 \[hep-ph\]\]; N. Qin and B. Q. Ma, Phys. Lett.  B [**702**]{}, 143 (2011) \[arXiv:1106.3284 \[hep-ph\]\]; Y. J. Zheng and B. Q. Ma, arXiv:1106.4040 \[hep-ph\]; E. Ma and D. Wegman, Phys. Rev. Lett.  [**107**]{}, 061803 (2011) \[arXiv:1106.4269 \[hep-ph\]\]; X. G. He and A. Zee, Phys. Rev.  D [**84**]{}, 053004 (2011) \[arXiv:1106.4359 \[hep-ph\]\]; T. Araki, Phys. Rev.  D [**84**]{}, 037301 (2011) \[arXiv:1106.5211 \[hep-ph\]\]; S. Morisi, K. M. Patel and E. Peinado, Phys. Rev.  D [**84**]{}, 053002 (2011) \[arXiv:1107.0696 \[hep-ph\]\]; W. Chao and Y. J. Zheng, arXiv:1107.0738 \[hep-ph\]; S. Dev, S. Gupta and R. R. Gautam, Phys. Lett.  B [**704**]{}, 527 (2011) \[arXiv:1107.1125 \[hep-ph\]\]; R. d. A. Toorop, F. Feruglio and C. Hagedorn, Phys. Lett.  B [**703**]{}, 447 (2011) \[arXiv:1107.3486 \[hep-ph\]\]. Y. H. Ahn, H. Y. Cheng and S. Oh, arXiv:1105.4460 \[hep-ph\] (unpublished). S. Dev, S. Gupta and R. R. Gautam, Phys. Lett.  B [**704**]{}, 527 (2011) \[arXiv:1107.1125 \[hep-ph\]\]; P. S. Bhupal Dev, R. N. Mohapatra and M. Severson, Phys. Rev.  D [**84**]{}, 053005 (2011) \[arXiv:1107.2378 \[hep-ph\]\]. Y. H. Ahn, H. Y. Cheng and S. Oh, Phys. Rev.  D [**83**]{}, 076012 (2011) \[arXiv:1102.0879 \[hep-ph\]\]. N. Cabibbo, Phys. Rev. Lett. [**10**]{}, 531 (1963); M. Kobayashi and T. Maskawa, Prog. Theor. Phys.  [**49**]{}, 652 (1973). N. Qin and B. Q. Ma, Phys. Rev.  
D [**83**]{}, 033006 (2011) \[arXiv:1101.4729 \[hep-ph\]\]. L. Wolfenstein, Phys. Rev. Lett.  [**51**]{}, 1945 (1983). Y. Koide and H. Nishiura, Phys. Rev.  D [**79**]{}, 093005 (2009) \[arXiv:0811.2839 \[hep-ph\]\]. CKMfitter Group, J. Charles [*et al.,*]{} Eur. Phys. J. C [**41**]{}, 1 (2005) and updated results from http://ckmfitter.in2p3.fr. Y. H. Ahn, H. Y. Cheng, S. Oh, Phys. Lett. B [**701**]{}, 614 (2011) \[arXiv:1105.0450 \[hep-ph\]\]. G. C. Branco, R. Gonzalez Felipe, F. R. Joaquim, I. Masina, M. N. Rebelo and C. A. Savoy, Phys. Rev.  D [**67**]{}, 073025 (2003) \[arXiv:hep-ph/0211001\]. C. Jarlskog, Phys. Rev. Lett.  [**55**]{}, 1039 (1985); D. d. Wu, Phys. Rev.  D [**33**]{}, 860 (1986). [^1]: Email: yhahn@phys.sinica.edu.tw [^2]: Email: phcheng@phys.sinica.edu.tw [^3]: Email: scoh@phys.sinica.edu.tw [^4]: The matrix originally given by Cabibbo was in the form $$\begin{aligned} V_{C}=\frac{1}{\sqrt{3}}{\left(\begin{array}{ccc} 1 & 1 & 1 \\ 1 & \omega & \omega^* \\ 1 & \omega^* & \omega \end{array}\right)}~. \nonumber \end{aligned}$$ If one considers $A_{4}$ discrete symmetry, it will have two subgroups, namely, $Z_{2}$ and $Z_{3}$. The trimaximal matrix given in Eq. (\[Cabibbo\]) is obtained under $Z_{3}$. [^5]: Different from the choice of HPS, the matrix element $y$ in Eq. (\[mass1\]) can be in general introduced as complex: e.g., $(M^{2}_{\nu})_{13}= y$ and $(M^{2}_{\nu})_{31} = y^*$. This case has been considered by Xing [@Xing:2002sw] who pointed out that the off-diagonal terms in $U_{\rm BM}$ will acquire a phase from the complex $y$. It has the interesting implication that a nonzero $\sin\theta_{13}$ will result from the phase of $y$. However, the corresponding Jarlskog invariant is exactly zero and the absence of intrinsic [*CP*]{} violation makes this possibility less interesting. [^6]: TBM could be obtained in models with different discrete symmetries, such as $S_{3},A_{4},S_{4},A_{5}$, dihedral groups, $\cdots$, etc. 
By considering higher-order and radiative effects, the matrices in Eq. (\[mass2\]) can be realized. For example, we have shown in Ref. [@Ahn:2011yj] that these matrices can be obtained by introducing dimension-5 operators into the Lagrangians. [^7]: Our previous work [@Ahn:2011ep] corresponds to case (i) with $\epsilon=0$.
---
abstract: 'We present a method to engineer the unitary charge conjugation operator, as given by quantum field theory, in the highly controlled context of quantum optics, thus allowing one to simulate the creation of charged particles with well-defined momenta simultaneously with their respective antiparticles. Our method relies on trapped ions driven by a laser field and interacting with a single mode of a light field in a high Q cavity.'
author:
- 'N.G. de Almeida'
title: 'Engineering the unitary charge-conjugation operator of quantum field theory for particle-antiparticle using trapped ions and light fields in cavity QED'
---

Introduction
============

The Theory
==========

Although well known in quantum field theory, let us begin, for clarity, with a brief review of the main results used here. Charge conjugation is a symmetry of the theory in the following sense: given a Lagrangian density $\mathcal{L}(x)$ and a unitary charge-conjugation operator $C$, one has $C^{-1}\mathcal{L}(x)C=\mathcal{L}(x)$. To be specific, consider first [@Greiner; @book; @Ryder; @book] the complex Klein-Gordon field for spin-zero particles $$\mathcal{L}\textrm{(x)}=(\partial_{\mu}\phi)(\partial^{\mu}\phi^{*})-m^{2}\phi\phi^{*}$$ where $\phi$ and $\phi^{*}$ are considered independent complex fields, and $m$ is the mass of the particle associated with the field excitations. The Euler-Lagrange equations lead to the Klein-Gordon equation $$(\partial_{\mu}\partial^{\mu}+m^{2})\phi=0.$$ In field quantization, the complex fields $\phi$ and $\phi^{*}$ are promoted to operators, such that ($x\equiv(x,y,z),k\equiv(k_{x},k_{y},k_{z})$) $$\phi(x,t)=\sum_{k}[a_{k}u_{k}(x,t)+b_{k}^{\dagger}u_{k}^{*}(x,t)],$$ $$\phi^{\dagger}(x,t)=\sum_{k}[a_{k}^{\dagger}u_{k}^{*}(x,t)+b_{k}u_{k}(x,t)],$$ are the field operators replacing the complex scalar fields.
The functions $u_{k}(x,t)$ and $u_{k}^{*}(x,t)$ are box normalized by imposing periodic boundary conditions on the surface of a cube of volume $V$, such that $(u_{j},u_{l})=\delta_{jl}$, $j,l=(k_{x},k_{y},k_{z})$, and canonical commutation relations are imposed on the bosonic operators $a_{k}$, $a_{k}^{\dagger}$, $b_{k}$, $b_{k}^{\dagger}$. Both $\mathcal{L}\textrm{(x)}$ in Eq.(1) and Eq.(2) are invariant under the gauge transformation of the first kind $\phi\rightarrow\phi\exp(i\Lambda)$, and so by Noether’s theorem there is a conserved current $j^{\mu}$ and a conserved charge $Q=\int j^{0}d^{3}x$, whose density is given by $\mathit{\mathcal{Q}}=-i:\phi^{\dagger}\frac{\partial\phi}{\partial t}-\phi\frac{\partial\phi^{\dagger}}{\partial t}:$, where $::$ denotes normal ordering. Using Eqs.(3)-(4) and the definition $(\psi,\chi)\equiv i\int d^{3}x\psi^{*}\overleftrightarrow{\partial_{0}}\chi=i\int d^{3}x(\partial_{0}\psi^{*}\chi-\psi^{*}\partial_{0}\chi)$, the charge operator can be written as $$Q=\sum_{k}(a_{k}^{\dagger}a_{k}-b_{k}^{\dagger}b_{k}).$$ Note that, given an eigenstate $\left\vert q\right\rangle $ of $Q$ with eigenvalue $q$, $\phi^{\dagger}\left\vert q\right\rangle $ ($\phi\left\vert q\right\rangle $) is also an eigenstate of $Q$, with eigenvalue $q+1$ ($q-1$), and, since $Q$ commutes with the Hamiltonian, it is a conserved quantity, as it should be. The charge-conjugation operator $C$ is defined to transform a particle into its antiparticle, i.e., $$\mathit{C^{-1}}\phi(x,t)C=p\phi^{\dagger}(x,t);\quad C^{-1}\phi^{\dagger}(x,t)C=p^{*}\phi(x,t),$$ which is equivalent to $\mathit{C^{-1}}a_{k}C=pb_{k}$, $C^{-1}a_{k}^{\dagger}C=p^{*}b_{k}^{\dagger}$, and $C^{-1}b_{k}C=p^{*}a_{k}$, $C^{-1}b_{k}^{\dagger}C=pa_{k}^{\dagger}$, with $\left|p\right|=1$.
The charge-conjugation operator $C$ that we want to simulate has the following unitary form [@Greiner; @book] $$C=\exp\left[-\frac{i\pi}{2}\sum_{k}\left[a_{k}^{\dagger}b_{k}+b_{k}^{\dagger}a_{k}-p\left(a_{k}^{\dagger}a_{k}+b_{k}^{\dagger}b_{k}\right)\right]\right].$$ Note that $Q$ anticommutes with $C$, $CQ=-QC$, and therefore in general they do not possess the same eigenstates. Consider now the Dirac field for particles of mass $m$ and spin $1/2$, whose Lagrange density is $$\mathcal{L}\textrm{(x)}=\frac{i}{2}\left[\overline{\psi}\gamma^{\mu}\left(\partial_{\mu}\psi\right)-\left(\partial_{\mu}\overline{\psi}\right)\gamma^{\mu}\psi\right]-m\overline{\psi}\psi\equiv\frac{i}{2}\overline{\psi}\gamma^{\mu}\overleftrightarrow{\partial_{\mu}}\psi-m\overline{\psi}\psi$$ where $\gamma^{\mu}$, $\mu=0,1,2,3$, are the Dirac matrices and $\overline{\psi}=\psi^{\dagger}\gamma^{0}$. The Euler-Lagrange equations now lead to the Dirac equation $$\left(i\gamma^{\mu}\partial_{\mu}-m\right)\psi=0,$$ whose general solutions can be expanded in plane waves ($k\equiv(k_{x},k_{y},k_{z})$) $$\psi(x,t)=\sum_{k,s}[c_{k,s}u_{k,s}\exp-i(kx-\omega_{k}t)+d_{k,s}^{\dagger}v_{k,s}\exp i(kx-\omega_{k}t)],$$ $$\psi^{\dagger}(x,t)=\sum_{k,s}[c_{k,s}^{\dagger}u_{k,s}^{\dagger}\exp i(kx-\omega_{k}t)+d_{k,s}v_{k,s}^{\dagger}\exp-i(kx-\omega_{k}t)],$$ where $s=1,2$ denotes the covariantly generalized spin vector [@Greiner; @book; @Ryder; @book] for the orthogonal positive-energy $u_{k,s}(x,t)$ and negative-energy $v_{k,s}(x,t)$ spinors, and the fermionic operators $c_{k,s}$, $c_{k,s}^{\dagger}$ and $d_{k,s}$, $d_{k,s}^{\dagger}$ obey anticommutation relations, being interpreted as creation and annihilation operators of particles and antiparticles.
Using Eqs.(10)-(11) and the orthogonality relations for the spinors, $(u_{k,s}^{\dagger},u_{k,s^{'}})=(v_{k,s}^{\dagger},v_{k,s^{'}})=\delta_{ss^{'}}$ and $(u_{-k,s}^{\dagger},v_{k,s^{'}})=(v_{-k,s}^{\dagger},u_{k,s^{'}})=0$, one finds for the charge operator $Q=\int j^{0}d^{3}x=e\int d^{3}x\psi^{\dagger}(x,t)\psi(x,t)$ $$Q=e\sum_{k,s}(c_{k,s}^{\dagger}c_{k,s}-d_{k,s}^{\dagger}d_{k,s}),$$ where the elementary charge $e$ was explicitly inserted in the current density vector $j_{\mu}=e\overline{\psi}(x,t)\gamma_{\mu}\psi(x,t)$. Now, similarly to the spinless case, requiring that the charge-conjugation operator $C$, besides being unitary, transforms a particle into its antiparticle as $$\mathit{C^{-1}}c_{k}C=d_{k},\mathit{C^{-1}}c_{k}^{\dagger}C=d_{k}^{\dagger},$$ $$\mathit{C^{-1}}d_{k}C=c_{k},\mathit{C^{-1}}d_{k}^{\dagger}C=c_{k}^{\dagger},$$ the $C$ operator, which we want to engineer, reads [@Greiner; @book] $$C=\exp-\frac{i\pi}{2}\sum_{k,s}\left[d_{k,s}^{\dagger}c_{k,s}+c_{k,s}^{\dagger}d_{k,s}-c_{k,s}^{\dagger}c_{k,s}-d_{k,s}^{\dagger}d_{k,s}\right].$$

Engineering the Unitary Charge-Conjugation Operator for Klein-Gordon and Dirac Fields
=====================================================================================

For our purpose, we consider just one pair of particle and antiparticle, such that the sum in Eq.(7) disappears and we are left essentially with a Hamiltonian of the type $H_{CC}=\frac{\pi}{2}\left[a^{\dagger}b+b^{\dagger}a-p\left(a^{\dagger}a+b^{\dagger}b\right)\right]$, which, in the interaction picture, reads $H_{INT}=\frac{\pi}{2}\left(a^{\dagger}b+b^{\dagger}a\right)$. Note that the Hamiltonian $H_{CC}$ producing the particle-antiparticle charge-conjugation operation can be cast into a single Hamiltonian given by $H_{I}=ga^{\dagger}b+g^{*}b^{\dagger}a$, provided that we choose $g=\left|g\right|\exp(i\pi/2)$.
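The single-pair reduction above can be verified numerically in a truncated two-mode Fock space. The sketch below is a toy check (not part of the experimental proposal): it exponentiates the single-pair exponent of Eq.(7) with $p=1$ and confirms that the resulting $C$ swaps a quantum between the "particle" mode $a$ and the "antiparticle" mode $b$, and anticommutes with the charge operator $Q=a^{\dagger}a-b^{\dagger}b$ on low-lying states, where the truncation is exact:

```python
import numpy as np
from scipy.linalg import expm

d = 5  # Fock truncation per mode (exact on blocks with total quanta <= d - 1)
lower = np.diag(np.sqrt(np.arange(1, d)), 1)  # single-mode annihilation operator
I = np.eye(d)
a = np.kron(lower, I)   # "particle" mode
b = np.kron(I, lower)   # "antiparticle" mode
ad, bd = a.conj().T, b.conj().T

p = 1
C = expm(-1j * np.pi / 2 * (ad @ b + bd @ a - p * (ad @ a + bd @ b)))
Q = ad @ a - bd @ b  # single-mode-pair charge operator, cf. Eq.(5)

one_particle = np.zeros(d * d)
one_particle[1 * d + 0] = 1.0          # |n_a = 1, n_b = 0>
swapped = C @ one_particle             # expect |n_a = 0, n_b = 1> up to a phase
print(abs(swapped[0 * d + 1]))         # ~ 1.0

anti = (Q @ C + C @ Q) @ one_particle  # CQ = -QC on this state
print(np.linalg.norm(anti))            # ~ 0.0
```

The exponent conserves the total quantum number, so the low-excitation blocks of the truncated matrices reproduce the infinite-dimensional algebra exactly.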
Now, consider a single two-level ion of mass $m$ whose transition frequency between the excited state $\left|e\right\rangle $ and the ground state $\left|g\right\rangle $ is $\omega_{0}$. This ion is trapped by a harmonic potential of frequency $\nu$ along the $x$ axis and driven by a laser field of frequency $\omega_{l}$. The laser field promotes transitions between the excited and ground states of the ion through the dipole coupling constant $\Omega=\left|\Omega\right|\exp(i\phi_{l})$. Finally, the ion is put inside a cavity containing a single mode of a standing-wave field of frequency $\omega_{f}$, such that the Hamiltonian for this system reads $H=H_{0}+H_{1}$, where ($\hbar=1$) $$H_{0}=\omega_{f}a^{\dagger}a+\nu b^{\dagger}b+\frac{\omega_{0}}{2}\sigma_{z}$$ $$H_{1}=\sigma_{eg}\Omega\exp[i(k_{l}x-\omega_{l}t)]+\lambda a\sigma_{eg}\cos(k_{f}x)+h.c.$$ Here, $h.c.$ stands for the Hermitian conjugate, $a^{\dagger}$ ($a$) is the creation (annihilation) operator of photons for the cavity mode field, and $b^{\dagger}$ ($b$) is the corresponding creation (annihilation) operator of phonons for the ion vibrational center of mass. The ion center-of-mass position operator is $x=\frac{1}{\sqrt{2m\nu}}(b^{\dagger}+b)$, while $\sigma_{z}=\left|e\right\rangle \left\langle e\right|-\left|g\right\rangle \left\langle g\right|$ and $\sigma_{ij}=\left|i\right\rangle \left\langle j\right|$, $i,j=\left\{ e,g\right\} $, are the Pauli operators. Next, we consider the so-called Lamb-Dicke regime, $\eta_{\alpha}\sqrt{\bar{n}_{\alpha}}\ll1$, where $\eta_{\alpha}=k_{\alpha}\sqrt{1/2m\nu}$, $\alpha=l,f$, and $\bar{n}_{\alpha}$ is the average photon/phonon number.
Thus, in the interaction picture and after discarding the counter-rotating terms in the so-called rotating wave approximation (RWA), assuming that $\omega_{0}-\omega_{l}\approxeq\nu$ and $\omega_{0}\approxeq\omega_{f}$, the Hamiltonian above reads $$H_{RWA}=i\eta_{l}\Omega b\sigma_{eg}\exp[i(\omega_{0}-\omega_{l}-\nu)t]+\lambda a\sigma_{eg}\exp[i(\omega_{0}-\omega_{f})t]+h.c.$$ An effective Hamiltonian can be obtained from the above one by requiring $\left|\eta_{l}\Omega\right|\sqrt{\left\langle b^{\dagger}b\right\rangle },\left|\lambda\right|\sqrt{\left\langle a^{\dagger}a\right\rangle }\ll\left|\omega_{0}-\omega_{l}-\nu\right|,\left|\omega_{0}-\omega_{f}\right|,$ and neglecting the highly oscillating terms stemming from [@James00] $$H_{eff}=-iH(t)\int^{t}H(t')dt',$$ where, in this notation, the lower limit is to be ignored. From Eq.(18) the following effective Hamiltonian is obtained, provided the internal state of the atom is prepared in the ground state $\left|g\right\rangle $ (the eigenstate of $\sigma_{gg}$): $$H_{eff}=\omega_{a}a^{\dagger}a+\omega_{b}b^{\dagger}b+ga^{\dagger}b+g^{*}b^{\dagger}a,$$ where $\omega_{a}=\left|\lambda_{a}\right|^{2}/(\omega_{0}-\nu)$; $\omega_{b}=\eta^{2}\left|\Omega\right|^{2}/(\omega_{l}-\omega_{0})$; $g=i\Omega\eta_{l}\lambda_{a}^{*}/(\omega_{0}-\nu)$, and an irrelevant constant was disregarded. We can simplify further by choosing $\omega_{a}=\omega_{b}=\omega$, such that the effective Hamiltonian, after the unitary operation $U=\exp[-i\omega t\left(a^{\dagger}a+b^{\dagger}b\right)]$, reads $$H_{eff}=ga^{\dagger}b+g^{*}b^{\dagger}a.$$ The desired charge-conjugation operation is obtained by applying a laser pulse of duration $\tau$ satisfying $g\tau=\frac{\pi}{2}\exp(i3\pi/2)$. It is to be noted that the particle and antiparticle behavior is encoded in the cavity-mode field, whose quanta are created by $a^{\dagger}$, and in the vibrational motion of the ion center of mass, whose quanta are created by $b^{\dagger}$.
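The pulse-area condition can be illustrated by propagating Eq.(21) directly. The toy simulation below (an illustration, not tied to specific experimental numbers) shows that a pulse with $|g|\tau=\pi/2$ transfers a single quantum completely from mode $a$ to mode $b$, whatever the phase of $g$ — the phase only appears in the amplitude of the transferred state:

```python
import numpy as np
from scipy.linalg import expm

d = 4  # Fock truncation per mode (illustrative)
lower = np.diag(np.sqrt(np.arange(1, d)), 1)
I = np.eye(d)
a, b = np.kron(lower, I), np.kron(I, lower)
ad, bd = a.conj().T, b.conj().T

g = np.exp(1j * 3 * np.pi / 2)   # |g| = 1, phase 3*pi/2 as in the text
tau = np.pi / 2                  # pulse area |g| * tau = pi / 2
H = g * ad @ b + np.conj(g) * bd @ a
U = expm(-1j * H * tau)

photon_in_a = np.zeros(d * d)
photon_in_a[1 * d + 0] = 1.0     # |n_a = 1, n_b = 0>
out = U @ photon_in_a
print(abs(out[0 * d + 1]) ** 2)  # population fully transferred to mode b (~ 1.0)
```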
The charge-conjugation operator engineering involving just two vibrational center-of-mass motions, along the axes $x$ and $y$ of a single ion of mass $m$, can be attained in the following way. Consider the Hamiltonian $H=H_{0}+H_{I}$ for a two-level ion confined in a two-dimensional harmonic trap of frequencies $\nu_{x}$ and $\nu_{y}$ and driven by two traveling-wave fields propagating in the $x$ and $y$ directions, respectively [@MoyaCessa12], with $$H_{0}=\nu_{x}a^{\dagger}a+\nu_{y}b^{\dagger}b+\frac{\omega_{0}}{2}\sigma_{z}$$ $$H_{1}=\sigma_{eg}\Omega_{x}\exp[-i\left(k_{x}x-\omega_{x}t\right)]+\sigma_{eg}\Omega_{y}\exp[-i\left(k_{y}y-\omega_{y}t\right)]+h.c.,$$ where $x=\sqrt{1/m\nu_{x}}(a+a^{\dagger})$ and $y=\sqrt{1/m\nu_{y}}(b+b^{\dagger})$ are the center-of-mass position operators of the ion in the $x$-$y$ plane, and $\omega_{x}$ and $\omega_{y}$ are the frequencies of the traveling fields of wave vectors $k_{x}$, $k_{y}$. As before, the transition frequency between the excited state $\left|e\right\rangle $ and the ground state $\left|g\right\rangle $ is $\omega_{0}$, and $\sigma_{ij}=\left|i\right\rangle \left\langle j\right|$, $i,j=g,e$.
In the interaction picture and assuming that $\eta_{x}=k_{x}\sqrt{1/m\nu_{x}},\eta_{y}=k_{y}\sqrt{1/m\nu_{y}}\ll1$, the Hamiltonian above reads $$\begin{aligned} H_{INT} & =h.c.+\Omega_{x}\sigma_{eg}\exp\left[-i(\omega_{x}-\omega_{0})t\right]-i\eta_{x}a\sigma_{eg}\exp\left[-i(\omega_{x}-\omega_{0}+\nu_{x})t\right]-i\eta_{x}a^{\dagger}\sigma_{eg}\exp\left[-i(\omega_{x}-\omega_{0}-\nu_{x})t\right]\nonumber \\ & +\Omega_{y}\sigma_{eg}\exp\left[-i(\omega_{y}-\omega_{0})t\right]-i\eta_{y}b\sigma_{eg}\exp\left[-i(\omega_{y}-\omega_{0}+\nu_{y})t\right]-i\eta_{y}b^{\dagger}\sigma_{eg}\exp\left[-i(\omega_{y}-\omega_{0}-\nu_{y})t\right]\label{2}\end{aligned}$$ Now, by adjusting $\omega_{\alpha}\neq\omega_{0}$, $\alpha=x,y$, such that $\left|\omega_{\alpha}-\omega_{0}\right|t\gg1$, $\omega_{x}-\omega_{0}\cong-\nu_{x}$ and $\omega_{y}-\omega_{0}\cong-\nu_{y}$, we can disregard the highly oscillating terms (RWA), such that in the weak coupling regime, where $\left|\Omega_{\alpha}\right|\sqrt{\overline{n}_{\alpha}}\ll\delta_{\alpha}$, we are left with $$H_{RWA}=-i\eta_{x}a\sigma_{eg}\exp\left(-i\delta_{x}t\right)+i\eta_{x}a^{\dagger}\sigma_{ge}\exp\left(i\delta_{x}t\right)-i\eta_{y}b\sigma_{eg}\exp\left(-i\delta_{y}t\right)+i\eta_{y}b^{\dagger}\sigma_{ge}\exp\left(i\delta_{y}t\right),$$ with $\delta_{\alpha}=\omega_{\alpha}-\omega_{0}+\nu_{\alpha}\cong0$, $\alpha=x,y$. A particularly simple effective Hamiltonian is found if we let $\delta_{x}=\delta_{y}$.
In this case, using Eq.(19) we find that the dynamics of the internal state decouples from that of the external ones, such that $H_{total}=\sigma_{z}\oplus H_{eff}$, with $$H_{eff}=\frac{\eta_{x}^{2}}{\delta}a^{\dagger}a+\frac{\eta_{y}^{2}}{\delta}b^{\dagger}b+\frac{\eta_{x}\eta_{y}}{\delta}\left(a^{\dagger}b+ab^{\dagger}\right).$$ If we further choose $\nu_{x}=\nu_{y}$ or, equivalently, $\omega_{x}=\omega_{y}$, then the Hamiltonian corresponding to the charge-conjugation operator Eq.(7) can be tailored by adjusting $\frac{\eta^{2}}{\delta}\tau=\pm\frac{\pi}{2}$ in Eq.(26). Consider now the charge conjugation for fermions, Eq.(15). As previously done for bosons, we will be interested in just a single mode and a well-defined spin vector, such that we can write Eq.(15) as $C=\exp\left[-\frac{i\pi}{2}\left[\left(d^{\dagger}c+c^{\dagger}d\right)-\left(c^{\dagger}c+d^{\dagger}d\right)\right]\right]$, with the particle and antiparticle operators obeying anticommutation relations. To engineer this Hamiltonian, let us now consider two two-level ions $1$ and $2$, whose internal states are described by the pseudo-spin operators $\sigma_{1}^{+}$, $\sigma_{1}^{-}$, $\sigma_{2}^{+}$, $\sigma_{2}^{-}$, possessing the same algebra as $c^{\dagger}$, $c$, $d^{\dagger}$ and $d$, respectively, while the one-dimensional harmonic motional states of each atom are described by the bosonic creation and annihilation operators $b_{1}^{\dagger}$, $b_{1}$, $b_{2}^{\dagger}$, and $b_{2}$.
These two ions are put into the same cavity containing a single mode of frequency $\omega_{a}$ of an electromagnetic standing wave, such that the Hamiltonian for this system can be written as $H=H_{0}+H_{1}$, with ($\hbar=1$) $$H_{0}=\omega_{a}a^{\dagger}a+\nu_{1}b_{1}^{\dagger}b_{1}+\nu_{2}b_{2}^{\dagger}b_{2}+\frac{\omega_{01}}{2}\sigma_{1}^{z}+\frac{\omega_{02}}{2}\sigma_{2}^{z}$$ $$H_{1}=\lambda_{1}a\sigma_{1}^{+}\cos\left(\eta_{1}x_{1}\right)+\lambda_{2}a\sigma_{2}^{+}\cos\left(\eta_{2}x_{2}\right)+h.c.,$$ where, in $H_{0}$, $a^{\dagger}$ and $a$ are the creation and annihilation operators in Fock space for the cavity mode field, $\nu_{1}$ ($\nu_{2}$) is the frequency of ion trap $1$ ($2$), and $\omega_{01}$ ($\omega_{02}$) is the transition frequency from the ground to the excited state of ion $1$ ($2$), while, in $H_{1}$, $\eta_{\alpha}=k_{a}\sqrt{1/m\nu_{\alpha}}$, $\alpha=1,2$, is the Lamb-Dicke parameter, and $\lambda_{1}$ ($\lambda_{2}$) describes the strength of the coupling between the standing wave and the ion placed at position $x_{1}$ ($x_{2}$).
Assuming that $\eta_{\alpha}\ll1$ and moving to the interaction picture, the Hamiltonian above reads $$H_{1}=\lambda_{1}a\sigma_{1}^{+}\exp\left[i\left(\omega_{o1}-\omega_{a}\right)t\right]+\lambda_{2}a\sigma_{2}^{+}\exp\left[i\left(\omega_{o2}-\omega_{a}\right)t\right]+h.c.$$ For two identical ions, $\omega_{o1}=\omega_{o2}=\omega_{o}$ and $\lambda_{1}=\lambda_{2}=\lambda$, and under the assumption of weak coupling, $\left|\lambda_{\alpha}\right|\sqrt{\overline{n}}\ll\delta_{\alpha}=\left(\omega_{o\alpha}-\omega_{a}\right)$, using Eq.(19) the following effective Hamiltonian is obtained: $$H_{eff}=\frac{\lambda^{2}}{\delta}\sigma_{1}^{+}\sigma_{1}^{-}+\frac{\lambda^{2}}{\delta}\sigma_{2}^{+}\sigma_{2}^{-}+\frac{\lambda^{2}}{\delta}a^{\dagger}a\left(\sigma_{1}^{z}+\sigma_{2}^{z}\right)+\frac{\lambda^{2}}{\delta}\left(\sigma_{1}^{+}\sigma_{2}^{-}+\sigma_{1}^{-}\sigma_{2}^{+}\right).$$ If the system is tailored to fit $\frac{\lambda^{2}}{\delta}\tau=\pm\pi/2$, and the cavity starts from the vacuum state, then the above Hamiltonian gives rise to the desired charge-conjugation evolution operator $$C=\exp\left\{ -i\frac{\pi}{2}\left[\pm\left(\sigma_{1}^{+}\sigma_{1}^{-}+\sigma_{2}^{+}\sigma_{2}^{-}\right)\pm\left(\sigma_{1}^{+}\sigma_{2}^{-}+\sigma_{1}^{-}\sigma_{2}^{+}\right)\right]\right\},$$ which is similar to the charge-conjugation operator $C=\exp\left[-\frac{i\pi}{2}\left[\left(d^{\dagger}c+c^{\dagger}d\right)-\left(c^{\dagger}c+d^{\dagger}d\right)\right]\right]$. Before closing this section, let us briefly address the feasibility of our proposal. In order to engineer the Hamiltonians that allow us to simulate the particle-antiparticle charge conjugation, we impose the Lamb-Dicke approximation, which consists of assuming that the ion is confined within a region much smaller than the laser wavelength, such that we can safely take the Lamb-Dicke parameter to be $\eta\sim0.1$ [@Meekhof96].
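The action of Eq.(31) on the pseudo-spin states can be verified with explicit $4\times4$ matrices. The sketch below picks the sign combination matching Eq.(15) — plus on the flip-flop terms, minus on the population terms — and checks that $C$ swaps $|e\rangle_{1}|g\rangle_{2}\leftrightarrow|g\rangle_{1}|e\rangle_{2}$, i.e. exchanges "particle" and "antiparticle", while leaving the joint ground state (the vacuum) invariant:

```python
import numpy as np
from scipy.linalg import expm

# Pseudo-spin operators for one two-level ion; basis |g> = (1,0), |e> = (0,1)
sp = np.array([[0.0, 0.0], [1.0, 0.0]])  # sigma^+ = |e><g|
sm = sp.T                                # sigma^- = |g><e|
I2 = np.eye(2)

s1p, s1m = np.kron(sp, I2), np.kron(sm, I2)  # ion 1
s2p, s2m = np.kron(I2, sp), np.kron(I2, sm)  # ion 2

# Exponent with the sign choice of Eq.(15): + flip-flop, - populations
M = (s1p @ s2m + s1m @ s2p) - (s1p @ s1m + s2p @ s2m)
C = expm(-1j * np.pi / 2 * M)

g, e = np.array([1.0, 0.0]), np.array([0.0, 1.0])
eg, ge, gg = np.kron(e, g), np.kron(g, e), np.kron(g, g)

print(abs(ge @ C @ eg))  # ~ 1: "particle" mapped onto "antiparticle"
print(abs(gg @ C @ gg))  # ~ 1: joint ground state left invariant
```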
For the case of bosons, as for instance in Eq.(21), a one-dimensional trap is required, and we imposed the condition $\omega_{a}=\omega_{b}=\omega$ on the effective frequencies, which, in turn, requires that $\left|\lambda_{a}\right|^{2}/(\omega_{0}-\nu)=\eta^{2}\left|\Omega\right|^{2}/(\omega_{l}-\omega_{0})$. For example, for a fixed atom-field coupling around $\lambda\sim10^{5}\,{\rm s}^{-1}$ in the microwave domain, this condition can be met by adjusting either the laser and atomic transition frequencies, $\omega_{l}$ and $\omega_{0}$, or the laser intensity $\Omega$, since the mechanical frequency $\nu$, being much smaller than the electromagnetic ones, is irrelevant here. In a similar way, to engineer the Hamiltonian Eq.(26), a two-dimensional ion trap is required, and the particle-antiparticle simulation is encoded in the vibrational states of the ion in the $x$ and $y$ directions. The condition $\frac{\eta^{2}}{\delta}\tau=\pm\frac{\pi}{2}$ can easily be satisfied by adjusting the external laser frequencies $\omega_{x}=\omega_{y}=\omega$, the ion internal transition frequency $\omega_{0}$, and the ion vibrational frequencies $\nu_{x}=\nu_{y}=\nu$, in order to obtain a small detuning $\pm\delta=\omega-\omega_{0}+\nu$, as well as the laser pulse duration $\tau$. On the other hand, to engineer the Hamiltonian Eq.(30) that simulates the fermionic charge-conjugation operator, Eq.(31), the internal states of two identical two-level ions, trapped in the same cavity, are needed. The condition to be matched is now that the coupling strength $\lambda$, the pulse duration $\tau$ and the detuning between the cavity mode frequency $\omega$ and the internal transition frequency $\omega_{0}$ of the ions obey $\frac{\lambda^{2}}{\delta}\tau=\pm\pi/2$, which, although feasible, is indeed a more stringent condition.
Conclusions =========== In this paper we have proposed a method to engineer the unitary charge-conjugation operator known from quantum field theory for bosons as well as for fermions [@Greiner; @book]. Although easily extendable to other contexts such as cavity QED [@Fabiano], we focus on the highly controllable scenario of trapped ions, where quantum control of single-ion states is routinely reported [@Zipkes10], thus opening the possibility of simulating particle and antiparticle charge conjugation. To engineer the bosonic charge-conjugation operator, we rely on two methods: the first uses both a single mode of a vibrational ion state and a single mode of the cavity field in which the ion is trapped; the second uses the vibrational harmonic states of a single trapped ion in two different directions. To engineer the charge-conjugation operator for fermions, we propose a scheme based on two two-level ions trapped in the same single-mode cavity, such that the fermionic operators are simulated by the pseudo-spin operators related to the internal states of the ions. Acknowledgment ============== The author acknowledges financial support from the Brazilian agency CNPq and Dr. Juan Mateos Guilarte for the kind hospitality during the stay at USAL. This work was performed as part of the Brazilian National Institute of Science and Technology (INCT) for Quantum Information. [1]{} Greiner W and Reinhardt J 1993 *Field Quantization* (Springer) Peskin M E and Schroeder D V 1997 *An Introduction to Quantum Field Theory* (Addison Wesley) Greenberg O W 2002 *Phys. Rev. Lett.* 89 231602 Ryder L H 1996 *Quantum Field Theory* 2nd edn (Cambridge University Press) Meekhof D M *et al.* 1996 *Phys. Rev. Lett.* 76 1796 James D F V 2000 *Fortschr. 
Phys.* 48 823 A Hamiltonian of the type of Eq.(14) involving two electromagnetic modes of a bimodal high Q cavity was recently derived in Serra R M, Villas-Bôas C J, de Almeida N G and Moussa M H Y 2005 *Phys. Rev. A* 71 045802; Prado F O, de Almeida N G, Moussa M H Y and Villas-Bôas C J 2006 *Phys. Rev. A* 73 043803, first using a three-level atom, and then a two-level atom. Moya-Cessa H, Soto-Eguibar F, Vargas-Martínez, J M, Júarez-Amaro R and Zúñiga-Segundo A 2012 *Phys. Rep.* 513 229 Zipkes C, Palzer S, Sias C and Köhl M 2010 *Nature* 464 388
--- author: - 'Kentaro [Nagai]{}$^{1}$, Tsutomu [Momoi]{}$^{1,2}$ and Kenn [Kubo]{}$^{3}$' title: Magnetic Order in the Double Exchange Model in Infinite Dimensions ---
--- bibliography: - 'ms.bib' --- INTRODUCTION ============ Monolayer VSe$_2$ is one of the most intriguing members of the family of two-dimensional (2D) transition-metal dichalcogenides. This material attracts special interest from the scientific community due to several recent discoveries, including in-plane piezoelectricity [@1], a pseudogap with a Fermi arc [@2] at temperatures above the charge density wave transition ($\sim$220 K for the monolayer [@3]), and especially the existence of ferromagnetism in a 2D system [@4; @5; @6; @7; @8; @9; @10; @11]. The experimental results are rather contradictory. A strong room-temperature ferromagnetism with a huge magnetic moment per formula unit has been reported for monolayer VSe$_2$ epitaxially grown on graphite [@4]. A local magnetic phase contrast has also been observed by magnetic force microscopy at room temperature at the edges of VSe$_2$ flakes exfoliated from a three-dimensional crystal [@14]. XMCD measurements evidence a spin-frustrated magnetic structure in VSe$_2$ on graphite [@xmcd]. The paramagnetism of bulk VSe$_2$ [@para1; @para2] makes these observations more intriguing. A different situation was reported for the monolayers grown on bilayer graphene/silicon carbide substrates: in both works the absence of exchange splitting of the vanadium $3d$ bands in angle-resolved photoemission spectroscopy experiments was reported. This result contradicts other studies that revealed a magnetization value not higher than $\sim$5 $\mu_B$ [@12; @13]. Based on these results we can conclude that the influence of the substrate is important for the description of the magnetic properties of these materials. Theoretical models have been developed to account for the above discrepant observations [@4; @12; @14; @16]. These works mainly focused on the band structure and magnetic moments on vanadium sites. 
It has been proposed that the presence of charge density waves could cause the quenching of monolayer ferromagnetism due to the band gap opening induced by a Peierls distortion [@15]. Phonon spectra of VSe$_2$ and several similar systems were also considered theoretically [@22; @23]. This modeling motivates us to study the interplay between magnetism and structural phase transitions in VSe$_2$. Additionally, there is a plethora of works demonstrating a relationship between the symmetry, electronic structure and magnetic properties in transition-metal compounds [@BaCu; @DMI1; @DMI2; @24]. The VSe$_2$ crystal is formed of separate layers stacked along the $c$-axis direction. Two main phases of this material were predicted to be stable: the H phase, characterized by Se atoms stacked over each other, and the T phase, with one Se layer rotated by $60^\circ$ around the axis normal to the layer plane [@16]. Atomic structures of the VSe$_2$ monolayer in both the H and T phases are shown in Fig. \[fig1\]. Surprisingly, the reported binding energies for the different configurations are almost the same despite the colossal difference in magnetic properties and electronic structure (Fig. \[fig1\]) [@16]. This finding additionally motivates us to examine various aspects of structural phase transitions in bulk, few-layer and monolayer VSe$_2$. ![Atomic structure of the 2D VSe$_2$ monolayer (top and side view) in the H phase (a) and in the T phase (b). Vanadium atoms are denoted with red circles; upper and bottom selenium layers are denoted with light green and dark green circles, respectively. (c) and (d) panels represent the corresponding spin-polarized band structures. 
Red lines correspond to spin-up states and black ones to spin-down; the Fermi level corresponds to 0 eV.[]{data-label="fig1"}](Fig1.png){width="1\columnwidth"} Computational method and model ============================== Electronic properties of the VSe$_2$ system were simulated within the Density Functional Theory (DFT) framework using the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [@17] as implemented in the Vienna ab-initio simulation package (VASP) [@19; @20] with a plane-wave basis set. This approach gave reliable results for other systems similar to VSe$_2$ [@25]. We also include the van der Waals interaction using the method of Grimme (DFT(PBE)-D2) [@vdW]. Taking London dispersion forces into account is essential for few-layer VSe$_2$ (see Table \[tab1\] and the discussion in section 3.5). The calculation parameters were chosen as follows. The energy cutoff is 400 eV and the energy convergence criterion is $10^{-6}$ eV. For the Brillouin zone integration a $10\times10\times1$ $\Gamma$-centered grid was used for layered structures and an $8\times8\times8$ grid for bulk structures. A vacuum space of more than 10 Å in the vertical $z$ direction was introduced for the layered structures. The technical parameters are similar to those used in recent studies of phase stability in layered systems [@Ersan; @Kaltsas]. The optimized atomic positions for the T phase and the lattice parameters $a=b=3.31$ Å and $c=6.20$ Å are in good agreement with experiment [@21]. In particular, the corresponding interlayer distance in bulk VSe$_2$ is $3.04$ Å. The calculated band structures of the VSe$_2$ monolayer in the T and H phases are in good agreement with previous works [@16]. The calculated magnetic moment of 0.68 $\mu_B$ for the initial configuration without rotation of the selenium atoms also agrees with the results of the previous work [@18]. 
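For reproducibility, the stated parameters translate into roughly the following VASP input. This is a minimal sketch consistent with the numbers quoted above, not the authors' actual input files; in particular, the dispersion-correction tag `IVDW = 10` (Grimme DFT-D2) assumes a VASP version of 5.3 or later:

```
# INCAR (layered structure, spin-polarized PBE-D2) -- illustrative sketch
ENCUT = 400        # plane-wave cutoff, eV
EDIFF = 1E-6       # electronic convergence criterion, eV
ISPIN = 2          # collinear spin polarization
IVDW  = 10         # Grimme DFT-D2 dispersion correction

# KPOINTS (Gamma-centered 10x10x1 grid for layered structures)
Automatic mesh
0
Gamma
10 10 1
0 0 0
```

For the bulk structures the grid line would read `8 8 8` instead, per the text.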
To investigate the transition between the H and T phases we performed self-consistent calculations of the electronic structure and total energies at intermediate points between these phases. For this purpose, we rotate either one Se atom or all selenium atoms belonging to the upper layer of VSe$_2$ in the supercell, as schematically shown in Fig. \[fig2\]. To trace the changes in the electronic structure and magnetic properties, we performed calculations for configurations with a 10$^{\circ}$ rotation step. Generally, the rotation can be realized within two models. The first is to move Se in plane from the initial to the final point (Fig. \[fig2\] a and c). The second is to fix a constant V-Se distance for all intermediate steps, which produces an elevation of the selenium atoms above the plane at intermediate steps of the migration (Fig. \[fig2\] b and d). We will refer to these rotation models as the in-plane and arc rotation schemes, respectively. All the calculations were performed for the ferromagnetic ordering of the spins of the vanadium atoms. ![Schematic visualization of the plane (a,c) and arc (b,d) types of Se atom rotation. (a,b) and (c,d) panels correspond to side and top views, respectively. Initial and final positions of Se are presented with orange and green circles, respectively. Intermediate configurations of selenium atoms obtained with the 20$^{\circ}$ step are denoted with light blue circles.[]{data-label="fig2"}](Fig2.png){width="0.875\columnwidth"} Results and discussion ====================== Rotation of single Se atom -------------------------- At the first step of our study we simulated the motion of a single Se atom in the monolayer (see Fig. \[fig3\]). For simplicity, we considered an in-plane migration of the atom. Results of the calculations (Fig. \[fig3\]) evidence a gradual increase of the total energy of the system during the whole rotation process, with the maximal value at the final point. 
The large energies and the instability of the final configuration are caused by the decrease of the distance between the moved and the fixed Se atoms to 1.92 Å. Thus we can conclude that the model of single Se atom rotation is unrealistic and that the transition between the T and H phases may be realized only through a distortion of the whole selenium layer. In the following we consider only this kind of structural phase transition. The values of the magnetic moments calculated for the intermediate configurations (Fig. \[fig3\]) support our initial guess that the structural transition between the phases affects the magnetic properties of VSe$_2$. Note that a deviation of the selenium atoms from the equilibrium positions by small angles (less than 10$^{\circ}$) requires a much smaller energy of about 0.32 eV and, therefore, should be taken into account for a realistic description of the atomic structure of VSe$_2$ at room temperature. ![Evolution of the total energy (a) and magnetic moment (b) during in-plane rotation of a single Se atom. (c) and (d) panels visualize the initial and final atomic structures. Light and dark green circles denote upper and bottom selenium layers, respectively.[]{data-label="fig3"}](Fig3.png){width="1\columnwidth"} Rotation of the whole Se sheet in the VSe$_2$ monolayer ------------------------------------------------------- Having considered the results concerning the migration of a single Se atom, we are in a position to analyze the case of the rotation of the whole upper Se layer, which will provide a better understanding of the transition between the H and T phases. ![Evolution of the total energy (a) and magnetic moment (b) during rotation of the whole upper Se layer of the VSe$_2$ monolayer within the arc model.[]{data-label="fig4"}](Fig4.png){width="1\columnwidth"} The simulations performed for a 3$\times$3 supercell with constant V-Se distances, where the Se atoms elevate from their initial positions (Fig. 
\[fig2\]) revealed an energy barrier of 0.60 eV, which is smaller than that observed in the case of the in-plane rotation (Fig. \[figS1\]a in the SI). Thus, in the following, we consider only this type of Se atom migration. To evaluate the temperature required to overcome this barrier one should establish a relation between the calculated energies of the process and the temperature of the reactions. We have addressed this question in our previous work [@go] and found that barrier values of about 0.50 eV and 1.20 eV correspond to room temperature and 200 $^{\circ}$C, respectively. Thus, the energy barrier of 0.60 eV can be overcome already at temperatures of about 40 $^\circ$C. Four conclusions can be drawn from these results. (i) There is a possibility of a structural phase transition in previously studied VSe$_2$ samples during measurements. (ii) For the development of devices based on VSe$_2$ and similar monolayer systems one should take into account the possibility of structural phase transitions caused by heating of the device during operation. Such a transition can significantly affect the operation of the device due to the difference in the electronic structures of the phases (see Fig. \[fig1\]; see also the changes in the band structure in Fig. \[figS3\] in the SI). (iii) One can use VSe$_2$ and similar systems as temperature detectors. (iv) According to our results, there is a low energy cost to deviate the selenium atoms belonging to one layer by a small angle from the equilibrium positions, which needs to be accounted for in a realistic interpretation of the experimental data. The moderate temperature of the transition between the different structural phases calls for an examination of the electronic structure and magnetic properties at intermediate steps of the structural phase transition. 
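The barrier-to-temperature mapping can be illustrated with a small sketch. A linear interpolation between the two calibration points quoted from Ref. [@go] is a rough assumption made here purely for illustration, and it yields a transition temperature of a few tens of degrees Celsius for the 0.60 eV barrier, of the same order as the estimate in the text:

```python
# Calibration points quoted from Ref. [go]:
# 0.50 eV <-> room temperature (~25 C), 1.20 eV <-> 200 C.
# Linear interpolation between them is an illustrative assumption.
def transition_temperature_C(barrier_eV):
    e1, t1 = 0.50, 25.0
    e2, t2 = 1.20, 200.0
    return t1 + (barrier_eV - e1) * (t2 - t1) / (e2 - e1)

print(transition_temperature_C(0.60))  # close to 50 C, same order as ~40 C above
```

A more careful estimate would require an Arrhenius model with an explicit attempt frequency, which is beyond this back-of-the-envelope check.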
The obtained results demonstrate that in the case of the ferromagnetic ground state the values of the magnetic moments change gradually with each small 10$^\circ$ step of the rotation of the Se layer. From Fig. \[fig4\] one can see that at 30$^\circ$ the magnetic moment reaches its maximal value of 1.05 $\mu_{B}$, which is about two times larger than that in the initial configuration. According to the calculated occupation matrices, such a magnetic moment change is mainly related to the contributions of the $xy$ and $x^2-y^2$ orbitals of the vanadium atoms (see Fig. \[fig5\]). Since the total occupation (spin-up + spin-down) of the different orbitals remains almost the same, the change of the magnetic moment is fully connected with a redistribution of the electrons between the spin channels due to the change of the hybridization between V and Se. ![Partial densities of states calculated for the VSe$_2$ monolayer in the ferromagnetic configuration. The arc rotation scheme with the $20^{\circ}$ step was used. Left and right panels correspond to ($d_{xy}$, $d_{yz}$, $d_{xz}$) and ($d_{x^2-y^2}$, $d_{z^2}$) sets of states, respectively. []{data-label="fig5"}](Fig5.png){width="1\columnwidth"} In the case of an antiferromagnetic configuration the situation is more complicated. First of all, the magnetic lattice of VSe$_2$ is a frustrated one, which is in agreement with the experimental observations [@xmcd]. This means that within a mean-field DFT approach we cannot define an antiferromagnetic collinear-type order corresponding to a minimum of the magnetic interaction energy for all V-V bonds simultaneously. The second complication follows from the fact that the system in question is a metal. This means that the magnetization of an individual vanadium atom can be very sensitive to the orientation of the neighbouring magnetic moments [@FeGe]. 
Indeed, our DFT simulations of the VSe$_2$ supercell with antiferromagnetic ordering have revealed a strong suppression of the magnetic moment values of some vanadium atoms in the supercell. In addition, we observe that the details of the magnetic moment suppression strongly depend on the size of the supercell. In this complex situation some information on the magnetic couplings in the VSe$_2$ system can be extracted by using the infinitesimal spin rotation approach [@FeGe; @Liechtenstein]. However, the magnetic couplings calculated in this way can be used for analysis only in the vicinity of the ferromagnetic configuration. The values of the magnetic moments in the AFM phase can be stabilized by including the on-site Coulomb interaction, as can be done within the DFT+$U$ approach. However, the use of the DFT+$U$ approach in the case of VSe$_2$ is questionable, since the experimental ARPES spectra are in good agreement with the GGA band structure, as shown in Refs. [@xmcd; @arp1; @12]. At the same time, the inclusion of the Hubbard $U$ leads to considerable changes in the band structure. Thus, the energy difference between the AFM and FM solutions for VSe$_2$ simulated with GGA does not allow us to construct a comprehensive magnetic model and estimate the corresponding magnetic interactions between the vanadium atoms. Nevertheless, the results of these calculations evidence that, despite the changes of the electronic structure at intermediate steps, the ferromagnetic configuration remains significantly more energetically favorable in all the cases (Fig. \[fig6\]). Thus the possible structural distortions in VSe$_2$ will not suppress ferromagnetism. Our calculations demonstrate that a possible transition from the experimentally observed T phase toward the H phase should provide an enhancement of the ferromagnetic interactions and an increase of the magnetic moment. 
To simulate the experimentally observed paramagnetic state of bulk VSe$_2$ [@para1; @para2] one can use dynamical mean-field theory. ![Difference of AFM and FM state total energies calculated in the arc and plane rotation schemes for a 3$\times$3 supercell of the VSe$_2$ monolayer.[]{data-label="fig6"}](Fig6.png){width="0.92\columnwidth"} Structural phase transition in bulk VSe$_2$ ------------------------------------------- There are two main differences in the energetics of the structural phase transitions in bulk and monolayer VSe$_2$. The first is that the energies of the motion of the Se layer are almost the same within both rotation models (Fig. \[fig7\] and Fig. \[figS1\]c in the SI). The second is the increase of the migration barrier (see Fig. \[fig7\]). Both are related to the van der Waals interactions between the layers in bulk VSe$_2$. The analysis of the calculated partial density of states in this case leads to conclusions similar to those above (see Fig. \[figS2\] in the SI). ![Total energy (a) and magnetic moment (b) as functions of the rotation angles. The simulation results were obtained for bulk VSe$_2$ within the arc rotation model.[]{data-label="fig7"}](Fig7.png){width="1\columnwidth"} In the case of the rotation of the Se atoms belonging to one layer with a constant V-Se distance at intermediate steps of the migration, the initial distance of 3.63 Å between the rotated and fixed selenium layers decreases by 0.54 Å. This deviation from the optimal interlayer distance leads to an increase of the energy barrier (see also the changes in the band structure in Fig. \[figS4\] in the SI). The value of the energy barrier corresponds to stability of the structural ground state in the bulk crystal up to temperatures above 100$^{\circ}$C. Note that, in contrast to the monolayer case, the structural ground state of bulk VSe$_2$ is the T configuration with a ferromagnetic orientation of the magnetic moments. 
Structural phase transition in bi- and trilayers of VSe$_2$ ----------------------------------------------------------- We also examined the energetics of the structural phase transition in the top layer of VSe$_2$ bi- and trilayers with different stacking models (Fig. \[fig8\]). The notation of the types of Bernal stacking is similar to that of graphite. These results can also be applied to VSe$_2$ non-covalently attached to substrates. ![Schematic representation of the unit cells used for simulating VSe$_2$ trilayers characterized by different stacking models.[]{data-label="fig8"}](Fig8.png){width="1\columnwidth"} Results of the calculations (Fig. \[fig9\]) evidence the similarity of the few-layer case to the monolayer. The H-type configuration corresponds to the structural ground state for all types of stacking in the few-layer case. The energy required for the transition from the T to the H phase is about 0.60 eV for AA- and AB-stacking in the bilayer. In the trilayer the most energetically favorable stacking orders are AAA and ABC. ![Total energy (left panels) and magnetic moment (right panels) of two- (a,b) and three-layer (c,d) VSe$_2$ systems estimated for H, T and intermediate structures.[]{data-label="fig9"}](Fig9.png){width="1\columnwidth"} Thus, similarly to the free-standing VSe$_2$ monolayer, a transition between the two structural configurations can occur in the top layer of few-layer VSe$_2$ upon moderate heating. The magnetic moments of the vanadium atoms belonging to the upper layer of the few-layer structures change from 0.64 $\mu_B$ to 0.82 $\mu_B$. Such a change is fully connected with a redistribution of the electrons between the spin channels; the main contributions are from the $xy$ and $x^2-y^2$ orbitals of the vanadium atoms, similarly to the monolayer case. Therefore, the presence of substrates does not significantly influence the sensitivity of bi- and trilayer VSe$_2$ systems to structural changes. 
Interlayer binding energy ------------------------- To understand the effect of interlayer interactions on the structural properties we have checked the interlayer distances and binding energies. The binding energies $E_b$ for different VSe$_2$ structures were calculated using the expression $E_b=(E-nE_{mono})/m$, where $E$ is the total energy of the considered system, $E_{mono}$ is the total energy of the monolayer, $n$ is the number of layers in the considered system, and $m$ is the average number of interlayer interactions ($m$ = 2, 3/2 and 1 for bulk, 3- and 2-layer systems, respectively). Results of these calculations are presented in Table \[tab1\]. In the case when the van der Waals interaction is neglected we obtain that the distance between V atoms belonging to the same layer is 3.33 Å and the Se-Se interlayer distance equals 3.12 Å. When the van der Waals interaction is taken into account these distances equal 3.31 Å and 3.04 Å, respectively. In the 2- and 3-layer cases we considered the structures (Fig. \[fig8\]) with the lowest total energies. The calculated values of the binding energies evidence that few-layer VSe$_2$ is a pure van der Waals structure, in contrast to bulk VSe$_2$, where the London dispersion forces are a small addition to the electrostatic interactions between V cations and Se anions from different layers. The changes of the interlayer distances are proportional to the contribution of the dispersion forces to the binding energies (about 0.1 Å in the bulk and 0.4 - 0.6 Å in the few-layer systems). Therefore, the difference in the migration barriers in bulk and few-layer VSe$_2$ can be explained by the contribution from the electrostatic repulsion of the anions from the layer above. 
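The binding-energy expression above can be written as a short helper. The energies used in the example call are hypothetical placeholders, not values from the calculations; only the formula and the $m$ factors come from the text:

```python
# Interlayer binding energy per formula unit, E_b = (E - n*E_mono)/m,
# with m the average number of interlayer interactions:
# 2 for bulk, 3/2 for trilayer, 1 for bilayer (as stated in the text).
M_FACTORS = {"bulk": 2.0, "trilayer": 1.5, "bilayer": 1.0}

def binding_energy(E_total, E_mono, n_layers, structure):
    """E_total, E_mono in the same energy units; result per formula unit."""
    return (E_total - n_layers * E_mono) / M_FACTORS[structure]

# Hypothetical trilayer lying 0.060 eV below three isolated monolayers:
print(binding_energy(-30.060, -10.000, 3, "trilayer"))  # about -0.040
```

With this sign convention a negative value indicates binding; the table reports the magnitudes in meV per formula unit.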
  ----------- ------------ --------------- ---------------------------
  VSe$_2$     E$_b$ with   E$_b$ without   Interlayer distance with
  structure   vdW, meV     vdW, meV        vdW (without vdW), Å
  T-bulk      7.93         4.79            3.04 (3.12)
  H-bulk      99.67        95.78           3.22 (3.32)
  T-two       15.51        -9.36           3.11 (3.51)
  H-two       24.68        -29.86          3.69 (4.27)
  T-three     19.05        -90.86          3.08 (3.57)
  H-three     100.82       -51.77          3.66 (4.14)
  ----------- ------------ --------------- ---------------------------

  : Interlayer binding energies (meV/formula unit) and interlayer distances calculated for different VSe$_2$ structures with and without the vdW interaction.[]{data-label="tab1"}

Phonon dispersion ----------------- To complete the picture of the physical properties of the VSe$_2$ monolayer we have performed calculations of the phonon dispersions using the VASP and Phonopy packages [@phonopy]. This combination of packages is widely used for studying vibrational properties in similar systems [@Ersan]. For these calculations we used a $3\times3\times1$ supercell to obtain the sets of forces, with $10\times10\times1$ (monolayer) and $6\times6\times6$ (bulk) mesh grids. Both the H and T phases in the nonmagnetic and ferromagnetic configurations were considered. ![Phonon dispersions calculated for the nonmagnetic (red dashed line) and ferromagnetic (blue solid line) states of monolayer and bulk VSe$_2$. Both T and H phase structures are presented. Red dots denote experimental frequencies taken from Ref. [@phonon1].[]{data-label="fig10"}](Fig10.png){width="1\columnwidth"} The calculated phonon spectra are presented in Fig. \[fig10\]. For the T phase systems (bulk and monolayer) the resulting dispersions demonstrate a weak sensitivity to magnetism. This is not the case for the H phase configurations. In the nonmagnetic state of the H-monolayer and H-bulk we observe a soft phonon mode along the $\Gamma$-M-K direction for the monolayer and along all symmetry directions for the bulk. The existence of such a mode indicates a structural instability. 
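When comparing calculated phonon dispersions with spectroscopy data, energies quoted in meV must be converted to frequencies in THz via $f=E/h$. A one-line helper using CODATA constants from SciPy suffices:

```python
from scipy.constants import e, h  # elementary charge and Planck constant

def meV_to_THz(energy_meV):
    """Convert a phonon energy in meV to a frequency in THz (f = E/h)."""
    return energy_meV * 1e-3 * e / h / 1e12

# Experimental Gamma-point values of 25 and 40 meV map to:
print(round(meV_to_THz(25.0), 2), round(meV_to_THz(40.0), 2))  # 6.04 9.67
```

The rule of thumb is 1 meV ≈ 0.242 THz.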
Importantly, in the ferromagnetic case the soft mode disappears, which means that accounting for magnetism provides the structural stability of the H phase in both the monolayer and the bulk. The cause of this effect of the magnetic configuration is the robustness of the magnetic interactions (see Fig. \[fig6\] and the discussion above), which are of the same order of magnitude as the energy difference between the structural phases. For the H-bulk and T-monolayer ferromagnetic systems the calculations reveal the appearance of an indirect gap of $\sim$0.57 THz. A comparison of the calculated dispersion curves with the available experimental data from Ref. [@phonon1], obtained by point-contact spectroscopy and Raman methods, can be carried out only for the $\Gamma$ point, for which the experimental oscillation frequencies are $\sim$6.04 THz (25 meV) and $\sim$9.67 THz (40 meV). Our theoretical values of 6.28 and 10.42 THz are in good agreement with the experimental data. Structural phase transition by stretching ----------------------------------------- The last step of our survey is the modeling of the stretching that can appear in the monolayer due to substrate influence. To simulate this effect, we increase the $a$ and $b$ lattice vectors of our structure and then relax the atomic positions to find a new ground state corresponding to the new lattice parameters. Results of the calculations evidence that stretching by more than 3 percent leads to a phase transition of the ground-state configuration from H to T in monolayer and bilayer VSe$_2$ (Fig. \[fig11\]a). ![Energy difference between the H and T structures of VSe$_2$ (a) and energy barrier (b) as functions of stretching in the $a$ and $b$ lattice directions. Lines of different colors correspond to systems with different numbers of layers.[]{data-label="fig11"}](Fig11.png){width="1\columnwidth"} Therefore, the experimentally observed structure[@xmcd] of T type can result from a substrate-induced strain. Another effect of the stretching is a decrease of the energy barrier for migration between the different configurations (Fig. 
\[fig11\]b). Here we define the energy barrier as the energy difference between the T structure and the intermediate 30$^{\circ}$ structure. CONCLUSIONS =========== Results of first-principles calculations demonstrate that the energy barrier for the transition between the two structural states of the VSe$_2$ monolayer via a step-by-step rotation of a single Se atom is rather high. On the other hand, the energy cost of the rotation of the whole selenium layer is rather low (about 0.60 eV for the monolayer and 0.80 eV for the bulk). In the case of the monolayer the transition could be realized by heating the samples. The excitation energies for rotations of the selenium layer up to 10$^\circ$ are very low; therefore, a realistic theoretical description of VSe$_2$ (from monolayer to bulk) should take these small deviations from the ideal crystal structure into account. Our calculations demonstrate that the transition from the experimentally observed T configuration to the H configuration is accompanied by a considerable change in the electronic structure, namely a redistribution of the vanadium $3d$ electrons between orbitals. Such transitions significantly influence the transport and thermal properties of VSe$_2$. On the other hand, the values of the magnetic moments and the total energies of the ferro- and antiferromagnetic configurations change gradually between the two structural phases. In all the considered cases (bulk, few-layer and monolayer) the system demonstrates a strong preference for the ferromagnetic structure. The analysis of the calculated phonon dispersions has demonstrated the principal role of ferromagnetism in the stabilization of the atomic structure of the VSe$_2$ monolayer in the H phase and of similar systems. On the basis of the obtained results we can conclude that the experimentally observed paramagnetism in bulk VSe$_2$ and the contradictory results of magnetic measurements for monolayers on different substrates are not caused by structural changes. 
The calculations for bi- and trilayers demonstrate that the energy barrier of the transition is similar to that of the monolayer. The strain, possibly induced by the substrate, changes the most energetically favorable structure from H to T. Therefore, the experimental observation of the T configuration can result from stretching of the VSe$_2$ structure by more than 3 percent on a substrate. Another effect of the stretching is a decrease of the energy barrier of the transition between the structural phases. Thus both strain and deviations from the ideal structure should be taken into account for a realistic description of the VSe$_2$ monolayer on substrates. Conflicts of interest {#conflicts-of-interest .unnumbered} ===================== There are no conflicts to declare. Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported by the Russian Science Foundation, Grant No. 18-12-00185. Fig. \[figS1\] shows the angle dependencies of the total energy and magnetic moment in the case of the rotation of the whole upper Se layer of the VSe$_2$ monolayer (a,b) and bulk (c,d) within the plane scheme. ![Total energy (a,c) and magnetic moment (b,d) as functions of the rotation angles obtained within the plane model for all Se atoms belonging to the upper layer. The simulations were performed for the VSe$_2$ monolayer (a,b) and bulk (c,d).[]{data-label="figS1"}](FigS1.png){width="0.98\columnwidth"} For the monolayer one can see that such a rotation scheme is less energetically favorable, since the barrier grows. In turn, the magnetic moment demonstrates the same behavior as with the arc rotation model. In the bulk case we obtain almost the same dependencies, but the maximum of the energy barrier at 30$^{\circ}$ becomes larger than that in the case of the monolayer. We also calculated the partial densities of states of bulk VSe$_2$ at intermediate points of the arc rotation with the 20$^\circ$ step (Fig. \[figS2\]). 
![Partial densities of states calculated for bulk VSe$_2$ in the ferromagnetic configuration. The arc rotation scheme with the $20^{\circ}$ step was used. Left and right panels correspond to ($d_{xy}$, $d_{yz}$, $d_{xz}$) and ($d_{x^2-y^2}$, $d_{z^2}$) sets of states, respectively.[]{data-label="figS2"}](FigS2.png){width="1\columnwidth"} Figures \[figS3\] and \[figS4\] give the band structures of the VSe$_2$ monolayer and bulk obtained within the arc rotation scheme with the 10$^\circ$ elementary step. ![Band structures of the bulk VSe$_2$ crystal calculated for atomic structures modified within the arc rotation model from the T phase (0$^{\circ}$) to the H phase (60$^{\circ}$) with a step of 10$^{\circ}$. All the calculations were performed for the ferromagnetic configuration. Red lines correspond to spin-up states and black ones to spin-down.[]{data-label="figS4"}](FigS4.png){width="1\columnwidth"} ![Band structures of the monolayer VSe$_2$ calculated for atomic structures modified within the arc rotation model from the T phase (0$^{\circ}$) to the H phase (60$^{\circ}$) with a step of 10$^{\circ}$. All the calculations were performed for the ferromagnetic configuration. Red lines correspond to spin-up states and black ones to spin-down.[]{data-label="figS3"}](FigS3.png){width="1\columnwidth"}
--- abstract: 'In a generalised framework for the Landauer erasure protocol, we study bounds on the heat dissipated in typical nonequilibrium quantum processes. In contrast to thermodynamic processes, quantum fluctuations are not suppressed in the nonequilibrium regime and cannot be ignored, making such processes difficult to understand and treat. Here, we derive an emergent fluctuation relation that virtually guarantees that the average heat produced is dissipated into the reservoir when either the system or reservoir is large (or both), or when the temperature is high. The implication of our result is that second law-like behaviour appears for nonequilibrium processes exponentially quickly in the dimension of the larger subsystem and linearly in the inverse temperature. We achieve these results by generalising a concentration of measure relation for subsystem states to the case where the global state is mixed.' author: - Philip Taranto - 'Felix A. Pollock' - Kavan Modi bibliography: - 'thermo-fluctuations.bib' title: Emergence of a fluctuation relation for heat in typical Landauer processes ---

In *Mission Impossible*, once Ethan Hunt hears the secret message, the tape self-destructs. This happens, of course, to ensure that the message doesn’t fall into the wrong hands. Burning the tape randomises the message, erasing its information and emitting heat in the process. Landauer’s Principle **(LP)** tells us that the latter is unavoidable, relating logically irreversible operations to a necessary energy expenditure. It lies at the interface between information theory and thermodynamics, with the profound consequence that “information is physical” [@Landauer1961]. LP underpins the technical challenge of managing the heat generated by computers and was initially postulated from classical thermodynamic considerations [@Landauer1961; @Bennett1982].
Early research efforts aimed to either develop a microscopic, nonequilibrium version of LP [@Shizume1995; @Piechocinska2000; @Lutz2009; @Sagawa2009] or extend it into the quantum domain [@Plenio1999; @Vedral2000; @Janzig2000; @Vaccaro2009; @Hilt2011; @Esposito2011; @Barnett2013]; however, the microscopic versions often relied on specific models, and many quantum extensions assumed the principle to hold *a priori*, before investigating its implications. Perhaps surprisingly, recent experiments demonstrate that LP applies to irreversible, nonequilibrium processes involving individual quantum systems [@Berut2012; @Orlov2012; @Jun2014; @Silva2014]. This has sparked a revival of interest in developing a rigorous formulation of LP in the nonequilibrium, quantum setting, culminating in an equality form of LP derived by Reeb & Wolf **(RW)** within a minimal framework [@Reeb2014]. Despite the substantial work surrounding LP, little is known about the tightness of the Landauer bound at microscopic scales, nor about how Landauer heat can be tamed, *i.e.*, how to minimise the heat required to process quantum information [@Mohammady2016]. Minimising Landauer heat is crucial for our ability to manipulate quantum systems to outperform their classical counterparts, as the quantum advantage often relies on coherent control that suffers from heat fluctuations. An approach to resolving these outstanding issues uses the tools of nonequilibrium statistical physics [@Esposito2009; @Brandao2013; @Goold2015Review; @Vinjanampathy2015; @Tasaki2016; @Goold2015; @Lorenzo2015; @Faist2015; @Croucher2017]. In particular, RW [@Reeb2014] and Goold, Paternostro & Modi **(GPM)** [@Goold2015] have derived tighter bounds on heat than that of Landauer. However, both of these bounds depend on details of the process and are, therefore, difficult to estimate in general. 
In this Letter, we seek to make generic, process-independent statements about the heat exchanged during nonequilibrium, quantum processes. We prove the emergence of a fluctuation relation for the heat dissipated in a Landauer process, stating that the average heat dissipated into the environment through a generic open quantum process is almost always positive. We analytically prove that this fluctuation relation arises exponentially quickly as the dimension of either subsystem grows, and linearly in the inverse temperature. Our result extends the minimal framework for describing Landauer processes [@Reeb2014] and is derived by examining fluctuations of the heat distribution [@Goold2015]. We begin the Letter by introducing the former and constructing the latter.

**Background.—** Surprisingly, until recently there was no consensus on how LP should be quantitatively expressed. This changed when RW formally derived a bound for the dissipated heat under a minimal set of assumptions [@Reeb2014]: (i) the irreversible process involves a system ${s}$ and a reservoir ${r}$; (ii) the initial joint state is uncorrelated: $\rho_{{{sr}}} = \rho_{s}\otimes \rho_{r}$; (iii) the reservoir is initially in a thermal (Gibbs) state $\rho_{r}:= {\text{e}}^{- \beta H_{r}}/Z$, where $\beta$ is the inverse temperature, $H_{r}$ is the reservoir Hamiltonian, and $Z := \tr{{\text{e}}^{- \beta H_{r}} }$ is the partition function; and (iv) the joint state evolves unitarily: $\rho_{{{sr}}}^\prime = U \rho_{{{sr}}} U^\dagger$. We call such processes *Landauer processes*. RW show that relaxing any one of the assumptions above can lead to violation of the bound $$\begin{gathered} \label{rwbound} \beta \braket{{Q}} \ge \Delta {S}+ R(\Delta {S},d_{r}) := \omega,\end{gathered}$$ where the inequality without $R(\Delta {S},d_{r})$ is Landauer’s bound [^1].
Here, the average heat dissipated into the reservoir is $\braket{{Q}} := \tr{H_{r}(\rho_{r}^\prime - \rho_{r})}$; the change in von Neumann entropy of the system is $\Delta S := {S}(\rho_{s}) - {S}(\rho_{s}^{\prime})$ with ${S}(\rho) := -\tr{\rho \log(\rho)}$; and $R(\Delta {S},d_{r}) \ge 0$ is a correction term that tightens the Landauer bound for finite-sized reservoirs. Crucially, the above framework accounts for processes that will be employed by realistic quantum technologies – namely, nonequilibrium processes that lie outside the realm of traditional thermodynamics and are difficult to treat because of heat fluctuations that are not suppressed. In other words, the heat generated in a single run of the process can vary drastically from its average behaviour. The modern approach to describing nonequilibrium processes employs fluctuation relations [@jrev; @Jarzynski1996; @Jarzynski1997; @Crooks1998; @Crooks1999; @Kurchan2000; @Tasaki2000; @Collin2005; @Douarche2005; @Saira2012; @Campisi2011; @Rastegin2013; @Albash2013; @Rastegin2014; @Goold2014; @Deffner2016]. These relate thermodynamic quantities (*e.g.*, free energy difference) to nonequilibrium quantities (*e.g.*, work or heat), offering a promising route to understanding the thermodynamics of small systems whose relevant dynamics may occur on shorter timescales than equilibration. Recent work demonstrates that this formalism provides a tangible route for the experimental exploration of quantum thermodynamics [@Dorner2013; @Mazzola2013; @Batalhao2014; @Goold2014Heat], including measuring the heat distribution of a Landauer process [@Silva2014]. Applying such tools to Landauer processes, GPM developed a novel bound for the average heat [@Goold2015]. 
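The minimal framework above is easy to realise numerically. The sketch below (dimensions, spectrum, and system state are arbitrary illustrative choices, not taken from the Letter) builds Haar-random Landauer processes satisfying assumptions (i)–(iv) and checks that $\beta \braket{Q} \ge \Delta S$ holds for every sampled interaction, as guaranteed by the RW equality with its non-negative correction term:

```python
import numpy as np

def haar_unitary(d, rng):
    # Haar-random unitary via QR of a complex Ginibre matrix, with phase fixing
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def vn_entropy(rho):
    # von Neumann entropy S(rho) = -tr[rho log rho]
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

rng = np.random.default_rng(7)
ds, dr, beta = 2, 4, 1.0                 # illustrative dimensions and temperature

E = np.linspace(0.0, 1.0, dr)            # illustrative reservoir spectrum
p = np.exp(-beta * E); p /= p.sum()
rho_r, H_r = np.diag(p), np.diag(E)      # assumption (iii): thermal reservoir
rho_s = np.diag([0.8, 0.2])              # assumption (ii): uncorrelated system state

for _ in range(100):
    U = haar_unitary(ds * dr, rng)       # assumption (iv): joint unitary evolution
    joint = (U @ np.kron(rho_s, rho_r) @ U.conj().T).reshape(ds, dr, ds, dr)
    rho_s_out = np.einsum('irjr->ij', joint)   # partial trace over the reservoir
    rho_r_out = np.einsum('sisj->ij', joint)   # partial trace over the system
    avg_Q = np.trace(H_r @ (rho_r_out - rho_r)).real
    dS = vn_entropy(rho_s) - vn_entropy(rho_s_out)
    # Landauer's bound; RW add a further non-negative correction R
    assert beta * avg_Q >= dS - 1e-9
```

The gap $\beta\braket{Q} - \Delta S$ observed here is exactly the mutual information plus relative entropy appearing in the RW equality, so the assertion is a deterministic check rather than a statistical one.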
By taking projective measurements of the reservoir energy, the complete distribution of the heat exchanged can be constructed: $P({Q}) := \sum_{mn} P(E_m|E_n)P(E_n) \, \delta ( {Q}- (E_m - E_n))$, where $P(E_n) = \braket{E_n | \rho_{r}| E_n}$ and $P(E_m|E_n) = \sum_l |\braket{E_m | A_l |E_n}|^2$ are the initial and (conditional) final measurement probabilities, respectively. The $A_{l=jk} = \sqrt{\lambda_j} \braket{k|U|j}$ are Kraus operators describing the local action of the evolution on the reservoir, with $\{ \lambda_j , \ket{j} \}$ the eigenvalues and eigenstates of $\rho_{s}$, respectively. From this distribution, GPM show that the average exponentiated heat can be written as $$\begin{gathered} \label{gpmbound} \Gamma := \braket{{\text{e}}^{-\beta {Q}}} = \tr{U^\dag \mathbbm{1}_{s}\otimes \rho_{r}U \rho_s \otimes \mathbbm{1}_{r}},\end{gathered}$$ where $\braket{{\text{e}}^{-\beta {Q}}} = \int {\textnormal{d}}{Q}\, P( {Q}) \, {\text{e}}^{- \beta {Q}}$. Invoking Jensen’s inequality [^2] yields the GPM bound: $\beta \braket{{Q}} \ge -\ln(\Gamma) := \gamma$. For one physically important set of processes, known as *thermal operations*, it is straightforward to show $\gamma = 0$ and so second law-like behaviour arises [^3]. Thermal operations preserve equilibrium distributions and conserve energy, and are therefore considered the set of free operations in the resource theory of thermodynamics [@PhysRevLett.111.250404]. For general $U$, however, $\Gamma$ is not restricted to take any particular value. Although the derivation above is reminiscent of the Jarzynski equality [@Jarzynski1996; @Jarzynski1997], Eq.  is not a true fluctuation relation since $\Gamma$ depends explicitly on the details of the process, *i.e.*, $\Gamma = \Gamma (U, \rho_{s})$. However, we now show that a fluctuation relation emerges very quickly for almost all Landauer processes.
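As a sanity check, the identity between the average exponentiated heat computed from the two-point measurement distribution and the trace expression can be verified numerically. The sketch below uses arbitrary illustrative dimensions, spectra, and states, and adopts the convention that positive $Q$ is energy gained by the reservoir, $Q = E_m - E_n$, consistent with $\braket{Q} = \tr{H_r(\rho_r' - \rho_r)}$:

```python
import numpy as np

def haar_unitary(d, rng):
    # Haar-random unitary via QR of a complex Ginibre matrix, with phase fixing
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(0)
ds, dr, beta = 2, 3, 1.0

lam = np.array([0.7, 0.3])              # eigenvalues of rho_s (diagonal eigenbasis)
E = np.array([0.0, 0.4, 1.1])           # illustrative reservoir eigenenergies
p = np.exp(-beta * E); p /= p.sum()     # thermal populations of rho_r

U = haar_unitary(ds * dr, rng)
U4 = U.reshape(ds, dr, ds, dr)          # index order: (k, E_m, j, E_n)

# P(E_m|E_n) = sum_{jk} lam_j |<k, E_m| U |j, E_n>|^2, from the Kraus operators A_{jk}
P_cond = np.einsum('j,kmjn->mn', lam, np.abs(U4) ** 2)
assert np.allclose(P_cond.sum(axis=0), 1.0)   # each column is a probability distribution

# Average exponentiated heat from the two-point distribution, with Q = E_m - E_n
gamma_dist = sum(p[n] * P_cond[m, n] * np.exp(-beta * (E[m] - E[n]))
                 for m in range(dr) for n in range(dr))

# The same quantity from the trace formula for Gamma
rho_s, rho_r = np.diag(lam), np.diag(p)
gamma_trace = np.trace(U.conj().T @ np.kron(np.eye(ds), rho_r) @ U
                       @ np.kron(rho_s, np.eye(dr))).real

assert abs(gamma_dist - gamma_trace) < 1e-10
```

Since $\rho_s$ is taken diagonal in the computational basis, the system eigenstates $\ket{j}$, $\ket{k}$ are computational basis vectors, which is what makes the partial matrix elements of $U$ directly available from the reshape.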
**Emergence of fluctuation relation**.— Our main result shows that, for Haar-randomly sampled joint space unitary interactions, a fluctuation relation for heat arises in the limit where the dimension of the system, reservoir, or both, becomes large; or when the temperature is high, *i.e.*, $\Gamma \to 1$ *independently of process details*. In fact, as the dimension of either ${s}$ or ${r}$ grows, the deviations of $\Gamma$ from unity are at least exponentially suppressed; in the high temperature limit this suppression is at least linear. First, we demonstrate the exponential scaling with dimension through the following Theorem: \[LargeLimitTheorem\] When either the system or the reservoir dimension is much larger than the other, [i.e.]{.nodecor}, $d_{s}\ll d_{r}$ or $d_{s}\gg d_{r}$, the deviations of $\Gamma$ from unity are at least exponentially suppressed in the dimension of the larger subsystem. Note first that we can write Eq.  as $\Gamma = d_{s}\tr{M_{s}\rho_{s}} = d_{r}\tr{M_{r}\rho_{r}}$, where $$\begin{gathered} \label{opMs} M_{s}:= {\operatorname{tr}_{{r}} \left[ U^\dag \frac{\mathbbm{1}_{s}}{d_{s}} \otimes \rho_{r}U \right]},\quad M_{r}:= {\operatorname{tr}_{{s}} \left[ U \rho_{s}\otimes \frac{\mathbbm{1}_{r}}{d_{r}} U^\dagger \right]}.\end{gathered}$$ The following Lemma is a generalisation of standard concentration of measure results for quantum states (see, *e.g.*, Refs.
[@Popescu2006; @Hutter2012]) to the case where the reduced density operators are generated from unitary orbits of mixed states: \[lem:levymixed\] For any $\sigma_{{sr}}= U \tau_{{sr}}U^\dagger$, where $\tau_{{sr}}$ is a fixed system-reservoir density operator and $U$ is a Haar-randomly sampled unitary operator $$\begin{gathered} \label{LevyBound} \operatorname{Prob}\left[ {\left\| \sigma_{s}- \frac{\mathbbm{1}_{s}}{d_{s}} \right\|_{1}} \geq \sqrt{\frac{d_{s}}{d_{r}}} + \epsilon \right] \leq2 \exp{ \left( - \frac{d_{s}d_{r}\epsilon^2 }{16} \right)}.\end{gathered}$$ The same holds with system and reservoir labels swapped. Here, $\sigma_{s}:= {\operatorname{tr}_{{r}} \left[ \sigma_{{sr}}\right]}$ and we use the trace norm [^4]. Importantly, Lemma \[lem:levymixed\] bounds the trace distance of a reduced state from the maximally mixed state for Haar-randomly sampled joint interactions. While the bound in Eq.  is the same as in the usual case of pure joint states, which follows from Levy’s Lemma [@LedouxBook], the extension to mixed joint states is nontrivial, as the geometry of the corresponding space differs considerably. In fact, the following results do not hold for a naïve application of the pure state result, because the trace distance from the identity of $\sigma_{s}$, generated from a convex mixture $\sigma_{{sr}}$, cannot be directly upper bounded by an arbitrary component of the mixture. For a proof of this physically motivated application of Levy’s Lemma, see Appendix \[app:levyproof\]. (Theorem \[LargeLimitTheorem\]) Consider the case where $d_{r}\gg d_{s}$. For a Haar-randomly chosen unitary $U$, the state $M_{s}$, defined in Eq. , is distributed exactly as $\sigma_{s}$ in Lemma \[lem:levymixed\] with $\tau_{{sr}}= \left(\mathbbm{1}_{s}/d_{s}\right) \otimes\rho_{r}$. 
Writing $\mu_{s}:= {\left\| M_{s}- \mathbbm{1}_{s}/d_{s}\right\|_{1}} $, it follows immediately that: $\operatorname{Prob}\left[ {\mu_{s}} \geq \sqrt{{d_{s}}/{d_{r}}} + \epsilon \right] \leq 2 \exp{ \left( - {d_{s}d_{r}\epsilon^2 }/{16} \right)}$. Choosing $\epsilon = 4\sqrt{(x d_{r}+ \ln(2 d_{s}))/(d_{s}d_{r})}$ for some small $x > 1/d_{r}$ gives $$\begin{gathered} \label{probmus} \operatorname{Prob}\left[ {\mu_{s}} \geq \sqrt{\frac{d_{s}}{d_{r}}} + 4\sqrt{\frac{x d_{r}+ \ln(2 d_{s})}{d_{s}d_{r}}}\right] \leq \frac{\exp{\left( - d_{r}x \right)}}{d_{s}}.\end{gathered}$$ As the reservoir dimension increases, independently of the inverse temperature $\beta$ and for a fixed $d_{s}\geq 2$, the probability that ${\mu_{s}}$ is greater than some vanishingly small quantity is exponentially diminishing in $d_{r}$. Now, consider that $\mu_{s}=\max_P \tr{P\left(M_{s}- {\mathbbm{1}_{s}}/{d_{s}}\right)}$, where the maximisation is taken over all projection operators $P$. Using the fact that $\rho_s$ is a convex mixture of projectors, and multiplying $\mu_{s}$ by $d_{s}$, we have $$\begin{gathered} \label{projmax} d_{s}\, \mu_{s}\geq d_{s}\tr{\rho_{s}\left(M_{s}- \frac{\mathbbm{1}_{s}}{d_{s}} \right)} = | \Gamma - 1| =: \mu.\end{gathered}$$ Hence, $\mu$ is upper bounded by a number that has high probability of being extremely small, as shown in Eq. . It follows that $\Gamma \to 1$ at least exponentially in the limit $d_{r}\gg d_{s}$. The Theorem can be proved for $d_{s}\gg d_{r}$ using a similar argument with the state ${M_{r}}$, defined in Eq. . ![image](fullTplot.png){width=".99\linewidth"} The case where $d_{r}\gg d_{s}$ is a familiar one: here, a small quantum system interacts with a large reservoir and we expect the reservoir to behave like a thermodynamic bath. Interestingly, in the converse case where $d_{s}\gg d_{r}$, we also find that the reservoir almost always heats up, even when the reservoir is hot and the initial energy of the system is low.
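The machinery of this proof is straightforward to check numerically. The sketch below (with arbitrary illustrative dimensions, spectrum, system state, and sample counts) first verifies the identity $\Gamma = d_s \tr{M_s \rho_s} = d_r \tr{M_r \rho_r}$ for a single Haar-random $U$, and then estimates the average of $\mu = |\Gamma - 1|$ to illustrate the concentration, both for $d_r \gg d_s$ and, anticipating the equal-dimension case treated next, for $d_s \approx d_r \gg 2$:

```python
import numpy as np

def haar_unitary(d, rng):
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def states(ds, dr, beta):
    lam = np.exp(-np.arange(ds)); lam /= lam.sum()   # fixed nonuniform rho_s
    E = np.linspace(0.0, 1.0, dr)
    p = np.exp(-beta * E); p /= p.sum()              # thermal rho_r populations
    return np.diag(lam), np.diag(p)

rng = np.random.default_rng(2)

# --- check Gamma = d_s tr[M_s rho_s] = d_r tr[M_r rho_r] for one sample ---
ds, dr, beta = 3, 4, 1.0
rho_s, rho_r = states(ds, dr, beta)
U = haar_unitary(ds * dr, rng)
gamma = np.trace(U.conj().T @ np.kron(np.eye(ds), rho_r) @ U
                 @ np.kron(rho_s, np.eye(dr))).real
M_s = np.einsum('irjr->ij', (U.conj().T @ np.kron(np.eye(ds) / ds, rho_r) @ U
                             ).reshape(ds, dr, ds, dr))
M_r = np.einsum('sisj->ij', (U @ np.kron(rho_s, np.eye(dr) / dr) @ U.conj().T
                             ).reshape(ds, dr, ds, dr))
assert abs(ds * np.trace(M_s @ rho_s).real - gamma) < 1e-10
assert abs(dr * np.trace(M_r @ rho_r).real - gamma) < 1e-10

# --- average mu = |Gamma - 1| over Haar-random interactions ---
def mean_mu(ds, dr, beta, n, rng):
    rho_s, rho_r = states(ds, dr, beta)
    A, B = np.kron(np.eye(ds), rho_r), np.kron(rho_s, np.eye(dr))
    return float(np.mean([abs(np.trace(V.conj().T @ A @ V @ B).real - 1.0)
                          for V in (haar_unitary(ds * dr, rng) for _ in range(n))]))

mu_small = mean_mu(2, 2, 1.0, 200, rng)    # both dimensions small
mu_big_r = mean_mu(2, 32, 1.0, 200, rng)   # d_r >> d_s (Theorem 1)
mu_big_sr = mean_mu(8, 8, 1.0, 200, rng)   # d_s ~ d_r, both larger (next Theorem)
assert mu_big_r < mu_small and mu_big_sr < mu_small
```

The final assertions are statistical rather than exact, but with a few hundred samples the separation between the small- and large-dimension regimes is already well resolved.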
Next, we consider the case where we have a large nonequilibrium system interacting with a large equilibrium reservoir, *i.e.*, $d_{r}\approx d_{s}\gg 2$. One would not necessarily expect that heat should be positive for a complex interaction like this. Moreover, from the above concentration of measure argument, it is not clear how $\Gamma$ behaves when $d_{s}$ and $d_{r}$ are comparable. With the following Theorem, we show that a fluctuation relation also emerges when the overall dimension becomes large: \[HighTotDim\] When the system and reservoir dimensions are similar, we expect $\Gamma\rightarrow 1$ for large $d_{{{sr}}}=d_{s}d_{r}$. We can rewrite $\Gamma$ in terms of the eigenbases of $\rho_{s}= \sum_{k} \lambda_k^{({s})} {\ket{{s}_k} \bra{{s}_k}}$ and $\rho_{r}= \sum_{k} \lambda_k^{({r})} {\ket{{r}_k} \bra{{r}_k}}$: $\Gamma = \sum_{nmpq} \lambda^{({r})}_m \lambda^{({s})}_p |\bra{{s}_n {r}_m} U \ket{{s}_p {r}_q}|^2$, where $\sum_{k} \lambda_k^{({s})}=\sum_{k} \lambda_k^{({r})}=1$. As the joint ${{sr}}$ dimension becomes large, any two bases related by a Haar-random unitary will tend to be mutually unbiased [@BengtssonBruzda2007]; that is, the matrix elements $\bra{{s}_n {r}_m} U \ket{{s}_p {r}_q}\rightarrow 1/\sqrt{d_{s}d_{r}}$. In this limit, we have $\Gamma \rightarrow \sum_{nmpq} \lambda^{({r})}_m \lambda^{({s})}_p /(d_{s}d_{r}) = 1$. Finally, we analyse another scenario where we might expect classical thermodynamic intuition to hold: that of an interaction with a hot reservoir. In the following Theorem, we show that a fluctuation relation emerges in the high temperature limit: \[HighTempThm\] As temperature increases, $\Gamma \to 1$ at least linearly with inverse temperature $\beta$. 
Consider the completely-positive trace-preserving (CPTP) map $\mathcal{E}_{{sr}}: \mathcal{L}(\mathcal{H}_{r}) \to \mathcal{L}(\mathcal{H}_{s})$: $\mathcal{E}_{{sr}}(\sigma_{r}) := {\operatorname{tr}_{{r}} \left[ U^\dagger {\mathbbm{1}_{s}}/{d_{s}} \otimes \sigma_{r}\, U \right]}$. By the contractivity of the trace distance under CPTP operations, we have $\tilde{\mu} := {\left\| \rho_{r}- {\mathbbm{1}_{r}}/{d_{r}} \right\|_{1}} \geq {\left\| \mathcal{E}_{{sr}}\left( \rho_{r}- {\mathbbm{1}_{r}}/{d_{r}} \right) \right\|_{1}}$. Expanding the action of $\mathcal{E}_{{sr}}$ gives $$\begin{gathered} \tilde{\mu} \ge {\left\| {\operatorname{tr}_{{r}} \left[ U^\dagger \frac{\mathbbm{1}_{s}}{d_{s}} \otimes \rho_{r}\, U \right]} - {\operatorname{tr}_{{r}} \left[ U^\dagger \frac{\mathbbm{1}_{s}}{d_{s}} \otimes \frac{\mathbbm{1}_{r}}{d_{r}} U \right]} \right\|_{1}} = \mu_{s},\end{gathered}$$ where $\mu_{s}= {\left\| M_{s}- {\mathbbm{1}_{s}}/{d_{s}} \right\|_{1}}$. Combining this with Eq. , we have $d_{s}\tilde{\mu} \ge d_{s}\mu_{s}\ge \mu$. Next, taking the limit $\beta \to 0$, we have $ \lim_{\beta \to 0} \tilde{\mu} = \lim_{\beta \to 0} \sum_{k} \left|{{\text{e}}^{- \beta E_k}}/{Z} - {1}/{d_{r}}\right|$. As $\beta \to 0$, $Z \to d_{r}$ and we can expand the exponential. The zeroth order term cancels with the second term in the last equation, giving: $\lim_{\beta \to 0} \tilde{\mu} = ({1}/{d_{r}}) \sum_{k} \left|\sum_{n=1}^\infty (- \beta E_k)^n/{n!}\right|$. Since $\tilde{\mu}$ behaves linearly with $\beta$ in this limit, it follows that $\Gamma \to 1$ at least linearly. Before discussing the implications of these results, we next demonstrate how quickly the fluctuation relation emerges in different regimes. **Speed of convergence.—** In order to test how fast $\Gamma$ converges on the results of Theorems \[LargeLimitTheorem\], \[HighTotDim\], & \[HighTempThm\], we now explore the statistics of simulated dynamics within the parameter space $(d_{s}, d_{r}, \beta)$.
We construct processes by Haar-randomly sampling unitary transformations from the joint space and use these unitary transformations to define the system and reservoir Hamiltonians $$\begin{gathered} H_{s}= i \, {\rm tr}_{r}[\log (U)] / t \quad \mbox{and} \quad H_{r}= i \, {\rm tr}_{s}[\log (U)] / t,\end{gathered}$$ where we choose $t=1$ to fix the units of energy. Since we examine the effects of temperature and dimensionality on heat dissipation in random processes, our conclusions should hold generically and describe typical behaviour. The notions of high and low temperature depend on the energy level structure of $H_{r}$, so we must be careful in comparing different processes. At high temperature, we expect significant occupation of all reservoir states, which implies $\beta^{-1}\gg |E_N-E_0|$, where $E_N$ and $E_0$ are the highest and lowest reservoir eigenenergies respectively. On the other hand, in the low temperature regime, even the first excited state (with energy $E_1$) has little population, requiring $\beta^{-1}\ll |E_1-E_0|$. Between these two regimes, the temperature is of the same order as the energy splittings in $H_{r}$. These considerations motivate the definition of the scaled temperature parameters $$\begin{aligned} \label{eqhighlowtemp} &\tilde{T}_{\text{low}} := (\beta |E_1 - E_0|)^{-1}, \quad \tilde{T}_{\text{high}} := (\beta |E_N - E_0|)^{-1}, \notag \\ & \quad \mbox{and} \quad \quad \tilde{T}_{\text{mid}} := \frac{N-1} {\beta \, \sum_{n=1}^{N} |E_{n} - E_{n-1}|},\end{aligned}$$ which are used in the low, high and intermediate temperature regimes respectively. Theorems \[LargeLimitTheorem\], \[HighTotDim\] and \[HighTempThm\] manifest themselves in Fig. \[TemperaturePlots\]. Plotted in each panel (temperature regime) is a fit of $\mu$ to data from a large number of Haar-randomly sampled interactions for a variety of system and reservoir dimensions. 
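A minimal version of the numerical setup just described can be sketched as follows (with $t = 1$, the principal logarithm of $U$, and an arbitrary illustrative dimension choice and system state); it also checks the deterministic chain $\mu \le d_s \tilde{\mu}$ from the proof of Theorem \[HighTempThm\] and that $\Gamma \to 1$ as $\beta \to 0$:

```python
import numpy as np

def haar_unitary(d, rng):
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(3)
ds, dr = 2, 4
U = haar_unitary(ds * dr, rng)

# H_r = i tr_s[log(U)] / t with t = 1; log of a (normal) unitary via eigendecomposition
w, V = np.linalg.eig(U)
logU = V @ np.diag(np.log(w)) @ np.linalg.inv(V)
H_r = 1j * np.einsum('sisj->ij', logU.reshape(ds, dr, ds, dr))
assert np.allclose(H_r, H_r.conj().T)        # partial trace of log(U) gives Hermitian H_r

E, W = np.linalg.eigh(H_r)                   # sorted eigenenergies E_0 <= ... <= E_N
beta, N = 1.0, dr - 1
T_low = 1.0 / (beta * abs(E[1] - E[0]))
T_high = 1.0 / (beta * abs(E[-1] - E[0]))
T_mid = (N - 1) / (beta * np.sum(np.abs(np.diff(E))))
assert T_high <= T_mid and T_high <= T_low   # sorted spectrum ordering

rho_s = np.diag([0.9, 0.1])                  # illustrative nonequilibrium system state

def gamma_and_mutilde(b):
    p = np.exp(-b * E); p /= p.sum()
    rho_r = W @ np.diag(p) @ W.conj().T      # thermal state in the H_r eigenbasis
    g = np.trace(U.conj().T @ np.kron(np.eye(ds), rho_r) @ U
                 @ np.kron(rho_s, np.eye(dr))).real
    mt = np.abs(np.linalg.eigvalsh(rho_r - np.eye(dr) / dr)).sum()   # trace norm
    return g, mt

for b in (5.0, 1.0, 0.1):
    g, mt = gamma_and_mutilde(b)
    assert abs(g - 1.0) <= ds * mt + 1e-9    # mu <= d_s * mu_tilde, as in the proof
g0, _ = gamma_and_mutilde(1e-8)
assert abs(g0 - 1.0) < 1e-6                  # Gamma -> 1 in the high-temperature limit
```

The production simulations behind Fig. \[TemperaturePlots\] of course average over many such sampled interactions; this sketch only exercises the definitions for a single one.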
The validity of Theorems \[LargeLimitTheorem\] and \[HighTotDim\] can be seen in any temperature regime: in the low dimensional case (red), $\mu$ tends to be the largest, with $\mu$ smaller on average for the other cases, where either dimension is large. Reading across the panels of Fig. \[TemperaturePlots\] shows that as temperature increases, $\mu \to 0$ independently of $d_{s}, \, d_{r}$ (note the scale), demonstrating Theorem \[HighTempThm\]. Furthermore, in Appendix \[app:boundcomparison\] we show that in the cases where the fluctuation relation arises, $\gamma$ almost always provides a tighter bound for the heat dissipated than previously known bounds. Our result is counter-intuitive in two cases: (a) when the reservoir is at a high temperature and is of the same dimension as the system, one might expect that a random interaction could cool it, producing negative heat; and (b) when the system is at a lower temperature than a smaller reservoir, we might also expect the reservoir to cool. However, in both cases the average heat is positive due to Theorems \[LargeLimitTheorem\], \[HighTotDim\] and \[HighTempThm\]. Intuition fails because the interactions between the system and the reservoir are not weak, as is generally assumed for thermodynamic systems. Randomly sampled unitary processes will generally correspond to Hamiltonians with significant interaction terms that are highly entangling [@Popescu2006]. For both of the above cases, the local state of the reservoir is therefore (highly likely to be) more mixed after the interaction than before, and so the average heat is positive. We now discuss broader implications of our findings.

**Discussion**.—Our ability to coherently control nonequilibrium quantum systems is pivotal if we are to leap into a world run on quantum technologies. Functional quantum technologies must implement irreversible operations, which necessarily generate heat, leading to decoherence that negatively impacts overall performance.
In this Letter, we have demonstrated the emergence of a fluctuation relation for the heat generated in nonequilibrium Landauer processes. The implication is that the heat dissipated to the reservoir in a typical open process is almost always positive. This significantly enhances our understanding of Landauer heat, as previous studies have been unable to make process-independent statements about the average heat exchanged during nonequilibrium interactions. Admittedly, a real experiment does not have access to random unitary operations. However, much like burning Ethan Hunt’s tape, securely erasing quantum information requires generating highly random operations that mimic sampling from the full space of operations, *e.g.*, by using a t-design [@Dankert2009]. In fact, many statistical properties of highly entangling quantum circuits are closely related to Haar-randomly sampling from the unitary group, meaning that our fluctuation relation will apply to realistic quantum technologies. Moreover, it will be increasingly important as these devices become larger and hotter, since the regimes in which the fluctuation relation arises are exactly those for which the GPM bound almost always provides the tightest bound on the heat generated (see Appendix \[app:boundcomparison\]). In addition to its practical significance, our result is of foundational importance, as it can be interpreted as a version of the second law of thermodynamics for heat dissipation. The fact that the relation emerges so quickly as the system and reservoir dimensions grow suggests that similar results may hold for systems with constrained interactions. For such small collections of particles, the concentration of physically relevant processes with respect to the Haar measure is unlikely to be particularly small – indeed, an extension of our work to consider restricted sets of operations is warranted.
**Acknowledgements**.—We thank Lucas C[é]{}leri, John Goold and Arul Lakshminarayan for valuable discussions.

Proof of Lemma \[lem:levymixed\] {#app:levyproof} ================================ The typical application of Levy’s Lemma to reduced quantum states is as follows [@Popescu2006]: \[lem:levypure\] For any pure state $\phi_{{sr}}= U \psi_{{sr}}U^\dagger$, where $\psi_{{sr}}$ is a fixed system-reservoir pure state and $U$ is a Haar-randomly sampled unitary, and for arbitrary $\epsilon > 0$, the distribution of distances between the reduced density matrix of the system $\phi_s = {\operatorname{tr}_{{r}} \left[ \phi_{{sr}}\right]}$ and the maximally mixed state $\mathbbm{1}/d_{s}$ satisfies $$\begin{gathered} \label{eq:levyboundpure} \operatorname{Prob}\left[ {\left\| \phi_{s}- \frac{\mathbbm{1}_{s}}{d_{s}} \right\|_{1}} \geq \sqrt{\frac{d_{s}}{d_{r}}} + \epsilon \right] \leq2 \exp{ \left( - \frac{d_{s}d_{r}\epsilon^2 }{16} \right)}.\end{gathered}$$ If our states $M_{s}$ and $M_{r}$ were being generated from pure joint states, we could directly apply Lemma \[lem:levypure\] to achieve the desired result in proving Theorem \[LargeLimitTheorem\]. However, they are instead generated from mixed joint states: *i.e.*, $M_{s}$ is distributed as ${\operatorname{tr}_{{r}} \left[ U \tau_{{sr}}U^\dagger \right]}$ where $\tau = (\mathbbm{1}_{s}/d_{s}) \otimes \rho_{r}$ (and similarly for $M_{r}$). The following argument shows why the standard version of Levy’s Lemma does not necessarily hold for the mixed case and motivates our proof of Lemma \[lem:levymixed\]. The initial ${{sr}}$ state can always be decomposed as a mixture of pure states: $\tau_{{sr}}= \sum_{kl}\lambda_{kl}\ket{kl}\bra{kl}$.
We can therefore write $$\begin{aligned} \mu_s =& \left\|{\rm tr}_{r}\left[\sum_{kl}\lambda_{kl}U\ket{kl}\bra{kl} U^\dagger\right] - \mathbbm{1}/d_s\right\|_1.\end{aligned}$$ Defining $$\begin{aligned} \mu_{kl} :=& \left\|{\rm tr}_{r}\left[U\ket{kl}\bra{kl} U^\dagger\right] - \mathbbm{1}/d_s\right\|_1,\end{aligned}$$ we have that $\mu_{s}\leq \sum_{kl}\lambda_{kl} \mu_{kl}$ ($\leq \max_{kl} \mu_{kl}$), since the partial trace and the trace norm are both convex functions. While Eq.  would apply to each of the $\mu_{kl}$ with $U$ sampled independently, the upper bound on $\mu_{s}$ depends on the full set $\{\mu_{kl}\}$ for each given $U$. Lemma \[lem:levypure\] makes no statistical statements about the latter. Furthermore, since the space of density matrices with fixed spectrum is a geometrically different space from that of pure states (it is a flag manifold rather than a complex projective space), the usual proof of Lemma \[lem:levypure\] cannot be trivially modified. We now proceed to prove Lemma \[lem:levymixed\]. The proof hinges on a version of the well-known Lemma by Levy [@LedouxBook]: Consider a manifold $M$ endowed with a metric $g$ and measure $\mu$, and a Lipschitz continuous function $f:M\rightarrow \mathbb{R}$ with Lipschitz constant $\eta$, *i.e.*, $f$ satisfies $|f(x)-f(y)|\leq \eta \|x-y\|_g$ $\forall x,y \in M$. The value of the function is concentrated around its expectation value $\mathbb{E}_x f$ according to the distribution $$\begin{gathered} {\rm Prob}\left(f(x)\geq \epsilon + \mathbb{E}_x f\right) \leq 2\alpha_M(\epsilon/\eta),\end{gathered}$$ where $\alpha_M(x)$ is a concentration function for $M$, defined as (an upper bound on) the measure of the set of points in the space more than a distance $x$ from the minimal-boundary volume enclosing half the space.
\[lem:Levy\] Consider the function $f(U) = \|\sigma_{s}(U)-\mathbbm{1}/d_{s}\|_1$, the trace distance of the reduced state $\sigma_{s}(U) = {\rm tr}_{r}[U\tau_{{sr}}U^\dagger]$ from the maximally mixed state. Using a reverse triangle inequality and the contractivity of the trace norm under partial trace, we have, for any $U,V\in {\rm SU}(d_{{{sr}}})$ $$\begin{aligned} |f(U)-f(V)|=&\left| \left\|\sigma_{s}(U)-\frac{\mathbbm{1}}{d_{s}}\right\|_1-\left\|\sigma_{s}(V)-\frac{\mathbbm{1}}{d_{s}}\right\|_1 \right| \nonumber \\ \leq &\, \left\|\sigma_{s}(U)-\sigma_{s}(V)\right\|_1 \nonumber \\ \leq &\, \left\| U\tau_{{{sr}}}U^\dagger - V\tau_{{{sr}}}V^\dagger \right\|_1\nonumber \\ \leq & 2\left\|U-V\right\|_2, \label{eq:Lipschitz1}\end{aligned}$$ where, for the final inequality, we have used Lemma 1 from Ref. [@EpsteinWhaley2016], which relates the penultimate quantity to the Hilbert-Schmidt distance ($\|X\|_2=\sqrt{\tr{XX^\dagger}}$) between the two unitaries. Importantly, this distance induces the Haar measure on the group manifold. Eq.  demonstrates that $f(U)$ is a Lipschitz continuous function on the unitaries with Lipschitz constant $\eta=2$. Calculating the expectation value of $f(U)$ follows a standard argument [@MMuellerLec6; @MMuellerLec8] and is the same as for the case of pure system-reservoir states.
The expected trace distance is related to the expected Hilbert-Schmidt distance squared using Jensen’s inequality $$\begin{aligned} \mathbb{E}_U \left\|\sigma_{s}(U)-\frac{\mathbbm{1}}{d_{s}}\right\|_1 \leq & \sqrt{d_{s}} \mathbb{E}_U \left\|\sigma_{s}(U)-\frac{\mathbbm{1}}{d_{s}}\right\|_2 \nonumber \\ \leq & \sqrt{d_{s}} \sqrt{\mathbb{E}_U\left\|\sigma_{s}(U)-\frac{\mathbbm{1}}{d_{s}}\right\|_2^2}.\label{eq:expinequality}\end{aligned}$$ The Hilbert-Schmidt distance can then be expanded in terms of the purity of $\sigma_{s}$ as $$\begin{gathered} \mathbb{E}_U\left\|\sigma_{s}(U)-\frac{\mathbbm{1}}{d_{s}}\right\|_2^2 = \mathbb{E}_U\tr{\sigma_{s}^2}-\frac{1}{d_{s}}.\label{eq:expinequality2}\end{gathered}$$ Lastly, the expectation value of the purity can be calculated for the Haar measure by utilising properties of the swap operator (though the calculation usually involves an average over pure states, it is the same for the unitary orbit of any state); it is $$\begin{gathered} \mathbb{E}_U \tr{\sigma_{s}^2} = \frac{d_{s}+d_{r}}{d_{s}d_{r}+ 1},\end{gathered}$$ which, using Eqs.  & , leads to $$\begin{aligned} \mathbb{E}_U f \leq & \sqrt{d_{s}}\sqrt{\frac{d_{s}+d_{r}}{d_{s}d_{r}+ 1}-\frac{1}{d_{s}}} \nonumber \\ \leq & \sqrt{d_{s}}\sqrt{\frac{d_{s}+d_{r}}{d_{s}d_{r}}-\frac{1}{d_{s}}} = \sqrt{\frac{d_{s}}{d_{r}}}.\end{aligned}$$ Using Lemma \[lem:Levy\], we have that $$\begin{gathered} {\rm Prob}\left(f(U)\geq \epsilon + \sqrt{\frac{d_{s}}{d_{r}}}\right) \leq 2\alpha_\mathcal{U}(\epsilon/2), \label{eq:prelimineq}\end{gathered}$$ where $\alpha_\mathcal{U}(x)$ is the concentration function on the group manifold of ${\rm SU}(d_{{{sr}}})$ equipped with the Haar measure. 
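The Haar-average purity used in the last step is easy to confirm numerically for pure joint states. In the sketch below (dimensions and sample size are illustrative), random pure states are sampled as normalised complex Gaussian vectors, which is distributionally equivalent to acting with a Haar-random unitary on a fixed pure state:

```python
import numpy as np

rng = np.random.default_rng(4)
ds, dr = 2, 3

purities = []
for _ in range(4000):
    # Haar-random pure joint state (same distribution as U|psi_0> for Haar-random U)
    psi = rng.standard_normal(ds * dr) + 1j * rng.standard_normal(ds * dr)
    psi /= np.linalg.norm(psi)
    m = psi.reshape(ds, dr)
    sig_s = m @ m.conj().T                  # reduced state of the system
    purities.append(np.trace(sig_s @ sig_s).real)

expected = (ds + dr) / (ds * dr + 1)        # Haar-average purity, here 5/7
assert abs(np.mean(purities) - expected) < 0.01
```

The sample mean agrees with $(d_s + d_r)/(d_s d_r + 1)$ to within the statistical tolerance; the swap-operator calculation in the main text gives this value exactly.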
We can relate this to the concentration function for the sphere, using a Theorem by Gromov [@Gromov1979; @minsurfaces]: Let $S^{n+1}(R)$ be the ($n+1$)-sphere of radius $R$, and let $M$ be a closed ($n+1$)-dimensional Riemannian manifold with ${\rm Ric}(M)\geq n/R^2 = {\rm Ric}(S^{n+1}(R))$, where ${\rm Ric}(X)$ is the infimum of diagonal elements of the Ricci curvature tensor on $X$. Choose $M_0\subset M$ to be a domain with smooth boundary and let $B$ be a round ball in $S^{n+1}(R)$ such that $$\begin{gathered} \frac{{\rm Vol}(M_0)}{{\rm Vol}(M)}=\frac{{\rm Vol}(B)}{{\rm Vol}(S^{n+1}(R))}.\end{gathered}$$ It then follows that $$\begin{gathered} \frac{{\rm Vol}(\partial M_0)}{{\rm Vol}(M)}\geq \frac{{\rm Vol}(\partial B)}{{\rm Vol}(S^{n+1}(R))}.\label{eq:isoperimetric}\end{gathered}$$ \[thm:Gromov\] That is, if the Ricci curvature is everywhere greater than that of some sphere, then the rate at which volume is enclosed as one moves away from the boundary of a region is at least as great as the corresponding rate for a similar region on the sphere. Thus, if the inequality in Eq.  holds, the corresponding concentration functions are related as $\alpha_M(x)\leq\alpha_{S^{n+1}(R)}(x)$. Since the group manifold of ${\rm SU}(d_{{{sr}}})$ is compact and simply connected, it has constant, positive Ricci curvature (with respect to the Hilbert-Schmidt distance) [@Milnor1976]. This can be calculated to be ${\rm Ric}(\mathcal{U})=d_{{{sr}}}/2$ (see, *e.g.*, Chapter 18 of Ref. [@GallierBook]). The manifold dimension is $d_{{{sr}}}^2-1$, so we must compare it with $S^{d_{{{sr}}}^2-1}(R)$, finding that the radius must be at least $R_0=\sqrt{2(d_{{{sr}}}^2-2)/d_{{{sr}}}}$ in order for Theorem \[thm:Gromov\] to apply. Choosing the minimal case, we use the Theorem to upper bound $\alpha_\mathcal{U}(x)$ by $\alpha_{S^{d_{{{sr}}}^2-1}(R_0)}(x) = \exp(-d_{{{sr}}} x^2/4)$ [@LedouxBook]. Combining this with Eq.  leads to the desired result in Eq. .
The same argument follows for a function $g(U)=\|\sigma_{r}(U)-\mathbbm{1}/d_{r}\|_1$; therefore the inequality holds under exchange of system and reservoir labels.

Bound Comparison {#app:boundcomparison}
================

Although no discernible hierarchy between $\omega$ and $\gamma$ exists, here we compare the tightness of these bounds to $\beta \braket{{Q}}$. In Fig. \[BoundComparison\], we sample 5,000 Haar-random interactions and initial states to analyse general behaviours in regions of the parameter space. The average heat dissipated in these cases is non-negative, so $\gamma - \omega > 0$ implies that $\gamma$ is a tighter bound. The only distribution of $\gamma - \omega$ with a significant proportion of negative data points occurs for small-scale interactions where the process takes place at low temperature (see (a) in Fig. \[BoundComparison\]). This shows that $\omega$ can provide a tighter bound on the average heat than $\gamma$ in this regime ($\omega$ outperforms $\gamma$ for $\sim35\%$ of such interactions). However, regardless of the temperature, when either dimension is much larger than the other (or both are large), $\gamma$ almost always provides a tighter bound on the average heat (see (b) – (h) in Fig. \[BoundComparison\]). It is interesting to note that as the dimension of the system increases, $\gamma - \omega$ tends to peak around a specific value (see (c), (d), (g) & (h) in Fig. \[BoundComparison\]). The tightness of $\gamma$ with respect to the average heat itself can be understood from the distribution of $\beta \Delta {Q}- \gamma$. Independent of subsystem dimensions, $\gamma$ provides a tight bound when the process occurs at high temperatures (see (b), (d), (f) & (g) in Fig. \[BoundComparison\]), but a rather poor bound for interactions at low temperatures (where all other known bounds also perform poorly). 
It is also interesting to note that as the dimension of the reservoir increases, $\beta \Delta {Q}- \gamma$ tends to 0 and so the GPM bound is tight.

[^1]: The original Landauer bound was postulated and did not provide a quantitative formula for heat, whereas RW do.

[^2]: Jensen’s inequality holds for any convex function $f$ of a random variable $X$: $f(\mathbb{E}[X]) \leq \mathbb{E}[f(X)]$.

[^3]: Thermal operations are the set of operations that conserve energy of each subsystem, *i.e.*, $U$ such that $[ U, H_{{s}} + H_{{r}} ] = 0$. In this case, we can write the identity element of the system as $\mathbbm{1}_{s}= \lim_{b \to 0} {\text{e}}^{-b H_{s}}$, in which case $\Gamma = \lim_{b \to 0} \tr{U^\dag {\text{e}}^{-b H_{s}} \otimes \frac{{\text{e}}^{-\beta H_{r}}}{Z} U \rho_s \otimes \mathbbm{1}_{r}} = 1$, where the final equality arises due to the commutativity of $U$ with each subsystem Hamiltonian.

[^4]: The trace norm is defined for an operator $A$ as $\|A\|_1 = \tr{\sqrt{A^\dag A}}$.
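As a closing illustration of the trace norm defined in the final footnote (a minimal numpy sketch, our own example: for a matrix the trace norm equals the sum of its singular values):

```python
import numpy as np

def trace_norm(a):
    """||A||_1 = tr sqrt(A^dag A), i.e. the sum of the singular values of A."""
    return np.linalg.svd(a, compute_uv=False).sum()

rho = np.diag([0.7, 0.3])          # a qubit density matrix
maximally_mixed = np.eye(2) / 2
distance = trace_norm(rho - maximally_mixed)
print(distance)  # 0.4
```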
---
abstract: 'Recently, Neural Architecture Search has achieved great success in large-scale image classification. In contrast, there has been limited work on architecture search for object detection, mainly because costly ImageNet pretraining is always required for detectors. Training from scratch, as a substitute, demands many more epochs to converge and brings no computational saving. To overcome this obstacle, we introduce a practical neural architecture transformation search (NATS) algorithm for object detection in this paper. Instead of searching for and constructing an entire network, NATS explores the architecture space on the basis of an existing network and reuses its weights. We propose a novel neural architecture search strategy at the channel level instead of the path level and devise a search space specifically targeted at object detection. With the combination of these two designs, an architecture transformation scheme can be discovered that adapts a network designed for image classification to the task of object detection. Since our method is gradient-based and only searches for a transformation scheme, the weights of models pretrained on ImageNet can be utilized in both the searching and retraining stages, which makes the whole process very efficient. The transformed network requires no extra parameters or FLOPs, is friendly to hardware optimization, and is thus practical for real-time applications. In experiments, we demonstrate the effectiveness of NATS on networks such as [*ResNet*]{} and [*ResNeXt*]{}. Our transformed networks, combined with various detection frameworks, achieve significant improvements on the COCO dataset while remaining fast.' 
author:
- |
    Junran Peng$^{1,2,3}$ Ming Sun$^{2}$ Zhaoxiang Zhang$^{1,3}$[^1] Tieniu Tan$^{1,3}$ Junjie Yan$^{2}$\
    $^1$University of Chinese Academy of Sciences\
    $^2$SenseTime Group Limited\
    $^3$Center for Research on Intelligent Perception and Computing, CASIA\
bibliography:
- 'refs.bib'
title: 'Efficient Neural Architecture Transformation Search in Channel-Level for Object Detection'
---

Introduction
============

Convolutional neural networks have achieved significant success in recent years. With the development of better optimization and normalization methods [@nair2010rectified; @ioffe2015batch], many remarkable network architectures [@krizhevsky2012imagenet; @Simonyan15; @szegedy2015going; @he2016deep; @huang2017densely; @hu2018squeeze; @sandler2018mobilenetv2; @xie2017aggregated; @Zhang_2018_CVPR] have been designed for image classification based on hand-crafted heuristics. More recently, great effort has been devoted to neural architecture search (NAS), which automates the architecture design process, and notable results that surpass human-designed architectures have been reported in image classification [@zoph2016neural; @zoph2018learning; @liu2018progressive; @real2018regularized; @pham2018efficient; @liu2018darts; @cai2018proxylessnas]. These achievements enable extensions into important vision tasks beyond image classification. Object detection is the most fundamental and challenging of them, and has been widely used in real-world applications. Unlike image classification, which predicts a class probability for a whole image, object detection needs to locate and recognize multiple objects across a wide range of scales. Even so, most detectors still use networks designed for image classification as backbones and utilize their pretrained weights for initialization to reach high performance, regardless of the gap between the two tasks. We argue that this practice is sub-optimal and that there should be a better backbone architecture that excels at object detection. 
However, there has been little work studying NAS on backbones for object detection, mainly for two reasons. First, finetuning of the backbone is always necessary for detectors to converge or to achieve high performance in a short time; otherwise, detectors must be trained for many more epochs with GN [@wu2018group] to reach comparable performance, according to [@he2018rethinking]. Thus it is inefficient to directly conduct neural architecture search on object detection. Second, the essential gap between image classification and object detection is non-negligible. The experience of NAS in image classification does not suffice for NAS in object detection, and the search space may need to be re-defined. In this paper, we present an effort toward practical meta-learning for the object detection task that tackles these two obstacles. Instead of searching for an entire network architecture [@zoph2016neural; @zoph2018learning; @liu2018progressive; @real2018regularized; @liu2019auto; @chen2018searching], we search for an architecture transformation strategy that adjusts the structure of an existing network to fit the needs of detection, so that the weights of the pretrained model can be fully used in both the searching and re-training stages. We also observe that the effective receptive field (ERF) of the backbone is one of the critical factors in object detection, especially for handling the huge variation of object scales. As demonstrated in  [@luo2016understanding], the dilation of convolution layers is closely related to the distribution of ERFs, and changing the dilation does not influence the kernel size of a convolution layer. Therefore, a convolution layer with a different dilation can reuse the pretrained weights, which makes architecture transformation in the dilation domain possible. 
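The weight-reuse property just described can be made concrete: a dilated convolution keeps the same kernel (and hence the same weight-tensor shape) while covering a larger spatial extent. The helper below is our own illustration, not code from the paper:

```python
import numpy as np

def dilated_extent(kernel_size, dilation):
    """Spatial extent covered by a dilated kernel: k + (k - 1) * (d - 1)."""
    return kernel_size + (kernel_size - 1) * (dilation - 1)

# A 3x3 conv weight has shape (C_out, C_in, 3, 3) for *every* dilation,
# so ImageNet-pretrained weights transfer directly to any dilated replica.
w = np.zeros((64, 64, 3, 3))
extents = {d: dilated_extent(3, d) for d in (1, 2, 3, 5)}
print(w.shape, extents)  # (64, 64, 3, 3) {1: 3, 2: 5, 3: 7, 5: 11}
```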
Additionally, unlike previous works that search for optimal paths at the cell level [@zoph2018learning; @liu2018progressive; @real2018regularized; @pham2018efficient; @liu2018darts; @cai2018proxylessnas] or at the network level [@zoph2016neural; @real2017large], our transformation search is conducted at the channel level. To be specific, we split the forward signal generated by each path into pieces in the channel domain, and treat the sub-paths as the minimum searchable units. As shown in Fig. \[fig:intro\], the searched path becomes a fusion of various operations with their respective channels. With the combination of the dilation search space and the channel-level search strategy, our method, named NATS, is able to efficiently discover high-performance architecture transformation schemes for object detection. In our experiments, NATS improves the AP of Faster-RCNN based on ResNet-50 and ResNet-101 by $2.0\%$ and $1.8\%$ without any extra parameters or FLOPs, and keeps inference times almost the same. The transformation also proves valid for various types of detectors. On Mask-RCNN [@he2017mask], Cascade-RCNN [@cai2018cascade] and RetinaNet [@lin2017focal], the AP is improved by $1.9\%$, $1.3\%$ and $1.3\%$ respectively. As shown in Table \[tb:intro\], the searching stage of NATS takes only 2.5 days on 8 1080TI GPUs, and retraining of the searched network takes about 1 day (the same as training a baseline model) with no need for extra pretraining on ImageNet [@russakovsky2015imagenet], making the whole process efficient and practical. 
  Methods                               Dataset      Size of Train Set   Input Size During Search   GPU-Days   Task
  ------------------------------------- ------------ ------------------- -------------------------- ---------- ------
  NASNet [@zoph2018learning]            CIFAR-10     50k                 $32\times32$               2000       Cls
  AmoebaNet [@real2018regularized]      CIFAR-10     50k                 $32\times32$               3000       Cls
  PNASNet [@liu2018progressive]         CIFAR-10     50k                 $32\times32$               150        Cls
  EAS [@cai2018efficient]               CIFAR-10     50k                 $32\times32$               10         Cls
  DPC [@chen2018searching]              Cityscapes   5k                  $769\times769$             2600       Seg
  DARTS [@liu2018darts]                 CIFAR-10     50k                 $32\times32$               4          Cls
  ProxylessNAS [@cai2018proxylessnas]   ImageNet     1.3M                $224\times224$             10         Cls
  Auto-Deeplab [@liu2019auto]           Cityscapes   5k                  $321\times321$             3          Seg
  NATS-det                              COCO         118k                $800\times1200$            20         Det

  \[tb:intro\]

Related works
=============

Object detection
----------------

Object detection is one of the most fundamental fields in computer vision for both academic research and industrial application. It aims at finding the location of each object instance and determining its category given an image. Some fundamental works like R-CNN [@girshick2014rich], Fast-RCNN [@girshick2015fast], Faster R-CNN [@ren2015faster] and SSD [@liu2016ssd] have greatly pushed forward the development of this area. In general, object detectors consist of three parts: a backbone that takes an image as input and extracts features, a neck attached to the backbone that fuses or further encodes the extracted features, and a head for classification and localization[^2]. In the past years, great progress has been achieved in designing each of these modules. For backbones, some networks [@Li_2018_ECCV; @sun2018fishnet] have been manually designed specially for object detection. Deformable convolution has also been proposed to enable the backbone to adaptively sample input features, which helps performance but is hostile to hardware acceleration. FPN [@lin2017feature] is one of the representative works exploring the architecture of the neck. 
It builds a top-down structure with lateral connections to different stages of the backbone to integrate features at all scales. Many recent works [@kong2018deep; @kong2017ron; @zhang2018single] propose various multi-scale integration strategies to generate pyramidal feature representations. In  [@liu2018receptive; @luo2016understanding], it is proposed that the effective receptive fields (ERFs) of backbones are essential for object detection and that the dilation of convolutions can effectively change the distribution of ERFs. Based on these findings, we aim to design a network architecture that holds better ERFs to handle the huge variation of object scales in detection.

Neural architecture search
--------------------------

Designing networks automatically has drawn great attention recently. Several works  [@zoph2018learning; @zoph2016neural; @cai2018efficient; @baker2017designing; @zhong2018practical] introduce reinforcement learning with an RNN controller to design cell structures that form a network. In  [@real2018regularized; @liu2018hierarchical; @miikkulainen2019evolving], evolutionary methods have been used to update network structures instead of an RL-based controller. These methods are sample-based and often consume a great amount of computational resources. In  [@pham2018efficient; @cai2018efficient], the weights of sampled models can be reused to reduce the search cost. Other works use gradient-based methods that search for relatively optimal child networks within predefined super-nets, which makes NAS with limited computational resources possible. DARTS [@liu2018darts] formulates a super-network based on a continuous relaxation of the architecture representation, which allows efficient search of the architecture using gradient descent. ProxylessNAS [@cai2018proxylessnas] further improves the optimization strategy and introduces a latency loss to find more efficient architectures. 
In Auto-DeepLab [@liu2019auto], a gradient-based method is also applied to search for the backbone of a segmentation model. As for the search space, most methods tend to search for optimal paths at the cell level [@zoph2018learning; @liu2018progressive; @real2018regularized; @pham2018efficient; @liu2018darts; @cai2018proxylessnas] or the network level [@zoph2016neural; @real2017large], whereas in this paper we propose a novel search space at the channel level. Inspired by the idea of function-preserving transformation in  [@cai2018efficient], we propose a neural architecture transformation search (NATS) algorithm that automatically finds an optimal strategy for transforming the structure of existing networks designed for image classification to fit the task of object detection.

Methods
=======

In this section, we first analyze a crucial factor of the backbone for object detection. Then we describe our general strategy of neural architecture search in the channel domain and design a search space that enables effective architecture transformation search specialized for object detection.

Revisit effective receptive fields
----------------------------------

The receptive field is one of the most basic concepts in deep CNNs. Unlike in fully connected networks, where the value of each neuron is associated with the entire input to the network, a neuron in a convolutional network depends on a certain region of the input. This property enables neurons in convolutional networks to be position-sensitive, and makes dense prediction tasks like object detection and semantic segmentation possible. As carefully studied in  [@luo2016understanding], the distribution of impact within a receptive field is shown to be Gaussian-like, and only a small central region of pixels in the receptive field effectively contributes to the response of a neuron in the output map. This region is called the effective receptive field (ERF). 
In image classification, the input sizes are always kept small and networks are only designed to predict the class of the main object in an image, regardless of localization. In object detection, by contrast, the input sizes are often much bigger and detectors are required to handle objects over a large range of scales; thus the ERFs of networks designed for image classification cannot suffice for this demand[^3]. As mentioned in  [@luo2016understanding], changing dilations can effectively modify the ERF distribution of convolution layers. Moreover, changing the dilation does not influence the kernel size of a convolution layer, which enables pretrained weights to be directly reused. Therefore, in this work we constrain our search space to the dilations of convolution layers in order to grant the network better ERFs for handling the huge variation of object scales.

Channel-level neural architecture search
----------------------------------------

A neural network is a directed acyclic graph consisting of a set of nodes connected in order. The directed edges connecting nodes are associated with operations that process the input signals, such as convolution, max-pooling, [*etc.*]{} For most gradient-based NAS methods, an over-parameterized super-network is constructed first with all candidate paths included, and one superior path is selected on each edge while the other candidates are removed. However, signals in a network often contain numerous channels during forward propagation, which means that a path is not the minimum separable structural unit in a network, and path-level search methods [@cai2018proxylessnas; @liu2018darts; @liu2019auto] limit the granularity of architecture search. Thus, in our work, we treat a channel of the signal generated by an operation of a certain genotype as the minimum separable structural unit, and transform path-level NAS into channel-level NAS. 
Given an input signal $x$, the output signal $y^*$ is generated based on the outputs of all $G$ candidate paths during search. Each path is associated with a certain type of operation $O^g$, and we call the categories of operations genotypes, $\mathcal{G}$, with $g \in \mathcal{G}$. In DARTS and Auto-Deeplab, each entire path is assigned an architecture parameter $\alpha^g$ and $y^*$ is the weighted sum of the path outputs, where the weights are calculated by applying a softmax to $\alpha^g$: $$y^g = O^g(x), \quad y^* = \sum_{g \in \mathcal{G}}{\frac{\exp(\alpha^g)}{\sum_{g'\in \mathcal{G}}{\exp(\alpha^{g'})}} y^g}$$ After obtaining the continuous super-architecture with $\alpha$, every edge with a mixed operation of all genotypes is replaced with the most likely operation by taking the argmax of $\alpha^g$. Thus only one genotype is selected to handle input signals on each edge in the resulting architecture. ![The structure of a block during search. The output of each operation is equally divided into sub-groups in the channel domain. Each sub-group of each candidate is assigned an architecture parameter to fit together as output, which makes the search space within each channel group continuous. The search between channel groups is independent.[]{data-label="fig:search"}](figs/search.pdf){height="1.3in"} To apply a more fine-grained architecture search, we equally divide $y^g$ into $N$ groups in the channel domain for each genotype as follows: $$y^g \Rightarrow \{y_1^g, y_2^g, ... , y_i^g, ... , y_N^g\}, \quad \text{with} \; C_{out}=\sum_{i=1}^{N}C_i^g,$$ where $i$ denotes the index of the channel group and $C_{out}$ denotes the total number of output channels. As illustrated in Fig. \[fig:search\], instead of assigning path-wise architecture parameters, we assign each channel group an architecture parameter $\alpha_i^g$ where $1 \leq i \leq N$. 
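The channel grouping and the per-group softmax over $\alpha_i^g$ can be sketched in a few lines of numpy (shapes and names are our own illustration; in a real network the candidate outputs $y^g$ come from dilated convolutions):

```python
import numpy as np

rng = np.random.default_rng(0)
G, C_out, N = 5, 64, 16            # genotypes, output channels, channel groups
H = W = 32

# outputs y^g = O^g(x) of all candidate operations on one edge
y = rng.standard_normal((G, C_out, H, W))

# split each y^g into N equal channel groups: y^g => {y_1^g, ..., y_N^g}
groups = y.reshape(G, N, C_out // N, H, W)

# one architecture parameter alpha_i^g per genotype per channel group
alpha = np.zeros((N, G))

# continuous relaxation: softmax over genotypes within each channel group
w = np.exp(alpha) / np.exp(alpha).sum(axis=1, keepdims=True)    # (N, G)
y_star = np.einsum('ng,gnchw->nchw', w, groups).reshape(C_out, H, W)
print(y_star.shape)   # (64, 32, 32)
```

With uniform $\alpha$ (as initialized here) every group mixes all genotypes equally; training then sharpens the per-group weights independently.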
We use the continuous relaxation among genotypes in each channel group, and the output of group $i$ is obtained as: $$y_i^* = \sum_{g \in \mathcal{G}}{\frac{\exp(\alpha_i^g)}{\sum_{g'\in \mathcal{G}}{\exp(\alpha_i^{g'})}} y_i^g}$$ In this way, a super-net is constructed in which nodes are connected with sub-paths in the channel domain, and architecture parameters $\alpha_i^g$ are learnt for each genotype in each channel group. The training set is divided into two splits, and the optimization alternates between updating the network parameters on the first split and updating the architecture parameters $\alpha_i^g$ on the other split.

Decoding discrete architectures with channel decomposition {#sec:discrete}
----------------------------------------------------------

Unlike  [@liu2018darts],  [@liu2019auto] and  [@cai2018proxylessnas], which select the path with the maximum probability and prune redundant paths, the discrete architecture decoding in our method is conducted based on the distribution of $\alpha_i^g$. We first keep the index of the genotype with the maximum probability in each channel group as $$ind_i = \mathop{\arg\max}_{g} \alpha_{i}^g,$$ and calculate the intensity of each genotype throughout all channel groups as: $$I^g = \frac{\sum_{i=1}^{N}{1(ind_i=g)}}{N}$$ As illustrated in Fig. \[fig:decode\], we retain all the paths that have a positive $I^g$ but reset the output channels according to $I^g$ as $C_{out}^{g} = C_{out}I^{g}$. The output feature maps of the different genotypes are concatenated together to form a final output $y$ as follows: $$\{y^{1}, y^{2}, ... , y^{g}, ..., y^{G}\} \Rightarrow y$$ ![Decoding the discrete architecture based on the intensity of genotypes.[]{data-label="fig:decode"}](figs/decode.pdf){height="1.3in"}

Architecture transformation search for object detection
--------------------------------------------------------

Taking the bottleneck structure in ResNet as an example, the transformation search is applied on the $3\times3$ convolution layer in the middle. 
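The decoding rule can be sketched in a few lines of numpy (names are ours); the decoded channel counts also determine how a pretrained $(C_{out}, C_{in}, K, K)$ weight is split along its output-channel axis for retraining:

```python
import numpy as np

def decode_architecture(alpha, c_out):
    """ind_i = argmax_g alpha_i^g; I^g = fraction of groups won by genotype g;
    returns the decoded channel counts C_out^g = C_out * I^g."""
    n_groups, n_genotypes = alpha.shape
    winners = alpha.argmax(axis=1)
    intensity = np.bincount(winners, minlength=n_genotypes) / n_groups
    return (c_out * intensity).astype(int)

alpha = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.7, 0.1],
                  [0.8, 0.1, 0.1],
                  [0.3, 0.1, 0.6]])   # 4 channel groups, 3 genotypes
channels = decode_architecture(alpha, c_out=256)
print(channels)   # genotype 0 wins 2 of 4 groups -> [128, 64, 64]

# the pretrained (C_out, C_in, K, K) weight splits accordingly into sub-convs
w = np.zeros((256, 64, 3, 3))
sub_weights = np.split(w, np.cumsum(channels)[:-1], axis=0)
print([s.shape[0] for s in sub_weights])   # [128, 64, 64]
```

Note that only genotypes with a positive intensity survive; the example assumes $C_{out} I^g$ is an integer, as it is whenever $N$ divides $C_{out}$.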
Dilations in both orientations of the convolution, $\{d_h, d_w\}$, are set as our search space. Since changing dilations does not modify the kernel size or the shape of the weights, we can directly transfer the weights of a pretrained model to our networks in both the searching and retraining stages. Combining the channel-domain searching strategy with the dilation search space makes our neural architecture transformation search possible. The whole process is gradient-based and no extra pretraining is needed, which makes our method very efficient. During the training of the super-network, the backbone is initialized with weights pretrained on ImageNet. For each $3\times3$ convolution layer in stage-3,4,5, the weights are copied to all of its dilated replicas. Weight initialization for the searched model is different in the re-training stage. Since the original $3\times3$ convolution layer has been decomposed into sub-convs with various dilations and output channels, the pretrained weight $W$ with shape $C_{out}\times C_{in} \times K\times K$ is also decomposed into $G$ groups with shapes $\{C_{out}^g \times C_{in} \times K \times K\}_{g=1}^G$ in order to fit the weight shapes of the sub-convs.

Experiments and results
=======================

COCO dataset
------------

We use MS-COCO [@lin2014microsoft] for the experiments in this paper. It contains 83K training images in [*train*]{}2014 and 40K validation images in [*val*]{}2014. In its 2017 version, it has 118K images in the [*train*]{}2017 set and 5K images in [*val*]{}2017 (a.k.a. [*minival*]{}). The dataset is widely considered challenging, in particular due to the huge variation of object scales and the large number of objects per image. We consider AP@IoU as the evaluation metric, which averages mAP across IoU thresholds ranging from 0.50 to 0.95 with an interval of 0.05. During the searching stage, we use [*train*]{}2014 for training model parameters and use 35K images from [*val*]{}2014 that are not in [*minival*]{} for calibrating architecture parameters. 
During the retraining stage, our searched model is trained with [*train*]{}2017 and evaluated with [*minival*]{} as is conventional.

Implementation details
----------------------

In our method, we first search for an appropriate structure transformation scheme on the COCO2014 dataset, then train our searched model on the COCO2017 dataset as mentioned above. We experiment on Faster-RCNN baselines with FPN [@lin2017feature], and adopt models pretrained on ImageNet [@russakovsky2015imagenet] for weight initialization in both the searching and training stages.

  Num of Groups   AP     AP$_{50}$   AP$_{75}$   AP$_S$   AP$_M$   $AP_L$
  --------------- ------ ----------- ----------- -------- -------- --------
  baseline        36.4   58.9        38.9        21.4     39.8     47.2
  1               36.9   58.9        39.1        21.3     40.1     47.5
  2               37.2   59.6        39.8        21.6     40.8     48.9
  4               37.9   60.2        40.9        22.2     40.9     49.9
  8               37.8   60.4        40.4        21.4     41.3     50.0
  16              38.4   61.0        41.2        22.5     41.8     50.4
  32              38.2   60.6        41.0        22.3     41.7     50.1

  \[tb:gd\]

  Channels Per Group   AP     AP$_{50}$   AP$_{75}$   AP$_S$   AP$_M$   $AP_L$
  -------------------- ------ ----------- ----------- -------- -------- --------
  baseline             36.4   58.6        38.6        21.0     39.8     47.2
  1                    38.0   60.5        40.5        22.5     41.4     50.3
  8                    38.1   60.7        40.7        22.3     41.6     50.2
  16                   38.2   60.7        40.9        22.4     41.6     50.5
  32                   38.3   60.9        41.3        22.3     41.9     50.4
  64                   37.8   60.5        40.4        21.7     40.9     50.3

  \[tb:cd\]

#### Searching details.

We conduct the architecture transformation search for 25 epochs in total. To make the super-network converge better, the architecture parameters are not updated in the first 10 epochs. The batch size is 1 image per GPU due to GPU memory constraints. We use an SGD optimizer with momentum 0.9 and weight decay 0.0001 for training the model weights. A cosine annealing learning rate that decays from 0.00125 to 0.00005 is applied as the lr-scheduler. When training the architecture parameters $\alpha$, we use an Adam optimizer with learning rate 0.01 and weight decay 0.00001.

#### Training details. 
After the architecture search is finished, we decode the discrete architecture as described in  \[sec:discrete\]. We use an SGD optimizer with 0.9 momentum and 0.0001 weight decay. For fair comparison, all our models are trained for 13 epochs, known as the $1\times$ schedule. The initial learning rate is set to 0.00125 per image and is divided by 10 at epochs 8 and 11. Warm-up and synchronized BatchNorm are applied in both the baselines and our searched models for multi-GPU training. It takes approximately 2.5 days to finish the search on 8 1080TI GPUs.

Object detection results
------------------------

In our paper, ResNet [@he2016deep] and ResNeXt [@xie2017aggregated] are selected as backbones in all experimental settings. Following the regime mentioned in DCNv2 [@zhu2018deformable], we apply the architecture transformation search only on blocks of stage-3,4,5 in the backbone. For stage-3 and stage-4, the dilation candidates are {1, 2, 3, (1,3), (3,1)}. The dilation candidates of stage-5 are {1, 2, 3, 4, 5, (1,3), (3,1), (1,5), (5,1)}. No extra parameters or FLOPs are introduced in our transformed architectures.

#### Group division.

We evaluate different ways of dividing output channels into groups. With a given fixed group number ($G\in \{1, 2, 4, 8, 16, 32\}$), NATS is applied to ResNet-50. In Table \[tb:gd\], we find that more groups achieve better performance. With a fixed group number of 16, the transformed architecture achieves an AP of 38.4% (2.0% higher than the baseline). Note that $G=1$ is a special case corresponding to the path-level searching strategy similar to DARTS [@liu2018darts] and ProxylessNAS [@cai2018proxylessnas], and the improvement is limited (only 0.5% over the baseline). We also fix the number of channels ($C \in \{1, 8, 16, 32, 64\}$) per searching group. Since different blocks have different channel numbers, the group number can change across layers in this setting. The results are shown in Table \[tb:cd\]. 
With a fixed channel number per searching group, our transformed ResNet-50 achieves a [*minival*]{} AP of 38.3%, which is $1.9\%$ higher than the baseline. From both settings we find that searching with a more fine-grained grouping is better in general. We infer that it enables blocks to have more combinations of operations with different dilations, which brings more flexible ERFs. We also find that the improvement in AP increases as the scale of objects grows. In the model searched with $G=16$, the improvements of AP$_S$, AP$_M$ and AP$_L$ are $1.5\%$, $2.0\%$ and $3.2\%$ respectively.

  Backbone                  AP             AP$_{50}$      AP$_{75}$      AP$_S$         AP$_M$         AP$_L$
  ------------------------- -------------- -------------- -------------- -------------- -------------- --------------
  R101                      38.6           60.7           41.7           22.8           42.8           49.6
  R101-NATS                 [**40.4**]{}   [**62.6**]{}   [**44.0**]{}   [**23.2**]{}   [**44.1**]{}   [**53.3**]{}
  X101-32$\times$4d         40.5           63.1           44.1           24.2           45.1           52.9
  X101-32$\times$4d-NATS    [**41.6**]{}   [**64.3**]{}   [**45.2**]{}   [**24.9**]{}   [**45.5**]{}   [**54.8**]{}

  \[tb:deeper\]

  GENOs      AP             AP$_{50}$      AP$_{75}$      AP$_S$         AP$_M$         $AP_L$
  ---------- -------------- -------------- -------------- -------------- -------------- --------------
  baseline   36.4           58.9           38.9           21.4           39.8           47.2
  NATS-A     37.7           59.9           40.5           22.0           40.8           49.8
  NATS-B     38.0           60.5           40.7           21.8           41.4           [**50.5**]{}
  NATS-C     [**38.4**]{}   [**61.0**]{}   [**41.2**]{}   [**22.5**]{}   [**41.8**]{}   50.4

  \[tb:geno\]

  Method         Backbone   AP             AP$_{50}$      AP$_{75}$      AP$_S$         AP$_M$         $AP_L$
  -------------- ---------- -------------- -------------- -------------- -------------- -------------- --------------
  Faster-RCNN    R50        36.4           58.9           38.9           21.4           39.8           47.2
  Faster-RCNN    R50-NATS   [**38.4**]{}   [**61.0**]{}   [**40.8**]{}   [**22.1**]{}   [**41.5**]{}   [**50.5**]{}
  Mask-RCNN      R50        37.5           59.6           40.5           22.0           41.0           48.4
  Mask-RCNN      R50-NATS   [**39.3**]{}   [**61.3**]{}   [**42.6**]{}   [**23.0**]{}   [**42.5**]{}   [**51.7**]{}
  Cascade-RCNN   R50        40.7           59.4           44.2           22.9           43.9           54.2
  Cascade-RCNN   R50-NATS   [**42.0**]{}   [**61.4**]{}   [**45.5**]{}   [**24.2**]{}   [**45.3**]{}   [**55.9**]{}
  RetinaNet      R50        36.0           56.1           38.6           20.4           40.0           48.2
  RetinaNet      R50-NATS   [**37.3**]{}   [**57.8**]{}   [**39.5**]{}   [**20.7**]{}   [**40.8**]{}   [**49.6**]{}

  \[tb:detector\]

#### Deeper models.

It is known that deeper networks have larger ERFs with stronger intensity and may dilute the effects of many approaches, so we study the impact of architecture transformation on deeper networks. We compare transformed backbones with baselines on ResNet-101 and ResNeXt-101. As shown in Table \[tb:deeper\], the architecture transformation yields a 1.8% AP improvement on ResNet-101, from 38.6 to 40.4. For ResNeXt-101, we use the $32\times4d$ configuration and set the channels per group to 32 to be consistent with the backbone. The architecture transformation yields a 1.1% improvement, from 40.5 to 41.6. Comparing ResNet-101 with a shallower network like ResNet-50, the improvement of AP$_S$ is relatively small ($0.4\%$ [*v.s.*]{} $0.7\%$), but the improvement of AP$_L$ is even greater ($3.7\%$ [*v.s.*]{} $3.2\%$) even though the network is deeper. ResNeXt-101 behaves in a similar way.

#### Influence of genotypes.

In this section we explore the influence of the genotypes included during the architecture transformation search. We include different sets of dilation candidates as genotypes in this ablation study. We first investigate the necessity of dense dilation candidates. For stage-3,4,5 we set the dilation candidates {1, 3}, {1, 3}, {1, 3, 5} respectively as setting A, and set the dilation candidates {1, 2, 3}, {1, 2, 3}, {1, 2, 3, 4, 5} as setting B. To explore the influence of aspect ratios, we add [(1, 3), (3, 1)]{} for stage-3,4 and [(1, 3), (3, 1), (1, 5), (5, 1)]{} for stage-5 as candidates in setting C. As shown in Table \[tb:geno\], NATS-B is higher than NATS-A by 0.3%, which implies that denser dilation candidates are slightly better. NATS-C is 0.4% better than NATS-B, demonstrating that dilations with aspect ratios are beneficial for object detection.

#### Various detectors. 
To validate the generalization ability of our method, we also combine the transformed networks with different types of detectors. Several well-known and remarkable frameworks, such as Mask-RCNN [@he2017mask], Cascade-RCNN [@cai2018cascade] and RetinaNet [@lin2017focal], are selected in this ablation study. ResNet-50 and the transformed ResNet-50 (G=16) are selected as backbones, and all models are trained with the $1\times$ lr-schedule. As demonstrated in Table \[tb:detector\], the performance of every detector in the chart is improved prominently ($1.8\%$ for Mask-RCNN, $1.3\%$ for Cascade-RCNN and $1.3\%$ for RetinaNet). This shows the strong generalization capability of networks searched through our transformation method.

Visualization of ERFs
---------------------

Following the regime mentioned in  [@luo2016understanding], we visualize the receptive field of a neuron in the output map of the last convolution layer. The input values are set to 1 for the whole image and only the neuron in the center of the output map propagates backward. To focus only on the intensity of connections, ReLUs are removed during visualization. As shown in Fig. \[fig:rf\], the ERFs of our transformed network are larger than those of the vanilla structures. While the intensities in the center region remain strong, the intensity of the outer region becomes weaker as the region becomes bigger. This indicates that this type of ERF can better fit the task of object detection.

Conclusion
==========

In this paper, we present NATS, which can efficiently learn a neural architecture transformation strategy to adapt existing networks to new tasks. We propose a novel architecture search scheme in the channel domain and design a search space of dilations targeted at object detection, which makes the neural architecture transformation search possible. Finetuning from pretrained models is feasible in both the searching and re-training stages, making the whole process very efficient. 
Experiments on the COCO dataset have demonstrated that NATS can effectively improve the capability of networks to handle the huge variation of object scales, and robustly yields improvements on various types of detectors. In the future, we would like to investigate architecture transformation search over the depth and width of each stage for the object detection task.

Acknowledgements
================

This work was supported in part by the National Key R&D Program of China (No. 2018YFB-1402605), the Beijing Municipal Natural Science Foundation (No. Z181100008918010), the National Natural Science Foundation of China (No. 61836014, No. 61761146004, No. 61773375, No. 61602481) and CAS-AIR.

[^1]: Corresponding author.

[^2]: For one-stage detectors, the head is fully convolutional.

[^3]: Given a conventional input size of $800\times1200$, the size of objects varies from 32 to 800 pixels in COCO, while the size of the ERFs in ResNet-50 is approximately 100 pixels, as shown in Fig. \[fig:rf-r50\].
---
abstract: 'We report on observations of the Large Magellanic Cloud with the [*Fermi Gamma-Ray Space Telescope*]{}. The LMC is clearly detected with the Large Area Telescope (LAT) and for the first time the emission is spatially well resolved in gamma rays. Our observations reveal that the bulk of the gamma-ray emission arises from the 30 Doradus region. We discuss this result in light of the massive star populations that are hosted in this area and address the implications for cosmic-ray physics. We conclude by exploring the scientific potential of the ongoing Fermi observations for the study of high-energy phenomena in massive stars.'
author:
- 'J. Knödlseder for the Fermi LAT collaboration'
title: Fermi Observations of the Large Magellanic Cloud
---

Introduction
============

Since the early days of high-energy gamma-ray astronomy it has become clear that the gamma-ray flux received at Earth is dominated by emission from the Galactic disk [@clark68]. This emission can be well understood in terms of cosmic-ray interactions with the interstellar medium [@strong07]. At energies $\ga100$ MeV, the generation of diffuse gamma-ray emission is dominated by the decay of $\pi^0$ produced in collisions between cosmic-ray nuclei and interstellar medium nuclei. Ultimately, the study of this hadronic gamma-ray emission may provide hints about the still mysterious origin of the Galactic cosmic rays. However, the interpretation of the Galactic diffuse gamma-ray emission is complicated by the fact that a large number and variety of individual sources contribute along the line of sight to the observed emission, thus blurring the link between individual cosmic-ray acceleration sites and observed gamma-ray signatures in our Galaxy.
Gamma rays from cosmic-ray interactions are also expected from nearby galaxies, and indeed, the EGRET telescope aboard the Compton Gamma-Ray Observatory ([*CGRO*]{}) detected for the first time gamma-ray emission from the Large Magellanic Cloud (LMC) [@sreekumar92]. The LMC is an excellent target for studying the link between cosmic-ray acceleration and gamma-ray emission since this galaxy is nearby (bringing the source fluxes within reach of modern gamma-ray telescopes) and since the system is seen nearly face-on (avoiding the superposition of sources along the line of sight that hampers studies in our own Galaxy). In addition, the LMC is rather active, housing many supernova remnants, bubbles and superbubbles, and massive star forming regions that are all potential sites of cosmic-ray acceleration [@biermann04; @cesarsky83; @binns07]. The Large Area Telescope (LAT) aboard the [*Fermi Gamma-Ray Space Telescope*]{} (FGST) now provides the capability to study diffuse gamma-ray emission from nearby galaxies in depth, and from the LMC in particular [@digel00; @weidenspointner07]. We report here on the initial analysis of observations taken in the course of the first year's all-sky survey by the LAT.

Observations
============

The LAT is the primary instrument on the FGST satellite, which was launched from Cape Canaveral on June 11th, 2008. The LAT is an imaging, wide field-of-view, high-energy gamma-ray telescope, covering the energy range from below 20 MeV to more than 300 GeV. The LAT is a pair-conversion telescope with a precision tracker made of a stack of 18 x,y silicon tracking planes and a calorimeter made of 96 CsI(Tl) crystals. The tracker array is covered by a segmented anticoincidence shield allowing for the rejection of charged-particle backgrounds.
The LAT has a large $\sim2.5$ sr field of view and, compared to earlier gamma-ray missions, a large effective area ($>7000$ cm$^2$ on axis at $\sim1$ GeV for the event selection used in this paper), improved angular resolution ($\sim0.5\deg$ 68% containment radius at 1 GeV) and low dead time ($\sim25$ $\mu$s per event). The $1\sigma$ energy resolution in the 100 MeV - 10 GeV energy range is better than $\sim10$%. A detailed description of the instrument is given by @atwood09. The on-orbit instrument calibration is presented by @abdo09a. The data used in this work cover the period August 8th 2008 – April 24th 2009 and amount to 211.7 days of continuous sky survey observations. During this period a total exposure of $\sim2.3 \times 10^{10}$ cm$^2$ s (at 1 GeV) has been obtained for the LMC region.

Data preparation
----------------

The data analysis presented in this paper has been performed using software version [v9r11]{} and the instrument response functions [P6\_V3]{}. We collected all data obtained within the period August 8th 2008 – April 24th 2009 and applied the [*diffuse event class*]{} filter that has been designed to minimize contamination by instrumental background while retaining a substantial fraction of the signal. As has been pointed out by @atwood09, any harsher event cut would not significantly improve the signal-to-noise ratio. We further excluded from the data all periods where the spacecraft had entered the South Atlantic Anomaly (SAA) and those for which the spacecraft z-axis pointed more than $47\deg$ away from the zenith direction (the zenith direction being defined by the vector running from the Earth center through the spacecraft). While the SAA cut excludes periods of particularly large instrumental background from the analysis, the latter cut excludes periods where the Earth enters the field of view. Furthermore, to minimize contamination from Earth albedo photons we exclude photons with zenith angles above $105\deg$ from the analysis.
We further restricted the analysis to photon energies above 200 MeV, where our current knowledge of the instrument response implies systematic uncertainties that are smaller than $\sim10\%$ and where the redistribution of photons in energy due to incomplete energy measurements becomes negligible.

Morphology
----------

To illustrate the distribution of observed gamma-ray photons in the LMC region we show in Fig. \[fig:image\] a counts map of the area. The arrival directions of observed photons in the 200 MeV - 100 GeV energy range have been binned into $3' \times 3'$ pixels covering a $10\deg\times10\deg$ area around the position $(l,b) = (279.5\deg, -33.0\deg)$. The binned map has then been smoothed using a 2D adaptive Gaussian kernel smoothing technique [@ebeling06] to remove the Poissonian noise that arises from the relatively small number of counts that have been registered. The signal-to-noise ratio (s.n.r.) has been set to 10 to reduce statistical noise variations to $\la10\%$ in the image.

![ Preliminary adaptively smoothed (s.n.r. $=10$) [*Fermi*]{}/LAT counts map of a $10\deg\times10\deg$ region centered on the LMC for the energy range 200 MeV - 100 GeV (greyscale). The contours show the extinction map of @schlegel98 as tracer of the total gas column density in the LMC. Ten linearly spaced contour levels are plotted. The diamond in the north-east of the image designates the location of the blazar CRATES J060106-703606 [@healey2007] that contributes at a low level to the gamma-ray emission in this area. \[fig:image\] ](knodlseder_fig1.eps){width="10cm"}

We overlay as contours on the [*Fermi*]{}/LAT counts map the extinction map of @schlegel98 as tracer of the total gas column density in the LMC. To first order the extinction scales linearly with the total gas column density, and we chose 10 linearly spaced contours to allow the reader to visually appreciate the distribution of gas column densities in the LMC.
Obviously, a substantial fraction of the gas is found in a small area in the north of the LMC, at roughly $(l,b) \sim (279.5\deg, -31.5\deg)$, which coincides with the 30 Doradus star forming region.

![ Preliminary longitude (top) and latitude (bottom) photon intensity profiles of the LMC region for the energy range 200 MeV - 100 GeV. The solid line indicates the expected contributions from diffuse galactic emission, diffuse extragalactic emission, instrumental background and the blazar CRATES J060106-703606 at $(l,b) = (281.04\deg,-29.63\deg)$ in this area of the sky. \[fig:profile\] ](knodlseder_fig2.eps){width="10cm"}

The high-energy gamma-ray photons that are observed from the LMC also peak in this area. The photon intensity in the 30 Dor region exceeds $\sim300$ counts/deg$^2$, while in most of the remaining regions of the LMC it remains below $\sim120$ counts/deg$^2$ (the background rate around the LMC is around $\sim50-70$ counts/deg$^2$). The excess near 30 Dor is also clearly seen in the longitude and latitude profiles of the photon intensity observed by the LAT that are shown in Fig. \[fig:profile\]. Within a rectangular box covering galactic longitudes $274\deg \le l \le 284\deg$ and galactic latitudes $-36\deg \le b \le -30\deg$ we find a total of $\sim1800$ counts within the energy range 200 MeV - 100 GeV above the expected contributions from galactic diffuse emission, extragalactic diffuse emission, instrumental background, and the blazar CRATES J060106-703606. These background contributions have been estimated by fitting spatial and spectral templates of their emission components together with a spatial template for the LMC emission to the data.
Galactic diffuse emission has been modelled spatially and spectrally using the GALPROP model [@strong07] version [54\_59Xvarh7S]{}[^1], while for the combination of extragalactic diffuse emission and residual instrumental background we assume isotropic emission with a power-law spectral distribution. CRATES J060106-703606 is modelled as a point source at $(l,b) = (281.04\deg,-29.63\deg)$ with a power-law spectral distribution. For the LMC we use the extinction map of @schlegel98 as spatial template, from which we subtract a pedestal level of $0.07^{\rm m}$ from all pixels and in which we set all pixels outside a radius of $4\deg$ around $(l,b) = (279.65\deg,-33.34\deg)$ to zero in order to extract the LMC emission. As spectral model we assume a power law for the LMC. To describe the morphology of the high-energy gamma-ray emission from the LMC we first fit a point source with free position, flux and spectral index on top of the background model[^2] to our data. This results in a best-fitting point-source position of $(l,b) = (279.58\deg, -31.72\deg)$ with a statistical 95% confidence error radius of $0.09\deg$ (the systematic position uncertainty is estimated to be less than $0.02\deg$). We note that this position is close to that of R 136, the central star cluster of 30 Dor, which is located at $(l,b) = (279.47\deg, -31.67\deg)$, i.e. at an angular distance of $0.11\deg$ from our best-fitting point-source location. The detection significance of the LMC can be estimated using the so-called [*Test Statistic*]{} (TS), which is defined as twice the difference between the log-likelihood $L_1$ that is obtained by fitting the LMC model on top of the background model to the data, and the log-likelihood $L_0$ that is obtained by fitting the background model only, i.e. ${\rm TS} = 2(L_1 - L_0)$.
Under the hypothesis that our model satisfactorily explains the [*Fermi*]{}/LAT data, TS follows a $\chi^2_p$ distribution with $p$ degrees of freedom, where $p$ is the number of free parameters in the LMC model [@cash79]. In the particular case of a point source with free position, flux and spectral index we have $p=4$, and the measured TS of 869.1 corresponds to a significance of $29.8 \sigma$. As a next step we replace the point-source model by an extended source model, which we implement as an axisymmetric 2D Gaussian shape with variable angular size $\sigma$. In addition to the size we again fit the position, flux and power-law spectral index of the source. This results in a best-fitting source position of $(l,b) = (279.5\deg, -32.2\deg)$ (with a 95% confidence radius of $0.1\deg$) and a source extent of $\sigma = 1.0 \pm 0.1\deg$. The TS amounts to 1088.5, which is larger by 219.4 than the value obtained for the point-source model. Since we added one additional parameter (the source extent $\sigma$) with respect to the point-source model, we obtain the significance of the source extension from the $\chi^2_1$ distribution as $14.8\sigma$. As an alternative to the geometrical models we also compare the [*Fermi*]{}/LAT data to various spatial templates that trace the interstellar matter distributions in the LMC. For neutral hydrogen (H I) we use the aperture synthesis and multibeam data that @kim2005 have combined from ATCA and Parkes observations. For molecular hydrogen we use CO observations of the LMC obtained with the NANTEN telescope [@yamaguchi01]. We further use the extinction map of @schlegel98 (SFD) as tracer of the total gas column density and also compare our data to the 100 $\mu$m IRIS map that has been obtained by reprocessing the IRAS survey data [@mivilledeschenes05]. The results of this comparison are summarized together with those of the geometrical models in Table \[tab:models\].
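The TS-to-significance conversion used in this analysis can be reproduced with a short, self-contained sketch (our illustration, not the collaboration's analysis code). For one extra degree of freedom the significance is simply $\sqrt{\Delta{\rm TS}}$, since $\chi^2_1$ is the square of a standard normal deviate; for two degrees of freedom the $\chi^2_2$ p-value has the closed form $e^{-{\rm TS}/2}$ and can be inverted to a Gaussian sigma numerically.

```python
import math

def sigma_1dof(ts):
    # chi^2 with 1 dof is the square of a standard normal deviate,
    # so the equivalent Gaussian significance is sqrt(TS)
    return math.sqrt(ts)

def sigma_2dof(ts):
    # p-value of chi^2 with 2 dof is exp(-TS/2); convert it to a
    # two-sided Gaussian significance by bisecting erfc(z/sqrt(2)) = p
    p = math.exp(-ts / 2.0)
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid / math.sqrt(2.0)) > p:
            lo = mid  # z too small, tail probability still above p
        else:
            hi = mid
    return 0.5 * (lo + hi)

# source-extension test: Delta TS = 219.4 with 1 extra dof -> ~14.8 sigma
```

For large TS values (such as the 869.1 of the point-source fit with $p=4$) the p-value underflows double precision and the conversion has to be done in log space, which is why the sketch sticks to the low-dof cases quoted in the text.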
The best fits are obtained for the SFD extinction map and the IRIS 100 $\mu$m infrared map, which give TS values of $1179.6$ and $1179.1$, respectively. For 2 free parameters (the total flux in the map and the spectral index) this corresponds to a detection significance of $34.5\sigma$. An almost equally good fit is obtained using the neutral hydrogen map. Fitting instead the CO map to the LAT data provides a rather poor fit, suggesting that the gamma-ray morphology differs from that of the molecular gas in the LMC. Fitting the H I and CO maps together to the data confirms this result, since the fit attributes $97\%$ of the total flux to the H I component. Correspondingly, the TS increase with respect to fitting the H I gas map alone is also negligible. The H I/SFD/IRIS 100 $\mu$m maps fit the data considerably better than a single point source, adding further evidence that the observed high-energy gamma-ray emission is extended in nature. Furthermore, the 2D Gaussian source model cannot match the fit obtained with those tracer maps, suggesting that the emission morphology is more complex than a single Gaussian shape.

[lrc]{}
LMC model & TS & Parameters\
Point source & 869.1 & 4\
2D Gaussian source & 1088.5 & 5\
H I gas map & 1173.4 & 2\
CO gas map & 932.2 & 2\
H I + CO gas maps & 1176.1 & 4\
SFD extinction map & 1179.6 & 2\
IRIS 100 $\mu$m infrared map & 1179.1 & 2\

Spectrum
--------

![ Preliminary spectrum of the LMC obtained by fitting the extinction map of @schlegel98 in 12 logarithmically-spaced energy bins covering the energy range 200 MeV - 20 GeV to the [*Fermi*]{}/LAT data. Errors are statistical only. \[fig:spectrum\] ](knodlseder_fig3.eps){width="10cm"}

Using the extinction map of @schlegel98 (i.e. our best-fitting spatial template of the high-energy emission) we extract a spectrum of the LMC by fitting the data in 12 logarithmically-spaced energy bins covering the energy range 200 MeV - 20 GeV.
Above 20 GeV, photons from the LMC become too sparse in our actual data set to allow for meaningful spectral points to be derived. Figure \[fig:spectrum\] shows the LMC spectrum that has been obtained by this method. Our analysis indicates a spectral steepening of the emission with increasing energy, suggesting that a simple power law is an inadequate description of the data. We confirm this trend by fitting the data using a broken power law instead of a simple power law. This results in an improvement of the TS by $10.1$, corresponding to a significance of $2.7\sigma$ ($p=2$) for the spectral steepening. Fitting alternatively an exponentially cutoff power law improves the TS by $7.8$ with respect to the simple power law, corresponding to a significance of $2.8\sigma$ ($p=1$) for the spectral cutoff. Integrating the broken power law or the exponentially cutoff power law model over the energy range 100 MeV – 500 GeV gives identical photon fluxes of $(3.1 \pm 0.2) \times 10^{-7}$ ph cm$^{-2}$ s$^{-1}$ and an energy flux of $(2.0 \pm 0.1) \times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ for the LMC. The systematic uncertainty in these flux measurements amounts to $\sim10\%$.

Discussion and conclusions
==========================

@sreekumar92 reported the first detection of the LMC in $>100$ MeV gamma rays based on 4 weeks of data collected with the EGRET telescope aboard [*CGRO*]{}. Due to EGRET's limited angular resolution and the weak emission detected from the LMC, details of the spatial structure of the galaxy could not be resolved. However, it was obvious from the EGRET data that the LMC is an extended gamma-ray source. [*Fermi*]{}/LAT now allows for the first time to clearly resolve the gamma-ray emission of the LMC and to attribute the emission maximum to the 30 Dor star forming region.
While this coincidence could be taken as a hint for an enhanced cosmic-ray density in 30 Dor with respect to the rest of the galaxy, we note that a substantial fraction of the interstellar gas of the LMC is also confined to the 30 Dor area. Consequently, the target density for cosmic-ray interactions is greatly enhanced in this region, which implies a corresponding enhancement of the gamma-ray luminosity. Whether the data also support an enhanced cosmic-ray density in 30 Dor with respect to the rest of the galaxy requires a more detailed analysis of the observations. The rather poor fit of the CO map to the LAT data suggests that the overall distribution of gamma-ray emission differs from that of the molecular hydrogen. The distribution of neutral hydrogen fits the data considerably better, and the combined fit of the H I and CO maps indicates that any contribution to the gamma-ray emission that is correlated with the molecular gas is at best marginal. This agrees well with expectations, since the gas budget of the LMC is largely (90-95%) dominated by neutral hydrogen [@fukui99]. Consequently, we are presently unable to determine the CO-to-H$_2$ conversion factor, X$_{\rm CO}$, from our LMC data. @fichtel91 performed a detailed modelling of the cosmic-ray distribution in the LMC and predicted an integrated $>100$ MeV photon flux of $(2.3 \pm 0.4) \times 10^{-7}$ ph cm$^{-2}$ s$^{-1}$ for the galaxy. @pavlidou01 predicted an integrated $>100$ MeV photon flux of $1.1\times 10^{-7}$ ph cm$^{-2}$ s$^{-1}$ based on estimates of the LMC supernova rate and total gas densities. Our observed flux of $(3.1 \pm 0.2) \times 10^{-7}$ ph cm$^{-2}$ s$^{-1}$ falls on the high side of these estimates, yet given the uncertainties in the models the agreement can be judged satisfactory. Further studies of the LMC with [*Fermi*]{}/LAT will now concentrate on the spectral analysis of the data, with particular emphasis on variations of the spectral shape throughout the galaxy.
Thanks to the excellent sensitivity and angular resolution of the LAT, this is the first time that such studies become possible. Other nearby galaxies also await detection, such as the Small Magellanic Cloud or the Andromeda Galaxy (M31). Both should be within reach of [*Fermi*]{}, and the comparative study of their diffuse gamma-ray emission should help to understand the impact of the environment and metallicity on the physics of cosmic rays.

Abdo, A.A., Ackermann, M., Ajello, M., et al. 2009a, Instrumentation and Methods for Astrophysics, submitted (astro-ph/0904.2226)
Atwood, W.B., Abdo, A.A., Ackermann, M., et al. 2009, ApJ, submitted (astro-ph/0902.1089)
Binns, W.R., Wiedenbeck, M.E., Arnould, M., et al. 2007, SSRv, 130, 439
Biermann, P.L. 2004, New Astronomy Reviews, 48, 41
Cash, W. 1979, ApJ, 228, 939
Clark, G.W., Garmire, G.P., & Kraushaar, W.L. 1968, ApJ, 153, 203
Cesarsky, C.J., & Montmerle, T. 1983, SSRv, 36, 173
Digel, S.W., Moskalenko, I., Ormes, J.F., et al. 2000, AIP, 528, 449
Ebeling, H., White, D.A., & Rangarajan, F.V.N. 2006, MNRAS, 368, 65
Fichtel, C.E., Özel, M.E., Stone, R.G., & Sreekumar, P. 1991, ApJ, 374, 134
Fukui, Y., Mizuno, N., Yamaguchi, R., et al. 1999, PASJ, 51, 745
Healey, S.E., Romani, R.W., Taylor, G.B., et al. 2007, ApJS, 171, 61
Kim, S., Staveley-Smith, L., Dopita, M.A., et al. 2005, ApJS, 143, 487
Miville-Deschênes, M.-A., & Lagache, G. 2005, ApJS, 157, 302
Pavlidou, V., & Fields, B.D. 2001, ApJ, 558, 63
Schlegel, D.J., Finkbeiner, D.P., & Davis, M. 1998, ApJ, 500, 525
Sreekumar, P., Bertsch, D.L., Dingus, B.L., et al. 1992, ApJ, 400, L67
Strong, A.W. 2007, Ap&SS, 309, 35
Weidenspointner, G., Lonjou, V., & Knödlseder, J. 2007, AIP, 921, 498
Yamaguchi, R., Mizuno, N., Onishi, T., et al.
2001, PASJ, 53, 959

[^1]: Available from the website: http://galprop.stanford.edu

[^2]: From now on we call the combination of the GALPROP model, the isotropic model and the CRATES J060106-703606 point source the [*background model*]{} of our analysis. The free parameters of this background model are the normalization of the GALPROP model, the intensity and spectral slope of the isotropic component, and the flux and spectral slope of the CRATES J060106-703606 point source.
---
abstract: 'In this paper, we first propose the design of the Temporal-Carry-deferring MAC (TCD-MAC) and illustrate how our proposed solution gains significant energy and performance benefits when utilized to process a stream of input data. We then propose using the TCD-MAC to build a reconfigurable, high speed, and low power Neural Processing Engine (TCD-NPE). We further propose a novel scheduler that lists the sequence of processing events needed to process an MLP model in the least number of computational rounds on our proposed TCD-NPE. We illustrate that our proposed TCD-NPE significantly outperforms similar neural processing solutions that use conventional MACs in terms of both energy consumption and execution time.'
author:
- |
    Ali Mirzaeian, Houman Homayoun, Avesta Sasan\
    George Mason University, Fairfax, VA, USA\
    amirzaei@gmu.edu, hhomayoun@ucdavis.edu, asasan@gmu.edu
bibliography:
- 'main.bib'
nocite:
- '[@icc]'
- '[@dc]'
- '[@pt]'
- '[@rh]'
title: '[TCD-NPE: A Re-configurable and Efficient Neural Processing Engine, Powered by Novel Temporal-Carry-deferring MACs]{}'
---

Introduction and Background {#intro}
===========================

Deep neural networks (DNNs) have attracted a lot of attention over the past few years, and researchers have made tremendous progress in developing deeper and more accurate models for a wide range of learning-related applications [@alexnet; @vgg]. The desire to bring these complex models to resource-constrained hardware platforms such as embedded, mobile and IoT devices has motivated many researchers to investigate various means of reducing the complexity of DNN models and improving the efficiency of the computing platforms [@sze2017efficient].
In terms of model efficiency, researchers have explored different techniques, including quantization of weights and features [@courbariaux2015binaryconnect; @han2015deep], formulating compressed and compact model architectures [@howard2017mobilenets; @icnn; @icnntecs; @iandola2016squeezenet; @han2015deep; @8697497], increasing model sparsity and pruning [@yang2017designing; @han2015deep], binarization [@hubara2016binarized; @courbariaux2015binaryconnect], and other model-centered alternatives. On the platform (hardware) side, GPU solutions have rapidly evolved over the past decade and are considered a prominent means of training and executing DNN models. Although the GPU has been a real energizer for this research domain, it is not an ideal solution for efficient learning, and it has been shown that the development and deployment of hardware solutions dedicated to processing learning models can significantly outperform GPU solutions. This has led to the development of Tensor Processing Units (TPU) [@abadi2016tensorflow], Field Programmable Gate Array (FPGA) accelerator solutions [@lacey2016deep], and many variants of dedicated ASIC solutions [@diannao; @dadiannao; @du2015shidiannao; @eyeris]. Today, there exist many different flavors of ASIC neural processing engines. The common theme among these architectures is the use of a large number of simple Processing Elements (PEs) to exploit the inherent parallelism in DNN models. Compared to a regular CPU with a capable Arithmetic Logic Unit (ALU), the PE of these dedicated ASIC solutions is stripped down to a simple Multiplication and Accumulation (MAC) unit. However, many PEs are used together to either form a specialized data flow [@dadiannao] or be tiled into a configurable NoC for parallel processing of DNNs [@eyeris].
The observable trend in the evolution of these solutions, from DianNao [@diannao], to DaDianNao [@dadiannao], to ShiDianNao [@du2015shidiannao], to Eyeriss [@eyeris] (to name a few), is the optimization of the data flow to increase the reuse of information read from memory and to reduce data movement (in the NoC and to/from memory). Common among the previously named ASIC solutions is designing for data reuse at the NoC level while ignoring possible optimizations of the PE's MAC unit. A conventional MAC operates on two input values at a time: it computes the multiplication result, adds it to the previously accumulated sum, and outputs a new and *correct* accumulated sum. When working with streams of input data, this process takes place for every input pair taken from the stream. But in many applications we are not interested in the correct value of the intermediate partial sums; we are only interested in the correct final result. The first design question that we answer in this paper is whether we can design a faster and more efficient MAC if we remove the requirement of generating a correct intermediate sum when working on a stream of input data. In this paper, we propose the design of the Temporal-Carry-deferring MAC (TCD-MAC), and use the TCD-MAC to build a reconfigurable, high speed, and low power MLP Neural Processing Engine (NPE). We illustrate that the TCD-MAC can produce an approximate-yet-correctable result for intermediate operations and can correct the output in the last stage of the stream operation to generate the correct output. We then build a re-configurable and specialized MLP Processing Engine using a farm of TCD-MACs (used as PEs) supported by a reconfigurable global buffer (memory), and illustrate its superior performance and lower energy consumption when compared with state-of-the-art ASIC NPU solutions.
To remove data flow dependency from the picture, we used our proposed NPE to process various Fully Connected Multi-Layer Perceptrons (MLPs); this simplifies and reduces the number of data flow possibilities and focuses our attention on the impact of the PE on the efficiency of the resulting accelerator.

RELATED WORK {#related_work}
============

The work in [@eyeris] categorizes the possible data flows into four major categories: 1) No Local Reuse (NLR), where neither the PE (MAC) output nor the filter weight is stored in the PE; examples of accelerator solutions using NLR data flow include [@diannao; @dadiannao; @zhang2015optimizing]. 2) Output Stationary (OS), where the filter and weight values are input in each cycle, but the MAC output is locally stored; examples of accelerator solutions using OS data flow include [@gupta2015deep; @du2015shidiannao; @peemen2013memory; @Nesta1]. 3) Weight Stationary (WS), where the filter values are locally stored but the MAC result is passed on; examples of accelerators using WS data flow include [@sankaradas2009massively; @chakradhar2010dynamically; @gokhale2014240]. 4) Row Stationary (RS, and its variant RS+), where some of the reusable MAC outputs and filter weights remain within a local group of PEs to reduce data movement for computing the next round of computation; an example of an accelerator using RS is [@eyeris]. The OS and NLR are generic data flows and can be applied to any DNN, while the WS and RS apply only to Convolutional Neural Networks (CNNs), as they promote the reuse of filter weights. Hence, the type of applicable data reuse (output and/or weight) depends on the model being processed. The Multi-Layer Perceptron (MLP) is a sub-class of NNs that has been extensively used for modeling complex and hard-to-develop functions [@block1962perceptron].
An MLP has a feed-forward structure and is comprised of three types of layers: (1) an input layer for feeding the information to the model, (2) one or more hidden layer(s) for extracting features, and (3) an output layer that produces the desired output, which could be regression, classification, function estimation, etc. Unfortunately, when it comes to MLPs, or when processing Fully Connected (FC) layers, unlike CNNs, no filter weight can be reused. In these models the viable data flows are OS and NLR. The only case in which the WS solution can be used for processing MLPs is multi-batch processing, which may benefit from weight reuse. Another related work is the NPE proposed in [@tu2015rna]. This solution, denoted RNA, is a special case of NLR where the data flow is controlled through the NoC connectivity between different PEs; RNA breaks the MLP model into multi-layer loops that are successively mapped to the accelerator PEs, and uses the PEs as either multipliers or adders, dynamically forming a systolic array. In the result section of this paper, we demonstrate that OS solutions are in general more efficient than NLR solutions. We further illustrate that our proposed TCD-MAC, when used in the context of our proposed NPE, outperforms state-of-the-art accelerators that rely on the (fastest and most efficient) conventional MAC solutions.

Our Proposed MLP Processing Engine
==================================

Before describing our proposed NPE solution, we first describe the concept of *temporal carry* and illustrate how this concept can be utilized to build a Temporal Carry deferring Multiplication and Accumulation (TCD-MAC) unit. Then we describe how an array of TCD-MACs is used to design a re-configurable and high-speed MLP processing engine, and how the sequence of operations in such an NPE is scheduled to compute multiple batches of MLP models.
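As a concrete reminder of why MLP processing reduces to streams of multiply-accumulate operations, the sketch below (our toy illustration with made-up shapes, not the NPE's implementation) computes one fully connected layer: each output neuron consumes one input/weight stream, which is exactly the workload a single MAC, or a single TCD-MAC, processes.

```python
def fc_layer(x, W):
    # x: input vector of length N; W: N x M weight matrix (list of rows).
    # Output neuron j is the dot product sum_i x[i] * W[i][j] --
    # one multiply-accumulate stream of length N per neuron.
    n_out = len(W[0])
    return [sum(x[i] * W[i][j] for i in range(len(x))) for j in range(n_out)]
```

Since every neuron is an independent stream, a farm of MAC units can evaluate many neurons (or many batch elements) in parallel, which is the parallelism an output-stationary NPE exploits.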
Temporal Carry deferring MAC (TCD-MAC) {#TCD_MAC_section}
--------------------------------------

Suppose two vectors $A$ and $B$ each hold $N$ M-bit values, and the goal is to compute their dot product, $\sum_{i=0}^{N-1}(A_i*B_i)$ (similar to what is done during the activation computation of each neuron in a NN). This can be achieved using a single Multiply-Accumulate (MAC) unit working on 2 inputs at a time for N rounds. Fig. \[GV\_TCD-MAC\](A-top) shows the general view of a typical MAC architecture that is comprised of a multiplier and an adder (with 4-bit input width), while Fig. \[GV\_TCD-MAC\](A-bottom) provides a more detailed view of this architecture. The partial products (M partial products for M bits) are first generated in the Data Reshape Unit (DRU). Then the Hamming weight compressors (HWCs) in the Compression and Expansion Layer (CEL) reduce the addition of the M partial products to a single addition of two wider binaries, whose sum in an adder yields the multiplication result.

![image](figures/tci_mac.png){width="1.80\columnwidth"}

The building blocks of the CEL unit are the HWCs. An HWC, denoted by C$_{HW}$(m:n), is a combinational logic block that implements the Hamming Weight (HW) function for $m$ input bits (of the same bit-significance value) and generates an $n$-bit binary output. The output width $n$ of an HWC is related to its input width $m$ by $n = \lceil \log_{2}(m+1) \rceil$. For example, “011010”, “111000”, and “000111” could each be the input to a C$_{HW}$(6:3), and all three inputs generate the same Hamming weight value, represented by “011”. A Complete HWC function CC$_{HW}$(m:n) is defined as a C$_{HW}$ function in which $m$ is $2^{n}-1$ (e.g., CC(3:2) or CC(7:3)). Each HWC takes a column of m input bits (of the same significance value) and generates its n-bit Hamming weight. In the CEL unit, the n output bits of each HWC are fed (according to their bit-significance values) as inputs to the proper $C_{HW}$(s) in the next CEL layer.
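A single HWC is easy to model behaviorally. The sketch below is our illustration, not the authors' gate-level compressor: it returns the Hamming weight of $m$ equal-significance input bits as an $n$-bit binary, where $n$ bits suffice because the weight is at most $m$ (i.e. $n = \lceil \log_2(m+1) \rceil$).

```python
def hwc(bits):
    # C_HW(m:n): compress m same-significance bits into the n-bit binary
    # encoding of their Hamming weight; n bits suffice because the
    # weight is at most m
    m = len(bits)
    n = m.bit_length()            # equals ceil(log2(m + 1)) for m >= 1
    w = sum(bits)
    return [(w >> k) & 1 for k in range(n)]  # LSB first

# "011010", "111000" and "000111" all have weight 3, i.e. "011"
```

Note that a complete compressor such as CC(3:2) or CC(7:3) is simply the case where the input count exactly fills the $n$-bit output range.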
This process is repeated until each column contains no more than 2 bits, which is a proper input size for a simple adder. In Fig. \[GV\_TCD-MAC\] it is assumed that a Carry Propagation Adder Unit (CPAU) is used. The result is then added to the previously accumulated value in the output register by the second adder to generate a new accumulated sum. Note that in a conventional MAC, the carry (propagation) bits in the CPAUs are spatially propagated through the carry chain, which constitutes the critical timing path for both the adder and the multiplier. Fig. \[GV\_TCD-MAC\].B shows our proposed TCD-MAC. In this solution, only a single CPAU is used. Furthermore, the CPAU is broken into two distinct segments: 1) the GENeration (GEN) logic, and 2) the Partial CPA (PCPA). The GEN is the first layer of CPA logic, which produces the Generate ($G_i^c$) and Propagate ($P_i^c$) signals for each bit position $i$ at cycle $c$. The TCD-MAC relies on the assumption that we only need to correctly compute the final result of multiplication and accumulation over an array of inputs (e.g., $\sum_{i=0}^{N-1}(A_i*B_i)$), while relaxing the requirement for generating correct intermediate sums. This relaxed specification is applicable when a MAC is used to compute a neuron value in a DNN. Benefiting from this relaxed requirement, the TCD-MAC skips the computation of the PCPA and injects (defers) the $G_i^c$ and $P_i^c$ generated in cycle $c$ to the CEL unit in cycle $c+1$. Using this approach, the propagation of carry bits in the long carry chain (in the PCPA) is skipped, and, without loss of accuracy, the impact of each carry bit is injected into the correct bit position in the next cycle of computation. We refer to this process as temporal (in time) carry propagation. The temporally carried $G_i^c$ is stored in a new set of registers denoted as the Carry Buffer Unit (CBU), while the $P_i^c$ in each cycle is stored in the Output Register Unit (ORU).
Note that CBU bits can be injected into any of the $C_{HW}(m:n)$ in any of the CEL layers at the same bit position. However, it is desirable to inject the CB bits into a $C_{HW}(m:n)$ that is incomplete, to avoid an increase in the size and critical path delay of the CEL.

![TCD-MAC cycle time is computed by excluding the PCPA. In the last cycle of computation, the TCD-MAC activates the PCPA to propagate the unconsumed carry bits. []{data-label="cycle_reduction"}](figures/gun_chart.png){width="0.72\columnwidth"}

Assuming that a TCD-MAC works on an array of N input pairs, the temporal carry injection is done N-1 times. In the last round, however, the PCPA must be executed. As illustrated in Fig. \[cycle\_reduction\], in this approach the cycle time of the TCD-MAC can be reduced to that excluding the PCPA, allowing the computation over the PCPA to take place in one extra cycle. This extra cycle allows the unconsumed carry bits to be propagated through the PCPA carry chain, forcing the TCD-MAC to generate the correct output. Using this technique we shorten the cycle time of the TCD-MAC across a large number of cycles; the saving obtained from shorter cycles over a large number of cycles significantly outweighs the penalty of one extra cycle. To support signed inputs, the TCD-MAC pre-processes the input data. For a partial product $p=a\times b$, if one value ($a$ or $b$) is negative, it is used as the multiplier. With this arrangement, we treat the generated partial sums as positive values and later correct this assumption by adding the two’s complement of the multiplicand during the last step of generating the partial sum. The following example clarifies this concept: suppose that $a$ is a positive and $b$ a negative 8-bit binary.
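A bit-level way to see the temporal deferral is to model each CDM cycle as a carry-save (3:2) compression whose generate bits are kept for the next cycle, with the full carry propagation (PCPA) run only once at the end. The sketch below is our own functional model of this idea for unsigned inputs; it omits the CEL structure and the signed-input pre-processing:

```python
def tcd_mac(pairs, width=32):
    """Functional model of temporal carry deferral: each CDM cycle keeps
    a partial-sum word (ORU) and a deferred-carry word (CBU); the carry
    word is re-injected one position to the left in the next cycle, and
    a single full carry-propagate add (the PCPA cycle) finishes the job."""
    mask = (1 << width) - 1
    save, carry = 0, 0                    # ORU and CBU contents
    for a, b in pairs:                    # N CDM cycles, no carry chain
        p = (a * b) & mask                # product for this cycle
        s = save ^ carry ^ p              # propagate-style sum bits
        c = ((save & carry) | (save & p) | (carry & p)) << 1  # generate bits
        save, carry = s & mask, c & mask  # defer carries to next cycle
    return (save + carry) & mask          # final CPM cycle: PCPA fires once

assert tcd_mac([(3, 4), (2, 5), (7, 1)]) == 3*4 + 2*5 + 7*1
```

Note that the intermediate `save` values are not the running dot product, which is exactly the relaxed specification described in the text.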
The multiplication $b\times a$ can be reformulated as: $$\scriptsize b \times a = (-2^{7}+\sum_{i=0}^{6}x_{i}2^{i}) \times a= -2^{7}a+(\sum_{i=0}^{6}x_{i}2^{i}) \times a \vspace{-2mm}$$ The term $-2^{7}a$ is the two’s complement of the multiplicand left-shifted by 7 bits, and the term $(\sum_{i=0}^{6}x_{i}2^{i}) \times a$ only accumulates shifted versions of the multiplicand.

TCD-NPE: Our Proposed MLP Neural Processing Engine {#TCD_NPE_section}
--------------------------------------------------

The TCD-NPE is a configurable neural processing engine composed of a 2-D array of TCD-MACs. The TCD-MAC array is connected to a global buffer using a configurable Network on Chip (NoC) that supports various forms of data flow, as described in section \[intro\]. However, for simplicity, we limit our discussion to supporting the OS and NLR data flows for executing MLPs. This choice is made to help us focus on the performance and energy impact of utilizing TCD-MACs in designing an efficient NPE without complicating the discussion with the support of many different data flows. Figure \[npe\_architecture\] captures the overall TCD-NPE architecture. It is composed of 1) the Processing Element (PE) array, which is a tiled array of TCD-MACs, 2) the Local Distribution Networks (LDN) that manage the PE-array connectivity to memories, 3) two global buffers, one for storing the filter weights and one for storing the feature maps, and 4) the Mapper-and-controller unit, which translates the MLP model into a supported data and control flow. The functionality and design of each of these units are described next:

![TCD-NPE overall architecture. The Mapper algorithm is executed externally, and the sequence of events is loaded into the controller for governing the OS data and control flow.[]{data-label="npe_architecture"}](figures/noc_overview7.png){width="0.9\columnwidth"}

### **PE Array**

The PE-array is the computational engine of our proposed TCD-NPE.
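The reformulation above can be checked numerically. The following sketch (our own helper, assuming an 8-bit two's-complement $b$ and non-negative $a$) accumulates only shifted copies of $a$ and applies the $-2^{7}a$ correction at the end:

```python
def signed_mul_via_positive_partials(a, b, bits=8):
    """b*a = -2^(bits-1)*a + (sum_i x_i 2^i)*a, where x_i are the low
    bits of b's two's-complement encoding, all treated as positive."""
    assert a >= 0 and -(1 << (bits - 1)) <= b < 0
    low = b & ((1 << (bits - 1)) - 1)    # x_{bits-2} .. x_0
    acc = sum(a << i for i in range(bits - 1) if (low >> i) & 1)
    return acc - (a << (bits - 1))       # correction: add two's complement

assert signed_mul_via_positive_partials(13, -37) == 13 * -37   # -481
```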
Each PE in this tiled array is a TCD-MAC. Each TCD-MAC can be operated in two modes: 1) Carry Deferring Mode (CDM), or 2) Carry Propagation Mode (CPM). According to the discussion in section \[TCD\_MAC\_section\], when working with an input stream of size N, the TCD-MAC is operated in the CDM mode for N cycles (computing the approximate sum), and in the CPM mode in the last cycle to generate the correct output. This is in line with the OS data flow as described in section \[related\_work\]. Note that the TCD-MACs in this PE-array can also be operated in CPM mode in every cycle, allowing the same PE-array architecture to support NLR as well. After computing the raw neuron value (prior to activation), the TCD-MAC writes the computed sum onto the NoC bus. The neuron value is then passed to the quantization and activation unit before being written back to the global buffer. Fig. \[sequence\] captures the logic implementation for quantization (to 16 bits) and ReLU [@alexnet] activation in this unit.

![The logic implementation of Quantization (Left) and ReLU Activation (right) for signed fixed-point 16-bit values []{data-label="sequence"}](figures/Q_A.png){width="0.80\columnwidth"}

Consider two layers of an MLP where the input layer contains M feature-values (neurons) and the second layer contains N neurons. To compute the values of the N neurons, we need to utilize N TCD-MACs (each for M+1 cycles). If the number of available TCD-MACs is smaller than N, the computation of the neurons in the second layer should be unrolled into multiple rolls (rounds). If the number of available TCD-MACs is larger than the number of neurons in the second layer (for small models), we can simultaneously process multiple batches (of the model) to increase the NPE utilization. Note that the size of the input layer (M) does not affect the number of needed TCD-MACs, but dictates how many cycles (M+1) are needed for the computation of each neuron.
When mapping a batch of MLP to the PE-array, we should decide how the computation is unrolled: how many batches (K) and how many output neurons (N) should be mapped to the PE-array in each roll. The optimal choice results in the least number of rolls and the maximum utilization of the NPE. To illustrate the trade-offs in choosing the value of (K, N), let us consider a PE-array of size 18, arranged in 6 rows and 3 columns of TCD-MACs (similar to that in Fig. \[npe\_architecture\]). We refer to each row of TCD-MACs as a TCD-MAC Group (TG). In our implementation, to reduce NoC complexity, the TCD-MACs within a TG work on computing neurons in the same batch, while different TGs can be assigned to work on the same or different batches. The architecture in Fig. \[npe\_architecture\] has 6 TGs. Let us use NPE(K, N) to denote the choice of using the PE-array to compute N neuron values in each of K batches, where $N\times K=18$. In our example PE-array the following selections of K and N are supported: $(K,N) \in \{(1,18), (2, 9), (3, 6), (6, 3)\}$. The $(9,2)$ and $(18,1)$ configurations are not supported, as the value of N in these configurations is smaller than the TG size of 3. Fig. \[TCD\_NPE\].left shows an abstract view of the TCD-NPE and describes how the weights and input features (from one or more batches) are fed to the TCD-NPE for different choices of K and N. As an example, Fig. \[TCD\_NPE\].(left).A shows that input features from one batch are broadcast to all TGs, while the weights are unicast to each TCD-MAC. Let us represent the input scenario of processing B batches of U neurons in a hidden or output layer of an MLP model with I input features using $\Gamma(B,I,U)$. Fig. \[TCD\_NPE\].(right) shows the NPE status when a $\Gamma(3,I,9)$ model (3 batches of a hidden layer with 9 neurons, each fed from I input neurons) is executed using each of the 4 different NPE(K, N) choices. For example, Fig.
\[TCD\_NPE\].(right).top shows that using configuration NPE(1,18), we process one batch with 18 neurons at a time. In this example, when using this configuration, the NPE is underutilized (50%), as there exist only 9 neurons in each batch. Following a similar argument, the NPE(6,3) arrangement also has 50% utilization. However, the NPE(2,9) and NPE(3,6) arrangements reach 75% utilization (100% for the first roll, and 50% for the second roll); hence either the NPE(2,9) or the NPE(3,6) arrangement is optimal for the $\Gamma(3,I,9)$ problem, as they produce the least number of rolls. Note that the value of I in $\Gamma(3,I,9)$ denotes the number of input features, which dictates the number of cycles for which the NPE(K,N) should be executed.

![Assuming a $6 \times 3$ PE-array of TCD-MACs, the NPE(K, N) could be configured such that (K, N) $\in$ {(1,18), (2,9), (3,6), (6,3)}. This figure illustrates the number of rolls and the utilization when each of the NPE(K,N) configurations is used to run a $\Gamma$(3,I,9) model. Each roll is executed I times.[]{data-label="TCD_NPE"}](figures/modes3.png){width="0.85\columnwidth"}

### **Mapping Unit**

An MLP has one or more hidden layers and can be represented using Model($I-H_1-H_2-...-H_N-O$), in which $I$ is the number of input features, $H_i$ is the number of neurons in hidden layer $i$, and $O$ is the number of output layer neurons. The role of the mapping unit is to find the best unrolling scenario for mapping the sequence of problems $\Gamma(B,I, H_1)$, $\Gamma(B,H_1, H_2)$, ..., $\Gamma(B,H_{N-1}, H_N)$, and $\Gamma(B, H_N,O)$ into the minimum number of NPE(K,N) computational rounds. Algorithm \[mapper\_algorithm\] describes the mapper function for unrolling a multi-batch multi-layer MLP problem. In this algorithm, B is the batch size that can fit in the NPE’s feature-memory (if larger, we can unroll B into N $\times$ B\* computation rounds, where B\* is the number of batches that fit in the memory).
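The roll counts and utilization figures quoted above follow from a short calculation. The helper below (our own naming, not part of the NPE) enumerates the supported NPE(K, N) splits and scores them for a $\Gamma(B,I,U)$ problem:

```python
import math

def npe_configs(pe=18, tg=3):
    """Supported NPE(K, N) splits of a `pe`-element array with TG rows
    of size `tg` (values of N below the TG size are not supported)."""
    return [(pe // n, n) for n in range(tg, pe + 1) if pe % n == 0]

def rolls_and_util(B, U, K, N):
    """Rolls and average utilization for Gamma(B, I, U) on NPE(K, N)."""
    rolls = math.ceil(B / K) * math.ceil(U / N)
    return rolls, (B * U) / (rolls * K * N)

# Gamma(3, I, 9) on the 6x3 example array:
for K, N in npe_configs():
    print((K, N), rolls_and_util(3, 9, K, N))
# NPE(1,18) and NPE(6,3): 3 rolls at 50%; NPE(2,9) and NPE(3,6): 2 rolls at 75%
```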
$M[L]$ is the MLP layer size information, where $M[i]$ is the number of nodes in layer $i$ (with $i=0$ being the input and $i=N+1$ the output layer; all others are hidden layers). The algorithm schedules a sequence of NPE(K, N) events to compute each MLP layer across all batches.

**Mapper**(B, M[L]):

1. $Tree_{head} \gets CreateTree(B, M[l])$
2. $Exec_{Tree} \gets$ shallowest binary tree (least rolls) from $Tree_{head}$
3. **Schedule** $\gets$ schedule computational events by using BFS on $Exec_{Tree}$ to report NPE(K,N) and $r$ at each node
4. return **Schedule**

**CreateTree**(B, $\Theta$):

1. $C[i] \gets$ each $(K_i, N_i)$ with $K_i, N_i \in \mathbb{N}$, $K_i \le B$, and $size(NPE) = K_i\times N_i$
2. $M_B \gets min(B, C[i][1])$; $M_{\Theta} \gets min(\Theta, C[i][2])$; $\psi \gets (M_B, M_{\Theta})$
3. $r \gets \lfloor B/M_B \rfloor \times \lfloor \Theta/M_{\Theta} \rfloor$
4. $Node_B \gets$ CreateTree($B \% M_B, \,\, \Theta$)
5. $Node_{\Theta} \gets$ CreateTree($B-B\%M_B, \,\, \Theta\%M_{\Theta}$)
6. **Node** $\gets$ $createNode(r, \psi, Node_B, Node_{\Theta})$
7. return **Node**

![image](figures/mapping2.png){width="2.0\columnwidth"}

To schedule the sequence of events, Alg. \[mapper\_algorithm\] first generates the expanded computational tree of the NPE using the $CreateTree$ procedure. This procedure first finds all possible ways in which the NPE could be segmented for processing N neurons of K batches, where $K \leq B$, and stores them in the configuration database C. Then, for each configuration NPE(K, N), it derives how many rounds (r) of NPE(K, N) computations can be executed. It then computes a) the number of remaining batches (with no computation) and b) the number of missing neurons in partially computed batches.
It then creates a tree-node with 4 major fields: 1) the load-configuration $\Psi(K_i^*,N_i^*)$ that is used to partially compute the model using the selected NPE($K_i, N_i$), such that $(K_i^* \leq K_i) \& (N_i^* \leq N_i)$, 2) the number of rounds (rolls) $r$ taken with computational configuration $\Psi$ to reach that node, 3) a pointer to a new problem $Node_B$ that specifies the number of remaining batches (with no computation), and 4) a pointer to a new problem $Node_{\Theta}$ for the partially computed batches. The $CreateTree$ procedure is then recursively called on each of $Node_B$ and $Node_{\Theta}$ until the number of batches left and the partial computation left in a (leaf) node are zero, at which point the procedure returns. After computing the computational tree, the mapper extracts the best execution tree by finding a binary tree with the least number of rolls (where all leaf nodes have zero computation left). The number of rolls is computed by summing up the $r$ fields of all computational nodes. Finally, the mapper uses a Breadth First Search (BFS) on the execution tree ($Exec_{Tree}$) and reports the sequence of $r \times $NPE(K, N) for processing the entire binary execution tree. The reported sequence is the optimal execution schedule. Fig. \[mapping\] provides an example of executing 5 batches of a hidden MLP layer with 7 neurons. As illustrated, the computation tree (Fig. \[mapping\].A) is first generated, and then the optimal binary execution tree (Fig. \[mapping\].B), resulting in the minimum number of rolls, is extracted. Fig. \[mapping\].C captures the result of the scheduling step, where the BFS schedules the sequence of $r \times $NPE(K, N) events.

### **Controller**

The controller is an FSM that receives the “Schedule” from the Mapper and generates the appropriate control signals to drive the proper OS data flow for executing the scheduled sequence of events.
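The recursion in the mapper can be sketched as a small dynamic program. The version below is our simplified reading of Algorithm \[mapper\_algorithm\]: it tracks only roll counts and the chosen NPE(K, N) events, not the full tree-node bookkeeping, and it ignores I, which merely scales each roll's cycle count:

```python
from functools import lru_cache

def schedule(B, U, pe=18, tg=3):
    """Return (min rolls, event list) for B batches x U neurons
    (Gamma(B, I, U)) on a `pe`-element array with TG size `tg`."""
    configs = [(pe // n, n) for n in range(tg, pe + 1) if pe % n == 0]

    @lru_cache(maxsize=None)
    def best(b, u):
        if b == 0 or u == 0:
            return 0, ()
        options = []
        for K, N in configs:
            k, n = min(K, b), min(N, u)        # effective load Psi
            r = (b // k) * (u // n)            # full rounds at this config
            rb, sb = best(b % k, u)            # Node_B: untouched batches
            ru, su = best(b - b % k, u % n)    # Node_Theta: partial batches
            options.append((r + rb + ru, ((r, (K, N)),) + sb + su))
        return min(options, key=lambda t: t[0])

    return best(B, U)

rolls, events = schedule(5, 7)   # 5 batches of a 7-neuron hidden layer
```

For this example the sketch settles on a 3-roll schedule; two rolls are impossible here, since no supported configuration covers 5 batches of 7 neurons in two passes.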
![image](figures/buffers3.png){width="2\columnwidth"}

### **Memory Architecture**

The NPE global memory is divided into the feature-map memory (FM-Mem) and the filter weight memory (W-Mem). The FM-Mem consists of two memories with a ping-pong style of access, where the input features are read from one memory, and the output neurons for the next NN layer are written to the other. When working with multiple batches (B), the input features from the largest number of fitting batches (B\*) are read into the feature memory. For simplicity, we have assumed that the feature memory is large enough to hold the features (neurons) of the largest layer of at least one MLP (usually the input layer). Note that the NPE can still be used if this assumption is violated; however, some of the computed neuron values then have to be transferred back and forth between the main memory (DRAM) and the FM-Mem for lack of space. The filter memory is a single memory that is filled with the filter weights for the layer of interest. The transfer of data from the main memory (DRAM) to the W-Mem and FM-Mem is regulated using Run Length Coding (RLC) compression to reduce data transfer size and energy. The arrangement of features and weights inside the FM-Mem and W-Mem is shown in Fig. \[mem\_access\_fig\]. The data storage philosophy is to sequentially store the data (weights and input features) needed by the NPE (according to its configuration) in consecutive cycles in a single row. This data reshaping solution allows us to reduce the number of memory accesses by reading one row at a time into a buffer, and then consuming the data in the buffer over the next few cycles. We explain this data arrangement concept using the example shown in Fig. \[mem\_access\_fig\], which shows the arrangement of data when we use our proposed TCD-NPE in the NPE(K,N)=(2,64) configuration to process $B=2$ batches of a hidden layer of an MLP model as defined by $\Gamma(B,I,H)=(2,200,100)$.
Note that the PE-array size in this case is $16 \times 8$, which is divided into two $8 \times 8$ arrays for processing each of the 2 batches. The W-Mem, shown on the left, is filled by storing the first N=64 weights of the outgoing edges from each input neuron (feature) to the neurons in the hidden layer. Considering that the width of the W-Mem is 256 bytes and each weight is 2 bytes, the width of the W-Mem ($W_{W-mem}$) is 128 words. Hence, we can store 64 weights of the outgoing edges from each of 2 input neurons in one row. The memory-write process is repeated for $\lceil I/(W_{W-mem}/N)\rceil=100$ rows, and then the next $N=64$ weights of the outgoing edges from each input neuron are written in the next $\lceil I/(W_{W-mem}/N)\rceil=100$ rows (in this case we only have 36 weights left, as there exist a total of 100 outgoing edges from each input neuron, 64 of which were previously stored). At processing time, using the NPE(2,64) configuration, the TCD-NPE consumes $N=64$ weights in each cycle. Hence, with one read from the W-Mem, it receives the weights needed for $W_{W-mem}/N=128/64=2$ cycles, reducing the number of memory accesses by half. The FM memory, on the other hand, is divided into $B=2$ segments. Assuming that the width of the FM memory is $W_{FM-mem}=64$ words, each segment can store $W_{FM-mem}/B=64/2=32$ input features. The memory, as shown in Fig. \[mem\_access\_fig\], is filled by writing the input features of each batch into subsequent rows of each virtually segmented memory. Note that both the FM-Mem and the W-Mem should be word-writable to support writing to a section of a row without changing the value of other memory bits in the same row. The input features from each batch are written to $\lceil I/(W_{FM-mem}/B)\rceil= \lceil 200/(64/2)\rceil=7$ rows. At processing time, using the NPE(2,64) configuration, the TCD-NPE in one access (reading one row) receives $W_{FM-mem}/B$ input features from each of the $B$ batches and stores them in a buffer.
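The row counts and access savings in this example reduce to a few ceiling divisions; a quick check (the variable names are ours):

```python
import math

I, B, N = 200, 2, 64       # Gamma(2, 200, 100) mapped as NPE(2, 64)
W_wmem, W_fmmem = 128, 64  # row widths of W-Mem and FM-Mem, in words

w_rows  = math.ceil(I / (W_wmem // N))    # rows per group of N weights
fm_rows = math.ceil(I / (W_fmmem // B))   # feature rows per batch segment
w_saving, fm_saving = W_wmem // N, W_fmmem // B  # cycles served per read

assert (w_rows, fm_rows, w_saving, fm_saving) == (100, 7, 2, 32)
```

One W-Mem read thus feeds 2 cycles and one FM-Mem read feeds 32 cycles, which is where the claimed reduction in memory accesses comes from.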
In each subsequent cycle, it consumes one input from each batch; hence, the arrangement of the data and the sequential read of data into a buffer reduce the number of memory accesses by a factor of $W_{FM-mem}/B=64/2=32$.

### **Local Distribution Network (LDN)**

The Local Distribution Networks (LDN) interface the read/write buffers with the Network on Chip (NoC). They manage the desired multi- or uni-casting scenarios required for distributing the filter values and feature values across TGs. Figure \[ldn\] illustrates an example of the LDNs in an NPE constructed using a $6 \times 3$ array of TCD-MACs. As illustrated in this example, the LDNs are used for 1) reading/writing from/to the buffers of the FM-mem while supporting the desired multi-/uni-casting configuration (generated by the controller) to support the selected NPE(K, N) configuration (Fig. \[ldn\].A), and 2) reading from the W-mem buffer and multi-/uni-casting the result to the TGs (Fig. \[ldn\].B). Note that the LDN in Fig. \[ldn\] is specific to an NPE of size $6 \times 3$; for other array sizes, a similar LDN should be constructed.

![An example of LDN for managing the connection between a ($6 \times 3$)-PE-array’s NoC and memory. (A).left: LDN for writing from the NoC data bus to FM-mem. (A).right: LDN for reading from FM-mem to the NoC bus. (B): LDN for reading from W-mem into the NoC filter bus. The FM-mem in this case is divided into 6 partitions, supporting the simultaneous processing of 6 batches at a time.[]{data-label="ldn"}](figures/ldn2.png){width="0.8\columnwidth"}

Results {#result_section}
=======

In this section, we first evaluate the Power, Performance, and Area (PPA) gain of using the TCD-MAC, and then evaluate the impact of using the TCD-MAC in our proposed TCD-NPE. The TCD-MAC and all MACs evaluated in this section operate on signed 16-bit fixed-point inputs.

Evaluation and Comparison Framework {#framwwork}
-----------------------------------

The PPA metrics are extracted from the post-layout simulation of each design.
Each MAC is designed in VHDL, synthesized using Synopsys Design Compiler [@dc] with 32nm standard cell libraries, and subjected to physical design (targeting max frequency) using the Synopsys reference flow in IC Compiler [@icc]. The area and delay metrics are reported using Synopsys PrimeTime [@pt]. The reported power is the power averaged across 20K cycles of simulation with random input data, fed to PrimeTime PX [@pt] in FSDB format. The general structure of the MACs used for comparison is captured in Fig. \[GV\_TCD-MAC\]. We have compared our solution to a wide array of MACs. In these MACs, for multiplication we used Booth-Radix-N (BRx2, BRx4, BRx8) and Wallace implementations, and for addition we used Brent-Kung (BK) and Kogge-Stone (KS) adders. Each MAC is identified by the tuple (multiplier choice, adder choice).

\[TCD\_mac\_PPE\]

TCD-MAC PPA assessment {#MAC_PPA}
----------------------

Table \[TCD\_mac\_PPE\] captures the PPA comparison of the TCD-MAC against a popular set of conventional MAC configurations. As reported, the TCD-MAC has a smaller overall area, power, and delay compared to all reported MACs. Using the TCD-MAC provides a 23% to 40% reduction in area, a 4% to 31% improvement in power, and an impressive 46% to 62% improvement in PDP when compared to the other reported conventional MACs. Note that this improvement comes with the limitation that the TCD-MAC takes one extra cycle to generate the correct output when working on a stream of data. However, the power and delay savings of the TCD-MAC significantly outweigh the delay and power of one extra computational cycle. To illustrate this, the throughput and energy improvement of using a TCD-MAC for processing input streams of different sizes (1, 10, 100, 1000) is compared against selected conventional MACs in Table \[impCAC\_MAC\].
As illustrated, when using the TCD-MAC for processing an array of inputs, the power and delay savings quickly outweigh the delay and power of the added cycle as the input stream size increases.

\[impCAC\_MAC\]

TCD-NPE Evaluation
------------------

In this section, we describe the results of our TCD-NPE implementation as described in section \[TCD\_NPE\_section\]. Table \[TCD\_NPE\_spec\]-top summarizes the characteristics of the implemented TCD-NPE, the results of which are reported and discussed in this section. For physical implementation, we have divided the TCD-NPE into two voltage domains, one for the memories and one for the PE-array. This allows us to scale down the voltage of the memories, as they have considerably shorter cycle times than the PE elements. This choice also reduces the energy consumption of the memories and highlights the savings resulting from the choice of MAC in the PE-array. Note that the scaling of the memory voltage could be even more aggressive than what is implemented in our solution; in several prior works [@sasan2; @sasan3; @sasan4; @sasan5; @sasan6], it was shown that it is possible to significantly reduce the read/write/retention power consumption of a memory unit by aggressively scaling its supply voltage while deploying architectural fault-tolerance techniques to mitigate the increase in the memory write/read/retention failure rate. On top of that, learning solutions are approximate in nature and inherently less sensitive to small disturbances in their input features. This inherent resiliency could be used to deploy fault-tolerance techniques that only protect against bit errors in the most significant bits of the input feature map, reducing the complexity of the deployed fault-tolerance scheme.
Table \[TCD\_NPE\_spec\]-bottom captures the overall PPA of the implemented TCD-NPE, extracted from our post-layout simulation results, which are reported for a typical process at 85$^{\circ}$C temperature, with the PE-array and memory element voltages set according to Table \[TCD\_NPE\_spec\].

\[TCD\_NPE\_spec\]

To assess the effectiveness of the TCD-NPE, we compared its performance with that of a similar NPE composed of conventional MACs. According to the discussion in section \[related\_work\], we limit our evaluation to the processing of MLP models; hence, the only viable data flows are OS and NLR. The TCD-MAC only supports OS; however, by replacing the TCD-MAC with a conventional MAC, we can also compare our solution against OS and NLR. We compare the 4 possible data flows illustrated in Fig. \[data\_flow\_comp\]. In this figure, case (A) is the NLR data flow (supported only by conventional MACs), computing the neuron values by forming a systolic array within the PE-array. Case (B) is an NLR data flow variant according to [@tu2015rna], where the computation tree is unrolled and mapped to the PEs, forcing each PE to act as either an adder or a multiplier. Case (C) is the OS data flow realized using conventional MACs. Finally, case (D) is the OS data flow implemented using the TCD-MAC.

![Four possible data flows for processing an MLP model. (A): NLR data flow using conventional MACs to form a systolic array. (B): RNA data flow resulting from unrolling the MLP model and mapping the computation tree to conventional MACs (each used as either a multiplier or an adder) as described in [@tu2015rna]. (C): The OS data flow using conventional MACs. (D): The OS data flow using the TCD-MAC.[]{data-label="data_flow_comp"}](figures/noc2.png){width="0.80\columnwidth"}

For the OS data flows, we used Algorithm \[mapper\_algorithm\] to schedule the sequence of computational rounds. We compared the efficiency of each of the four data flows (described in Fig.
\[data\_flow\_comp\]) on a selection of popular MLP benchmarks, the characteristics of which are described in Table \[benchmark\].

\[benchmark\]

As illustrated in Fig. \[benchmarking\].top, the execution time of the TCD-NPE is almost half that of an NPE that uses a conventional MAC in either the OS or NLR data flow, and significantly smaller than that of the RNA data flow (an NLR variant) proposed in [@tu2015rna]. Fig. \[benchmarking\].bottom captures the energy consumption of the TCD-NPE and compares it with that of a similar NPE constructed using conventional MACs. For each benchmark, the energy consumption is broken into 1) the computation energy of the PE-array, 2) the leakage of the PE-array, 3) the leakage of the memory, and 4) the dynamic energy of the memory (and buffers combined). Note that the voltage of the memory is scaled to a lower voltage, as described in Table \[TCD\_NPE\_spec\]. This choice was made as the cycle time of the PEs was significantly shorter than the memory cycle times. The scaling of the memory voltage increased its associated access time to one cycle; however, it significantly reduced the memory's dynamic and leakage power, making the PE-array the largest energy consumer. In addition, note that by sequentially shaping the data in the memories and using buffers, we significantly reduced the number of required memory accesses, resulting in a significant reduction in the dynamic power consumption of the memories. As illustrated, the TCD-NPE is not only the fastest but also the least energy-consuming solution across all NPE configurations, all data flows, and all simulated benchmarks.

![Comparison of the TCD-NPE with an NPE constructed using conventional MACs that use the OS, NLR, or RNA data flow. Top): Execution time for various MLP benchmarks.
Bottom): Energy consumption for various MLP benchmarks.[]{data-label="benchmarking"}](figures/energy_time_breakdown.pdf){width="0.98\columnwidth"}

Conclusion
==========

In this paper, we introduced the concept of temporal carry bits and used this concept to design a novel MAC for efficient stream processing (TCD-MAC). We further proposed a Neural Processing Engine (TCD-NPE) architected using an array of TCD-MACs as its processing elements, along with a novel scheduler that schedules the sequence of events to process an MLP model in the least number of computational rounds. We reported that the TCD-NPE significantly outperforms similar neural processing solutions constructed using conventional MACs in terms of both energy consumption and execution time (performance).
--- abstract: | Testing whether a set $\mathbf{f}$ of polynomials has an algebraic dependence is a basic problem with several applications. The polynomials are given as algebraic circuits. The algebraic independence testing question is wide open over finite fields (Dvir, Gabizon, Wigderson, FOCS’07). The best complexity known is NP$^{\#\P}$ (Mittmann, Saxena, Scheiblechner, Trans.AMS’14). In this work we put the problem in AM $\cap$ coAM. In particular, dependence testing is unlikely to be NP-hard and joins the league of problems of “intermediate” complexity, e.g. graph isomorphism & integer factoring. Our proof method is algebro-geometric: estimating the size of the image/preimage of the polynomial map $\mathbf{f}$ over the finite field. A [*gap*]{} in this size is utilized in the AM protocols. Next, we study the open question of testing whether every annihilator of $\mathbf{f}$ has zero constant term (Kayal, CCC’09). We give a geometric characterization using the Zariski closure of the image of $\mathbf{f}$, introducing a new problem called [*approximate*]{} polynomials satisfiability (APS). We show that APS is NP-hard and, using projective algebraic-geometry ideas, we put APS in PSPACE (the prior best was EXPSPACE via Gröbner basis computation). As an unexpected application of this to approximative complexity theory we get: over [*any*]{} field, a hitting-set for $\overline{\rm VP}$ can be designed in PSPACE. This solves an open problem posed in (Mulmuley, FOCS’12, J.AMS 2017), greatly mitigating the GCT Chasm (exponentially in terms of space complexity). author: - 'Zeyu Guo [^1]' - 'Nitin Saxena [^2]' - 'Amit Sinhababu [^3]' bibliography: - 'reference.bib' title: '**Algebraic dependencies and PSPACE algorithms in approximative complexity**' --- [**1998 ACM Classification:**]{} I.1 Symbolic and Algebraic Manipulation, F.2.1 Numerical Algorithms and Problems, F.1.3 Complexity Measures and Classes, G.1.2 Approximation.
[**Keywords:**]{} algebraic dependence, Jacobian, Arthur-Merlin, approximate polynomial, satisfiability, hitting-set, border VP, finite field, PSPACE, EXPSPACE, GCT Chasm. Introduction ============ Algebraic dependence is a generalization of linear dependence. Polynomials $f_1,\ldots ,f_m \in {\mathbb{F}}[x_1, \ldots ,x_n]$ are called *algebraically dependent* over field ${\mathbb{F}}$ if there exists a nonzero polynomial (called *annihilator*) $A(y_1,\ldots ,y_m)\in {\mathbb{F}}[y_1,\ldots ,y_m]$ such that $A(f_1,\ldots,f_m)=0$. If no $A$ exists, then the given polynomials are called *algebraically independent* over ${\mathbb{F}}$. The *transcendence degree* (trdeg) of a set of polynomials is the analog of rank in linear algebra. It is defined as the maximal number of algebraically independent polynomials in the set. Both algebraic dependence and linear dependence share combinatorial properties of the matroid structure [@ehrenborg1993apolarity]. The algebraic matroid examples may not be linear (esp. over ${\mathbb{F}}_p$) [@ingleton1971representation]. The simplest examples of algebraically independent polynomials are $x_1,\ldots,x_n \in {\mathbb{F}}[x_1, \ldots$, $x_n] $. As an example of algebraically dependent polynomials, we can take $f_1=x$, $f_2=y$ and $f_3=x^2+y^2$. Then, $y_1^2 + y_2^2-y_3$ is an annihilator. The underlying field is crucial in this concept. For example, polynomials $x+y$ and $x^p + y^p$ are algebraically dependent over $\mathbb{F}_p$, but algebraically independent over $\mathbb{Q}$. Thus, the following computational question [*AD(${\mathbb{F}}$)*]{} is natural and it is the first problem we consider in this paper: Given algebraic circuits $f_1,\ldots ,f_m \in {\mathbb{F}}[x_1, \ldots ,x_n]$, test if they are algebraically dependent. It can be solved in PSPACE using a classical result due to Perron [@Per27; @Plo05; @C76]. 
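The toy example above is easy to check mechanically. The sketch below (plain Python, with polynomials in $x,y$ stored as sparse coefficient dictionaries; all helper names are ours) verifies that $y_1^2+y_2^2-y_3$ annihilates $(x,\, y,\, x^2+y^2)$:

```python
def pmul(p, q):
    """Multiply polynomials stored as {(deg_x, deg_y): coeff}."""
    r = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            r[(i + k, j + l)] = r.get((i + k, j + l), 0) + a * b
    return {m: c for m, c in r.items() if c}

def padd(p, q):
    """Add polynomials in the same sparse representation."""
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
    return {m: c for m, c in r.items() if c}

f1 = {(1, 0): 1}             # x
f2 = {(0, 1): 1}             # y
f3 = {(2, 0): 1, (0, 2): 1}  # x^2 + y^2

# A(y1, y2, y3) = y1^2 + y2^2 - y3, evaluated at (f1, f2, f3):
A_of_f = padd(padd(pmul(f1, f1), pmul(f2, f2)),
              {m: -c for m, c in f3.items()})
assert A_of_f == {}          # identically zero: f1, f2, f3 are dependent
```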
Perron proved that given a set of algebraically dependent polynomials, there exists an annihilator whose degree is upper bounded by the product of the degrees of the polynomials in the set. This exponential degree bound on the annihilator is tight [@Kay09]. The annihilator may be quite hard to compute, but it turns out that the decision version is easy over zero (or large) characteristic, using a classical result known as the Jacobian criterion [@Jac41; @BMS13]. The Jacobian efficiently reduces algebraic dependence testing of $f_1,\ldots,f_m$ over ${\mathbb{F}}$ to linear dependence testing of the differentials $df_1,\ldots,df_m$ over ${\mathbb{F}}(x_1,\ldots,x_n)$, where we view $df_i$ as the vector $( \frac{\partial f_i} {\partial x_1},\ldots, \frac{\partial f_i} {\partial x_n})$. Placing $df_i$ as the $i$-th row gives us the Jacobian matrix $J$ of $f_1,\ldots,f_m$. If the characteristic of the field is zero (or larger than the product of the degrees $\deg(f_i)$), then the trdeg equals ${\text{rank}}(J)$. It follows from [@Sch80] that, with high probability, ${\text{rank}}(J)$ is equal to the rank of $J$ evaluated at a random point in ${\mathbb{F}}^n$. This gives a simple randomized polynomial time algorithm solving AD(${\mathbb{F}}$) for such ${\mathbb{F}}$. For fields of positive characteristic, if the polynomials are algebraically dependent, then their Jacobian matrix does not have full rank. But the converse is not true: there are infinitely many input instances (sets of algebraically independent polynomials) for which the Jacobian criterion fails. The failure can be characterized by the notion of ‘inseparable extension’ [@pandey2016algebraic]. For example, $x^p, y^p$ are algebraically independent over $\mathbb{F}_p$, yet their Jacobian determinant vanishes. Another example is $\{ x^{p-1}y, xy^{p-1} \}$ over $\mathbb{F}_p$, for prime $p>2$. 
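Both phenomena are easy to reproduce symbolically; here is a `sympy` sketch of the Jacobian-rank criterion in characteristic zero and of its failure for $\{x^p, y^p\}$ over $\mathbb{F}_p$ (the concrete polynomials are the text's examples).

```python
import sympy as sp

x, y = sp.symbols('x y')
p = 3

# Characteristic 0: {x, y, x^2 + y^2} has trdeg 2, matching rank(J) = 2
J = sp.Matrix([[sp.diff(f, v) for v in (x, y)] for f in (x, y, x**2 + y**2)])
assert J.rank() == 2

# Characteristic p: {x^p, y^p} are algebraically independent over F_p,
# yet every entry of the Jacobian is p * x^(p-1) or p * y^(p-1) or 0,
# so the whole matrix vanishes mod p and the rank drops to 0.
Jp = sp.Matrix([[sp.diff(f, v) for v in (x, y)] for f in (x**p, y**p)])
assert all(sp.Poly(e, x, y, modulus=p).is_zero for e in Jp)
```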
[@mittmann2014algebraic] gave a criterion, called Witt-Jacobian, that works over fields of prime characteristic $p$; improving the complexity of the independence testing problem from PSPACE to NP$^{\#\P}$. [@pandey2016algebraic] gave another generalization of the Jacobian criterion that is efficient in special cases. Given that an efficient algorithm to tackle prime characteristic is not in sight, one could speculate the problem to be NP-hard or even outside the polynomial hierarchy PH. In this work we show that: [*For finite fields, AD(${\mathbb{F}}$) is in AM $\cap$ coAM*]{} (Theorem \[thm\_amcoam\]). This rules out the possibility of NP-hardness, under standard complexity theory assumptions [@AB09]. **Constant term of the annihilators.** We come to the second problem [*AnnAtZero*]{} that we discuss in this paper: Testing if the constant term of [*every*]{} annihilator, of the set of algebraic circuits $\mathbf{f}=\{f_1,\ldots,f_m\}$, is zero. Note that the annihilators of $\mathbf{f}$ constitute an ideal of the polynomial ring ${\mathbb{F}}[y_1,\ldots,y_m]$; this ideal is principal when the trdeg of $\mathbf{f}$ is $m-1$ [@Kay09 Lem.7]. In this case, we can decide in PSPACE whether the constant term of the minimal annihilator is zero, as the [*unique*]{} annihilator (up to scaling) can be computed in PSPACE. If the trdeg of $\mathbf{f}$ is less than $m-1$, the ideal of the annihilators of $\mathbf{f}$ is no longer principal. Although the ideal is finitely generated, finding the generators of this ideal is computationally very hard. (Eg. using Gröbner basis techniques, we can do it in EXPSPACE [@derksen2015computational Sec.1.2.1].) In this case, can we decide if all the annihilators of $\mathbf{f}$ have constant term zero? [*We give two equivalent characterizations of AnnAtZero– one geometric and the other algebraic –using which we devise a PSPACE algorithm to solve it in all cases*]{} (Theorem \[thm-aps\]). 
Interestingly, there is an algebraic-complexity application of the above algorithm. [*We give a PSPACE-explicit construction of a hitting-set of the class $\overline{\rm VP}_{{\overline{{\mathbb{F}}}}_q}$*]{} (Theorem \[thm-hsg\]). $\overline{\rm VP}_{{\overline{{\mathbb{F}}}}_q}$ consists of $n$-variate degree $d=n^{O(1)}$ polynomials, over the field ${\overline{{\mathbb{F}}}}_q$, that can be ‘infinitesimally approximated’ by size $s=n^{O(1)}$ algebraic circuits. This problem is interesting as natural questions like explicit construction of the normalization map (in Noether’s Normalization Lemma NNL) reduce to the construction of a hitting-set of $\overline{\rm VP}$ [@mulmuley2017geometric]; which was previously known to be only in EXPSPACE [@mulmuley2017geometric; @mul12]. This was recently improved greatly, over the field $\mathbb{C}$, by [@forbes2017pspace]. Their proof technique uses real analysis and does not apply to finite fields. We need to develop purely algebraic concepts to solve the finite field case (namely through AnnAtZero), which then apply to [*any*]{} field. To further motivate the concept of algebraic dependence, we list a few recent problems in computer science. The first problem is about constructing an explicit randomness extractor for sources which are polynomial maps over finite fields. Using Jacobian criterion, [@DGW09; @bib:Dvi09] solved the problem for fields with large characteristic. The second application is in the famous polynomial identity testing (PIT) problem. To efficiently design hitting-sets, for some interesting models, [@BMS13; @ASSS11; @KS15] constructed a family of trdeg-preserving maps. For more background and applications of algebraic dependence testing, see [@pandey2016algebraic]. The annihilator has been a key concept to prove the connection between hitting-sets and lower bounds [@heintz1980testing], and in bootstrapping ‘weak’ hitting-sets [@AGS17]. 
Our results {#contrib} ----------- In this paper, we give Arthur-Merlin protocols & algorithms, with proofs using basic tools from algebraic geometry. The first theorem we prove is about AD(${\mathbb{F}}_q$). \[thm\_amcoam\] Algebraic dependence testing of circuits in ${\mathbb{F}}_q[\mathbf{x}]$ is in AM $\cap$ coAM. This result vastly improves the current best upper bound known for AD(${\mathbb{F}}_q$)– from being ‘outside’ the polynomial hierarchy (namely NP$^{\#\P}$ [@mittmann2014algebraic]) to ‘lower’ than the second-level of the polynomial hierarchy (namely AM $\cap$ coAM). This rules out the possibility of AD(${\mathbb{F}}_q$) being NP-hard (unless the polynomial hierarchy collapses to the second-level [@AB09]). Recall that, for zero or large characteristic ${\mathbb{F}}$, AD(${\mathbb{F}}$) is in coRP (Section \[sec-prelim\]). We conjecture such a result for AD(${\mathbb{F}}_q$) too. Our second result is about the problem AnnAtZero (i.e. testing whether all the annihilators of a given $\mathbf{f}$ have constant term zero). A priori it is unclear why it should have complexity better than EXPSPACE (note: ideal membership is EXPSPACE-complete [@mayr1982complexity]). First, we relate it to a (new) version of polynomial system satisfiability, over the algebraic closure $\overline{{\mathbb{F}}}$: Given algebraic circuits $f_1,\ldots,f_m \in {\mathbb{F}}[x_1,\ldots,x_n]$, does there exist $\mathbf{\beta} \in \overline{{\mathbb{F}}}({\varepsilon})^n$ such that for all $i$, $f_i(\mathbf{\beta})$ is in the ideal ${\varepsilon}\overline{{\mathbb{F}}}[{\varepsilon}] $? If yes, then we say that $\mathbf{f}:= \{f_1,\ldots,f_m\}$ is in APS. 
It is easy to show: Function field ${\overline{{\mathbb{F}}}}({\varepsilon})$ here can be equivalently replaced by [*Laurent polynomials*]{} $\overline{{\mathbb{F}}}[{\varepsilon}, {\varepsilon}^{-1}]$, or, the field ${\overline{{\mathbb{F}}}}(({\varepsilon}))$ of [*formal Laurent series*]{} (use mod ${\varepsilon}\overline{{\mathbb{F}}}[{\varepsilon}]$). A reason why these objects appear in algebraic complexity can be found in [@burgisser2004complexity Sec.5.2] & [@LL89 Sec.5]. They help algebrize the notion of ‘infinitesimal approximation’ (in real analysis think of ${\varepsilon}\rightarrow0$ & $1/{\varepsilon}\rightarrow\infty$). A notable computational issue involved is that the degree bound of ${\varepsilon}$ required for $\beta$ is exponential in the input size [@LL89 Prop.3]; this may again be a “justification” for APS requiring that much space. Classically, the [*exact*]{} version of APS has been extremely well-studied– Does there exist $\mathbf{\beta} \in \overline{{\mathbb{F}}}^n$ such that for all $i$, $f_i(\mathbf{\beta})=0 $? This is what Hilbert’s Nullstellensatz (HN) characterizes and yields an impressive PSPACE algorithm [@koiran1996hilbert; @kollar1988sharp]. Note that if system $\mathbf{f}$ has an exact solution, then it is trivially in APS. But the converse is not true. For example, $\{x, xy-1\}$ is in APS, but there is no exact solution in $\overline{{\mathbb{F}}}$. To see the former, assign $x={\varepsilon}$ and $y= 1/{\varepsilon}$. Also, the instance $\{x,x+1\}$ is neither in APS nor has an exact solution. Finally, note that if we restrict $ \mathbf{\beta}$ to come from $\overline{{\mathbb{F}}}[{\varepsilon}]^n$ then APS becomes equivalent to exact satisfiability and HN applies. This can be seen by going modulo ${\varepsilon}\overline{{\mathbb{F}}}[{\varepsilon}]$, as the quotient $\overline{{\mathbb{F}}}[{\varepsilon}]/{\varepsilon}\overline{{\mathbb{F}}}[{\varepsilon}]$ is $\overline{{\mathbb{F}}}$. 
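The $\{x,\, xy-1\}$ example above is easy to verify symbolically; a small sketch with $\varepsilon$ as a formal `sympy` symbol (the assignments are exactly those from the text):

```python
import sympy as sp

x, y, eps = sp.symbols('x y epsilon')

# {x, x*y - 1}: no exact common zero over the algebraic closure, but the
# approximate assignment x = eps, y = 1/eps works -- the values are eps
# and 0, both lying in the ideal eps*F[eps].
f1, f2 = x, x * y - 1
v1 = f1.subs({x: eps, y: 1 / eps})
v2 = sp.simplify(f2.subs({x: eps, y: 1 / eps}))
assert v1 == eps and v2 == 0

# {x, x + 1} is neither in APS nor exactly satisfiable: x would have to
# lie in eps*F[eps] and in -1 + eps*F[eps] simultaneously, impossible.
```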
Coming back to AnnAtZero, we show that it is equivalent both to a geometric question and to deciding APS. This gives us, with more work, the following surprising consequence. \[thm-aps\] APS is NP-hard and is in PSPACE. We apply this to design hitting-sets and to solve NNL (refer to [@mulmuley2017geometric] for the background). \[thm-hsg\] There is a PSPACE algorithm that (given input $n,s,r$ in unary & suitably large ${\mathbb{F}}_q$) outputs a set, of points from ${\mathbb{F}}_q ^n$ of size poly$(nsr, \log q)$, that hits all $n$-variate degree-$r$ polynomials over ${\overline{{\mathbb{F}}}}_q $ that can be infinitesimally approximated by size $s$ circuits. [**More applications?**]{} The exact polynomials satisfiability question HN (over ${\overline{{\mathbb{F}}}}$) is highly expressive and, naturally, most computer science problems can be expressed that way. We claim that, in a similar spirit, the APS question expresses those computer science problems that involve ‘infinitesimal approximation’. One prominent example is the concept of [*border rank*]{} of tensor polynomials (used in matrix multiplication algorithms and GCT, see [@burgisser2013algebraic; @landsberg2012tensors; @le2014powers]). Border rank computation of a given tensor (over ${\overline{{\mathbb{F}}}}$) can easily be reduced to an APS instance and can, hence, now be solved in PSPACE; this matches the complexity of tensor rank itself [@schaefer2016complexity]. From the point of view of Gröbner basis theory, APS is a problem that seems a priori much harder than HN. Now that both of them have a PSPACE algorithm, one may wonder whether they can be brought all the way down to NP or AM? (In fact, $\text{HN}_{{\mathbb{C}}}$ is known to be in AM, conditionally under GRH [@koiran1996hilbert].) Our methods in the proof of Theorem \[thm-aps\] imply an interesting “degree bound” related to the (prime) ideal $I$ of annihilators of the polynomials $\mathbf f$. 
Namely, $I=\sqrt{I_{\le d}}$, where $I_{\le d}$ refers to the subideal generated by degree $\le d$ polynomials of $I$, $d$ is the Perron-like bound $(\max_{i\in [m]}\deg(f_i))^k$, and $k:= \text{ trdeg}(\mathbf f)$. This is equivalent to the geometric fact, which we prove, that the varieties defined by the two ideals $I$ and $I_{\le d}$ are equal (Theorem \[thm-randReduct\]). This again is an exponential improvement over what one expects to get from the general Gröbner basis methods; because, the generators of $I$ may well have doubly-exponential degree. The hitting-set result (Theorem \[thm-hsg\]) can be applied to compute, in PSPACE, the explicit system of parameters (esop) of the [*invariant ring*]{} of the variety $\Delta[\det,s]$, over ${\overline{{\mathbb{F}}}}_q$, with a given group action [@mulmuley2017geometric Thm.4.9]. Also, we can now construct, in PSPACE, polynomials in ${\mathbb{F}}_q[x_1,\dots,x_n]$ that cannot even be approximated by ‘small’ algebraic circuits. Such results were previously known only for characteristic zero fields, see [@forbes2017pspace Thms.1.1-1.4]. Bringing this complexity down to P is the longstanding problem of blackbox PIT (& lower bounds), see [@Saxena09; @shpilka2010arithmetic; @Saxena13]. Mulmuley [@mul12] pointed out that small hitting-sets for ${\overline{\rm VP}}$ can be designed in EXPSPACE which is a far worse complexity than that for VP. He called it the GCT Chasm. We bridge it somewhat, as the proof of Theorem \[thm-hsg\] shows that small hitting-sets for ${\overline{\rm VP}}_{{\overline{{\mathbb{F}}}}}$ can be designed in PSPACE (like those for VP) for [*any*]{} field ${\mathbb{F}}$. Proof ideas {#idea} ----------- **Proof idea of Theorem \[thm\_amcoam\].** Suppose we are given algebraic circuits $\mathbf{f}:= \{f_1,\ldots,f_m\}$ computing in ${\mathbb{F}}_q[x_1,\dots,x_n]$. 
For the AM and coAM protocols, we consider the following system of equations over a ‘small’ extension ${\mathbb{F}}_{q'}$: For $b=(b_1,\dots,b_n)\in {\mathbb{F}}_{q'}^n$, define the system of equations $f_i(x_1,\dots,x_n)=b_i$, for $i\in [m]$. We denote the number of solutions of the above system in ${\mathbb{F}}_{q'}^n$ by $N_b$. Let $f:{\mathbb{F}}_{q'}^n\to {\mathbb{F}}_{q'}^m$ be the polynomial map $a\mapsto (f_1(a),\dots,f_m(a))$. [*AM gap*]{}. \[Theorem \[thm\_am\]\] We establish bounds for the number $N_{f(a)}$, where $a$ is a random point in ${\mathbb{F}}_{q'}^n$. If $f_1,\dots,f_m$ are independent, we show that $N_{f(a)}$ is relatively small. Whereas, if the polynomials are algebraically dependent, then $N_{f(a)}$ is much larger. Assume $\mathbf{f}$ are algebraically independent. Wlog (see the full version of [@pandey2016algebraic Sec.2]) we can assume that $m=n$ and, for all $i\in[n]$, $\{x_i,f_1,\ldots,f_n\}$ are algebraically dependent. The first step is to show that the zeroset defined by the system of equations, for random $f(a)$, has dimension $\leq 0$. This is proved using the Perron degree bound on the annihilator of $\{x_i,f_1,\ldots,f_n\}$. Next, one can apply an affine version of Bezout’s theorem to upper bound $N_{f(a)}$. On the other hand, suppose $\mathbf{f}$ are algebraically dependent, say with annihilator $Q$. Let ${\mathrm{Im}}(f):=f({\mathbb{F}}_{q'}^n)$ be the image of $f$. Since $Q$ vanishes on ${\mathrm{Im}}(f)$, we know that ${\mathrm{Im}}(f)$ is relatively small, whence we deduce that $N_{f(a)}$ is large for ‘most’ $a$’s. [*coAM gap*]{}. \[Theorem \[thm\_coam\]\] We pick a random point $b=(b_1,\dots,b_m)\in {\mathbb{F}}_{q'}^m$ and bound $N_b$, the number of solutions of the system defined above. In the dependent case, we show that $N_b=0$ for ‘most’ $b$’s. But in the independent case, we show that $N_b\ge1$ for ‘many’ (maybe not ‘most’!) $b$’s. The ideas are based on those sketched above. 
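The fiber-size gap driving both protocols can be observed directly in a toy brute-force experiment over a small prime field. The two maps below are illustrative choices of ours, not taken from the paper: an independent pair whose fibers are tiny, and a dependent pair (annihilated by $Q = y_1^2 - y_2$) whose image is a small curve and whose fibers are large.

```python
from collections import Counter

q = 11  # work over F_11, brute-forcing all q^2 domain points

def fiber_sizes(f):
    """Count |f^{-1}(b)| for every attained image b of f: F_q^2 -> F_q^2."""
    return Counter(f(a, b) for a in range(q) for b in range(q))

# Independent pair (x, y^2 + x): every fiber has at most 2 points,
# since b is determined up to the two square roots of (c - a) mod q.
indep = fiber_sizes(lambda a, b: (a, (b * b + a) % q))
assert max(indep.values()) <= 2

# Dependent pair (x + y, (x+y)^2): the image lies on the curve y2 = y1^2,
# so it is small, and every attained image has exactly q preimages.
dep = fiber_sizes(lambda a, b: ((a + b) % q, ((a + b) ** 2) % q))
assert min(dep.values()) == q
```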
The two kinds of gaps shown above are based on the set $f^{-1}(f(\mathbf x))$ resp. ${\mathrm{Im}}(f)$. Note that membership in either of these sets is testable in NP (the latter requires nondeterminism). Based on this and the gaps between the respective cardinalities, we can invoke Lemma \[lem\_amprotocol\] and devise the AM and coAM protocols for AD(${\mathbb{F}}_{q'}$), which also apply to AD(${\mathbb{F}}_{q}$). [*Remark–*]{} One advantage in our problem is that we could sample a random point in the set ${\mathrm{Im}}(f)$. In contrast, it is not clear how to sample a random point in the zeroset ${\mathrm{Zer}}(\mathbf f):= \{\mathbf x\in {\mathbb{F}}_{q'}^n \,\mid\, f(\mathbf x)= \mathbf 0\}$. Thus, we manage to side-step the NP-hardness associated with most zeroset properties. Eg. computing the dimension of ${\mathrm{Zer}}(\mathbf f)$ is NP-hard. **Proof idea of Theorem \[thm-aps\].** Let algebraic circuits $\mathbf f:= \{f_1,\dots,f_m\}$ in ${\mathbb{F}}[x_1,\dots,x_n]$ be given over a field ${\mathbb{F}}$. We want to determine if the constant term of every annihilator for $\mathbf f$ is zero. Redefine the polynomial map $f:{\overline{{\mathbb{F}}}}^n\to {\overline{{\mathbb{F}}}}^m$; $a\mapsto (f_1(a),\dots,f_m(a))$. For a subset $S$ of an affine (resp. projective) space, write $\overline{S}$ for its [*Zariski closure*]{} in that space, i.e. it is the smallest subset that contains $S$ and equals the zeroset ${\mathrm{Zer}}(I)$ of some polynomial ideal $I$. [*APS vs AnnAtZero.*]{} \[Theorem \[thm\_approx\]\] Now, we interpret the problem AnnAtZero in a geometric way through Lemma \[lem\_geom\]: The constant term of every annihilator of $\mathbf f$ is zero iff the origin point $\mathbf 0\in {\overline{{\mathrm{Im}}(f)}}$. This has a simple proof using the ideal-variety correspondence [@Har92]. 
Note that the stronger condition $\mathbf 0\in{\mathrm{Im}}(f)$ is equivalent to the existence of a common solution to the equations $f_i(x_1,\dots,x_n)$ $=0$, $i=1,\dots,m$. The latter problem (call it HN for Hilbert’s Nullstellensatz) is known to be in AM if ${\mathbb{F}}=\mathbb{Q}$ and GRH is assumed [@koiran1996hilbert]. However, ${\mathrm{Im}}(f)$ is not necessarily Zariski closed; equivalently, it may be strictly smaller than ${\overline{{\mathrm{Im}}(f)}}$. So, we need new ideas to test $\mathbf{0}\in \overline{{\mathrm{Im}}(f)}$. Next, we observe that although $\mathbf{0}\in \overline{{\mathrm{Im}}(f)}$ is not equivalent to the existence of a solution $\mathbf x\in {\overline{{\mathbb{F}}}}^n$ to $f(\mathbf x)= \mathbf 0$, it [*is*]{} equivalent to the existence of an “approximate solution” $\mathbf x\in {\overline{{\mathbb{F}}}}({\varepsilon})^n$, which is an $n$-tuple of rational functions in a formal variable ${\varepsilon}$. The proof of this uses a degree bound on ${\varepsilon}$ due to [@LL89]. We call this problem APS. As the AnnAtZero problem is already known to be NP-hard [@Kay09], APS is also NP-hard. [*Upper bounding APS.*]{} We now know that solving APS for $\mathbf f$ is equivalent to solving AnnAtZero for $\mathbf f$. AnnAtZero was previously known to be in PSPACE in the special case when the trdeg $k$ of ${\mathbb{F}}(\mathbf f)/{\mathbb{F}}$ equals $m$ or $m-1$, but the general case remained open (the best bound being EXPSPACE). In this work we prove that AnnAtZero is in PSPACE even when $k<m-1$. Our simple idea is to reduce the input to a smaller, $m=k+1$, instance by choosing new polynomials $g_1,\dots,g_{k+1}$ that are random linear combinations of the $f_i$’s. We show that, with high probability, replacing $\{f_1,\dots,f_m\}$ by $\{g_1,\dots,g_{k+1}\}$ preserves YES/NO instances as well as the trdeg. This gives a randomized poly-time reduction from the case $k<m-1$ to $k=m-1$ (Theorem \[thm-randReduct\]). The latter has a standard PSPACE algorithm. 
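The trdeg-preservation part of the reduction can be illustrated in a toy instance. The example polynomials below are our own illustrative choices, the coefficients are pseudo-random with a fixed seed, and trdeg is estimated via the Jacobian rank at a random point (valid here since we work over $\mathbb{Q}$); this is only a sketch of the reduction's effect, not the paper's proof.

```python
import random
import sympy as sp

random.seed(7)
x, y = sp.symbols('x y')

# m = 4 polynomials of trdeg k = 2, replaced by k + 1 = 3 random
# linear combinations; with high probability the trdeg is preserved.
f = [x, y, x + y, x * y]
k = 2
g = [sum(random.randrange(1, 10**6) * fi for fi in f) for _ in range(k + 1)]

def trdeg_estimate(polys, variables):
    # Jacobian rank at a random point lower-bounds (and whp equals) trdeg
    J = sp.Matrix([[sp.diff(poly, v) for v in variables] for poly in polys])
    pt = {v: random.randrange(1, 10**6) for v in variables}
    return J.subs(pt).rank()

assert trdeg_estimate(f, (x, y)) == 2
assert trdeg_estimate(g, (x, y)) == 2
```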
For notational convenience view ${\overline{{\mathbb{F}}}}$ as the [*affine line*]{} ${\mathbb{A}}$. Define $V:=\overline{{\mathrm{Im}}(f)}\subseteq {\mathbb{A}}^m$. Proving that the above reduction (of $m$) does preserve YES/NO instances amounts to proving the following geometric statement: If $V$ does not contain the origin $O\in{\mathbb{A}}^m$, then with high probability, the variety $V':=\overline{\pi(V)}$ does not contain the origin $O'\in{\mathbb{A}}^{k+1}$ either, where $\pi:{\mathbb{A}}^m\to {\mathbb{A}}^{k+1}$ is a random linear map. As $\pi$ is picked at random, the kernel $W$ of $\pi$ is a random linear subspace of ${\mathbb{A}}^m$. We have $O'\not\in \pi(V)$ whenever $V\cap W=\emptyset$, but this is not sufficient for proving $O'\not\in \overline{\pi(V)}$, since $V$ may “get arbitrarily close to $W$” in ${\mathbb{A}}^m$ and meet $W$ “at infinity”. Inspired by this observation, we consider projective geometry instead of affine geometry, and prove that $O'\not\in V'$ holds as long as the projective closure of $V$ and that of $W$ are disjoint. The proof uses the construction of a projective subvariety– the [*join*]{} –to characterize $\pi^{-1}(V')$, and eventually rules out $W\subseteq \pi^{-1}(V')$ (Lemma \[lem\_projcriterion\]). Moreover, we show that this holds with high probability if $O\not\in V$: by (repeatedly) using the fact that a generic (=random) hyperplane section reduces the dimension of a variety by one. **Proof idea of Theorem \[thm-hsg\].** Define ${\mathbb{A}}:={\overline{{\mathbb{F}}}}_q$ and assume wlog $q\ge\Omega(sr^{2})$ [@AL86]. [@heintz1980testing Thm.4.4] showed that a hitting-set, of size $h:= O(s^2n^2\log q)$ in ${\mathbb{F}}_q^n$, [*exists*]{} for the class of degree-$r$ polynomials, in ${\mathbb{A}}[x_1,\dots,x_n]$, that can be infinitesimally approximated by size-$s$ algebraic circuits. So, we can search over all possible subsets of size $h$ from ${\mathbb{F}}_q^n$ and ‘most’ of them are hitting-sets. 
How do we certify that a candidate set ${\mathcal{H}}$ is a hitting-set? The idea is to use universal circuits. A [*universal circuit*]{} has $n$ essential variables $\mathbf x=\{x_1,\ldots,x_n\}$ and $s':=O(sr^4)$ auxiliary variables $\mathbf y= \{y_1,\ldots, y_{s'}\}$. We can fix the auxiliary variables, from ${\mathbb{A}}({\varepsilon})$, in such a way that it outputs any homogeneous size-$s$ circuit approximating a degree-$r$ polynomial in ${\overline{\rm VP}}_{\mathbb{A}}$. Given a universal circuit $\Psi$, certification of a hitting-set ${\mathcal{H}}$ is based on the following observation, which follows from the definitions: Candidate set ${\mathcal{H}}=: \{\mathbf v_1,\ldots, \mathbf v_h\}$ is a hitting-set iff   $\forall \mathbf y\in {\mathbb{A}}({\varepsilon})^{s'} $, $\,\Psi(\mathbf{y,x})\notin {\varepsilon}{\mathbb{A}}[{\varepsilon}][\mathbf x] $ $ \,\Rightarrow\, $ $\exists i\in[h]$, $ \Psi(\mathbf y, \mathbf v_i)\notin {\varepsilon}{\mathbb{A}}[{\varepsilon}] $. Equivalently: Candidate set ${\mathcal{H}}= \{\mathbf v_1,\ldots, \mathbf v_h\}$ is [*not*]{} a hitting-set iff   $\exists \mathbf y\in {\mathbb{A}}({\varepsilon})^{s'} $, $\,\Psi(\mathbf{y,x})\notin {\varepsilon}{\mathbb{A}}[{\varepsilon}][\mathbf x]\,$ and $\,\forall i\in[h]$, $ \Psi(\mathbf y, \mathbf v_i)\in {\varepsilon}{\mathbb{A}}[{\varepsilon}] $. Note that this hitting-set certification is more challenging than the one against polynomials in VP, because the degree bounds for ${\varepsilon}$ are exponentially high and, moreover, we do not know how to frame the first ‘non-containment’ condition as an APS instance. To translate it to an APS instance, our key idea is the following. Pick $q\ge\Omega(s'r^2)$ so that a hitting-set exists, in ${\mathbb{F}}_q^n$, that works against polynomials approximated by the specializations of $\Psi$. 
Suppose $\Psi(\mathbf{\alpha,x})$ is not in ${\varepsilon}{\mathbb{A}}[{\varepsilon}][\mathbf x]$, for some $\mathbf \alpha\in {\mathbb{A}}({\varepsilon})^{s'}$. This means that we can write it as $\sum_{-m\le i\le m'}$ ${\varepsilon}^i g_i(\mathbf x)$ with $g_{-m}\ne0$ and $m\ge0$. Clearly, ${\varepsilon}^m\cdot \Psi(\mathbf{\alpha,x})$ infinitesimally approximates the nonzero polynomial $g_{-m} \in {\mathbb{A}}[\mathbf x]$. By the conditions on $\Psi$, we know that $g_{-m}$ is a homogeneous degree-$r$ polynomial (of approximative complexity $s'$). Thus, by [@Sch80], there exists a $\mathbf \beta\in{\mathbb{F}}_q^n$ such that $g_{-m}(\mathbf \beta)=:a$ is a nonzero element of ${\mathbb{A}}$. We can normalize by this and consider $a^{-1}{\varepsilon}^m\cdot \Psi(\mathbf{y,x})$, which evaluates to an element of $1+{\varepsilon}{\mathbb{A}}[{\varepsilon}]$ at $(\mathbf{\alpha,\beta})$. Since this normalization factor only affects the auxiliary variables $\mathbf y$, we get another equivalent criterion: Candidate set ${\mathcal{H}}= \{\mathbf v_1,\ldots, \mathbf v_h\}$ is [*not*]{} a hitting-set iff   $\exists \mathbf y\in {\mathbb{A}}({\varepsilon})^{s'} $ and $\exists \mathbf x\in {\mathbb{F}}_q^n $ such that, $\,\Psi(\mathbf{y,x}) - 1 \in {\varepsilon}{\mathbb{A}}[{\varepsilon}]\,$ and $\,\forall i\in[h]$, $ \Psi(\mathbf y, \mathbf v_i)\in {\varepsilon}{\mathbb{A}}[{\varepsilon}] $. This brings us closer to APS, but how do we implement ‘$\exists \mathbf x\in {\mathbb{F}}_q^n$’ (the search space is exponentially large)? The idea is to rewrite it, instead using the $(r+1)$-th roots of unity $Z_{r+1}\subset{\mathbb{A}}$, as: $\exists \mathbf x\in {\mathbb{A}}({\varepsilon})^n$, $\forall i\in[n]$, $x_i^{r+1}-1 \in {\varepsilon}{\mathbb{A}}[{\varepsilon}]$. This gives us a criterion that is an instance of APS with $n+h+1$ input polynomials (Theorem \[thm\_HITTING\]). By Theorem \[thm-aps\] it can be done in PSPACE; finishing the proof. 
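The normalization step above can be traced on a toy instance. The Laurent expression `Psi` below is a made-up specialization of ours (not an actual universal circuit): its lowest-order term is $g_{-1}=x^2$, so $m=1$, and evaluating $a^{-1}\varepsilon^m\Psi$ at a point $\beta$ with $g_{-1}(\beta)=a\neq0$ lands in $1+\varepsilon\mathbb{A}[\varepsilon]$.

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')

# Toy specialization with a pole in eps: g_{-1} = x^2, so m = 1
Psi = x**2 / eps + 3 * x + eps
m = 1
beta = 2            # a point where g_{-1}(beta) = 4 is nonzero
a = beta**2

# a^{-1} * eps^m * Psi evaluates at x = beta to 1 + (3/2)eps + (1/4)eps^2,
# i.e. an element of 1 + eps*A[eps]; its constant term in eps is 1.
normalized = sp.expand(Psi * eps**m / a).subs(x, beta)
assert normalized.subs(eps, 0) == 1
```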
Moreover, this PSPACE algorithm idea is independent of the field characteristic. (Eg. it can be seen as an alternative to [@forbes2017pspace] over the complex field.) Preliminaries {#sec-prelim} ============= [**Jacobian.**]{} Although this work does not need it, we recall the classical Jacobian: For polynomials $\mathbf{f} = \inbrace{f_1, \cdots, f_m}$ in ${\mathbb{F}}[x_1,\cdots, x_n]$, the [*Jacobian*]{} is the matrix $\mathcal{J}_{\mathbf{x}}(\mathbf{f}) := ({\partial_{x_j}f_i})_{m \times n}$, where ${\partial_{x_j}f_i} := \partial f_i/ \partial x_j$. The Jacobian criterion [@Jac41; @BMS13] states: For degree $\le d$ and trdeg $\le r$ polynomials $\mathbf{f}$, if char$({\mathbb{F}}) = 0$ or char$({\mathbb{F}}) > d^r$, then trdeg$(\mathbf{f}) \,=\, {\text{rank}}_{{\mathbb{F}}(\mathbf{x})} \mathcal{J}_{\mathbf{x}}(\mathbf{f})$. This yields a randomized poly-time algorithm [@Sch80]. For other fields, the Jacobian criterion fails due to inseparability and AD(${\mathbb{F}}$) is open. **AM protocol.** The Arthur-Merlin class AM is a randomized version of the class NP (see [@AB09]). Arthur-Merlin protocols, introduced by Babai [@babai1985trading], can be considered as a special type of interactive proof system in which the randomized poly-time verifier (Arthur) and the all-powerful prover (Merlin) have only constantly many rounds of exchange. AM contains interesting problems like determining if two graphs are non-isomorphic. AM $\cap$ coAM is the class of decision problems for which both YES and NO answers can be verified by an AM protocol. It can be thought of as the randomized version of NP $\cap$ coNP. See [@kayal2006complexity] for a few natural algebraic problems in AM $\cap$ coAM. If such a problem is NP-hard (even under random reductions) then the polynomial hierarchy collapses to the second-level, i.e. PH$=\Sigma_2$. In this work, the AM protocol will only be used to distinguish whether a set $S$ is ‘small’ or ‘large’. 
Formally, we refer to the Goldwasser-Sipser Set Lowerbound method: [@AB09 Chap.9] \[lem\_amprotocol\] Let $m\in{\mathbb{N}}$ be given in binary. Suppose $S$ is a set whose membership can be tested in nondeterministic polynomial time and whose size is promised to be either $\le m$ or $\ge 2m$. Then, the problem of deciding whether $|S|\stackrel{?}{\ge}2m$ is in AM. **Geometry.** Due to limited space we have moved the geometry preliminaries to Appendix \[app-AG\]. One can also refer to a standard text, eg. [@Har92; @hartshorne2013algebraic]. Basically, we need terms about affine (resp. projective) zerosets and the underlying Zariski topology. The latter gives a way to ‘impose’ geometry even in very discrete situations, eg. finite fields in this work. Algebraic dependence testing: Proof of Theorem \[thm\_amcoam\] {#sec-dep} =============================================================== Given $f_1,\dots,f_m\in {\mathbb{F}}_q[x_1,\dots,x_n]$, we want to decide if they are algebraically dependent. For this problem AD(${\mathbb{F}}_q$) we may assume, after some preprocessing, that $m=n$. Indeed, $m>n$ means that it is a YES instance. If $m<n$ then we can apply a ‘random’ linear map on the variables to reduce them to $m$, preserving the YES/NO instances. Also, the trdeg does not change when we move to the algebraic closure ${\overline{{\mathbb{F}}}}_q$. The details can be found in [@pandey2016algebraic Lem.2.7-2.9]. So, we assume the input instance to be $\mathbf f := \inbrace{f_1,\dots,f_n}$ with nonconstant polynomials. In the following, let $D:=\prod_{i\in [n]}\deg(f_i) \,>0$ and $D':=\max_{i\in [n]} \deg(f_i) \,>0$. Let $d\in{\mathbb{N}}^+$ and $q'=q^d$. The value of $d$ will be determined later. Let $f:{\mathbb{F}}_{q'}^n\to{\mathbb{F}}_{q'}^n$ be the polynomial map $a\mapsto (f_1(a),\dots,f_n(a))$. For $b=(b_1,\dots,b_n)\in {\mathbb{F}}_{q'}^n$, denote by $N_b$ the size of the preimage $f^{-1}(b)=$ $\{\mathbf x \in {\mathbb{F}}_{q'}^n \,\mid\, f(\mathbf x) = b \}$. 
Define ${\mathbb{A}}:={\overline{{\mathbb{F}}}}_q$ and $N'_b:= \#\{\mathbf x \in {\mathbb{A}}^n \,\mid\, f_i(\mathbf x) = b_i, \, \text{ for all } i\in[n] \}$ which might be $\infty$. Let $Q\in{\mathbb{F}}_q[y_1,\dots,y_n]$ be a nonzero annihilator, of minimal degree, of $f_1,\dots,f_n$. If it exists then $\deg(Q)\leq D$ by Perron’s bound. AM protocol ----------- First, we study the independent case. \[lem\_finite\] Suppose $\mathbf f$ are independent. Then $N'_{f(a)}$ is [*finite*]{} for all but at most $(nDD'/q')$-fraction of $a\in{\mathbb{F}}_{q'}^n$. For $i\in [n]$, let $G_i\in{\mathbb{F}}_q[z, y_1,\dots,y_n]$ be the annihilator of $\inbrace{x_i, f_1,\dots,f_n}$. We have $\deg(G_i)\leq D$ by Perron’s bound. Consider $a\in{\mathbb{F}}_{q'}^n$ such that $G'_i(z):= G_i(z, f_1(a),\dots,f_n(a))\in{\mathbb{F}}_q[z]$ is a nonzero polynomial for every $i\in [n]$. We claim that $N'_{f(a)}$ is finite for such $a$. To see this, note that for any $b=(b_1,\dots,b_n)\in {\mathbb{A}}^n$ satisfying the equations $f_i(b)=f_i(a)$, $i\in[n]$, we have $$0=\, G_i(b_i, f_1(b), \dots, f_n(b)) \,=\, G_i(b_i, f_1(a),\dots, f_n(a)) \,=\, G'_i(b_i), \quad\forall i\in [n] \,.$$ Hence, each $b_i$ is a root of $G'_i$. It follows that $N'_{f(a)} \leq \prod_{i\in[n]} \deg(G'_i) < \infty$, as claimed. It remains to prove that the number of $a\in{\mathbb{F}}_{q'}^n$ satisfying $G'_i=0$, for some index $i\in [n]$, is bounded by $nDD'q'^{-1}\cdot q'^{n}$. Fix $i\in [n]$. Suppose $G_i=\sum_{j=0}^{d_i} G_{i,j} z^j$, where $d_i:=\deg_z(G_i)$ and $G_{i,j}\in {\mathbb{F}}_q[y_1,\dots,y_n]$, for $0\leq j\leq d_i$. The leading coefficient $G_{i,d_i}$ is nonzero. As $f_1,\dots,f_n$ are algebraically independent, the polynomial $G_{i,d_i}(f_1,\dots,f_n)\in{\mathbb{F}}_q[x_1,\dots,x_n]$ is also nonzero. Its degree is $\le D'\deg(G_{i,d_i}) \leq D'\deg(G_i) \leq DD'$. 
By [@Sch80], for all but at most $(DD'/q')$-fraction of $a\in {\mathbb{F}}_{q'}^n$, we have $G_{i,d_i}(f_1(a),\dots,f_n(a))\neq 0$, which implies $$G'_i(z) \,=\, G_i(z, f_1(a),\dots,f_n(a)) \,=\, \sum_{j=0}^{d_i} G_{i,j}(f_1(a),\dots,f_n(a)) z^j \,\neq 0 \,.$$ The claim now follows from the union bound. We need the following affine version of Bézout’s Theorem. Its proof can be found in [@Sch95 Thm.3.1]. Let $g_1,\dots,g_n\in {\mathbb{A}}[x_1,\dots,x_n]$. Then the number of common zeros of $g_1,\dots,g_n$ in ${\mathbb{A}}^n$ is either infinite, or at most $\prod_{i\in[n]} \deg(g_i)$. Combining Lemma \[lem\_finite\] with Bézout’s Theorem, we obtain \[lem\_bpre\] Suppose $\mathbf f$ are independent. Then $N_{f(a)}\leq D$ for all but at most $(nDD'/q')$-fraction of $a\in{\mathbb{F}}_{q'}^n$. Next, we study the dependent case (with an annihilator $Q$). \[lem\_coamdep\] Suppose $\mathbf f$ are dependent. Then for $k>0$, we have $N_{f(a)}>k$ for all but at most $(kD/q')$-fraction of $a\in {\mathbb{F}}_{q'}^n$. Let ${\mathrm{Im}}(f):= f({\mathbb{F}}_{q'}^n)$ be the image of the map. Note that $Q$ vanishes on all the points in ${\mathrm{Im}}(f)$. So, $|{\mathrm{Im}}(f)| \leq Dq'^{n-1}$ by [@Sch80]. Let $B:= \{b\in {\mathrm{Im}}(f): N_b\leq k\}$ be the “bad” images. We can bound the number of bad domain points as $$\#\{a\in {\mathbb{F}}_{q'}^n: N_{f(a)}\leq k\} \,=\, \#\{a\in {\mathbb{F}}_{q'}^n: f(a)\in B\} \,\le\, k|B| \le k|{\mathrm{Im}}(f)| \,\le\, k D q'^{n-1} \,,$$ which proves the lemma. \[thm\_am\] Testing algebraic dependence of $\mathbf f$ is in AM. Fix $k:=2D$ and $q'=q^d>4nDD'+4kD$. Note that $d$ will be polynomial in the input size. For an $a\in {\mathbb{F}}_{q'}^n$, consider the set $f^{-1}(f(a)) :=$ $\{\mathbf x \in {\mathbb{F}}_{q'}^n \,\mid\, f(\mathbf x) = f(a) \}$. 
By Lemmas \[lem\_bpre\] & \[lem\_coamdep\]: When Arthur picks $a$ randomly, with high probability, $|f^{-1}(f(a))|= N_{f(a)}$ is more than $2D$ in the dependent case while $\le D$ in the independent case. Note that an upper bound on $\prod_{i\in [n]}\deg(f_i)$ can be deduced from the size of the input circuits for $f_i$’s; thus, we know $D$. Moreover, containment in $f^{-1}(f(a))$ can be tested in P. Thus, by Lemma \[lem\_amprotocol\], AD(${\mathbb{F}}_q$) is in AM. coAM protocol ------------- We again study the independent case, now counting the preimages of an arbitrary point $b$ of the range space, rather than of a point of the form $f(a)$. \[lem-coam-1\] Suppose $\mathbf f$ are independent. Then $N_b>0$ for at least $(D^{-1} - nD'q'^{-1})$-fraction of $b\in {\mathbb{F}}_{q'}^n$. Let $S:= \{a\in {\mathbb{F}}_{q'}^n: N_{f(a)}\leq D\}$. Then $|S|\geq (1-n DD'q'^{-1})\cdot q'^{n}$ by Lemma \[lem\_bpre\]. As every $b\in f(S)$ has at most $D$ preimages in $S$ under $f$, we have $|f(S)| \,\ge\, |S|/D \,\ge\, (D^{-1}-nD'q'^{-1})\cdot q'^{n}$. This proves the lemma since $N_b>0$ for all $b\in f(S)$. Next, we study the dependent case. \[lem-coam-2\] Suppose $\mathbf f$ are dependent. Then $N_b=0$ for all but at most $(D/q')$-fraction of $b\in {\mathbb{F}}_{q'}^n$. By definition: $N_b>0$ iff $b\in {\mathrm{Im}}(f):=f({\mathbb{F}}_{q'}^n)$. It was shown in the proof of Lemma \[lem\_coamdep\] that $|{\mathrm{Im}}(f)| \le Dq'^{n-1}$. The lemma follows. \[thm\_coam\] Testing algebraic dependence of $\mathbf f$ is in coAM. Fix $q'=q^d \,> D(2D+nD')$. Note that $d$ will be polynomial in the input size. For $b\in {\mathbb{F}}_{q'}^n$, consider the set $f^{-1}(b) :=$ $\{\mathbf x \in {\mathbb{F}}_{q'}^n \,\mid\, f(\mathbf x) = b \}$ of size $N_b$. Define $S:= {\mathrm{Im}}(f)$. Note that: $b\in {\mathbb{F}}_{q'}^n$ has $N_b>0$ iff $b\in S$. Thus, by Lemma \[lem-coam-1\] (resp. Lemma \[lem-coam-2\]), $|S|\ge (D^{-1} - nD'q'^{-1})q'^n\,> 2Dq'^{n-1}$ (resp. $|S|\le Dq'^{n-1}$) when $\mathbf f$ are independent (resp. dependent).
Note that an upper bound on $\prod_{i\in [n]}\deg(f_i)$ can be deduced from the size of the input circuits for $f_i$’s; thus, we know $Dq'^{n-1}$. Moreover, containment in $S$ can be tested in NP. Thus, by Lemma \[lem\_amprotocol\], AD(${\mathbb{F}}_q$) is in coAM. The statement immediately follows from Theorems \[thm\_am\] & \[thm\_coam\]. Approximate polynomials satisfiability: Proof of Theorem \[thm-aps\] {#sec:aps} ==================================================================== Theorem \[thm-aps\] is proved in two parts. First, we show that APS is equivalent to the AnnAtZero problem, which means that it is NP-hard [@Kay09]. Next, we utilize the beautiful underlying geometry to devise a PSPACE algorithm. APS is equivalent to AnnAtZero ------------------------------ Let ${\mathbb{A}}$ be the algebraic closure of ${\mathbb{F}}$. Note that for the given polynomials $\mathbf f:= \inbrace{f_1,\dots,f_m}$ in ${\mathbb{F}}[\mathbf x]$, there is an annihilator over ${\mathbb{F}}$ with nonzero constant term iff there is an annihilator over ${\mathbb{A}}$ with nonzero constant term. This is because if $Q$ is an annihilator over ${\mathbb{A}}$ with nonzero constant term, wlog $1$, then (by basic linear algebra) the linear system in terms of the (unknown) coefficients of $Q$ would also have a solution in ${\mathbb{F}}$. Thus, there is an annihilator over ${\mathbb{F}}$ with constant term $1$. This proves that it suffices to solve AnnAtZero over the algebraically closed field ${\mathbb{A}}$. This provides us with a better geometry. Write $f:{\mathbb{A}}^n\to {\mathbb{A}}^m$ for the polynomial map sending a point $x=(x_1,\dots,x_n)\in {\mathbb{A}}^n$ to $(f_1(x),\dots,f_m(x))\in {\mathbb{A}}^m$. For a subset $S$ of an affine or projective space, write $\overline{S}$ for its Zariski closure in that space. We will use $O$ to denote the origin $\mathbf 0$ of an affine space. The following lemma reinterprets APS in a geometric way.
\[lem\_geom\] The constant term of every annihilator for $\mathbf f$ is zero  iff  $O\in \overline{{\mathrm{Im}}(f)}$. Note that: $Q\in {\mathbb{A}}[Y_1,\dots,Y_m]$ vanishes on ${\mathrm{Im}}(f)$ iff $Q(\mathbf f)$ vanishes on ${\mathbb{A}}^n$, which holds iff $Q(\mathbf f)=0$, i.e., $Q$ is an annihilator for $\mathbf f$. So $\overline{{\mathrm{Im}}(f)}=V(I)$, where the ideal $I\subseteq {\mathbb{A}}[Y_1,\dots,Y_m]$ consists of the annihilators for $\mathbf f$. Also note that $\{O\}=V({\mathfrak{m}})$, where ${\mathfrak{m}}$ is the maximal ideal $\langle Y_1,\dots, Y_m\rangle$. Let us study the condition $O\in \overline{{\mathrm{Im}}(f)}$. By the ideal-variety correspondence, $\{O\} \,=\, V({\mathfrak{m}}) \,\subseteq\, \overline{{\mathrm{Im}}(f)} \,=\, V(I)$ is equivalent to $I\subseteq {\mathfrak{m}}$, i.e., $Q\bmod {\mathfrak{m}}=0$ for $Q\in I$. But $Q\bmod {\mathfrak{m}}$ is just the constant term of the annihilator $Q$. Hence, we have the equivalence. As an interesting corner case, the above lemma proves that whenever $\mathbf f$ are algebraically [*independent*]{}, we have ${\mathbb{A}}^m={\overline{{\mathrm{Im}}(f)}}$. Eg. $f_1=X_1$ and $f_2=X_1 X_2-1$. Even in the dependent cases, ${\mathrm{Im}}(f)$ is not necessarily closed in the Zariski topology. Let $n=2$, $m=3$. Consider $f_1=f_2=X_1$ and $f_3=X_1 X_2-1$. The annihilators are multiples of $(Y_1-Y_2)$, which means by Lemma \[lem\_geom\] that $O\in {\overline{{\mathrm{Im}}(f)}}$. But there is no solution to $f_1=f_2=f_3=0$, i.e. $O\notin {\mathrm{Im}}(f)$. **Approximation.** Although $O\in \overline{{\mathrm{Im}}(f)}$ is not equivalent to the existence of a solution $x\in {\mathbb{A}}^n$ to $f_i=0$, $i\in[m]$, it is equivalent to the existence of an “approximate solution” $x\in {\mathbb{A}}[{\varepsilon},{\varepsilon}^{-1}]^n$, which is a tuple of Laurent polynomials in a formal variable ${\varepsilon}$. The formal statement is as follows. Wlog we assume $\mathbf f$ to be $m$ nonconstant polynomials. 
\[thm\_approx\] $O\in \overline{{\mathrm{Im}}(f)}$ iff there exists $x=(x_1,\dots,x_n)\in {\mathbb{A}}({\varepsilon})^n$ such that $f_i(x)\in {\varepsilon}{\mathbb{A}}[{\varepsilon}]$, for all $i\in [m]$. Moreover, when such $x$ exists, it may be chosen such that $$x_i \,\in\,\, {\varepsilon}^{-D} {\mathbb{A}}[{\varepsilon}] \,\cap\, {\varepsilon}^{D'} {\mathbb{A}}[{\varepsilon}^{-1}] \,=\, \left\{\sum_{j=-D}^{D'} c_j {\varepsilon}^j: c_j\in {\mathbb{A}}\right\}, \quad i\in[n],$$ where $D:=\prod_{i\in[m]} \deg(f_i) \,>0$ and $D':=(\max_{i\in[m]} \deg(f_i))\cdot D \,>0$. The proof of Theorem \[thm\_approx\] is almost the same as that in [@LL89]. First, we recall a tool to reduce the domain from a variety to a curve, proven in [@LL89]. [*[@LL89 Prop.1]*]{}\[lem\_aux1\] Let $V\subseteq {\mathbb{A}}^n$, $W\subseteq {\mathbb{A}}^m$ be affine varieties, $\varphi: V\to W$ dominant, and $t\in W\setminus \varphi(V)$. Then there exists a curve $C\subseteq {\mathbb{A}}^n$ such that $t\in\overline{\varphi(C)}$ and $\deg(C)\leq \deg(\Gamma_\varphi)$, where $\Gamma_\varphi$ denotes the graph of $\varphi$ embedded in ${\mathbb{A}}^n\times {\mathbb{A}}^m$. Next, [@LL89] essentially shows that in the case of a curve one can approximate a preimage under $f$ by using a [*single*]{} formal variable ${\varepsilon}$ and working in ${\mathbb{A}}({\varepsilon})$. [*[@LL89 Cor. of Prop.3]*]{}\[lem\_aux2\] Let $C\subseteq {\mathbb{A}}^n$ be an affine curve. Let $f: C\to {\mathbb{A}}^m$ be a morphism sending $x\in C$ to $(f_1(x),\dots,f_m(x))\in {\mathbb{A}}^m$, where $f_1,\dots,f_m\in {\mathbb{A}}[X_1,\dots,X_n]$. Let $t=(t_1,\dots,t_m)\in \overline{f(C)}$. Then there exist $p_1,\dots,p_n\in {\varepsilon}^{-\deg(C)}{\mathbb{A}}[[{\varepsilon}]]$ such that $f_i(p_1,\dots,p_n)-t_i \,\in\, {\varepsilon}{\mathbb{A}}[[{\varepsilon}]]$, for all $i\in [m]$.
Finally, we can use the above two lemmas to prove the connection of APS with $O\in \overline{{\mathrm{Im}}(f)}$, and hence with AnnAtZero (by Lemma \[lem\_geom\]). First assume that an $x$, satisfying the conditions in Theorem \[thm\_approx\], exists. Pick such an $x$. If $\mathbf f$ are algebraically independent, then by Lemma \[lem\_geom\] we have that ${\mathbb{A}}^m=\overline{{\mathrm{Im}}(f)}$ and we are done. So, assume that there is a nonzero annihilator $Q$ for $\mathbf f$. We have $Q(f_1(x),\dots,f_m(x))=0\in{\varepsilon}{\mathbb{A}}[{\varepsilon}]$. On the other hand, as $f_i(x)\in {\varepsilon}{\mathbb{A}}[{\varepsilon}]$ for all $i\in [m]$, we deduce that $Q(f_1(x),\dots,f_m(x))\bmod {\varepsilon}{\mathbb{A}}[{\varepsilon}]$ is $Q(\mathbf 0)$, which is the constant term of $Q$. So it equals zero. By Lemma \[lem\_geom\], we have $O\in \overline{{\mathrm{Im}}(f)}$ and again we are done. Conversely, assume $O\in \overline{{\mathrm{Im}}(f)}$; we will prove that such an $x$ exists. If $O\in {\mathrm{Im}}(f)$, then we can choose $x\in {\mathbb{A}}^n$ and we are done. So assume $O\in \overline{{\mathrm{Im}}(f)}\setminus {\mathrm{Im}}(f)$. Regard $f$ as a dominant morphism from ${\mathbb{A}}^n$ to $\overline{{\mathrm{Im}}(f)}$. Its graph $\Gamma_f$ is cut out in ${\mathbb{A}}^n\times {\mathbb{A}}^m$ by $Y_i-f_i(X_1,\dots,X_n)$, $i\in[m]$. So $\deg(\Gamma_f)\leq \prod_{i=1}^m \deg(f_i) = D$ by Bézout’s Theorem. By Lemma \[lem\_aux1\], there exists a curve $C\subseteq {\mathbb{A}}^n$ such that $O\in \overline{f(C)}$ and $\deg(C)\leq \deg(\Gamma_f)\leq D$. Pick such a curve $C$. Apply Lemma \[lem\_aux2\] to $C$, $f|_C$ and $O$, and let $p_1,\dots,p_n\in {\varepsilon}^{-\deg(C)} {\mathbb{A}}[[{\varepsilon}]]\subseteq {\varepsilon}^{-D} {\mathbb{A}}[[{\varepsilon}]]$ be as given by the lemma. Then $f_i(p_1,\dots,p_n)\in {\varepsilon}{\mathbb{A}}[[{\varepsilon}]]$, for all $i\in [m]$.
For $i\in [n]$, let $x_i$ be the Laurent polynomial obtained from $p_i$ by truncating the terms of degree greater than $D'$. When evaluating $f_1,\dots,f_m$ at $(p_1,\dots,p_n)$, such truncation does not affect the coefficient of ${\varepsilon}^k$ for $k\leq 0$, by the choice of $D'$. So $f_i(x_1,\dots,x_n)\in {\varepsilon}{\mathbb{A}}[{\varepsilon}]$, for all $i\in [m]$. [*Remark–*]{} The lower bound $-D=-\prod_{i=1}^m \deg(f_i)$ for the least degree of $x_i$ in ${\varepsilon}$ can be achieved up to a factor of $1+o(1)$. Consider the polynomials $f_1=f_2=X_1$, $f_3=X_1^{d-1}X_2-1$, and $f_i=X_{i-2}^d-X_{i-1}$ for $i=4,\dots,m$, where $m=n+1$. Then we are forced to choose $x_1\in {\varepsilon}{\mathbb{A}}[{\varepsilon}]$ and $x_i \,\in\, {\varepsilon}^{-(d-1)d^{i-2}}\cdot{\mathbb{A}}[{\varepsilon}^{-1}]$, for $i=2,\dots, n$. So the least degree of $x_n$ in ${\varepsilon}$ is at most $-(d-1)d^{n-2}$, while $-D=-d^{n-1}$. Putting APS in PSPACE --------------------- Owing to the exponential upper bound on the precision (= degree wrt ${\varepsilon}$) shown in Theorem \[thm\_approx\], one expects to solve APS in EXPSPACE only. Surprisingly, in this section, we give a PSPACE algorithm. This we do by reducing the general AnnAtZero instance to a very special instance that is easy to solve. Let ${\mathbb{A}}$ be the algebraic closure of the field ${\mathbb{F}}$. Let $f_1,\dots,f_m\in {\mathbb{F}}[X_1,\dots,X_n]$ be given. Denote by $k$ the trdeg of ${\mathbb{F}}(f_1,\dots,f_m)/{\mathbb{F}}$. Computing $k$ can be done in PSPACE using linear algebra [@Plo05; @C76]. We assume $k<m-1$, since the cases $k=m-1$ and $k=m$ are again easy to solve in PSPACE using linear algebra. We reduce the number of polynomials from $m$ to $k+1$ as follows: Fix a finite subset $S\subseteq {\mathbb{F}}$, and choose $c_{i,j}\in S$ at random for $i\in [k+1]$ and $j\in [m]$. For this to work, we need a large enough $S$ and ${\mathbb{F}}$. For $i\in [k+1]$, let $g_i:= \sum_{j=1}^m c_{i,j} f_j$.
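To see the reduction in action, here is a small brute-force experiment in Python (our own illustration, with made-up names `f` and `g`; this is not the PSPACE algorithm developed below). We take $m=3$ dependent polynomials of trdeg $k=1$ over a small prime field, form $k+1=2$ random combinations $g_i$, and use the image size of the induced map as a crude witness of low transcendence degree, in the spirit of Lemma \[lem\_coamdep\]:

```python
import random

random.seed(0)
p = 101  # a small prime field F_p stands in for F_{q'} (our choice)

# m = 3 algebraically dependent polynomials in n = 2 variables, trdeg k = 1:
# f1 = x1, f2 = x1^2, f3 = x1^3 -- they ignore x2 entirely.
def f(x1, x2):
    return (x1 % p, x1 * x1 % p, x1 * x1 * x1 % p)

k = 1
S = range(p)  # the finite sample set S of the reduction
c = [[random.choice(S) for _ in range(3)] for _ in range(k + 1)]

def g(x1, x2):
    fx = f(x1, x2)
    return tuple(sum(cij * fj for cij, fj in zip(row, fx)) % p for row in c)

# A trdeg-1 map into F_p^2 has an image of size O(p), while an algebraically
# independent pair such as (x1, x2) hits all p^2 points (cf. the counting
# arguments of the coAM protocol above).
img_g = {g(a, b) for a in range(p) for b in range(p)}
img_indep = {(a, b) for a in range(p) for b in range(p)}
print(len(img_g), len(img_indep))  # roughly p versus p^2
```

The image of $g$ occupies only about $p$ of the $p^2$ points, as expected for a trdeg-$1$ map. The claim proved next is the stronger statement that the reduction preserves the trdeg and the constant term of the annihilator, not this counting heuristic.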
Let $\delta:= (k+1)(\max_{i\in [m]}\deg(f_i))^k/|S|$. Our algorithm is immediate once we prove the following claim. \[thm-randReduct\] It holds, with probability $\ge (1-\delta)$, that \(1) the transcendence degree of ${\mathbb{F}}(g_1,\dots,g_{k+1})/{\mathbb{F}}$ equals $k$, and \(2) the constant term of every annihilator for $g_1,\dots,g_{k+1}$ is zero iff the constant term of every annihilator for $f_1,\dots,f_{m}$ is zero. First, we reformulate the two items of Theorem \[thm-randReduct\] in a geometric way, and later we will analyze the error probability. For $d\in{\mathbb{N}}$, denote by ${\mathbb{A}}^d$ (resp. ${\mathbb{P}}^d$) the $d$-dimensional affine space (resp. projective space) over ${\mathbb{A}}:={\overline{{\mathbb{F}}}}$. Let $f: {\mathbb{A}}^n\to {\mathbb{A}}^m$ (resp. $g: {\mathbb{A}}^n\to {\mathbb{A}}^{k+1}$) be the polynomial map sending $x$ to $(f_1(x),\dots,f_m(x))$ (resp. $(g_1(x),\dots,g_{k+1}(x))$). Let $O$ and $O'$ be the origin of ${\mathbb{A}}^m$ and that of ${\mathbb{A}}^{k+1}$ respectively. Define the affine varieties $V:= \overline{{\mathrm{Im}}(f)}\subseteq {\mathbb{A}}^m$ and $V':= \overline{{\mathrm{Im}}(g)}\subseteq {\mathbb{A}}^{k+1}$. Then ${\text{dim~}}V= \text{trdeg } \mathbf f = k$. Let $\pi: {\mathbb{A}}^m\to {\mathbb{A}}^{k+1}$ be the linear map sending $(x_1,\dots,x_m)$ to $(y_1,\dots,y_{k+1})$ where $y_i= \sum_{j=1}^m c_{i,j} x_j$. Then $g= \pi\circ f$ and $V'= \overline{\pi(V)}$.[^4] Now (1) of Theorem \[thm-randReduct\] is equivalent to ${\text{dim~}}V'=k$, and (2) is equivalent to $O'\in V'$ iff $O\in V$. $$\begin{tikzcd}[column sep=large] {\mathbb{A}}^n \arrow{r}{f} \arrow[swap]{rd}{g} & V= \overline{{\mathrm{Im}}(f)} \arrow{r}{\subseteq} \arrow{d}{\pi|_V} & {\mathbb{A}}^m \arrow{d}{\pi}\\ & V'= \overline{{\mathrm{Im}}(g)} \arrow{r}{\subseteq} & {\mathbb{A}}^{k+1} \end{tikzcd}$$ We will give sufficient conditions for (1) and (2) in terms of incidence properties. Note that $O\in V$ implies $O'\in V'$, since $\pi(O)=O'$.
Now suppose $O\not\in V$. Let $W:= \pi^{-1}(O')$, which is a linear subspace of ${\mathbb{A}}^m$. Then $O'\not\in \pi(V)$ iff $V\cap W=\emptyset$. However, $V\cap W=\emptyset$ does not imply $O'\not\in V'$, as $V$ may “get infinitesimally close to $W$” without actually meeting $W$, so that $O'\in\overline{\pi(V)}=V'$. See Example \[exmp\_reduction\] in the appendix. To overcome this problem, we consider projective geometry instead of affine geometry. Suppose ${\mathbb{A}}^m$ has coordinates $X_1,\dots,X_m$ and ${\mathbb{P}}^m$ has homogeneous coordinates $X_0,\ldots,X_m$. Regard ${\mathbb{A}}^m$ as a dense open subset of ${\mathbb{P}}^m$ via $(x_1,\dots,x_m)\mapsto (1,x_1,\dots,x_m)$. Then $H:= {\mathbb{P}}^m\setminus {\mathbb{A}}^m \,\cong {\mathbb{P}}^{m-1}$ is the [*hyperplane at infinity*]{}, defined by $X_0=0$. Denote by $V_c$ (resp. $W_c$) the [*projective closure*]{} of $V$ (resp. $W$) in ${\mathbb{P}}^m$. Then $V = V_c\cap {\mathbb{A}}^m$. Let $W_H:= W_c\cap H$, which is a projective subspace of $H$. For distinct points $P,Q\in{\mathbb{P}}^m$, write $\overline{PQ}$ for the projective line passing through them. \[lem\_projcriterion\] We have: \(1) ${\text{dim~}}V'=k$, if  $V_c\cap W_H=\emptyset$, and \(2) $O'\not\in V'$, if  $V_c\cap W_c=\emptyset$. (1): Assume ${\text{dim~}}V'<k$. Choose $P\in\pi(V)$. The dimension of $\pi^{-1}(P)\cap V$ is at least ${\text{dim~}}V-{\text{dim~}}V' \,\geq 1$ [@Har92 Thm.11.12]. Denote by $Y$ and $Z$ the projective closure of $\pi^{-1}(P)$ and that of $\pi^{-1}(P)\cap V$ in ${\mathbb{P}}^m$ respectively. Then $Z\subseteq Y\cap V_c$. As ${\text{dim~}}Z={\text{dim~}}(\pi^{-1}(P)\cap V)\geq 1$ and ${\text{dim~}}H=m-1$, we have $Z\cap H\neq\emptyset$ [@Har92 Prop.11.4]. As $\pi$ is a linear map, $\pi^{-1}(P)=Y\cap{\mathbb{A}}^m$ is a translate of $\pi^{-1}(O')= W =W_c\cap{\mathbb{A}}^m$.
It is well known that two projective subspaces $W_1,W_2\not\subseteq H$ have the same intersection with $H$ iff $W_1\cap {\mathbb{A}}^m$ and $W_2\cap {\mathbb{A}}^m$ are translates of each other.[^5] So, $Y\cap H=W_c\cap H=W_H$. Therefore, $V_c\cap W_H \,=\, V_c\cap Y\cap H \,\supseteq\, Z\cap H \,\neq \emptyset$. (2): Assume to the contrary that $V_c\cap W_c=\emptyset$ but $O'\in V'$. We will derive a contradiction. As $W_H\subseteq W_c$, we have $V_c\cap W_H=\emptyset$ and hence ${\text{dim~}}V'=k$ by (1). Denote by $J(V_c, W_H)$ the [*join*]{} of $V_c$ and $W_H$, which is defined to be the union of the projective lines $\overline{PQ}$, where $P\in V_c$ and $Q\in W_H$. It is known that $J(V_c, W_H)$, as the join of two [*disjoint*]{} projective subvarieties, is again a projective subvariety of ${\mathbb{P}}^m$ [@Har92 Example 6.17]. Consider $P\in V_c$ and $Q\in W_H$. If $P\in H$, the line $\overline{PQ}$ lies in $H$ and does not meet ${\mathbb{A}}^m$. Now suppose $P\in V_c\setminus H=V$. Then $\overline{PQ}$ meets $\overline{OQ}$ at the point $Q$. So $\overline{PQ}\cap {\mathbb{A}}^m$ is a translate of $\overline{OQ}\cap {\mathbb{A}}^m \,\subseteq\, W_c\cap{\mathbb{A}}^m = W$. Conversely, let $P\in V$. Let $W_P$ denote the unique translate of $W$ containing $P$. Let $\ell_P$ be an affine line contained in $W_P$ and passing through $P$ (note that $W_P$ is the union of such lines). Then $\ell_P$ is a translate of an affine line $\ell\subseteq W$. As $\ell_P$ and $\ell$ are translates of each other, their projective closures intersect $H$ at the same point $Q$. We have $Q\in \ell\cap H \subseteq W_H$. So $\ell_P=\overline{PQ}\cap {\mathbb{A}}^m\subseteq J(V_c, W_H)\cap {\mathbb{A}}^m$. We conclude that $$\label{eq_join} J(V_c, W_H)\cap {\mathbb{A}}^m \,=\, \bigcup_{P\in V} W_P \,.$$ We claim that $J(V_c,W_H)\cap {\mathbb{A}}^m=\pi^{-1}(V')$. As $\pi$ is a linear map, Equation \[eq\_join\] implies $J(V_c,W_H)\cap {\mathbb{A}}^m\subseteq\pi^{-1}(V')$.
We prove the other direction by comparing dimensions. It is known that for two [*disjoint*]{} projective subvarieties $V_1$ and $V_2$, ${\text{dim~}}J(V_1,V_2)={\text{dim~}}V_1+{\text{dim~}}V_2+1$ [@Har92 Prop.11.37-Ex.11.38]. Therefore, $${\text{dim~}}J(V_c, W_H)={\text{dim~}}V_c +{\text{dim~}}W_H+1={\text{dim~}}V+{\text{dim~}}W=k+{\text{dim~}}W \,.$$ So, ${\text{dim~}}(J(V_c,W_H) \cap {\mathbb{A}}^m) \,= k+{\text{dim~}}W$. On the other hand, we have $\pi^{-1}(V')\cong V'\times W$. So ${\text{dim~}}\pi^{-1}(V')={\text{dim~}}V'+{\text{dim~}}W=k+{\text{dim~}}W$. Now $J(V_c,W_H)\cap {\mathbb{A}}^m$ and $\pi^{-1}(V')$ are (irreducible) affine varieties of the same dimension, and one is contained in the other. So they must be equal. This proves the claim. As $O'\in V'$, we have $W=\pi^{-1}(O')\subseteq \pi^{-1}(V')=\bigcup_{P\in V} W_P$. So $W_P=W$ for some $P\in V$, since $W$ is a linear space. But then $P\in V\cap W_P=V\cap W\subseteq V_c\cap W_c$, contradicting the assumption $V_c\cap W_c=\emptyset$. [*Remark–*]{} The converse of Lemma \[lem\_projcriterion\] (Condition 2) is false; see Example \[exmp-projcrit\] in the appendix. [**Error probability.**]{} It remains to bound the probability of failure of the conditions $V_c\cap W_H=\emptyset$ and (in the case $O\not\in V$) $V_c\cap W_c=\emptyset$ in Lemma \[lem\_projcriterion\]. We need the following lemma. \[lem\_hyperplane\] Let $V\subseteq{\mathbb{P}}^m$ be a projective subvariety of dimension $r$ and degree $d$. Let $r'\geq r+1$. Choose $c_{i,j}\in S$ at random, for $i\in [r']$ and $0\leq j\leq m$. Let $W\subseteq {\mathbb{P}}^m$ be the projective subspace cut out by the equations $\sum_{j=0}^{m} c_{i,j} X_j=0$, $i=1,\dots,r'$, where $X_0,\dots,X_m$ are homogeneous coordinates of ${\mathbb{P}}^m$. Then $V\cap W=\emptyset$ holds with probability at least $1-(r+1)d/|S|$. For $i\in [r']$, let $H_i\subseteq {\mathbb{P}}^m$ be the hyperplane defined by $\sum_{j=0}^{m} c_{i,j} X_j=0$.
By ignoring $H_i$ for $i>r+1$, we may assume $r'=r+1$. Let $V_0:= V$ and $V_i:= V_{i-1}\cap H_i$ for $i\in [r']$. It suffices to show that ${\text{dim~}}V_i={\text{dim~}}V_{i-1}-1$ holds with probability at least $1-d/|S|$, for each $i\in [r']$ (the dimension of the empty set is $-1$ by convention). Fix $i\in [r']$ and $c_{i',j}$, for $i'\in [i-1]$ and $0\leq j\leq m$. So $V_{i-1}$ is also fixed. Note that $V_{i-1}\neq \emptyset$, since taking a hyperplane section reduces the dimension by at most one. If ${\text{dim~}}V_i\neq {\text{dim~}}V_{i-1}-1$, then ${\text{dim~}}V_i={\text{dim~}}V_{i-1}$, and $H_i$ contains some irreducible component of $V_{i-1}$ [@Har92 Exercise 11.6]. Let $Y$ be an irreducible component of $V_{i-1}$, and fix a point $P\in Y$. Then $Y\subseteq H_i$ only if $P\in H_i$, which holds only if $c_{i,0},\dots,c_{i,m}$ satisfy a nonzero linear equation determined by $P$. This occurs with probability at most $1/|S|$ (eg. by fixing all but one $c_{i,j}$). We also have $\deg(V_{i-1})\leq \deg(V)\leq d$, and hence the number of irreducible components of $V_{i-1}$ is bounded by $d$. By the union bound, $H_i$ contains an irreducible component of $V_{i-1}$ with probability at most $d/|S|$. As mentioned above, Theorem \[thm-randReduct\] is equivalent to showing that, with probability at least $1-\delta$: (1) ${\text{dim~}}V'=k$, and (2) $O'\in V'$ iff $O\in V$. Note that $W_c$ is cut out in ${\mathbb{P}}^m$ by the linear equations $\sum_{j=1}^{m} c_{i,j} X_j=0$, $i=1,\dots,k+1$. So $W_H$ is cut out in $H\cong {\mathbb{P}}^{m-1}$ (corresponding to $X_0=0$) by the linear equations $\sum_{j=1}^m c_{i,j} X_j=0$, $i=1,\dots,k+1$. We also have $\deg(V_c\cap H)\leq \deg(V_c)\leq (\max_{i\in [m]}\deg(f_i))^k$ (see, e.g., [@burgisser2013algebraic Thm.8.48]). Assume $O\in V$. Then $O'\in V'$ since $\pi(O)=O'$.
Applying Lemma \[lem\_hyperplane\] to each of the irreducible components of $V_c\cap H$ and $W_H$, as subvarieties of $H\cong{\mathbb{P}}^{m-1}$, we see $V_c\cap W_H=(V_c\cap H)\cap W_H=\emptyset$ holds with probability at least $1-k\deg(V_c \cap H)/|S|\geq 1-\delta$. So by Lemma \[lem\_projcriterion\], ${\text{dim~}}V'=k$ holds with probability at least $1-\delta$. Now assume $O\not\in V$. Let $\pi_{O, H}: V_c\to H$ be the [*projection of $V_c$ from $O$ to $H$*]{}, defined by $P\mapsto \overline{OP}\cap H$ for $P\in V_c$. It is well defined since $O\not\in V_c$. The image $\pi_{O,H}(V_c)$ is a projective subvariety of $H$ [@Har92 Thm.3.5]. If $V_c\cap W_c$ contains a point $P$, then $\pi_{O,H}(V_c)\cap W_H$ contains $\pi_{O,H}(P)$. Conversely, if $\pi_{O,H}(V_c)\cap W_H$ contains a point $Q$, then there exists $P\in V_c$ such that $Q=\pi_{O,H}(P)$, and we have $P\in\overline{O Q}\subseteq W_c$. We conclude that $\pi_{O,H}(V_c)\cap W_H=\emptyset$ iff $V_c\cap W_c=\emptyset$, which implies $V_c\cap W_H=\emptyset$. Note that ${\text{dim~}}\pi_{O,H}(V_c)={\text{dim~}}V_c=k$, since $\pi_{O,H}(V_c)=J(\{O\}, V_c)\cap H$. We also have $\deg(\pi_{O,H}(V_c))\leq \deg(V_c)$ [@Har92 Eg.18.16]. Applying Lemma \[lem\_hyperplane\] to $\pi_{O,H}(V_c)$ and $W_H$, as subvarieties of $H\cong{\mathbb{P}}^{m-1}$, we see $\pi_{O,H}(V_c)\cap W_H=\emptyset$ holds with probability at least $1-(k+1)\deg(\pi_{O,H}(V_c))/|S|\geq 1-\delta$. By Lemma \[lem\_projcriterion\] and the previous paragraphs, it holds with probability at least $1-\delta$ that ${\text{dim~}}V'=k$ and $O'\not\in V'$. AnnAtZero is known to be NP-hard [@Kay09]. The NP-hardness of APS follows from Lemma \[lem\_geom\] and Theorem \[thm\_approx\]. Given an instance $\mathbf f$ of APS, we can first find the trdeg $k$. Fix a subset $S\subset {\mathbb{A}}$ to be larger than $2(k+1)(\max_{i\in [m]}\deg(f_i))^k$ (which can be scanned using only polynomial-space). 
Consider the points $\left(\left(c_{i,j} \,\mid\, i\in [k+1],\, j\in [m] \right)\right)\in S^{(k+1)\times m}$; for each such point define $\mathbf g:= \big\{g_i:= \sum_{j=1}^m c_{i,j} f_j \,\mid$ $i\in[k+1] \big\}$. Compute the trdeg of $\mathbf g$, and if it is $k$ then solve AnnAtZero for the instance $\mathbf g$. Output NO iff some $\mathbf g$ failed the AnnAtZero test. All these steps can be achieved in space polynomial in the input size, using the uniqueness of the annihilator for $\mathbf g$ [@Kay09 Lem.7], Perron’s degree bound [@Plo05] and linear algebra [@C76]. Hitting-set for $\overline{\rm VP}$: Proof of Theorem \[thm-hsg\] {#sec-hsg} ================================================================= Suppose $p$ is a prime. Define ${\mathbb{A}}:={\overline{{\mathbb{F}}}}_p$. We want to find hitting-sets for certain polynomials in ${\mathbb{A}}[x_1,\dots,x_n]$. Fix a $p$-power $q\ge\Omega(sr^6)$, for the given parameters $s, r$. Assume that $p\nmid(r+1)$. Also, fix a model for the finite field ${\mathbb{F}}_q$ [@AL86]. We now define the notion of ‘infinitesimally approximating’ a polynomial by a small circuit. [**Approximative closure of VP.**]{} [@bringmann2017algebraic] A family $(f_n|n)$ of polynomials from ${\mathbb{A}}[\mathbf x]$ is in the [*class $\overline{\rm VP}_{\mathbb{A}}$*]{} if there are polynomials $f_{n,i}$ and a function $t:\mathbb{N} \to \mathbb{N}$ such that the polynomial $g_n(\mathbf x):= f_n(\mathbf x)+ {\varepsilon}f_{n,1}(\mathbf x)+ {{\varepsilon}}^2 f_{n,2}(\mathbf x)+ \ldots+ {{\varepsilon}}^{t(n)} f_{n,t(n)}(\mathbf x)$ has a poly($n$)-size poly($n$)-degree algebraic circuit over the field ${\mathbb{A}}({\varepsilon})$. That is, $g_n \equiv f_n \bmod{{\varepsilon}{\mathbb{A}}[{\varepsilon}][\mathbf x]}$. The smallest possible circuit size of $g_n$ is called the [*approximative complexity*]{} of $f_n$, namely ${\overline{{\text{size}}}}(f_n)$.
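For intuition, here is a folklore toy instance of such an approximation (our illustration, not taken from the references): over a field of characteristic $\ne 3$, the circuit $g(x,y)=\big((x+{\varepsilon}y)^3-x^3\big)/(3{\varepsilon})$ is small over ${\mathbb{A}}({\varepsilon})$ and satisfies $g \equiv x^2y \bmod{{\varepsilon}}$, so it infinitesimally approximates $x^2y$. A sketch verifying the identity with exact rational arithmetic (the helpers `pmul`, `padd` are our names):

```python
from fractions import Fraction
from itertools import product

# Trivariate polynomials in (x, y, eps) as {(deg_x, deg_y, deg_eps): coeff} dicts.
def pmul(a, b):
    out = {}
    for (ea, ca), (eb, cb) in product(a.items(), b.items()):
        e = tuple(i + j for i, j in zip(ea, eb))
        out[e] = out.get(e, Fraction(0)) + ca * cb
    return {e: c for e, c in out.items() if c}

def padd(a, b):
    out = dict(a)
    for e, c in b.items():
        out[e] = out.get(e, Fraction(0)) + c
        if not out[e]:
            del out[e]
    return out

x   = {(1, 0, 0): Fraction(1)}
y   = {(0, 1, 0): Fraction(1)}
eps = {(0, 0, 1): Fraction(1)}

# g = ((x + eps*y)^3 - x^3) / (3*eps): expand the numerator, then shift eps-degrees.
t = padd(x, pmul(eps, y))
num = padd(pmul(pmul(t, t), t), {(3, 0, 0): Fraction(-1)})
g = {(i, j, k - 1): c / 3 for (i, j, k), c in num.items()}  # divide by 3*eps

# g = x^2*y + eps*x*y^2 + eps^2*y^3/3, so g is congruent to x^2*y modulo eps:
g0 = {e: c for e, c in g.items() if e[2] == 0}
print(g0)  # {(2, 1, 0): Fraction(1, 1)}
```

Dropping the ${\varepsilon}$-terms of $g$ leaves exactly $x^2y$; over ${\overline{{\mathbb{F}}}}_p$ the same identity works whenever $p\neq 3$.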
It may happen that $g_n$ is much easier than $f_n$ in terms of traditional circuit complexity. That possibility makes the definition interesting and opens up a long line of research. [**Hitting-set for $\overline{\rm VP}_{\mathbb{A}}$.**]{} Given functions $s=s(n)$ and $r=r(n)$, a finite subset ${\mathcal{H}}\subset {\mathbb{A}}^n$ is called a [*hitting-set*]{} for degree-$r$ polynomials of approximative complexity $s$, if for every such nonzero polynomial $f$: $\exists \mathbf v\in{\mathcal{H}}, \, f(\mathbf v)\ne0$. [**Explicitness.**]{} We are interested in computing such a hitting-set in poly($s, \log r, \log q$)-time. Before our work, the best result known was EXPSPACE [@mul12; @mulmuley2017geometric]. Heintz and Schnorr [@heintz1980testing] proved that poly$(s,\log qr)$-sized hitting-sets exist aplenty (for degree-$r$ ${\overline{{\text{size}}}}$-$s$ polynomials). [*[@heintz1980testing Thm.4.4]*]{}\[lem-hs80\] There exists a hitting-set $\mathcal{H}\subset {\mathbb{F}}_q^n $ of size $O(s^2n^2)$ (assuming $q\geq\Omega(sr^2)$) that hits all nonzero degree-$r$ $n$-variate polynomials in ${\mathbb{A}}[\mathbf x]$ that can be infinitesimally approximated by size-$s$ algebraic circuits. Note that for the hitting-set design problem it suffices to focus only on homogeneous polynomials. They are known to be computable by homogeneous circuits, where each gate computes a homogeneous polynomial (see [@shpilka2010arithmetic]). [**Universal circuit.**]{} It can simulate any circuit of size-$s$ computing a degree-$r$ homogeneous polynomial in ${\mathbb{A}}({\varepsilon})[x_1,\ldots,x_n]$. We define the [*universal circuit*]{} $\Psi(\mathbf{y},\mathbf{x})$ as a circuit in $n$ essential variables $\mathbf{x}$ and $s':=O(sr^4)$ auxiliary variables $\mathbf y$. The variables $\mathbf y$ are the ones that one can specialize in ${\mathbb{A}}({\varepsilon})$, to compute a specific polynomial in ${\mathbb{A}}({\varepsilon})[x_1,\ldots,x_n]$. 
Every specialization gives a homogeneous degree-$r$ ${\overline{{\text{size}}}}$-$s'$ polynomial. Moreover, the set of these polynomials is closed under constant multiples (see [@forbes2017pspace Thm.2.2]). Note that by [@heintz1980testing] there is a hitting-set, with $m:=O(s'^2n^2)$ points in ${\mathbb{F}}_q^n$ ($\because q\ge\Omega(s'r^2)$), for the set of polynomials ${\mathcal{P}}$ approximated by the specializations of $\Psi(\mathbf{y},\mathbf{x})$. A universal circuit construction can be found in [@raz2008elusive; @shpilka2010arithmetic]. Using the above notation, we give a criterion to decide whether a candidate set is a hitting-set. \[thm\_HITTING\] Set ${\mathcal{H}}=:\{\mathbf v_1,\ldots, \mathbf v_m\}$ $\subset {\mathbb{F}}_q^n$ is [*not*]{} a hitting-set for the family of polynomials ${\mathcal{P}}$ iff there is a satisfying assignment $(\alpha, \beta)\,\in {\mathbb{A}}({\varepsilon})^{s'}\times {\mathbb{A}}({\varepsilon})^{n}$ such that: \(1) $\forall i \in [n],\, {\beta_i}^{r+1}- 1 \,\in {\varepsilon}{\mathbb{A}}[{\varepsilon}]$, and \(2) $\Psi(\alpha, \beta) - 1 \,\in {\varepsilon}{\mathbb{A}}[{\varepsilon}]$, and \(3) $\forall i \in [m],\, \Psi(\alpha,\mathbf{v}_i) \,\in {\varepsilon}{\mathbb{A}}[{\varepsilon}]$. [**Remark–**]{} The above criterion holds for algebraically closed fields ${\mathbb{A}}$ of [*any*]{} characteristic. Thus, it reduces those hitting-set design problems to APS as well. First we show that, for $x\in {\mathbb{A}}({\varepsilon})$, the condition ${x}^{r+1}- 1 \,\in {\varepsilon}{\mathbb{A}}[{\varepsilon}]$ implies $x\in {\mathbb{A}}[[{\varepsilon}]]\cap {\mathbb{A}}({\varepsilon})$ (= rational functions defined at ${\varepsilon}=0$). Recall the formal power series ring ${\mathbb{A}}[[{\varepsilon}]]$ and its group of units ${\mathbb{A}}[[{\varepsilon}]]^*$.
Note that for any polynomial $a=\big(\sum_{i_0\le i\le d} a_i{\varepsilon}^i\big)$ with $a_{i_0}\ne0$, the inverse $a^{-1} = {\varepsilon}^{-i_0}\cdot \big(\sum_{i_0\le i\le d} a_i{\varepsilon}^{i-i_0}\big)^{-1}$ is in ${\varepsilon}^{-i_0}\cdot {\mathbb{A}}[[{\varepsilon}]]^*$. This is just a consequence of the identity $(1-{\varepsilon})^{-1} = \sum_{i\ge0} {\varepsilon}^i$. In other words, any nonzero rational function $a\in {\mathbb{A}}({\varepsilon})$ can be written as an element in ${\varepsilon}^{-i}{\mathbb{A}}[[{\varepsilon}]]^*$, for some $i\in\mathbb{Z}$. Thus, write $x$ as ${\varepsilon}^{-i}\cdot(b_0+b_1{\varepsilon}+\cdots)$ for some $i\in\mathbb{Z}$ and $b_0\in{\mathbb{A}}^*$. This gives $${x}^{r+1}- 1 \,=\, {\varepsilon}^{-i(r+1)}(b_0+b_1{\varepsilon}+ b_2{\varepsilon}^{2} +\cdots)^{r+1} \;-\; 1 \,.$$ For this to be in ${\varepsilon}{\mathbb{A}}[{\varepsilon}]$, clearly $i$ has to be $0$ (if $i>0$ then ${\varepsilon}^{-i(r+1)}$ remains uncancelled, while if $i<0$ then ${x}^{r+1}-1 \equiv -1 \bmod {\varepsilon}{\mathbb{A}}[[{\varepsilon}]]$); implying that $x\in {\mathbb{A}}[[{\varepsilon}]]$. Moreover, we deduce that $b_0^{r+1} - 1 = 0$. Thus, condition (1) implies that $b_0$ is one of the $(r+1)$-th roots of unity $Z_{r+1}\subset{\mathbb{A}}$ (recall that, since $p\nmid(r+1)$, $|Z_{r+1}|=r+1$). Thus, $x\in \, Z_{r+1} + {\varepsilon}{\mathbb{A}}[[{\varepsilon}]]$. Suppose $\mathcal{H}$ is not a hitting-set for ${\mathcal{P}}$. Then, there is a specialization $\alpha \in {\mathbb{A}}({\varepsilon})^{s'}$ of the universal circuit such that $\Psi(\alpha, \mathbf x)$ computes a polynomial in ${\mathbb{A}}[{\varepsilon}][\mathbf x]\setminus {\varepsilon}{\mathbb{A}}[{\varepsilon}][\mathbf x]$, but still ‘fools’ ${\mathcal{H}}$, i.e.: $\forall i \in [m],\, \Psi(\alpha,\mathbf{v}_i) \,\in {\varepsilon}{\mathbb{A}}[{\varepsilon}]$. What remains to show is that conditions (1) and (2) can be satisfied too. Consider the polynomial $g(\mathbf x):= \Psi(\alpha, \mathbf x)\big\vert_{{\varepsilon}=0}$.
It is a nonzero polynomial in ${\mathbb{A}}[\mathbf x]$ of degree $r$ that ‘fools’ ${\mathcal{H}}$. By [@Sch80], there is a $\beta\in Z_{r+1}^n$ such that $a:= g(\beta)$ is in ${\mathbb{A}}^*$. Clearly, $\beta_i^{r+1} - 1 = 0$, for all $i$. Consider $\psi':= a^{-1}\cdot\Psi(\alpha, \mathbf x)$. Note that $\psi'(\beta)-1 \in {\varepsilon}{\mathbb{A}}[{\varepsilon}]$, and $\psi'(\mathbf{v}_i) \,\in {\varepsilon}{\mathbb{A}}[{\varepsilon}]$ for all $i$. Moreover, the normalized polynomial $\psi'(\mathbf x)$ can easily be obtained from the universal circuit $\Psi$ by changing one of the coordinates of $\alpha$ (eg. the incoming wires of the root of the circuit). This means that the three conditions (1)-(3) can be simultaneously satisfied by (some) $(\alpha',\beta)\in\, {\mathbb{A}}({\varepsilon})^{s'}\times Z_{r+1}^n$. Suppose the satisfying assignment is $(\alpha, \beta')\,\in {\mathbb{A}}({\varepsilon})^{s'}\times {\mathbb{A}}({\varepsilon})^{n}$. As shown before, condition (1) implies: $\beta'_i\in \, Z_{r+1} + {\varepsilon}{\mathbb{A}}[[{\varepsilon}]]$ for all $i\in[n]$. Let us define $\beta_i:= \beta'_i\big\vert_{{\varepsilon}=0}$, for all $i\in[n]$; they are in $Z_{r+1}\subset{\mathbb{A}}$. By Condition (3): $\forall i \in [m],\, \Psi(\alpha,\mathbf{v}_i) \,\in {\varepsilon}{\mathbb{A}}[{\varepsilon}]$. Previous calculations show that $\Psi(\alpha,\mathbf x)$ is in ${\varepsilon}^{-j}{\mathbb{A}}[[{\varepsilon}]][\mathbf x]$, for some $j\ge0$. Expand the polynomial $\Psi(\alpha,\mathbf x)$, wrt ${\varepsilon}$, as: $$g_{-j}(\mathbf x){\varepsilon}^{-j} + \dots + g_{-2}(\mathbf x){\varepsilon}^{-2} + g_{-1}(\mathbf x){\varepsilon}^{-1} + g_0(\mathbf x) + g_1(\mathbf x){\varepsilon}+ g_2(\mathbf x){\varepsilon}^2 + \dots \,.$$ Let us study Condition (2). If for each $0\le\ell\le j$, the polynomial $g_{-\ell}(\mathbf x)$ is zero, then $\Psi(\alpha,\beta')\big\vert_{{\varepsilon}=0} =0$, contradicting the condition.
Thus, we can pick the largest $0\le\ell\le j$ such that the polynomial $g_{-\ell}(\mathbf x)\ne0$. Note that the normalized circuit ${\varepsilon}^\ell\cdot \Psi(\alpha,\mathbf x)$ equals $g_{-\ell}$ at ${\varepsilon}=0$. This means that $g_{-\ell}\in{\mathcal{P}}$, and it is a nonzero polynomial fooling ${\mathcal{H}}$. Thus, ${\mathcal{H}}$ cannot be a hitting-set for ${\mathcal{P}}$ and we are done. Given a prime $p$ and parameters $n, r, s$ in unary (wlog $p\nmid(r+1)$), fix a field ${\mathbb{F}}_q$ with $q\ge\Omega(sr^6)$. Fix the universal circuit $\Psi(\mathbf{y},\mathbf{x})$ with $n$ essential variables $\mathbf{x}$ and $s':=\Omega(sr^4)$ auxiliary variables $\mathbf y$. Fix $m:=\Omega(s'^2n^2)$. For every subset ${\mathcal{H}}=:\{\mathbf v_1,\ldots, \mathbf v_m\}\subset {\mathbb{F}}_q^n$, solve the APS instance described by Conditions (1)-(3) in Theorem \[thm\_HITTING\]. These are $(n+m+1)$ algebraic circuits of degree poly($srn, \log p$) and a similar bitsize. Using the algorithm from Theorem \[thm-aps\], the instance can be solved in poly($srn, \log p$)-space. The number of subsets ${\mathcal{H}}$ is $q^{nm}$. So, in poly($nm\log q$)-space we can go over all of them. If APS fails on one of them (say ${\mathcal{H}}$), then we know that ${\mathcal{H}}$ is a hitting-set for ${\mathcal{P}}$. Since $\Psi$ is universal for homogeneous degree-$r$ ${\overline{{\text{size}}}}$-$s$ polynomials in ${\mathbb{A}}[\mathbf x]$, we output ${\mathcal{H}}$ as the desired hitting-set.

Conclusion {#sec-conclusion}
==========

Our result on algebraic dependence testing in AM $\cap$ coAM gives further indication that a randomized polynomial time algorithm for the problem exists. Studying the following special case might be helpful in designing better algorithms: given quadratic polynomials $f_1,\ldots,f_n \in {\mathbb{F}}_2[x_1,\ldots,x_n]$, test in randomized polynomial time whether they are algebraically dependent [@pandey2016algebraic].
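The exhaustive search described above can be mimicked on a toy class where brute force is feasible: take ${\mathcal{P}}$ to be the nonzero univariate polynomials of degree at most $r$ over a small prime field, and replace the APS oracle by direct evaluation. This is an illustrative sketch of ours, not the paper's PSPACE procedure; any $r+1$ distinct points form a hitting-set here because a nonzero degree-$r$ polynomial has at most $r$ roots.

```python
from itertools import combinations, product

q, r = 5, 2  # field F_5; class: nonzero univariate polynomials of degree <= r
polys = [c for c in product(range(q), repeat=r + 1) if any(c)]

def evaluate(coeffs, a):
    # Value of sum_i coeffs[i] * a**i over F_q.
    return sum(c * pow(a, i, q) for i, c in enumerate(coeffs)) % q

def is_hitting_set(H):
    # H hits the class iff no nonzero polynomial vanishes on all of H.
    return all(any(evaluate(c, a) for a in H) for c in polys)

# Every (r+1)-subset of F_q is a hitting-set for the class...
assert all(is_hitting_set(H) for H in combinations(range(q), r + 1))
# ...while some r-subsets are fooled, e.g. by x*(x-1) = x**2 + 4*x over F_5.
assert not is_hitting_set((0, 1))
```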
As indicated in this paper, approximate polynomials satisfiability, or equivalently testing zero-membership in the Zariski closure of the image, may have further applications to problems in computational algebraic geometry and algebraic complexity. We know that HN is in AM over characteristic zero fields, assuming GRH [@koiran1996hilbert]. Can we solve AnnAtZero (or APS) in AM over characteristic zero fields, assuming GRH [@Kay09]? This would also imply a better hitting-set construction for ${\overline{\rm VP}}$. [**Acknowledgements.**]{} We thank Anurag Pandey and Sumanta Ghosh for insightful discussions on the approximate polynomials satisfiability and the hitting-set construction problems. N.S. thanks DST for the funding support (DST/SJF/MSA-01/2013-14). Z.G. is funded by DST and the Research I Foundation of CSE, IITK.

From Section \[sec-prelim\]: Algebraic-Geometry {#app-AG}
================================================

Let ${\mathbb{A}}:={\overline{{\mathbb{F}}}}$ be the algebraic closure of a field ${\mathbb{F}}$. For $d\in{\mathbb{N}}^+$, write ${\mathbb{A}}^d$ for the [*$d$-dimensional affine space*]{} over ${\mathbb{A}}$. It is defined to be the set ${\mathbb{A}}^d$, equipped with the [*Zariski topology*]{}, defined as follows: a subset $S$ of ${\mathbb{A}}^d$ is [*closed*]{} iff it is the set of common zeros of some subset of polynomials in ${\mathbb{A}}[X_1,\dots,X_d]$. For other subsets $S$ it makes sense to consider the [*closure*]{} ${\overline{S}}$, the smallest closed set containing $S$. A set $S$ is [*dense*]{} if ${\overline{S}}={\mathbb{A}}^d$. Complements of closed sets are called [*open*]{}. A closed set is called a [*hypersurface*]{} (resp. [*hyperplane*]{}) if it is definable by a single polynomial (resp. single linear polynomial). Define ${\mathbb{A}}^\times:= {\mathbb{A}}\setminus\{0\}$.
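To make the closure operation concrete: the Zariski closure of the image of a polynomial map can be computed by Gröbner-basis elimination. The following sympy sketch (our illustration, not part of the text) recovers a defining equation for the closure of the image of $t\mapsto(t^2,t^3)$, the cuspidal cubic.

```python
from sympy import symbols, groebner

t, x, y = symbols('t x y')
# Eliminate the parameter t from the ideal (x - t**2, y - t**3) using a
# lex order with t largest; the t-free basis elements cut out the closure.
G = groebner([x - t**2, y - t**3], t, x, y, order='lex')
closure_eqs = [g for g in G.exprs if t not in g.free_symbols]
assert closure_eqs != []
# Each defining equation vanishes identically along the parametrization.
assert all(g.subs({x: t**2, y: t**3}).expand() == 0 for g in closure_eqs)
```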
Write ${\mathbb{P}}^d$ for the $d$-dimensional projective space over ${\mathbb{A}}$, defined to be the quotient set $({\mathbb{A}}^{d+1}\setminus\{(0,\dots,0)\})/\sim$, where $(x_0,\dots,x_d)\sim (y_0,\dots,y_d)$ iff there exists $c\in{\mathbb{A}}^\times$ such that $y_i=c x_i$ for $0\leq i\leq d$. The set ${\mathbb{P}}^d$ is again equipped with the [*Zariski topology*]{}, where a subset is closed iff it is the set of common zeros of some subset of [*homogeneous*]{} polynomials in ${\mathbb{A}}[X_0,\dots,X_d]$. We use $(d+1)$-tuples $(x_0,\dots,x_d)$ to represent points in ${\mathbb{P}}^d$. Closed subsets of ${\mathbb{A}}^d$ or ${\mathbb{P}}^d$ are also called [*algebraic sets*]{} or [*zerosets*]{}. An algebraic set is [*irreducible*]{} if it cannot be written as the union of finitely many proper algebraic sets. An irreducible algebraic subset of an affine (resp. projective) space is also called an [*affine variety*]{} (resp. [*projective variety*]{}). (In some references, varieties are not required to be irreducible, but in this work we always assume it.) An algebraic set $V$ can be uniquely represented as the union of finitely many varieties, and these varieties are called the [*irreducible components*]{} of $V$. Affine zerosets (resp. varieties) are in 1-1 correspondence with [*radical*]{} (resp. [*prime*]{}) ideals. The irreducible decomposition of an affine zeroset mirrors the decomposition of the corresponding radical ideal into prime ideals. Finally, note that the affine points are in 1-1 correspondence with [*maximal*]{} ideals; this is a simple reformulation of Hilbert’s Nullstellensatz. The affine space ${\mathbb{A}}^d$ may be regarded as a subset of ${\mathbb{P}}^d$ via the map $(x_1,\dots,x_d)\mapsto (1,x_1,\dots,x_d)$. Then the subspace topology of ${\mathbb{A}}^d$ induced from the Zariski topology of ${\mathbb{P}}^d$ is just the Zariski topology of ${\mathbb{A}}^d$.
The set ${\mathbb{P}}^d\setminus {\mathbb{A}}^d$ is the projective subspace of ${\mathbb{P}}^d$ defined by $X_0=0$, called the [*hyperplane at infinity*]{}. For an algebraic subset $V$ of ${\mathbb{A}}^d\subseteq{\mathbb{P}}^d$, the smallest algebraic subset $V'$ of ${\mathbb{P}}^d$ containing $V$ (i.e. the intersection of all algebraic subsets containing $V$) is the [*projective closure*]{} of $V$, and we have $V'\cap {\mathbb{A}}^d=V$. To see this, note that for $P=(x_1,\dots,x_d)\in {\mathbb{A}}^d\setminus V$, there exists a polynomial $Q\in{\mathbb{A}}[X_1,\dots,X_d]$ of degree $D\in{\mathbb{N}}$ not vanishing on $P$ (but vanishing on $V$). Then its homogenization $Q'\in{\mathbb{A}}[X_0,\dots,X_d]$, defined by replacing each monomial $M=\prod_{i=1}^d X_i^{d_i}$ by $X_0^{D-\deg(M)}\prod_{i=1}^d X_i^{d_i}$, does not vanish on $(1,x_1,\dots,x_d)$. So, $(1,x_1,\dots,x_d)\notin V'$. For distinct points $P=(x_0,\dots,x_d), Q=(y_0,\dots,y_d)\in{\mathbb{P}}^d$, write $\overline{PQ}$ for the [*projective line*]{} passing through them, i.e., $\overline{PQ}$ consists of the points $(ux_0+vy_0,\dots, ux_d+vy_d)$, where $(u,v)\in{\mathbb{A}}^2\setminus\{(0,0)\}$. The [*dimension*]{} of a variety $V$ is defined to be the largest integer $m$ such that there exists a chain of varieties $\emptyset\subsetneq V_0\subsetneq V_1\subsetneq\cdots\subsetneq V_m=V$. More generally, the dimension of an algebraic set $V$, denoted by ${\text{dim~}}V$, is the maximal dimension of its irreducible components. E.g., we have ${\text{dim~}}{\mathbb{A}}^d={\text{dim~}}{\mathbb{P}}^d=d$. The dimension of the empty set is $-1$ by convention. One-dimensional varieties are called [*curves*]{}. The [*degree*]{} of a variety $V$ in ${\mathbb{A}}^d$ (resp. ${\mathbb{P}}^d$) is the number of intersections of $V$ with a general affine subspace (resp. projective subspace) of dimension $d-{\text{dim~}}V$.
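The homogenization $Q\mapsto Q'$ used in the projective-closure argument above is a purely mechanical operation; a small sympy sketch of ours:

```python
from sympy import Poly, symbols

X0, X1, X2 = symbols('X0 X1 X2')
Q = Poly(X1**2 * X2 + X1 - 3, X1, X2)  # total degree D = 3

# Pad each monomial M by X0**(D - deg M) to make the polynomial homogeneous.
Qh = Q.homogenize(X0)
assert Qh.is_homogeneous
# Setting X0 = 1 (the affine chart) recovers the original polynomial.
assert Qh.as_expr().subs(X0, 1) == Q.as_expr()
```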
More generally, the degree of an algebraic set $V$, denoted by $\deg(V)$, is the sum of the degrees of its irreducible components. The degree of an algebraic subset of ${\mathbb{A}}^d$ coincides with the degree of its projective closure in ${\mathbb{P}}^d$. Suppose $V\subseteq{\mathbb{A}}^d$ is an algebraic set, defined by polynomials $f_1,\dots,f_k$. Let $(a_1,\dots,a_d)\in{\mathbb{A}}^d$. Then the set $\{(x_1+a_1,\dots,x_d+a_d): (x_1,\dots,x_d)\in V\}$ is called a [*translate*]{} of $V$. It is also an algebraic set, defined by $f_i(X_1-a_1,\dots,X_d-a_d)$, $i=1,\dots,k$. Let $V\subseteq {\mathbb{A}}^n$, $W\subseteq{\mathbb{A}}^m$ be affine varieties. A [*morphism*]{} from $V$ to $W$ is a function $f:V\to W$ that is a restriction of a polynomial map ${\mathbb{A}}^n\to{\mathbb{A}}^m$. A morphism $f: V\to W$ is called [*dominant*]{} if $\overline{{\mathrm{Im}}(f)}=W$. The preimage of a closed subset under a morphism is closed (i.e. morphisms are [*continuous*]{} in the Zariski topology). For a polynomial map $f:{\mathbb{A}}^n\to{\mathbb{A}}^m$ and an affine variety $V\subseteq {\mathbb{A}}^n$, $W:=\overline{f(V)}$ is also an affine variety (i.e., it is irreducible). To see this, assume to the contrary that $W$ is the union of two proper closed subsets $W_1$ and $W_2$. By the definition of closure, $f(V)$ is not contained in either $W_1$ or $W_2$, i.e., it intersects both. Then $f^{-1}(W_1)\cap V$ and $f^{-1}(W_2)\cap V$ are two proper closed subsets of $V$, and their union is $V$. This contradicts the irreducibility of $V$. The [*graph*]{} $\Gamma_f$ of a morphism $f$ is the set $\{(x,f(x)): x\in V\}\subseteq V\times W\subseteq {\mathbb{A}}^n\times {\mathbb{A}}^m$. Here $V\times W=\{(x,y): x\in V,y\in W\}$ denotes the [*product*]{} of $V$ and $W$, which is a subvariety of the $(n+m)$-dimensional affine space ${\mathbb{A}}^n\times {\mathbb{A}}^m\cong {\mathbb{A}}^{n+m}$. 
Note that the graph $\Gamma_f$ is closed in ${\mathbb{A}}^n\times {\mathbb{A}}^m$: Suppose $f$ sends $x\in V$ to $(f_1(x),\dots,f_m(x))\in{\mathbb{A}}^m$, where $f_i\in{\mathbb{A}}[X_1,\dots,X_n]$ for $i\in [m]$, and suppose $V$ and $W$ are defined by ideals $I\subseteq {\mathbb{A}}[X_1,\dots,X_n]$ and $I'\subseteq {\mathbb{A}}[Y_1,\dots,Y_m]$ respectively. Then $\Gamma_f$ is defined by $I$, $I'$, and the polynomials $Y_i-f_i(X_1,\dots,X_n)\in {\mathbb{A}}[X_1,\dots,X_n,Y_1,\dots,Y_m]$, $i=1,\dots,m$.

From Section \[sec:aps\]
========================

\[exmp\_reduction\] Let $m=4$, $(f_1,f_2,f_3,f_4)=(X_1,X_2,X_1 X_2-1, X_1+X_2)$. Then $k:=\text{trdeg}\,\mathbf f= 2$. Let $(g_1,g_2,g_3)=(f_1,f_3,f_1+f_2-f_4)=(X_1,X_1 X_2-1,0)$. Suppose ${\mathbb{A}}^m$ has coordinates $Y_1,\dots,Y_4$ and ${\mathbb{A}}^{k+1}$ has coordinates $Z_1,\dots,Z_3$. Then $V\subseteq{\mathbb{A}}^m$ is defined by $Y_1Y_2-Y_3-1=0$ and $Y_1+Y_2-Y_4=0$, and $W$ is defined by $Y_1=0$, $Y_3=0$, and $Y_2-Y_4=0$. So $V\cap W=\emptyset$. But $V'\subseteq{\mathbb{A}}^{k+1}$ is the plane $Z_3=0$, which contains the origin.

\[exmp-projcrit\] Consider Example \[exmp\_reduction\] but choose $f_4$ to be $X_1+X_2+1$ instead of $X_1+X_2$. Now we have $g_3=1$, $V$ is defined by $Y_1Y_2-Y_3-1=0$ and $Y_1+Y_2-Y_4+1=0$, and $V'$ is the plane $Z_3=1$. So $O'\not\in V'$. On the other hand, suppose ${\mathbb{P}}^m$ has coordinates $Y_0,\dots,Y_4$. Then $V_c\cap H$ is defined by $Y_0=Y_1Y_2=Y_1+Y_2-Y_4=0$, and $W_H$ is defined by $Y_0=Y_1=Y_2-Y_4=Y_3=0$. So $(0,0,1,0,1)\in V_c\cap W_H\subseteq V_c\cap W_c$.

[^1]: Department of Computer Science & Engineering, Indian Institute of Technology Kanpur, `zguo@cse.iitk.ac.in`

[^2]: CSE, IIT Kanpur, `nitin@cse.iitk.ac.in`

[^3]: CSE, IIT Kanpur, `amitks@cse.iitk.ac.in`

[^4]: To see $V'\supseteq\overline{\pi(V)}$, note that $\pi^{-1}(V')$ contains ${\mathrm{Im}}(f)$ and is closed, and hence contains $V=\overline{{\mathrm{Im}}(f)}$.
[^5]: Indeed, $W_i\cap {\mathbb{A}}^m$ is defined by linear equations $\sum_{j=1}^m a_{j,t} X_j +a_{0,t}=0$ iff $W_i\cap H$ is defined by homogeneous linear equations $X_0=0$ and $\sum_{j=1}^m a_{j,t} X_j=0$. So the constant terms $a_{0,t}$ do not matter.
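The identities behind Example \[exmp\_reduction\] can be checked symbolically; in characteristic zero, the transcendence degree equals the rank of the Jacobian matrix (Jacobian criterion). A sympy sketch of ours:

```python
from sympy import Matrix, symbols

X1, X2 = symbols('X1 X2')
f1, f2, f3, f4 = X1, X2, X1 * X2 - 1, X1 + X2

# The two algebraic relations among f1..f4 that define V:
assert (f1 * f2 - f3 - 1).expand() == 0   # Y1*Y2 - Y3 - 1 = 0
assert (f1 + f2 - f4).expand() == 0       # Y1 + Y2 - Y4 = 0

# Jacobian criterion: trdeg(f1,...,f4) = rank of the Jacobian in X1, X2.
J = Matrix([[f.diff(v) for v in (X1, X2)] for f in (f1, f2, f3, f4)])
assert J.rank() == 2
```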
--- abstract: | In this paper, we study the following fractional nonlinear Schrödinger system $$\left\{\begin{array}{ll} (-\Delta)^s u +u=\mu_1 |u|^{2p-2}u+\beta |v|^p|u|^{p-2}u,~~x\in {\mathbb{R}}^N,\vspace{2mm}\\ (-\Delta)^s v +v=\mu_2 |v|^{2p-2}v+\beta |u|^p|v|^{p-2}v,~~x\in {\mathbb{R}}^N, \end{array}\right.$$ where $0<s<1, \mu_1 >0, \mu_2>0, 1<p<2_s^*/2, 2_s^*=+\infty$ for $N\le 2s$ and $2_s^*=2N/(N-2s)$ for $N>2s$, and $\beta \in {\mathbb{R}}$ is a coupling constant. We investigate the existence and non-degeneracy of proportional positive vector solutions for the above system in some ranges of $\mu_1,\mu_2, p, \beta$. We also prove that the least energy vector solutions must be proportional and unique under some additional assumptions. address: - 'Department of Mathematics and Information Science, Guangxi University, Nanning, 530003, P. R. China ' - 'School of Mathematics and Statistics, Central China Normal University, Wuhan, 430079, P. R. China' - 'School of Mathematics, Guizhou Normal University, Guiyang, 550001, P. R. China' author: - 'QiHan He, Shuangjie Peng and Yan-Fang Peng' title: ' Existence, non-degeneracy of proportional positive solutions and least energy solutions for a fractional elliptic system' --- Introduction ============ In this paper, we consider the following fractional Schrödinger system $$\label{1.1} \left\{\begin{array}{lll} (-\Delta)^s u +u=\mu_1 |u|^{2p-2}u+\beta |v|^p|u|^{p-2}u,~~x\in {\mathbb{R}}^N,\vspace{2mm}\\ (-\Delta)^s v +v=\mu_2 |v|^{2p-2}v+\beta |u|^p|v|^{p-2}v,~~x\in {\mathbb{R}}^N,\vspace{2mm}\\ u,v\in H^s({\mathbb{R}}^N), \end{array}\right.$$ where $0<s<1, \mu_1 >0, \mu_2>0, 1<p<2_s^*/2,$ $2_s^*=+\infty$ for $ N \leq 2s$ and $2_s^*=2N/(N-2s)$ for $ N>2s$, $\beta \in {\mathbb{R}}$ and $$H^s({\mathbb{R}}^N):=\Big\{u\in L^2({\mathbb{R}}^N):{\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{u}|^2~d\xi<+\infty\Big\},$$ where $\hat{}$ denotes the Fourier transform, i.e. 
$\hat{\phi}(\xi)=\frac{1}{(2\pi)^{N/2}}\int_{{\mathbb{R}}^N}e^{-i\xi\cdot x}\phi(x)\,dx$. The operator $(-\Delta)^s$ acting on a function $\phi\in S$ is defined by $$\widehat{(-\Delta)^s\phi}(\xi) = |\xi|^{2s}\hat{\phi}(\xi).$$ Here $S$ denotes the Schwartz space of rapidly decreasing $C^\infty$ functions in ${\mathbb{R}}^N$. In fact, since $S$ is dense in $L^2({\mathbb{R}}^N)$, $(-\Delta)^s$ can act on $H^s({\mathbb{R}}^N).$ If $\phi$ is smooth enough, it can be expressed by the following formula $$(-\Delta)^s\phi(x)=C_{N,s} P.V.\int_{{\mathbb{R}}^N}\frac{\phi(x)-\phi(y)}{|x-y|^{N+2s}}dy={\displaystyle}C_{N,s} \lim_{\varepsilon\rightarrow 0}\int_{{\mathbb{R}}^N\setminus B_\varepsilon(x)}\frac{\phi(x)-\phi(y)}{|x-y|^{N+2s}}dy,$$ where $P.V.$ stands for the principal value and $C_{N,s}$ is a normalization constant. This type of fractional Schrödinger system is of particular interest in fractional quantum mechanics for the study of particles on stochastic fields modelled by Lévy processes. A path integral over Lévy flight paths and a fractional Schrödinger equation of fractional quantum mechanics were formulated by Laskin [@l1], following the idea of Feynman and Hibbs’s path integrals (see also [@l2]). The system can be regarded as a counterpart of the following fractional equation $$\label{1.3} (-\Delta)^su+u=|u|^{2p-2}u,~~~x\in {\mathbb{R}}^N.$$ When $s=1$, this reduces to the classical equation $$\label{1.4} -\Delta u+u=|u|^{2p-2}u,$$ where $1<p<2_1^*/2$. In [@c1], Coffman showed the uniqueness of ground state solutions of the following equation $$-\Delta u+u-u^3=0,\,x\in {\mathbb{R}}^3.$$ For the general case, the uniqueness of positive radial solutions of $$\Delta u+f(u)=0,\,x\in {\mathbb{R}}^N,$$ was obtained by Maris in [@m] when $N>1$ and $f(u)$ satisfies certain assumptions.
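The Fourier-multiplier definition of $(-\Delta)^s$ can be illustrated numerically: on a $2\pi$-periodic grid the operator is diagonal in the discrete Fourier basis, scaling the $k$-th mode by $|k|^{2s}$. This numpy sketch (a discrete analogue chosen by us, not the whole-space operator) checks the action on $\sin(3x)$ for $s=1/2$:

```python
import numpy as np

n, s = 256, 0.5
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n)   # integer wave numbers on the periodic grid

u = np.sin(3 * x)                  # combination of the modes e^{+3ix}, e^{-3ix}
frac_lap_u = np.fft.ifft(np.abs(k) ** (2 * s) * np.fft.fft(u)).real

# (-Delta)^s e^{ikx} = |k|^{2s} e^{ikx}, so here the factor is 3**(2s) = 3.
assert np.allclose(frac_lap_u, 3 ** (2 * s) * np.sin(3 * x))
```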
In a celebrated paper [@k], Kwong established the uniqueness and non-degeneracy of the ground states of the problem for $ N\geq1$, which provides an indispensable basis for the blow-up analysis as well as the stability of solitary waves for related time-dependent equations such as the nonlinear Schrödinger equation (see [@mr]). For the fractional case $0<s<1$, the uniqueness of ground state solutions for the nonlinear model $$(-\Delta)^\frac{1}{2} u+u-u^2=0,\,x\in {\mathbb{R}},$$ was proved by Amick and Toland [@at]. In [@fls], Frank, Lenzmann and Silvestre showed the uniqueness and non-degeneracy of ground state solutions $w$ for arbitrary space dimensions $N\geq 1$ and all admissible exponents $1<p<2^*_s/2$, where the non-degeneracy means that the kernel of the associated linearized operator in $H^s({\mathbb{R}}^N)$ $$L_+=(-\Delta)^s+I-(2p-1)w^{2p-2}$$ is exactly $\hbox{span}\{\frac{\partial w}{\partial x_i}:~i=1,2,\cdots,N\}$. This result generalizes the uniqueness and non-degeneracy result for dimension $N = 1$ obtained in [@fl], and in particular the uniqueness result in [@at]. Existence and symmetry results for the solution $w$ of this equation were also shown by Dipierro, Palatucci and Valdinoci in [@dpv] and by Felmer, Quaas and Tan in [@fqt]. Recently, for a critical semi-linear nonlocal equation involving the fractional Laplacian, Dávila, del Pino and Sire [@dds] proved the non-degeneracy of the manifold consisting of positive solutions. Since the important result of [@fls], attention has turned to generalized forms of the equation.
Based on minimization on the Nehari manifold, Secchi [@s] found solutions for the following class of fractional nonlinear Schrödinger equations $$(-\Delta )^su +V(x)u=|u|^{2p-2}u.$$ Felmer, Quaas and Tan [@fqt] studied the existence of positive solutions for the fractional nonlinear Schrödinger equation $$(-\Delta)^su+u=f(x,u) ~\hbox{in}~{\mathbb{R}}^N, u>0, \lim\limits_{|x|\rightarrow +\infty}u(x)=0,$$ and analyzed regularity, decay and symmetry properties of these solutions. In [@c], Chang obtained the existence of ground state solutions for the following fractional Schrödinger equation $$(-\Delta)^su + V(x)u = f(x,u), \,x\in {\mathbb{R}}^N,$$ by means of variational methods, where $f(x, u)$ is asymptotically linear in $u$ at infinity. For more results concerning fractional equations and related problems, we refer to [@bl; @bs; @k; @mr; @w] and the references therein. We emphasize that although there is a wide literature on existence, uniqueness and non-degeneracy for single fractional equations, to our knowledge there are few papers dealing with fractional systems, with the exception of [@dp], where Dipierro and Pinamonti studied the symmetry properties of solutions of the elliptic system $$\label{1.6} \left\{\begin{array}{ll} (-\Delta)^{s_1} u =F_1(u,v),~~x\in {\mathbb{R}}^N,\vspace{2mm}\\ (-\Delta)^{s_2} v =F_2 (u,v),~~x\in {\mathbb{R}}^N. \end{array}\right.$$ Here $F_1,F_2 \in C^{1,1}_{loc}({\mathbb{R}}^2), s_1, s_2 \in (0,1)$. As for the case $s_1=s_2=1$, many results are available (see [@a; @bdw; @clw; @lw2; @pw; @p] and the references therein). In the present paper, we will focus on the existence and non-degeneracy of proportional vector solutions of the fractional system, and will investigate the form and the uniqueness of its least energy vector solutions.
More precisely, our first goal is to prove an existence and non-degeneracy result for proportional positive solutions of the system, where non-degeneracy of a solution $(U,~V)$ means that the kernel of the linearized operator at $(U,~V)$ is given by span$\{(\theta(\beta)\frac{\partial w} {\partial x_j},\frac{\partial w} {\partial x_j})~|~j=1,2, \cdots, N\}$ with $\theta(\beta) \neq 0$. Non-degeneracy is very important because it enables one to construct solutions for many problems; see [@cwy; @den; @dy; @pw; @wy] for example. Our second goal is to show that the least energy solutions must be proportional and unique. A similar result for the case $s=1$ has been proved by Chen and Zou [@cz]. Before we state our main results, we introduce some notation. We call $(u,v)$ a least energy solution if $u \not\equiv 0$, $v\not\equiv 0$ solve the system and $(u,v)$ attains the smallest value of the corresponding functional among all such solutions. Throughout this paper, we denote by $w$ the solution, found by Frank, Lenzmann and Silvestre in [@fls], of the equation. Without loss of generality, we assume that $\mu_1>\mu_2$. Set $$\label{S} S:=\inf\limits_{u\in H^s({\mathbb{R}}^N)\setminus\{0\}}\Big\{{\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{u}|^2:\,{\displaystyle}\int_{{\mathbb{R}}^N}|u|^{2p}=1\Big\},$$ $$\label{Smu1mu2} S_{\mu_1,\mu_2}:=\inf\limits_{(u, v)\in H^s({\mathbb{R}}^N)^2\setminus\{0,0\}}\frac{{\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})(|\hat{u}|^2+|\hat{v}|^2)} {\Big({\displaystyle}\int_{{\mathbb{R}}^N}(\mu_1|u|^{2p}+2\beta|u|^p|v|^p+\mu_2|v|^{2p})\Big)^\frac{1}{p}}.$$ We define the following functions, which are important in the analysis of the uniqueness and non-degeneracy of the least energy solutions.
$$\label{ftau} f(\tau):=\frac{1+\tau^2}{(\mu_1+2\beta\tau^p+\mu_2\tau^{2p})^\frac{1}{p}},$$ $$H_1(t)=\frac{1}{p-1}t^2-\frac{p-2}{p}(p-1) ^{-\frac{p}{p-2}}t^\frac{2(p-1)}{p-2}.$$ We also write $$\label{1.7} D=\left\{\begin{array}{ll} (0,1),\quad\quad~ \hbox{if}~1<p<2 ~\hbox{and}~(p-1)\mu_1^\frac{p-2}{2(p-1)}\mu_2^\frac{p}{2(p-1)}>\beta>0,\vspace{2mm}\\ (1,+\infty),\quad\quad~ \hbox{if}~p>2~\hbox{and}~\beta>(p-1)\mu_1,\vspace{2mm}\\ (0,+\infty),\quad\quad~ \hbox{if}~\hbox{otherwise}, \end{array}\right.$$ $$\label{1.7'} \tilde{D}=\left\{\begin{array}{ll} (0,1),\quad\quad~ \hbox{if}~1<p<2 ~\hbox{and}~(p-1)\mu^\frac{p-2}{2(p-1)}>\tilde{\beta}>0,\vspace{2mm}\\ (1,+\infty),\quad\quad~ \hbox{if}~p>2~\hbox{and}~\tilde{\beta}>(p-1)\mu,\vspace{2mm}\\ (0,+\infty),\quad\quad~ \hbox{if}~\hbox{otherwise}, \end{array}\right.$$ and $$\label{1.8}\left\{\begin{array}{ll} (A_1)~ 2<p<\frac{2^*_s}{2},~ 0<\beta\leq(p-1)\mu_1,\vspace{2mm}\\ (A_2)~2<p<\frac{2^*_s}{2},~\mu_1 \geq \frac{\mu_2}{2}(\frac{p}{p-1})^{p-1},~\beta >(p-1)\mu_1,\vspace{2mm}\\ (A_3)~2<p<\frac{2^*_s}{2},~\mu_1<\frac{\mu_2}{2}(\frac{p}{p-1})^{p-1}, ~(p-1)\mu_1\leq\beta\leq\beta_0~ \hbox{or}~\beta\geq\max\{\beta_1,(p-1)\mu_1\},\vspace{2mm}\\ (A_4)~1<p<\min\{2,\frac{2^*_s}{2}\},~ \beta\geq (p-1)\mu_1^\frac{p-2}{2(p-1)}\mu_2^\frac{p}{2(p-1)},\vspace{2mm}\\ (A_5)1<p<\min\{2,\frac{2^*_s}{2}\},~0<\mu_1<\frac{p\mu_2}{2-p},\vspace{2mm}\\ \quad\quad0<\beta\leq \min \Big\{\frac{p\mu_2-\mu_1(2-p)}{2}, \,\, 2(p-1)(2-p)^\frac{2-p}{2(p-1)}(\frac{1}{p})^\frac{p}{2(p-1)}\mu_1^\frac{p}{2(p-1)}\mu_2^\frac{p-2}{2(p-1)}\Big\},\vspace{2mm}\\ (A_6)~1<p<\min\{2,\frac{2^*_s}{2}\},\max\Big\{\frac{p\mu_2-\mu_1(2-p)}{2}, ~0\Big\}<\beta<(p-1)\mu_1^\frac{p-2}{2(p-1)}\mu_2^\frac{p}{2(p-1)},\vspace{2mm}\\ (A_7) ~p=2, ~\beta \in (0, \mu_2)\cup(\mu_1, +\infty), \end{array}\right.$$ where $ 0<\beta_0<\beta_1$ solve $$\frac{2(p-1)\mu_1}{p\mu_2}- H_1\Big(\frac{\beta}{\mu_2}\Big)=0.$$ Our first result is on the existence and non-degeneracy of positive 
proportional vector solutions.

\[Th1\]  Suppose that $0<s<1, 1<p<2^*_s/2, \mu_1>\mu_2>0, \beta>0$ and one of the conditions $A_i$ ($i=1,2,\cdots,7$) holds. Then the system has a positive solution $(U,V):=(k_1w, \tau_0 k_1w) $ in $H^s({\mathbb{R}}^N) \times H^s({\mathbb{R}}^N)$ which is non-degenerate, where $\tau_0\in D$ satisfies $\mu_1+\beta\tau^p-\mu_2\tau^{2p-2}-\beta\tau^{p-2}=0$ and $k_1^{2p-2}=(\mu_1+\beta\tau_0^p)^{-1}$.

Indeed, we prove that any solution of the form $(k_1w, \tau_0 k_1w) $ in $H^s({\mathbb{R}}^N) \times H^s({\mathbb{R}}^N)$ is non-degenerate, where $\tau_0\in D$ satisfies $\mu_1+\beta\tau^p-\mu_2\tau^{2p-2}-\beta\tau^{p-2}=0$ and $k_1^{2p-2}=(\mu_1+\beta\tau_0^p)^{-1}$. When $p=2, \beta \in [\mu_2, \mu_1]$, the system has no positive solutions. In fact, suppose, to the contrary, that $(u,v)$ is a positive solution. Multiplying the first equation by $v$ and the second equation by $u$, then integrating on ${\mathbb{R}}^N$ and subtracting the resulting identities, we obtain $0=\int_{{\mathbb{R}}^N}uv[(\mu_1-\beta)u^2+(\beta-\mu_2)v^2]~dx\neq 0$, which is a contradiction. Our second result is for the case $\beta<0$.

\[Th1.4\] Suppose that $0<s<1, 1<p<2^*_s/2, \mu_1>\mu_2>0$. Then there exists a decreasing sequence $\{\beta_k\} \subset (-\sqrt{\mu_1\mu_2},0)$ such that for $\beta \in (-\sqrt{\mu_1\mu_2},0)\setminus\{\beta_k\}$, the system has a positive non-degenerate solution $(U,V):=(k_1w, \tau_0 k_1w)$ in $H^s({\mathbb{R}}^N) \times H^s({\mathbb{R}}^N)$, where $\tau_0\in D$ satisfies $\mu_1+\beta\tau^p-\mu_2\tau^{2p-2}-\beta\tau^{p-2}=0, k_1^{2p-2}=(\mu_1+\beta\tau_0^p)^{-1}$.

The last result describes the form and the uniqueness of the least energy solutions.

\[th1.6\] Assume that $(u_0, v_0)$ is a positive least energy solution, and one of the following conditions holds: $(1)~~\beta>\mu_1, p=2$, $(2)~~\beta>0, 1<p<2$. Then $(u_0, v_0)=(k_{min}w_{x_0}, k_{min}\tau_{min}w_{x_0})$. Moreover, the least energy solution is unique.
Here $x_0$ is some point in ${\mathbb{R}}^N$, $w_{x_0}(x):=w(x-x_0),$ $\tau_{min}>0$ is the minimum point of $f(\tau)$ in $[0, +\infty)$ satisfying $f(\tau_{min}):=\min\limits_{\tau \geq0}f(\tau)<f(0)=\mu_1^{-\frac{1}{p}}$ with $k_{min}^{2p-2}=(\mu_1+\beta \tau_{min}^p)^{-1}.$ Each non-zero local maximum or non-zero minimum point of $f(\tau)$ corresponds to a positive proportional vector solution of the system; see Remark \[re3.4\] later. The proof of Theorems \[Th1\] and \[Th1.4\] can be divided into two parts. In the first part, we establish the existence of a positive proportional vector solution of the form $(k_1w,\tau k_1w)$. In this case, we see $$\label{KC} (\mu_1+\beta\tau^p)k_1^{2p-2}=1=(\mu_2\tau^{2p-2}+\beta\tau^{p-2})k_1^{2p-2}$$ and $\tau$ satisfies $$\mu_1+\beta\tau^p-\mu_2\tau^{2p-2}-\beta\tau^{p-2}=0.$$ So we need to investigate the solvability of the following equation $$\label{1.10} g(\tau):=\mu_1+\beta\tau^p-\mu_2\tau^{2p-2}-\beta\tau^{p-2}=0,\quad \tau \in D.$$ In the second part, we prove that any positive proportional vector solution obtained in the first part is non-degenerate. Our method is inspired by [@bdw] and [@pw], where the special case $s=1,N=3$ and $p=2$ was studied. We will convert the study of the non-degeneracy of solutions of the system into that of a single equation by using linearization and spectral analysis. As Peng and Wang [@pw] did, here we have to prove that $\tilde{f}(\tilde{\beta}) \neq \lambda_k$ for all $k\in {\mathbb{N}}^+$, where $\tilde{f}(\tilde{\beta})$ will be defined below, and $\lambda_k~(k\in {\mathbb{N}}^+)$ are the eigenvalues of the weighted eigenvalue problem $(-\Delta)^s u +u= \lambda w^{2p-2}u$. However, compared to [@pw], we will encounter more difficulties. On one hand, since $p$ is more general in our system, we cannot write out explicit expressions for $\tau_0, k_1$, which makes our discussion more complicated.
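For concreteness, the scalar equation $g(\tau)=0$ and the normalization for $k_1$ can be checked numerically; the parameters below are sample values of ours (satisfying $\mu_1>\mu_2>0$, $\beta>0$), and bisection is only a sketch, not the paper's analysis:

```python
mu1, mu2, beta, p = 2.0, 1.0, 0.5, 1.5

def g(tau):
    return mu1 + beta * tau**p - mu2 * tau**(2 * p - 2) - beta * tau**(p - 2)

# g(tau) -> -infinity as tau -> 0+ and g is positive for large tau, so a
# sign change brackets a root in D = (0, +infinity); locate it by bisection.
lo, hi = 1e-6, 100.0
assert g(lo) < 0 < g(hi)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
tau0 = 0.5 * (lo + hi)

# k_1 from k_1**(2p-2) = (mu1 + beta*tau0**p)**(-1); since g(tau0) = 0, both
# coupling constants in the normalization then equal 1.
k1 = (mu1 + beta * tau0**p) ** (-1.0 / (2 * p - 2))
lhs = (mu1 + beta * tau0**p) * k1 ** (2 * p - 2)
rhs = (mu2 * tau0**(2 * p - 2) + beta * tau0**(p - 2)) * k1 ** (2 * p - 2)
assert abs(g(tau0)) < 1e-9 and abs(lhs - 1) < 1e-12 and abs(rhs - 1) < 1e-9
```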
On the other hand, in [@pw], Peng and Wang obtained the non-degeneracy by proving that the corresponding $\tilde{f}(\tilde{\beta})$ is monotone in $\tilde{\beta}$. In our case, however, we can only obtain this monotonicity when $\tilde{\beta}<0$; for the case $\tilde{\beta}>0$, we have to discuss $p$ in three cases: $1<p<2, p=2, 2<p<2^*_s/2$. In each case, we have to carry out some tedious preliminary analysis to get the non-degeneracy result in some ranges of $\mu_1,\mu_2, p, \tilde{\beta}$. To verify Theorem \[th1.6\], we first prove that the system has a least energy solution of the form $(kw, k\tau w)$. We observe that $(kw, k\tau w)$ is a least energy solution if and only if $S_{\mu_1,\mu_2}$ is attained by $(kw, k\tau w)$, which helps us reduce the problem to the minimization problem $\min_{\tau\ge 0}f(\tau)$. Then, we prove that any positive least energy solution must be proportional and that the minimum point $\tau_{min}$ of $f(\tau)$ must be unique. This paper is organized as follows. In Section 2, we prove Theorems \[Th1\] and \[Th1.4\]. The proof of Theorem \[th1.6\] is given in Section \[s3\]. The analysis of the minimum and maximum points of $f(\tau)$ is given in the Appendix.

Existence and non-degeneracy of proportional solutions
=======================================================

In this section, we prove Theorems \[Th1\] and \[Th1.4\]. To this end, we first consider the following system in $\tau$: $$\label{2.1} \left\{\begin{array}{ll} p\tau^{2p-2}-2\tilde{\beta}\tau^p=\mu(2-p),\vspace{2mm}\\ \tilde{g}(\tau):=\mu+\tilde{\beta}\tau^p-\tilde{\beta}\tau^{p-2}-\tau^{2p-2}=0.
\end{array}\right.$$

\[lm2.1\] Assume that $\mu>1, \tilde{\beta}>0.$ Then the system admits no solutions in $\tilde{D}$, provided one of the following conditions holds:\ $(B_1)~ p>2, ~0<\tilde{\beta}\leq(p-1)\mu;$\ $(B_2)~p>2,~\mu \geq \frac{1}{2}(\frac{p}{p-1})^{p-1},~\tilde{\beta }\geq (p-1)\mu;$\ $(B_3)~p>2,~1<\mu<\frac{1}{2}(\frac{p}{p-1})^{p-1}, (p-1)\mu\leq \tilde{\beta}\leq\tilde{\beta}_0, \hbox{or}~\max\Big\{\tilde{\beta}_1,(p-1)\mu\Big\}\leq \tilde{\beta};$\ $(B_4)~1<p<2, ~\tilde{\beta}\geq (p-1)\mu^\frac{p-2}{2(p-1)};$\ $(B_5)~1<p<2, ~1<\mu<\frac{p}{2-p},~0<\tilde{\beta}\leq \min\Big\{\frac{p-\mu(2-p)}{2}, ~2(p-1)(2-p)^\frac{2-p}{2(p-1)}(\frac{\mu}{p})^\frac{p}{2(p-1)}\Big\};~$\ $(B_6)~1<p<2,~\max\Big\{\frac{p-\mu(2-p)}{2},~0\Big\}<\tilde{\beta}<(p-1)\mu^\frac{p-2}{2(p-1)}$;\ $(B_7)~p=2, ~\tilde{\beta} >0,$\ where $\tilde{D}$ is defined above, and $0<\tilde{\beta}_0<\tilde{\beta}_1$ are the roots of $\frac{2(p-1)}{p}\mu-H_1(\tilde{\beta})=0$.

Set $$F(\tau):=p\tau^{2p-2}-2\tilde{\beta}\tau^p,$$ and $$G(\tau):=\tilde{g}(\tau)+\frac{1}{p}(F(\tau)-\mu(2-p)).$$ Then $$G(\tau)=\frac{2(p-1)}{p}\mu+\frac{\tilde{\beta}(p-2)}{p}\tau^p-\tilde{\beta}\tau^{p-2}.$$ We divide the proof into three cases: $p>2$, $1<p<2$ and $p=2.$ Case I: $\tilde{\beta}>0, p>2$. In this case, $G(\tau)$ attains its minimum at $ \tau^G_{min}=1$ and $G(1)=\frac{2}{p}[(p-1)\mu-\tilde{\beta}]$. If $0<\tilde{\beta}<(p-1)\mu$, then $G(\tau^G_{min})>0.$ So the system has no solution in $\tilde{D}$. If $\tilde{\beta}=(p-1)\mu$, then $G(1)=0~\hbox{but}~\tilde{g}(1)=\mu-1>0$. So the system has no solution in $\tilde{D}$. Finally, if $\tilde{\beta}>(p-1)\mu$, then $F(1)<\mu(2-p)$ since $\mu>1$ and $2<p$. So $F(\tau)=\mu(2-p)$ has two solutions $0<\tau_1<\tau_2$.
It follows from $F(\tau_i)=\mu(2-p)$ that $p\tau_i^{2p-2}<2\tilde{\beta}\tau_i^p$ and $\mu(p-2)<2\tilde{\beta}\tau_i^p$, which implies $$\Big[\frac{\mu(p-2)}{2\tilde{\beta}}\Big]^\frac{1}{p}<\tau_1<\tau_2 <\Big[\frac{2\tilde{\beta}}{p}\Big]^\frac{1}{p-2}.$$ But from $\tilde{\beta}>(p-1)\mu$, we know $\tau^F_{min}=(\frac{\tilde{\beta}}{p-1})^\frac{1}{p-2}>1$. Therefore, by the graph of $F(\tau)$, we see $$\label{2.51} \Big[\frac{\mu(p-2)}{2\tilde{\beta}}\Big]^\frac{1}{p}<\tau_1<1,\,\, 1<\Big(\frac{\tilde{\beta}}{p-1}\Big)^\frac{1}{p-2}<\tau_2 <\Big[\frac{2\tilde{\beta}}{p}\Big]^\frac{1}{p-2}.$$ Since we only consider the solutions in $(1,+\infty)$, we can complete our proof if we can prove $G(\tau_2)\neq0$. $G(\tau)$ increases strictly in $[1, +\infty)$; hence $$\begin{aligned} G(\tau_2) &>&G\Big((\frac{\tilde{\beta}}{p-1})^\frac{1}{p-2}\Big)=\frac{2(p-1)}{p}\mu-\Big[\frac{1}{p-1}\tilde{\beta}^2-\frac{p-2}{p}(p-1) ^{-\frac{p}{p-2}}\tilde{\beta}^\frac{2(p-1)}{p-2}\Big]\\ &=&\frac{2(p-1)}{p}\mu-H_1(\tilde{\beta}). \end{aligned}$$ Using the definition of $H_1(\tilde{\beta})$, we conclude that $H_1(\tilde{\beta})$ attains its maximum at $ \tilde{\beta}_{max}=p^\frac{p-2}{2}(p-1)^\frac{4-p}{2}$ and $~H_1(\tilde{\beta}_{max})=(\frac{p}{p-1})^{p-2}.$ Now we have two subcases: If $\mu \geq \frac{1}{2}(\frac{p}{p-1})^{p-1},$ then $\frac{2(p-1)}{p}\mu\geq H_1(\tilde{\beta}_{max}).$ That is, $G(\tau_2) >G((\frac{\tilde{\beta}}{p-1})^\frac{1}{p-2})\geq 0,$ which implies that $\tilde{g}(\tau_2) \neq 0$ and the system has no solution in $\tilde{D}$; if $1<\mu<\frac{1}{2}(\frac{p}{p-1})^{p-1},$ then $\frac{2(p-1)}{p}\mu< H_1(\tilde{\beta}_{max}).$ Hence there exist $\tilde{\beta}_0, \tilde{\beta}_1$ such that $\frac{2(p-1)}{p}\mu\geq H_1(\tilde{\beta})$ when $0<\tilde{\beta} \leq \tilde{\beta}_0$ or $\tilde{\beta} \geq \tilde{\beta}_1$. So $G(\tau_2) >G((\frac{\tilde{\beta}}{p-1})^\frac{1}{p-2})\geq 0,$ which also implies that $\tilde{g}(\tau_2) \neq 0$.
As a result, if $(p-1)\mu\leq \tilde{\beta}\leq \tilde{\beta}_0~\hbox{or}~\max\{(p-1)\mu, \tilde{\beta}_1\}\leq\tilde{\beta}$, then the system has no solution in $\tilde{D}$.\ Case II: $\tilde{\beta} >0, 1<p<2$. In this case, $F(\tau^F_{max})=(2-p)(\frac{p-1}{\tilde{\beta}})^\frac{2(p-1)}{2-p}$. If $(p-1)\mu^\frac{p-2}{2(p-1)} <\tilde{\beta} $, then $F(\tau)\leq F(\tau^F_{max})<\mu(2-p)$. So the system has no solution in $\tilde{D}$. If $\tilde{\beta}=(p-1)\mu^\frac{p-2}{2(p-1)}$, then $F(\tau)=\mu(2-p)$ has a unique solution $ \tau_0=(\frac{p-1}{\tilde{\beta}})^\frac{1}{2-p}.$ Direct calculation gives $$\begin{aligned} G(\tau_0)&=&\frac{2(p-1)}{p}\mu-\frac{2-p}{p}(p-1)-(p-1)\mu^\frac{p-2}{p-1}\\ &>&\frac{2(p-1)}{p}\mu-\frac{2-p}{p}(p-1)-(p-1) =\frac{2(p-1)}{p}(\mu-1)>0. \end{aligned}$$ So the system has no solution in $\tilde{D}$. Finally, if $0< \tilde{\beta}<(p-1)\mu^\frac{p-2}{2(p-1)}$, then the equation $F(\tau)=\mu(2-p)$ has two solutions $\tau_1, \tau_2.$ This case is more complicated. From the definition of $F(\tau)$, we know that $F(\tau)$ increases strictly in $[0, (\frac{p-1}{\tilde{\beta}})^\frac{1}{2-p}]$ and decreases strictly in $[(\frac{p-1}{\tilde{\beta}})^\frac{1}{2-p}, +\infty)$. Similarly, we deduce that if $1<\mu<\frac{p}{2-p}~\hbox{and}~0<\tilde{\beta}\leq\frac{p-\mu(2-p)}{2}$, then $F(1)\geq \mu(2-p).$ Proceeding as in the proof of Case I, we find $$\Big[\frac{\mu(2-p)}{p}\Big]^\frac{1}{2(p-1)}<\tau_1\leq 1,\,\, 1<\Big(\frac{p-1}{\tilde{\beta}}\Big)^\frac{1}{2-p}<\tau_2<\Big[\frac{p}{2\tilde{\beta}}\Big]^\frac{1}{2-p}.$$ Similarly to the case $p>2$, we only need to prove $G(\tau_1)\neq0$. Direct computation yields that $G(\tau)$ increases strictly in $[0, 1]$ and decreases strictly in $[1, +\infty)$. If $\tilde{\beta}\leq 2(p-1)(2-p)^\frac{2-p}{2(p-1)}(\frac{\mu}{p})^\frac{p}{2(p-1)},$ then $ G\Big(\big[\frac{\mu(2-p)}{p}\big]^\frac{1}{2(p-1)}\Big)\geq 0$.
Since $G(1) \geq \mu-1>0$, we conclude that $G(\tau_1)>0$. But if $\max\Big\{\frac{p-\mu(2-p)}{2},~0\Big\}<\tilde{\beta}<(p-1)\mu^\frac{p-2}{2(p-1)}$, then $\max\{(\frac{\mu(2-p)}{p})^\frac{1}{2(p-1)}, 1 \}<\tau_1<(\frac{p-1}{\tilde{\beta}})^\frac{1}{2-p}<\tau_2<(\frac{p}{2\tilde{\beta}})^\frac{1}{2-p},$ which implies $p\tau^{2p-2}-2\tilde{\beta}\tau^p=\mu(2-p)$ has no solution in $(0,1)$, and hence the system has no solution in $\tilde{D}$.\ Case III: $p=2$. In this case, the system can be written as $$\left\{\begin{array}{ll} \tau^2-\tilde{\beta}\tau^2=0,\\ \mu+\tilde{\beta}\tau^2-\tilde{\beta}-\tau^2=0. \end{array}\right.$$ It is easy to see that the above system has no solutions in $\tilde{D}$. In the following, we will consider the case $\tilde{\beta}<0$. Define $$l(\tilde{\beta})=\frac{\mu(2p-1)+\tilde{\beta}(p-1)\tau^p-\tilde{\beta} p \tau^{p-2}}{\mu+\tilde{\beta} \tau^p},$$ where $\tau\in \tilde{D}~\hbox{and}~\tilde{\beta}$ satisfy $$\mu+\tilde{\beta}\tau^p-\tau^{2p-2}-\tilde{\beta}\tau^{p-2}=0,\quad \mu+\tilde{\beta}\tau^p>0,\quad \tau^{2p-2}+\tilde{\beta}\tau^{p-2}>0.$$ \[npro3\] Assume that $0<s<1, 1<p<\frac{2^*_s}{2}, \mu>1$.
Then $l(\tilde{\beta})$ decreases strictly in $(-\sqrt{\mu},0).$ Differentiating with respect to $\tilde{\beta}$ on both sides of the equation $\mu+\tilde{\beta}\tau^p-\tau^{2p-2}-\tilde{\beta}\tau^{p-2}=0$, we get $$\tau^\prime(\tilde{\beta})=\frac{\tau(\tau^2-1)}{2(p-1)\tau^p-\tilde{\beta} p\tau^2+\tilde{\beta}(p-2)}.$$ So $$\begin{aligned} l^\prime(\tilde{\beta})&=&{\displaystyle}\frac{\big[(p-1)\tau^p-p\tau^{p-2}+\tilde{\beta} p\tau^{p-3}((p-1)\tau^2-(p-2))\tau^\prime(\tilde{\beta})\big]\big(\mu+\tilde{\beta} \tau^p\big)}{(\mu+\tilde{\beta} \tau^p)^2}\\ &&-{\displaystyle}\frac{\tau^p\big[\mu(2p-1)+\tilde{\beta}(p-1)\tau^p-\tilde{\beta} p \tau^{p-2}\big]}{(\mu+\tilde{\beta} \tau^p)^2}\\ &&-{\displaystyle}\frac{\tilde{\beta} p\tau^{p-1}\big[\mu(2p-1)+\tilde{\beta}(p-1)\tau^p-\tilde{\beta} p \tau^{p-2}\big]\tau^\prime(\tilde{\beta})}{(\mu+\tilde{\beta} \tau^p)^2}\\ &=&{\displaystyle}\frac{1}{(\mu+\tilde{\beta} \tau^p)^2}\Big[-\mu p\tau^{p-2}(\tau^2+1)+\tilde{\beta} p\tau^{p-2}(\tau^2-1)\frac{2\tilde{\beta}\tau^p-\mu p\tau^2+\mu(2-p)}{2(p-1)\tau^p-\tilde{\beta} p\tau^2+\tilde{\beta}(p-2)}\Big]\\ &=&{\displaystyle}\frac{\tau^{p-2}}{(\mu+\tilde{\beta} \tau^p)^2}\cdot\frac{\tilde{\beta} p(\tau^2-1)\big[2\tilde{\beta}\tau^p-\mu p\tau^2+\mu(2-p)\big]-\mu p(\tau^2+1)\big[2(p-1)\tau^p-\tilde{\beta} p\tau^2+\tilde{\beta}(p-2)\big]}{2(p-1)\tau^p-\tilde{\beta} p\tau^2+\tilde{\beta}(p-2)}\\ &=&{\displaystyle}\frac{2p\tau^p}{(\mu+\tilde{\beta} \tau^p)^2}\cdot\frac{\big[\tilde{\beta}^2-\mu(p-1)\big]\tau^p-\big[\tilde{\beta}^2+\mu(p-1)\big]\tau^{p-2}+2\mu\tilde{\beta}}{2(p-1)\tau^p-\tilde{\beta} p\tau^2+\tilde{\beta}(p-2)}.\end{aligned}$$ Since $\mu+\tilde{\beta}\tau^p-\tau^{2p-2}-\tilde{\beta}\tau^{p-2}=0, \tilde{\beta}<0$ and $0<\tau\in \tilde{D}$, we see $\tau>1$ and $\mu-\tau^{2p-2}>0$, which combined with $\mu+\tilde{\beta}\tau^p>0,\,\tau^{2p-2}+\tilde{\beta}\tau^{p-2}>0$ yields
$$\label{2.3} \min\Big\{(\frac{\mu}{|\tilde{\beta}|})^\frac{1}{p},\mu^\frac{1}{2(p-1)}\Big\}>\tau >\max\Big\{1,|\tilde{\beta}|^\frac{1}{p}\Big\}.$$ Set $M(\tau)=2(p-1)\tau^p-\tilde{\beta} p\tau^2+\tilde{\beta}(p-2).$ Then $M(\tau)$ increases in $[1,+\infty)$ and $M(\tau)>M(1)=2(p-1)-2\tilde{\beta}>0$. To get some $\tau$ satisfying \eqref{2.3}, we know that $\tilde{\beta}$ should satisfy $ \tilde{\beta}^2<\mu.$ Set $$T(\tilde{\beta})= \big[\tilde{\beta}^2-\mu(p-1)\big]\tau^p-\big[\tilde{\beta}^2+\mu(p-1)\big]\tau^{p-2}+2\mu\tilde{\beta}.$$ If $p\geq 2$, we see from $ \tilde{\beta}^2<\mu$ that $ \tilde{\beta}^2\leq\mu(p-1).$ Using the facts that $\tau>0, \mu>1 $ and $\tilde{\beta}<0$, we conclude $T(\tilde{\beta})<0$. So $l(\tilde{\beta})$ decreases strictly in $(-\sqrt{\mu},0).$ If $1<p<2$ and $\tilde{\beta}^2\leq\mu(p-1)$, similar to the case $p\geq 2$, we obtain that $l(\tilde{\beta})$ decreases strictly in $(-\sqrt{\mu(p-1)},0)$. Finally, if $1<p<2$ and $\mu>\tilde{\beta}^2>\mu(p-1)$, it follows from $\tau>1$ that $\tau^\prime(\tilde{\beta})>0$, which combined with $\mu+\tilde{\beta}\tau^p-\tau^{2p-2}-\tilde{\beta}\tau^{p-2}=0$ yields that $$T^\prime(\tilde{\beta})=2\tau^{2p-2}+ \Big\{\big[\tilde{\beta}^2-\mu(p-1)\big]p\tau^{p-1}+\big[\tilde{\beta}^2 +\mu(p-1)\big](2-p)\tau^{p-3}\Big\}\tau^\prime(\tilde{\beta})>0.$$ Thus $T(\tilde{\beta})$ increases in $[-\sqrt{\mu}, -\sqrt{\mu(p-1)}]$. Direct computation verifies $T(-\sqrt{\mu(p-1)})<0$. So $T(\tilde{\beta})<0$ in $[-\sqrt{\mu}, -\sqrt{\mu(p-1)}]$. As a consequence, $l(\tilde{\beta})$ decreases strictly in $[-\sqrt{\mu}, -\sqrt{\mu(p-1)}]$. \[lm2.3\] Suppose that $p\in(1,+\infty)\setminus\{2\}, \beta \in (-\sqrt{\mu_1\mu_2},0)\cup(0, +\infty)~~ \hbox{or}~~ p=2, \beta \in (-\sqrt{\mu_1\mu_2},0)\cup(0, \mu_2)\cup(\mu_1, +\infty)$. Then there exists $\tau_0\in D$ such that $g(\tau_0)=0, \mu_1+\beta\tau_0^p>0$ and $\mu_2\tau_0^{2p-2}+\beta\tau_0^{p-2}>0$, where $g(\tau)$ is defined as before.
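The monotonicity in Lemma \[npro3\] can be illustrated numerically (a sanity check only, not part of the proof): for each $\tilde{\beta}$ we solve the constraint for $\tau>1$ by bisection and evaluate $l(\tilde{\beta})$. The values $p=3$, $\mu=2$ and the bracket $[1,10]$ are arbitrary test assumptions.

```python
# For p >= 2, mu > 1, and beta in (-sqrt(mu), 0), l(beta) is strictly
# decreasing.  tau(beta) is the root > 1 of
#   mu + beta*t^p - t^{2p-2} - beta*t^{p-2} = 0.
p, mu = 3.0, 2.0   # arbitrary test values

def tau_of(beta):
    phi = lambda t: mu + beta * t**p - t**(2 * p - 2) - beta * t**(p - 2)
    lo, hi = 1.0, 10.0            # phi(1) = mu - 1 > 0 and phi(10) < 0 here
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def l(beta):
    t = tau_of(beta)
    return (mu * (2 * p - 1) + beta * (p - 1) * t**p - beta * p * t**(p - 2)) / (mu + beta * t**p)

vals = [l(b) for b in (-0.6, -0.4, -0.2)]   # beta increasing towards 0
assert vals == sorted(vals, reverse=True)    # l decreases as beta increases
```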
The proof can be divided into two cases: $\beta>0, \beta<0$. Case I: $\beta>0$. In this case, we consider the following three subcases: 1) $1<p<2$. We see $2(p-1)>0, p-2<0$ and for any fixed $\beta>0$, $\lim\limits_{\tau \to 0^+} g(\tau)=-\infty$ and $g(1)=\mu_1-\mu_2>0$. So there exists $\tau_0(\beta)\in D$ such that $g(\tau_0)=0, \mu_1+\beta\tau_0^p>0 ~\hbox{and}~ \mu_2\tau_0^{2p-2}+\beta\tau_0^{p-2}>0$. 2) $p>2$. We have $2p-2>p> p-2>0$ and for any fixed $\beta>0$, $g(1)>0$ and $\lim\limits_{\tau \to +\infty}g(\tau)=-\infty.$ Hence there exists $\tau_0(\beta)\in D$ such that $g(\tau_0)=0, \mu_1+\beta\tau_0^p>0 ~\hbox{and}~ \mu_2\tau_0^{2p-2}+\beta\tau_0^{p-2}>0$. 3) $p=2$. We find $g(\tau)=\mu_1-\beta+(\beta-\mu_2)\tau^2$. If $\beta>\mu_1$ or $0<\beta<\mu_2$, then there exists $\tau_0(\beta)\in D$ such that $g(\tau_0)=0, \mu_1+\beta\tau_0^p>0 ~\hbox{and}~ \mu_2\tau_0^{2p-2}+\beta\tau_0^{p-2}>0$. Case II: $\beta <0$. Firstly, to guarantee that $\mu_1+\beta\tau_0^p>0 ~\hbox{and}~ \mu_2\tau_0^{2p-2}+\beta\tau_0^{p-2}>0$, we need $(\frac{|\beta|}{\mu_2})^\frac{1}{p}<\tau_0<(\frac{\mu_1}{|\beta|})^\frac{1}{p},$ which implies $\beta \in (-\sqrt{\mu_1\mu_2}, 0).$ Secondly, since $\beta \in (-\sqrt{\mu_1\mu_2}, 0)$, we see $g((\frac{|\beta|}{\mu_2})^\frac{1}{p})=\frac{\mu_1\mu_2-\beta^2}{\mu_2}>0$ and $g((\frac{\mu_1}{|\beta|})^\frac{1}{p})=-\frac{1}{|\beta|}(\frac{\mu_1}{|\beta|})^\frac{p-2}{p}(\mu_1\mu_2-\beta^2)<0$. Thus there exists $\tau_0(\beta)\in D$ such that $g(\tau_0)=0$. Therefore, when $\beta \in (-\sqrt{\mu_1\mu_2}, 0)$, there exists $\tau_0(\beta)\in D$ such that $g(\tau_0)=0, \mu_1+\beta\tau_0^p>0$ and $\mu_2\tau_0^{2p-2}+\beta\tau_0^{p-2}>0$. **Proof of Theorem \[Th1\] and Theorem \[Th1.4\]:** We first prove the existence of positive solutions of the form $(kw, \tau kw)$ to the system.
Substituting $(U,V):=(kw, \tau kw)$ into the system, we have $$\left\{\begin{array}{ll} (-\Delta)^s w +w=(\mu_1+\beta\tau^p)|k|^{2p-2}w^{2p-1},~~x\in {\mathbb{R}}^N,\vspace{2mm}\\ (-\Delta)^s w +w=(\mu_2\tau^{2p-2}+\beta\tau^{p-2})|k|^{2p-2}w^{2p-1},~~x\in {\mathbb{R}}^N.\\ \end{array}\right.$$ According to Lemma \[lm2.3\], we can find $\tau_0\in D$ such that $$\mu_1+\beta\tau_0^p=\mu_2\tau_0^{2p-2}+\beta\tau_0^{p-2}>0.$$ Thus take $k_1>0$ satisfying $k_1^{2p-2}=(\mu_1+\beta\tau_0^p)^{-1}$, then $$(\mu_1+\beta\tau_0^p)k_1^{2p-2}=1, (\mu_2\tau_0^{2p-2}+\beta\tau_0^{p-2})k_1^{2p-2}=1,$$ which implies that $(k_1w, \tau_0k_1w)$ is a radial positive solution of the system. Next, we prove that any positive proportional vector solution obtained above is non-degenerate. Let $(U,V):=(k_1w, \tau_0k_1w)\in H^s({\mathbb{R}}^N)\times H^s({\mathbb{R}}^N)$ be a positive solution of the system. Making the change of variables $(u,v) \to (\mu_2^{-\frac{1}{2p-2}}u, \mu_2^{-\frac{1}{2p-2}}v)$ in the system, we see $$\label{3.1} \left\{\begin{array}{ll} (-\Delta)^s u +u= \mu|u|^{2p-2}u+\tilde{\beta} |v|^p|u|^{p-2}u,~~x\in {\mathbb{R}}^N,\vspace{2mm}\\ (-\Delta)^s v +v= |v|^{2p-2}v+\tilde{\beta} |u|^p|v|^{p-2}v,~~x\in {\mathbb{R}}^N, \end{array} \right.$$ where $\mu=\frac{\mu_1}{\mu_2}, \tilde{\beta}=\frac{\beta}{\mu_2}$. So to complete the proof of Theorems \[Th1\] and \[Th1.4\], it suffices to prove the same conclusion for system \eqref{3.1}. Consider the weighted eigenvalue problem $(-\Delta)^s u +u= \lambda w^{2p-2}u$ in $\lambda$. It follows from [@fls] that this equation has a sequence of eigenvalues $1 = \lambda_1 < \lambda_2 = \lambda_3 = \cdots=\lambda_{N+1}=2p-1< \lambda_{N+2}\leq\cdots$ with associated eigenfunctions $\Phi_k$ satisfying $\int_{{\mathbb{R}}^N}w^{2p-2}\Phi_k\Phi_m=0$ for $k\neq m$.
For $\Phi_k$ with $k = 2, 3,\cdots,N+1$, we may take them as $\frac{\partial w}{\partial x_1}, \frac{\partial w}{\partial x_2},\cdots,\frac{\partial w}{\partial x_N}.$ The linearization of \eqref{3.1} at $(\mu_2^{-\frac{1}{2p-2}}U, \mu_2^{-\frac{1}{2p-2}}V)$ is $$\left\{\begin{array}{ll} (-\Delta)^s \varphi +\varphi=w^{2p-2}(a\varphi+b\psi),~~x\in {\mathbb{R}}^N,\vspace{2mm}\\ (-\Delta)^s \psi +\psi=w^{2p-2}(c\psi+b\varphi),~~x\in {\mathbb{R}}^N, \end{array}\right.$$ where $$\begin{aligned} a(\tau_0)&=&[\mu(2p-1)+\tilde{\beta}(p-1)\tau_0^p]k_1^{2p-2},\\ b(\tau_0)&=&\tilde{\beta} p \tau_0^{p-1}k_1^{2p-2},\\ c(\tau_0)&=&[(2p-1)\tau_0^{2p-2}+\tilde{\beta}(p-1)\tau_0^{p-2}]k_1^{2p-2}.\end{aligned}$$ Let $\gamma_{\pm}=\frac{(a-c)}{2b}\pm\frac{\sqrt{(c-a)^2+4b^2}}{2b} $ be the solutions of the equation $$c\gamma-b=\gamma(a-b\gamma).$$ Now, we complete the proof of Theorem \[Th1.4\]. If $\tilde{\beta}<0$, by direct computation, we obtain $a-b\gamma_{-}=2p-1.$ So $$(-\Delta)^s(\varphi-\gamma_{-}\psi)+(\varphi-\gamma_{-}\psi)=(2p-1)w^{2p-2}(\varphi-\gamma_{-}\psi),$$ and $$\varphi-\gamma_{-}\psi=\sum\limits_{j=2}^{N+1}\alpha_j\Phi_j.$$ Thus, $$(-\Delta)^s\psi+\psi =(b\gamma_{-}+c)\psi w^{2p-2}+\sum\limits_{j=2}^{N+1}b\alpha_j\Phi_jw^{2p-2}.$$ Set $\psi=\sum\limits_{j=1}^\infty \Gamma_j\Phi_j$ and $$\label{3.2} \tilde{f}(\tilde{\beta})=b\gamma_{-}+c.$$ By direct computation, we have $\tilde{f}(\tilde{\beta})=\frac{\mu(p-1)-\tilde{\beta}\tau_0^p+p\tau_0^{2p-2}}{\mu+\tilde{\beta}\tau_0^p}.$ Since $\mu+\tilde{\beta}\tau_0^p-\tau_0^{2p-2}-\tilde{\beta}\tau_0^{p-2}=0$, we obtain that $\tilde{f}(\tilde{\beta})=l(\tilde{\beta}) =\frac{\mu(2p-1)+\tilde{\beta}(p-1)\tau_0^p-\tilde{\beta} p \tau_0^{p-2}}{\mu+\tilde{\beta} \tau_0^p}$. From Lemma \[npro3\], there exists a decreasing sequence $\{\tilde{\beta}_k\}$ such that $\tilde{f}(\tilde{\beta}_k)=\lambda_k$ and $\tilde{f}(\tilde{\beta})\neq \lambda_k$ for any $\tilde{\beta}\in (-\sqrt{\mu},0)\setminus\{\tilde{\beta}_k\}$ and $k=1,2,\cdots$.
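The role of $\gamma_{\pm}$ can be checked numerically (an illustrative sketch only, not part of the proof): with $a, b, c$ evaluated at a root $\tau_0$ of the constraint, both $\gamma_{\pm}$ solve $c\gamma-b=\gamma(a-b\gamma)$, and one of the decoupled combinations recovers the scalar eigenvalue $2p-1$. The values $p=1.5$, $\mu=2$, $\tilde{\beta}=0.5$ and the bisection bracket are arbitrary test assumptions.

```python
import math

# a, b, c are the linearization coefficients at (k1*w, tau0*k1*w), with
# k1^{2p-2} = 1/(mu + bt*tau0^p) and tau0 a root of the constraint
#   mu + bt*t^p - t^{2p-2} - bt*t^{p-2} = 0.
p, mu, bt = 1.5, 2.0, 0.5   # arbitrary test values (mu = mu1/mu2, bt = beta/mu2)

phi = lambda t: mu + bt * t**p - t**(2 * p - 2) - bt * t**(p - 2)
lo, hi = 1e-6, 1.0          # phi(lo) < 0 < phi(hi) for these values
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if phi(mid) > 0 else (mid, hi)
tau = 0.5 * (lo + hi)

s = mu + bt * tau**p        # = 1 / k1^{2p-2}
a = (mu * (2 * p - 1) + bt * (p - 1) * tau**p) / s
b = bt * p * tau**(p - 1) / s
c = ((2 * p - 1) * tau**(2 * p - 2) + bt * (p - 1) * tau**(p - 2)) / s

r = math.sqrt((c - a)**2 + 4 * b**2)
gp, gm = (a - c + r) / (2 * b), (a - c - r) / (2 * b)

for g in (gp, gm):          # both are roots of the quadratic
    assert abs(c * g - b - g * (a - b * g)) < 1e-9
# one decoupled combination gives the scalar eigenvalue 2p-1
assert any(abs(a - b * g - (2 * p - 1)) < 1e-6 for g in (gp, gm))
```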
Using orthogonality, we see that $\Gamma_j=0$ for $j\neq 2, 3,\cdots,N+1$ and $\Gamma_j=\frac{b\alpha_j}{2p-1-\tilde{f}(\tilde{\beta})} $ for $j= 2, 3,\cdots,N+1$. Thus, the kernel at $(U, V)$ is given by span$\{((\gamma_-\frac{b}{2p-1-\tilde{f}(\tilde{\beta})}+1)\Phi_k,\frac{b}{2p-1-\tilde{f}(\tilde{\beta})}\Phi_k)| k=2,3, \cdots,N+1\}$, an $N$-dimensional space. Taking $\theta(\tilde{\beta})=\gamma_-+\frac{2p-1-\tilde{f}(\tilde{\beta})}{b}$, we can check $\theta(\tilde{\beta})\neq 0$. Therefore, there exists a decreasing sequence $\{\beta_k\}$ such that when $\beta \in (-\sqrt{\mu_1\mu_2},0)\setminus\{\beta_k\}$, the conclusion of Theorem \[Th1.4\] is true. Finally, we prove Theorem \[Th1\]. If $\tilde{\beta}>0$, we can check $a-b\gamma_{+}=2p-1.$ So $$(-\Delta)^s(\varphi-\gamma_{+}\psi)+(\varphi-\gamma_{+}\psi)=(2p-1)w^{2p-2}(\varphi-\gamma_{+}\psi),$$ and $$\varphi-\gamma_{+}\psi=\sum\limits_{j=2}^{N+1}\alpha_j\Phi_j.$$ Thus, $$(-\Delta)^s\psi+\psi =(b\gamma_{+}+c)\psi w^{2p-2}+\sum\limits_{j=2}^{N+1}b\alpha_j\Phi_jw^{2p-2}.$$ Set $\psi=\sum\limits_{j=1}^\infty \Gamma_j\Phi_j$ and $$\label{3.3} \tilde{f}(\tilde{\beta})=b\gamma_{+}+c.$$ By direct computation, we have $\tilde{f}(\tilde{\beta})=\frac{\mu(2p-1)+\tilde{\beta}(p-1)\tau_0^p-\tilde{\beta} p \tau_0^{p-2}}{\mu+\tilde{\beta} \tau_0^p}$. Claim I:   $\tilde{f}(\tilde{\beta})\neq 1$ if any one of $(A_i)$ ($i=1,2,\cdots,7$) holds. Suppose, to the contrary, that $\tilde{f}(\tilde{\beta})=1$.
Then there exist $\tilde{\beta}$ and $\tau_0\in \tilde{D}$ such that $$\left\{\begin{array}{ll} \mu(2p-1)+\tilde{\beta}(p-1)\tau_0^p-\tilde{\beta} p \tau_0^{p-2}=\mu+\tilde{\beta} \tau_0^p,\vspace{2mm}\\ {\displaystyle}\mu+\tilde{\beta}\tau_0^p-\tilde{\beta}\tau_0^{p-2}-\tau_0^{2p-2}=0.\\ \end{array}\right.$$ That is, $$\left\{\begin{array}{ll} p\tau_0^{2p-2}-2\tilde{\beta}\tau_0^p=\mu(2-p),\vspace{2mm}\\ \mu+\tilde{\beta}\tau_0^p-\tilde{\beta}\tau_0^{p-2}-\tau_0^{2p-2}=0,\vspace{2mm}\\ \tau_0\in \tilde{D}, \end{array}\right.$$ which contradicts Lemma \[lm2.1\], since the conditions given in Lemma \[lm2.1\] correspond to the conditions $(A_i)$ ($i=1,2,\cdots,7$), respectively. So Claim I is true. Claim II:   $\tilde{f}(\tilde{\beta})<2p-1$. Assume that ${\displaystyle}\tilde{f}(\tilde{\beta}) \geq 2p-1.$ Then $0>-\tilde{\beta} p \tau_0^{p-2}\geq p\tilde{\beta}\tau_0^p>0$, which is impossible, since $p, \tau_0, \tilde{\beta}>0$. So Claim II is true. Claim I and Claim II imply that $\tilde{f}(\tilde{\beta})\neq \lambda_k$ for any $k=1,2,\cdots$. Proceeding as we prove Theorem \[Th1.4\], we can complete the proof of Theorem \[Th1\].

The least energy solutions {#s3}
==========================

\[lm3.1\] Assume that $\beta>0$ and $1<p<\frac{2^*_s}{2}, \mu_1,\mu_2>0$. Then $(1)~~ S_{\mu_1,\mu_2}=f(\tau_{min})S$ $(2)~~ S_{\mu_1,\mu_2}$ can be obtained by $(w_{x_0}, \tau_{min}w_{x_0})$ for all $x_0 \in {\mathbb{R}}^N$, where $\tau_{min}\ge 0$ is a minimum point of $f(\tau)$ in $[0, +\infty)$ satisfying $$\tau(\mu_1+\beta\tau^p-\mu_2\tau^{2p-2}-\beta\tau^{p-2})=0,$$ and $S, S_{\mu_1,\mu_2}, f(\tau)$ are defined as before.
Since $w$ is the ground state of the scalar equation, we see $$\label{2.4} S_{\mu_1,\mu_2} \leq \frac{{\displaystyle}\int_{{\mathbb{R}}^N}(|\xi|^{2s}+1)(1+\tau_{min}^2)|\hat{w}|^2} {[(\mu_1+2\beta\tau_{min}^p+\mu_2\tau_{min}^{2p}){\displaystyle}\int_{{\mathbb{R}}^N}|w|^{2p}]^{^\frac{1}{p}}} =f(\tau_{min})S.$$ Let $(u_n, v_n)$ be a minimizing sequence for $S_{\mu_1,\mu_2}$, and $\tau_n$ be a positive constant such that $$\label{2.5}\tau_n^{2p}{\displaystyle}\int_{{\mathbb{R}}^N}|u_n|^{2p}={\displaystyle}\int_{{\mathbb{R}}^N}|v_n|^{2p}.$$ Set $z_n:=\frac{1}{\tau_n}v_n$. By Young’s inequality, we have $$\label{2.6} {\displaystyle}\int_{{\mathbb{R}}^N}|u_n|^p|z_n|^p \leq \frac{1}{2}{\displaystyle}\int_{{\mathbb{R}}^N}|u_n|^{2p}+\frac{1}{2}{\displaystyle}\int_{{\mathbb{R}}^N}|z_n|^{2p}={\displaystyle}\int_{{\mathbb{R}}^N}|u_n|^{2p}.$$ Therefore, $$\label{2.7}\begin{array}{ll} S_{\mu_1,\mu_2} +o_n(1)&=\frac{{\displaystyle}\int_{{\mathbb{R}}^N}(|\xi|^{2s}+1)(|\hat{u_n}|^2+|\hat{v_n}|^2)} {\Big[{\displaystyle}\int_{{\mathbb{R}}^N}(\mu_1|u_n|^{2p}+2\beta|u_n|^p|v_n|^p+\mu_2|v_n|^{2p})\Big]^{^\frac{1}{p}}}\\ &=\frac{{\displaystyle}\int_{{\mathbb{R}}^N}(|\xi|^{2s}+1)(|\hat{u_n}|^2+\tau_n^2|\hat{z_n}|^2)} {\Big[{\displaystyle}\int_{{\mathbb{R}}^N}(\mu_1|u_n|^{2p}+2\beta\tau_n^p|u_n|^p|z_n|^p+\mu_2\tau_n^{2p}|z_n|^{2p})\Big]^{^\frac{1}{p}}}\\ &\geq\frac{(1+\tau_n^2)\min\Big\{{\displaystyle}\int_{{\mathbb{R}}^N}(|\xi|^{2s}+1)|\hat{u_n}|^2,{\displaystyle}\int_{{\mathbb{R}}^N}(|\xi|^{2s}+1)|\hat{z_n}|^2\Big\}} {\Big(\mu_1+2\beta\tau_n^p+\mu_2\tau_n^{2p}\Big)^\frac{1}{p}\Big({\displaystyle}\int_{{\mathbb{R}}^N}|u_n|^{2p}\Big)^\frac{1}{p}}\\ &\geq f(\tau_n)S\geq f(\tau_{min})S,\\ \end{array}$$ where we have used the facts that $\beta>0$ and $\int_{{\mathbb{R}}^N}|u_n|^{2p}=\int_{{\mathbb{R}}^N}|z_n|^{2p}$.
Combining \eqref{2.4} and \eqref{2.7}, we get $$S_{\mu_1,\mu_2}=f(\tau_{min})S.$$ Since $w$ is the ground state of the scalar equation, by direct computation, we find that $S_{\mu_1,\mu_2}$ can be obtained by $(w_{x_0}, \tau_{min}w_{x_0}).$ This completes the proof. In Lemma \[lm3.1\], if $\tau_{\min}=0$, then the minimizer is semi-trivial. \[lm3.2\] Assume that $\beta>0,\mu_1,\mu_2>0, 1<p<\frac{2^*_s}{2}$. We have (1) if $p>2, 0<\beta\leq (p-1)\mu_2$, then $f(\tau)$ has a unique maximum point $\tau_0>1$ and a unique minimum point $0$; if $p>2, \beta > (p-1)\mu_2$, then $f(\tau)$ has either a unique maximum point $\tau_0>1$ and a unique minimum point $0$, or two local minimum points $0, \tau_1$ and two local maximum points $\tau_2, \tau_3$ satisfying $0<\tau_2<\tau_1<\tau_3$; (2) if $ p=2, \beta>\mu_1$, then $f(\tau)$ has a unique minimum point $\tau_0>1$ and a local maximum point $0$; if $ p=2, 0<\beta<\mu_2$, then $f(\tau)$ has a unique maximum point $\tau_0>1$ and a unique minimum point $0$; (3) if $ 1<p<2, \beta \geq (p-1)\mu_2$, then $f(\tau)$ has a unique minimum point $\tau_0 \in (0, 1)$ and a local maximum point $ 0$; if $ 1<p<2, 0<\beta < (p-1)\mu_2$, then $f(\tau)$ has either a unique minimum point $\tau_0 \in (0, 1)$ and a local maximum point $ 0$, or two local maximum points $0 ,\tau_1$ and two local minimum points $\tau_2, \tau_3$ satisfying $0<\tau_2<\tau_1<\tau_3$, which implies that $f(\tau)$ has a minimum point in $(0, +\infty)$. The proof of Lemma \[lm3.2\] is quite elementary, and we postpone it to the Appendix. From Lemma \[lm3.2\], we know that $f(\tau)$ admits a minimum point $\tau_{min}>0$ under the assumptions of Theorem \[th1.6\]. Hence $(k_{min}w_{x_0}, k_{min}\tau_{min}w_{x_0})$ is a positive least energy solution of the system, where $k_{min}>0$ satisfies $(\mu_1+\beta\tau_{min}^p)k_{min}^{2p-2}=1$.
If we can prove that any positive least energy solution $(u_0, v_0)$ of the system must be of the form $(k_{min}w_{x_0}, k_{min}\tau_{min}w_{x_0})$ and $\tau_{min}$ must be unique, then we complete the proof of Theorem \[th1.6\]. Firstly, we prove that $$\label{3.9} {\displaystyle}\int_{{\mathbb{R}}^N}|u_0|^{2p}=k_{min}^{2p}{\displaystyle}\int_{{\mathbb{R}}^N}|w|^{2p}.$$ To this end, we study the following system with a parameter $\mu>0$ $$\label{3.10}\left\{\begin{array}{ll} (-\Delta)^su+u=\mu|u|^{2p-2}u+\beta|v|^p|u|^{p-2}u, x\in {\mathbb{R}}^N,\vspace{2mm}\\ (-\Delta)^sv+v=\mu_2|v|^{2p-2}v+\beta|u|^p|v|^{p-2}v, x\in {\mathbb{R}}^N,\vspace{2mm}\\ u,v \in H^s({\mathbb{R}}^N). \end{array} \right.$$ Similarly, we denote $$f_1(\tau)=\frac{1+\tau^2}{(\mu+2\beta\tau^p+\mu_2\tau^{2p})^\frac{1}{p}},\,\,\,\, f_1(\tilde{\tau}_{min})=\min\limits_{\tau \geq 0}f_1(\tau).$$ Since $\tau_{min}>0$ is a local minimum point of $f(\tau)$, we have $f^\prime(\tau_{min})=0$ and $f^{\prime\prime}(\tau_{min})>0$. From this, we get $g(\tau_{min})=0$ and $g^{\prime}(\tau_{min})>0.$ Let $$F(\mu, \tau)=\mu+\beta\tau^p-\beta\tau^{p-2}-\mu_2\tau^{2p-2}.$$ Then $$\begin{aligned} && F(\mu_1, \tau_{min})=g(\tau_{min})=0,\\ &&\frac{\partial F(\mu, \tau)}{\partial \mu}=1,\\ &&\frac{\partial F(\mu, \tau)}{\partial \tau}=\beta p\tau^{p-1}-\beta(p-2)\tau^{p-3}-\mu_2(2p-2)\tau^{2p-3},\\ &&\frac{\partial F(\mu, \tau)}{\partial \tau}|_{_{_{(\mu, \tau)=(\mu_1, \tau_{min})}}}=g^{\prime}(\tau_{min})>0. \end{aligned}$$ By the Implicit Function Theorem, we can find $\varepsilon, \delta >0$ and two positive functions $\tilde{k}_{min}(\mu), \tilde{\tau}_{min}(\mu)\in C^1((\mu_1-\varepsilon, \mu_1+\varepsilon),~(\tau_{min}-\delta, \tau_{min}+\delta))$ such that $F(\mu, \tilde{\tau}_{min})\equiv 0$ in $(\mu_1-\varepsilon, \mu_1+\varepsilon)$ and $(\mu+\beta\tilde{\tau}_{min}^p)\tilde{k}_{min}(\mu)^{2p-2}\equiv1$ in $(\mu_1-\varepsilon, \mu_1+\varepsilon)$.
That is, $$\mu+\beta\tilde{\tau}_{min}^p-\beta\tilde{\tau}_{min}^{p-2}-\mu_2\tilde{\tau}_{min}^{2p-2}\equiv 0~\hbox{for}~\mu \in(\mu_1-\varepsilon, \mu_1+\varepsilon).$$ Moreover, the least energy $B(\mu)=\tilde{k}_{min}^2(\mu)(1+ \tilde{\tau}^2_{min}(\mu))B_1 \in C^1((\mu_1-\varepsilon, \mu_1+\varepsilon),{\mathbb{R}}),$ where $B_1=\frac{p-1}{2p}\int_{{\mathbb{R}}^N}|w|^{2p}$. By direct computation we have $$\label{3.10} B(\mu)=\inf\limits_{(u,v)\in(H^s({\mathbb{R}}^N)\setminus\{0\})^2}\max\limits_{t>0}I_\mu(tu, tv)$$ where $$I_\mu(u,v):=\frac{1}{2}{\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})(|\hat{u}|^2+|\hat{v}|^2)-\frac{1}{2p}{\displaystyle}\int_{{\mathbb{R}}^N} (\mu|u|^{2p}+2\beta|u|^p|v|^p+\mu_2|v|^{2p}).$$ Define $$A:={\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})(|\hat{u}_0|^2+|\hat{v}_0|^2),\quad B:={\displaystyle}\int_{{\mathbb{R}}^N} (2\beta|u_0|^p|v_0|^p+\mu_2|v_0|^{2p}),\quad E:={\displaystyle}\int_{{\mathbb{R}}^N}|u_0|^{2p}.$$ By direct computation, there exists a unique $t(\mu)>0$ such that $$\max\limits_{t>0}I_\mu(tu_0, tv_0)=I_\mu(t(\mu)u_0, t(\mu)v_0),$$ where $t(\mu)=(\frac{A}{E\mu+B})^\frac{1}{2p-2}$. Let $H(\mu,t):=(E\mu+B)t^{2p-2}-A$. Then $H(\mu_1,1)=0, \frac{\partial H}{\partial t}(\mu_1, 1)>0$. By the Implicit Function Theorem, there exist $\varepsilon_1 \in(0, \varepsilon)$ and $t(\mu) \in C^1((\mu_1-\varepsilon_1, \mu_1+\varepsilon_1),{\mathbb{R}})$ such that $$t^\prime(\mu_1)=-\frac{E}{2(p-1)(E\mu_1+B)}.$$ By Taylor expansion, we see that $t(\mu)=1+t^\prime(\mu_1)(\mu-\mu_1)+O((\mu-\mu_1)^2)$ and so $t^2(\mu)=1+2t^\prime(\mu_1)(\mu-\mu_1)+O((\mu-\mu_1)^2)$.
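The calculus around $t(\mu)$ can be checked numerically (not part of the proof). The constants $p, E, B, \mu_1$ below are arbitrary test values, chosen consistently with the identity $A=E\mu_1+B$ used in the argument.

```python
# For I_mu(t) = (t^2/2) A - (t^{2p}/(2p)) (E mu + B), the maximizer is
# t(mu) = (A/(E mu + B))^{1/(2p-2)}, and with A = E mu1 + B one has
#   t'(mu1) = -E / (2(p-1)(E mu1 + B)).
p, E, B, mu1 = 1.5, 0.7, 1.3, 2.0   # arbitrary test values
A = E * mu1 + B                      # Nehari-type identity at mu = mu1

t = lambda mu: (A / (E * mu + B))**(1 / (2 * p - 2))

# finite-difference check of the derivative formula
h = 1e-6
fd = (t(mu1 + h) - t(mu1 - h)) / (2 * h)
claimed = -E / (2 * (p - 1) * (E * mu1 + B))
assert abs(fd - claimed) < 1e-6

# t(mu) maximizes I_mu over a grid of t > 0
I = lambda mu, tt: 0.5 * tt**2 * A - tt**(2 * p) / (2 * p) * (E * mu + B)
mu = 2.3
best = max(I(mu, 0.01 * k) for k in range(1, 1000))
assert I(mu, t(mu)) >= best - 1e-9
```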
By the fact that $(u_0, v_0)$ is a positive least energy solution of the system, we have $$B(\mu_1)=\frac{p-1}{2p}A={\displaystyle}\frac{p-1}{2p}(E\mu_1+B).$$ Then using the minimax characterization of $B(\mu)$, we get $$\label{3.11}\begin{array}{ll} B(\mu)&\leq I_\mu(t(\mu)u_0, t(\mu)v_0)={\displaystyle}\frac{p-1}{2p}At(\mu)^2=B(\mu_1)t(\mu)^2\vspace{2mm}\\ &=B(\mu_1)-{\displaystyle}\frac{E}{2p}(\mu-\mu_1)+O((\mu-\mu_1)^2). \end{array}$$ It follows that $$\frac{B(\mu)-B(\mu_1)}{\mu-\mu_1} \geq -\frac{E}{2p}+O((\mu-\mu_1))$$ as $\mu \nearrow \mu_1$ and so $B^\prime(\mu_1)\geq-\frac{E}{2p}.$ Similarly, we have $$\frac{B(\mu)-B(\mu_1)}{\mu-\mu_1} \leq -\frac{E}{2p}+O((\mu-\mu_1))$$ as $\mu \searrow \mu_1$, which means $B^\prime(\mu_1)\leq-\frac{E}{2p}.$ Therefore, $$B^\prime(\mu_1)=-\frac{E}{2p}=-\frac{1}{2p}{\displaystyle}\int_{{\mathbb{R}}^N}|u_0|^{2p}.$$ Moreover, since $(k_{min}w, k_{min}\tau_{min}w)$ is a positive least energy solution of the system, we have $B^\prime(\mu_1)=-\frac{k_{min}^{2p}}{2p}\int_{{\mathbb{R}}^N}|w|^{2p}.$ So we have $$\label{3.12} {\displaystyle}\int_{{\mathbb{R}}^N}|u_0|^{2p}=k_{min}^{2p}{\displaystyle}\int_{{\mathbb{R}}^N}|w|^{2p}.$$ Moreover, we claim that $\tau_{min}$ is unique. In fact, suppose, to the contrary, that there exist two minimum points $\tau^1_{min}\neq \tau^2_{min}$. We have $k^1_{min}\neq k^2_{min}$, since each $k^i_{min}$ is determined by $(\mu_1+\beta(\tau^i_{min})^p)(k^i_{min})^{2p-2}=1$ and $\beta>0$. From the above proof, we deduce $$-\frac{(k^2_{min})^{2p}}{2p}{\displaystyle}\int_{{\mathbb{R}}^N}|w|^{2p}=B^\prime(\mu_1)=-\frac{(k^1_{min})^{2p}}{2p}{\displaystyle}\int_{{\mathbb{R}}^N}|w|^{2p},$$ which contradicts the fact that $k^1_{min}\neq k^2_{min}$. So $\tau_{min}$ must be unique.
By a similar argument, we can show that $$\label{3.13} {\displaystyle}\int_{{\mathbb{R}}^N}|v_0|^{2p}=k^{2p}_{min}\tau_{min}^{2p}{\displaystyle}\int_{{\mathbb{R}}^N}|w|^{2p},~~ {\displaystyle}\int_{{\mathbb{R}}^N}|u_0|^p|v_0|^p=k^{2p}_{min}\tau_{min}^p{\displaystyle}\int_{{\mathbb{R}}^N}|w|^{2p}.$$ Since $(k_{min}w, k_{min}\tau_{min}w)$ is a positive least energy solution of the system, we have $$\label{3.14} (\mu_1+\beta\tau_{min}^p)k_{min}^{2p-2}=1=(\mu_2\tau_{min}^{2p-2}+\beta\tau_{min}^{p-2})k_{min}^{2p-2}.$$ Set $(u_1, v_1):=(\frac{u_0}{k_{min}}, \frac{v_0}{k_{min}\tau_{min}})$. It follows from \eqref{3.12}--\eqref{3.14} that $${\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{u}_1|^2={\displaystyle}\int_{{\mathbb{R}}^N}|u_1|^{2p}.$$ Similarly, we find $${\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{v}_1|^2={\displaystyle}\int_{{\mathbb{R}}^N}|v_1|^{2p}.$$ Since $w$ is the ground state of the scalar equation, we have $${\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{u}_1|^2\geq{\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{w}|^2,$$ and $${\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{v}_1|^2\geq{\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{w}|^2.$$ Noticing that $(u_0, v_0)$ and $(k_{min}w, k_{min}\tau_{min}w)$ are the least energy solutions of the system, we obtain $$\begin{aligned} &&{\displaystyle}\frac{p-1}{2p}k^2_{min}(1+\tau^2_{min}){\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{w}|^2\vspace{2mm}\\ &=&{\displaystyle}\frac{p-1}{2p}{\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})(|\hat{u}_0|^2+|\hat{v}_0|^2)\\ &=&{\displaystyle}\frac{p-1}{2p}{\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})(k^2_{min}|\hat{u}_1|^2+k^2_{min}\tau^2_{min}|\hat{v}_1|^2)\vspace{2mm}\\ &\geq&{\displaystyle}\frac{p-1}{2p}k^2_{min}(1+\tau^2_{min}){\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{w}|^2,\end{aligned}$$ which implies that $${\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{u}_1|^2 ={\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{w}|^2,$$ and
$${\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{v}_1|^2={\displaystyle}\int_{{\mathbb{R}}^N}(1+|\xi|^{2s})|\hat{w}|^2.$$ So $u_1$ and $v_1$ are both positive least energy solutions of the scalar equation. By Hölder's inequality, \eqref{3.12} and \eqref{3.13}, we see $$\label{H} \begin{array}{ll} {\displaystyle}\int_{{\mathbb{R}}^N}w^{2p}&={\displaystyle}\int_{{\mathbb{R}}^N}|u_1|^{p}|v_1|^p\le {\displaystyle}\frac12{\displaystyle}\int_{{\mathbb{R}}^N}|u_1|^{2p}+{\displaystyle}\frac12{\displaystyle}\int_{{\mathbb{R}}^N}|v_1|^{2p}\vspace{2mm}\\ &={\displaystyle}\frac12{\displaystyle}\int_{{\mathbb{R}}^N}w^{2p}+{\displaystyle}\frac12{\displaystyle}\int_{{\mathbb{R}}^N}w^{2p}\vspace{2mm}\\ &={\displaystyle}\int_{{\mathbb{R}}^N}w^{2p}. \end{array}$$ Hence the inequality in \eqref{H} is in fact an equality, which implies $u_1=v_1$. \[re3.4\] From Lemma \[lm3.2\] and the proof of Theorem \[th1.6\], we see (1) Both non-zero minimizers and non-zero maximizers of $f(\tau)$ correspond to positive proportional vector solutions of the problem. (2) In the cases $p>2, 0<\beta\leq (p-1)\mu_2$ and $ p=2, 0<\beta<\mu_2$, the problem admits no positive least energy solutions. (3) When $p>2$ and $ \beta > (p-1)\mu_2$, $f(\tau)$ may have two local minimum points $0,\tau_1$. If we can prove $f(\tau_1) \leq f(0)$, then the problem has a unique positive least energy solution.

Appendix {#apx}
========

**Proof of Lemma \[lm3.2\]:** Set $$H(\tau):=\mu_1+2\beta\tau^p+\mu_2\tau^{2p},$$ and $$h(\tau):=\beta p\tau^2-2(p-1)\mu_2\tau^p-\beta(p-2).$$ Since $\mu_1,\mu_2>0$, for $\beta>0$, we have $$\label{2.8} H(\tau)>0~\hbox{ for~ all}~ \tau \in (0,+\infty).$$ By direct computation, we have $$\label{2.9} f^\prime(\tau)=\frac{2\tau g(\tau)}{H^{\frac{1}{p}+1}(\tau)},$$ $$\label{2.10} g^\prime(\tau)=\tau^{p-3}h(\tau)$$ and $$\label{2.11} h^\prime(\tau)=2p\tau(\beta-\mu_2(p-1)\tau^{p-2}).$$ Combining Lemma \[lm2.3\], \eqref{2.8}--\eqref{2.11} and the fact that $\lim\limits_{\tau \to +\infty}f(\tau)=\mu_2^{-\frac{1}{p}}>\mu_1^{-\frac{1}{p}}=f(0)$, we proceed with the following case-by-case discussion.
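The identity \eqref{2.9}, which drives the whole case analysis below, can be sanity-checked numerically by comparing the claimed derivative with a finite difference (an illustrative check only, not part of the proof; all parameter values are arbitrary test values).

```python
# Check of (2.9): f'(t) = 2 t g(t) / H(t)^{1/p+1} for
#   f(t) = (1+t^2)/H(t)^{1/p},
#   H(t) = mu1 + 2 beta t^p + mu2 t^{2p},
#   g(t) = mu1 + beta t^p - beta t^{p-2} - mu2 t^{2p-2}.
p, mu1, mu2, beta = 2.5, 2.0, 1.0, 0.8   # arbitrary test values

H = lambda t: mu1 + 2 * beta * t**p + mu2 * t**(2 * p)
f = lambda t: (1 + t**2) / H(t)**(1 / p)
g = lambda t: mu1 + beta * t**p - beta * t**(p - 2) - mu2 * t**(2 * p - 2)

h = 1e-6
for t in (0.5, 1.0, 1.7, 2.4):
    fd = (f(t + h) - f(t - h)) / (2 * h)          # central difference
    claimed = 2 * t * g(t) / H(t)**(1 / p + 1)    # right-hand side of (2.9)
    assert abs(fd - claimed) < 1e-5
```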
Case I: $\beta>0,\, p>2$. From \eqref{2.11}, we see that $h^\prime (\tau)=0$ has a unique positive solution $ \tau_2=(\frac{\beta}{\mu_2(p-1)})^\frac{1}{p-2}$, and $h^{\prime\prime} (\tau)=0$ has a unique positive solution $ \tau_1=(\frac{\beta}{\mu_2(p-1)^2})^\frac{1}{p-2}$, with $\tau_1<\tau_2$. We have the following two subcases.

$(I_1)$: $0<\beta \leq (p-1)\mu_2$. In this case, $h(\tau_2) \leq 0$. We find

  --------------------------- -------------- ---------- -------------------- ---------- ---------------------
                               $(0,\tau_1)$   $\tau_1$   $(\tau_1, \tau_2)$   $\tau_2$   $(\tau_2, +\infty)$
  $h^{\prime\prime} (\tau)$    $>0$           $=0$       $<0$                 $<0$       $<0$
  $h^\prime (\tau)$            $>0$           $>0$       $>0$                 $=0$       $<0$
  $h(\tau)$                    $<0$           $<0$       $<0$                 $\leq0$    $<0$
  $g^\prime(\tau)$             $<0$           $<0$       $<0$                 $\leq0$    $<0$
  --------------------------- -------------- ---------- -------------------- ---------- ---------------------

  : $p>2$

Considering that $g(\tau)=0$ has a positive solution $\tau_0$, we get the following table

  ------------------ -------------- ---------- ---------------------
                      $(0,\tau_0)$   $\tau_0$   $(\tau_0, +\infty)$
  $g(\tau)$           $>0$           $=0$       $<0$
  $f^\prime(\tau)$    $>0$           $=0$       $<0$
  ------------------ -------------- ---------- ---------------------

  : $p>2$

So $f(\tau)$ has a unique maximum point $\tau_0>1$ and a unique minimum point $0$.

$(I_2)$: $\beta > (p-1)\mu_2$. In this case, $h(\tau_2)>0$.
Combining the facts that $\lim\limits_{\tau \to 0^+}h(\tau)<0~\hbox{and}~ \lim\limits_{\tau \to +\infty}h(\tau)=-\infty$ with the following table, we see that $ h(\tau)=0$ has only two solutions $\tau_3, \tau_4$.

  --------------------------- -------------- ---------- -------------------- ---------- ---------------------
                               $(0,\tau_1)$   $\tau_1$   $(\tau_1, \tau_2)$   $\tau_2$   $(\tau_2, +\infty)$
  $h^{\prime\prime} (\tau)$    $>0$           $=0$       $<0$                 $<0$       $<0$
  $h^\prime (\tau)$            $>0$           $>0$       $>0$                 $=0$       $<0$
  --------------------------- -------------- ---------- -------------------- ---------- ---------------------

  : $p>2$

So we have

  ------------------- -------------- ---------- -------------------- ---------- ---------------------
                       $(0,\tau_3)$   $\tau_3$   $(\tau_3, \tau_4)$   $\tau_4$   $(\tau_4, +\infty)$
  $h (\tau)$           $<0$           $=0$       $>0$                 $=0$       $<0$
  $g^\prime (\tau)$    $<0$           $=0$       $>0$                 $=0$       $<0$
  ------------------- -------------- ---------- -------------------- ---------- ---------------------

  : $p>2$

From the above table we can see that $g(\tau)=0$ has at most three solutions. If $g(\tau)=0$ has one solution or two solutions, then we can find a solution $\tilde{\tau}_1>0$ of $f^\prime (\tau)=0$ such that $f(\tau)$ increases strictly in $(0, \tilde{\tau}_1)\setminus\{\tilde{\tau}_2\}$ and $f(\tau)$ decreases strictly in $(\tilde{\tau}_1, +\infty)\setminus\{\tilde{\tau}_2\}$, where $\tilde{\tau}_2$ is the other root of $f^\prime (\tau)=0$ if it exists. Therefore, we can see that $f(\tau)$ has a unique maximum point $\tau_0>1$ and a unique minimum point $0$. Now we study the case that $g(\tau)=0$ has three solutions $\tau_5,\tau_6,\tau_7$.
From the following table we can see that $\tau_5, \tau_7$ are the local maximum points of $f(\tau)$ and $0, \tau_6$ are the local minimum points of $f(\tau)$.

  ------------------- -------------- ---------- -------------------- ---------- -------------------- ---------- ---------------------
                       $(0,\tau_5)$   $\tau_5$   $(\tau_5, \tau_6)$   $\tau_6$   $(\tau_6, \tau_7)$   $\tau_7$   $(\tau_7, +\infty)$
  $g (\tau)$           $>0$           $=0$       $<0$                 $=0$       $>0$                 $=0$       $<0$
  $f^\prime (\tau)$    $>0$           $=0$       $<0$                 $=0$       $>0$                 $=0$       $<0$
  ------------------- -------------- ---------- -------------------- ---------- -------------------- ---------- ---------------------

  : $p>2$

Case II: $\beta>0,\, p=2$. We see $g(\tau)=(\mu_1-\beta)+(\beta-\mu_2)\tau^2$.

$(II_1)$: $0<\beta< \mu_2$. We have

  ------------------ -------------- ------------------------------------------------- ---------------------
                      $(0,\tau_0)$   $\tau_0=\sqrt{\frac{\mu_1-\beta}{\mu_2-\beta}}$   $(\tau_0, +\infty)$
  $g(\tau)$           $>0$           $=0$                                              $<0$
  $f^\prime(\tau)$    $>0$           $=0$                                              $<0$
  ------------------ -------------- ------------------------------------------------- ---------------------

  : $0<\beta< \mu_2,\, p=2$

From this, we can see that $f(\tau)$ has a unique maximum point $\tau_0>1$ and a unique minimum point $0$.

$(II_2)$: $\beta> \mu_1$. In this case, we obtain

  ------------------ -------------- ------------------------------------------------- ---------------------
                      $(0,\tau_0)$   $\tau_0=\sqrt{\frac{\mu_1-\beta}{\mu_2-\beta}}$   $(\tau_0, +\infty)$
  $g(\tau)$           $<0$           $=0$                                              $>0$
  $f^\prime(\tau)$    $<0$           $=0$                                              $>0$
  ------------------ -------------- ------------------------------------------------- ---------------------

  : $\beta> \mu_1,\, p=2$

So we can also conclude that $f(\tau)$ has a unique minimum point $\tau_0>1$ and a local maximum point $0$ in $[0, +\infty)$.

Case III: $\beta>0,\, 1<p<2$. By direct computation, we find that $h^{\prime}(\tau)=0$ has a unique positive solution $ \tau_2=(\frac{\beta}{\mu_2(p-1)})^\frac{1}{p-2}$, $h^{\prime}(\tau)<0$ in $(0, \tau_2)$ and $h^{\prime}(\tau)>0$ in $(\tau_2, +\infty)$.
$(III_1)$: $\beta \geq (p-1)\mu_2$. Direct computation yields $h(\tau) >0$ for $\tau \in (0,+\infty)\setminus\{\tau_2\}$ and $g^\prime (\tau) >0$ for $\tau \in (0,+\infty)\setminus\{\tau_2\}.$ So $g(\tau)$ increases in $(0,+\infty)$. Due to the fact that $\lim\limits_{\tau \to 0^+}g(\tau)=-\infty, g(1)>0,$ we deduce that $g(\tau)=0$ has a unique positive solution $\tau_0<1$, $g(\tau)<0 $ in $(0, \tau_0)$ and $g(\tau)>0 $ in $(\tau_0, +\infty)$. Therefore we have proved that $f(\tau)$ has a unique minimum point $\tau_0<1$ and a local maximum point $0$.

$(III_2)$: $0<\beta<(p-1)\mu_2$. We see $h(\tau_2)<0$. So $h(\tau)=0$ has two roots $\tau_3, \tau_4$. We have the following table.

  ------------------- -------------- ---------- -------------------- ---------- ---------------------
                       $(0,\tau_3)$   $\tau_3$   $(\tau_3, \tau_4)$   $\tau_4$   $(\tau_4, +\infty)$
  $h (\tau)$           $>0$           $=0$       $<0$                 $=0$       $>0$
  $g^\prime (\tau)$    $>0$           $=0$       $<0$                 $=0$       $>0$
  ------------------- -------------- ---------- -------------------- ---------- ---------------------

  : $1<p<2$

From the above table, we can see that $g(\tau)=0$ and hence $f^\prime (\tau)=0$ have at most three solutions. If $f^\prime (\tau)=0$ has at most two solutions, then we can find a solution $\tilde{\tau}_1>0$ of $f^\prime (\tau)=0$ such that $f(\tau)$ decreases strictly in $(0, \tilde{\tau}_1)\setminus\{\tilde{\tau}_2\}$ and $f(\tau)$ increases strictly in $(\tilde{\tau}_1, +\infty)\setminus\{\tilde{\tau}_2\}$, where $\tilde{\tau}_2$ is the other root of $f^\prime (\tau)=0$ if it exists. So $f(\tau)$ has a unique minimum point $\tau_0<1$ and a local maximum point $0$.
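The sign of $h(\tau_2)$ that separates the subcases above can be confirmed numerically (an illustrative check only, not part of the proof; $\mu_2$ and the sample values of $p,\beta$ are arbitrary test choices).

```python
# At the critical point t2 = (beta/(mu2*(p-1)))^{1/(p-2)} of
#   h(t) = beta*p*t^2 - 2*(p-1)*mu2*t^p - beta*(p-2),
# the sign of h(t2) matches the subcase split beta vs (p-1)*mu2.
mu2 = 1.0   # arbitrary test value

def h_at_t2(p, beta):
    t2 = (beta / (mu2 * (p - 1)))**(1 / (p - 2))
    return beta * p * t2**2 - 2 * (p - 1) * mu2 * t2**p - beta * (p - 2)

# p > 2: h(t2) <= 0 iff beta <= (p-1)*mu2   (here (p-1)*mu2 = 2)
assert h_at_t2(3.0, 1.5) < 0 < h_at_t2(3.0, 2.5)
# 1 < p < 2: h(t2) < 0 iff beta < (p-1)*mu2   (here (p-1)*mu2 = 0.5)
assert h_at_t2(1.5, 0.3) < 0 < h_at_t2(1.5, 0.8)
```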
Now we study the case that $f^\prime(\tau)=0$ has three solutions $\tau_5, \tau_6, \tau_7$.

  ------------------- --------------- ---------- -------------------- ---------- -------------------- ---------- ---------------------
                      $(0,\tau_5)$    $\tau_5$   $(\tau_5, \tau_6)$   $\tau_6$   $(\tau_6, \tau_7)$   $\tau_7$   $(\tau_7, +\infty)$
  $g(\tau)$           $<0$            $=0$       $>0$                 $=0$       $<0$                 $=0$       $>0$
  $f^\prime(\tau)$    $<0$            $=0$       $>0$                 $=0$       $<0$                 $=0$       $>0$
  ------------------- --------------- ---------- -------------------- ---------- -------------------- ---------- ---------------------

  : $1<p<2$

From the above table, we see that $\tau_5, \tau_7$ are the local minimum points of $f(\tau)$, that $0, \tau_6$ are the local maximum points of $f(\tau)$, and that $f(\tau_5)<f(0)$. Therefore $f(\tau)$ has a minimum point and $\min\{f(\tau): \tau \geq 0\}<f(0)$.
---
abstract: 'The acceleration of the universe can be explained either through dark energy or through the modification of gravity on large scales. In this paper we investigate modified gravity models and compare their observable predictions with dark energy models. Modifications of general relativity are expected to be scale-independent on super-horizon scales and scale-dependent on sub-horizon scales. For scale-independent modifications, utilizing the conservation of the curvature scalar and a parameterized post-Newtonian formulation of cosmological perturbations, we derive results for large scale structure growth, weak gravitational lensing, and cosmic microwave background anisotropy. For scale-dependent modifications, inspired by recent $f(R)$ theories, we introduce a parameterization for the gravitational coupling $G$ and the post-Newtonian parameter $\gamma$. These parameterizations provide a convenient formalism for testing general relativity. However, we find that if dark energy is generalized to include both entropy and shear stress perturbations, and the dynamics of dark energy is unknown a priori, then modified gravity cannot in general be distinguished from dark energy using cosmological linear perturbations.'
author:
- Edmund Bertschinger and Phillip Zukin
title: Distinguishing Modified Gravity from Dark Energy
---

Introduction {#sec:intro}
============

Cosmic acceleration has been revealed by measurements of the redshift-distance relation $\chi(z)$ where $\chi$ is the comoving radial distance. The Hubble expansion rate follows from $H(z)=(d\chi/dz)^{-1}$ (in units where $c=1$). This determination assumes only that the observable universe is adequately described by the Robertson-Walker metric, an assertion that is testable empirically [@Hogg04; @LuHellaby07] and does not imply the validity of general relativity (GR).
The inference of dark energy follows once the Einstein field equations of general relativity are imposed on the Robertson-Walker metric yielding the Friedmann equations. These equations imply that a stress-energy-momentum component with negative pressure is needed to explain cosmic acceleration. This substance may be vacuum energy (i.e., a cosmological constant, giving rise to the $\Lambda$CDM model) or a scalar field [@Copeland06]. The dark energy equation of state for uniform expansion, $p(\rho)$, can be determined from measurement of $\rho(z)$, which itself follows from $H(z)$ combined with the first Friedmann equation. Measuring $w(z)=p/\rho$ is the primary goal of dark energy experiments [@DETF]. Another possibility is that general relativity requires modification on large distance scales and at late times in the universe. In this case cosmic acceleration would arise not from dark energy as a substance but rather from the dynamics of modified gravity. Modified gravity is not particularly attractive theoretically, but the observed cosmic acceleration is so surprising that all plausible explanations should be considered. Observations of the cosmic expansion history cannot distinguish dark energy from modified gravity [@fRexphist; @Song06]. Testing gravity requires exploring the evolution of spatial inhomogeneity (e.g. [@ZLBD07; @inhom] and references therein). Modified gravity theories must pass tests within the solar system and in relativistic binaries [@Will06]. They are expected to show significant departures from general relativity only on cosmological distance scales. The combination of cosmic microwave background anisotropy, weak gravitational lensing, and the growth of clustering of dark matter and galaxies provides an opportunity to discriminate between dark energy and modified gravity. Performing cosmological tests of modified gravity requires a set of predictions. There are two approaches to generating these predictions. 
In the first, a theory is specified by its Lagrangian (or other fundamental description) which provides the equations of motion for both homogeneous expansion and cosmological perturbations. A class of theories can be specified by giving a Lagrangian with free parameters, e.g. $f(R)$ theories where $R$ is the Ricci scalar [@Song06]. A second approach is inspired by the parameterized post-Newtonian framework for solar system tests [@ThorneWill71]. Here one begins with the solution of the gravitational field equations (i.e., the metric) instead of the Lagrangian. Several authors have recently adopted this framework or a similar one [@HuSawicki07b; @Caldwell07; @Amendola07; @Amin07; @Linder07]. The difficulty here is to find a good “Newtonian” description in cosmology to which one adds “Post-Newtonian” parameters. Metric perturbations in the scalar sector governing the growth of cosmic structure are characterized by two spatial scalar fields, $\Phi$ (the Newtonian potential) and $\Psi$ (the spatial curvature potential). Even if we introduce the relationship $\Psi=\gamma\Phi$ with Eddington parameter $\gamma$ [@Eddington22], there remains one unknown function of space and time. In the solar system case, by contrast, $\Phi=-GM/r$ is known to provide an excellent approximation to planetary dynamics. On solar system scales, and even within galaxies, one can use test-particle orbits to determine $\Phi$, and light rays to determine $\Phi+\Psi$. This comparison yields impressive limits on $|\gamma-1|$ in the solar system [@Will06]. However, on a scale of several kpc, gravitational lensing combined with stellar dynamics in elliptical galaxies yields a current best result $|\gamma-1|=0.02\pm0.07$ [@Bolton06]. At the scale of Gpc where dark energy appears to drive accelerated expansion, there are no longer any bound test-particle orbits to measure $\Phi$, so a different approach, based on cosmological perturbation theory, is needed. 
Previous work in the cosmological parameterized post-Newtonian framework has either assumed that some of the Einstein field equations remain valid with modified gravity [@Caldwell07] or has examined the dynamics of individual theories, e.g.[@Dore07]. Neither approach is ideal. One would prefer to sample all possible theories in a broad class, and for each theory to constrain the potentials by a consistency condition that does not assume general relativity or any particular modification thereof. Such a consistency condition was found recently in Ref.[@Bertschinger06] for the long-wavelength perturbations of a Robertson-Walker spacetime. This result was derived assuming that gravity is described by a classical four-dimensional metric theory having a well-defined infrared limit (i.e., the theory is well behaved for very long wavelength perturbations). For practical application, assumptions must also be made about the background spatial curvature and entropy perturbations. Assuming an inflationary or equivalent origin of perturbations, long-wavelength isentropic perturbations are imprinted in the spatial curvature on a flat background. In general relativity, these spatial curvature fluctuations, represented by the gauge-invariant $\zeta$ field of Bardeen et al. [@BST83] or the ${\cal R}$ field of Lyth [@Lyth85], are time-independent in the long-wavelength limit. Ref. [@Bertschinger06] presented a derivation of the conserved curvature perturbation (calling it $\kappa$) making no assumption about the field equations except that they have a well-defined infrared limit. Physically this means that curvature perturbations are small and that all waves propagate causally. In what follows, the curvature perturbation introduced in Ref. [@Bertschinger06] will be called $\zeta$ although its definition differs from that of Ref. [@BST83]. For long wavelength perturbations on a flat background, ${\cal R}=\zeta$. 
In the long-wavelength limit all cosmological perturbations factorize into functions of time multiplying the curvature perturbation or its spatial derivatives. This is true in GR and in modified gravity theories that are well-behaved in the infrared limit. One might naively expect this factorization to hold only on scales larger than the Hubble length. In general relativity, however, signals in the scalar sector propagate at the speed of sound, not the speed of light [@Bertschinger96], leading to conservation of $\zeta$ on scales larger than the Jeans length. To satisfy solar system tests, modified gravity theories for cosmic acceleration must introduce a length scale $L_G$ below which general relativity is recovered. This length scale might be associated, for example, with the dynamics of new scalar degrees of freedom. The value of this length scale, compared with the size of the systems investigated, plays a crucial role in characterizing the behavior of modified gravity theories. If $L_G$ is much smaller than the length scales over which linear cosmological structure formation is measured (e.g., $L_G=1$ Mpc), then the factorization of cosmological perturbations on scales larger than the Jeans length remains valid. We denote this case scale-independent modified gravity. These theories are like GR in that the curvature perturbation is conserved for the relevant length scales. This condition yields a great simplification of the dynamics, reducing cosmological perturbations to quadratures. In GR, waves propagating in the scalar sector travel only at the speed of sound, so that scale-dependence of transfer functions arises only below the Jeans length. Modified gravity theories, however, typically have additional fields supporting waves that travel at the speed of light. In this case, $L_G$ is the Hubble length and the factorization of cosmological perturbations no longer holds. Theories of this type are called scale-dependent modified gravity theories. 
Now two quantities, $\gamma$ and the gravitational coupling $G_\Phi$ (the generalization of Newton’s constant in the Poisson equation for $\Phi$), are needed to characterize gravity, and both will vary with length scale as well as with time. Even such complicated models can still be approximated by parameterizations, as we will discuss below. Ref. [@HuSawicki07b] found a way of bridging super- and sub-horizon modifications to GR. Our parametrization, while not as general, is simpler because it only involves a few free parameters and no free functions. Because we do not start with a Lagrangian, we cannot explain cosmic acceleration. We take the cosmic expansion history as given from observations. Rather than providing a complete theory of modified gravity, we provide a framework for observational tests of gravity in cosmology. This paper is organized as follows. Section \[sec:long\] describes the curvature perturbation and its use to build scale-independent modified gravity theories. Section \[sec:observe\] works out the growth of structure on sub-horizon scales, shows that the Poisson equation is modified, and derives results for cosmic microwave background anisotropy and weak gravitational lensing for scale-independent modified gravity theories. Section \[sec:f(R)\] then examines the sub-horizon behavior of a currently popular class of theories known as $f(R)$ models and shows that they are scale-dependent. Section \[sec:shear\] considers the alternative hypothesis that dark energy has a peculiar stress tensor while gravity is governed by GR. Finally, results are summarized and conclusions are presented in Section \[sec:discuss\]. 
Gravity at long wavelengths {#sec:long}
===========================

Our starting point is the perturbed Robertson-Walker metric in conformal Newtonian gauge [@MFB92]: $$\label{pertFRW} ds^2=a^2(t)[-(1+2\Phi)dt^2+(1-2\Psi)\gamma_{ij}dx^idx^j]\ ,$$ where $t$ is conformal time, $a(t)=1/(1+z)$ is the expansion scale factor, and $\gamma_{ij}({\bf x},K)$ is the three-metric for a space of constant spatial curvature $K$, e.g. $\gamma_{ij}dx^idx^j=d\chi^2+r^2(\chi,K)d\Omega^2$ where $r(\chi,K)\sqrt{K}=\sin(\chi\sqrt{K})$ for $K>0$ and is analytically continued for $K\le0$. Note that different conventions appear in the literature for the metric perturbations: $\Phi=\Psi_{\rm Hu}=\psi$ and $\Psi=-\Phi_{\rm Hu}=\phi$ where $(\Psi_{\rm Hu},\Phi_{\rm Hu})$ are the potentials of Ref. [@HuSawicki07b] and $(\psi,\phi)$ are the potentials of Ref. [@MaBert95]. Linear perturbation theory is assumed to be valid throughout this paper. The evolution of the scale factor can depend, in principle, on any quantities characterizing the geometry and composition of the Robertson-Walker background, for example the spatial curvature $K$ and the entropy density (or equivalently, parameters characterizing the equation of state). We neglect entropy perturbations and consider only curvature perturbations on a flat ($K=0$) background. Assuming that the unknown gravity theory has a well-defined infrared limit obeying causality, long wavelength curvature perturbations should evolve like separate Robertson-Walker universes. In this case it is possible to transform to a new set of coordinates, $t\to t-\alpha(t)$ and $\chi\to\chi(1+\zeta)$ where $\zeta$ is constant and $\dot\alpha=\Phi+\Psi-\zeta$, such that the new line element is eq. (\[pertFRW\]) with $\Phi=\Psi=0$ and having spatial curvature $K(1+2\zeta)$. Thus, $\zeta$ is one-half the spatial curvature perturbation.
Enforcing the coordinate transformation leads to the consistency condition [@Bertschinger06] $$\label{consistent} \frac{1}{a^2}\frac{\partial}{\partial t}\left(\frac{a^2\Psi} {\cal H}\right)+\Phi-\Psi=\left[\frac{1}{a}\frac{\partial} {\partial t}\left(\frac{a}{\cal H}\right)+\frac{K}{{\cal H}^2} +O(k^2)\right]\zeta\ ,$$ where ${\cal H}=\dot a/a=aH$ and $k$ is the comoving wavenumber. Although Ref. [@Bertschinger06] states that large-scale shear stress is neglected in this result, in fact eq. (\[consistent\]) is valid for $k\to0$ in general relativity (and presumably in modified gravity theories) even if shear stress is present. The curvature term $K/{\cal H}^2$ has been computed assuming the Friedmann equation is valid; in modified gravity theories this term might be different but it must vanish when $K=0$. Hereafter we assume $K=0$ and drop the curvature term. Eq. (\[consistent\]) may be regarded as a definition of the curvature perturbation $\zeta$ for arbitrary theories of gravity. For long wavelengths, $\zeta$ is independent of time. Sound waves in the matter sector or wave propagation in the modified gravity sector cause $\zeta$ to change with time on small scales. These changes are implied by the neglected terms proportional to $k^2\zeta$ in eq. (\[consistent\]). For now we ignore such terms, in effect assuming that both the Jeans length and $L_G$ are smaller than the cosmological scales of interest. During the radiation-dominated era the Jeans length is comparable to the Hubble length, and $\zeta$ (and $\Phi$ and $\Psi$) is damped for scales smaller than the Jeans length. We assume that during this early period of evolution general relativity is an excellent approximation so that the damping is well described by the transfer functions computed using standard codes [@MaBert95; @cmbfast]. When modified gravity becomes important at low redshift, the Jeans length has dropped to a few Mpc or less. 
In practice, we modify CMBFAST only for $z<30$ and then do so in such a way as to enforce eq. (\[consistent\]) with $\zeta$ corrected from its primeval value using the GR transfer function at $z=30$. Assuming a well-defined infrared limit, the time and space dependence of perturbations must factorize for wavelengths longer than the Jeans length or $L_G$, e.g. $$\label{factorize} \Phi({\bf k},t)=F(a)\zeta({\bf k})+O(k^2\zeta)$$ where ${\bf k}$ is the wavevector. Factorization implies that the ratio of the two gravitational potentials depends only on time as $k\to0$. Therefore we may write, for any causal theory of gravity having a well-defined infrared limit, $$\label{gammadef} \Psi({\bf k},t)=\gamma(a)\Phi({\bf k},t)+O(k^2\zeta)\ .$$ In modified gravity theories, $\gamma(a)$ is the only degree of freedom important for long-wavelength scalar perturbations. The conditions of causality and a well-defined infrared limit greatly restrict the dynamics of modified gravity theories. We now make the key assumption that the terms proportional to $k^2\zeta$ in eqs. (\[consistent\])–(\[gammadef\]) can be neglected not only on super-horizon scales $k<{\cal H}^{-1}$ but also, as they can be in general relativity, on sub-horizon scales down to the Jeans length. This assumption defines a class of theories we call scale-independent modified gravity models. Under these assumptions, modified gravity is completely specified on large scales by the scale-independent function $\gamma(a)$. At high redshift when dark energy is unimportant we require $\gamma\to1$ in order to retain the success of general relativity in explaining the cosmic microwave background anisotropy [@Spergel07]. Thus we adopt the following parameterization for scale-independent modified gravity: $$\label{gamma} \gamma(a) = 1+\beta a^s\ ,$$ where $\beta$ and $s>0$ are constants. 
Eqs.(\[consistent\])–(\[gamma\]) now give $$\label{Fquad} \gamma F(a)=a^{-2}{\cal H}\gamma^{(1/s)} \int^a_0 a\gamma^{-(1/s)}\frac{d}{da} \left(\frac{a}{\cal H}\right)\,da\ .$$ Changing the lower limit of integration introduces a rapidly decaying solution which we ignore. In general relativity with negligible shear stress, $\gamma=1$. When a component with constant equation of state parameter $w>-\frac{1}{3}$ is dominant, $a\propto t^n$ with $n=2/(1+3w)$ yielding $F=(3+3w)/(5+3w)$. Thus, for long wavelengths the potential drops from $\Phi=\frac{2}{3}\zeta$ during the radiation-dominated era to $\Phi=\frac{3}{5}\zeta$ during the matter-dominated era. Of greater interest here is the evolution of the potentials during the matter-dominated era with modified gravity parameterized by (\[gamma\]). Figure \[fig:F\] shows the results for $s=1$ and $s=3$ as well as the GR case $\beta=0$. The background expansion history is chosen to match GR with $\Omega_m=0.284$ and a cosmological constant with $\Omega_\Lambda=0.716$. The choice $s=3$ matches Caldwell et al. in the limit $\beta\ll 1$ [@Caldwell07]. However, as we will see below, our results differ from theirs because they assumed the validity of some components of the Einstein field equations, while we instead required consistent causal evolution on large scales. This difference will be discussed further below in Section \[sec:shear\]. Figure \[fig:F\] shows that the Newtonian potential $\Phi$ is enhanced and the spatial curvature $\Psi=\gamma\Phi$ is diminished for $\gamma<1$, compared with general relativity. For $s=3$ the modifications occur later because $|\gamma-1|$ is smaller at earlier times. The quantitative results depend on the validity of eq.(\[consistent\]) but this qualitative behavior (the Newtonian potential being enhanced for $\gamma<1$) should persist in general. 
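Eq. (\[Fquad\]) reduces $F(a)$ to a single quadrature once the background expansion ${\cal H}(a)$ is fixed. The following is a minimal numerical sketch (an illustration, not the authors' code) using the $\Lambda$CDM background quoted in the text, $\Omega_m=0.284$ and $\Omega_\Lambda=0.716$, with $H_0$ set to 1 since it cancels in $F$; it checks the GR limit $F\to(3+3w)/(5+3w)=3/5$ deep in the matter era:

```python
import math

Om, OL = 0.284, 0.716            # background expansion matched to LCDM, as in the text

def calH(a):
    # conformal Hubble rate aH, with H0 = 1 (H0 cancels in F)
    return a * math.sqrt(Om / a ** 3 + OL)

def gamma(a, beta, s):
    # the parameterization gamma(a) = 1 + beta * a^s
    return 1.0 + beta * a ** s

def F(a, beta, s, n=4000):
    # gamma F(a) = a^-2 calH gamma^(1/s) * int_0^a a' gamma^(-1/s) d/da'(a'/calH) da'
    eps = 1e-8                   # lower cutoff; the decaying mode it introduces is negligible
    da = (a - eps) / n
    total = 0.0
    for i in range(n):           # midpoint rule
        x = eps + (i + 0.5) * da
        h = 1e-6 * x             # central difference for d/da (a / calH)
        d = ((x + h) / calH(x + h) - (x - h) / calH(x - h)) / (2 * h)
        total += x * gamma(x, beta, s) ** (-1.0 / s) * d * da
    return a ** -2 * calH(a) * gamma(a, beta, s) ** (1.0 / s) * total / gamma(a, beta, s)

# deep in matter domination, GR (beta = 0) gives F = 3/5
print(F(0.01, 0.0, 1.0))         # close to 0.6
```

The same routine evaluated at late times with $\beta\ne0$ reproduces the qualitative behavior described in the text (for $s=1$, $\gamma<1$ enhances the integrand through the $\gamma^{-1/s}$ weight, enhancing $\Phi$).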
Observables for scale-independent modified gravity {#sec:observe}
==================================================

With the time evolution of the metric in hand for long wavelengths we are now able to calculate observable quantities for scale-independent modified gravity theories parameterized by the constants $(\beta,s)$. The effects considered here are the growth of structure in the dark matter, microwave background anisotropy, and weak gravitational lensing.

Growth of structure
-------------------

Until now, no assumptions have been made about dynamics in the matter sector except for causality and consistency with a spatially homogeneous, uniformly expanding Robertson-Walker solution. To follow the growth of structure we must specify how the matter fields are coupled to gravity. Here we assume that the dark matter obeys the weak equivalence principle, i.e. collisionless dark matter particles follow geodesics. This choice explicitly forces scalar-tensor theories to the Jordan frame in which matter fields are minimally coupled to gravity. In the conformal Newtonian gauge, on sub-horizon scales where $|\delta|\gg|\Psi|$ with $\delta\equiv\delta\rho/\rho_m$, where $\rho_m$ is the average mass density, cold dark matter fluctuations obey the evolution equation $$\label{cdmevol} \ddot\delta+{\cal H}\dot\delta=-k^2\Phi\ .$$ This equation follows from particle number conservation and geodesic motion or, equivalently, from energy-momentum conservation. The density perturbation field can be written as $$\label{Dfactor} \delta({\bf k},t)=-k^2D(a,k)\zeta_i({\bf k})$$ where $\zeta_i$ is the curvature perturbation at $a=a_i$, which we take to be $z=30$ so that the Jeans length is smaller than the scales of interest and modified gravity has not yet become important. For $a>a_i$, $\Phi$ factorizes and eq. (\[cdmevol\]) can be reduced to quadratures for $D(a,k)$.
With initial conditions $D(a,k)=D_i(k)$ and $\partial_t D(a,k)=\dot D_i(k)$ at $a=a_i$, the solution is $$\label{Density} D(a,k)=D(a)+D_i(k)+a_iy(a)\dot D_i(k)\ ,$$ where \[Dysol\] $$\begin{aligned} D(a)&\equiv&y\int^a_{a_i}\frac{F}{\cal H}\,da - \int^a_{a_i}\frac{yF}{\cal H}\,da\ ,\label{Dsol}\\ y(a)&\equiv&\int^a_{a_i}{\frac{da}{a^2{\cal H}}}\ . \label{ydef}\end{aligned}$$ The function $y(a)$ asymptotes to a constant but $D(a)\propto a$ for $a\gg a_i$ in the matter-dominated era. Thus the late-time solution for density perturbation growth factorizes, $D(a,k)=D(a)$ in eq.(\[Dfactor\]). This perturbation growth is often represented as a function of redshift by defining $g(z)\equiv D(a)/D(1)$ with $a=1/(1+z)$. Figure \[fig:I\] shows the logarithmic derivative of the density perturbation growth versus time for our parameterized modified gravity models. In the $\Lambda$CDM model, $d\ln D/d\ln a\approx[\Omega_m(a)]^{6/11}$ [@Nesseris07]. The transition to a cosmological constant-dominated universe leads to a suppression of growth. If instead gravity is modified, the growth rate can be increased or decreased relative to the GR case. The qualitative effects are easy to understand. We already saw that models with $\gamma<1$ ($\beta<0$) get an enhanced Newtonian potential $\Phi$. A stronger potential increases the gravitational force on density perturbations leading to an enhanced growth rate. The simplest test of growth of perturbations is the total perturbation growth by redshift zero, which is characterized by the variance of density fluctuations in spheres of radius $R_8=8\,h^{-1}$ Mpc, $$\label{sigma8} \sigma^2_8=\int^{\infty}_0\frac{d^3k}{(2\pi)^3}P_m(k)W^2(kR_8)\ ,$$ where $W(x)=3j_1(x)/x$. 
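The quadratures in eqs. (\[Density\])–(\[ydef\]) are straightforward to evaluate numerically. The sketch below (an illustrative implementation under stated assumptions, not the authors' code) works in the GR limit ($\beta=0$, so $F$ comes from eq. (\[Fquad\]) with $\gamma=1$) on the $\Lambda$CDM background with $\Omega_m=0.284$ and $H_0=1$, starts from growing-mode initial data at $a_i=1/31$ ($z=30$) using the matter-era solution $D\approx 2a/(5\Omega_m)$, and checks the quoted approximation $d\ln D/d\ln a\approx[\Omega_m(a)]^{6/11}$ at $a=1$:

```python
import math

Om, OL = 0.284, 0.716
ai = 1.0 / 31.0                        # a_i at z = 30, as in the text

def calH(a):                           # conformal Hubble rate, H0 = 1
    return a * math.sqrt(Om / a ** 3 + OL)

def dadH(a):                           # d/da (a / calH), central difference
    h = 1e-6 * a
    return ((a + h) / calH(a + h) - (a - h) / calH(a - h)) / (2 * h)

# F(a) in GR from eq. (Fquad) with gamma = 1, tabulated cumulatively on a grid
N = 20000
eps = 1e-6
a_grid = [eps + (1.0 - eps) * i / N for i in range(N + 1)]
cum = [0.0] * (N + 1)
for i in range(1, N + 1):
    x0, x1 = a_grid[i - 1], a_grid[i]
    cum[i] = cum[i - 1] + 0.5 * (x0 * dadH(x0) + x1 * dadH(x1)) * (x1 - x0)
Fg = [a_grid[i] ** -2 * calH(a_grid[i]) * cum[i] for i in range(N + 1)]

# growing-mode initial data: deep in matter domination D ~ 2 a / (5 Om)
i0 = next(i for i in range(N + 1) if a_grid[i] >= ai)
Di = 2.0 * ai / (5.0 * Om)
Didot = 2.0 * ai * calH(ai) / (5.0 * Om)    # conformal-time derivative at a_i

# accumulate y(a), int F/calH da, and int y F/calH da from a_i (trapezoid rule)
y = IF = IyF = 0.0
for i in range(i0 + 1, N + 1):
    a0, a1 = a_grid[i - 1], a_grid[i]
    da = a1 - a0
    y_new = y + 0.5 * (1 / (a0 ** 2 * calH(a0)) + 1 / (a1 ** 2 * calH(a1))) * da
    IF_new = IF + 0.5 * (Fg[i - 1] / calH(a0) + Fg[i] / calH(a1)) * da
    IyF += 0.5 * (y * Fg[i - 1] / calH(a0) + y_new * Fg[i] / calH(a1)) * da
    y, IF = y_new, IF_new

D1 = y * IF - IyF + Di + ai * y * Didot     # D(a = 1), eq. (Density)
dDda = (IF + ai * Didot) / calH(1.0)        # dD/da at a = 1 (a^2 = 1)
f1 = dDda / D1                              # d ln D / d ln a at a = 1
print(f1, Om ** (6.0 / 11.0))
```

Here the derivative simplifies because $dD/da = [\,\int_{a_i}^a (F/{\cal H})\,da' + a_i\dot D_i\,]/(a^2{\cal H})$: the two $yF/{\cal H}$ terms produced by differentiating eq. (\[Dsol\]) cancel.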
The power spectrum of matter density fluctuations is $$\label{powerspec} P_m(k)=\frac{2\pi^2}{k^3}\Delta_{\cal R}^2\left(\frac{k}{k_0}\right)^{n_s-1} T^2_m(k,z=30)\frac{D^2(z=0)}{D^2(z=30)}\ ,$$ where $\Delta_{\cal R}$ is the amplitude of the initial scalar curvature fluctuations on scale $k_0$ and $T_m$ is the transfer function for matter fluctuations in the synchronous gauge relative to $\zeta = {\cal{R}}$ computed by CMBFAST (which accounts for the suppression of growth during the radiation-dominated era). We adopt $\Delta_{\cal R}^2=2.4\times 10^{-9}$, $k_0=0.002$ Mpc$^{-1}$, and $n_s=0.958$ [@Spergel07]. Our modification of gravity is scale-invariant in that the $k$-dependence of the dark matter power spectrum is unchanged relative to GR. Thus the amplitude of density perturbations $\sigma_8$ depends on modified gravity only through the enhancement or diminution of growth shown in Fig. \[fig:I\]. The specific results obtained here assume that the gravitational potentials factorize on scales larger than a few Mpc. If this is true, then CMB and galaxy clustering results on all scales can be fit by a single modified gravity theory with one function $\gamma(a)$ or, equivalently, $D(a)$. If, however, modified gravity introduces a length scale $L_G$ intermediate between $R_8$ and the Hubble length, then the growth of structure will depend on wavenumber in a way not described by a scale-independent $D(a)$. In Section \[sec:scaledep\] below we consider an alternative scale-dependent parameterization of modified gravity and examine the observable consequences. For now, we assume $L_G<R_8$ or equivalently that super-horizon relations apply, as they do in GR, to sub-horizon scales all the way down to the larger of the Jeans and nonlinear lengths. Figure \[fig:Sigma\] shows a contour plot of $\sigma_8$ (normalized to the GR case) for different choices of the modified gravity parameters. As expected, smaller values of $\gamma$ (i.e., $\beta<0$) lead to larger amplitude. 
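A full evaluation of eq. (\[sigma8\]) requires the CMBFAST transfer function, but the top-hat window itself admits a quick consistency check. For white noise, $P_m(k)=P$, eq. (\[sigma8\]) gives $\sigma^2=P/V$ with $V=\frac{4}{3}\pi R_8^3$, which is equivalent to $\int_0^\infty x^2W^2(x)\,dx=3\pi/2$. A small sketch (illustrative only; the cutoff and step size are assumptions):

```python
import math

def W(x):
    # Fourier transform of a spherical top-hat: W(x) = 3 j_1(x) / x
    if x < 1e-3:
        return 1.0 - x * x / 10.0      # small-x series; W(0) = 1
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x ** 3

def window_norm(xmax=3000.0, dx=0.01):
    # int_0^xmax x^2 W(x)^2 dx by the midpoint rule; exact value is 3*pi/2
    n = int(xmax / dx)
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        total += x * x * W(x) ** 2 * dx
    return total

print(window_norm())   # close to 3*pi/2 = 4.712...
```

The slow $x^{-2}$ falloff of $x^2W^2$ is why the integration must be carried to large $x$; the truncation at $x_{\max}=3000$ leaves an error of a few times $10^{-3}$.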
Thus, modified gravity changes the amplitude of galaxy clustering relative to the CMB, and could explain any apparent discrepancy between the values of $\sigma_8$ inferred from CMB analysis and galaxy clustering or lensing measurements.

Modified Poisson Equation
-------------------------

Rearranging eqs. (\[factorize\]) and (\[Dfactor\]), we arrive at a modified Poisson equation relating the Newtonian potential to the overdensity: $$\label{Poisson} \nabla^2\Phi = \frac{F(a)}{D(a)}\delta \equiv 4\pi G_{\Phi}(a) a^2 \rho_m \delta\ .$$ The space curvature potential obeys a similar Poisson equation, $$\label{Poisson-Psi} \nabla^2\Psi = 4\pi G_{\Psi}(a) a^2 \rho_m \delta\ ,$$ where $G_{\Psi}(a) = \gamma G_{\Phi}(a)$. Plots of $G_{\Phi}$ and $G_{\Psi}$ are shown in Figure \[fig:G\]. Their time dependence is dominated by the potentials $F(a)$ and $\gamma(a)F(a)$ since $\delta$ is less sensitive to our modified gravity parameters ($\beta,s$). As a result, we see the same qualitative behavior as in Fig. \[fig:F\]. Models with $\gamma < 1$ produce a greater value of $G_{\Phi}$ relative to the $\Lambda$CDM model, and larger values of $s$ produce stronger late-time deviations. In GR, the gravitational coupling $G$ is constant. In many alternative theories of gravity, the strength of gravity varies with time (and also with place, for length scales smaller than $L_G$). Time-varying $G$ is well known for scalar-tensor theories [@Acquaviva05; @MotaEmail], but we find that it is a generic feature of all modified gravity theories with $\gamma\ne1$ on cosmological scales. The time-variation of $G$, represented by $\dot G/G$, has been severely constrained by measurements in the solar system, in stars, and in the early universe [@TestG]. Limits on larger scales are provided by the microwave background [@TestGCMB]. The CMB acoustic peak structure will be unaffected if variations occur only long after recombination.
Modified gravity explanations of cosmic acceleration suggest the need to constrain $\dot G/G$ on large length scales in the late universe. It would be very interesting to know, for example, to what extent the structure of clusters of galaxies and their X-ray emission can be used to constrain $\dot G/G$. Note that the method used to derive our modified Poisson equations is roundabout. Had we started with a Lagrangian, the gravitational field equations would directly yield the gravitational coupling strength. Because we started with a phenomenological description of modified gravity, we instead deduce the dynamics of this coupling from the requirements of causal evolution and the weak equivalence principle. On small scales our treatment breaks down as modifications of general relativity must become scale-dependent. Nonetheless, the motivation to investigate limits on $\dot G/G$ on scales much larger than the solar system remains valid. CMB temperature anisotropy -------------------------- Modified gravity (or dark energy) affects the microwave background only at late times through the integrated Sachs-Wolfe (ISW) effect: $$\frac{\Delta T}{T}(\hat{n})= \int (\dot{\Psi} + \dot{\Phi})\,d\chi$$ where $\chi$ is the comoving radius and $\hat n$ is the photon direction. The ISW effect arises when the gravitational potentials change with time, as occurs during transitional periods in cosmic evolution. One such contribution occurs during the transition from radiation to matter domination. The other occurs today, during the transition to an accelerating expansion. The physics governing the matter-radiation transition is well explained by GR, while the physics governing the transition today is (for the models considered here) dependent on modified gravity parameters. Because the recent ISW effect arises relatively nearby ($z<2$), it shows up only at large angular scales.
The ISW contribution from some modified gravity models has been studied in several recent papers [@Song06; @Caldwell07]. We computed the CMB temperature anisotropy spectrum by modifying CMBFAST [@cmbfast] to replace the ISW contribution of the $\Lambda$CDM model with that for our models assuming the factorization of the potential. The results are shown in Figure \[fig:T\]. As expected, only the low-order multipoles are affected. Models with higher $s$ produce larger changes because they lead to larger time derivatives. Models with $0.2<\gamma<1$ ($-0.8<\beta<0$) produce less anisotropy because of destructive interference between the ISW and primary anisotropy contributions. Although decreasing the quadrupole moment improves agreement with observations, little statistical weight can be given to this conclusion because the modifications, at least for $0.2\le\gamma \le1.5$, are smaller than cosmic variance. However, cross-correlating the CMB with galaxy surveys could potentially be a more discriminating probe of modified gravity [@HoEtAl08; @Cooray02]. It was recently found [@Daniel08] that our results are consistent with recipe R1 of Caldwell et al. [@Caldwell07]. However, we expect differences from our Figure \[fig:T\] for recipe R3 of [@Caldwell07] since $\zeta$ is not conserved on large scales in this scheme. This difference will be discussed below in Section \[sec:shear\]. Weak lensing ------------ Metric perturbations $\Phi+\Psi$ affect both the energy of photons (ISW effect) and their direction of travel (gravitational lensing). Gravitational lensing causes both magnification (or de-magnification) and differential stretching (shear) of background images. The correlation function or power spectrum of weak gravitational lens shear is an observable measure of large-scale structure.
The weak lensing power spectrum is given by [@WeakLens] $$\label{shearPS} P^{\kappa}_{l}=\int^{\chi_\infty}_0d\chi\, {W^2(\chi)}\frac{l^4}{\chi^4}P_{\Psi+\Phi}(k=\frac{l}{\chi}, \chi)\ ,$$ where $$\frac{P_{\Psi+\Phi}}{(1+\gamma)^2} =\frac{2\pi^2\Delta^2_{\cal R}}{k^3} \left(\frac{k}{k_0}\right)^{n_s-1}T_{\Phi}^2 (k,z=30)\frac{F^2(z)}{F^2(z=30)}\ ,$$ and $$W(\chi) = \int^{\chi_\infty}_\chi d\chi' \frac{\chi' - \chi}{\chi'} \eta(\chi')\ .$$ Here $\chi_\infty$ is the comoving distance to $z=10$ (the results change by a negligible amount if the maximum redshift lies anywhere between $6\le z\le15$), $T_{\Phi}$ is the transfer function of the Newtonian potential relative to $\zeta$ computed at $z=30$ using CMBFAST, and $\eta(\chi)$ is the radial distribution of sources, normalized with $\int\eta(\chi)\,d\chi=1$. Note that these formulae assume a spatially flat universe. Our lensing analysis uses the source distribution $$\eta(z)\propto z^2\exp[{-(1.41z/z_{\rm med})}^{1.5}]$$ with $z_{\rm med}=1.26$ [@Massey]. This distribution approximates the galaxy redshift distribution of the COSMOS survey, if there were no clumping. Measurements of weak gravitational lens shear, for galaxies separated by angle $\theta$ on the sky, provide an estimate of shear correlation functions including $$C_+(\theta)=\frac{1}{2\pi}\int^{\infty}_0 P^{\kappa}_{l}J_0(l\theta) l\,dl\ .$$ Modifying gravity changes $F(z)$, thereby changing $C_+(\theta)$. Figure \[fig:C+\] plots $C_+(\theta)$ for modified gravity, normalized at each $\theta$ by its GR value. At $z=1$, 1 Mpc corresponds to 2.15 arcmin. Hence, for some of the scales shown in Figure \[fig:C+\] structures are nonlinear and thus beyond the regime of validity of the current framework. However, a scheme that maps the linear power spectrum to a nonlinear power spectrum would correct this flaw [@PeacockDodds].
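The lensing kernel $W(\chi)$ for the quoted source distribution can be tabulated with elementary numerics. The sketch below assumes a flat $\Lambda$CDM distance-redshift relation with $\Omega_m=0.26$ and $H_0=70$ km/s/Mpc (assumed values, not taken from the text) and rewrites the $\chi'$ integral as an integral over $z'$ using $\eta(\chi)\,d\chi=\eta(z)\,dz$:

```python
import math

OMEGA_M, H0, C = 0.26, 70.0, 299792.458   # flat LCDM background assumed; H0 in km/s/Mpc
Z_MED, ZMAX, N = 1.26, 10.0, 800          # z_med from the text; sources out to z = 10

def hubble(z):
    return H0 * math.sqrt(OMEGA_M * (1.0 + z) ** 3 + 1.0 - OMEGA_M)

def eta(z):
    """Unnormalized source redshift distribution from the text."""
    return z ** 2 * math.exp(-((1.41 * z / Z_MED) ** 1.5))

dz = ZMAX / N
zg = [(i + 0.5) * dz for i in range(N)]

# Comoving distance chi(z) in Mpc by cumulative midpoint integration of c/H(z).
chi, chis = 0.0, []
for z in zg:
    chi += C / hubble(z) * dz
    chis.append(chi)

norm = sum(eta(z) for z in zg) * dz   # fixes int eta(z) dz = int eta(chi) dchi = 1

def lensing_kernel(i):
    """W(chi_i) = int_{chi_i}^{chi_inf} dchi' [(chi' - chi_i)/chi'] eta(chi'),
    evaluated as a sum over the z' grid behind the i-th shell."""
    ci = chis[i]
    return sum(eta(zg[j]) * (1.0 - ci / chis[j]) for j in range(i + 1, N)) * dz / norm
```

The kernel is close to unity at the observer and falls monotonically to zero at $\chi_\infty$, which is why the shear signal weights structure roughly midway between observer and sources.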
On such small scales, our assumption of scale-independent modified gravity may also be invalid, so Figure \[fig:C+\] should be regarded as suggestive, but not definitive, of modified gravity effects on weak lensing. As expected, models with $\gamma<1$ have larger shear correlations because they have a larger $F(z)$ and therefore more growth of structure (despite having a smaller $1+\gamma$). For angular scales less than about 10 arcmin, the effect is almost equivalent to a constant change in the normalization of the power spectrum, i.e., in the value of $\sigma_8$. At larger angular scales, the redshift-dependence of $F(z)$ at small redshift translates to a dependence on distance and hence on angular scale; however, this is in a regime where the shear correlations are small and difficult to measure. Thus, scale-independent modified gravity theories predict an amplitude of weak lensing different from GR with the same CMB primary anisotropy. The acoustic peak amplitudes tightly constrain $\Delta_{\cal R}$. In principle, measurements of $\sigma_8$ based on the CMB acoustic peaks (which are unaffected by modified gravity) could differ both from measurements based on galaxy clustering \[which depend on $D(z)$\] and those based on weak lensing \[which depend on $F(z)$\]. Current error bars are inconclusive [@Spergel07], but this comparison of different $\sigma_8$ values could eventually provide a powerful test of GR. Comparison with $f(R)$ theories {#sec:f(R)} =============================== Substantial work has already been done investigating modified gravity effects for $f(R)$ theories. Here we consider theories in which the Ricci scalar $R$ is replaced by $R+f(R)$ in the Einstein-Hilbert action, and where the action is extremized with respect to the metric. In these models, the field equations are generically fourth-order.
In effect, modified gravity introduces a new propagating scalar degree of freedom coupled to gravity, the scalaron $f_R\equiv df/dR$ [@Starobinsky]. The Compton wavelength of the scalaron imprints a physical length scale, which is made dimensionless by combining with the wavenumber $k$: $$\label{Qdef} Q\equiv\frac{3k^2}{a^2}\frac{f_{RR}}{1+f_R}\ ,$$ where $f_{RR}\equiv d^2f/dR^2$. Several papers have recently discussed cosmological perturbation evolution for metric $f(R)$ theories [@f(R); @HuSawicki07a; @PS07]; our notation most closely follows that of ref. [@PS07] except our potentials $\Phi$ and $\Psi$ are exchanged from theirs. In $f(R)$ theories, $\zeta$ is conserved on super-horizon scales [@HuSawicki07b]. However, the scalaron obeys a nonlinear Klein-Gordon equation with two length scales: the Hubble length and the scalaron Compton wavelength. The interesting case for large-scale structure is the quasi-static regime of linear, sub-horizon perturbations ($k^2\gg{\cal H}^2$) where [@PS07] $$\label{fPoisson} \nabla^2\Phi\approx\frac{4\pi Ga^2\rho_m}{1+f_R} \left(\frac{3+4Q}{3+3Q}\right)\delta\ ,\ \ \gamma\approx\frac{3+2Q}{3+4Q}\ .$$ Differentiating eq. (\[consistent\]) and substituting eqs.(\[cdmevol\]) and (\[fPoisson\]) along with the background evolution equations (5) and (6) of ref. 
[@PS07] for a universe containing only nonrelativistic matter and (optionally) a cosmological constant yields $$\label{zetadot} \dot\zeta=U\dot\Psi+V\Psi\ ,$$ where $$\label{Udef} U\equiv\frac{a{\cal H}}{\Gamma^2}\frac{\partial}{\partial t} \left(\frac{\Gamma^2B}{a{\cal H}^2}\right)+\frac{2QB}{3+2Q}$$ and $$\begin{aligned} \label{Vdef} V\equiv\frac{4\pi Ga^2\rho_m\Gamma B}{\gamma {\cal H}}+\frac{\partial}{\partial t}\left(\frac{B}{\gamma} \right) -\frac{\Gamma B}{{\cal H}a^2}\frac{\partial}{\partial t} \left[a\frac{\partial}{\partial t}\left(\frac{a}{\Gamma} \right)\right]\ ,\nonumber\\\end{aligned}$$ where we have defined the auxiliary variables \[BGammadef\] $$\begin{aligned} \Gamma&\equiv&\frac{G_\Psi}{G}=\frac{1}{1+f_R}\left( \frac{3+2Q}{3+3Q}\right)\ ,\label{Gammadef}\\ B&\equiv&a\left[\frac{\partial}{\partial t}\left(\frac{a} {\cal H}\right)\right]^{-1}=\frac{2(1+f_R){\cal H}^2}{8\pi Ga^2\rho_m+a^2\partial_t(\dot f_R/a^2)} \label{Bdef}\ .\qquad\end{aligned}$$ General relativity with a cosmological constant corresponds to the case $f=2\Lambda$, $f_R=0$, and $\gamma=\Gamma=1$, yielding $U=V=0$. Thus, $\zeta$ is conserved even on sub-horizon scales in a $\Lambda$CDM universe. However, this is no longer true if $f_R\ne0$. Two distinct effects modify the curvature perturbation. First, the $1+f_R$ factor in (\[fPoisson\]) modifies the evolution on sub-horizon scales. In practice, this effect is small if $|f_R|\ll1$, as is favored by galactic structure considerations [@HuSawicki07a]. In this case, the background expansion history is nearly identical to GR with a cosmological constant and gravity is significantly modified only at wavelengths approaching the scalaron Compton wavelength, where $Q\sim1$. For long wavelengths such that $Q\ll1$, $\gamma\approx1-\frac{2}{3}Q$ and the corrections introduced to eq. (\[consistent\])–(\[gammadef\]) by scalaron dynamics are $O(k^2)$. The treatment given in the preceding sections remains valid for $|f_R|\ll1$ and $Q\ll1$. 
However, this limit corresponds to general relativity. Unfortunately, the treatment presented in Section \[sec:long\], which was based on a scale-invariant modification of gravity, breaks down just where $f(R)$ theories begin to deviate significantly from GR. The $f(R)$ models generically have $\gamma-1\propto k^2$ for sub-horizon wavelengths longer than the scalaron Compton wavelength. These models have a scale-dependent modification of gravity. Although we cannot use the results of Section \[sec:long\] to describe them, it is still possible to parameterize scale-dependent modified gravity models so as to obtain useful results for the sub-horizon growth of large-scale structure. A simple parameterization inspired by $f(R)$ theories is presented in the next section. Scale-dependent modified gravity {#sec:scaledep} ================================ For a wide class of theories, modified gravity leads generically to a Poisson equation with variable gravitational coupling. In the scale-invariant modifications of Section \[sec:long\], the Newton constant is replaced by the time-varying $G_\Phi(t)$ which follows from the scale-invariant potential ratio $\gamma(t)$. In scale-dependent modified gravity theories, on the other hand, $G_\Phi(k,t)$ and $\gamma(k,t)=G_\Psi/G_\Phi$ are functions of length scale as well as time, and there is no simple relation between them. Thus, more parameters are needed to characterize such theories [@Amin07]. Despite their generality, $f(R)$ theories with $f_R\ll1$ have, for a wide range of sub-horizon length scales, a very simple form for $G_\Phi$ and $\gamma$ given by eq. (\[fPoisson\]). 
To arrive at a simple phenomenological model we simplify the time dependence as follows: $$\label{scaledepMG} \frac{G_\Phi}{G}=\frac{1+\alpha_1k^2a^s}{1+\alpha_2k^2a^s}\ ,\ \ \gamma=\frac{1+\beta_1k^2a^s}{1+\beta_2k^2a^s}\ .$$ We assume that these relationships hold only in the linear regime of cosmological density perturbations, and that $G_\Phi/G\to1$ and $\gamma\to1$ on solar system scales. We also require GR to hold at early times, implying $s>0$. Eq. (\[scaledepMG\]) describes $f(R)$ theories with $|f_R|\ll1$ if $\alpha_1=\frac{4}{3}\alpha_2=2\beta_1=\beta_2=4f_{RR}/a^{2+s}$. As a simple post-Newtonian model we will now assume that $(\alpha_1,\alpha_2,\beta_1,\beta_2)$ are arbitrary constants with units of length squared. In order to ensure that $G_\Phi/G$ and $\gamma$ are finite for all $k$, we require $\alpha_2$ and $\beta_2$ to be non-negative. Moreover, we need $G_\Phi > 0$ in order to ensure that gravity is attractive. Hence, $\alpha_1$ must be non-negative as well. This scale-dependent parameterization has a different dependence on length scale than that of Amin et al.[@Amin07]. It is chosen to reproduce the scale-dependence of $f(R)$ theories. For some modified gravity theories $\gamma=1$, e.g., Einstein plus Yukawa gravity. For this model $G_\Phi/G$ in eq. (\[scaledepMG\]) is multiplied by an overall factor $\alpha_2/\alpha_1$ [@Dore07] so that the deviation from Einstein gravity shows up only at large distances. The class of theories considered here has at least three physical length scales: the horizon scale $a/{\cal H}$, the transition scale $a^{1+s/2}\sqrt{\alpha_1}$ where gravity changes strength (for simplicity, we consider models where the $\alpha_i$ and $\beta_i$ are all of comparable magnitude), and the nonlinear length scale for structure formation (e.g., approximately 10 Mpc today). 
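The quoted $f(R)$ correspondence can be checked directly against eq. (\[fPoisson\]); a minimal sketch, writing $x\equiv f_{RR}k^2/a^2$ so that $Q=3x$ and the $a^{2+s}$ factors cancel:

```python
def g_phi_over_g(k2as, a1, a2):
    """G_Phi/G from the parameterization of eq. (scaledepMG); k2as = k^2 a^s."""
    return (1.0 + a1 * k2as) / (1.0 + a2 * k2as)

def gamma_ratio(k2as, b1, b2):
    """gamma = G_Psi/G_Phi from eq. (scaledepMG)."""
    return (1.0 + b1 * k2as) / (1.0 + b2 * k2as)

def check_fR(x):
    """With alpha1 = (4/3)alpha2 = 2 beta1 = beta2 = 4 f_RR / a^(2+s) and
    x = f_RR k^2 / a^2, the products alpha_i k^2 a^s reduce to (4x, 3x)
    and beta_i k^2 a^s to (2x, 4x), while Q = 3x."""
    Q = 3.0 * x
    gp = g_phi_over_g(1.0, 4.0 * x, 3.0 * x)
    gm = gamma_ratio(1.0, 2.0 * x, 4.0 * x)
    # Targets from eq. (fPoisson) in the |f_R| << 1 limit:
    gp_target = (3.0 + 4.0 * Q) / (3.0 + 3.0 * Q)
    gm_target = (3.0 + 2.0 * Q) / (3.0 + 4.0 * Q)
    return gp, gp_target, gm, gm_target
```

The long-wavelength limit $x\to0$ recovers GR ($G_\Phi/G=\gamma=1$), while $x\to\infty$ gives $G_\Phi/G\to\frac{4}{3}$ and $\gamma\to\frac{1}{2}$, the short-wavelength $f(R)$ values.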
If $a^{1+s/2}\sqrt{\alpha_i}$ and $a^{1+s/2}\sqrt{\beta_i}$ are smaller than the nonlinear scale, then for purposes of large-scale structure formation, gravity is adequately described by GR. The parameterization of eq. (\[scaledepMG\]) applies only to intermediate length scales between the horizon scale and the smaller transition scale $a^{1+s/2}\sqrt{\alpha_1}$. However, it implies that for long wavelengths and at early times, gravity reduces to GR (with constant gravitational coupling). This assumption can be relaxed at the cost of introducing additional parameters, which seems premature given the difficulty of measuring any post-Newtonian parameters. Also, for wavelengths short compared with $a^{1+s/2}\sqrt{\alpha_i}$ and $a^{1+s/2}\sqrt{\beta_i}$ but large compared with the nonlinear scale, the gravitational couplings are constant but differ from GR, e.g., $\gamma=\frac{1}{2}$ for $f(R)$. From the perspective of model testing, scale-dependent modified gravity is much more complicated than the scale-independent case considered in Sect. \[sec:long\]. The models have four parameters with dimensions of length squared, plus an exponent giving the time dependence. However, the situation is not so bleak, because structure formation depends only on $G_\Phi(k,t)$ and not on $\gamma(k,t)$. In particular, matter density perturbations on scales larger than the Jeans length and smaller than the Hubble length follow from integration of $$\label{pertevol} \ddot\delta+{\cal H}\dot\delta=4\pi G_\Phi(k,t)a^2\rho_m\delta\ .$$ At early times, $G_\Phi\to G$ and $\delta$ evolves as in the GR solution until the scale-dependent terms in eq. (\[scaledepMG\]) become important. The density transfer function $D(k,t)$ given by eq. (\[Dfactor\]) is now scale-dependent at late times, implying a change in the shape of the matter power spectrum. It is easy to see that the transfer function can depend only on the dimensionless variables $(k\sqrt{\alpha_1},\alpha_1/\alpha_2,a)$.
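Eq. (\[pertevol\]) is straightforward to integrate numerically. The sketch below recasts it in $N=\ln a$ for an assumed flat $\Lambda$CDM background ($\Omega_m=0.26$) and uses classical fourth-order Runge-Kutta, with initial conditions from the matter-era GR growing mode:

```python
import math

OMEGA_M = 0.26   # flat LCDM background assumed; Omega_Lambda = 1 - OMEGA_M

def omega_m_of_a(a):
    """Matter density parameter at scale factor a."""
    return OMEGA_M / (OMEGA_M + (1.0 - OMEGA_M) * a ** 3)

def dlnH_dlna(a):
    """d ln(cal H)/d ln a for the conformal Hubble rate cal H = a H."""
    num = -OMEGA_M / a + 2.0 * (1.0 - OMEGA_M) * a ** 2
    den = 2.0 * (OMEGA_M / a + (1.0 - OMEGA_M) * a ** 2)
    return num / den

def g_phi_ratio(k, a, a1, a2, s):
    """G_Phi/G from eq. (scaledepMG)."""
    return (1.0 + a1 * k * k * a ** s) / (1.0 + a2 * k * k * a ** s)

def grow(k, a1=0.0, a2=0.0, s=4.0, a_init=0.03, n_steps=4000):
    """Integrate eq. (pertevol) in N = ln a with RK4. In these variables,
        delta'' + (1 + dlnH/dlna) delta' = (3/2) Omega_m(a) (G_Phi/G) delta,
    with GR matter-era growing-mode initial conditions delta = a, delta' = delta."""
    def rhs(n, d, v):
        a = math.exp(n)
        return v, (-(1.0 + dlnH_dlna(a)) * v
                   + 1.5 * omega_m_of_a(a) * g_phi_ratio(k, a, a1, a2, s) * d)
    n, h = math.log(a_init), -math.log(a_init) / n_steps
    d, v = a_init, a_init
    for _ in range(n_steps):
        k1 = rhs(n, d, v)
        k2 = rhs(n + h / 2, d + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = rhs(n + h / 2, d + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = rhs(n + h, d + h * k3[0], v + h * k3[1])
        d += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        n += h
    return d   # delta(k, a=1), normalized so delta = a during matter domination
```

With $\alpha_1/\alpha_2>1$ gravity strengthens at late times and growth is enhanced; with $\alpha_1/\alpha_2<1$ it is suppressed; the GR limit ($\alpha_i=0$) is independent of $k$.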
The most interesting new feature of scale-dependent modified gravity is the change of shape of the matter density transfer function. Figure \[fig:fundep\] shows $D(k,a=1)$ normalized to the GR result, obtained by numerically integrating eq. (\[pertevol\]) with initial conditions given by the GR result ${\cal H}^2D(a)\to\frac{2}{3}F= \frac{2}{5}$ for a matter-dominated universe at $a=0.03$. As expected, at large length scales ($k\sqrt{\alpha_1}\ll1$) the results converge to the GR limit. At short length scales, gravity is weaker than GR if $\alpha_1/\alpha_2<1$, leading to reduced growth; the growth is enhanced for $\alpha_1/\alpha_2>1$. Thus, scale-dependent gravity changes the shape of the matter power spectrum [@Starobinsky07]. Ultimately, measuring this scale-dependence (and doing so at several redshifts) can constrain scale-dependent modified gravity theories. However, the interpretation of the galaxy power spectrum shape is complicated by scale-dependent biased galaxy formation and by the dark-matter-dependent transfer function (e.g., the neutrino fraction). Thus, while the linear growth of structure offers a potentially powerful test of GR versus modified gravity, it must be combined with other tests. The second function characterizing scale-dependent modified gravity, $\gamma(k,t)$, is (in our analysis, which assumes no particular Lagrangian) unrelated to $G_\Phi(k,t)$. This function is best constrained by combining weak gravitational lensing and galaxy clustering measurements made at the same redshift. Care is required because the lensing amplitude is proportional to $(1+\gamma)\Phi$ while the galaxy density is proportional to $D$ and also depends on biasing. Galaxy peculiar velocity measurements could be used, in principle, to reduce or ideally eliminate the dependence on biasing [@ZLBD07]. However, one must be careful not to assume the velocity-density relation obtained in GR. 
The continuity equation gives $$\label{velden} {\bf v}=-\frac{i{\bf k}}{k^2}\frac{\partial\ln D}{\partial\ln a} {\cal H}\delta\ ,$$ where the logarithmic growth rate $\partial\ln D/\partial\ln a$ is now scale-dependent, as shown in Figure \[fig:S\]. As in the case of galaxy clustering, measurement of this effect is contingent upon knowing the composition of dark matter (hot dark matter has a free-streaming scale, and its abundance determines the suppression of growth at small scales) and correcting for any velocity bias. The greater freedom allowed by scale-dependent modified gravity models, and the fact that astrophysics (biased galaxy formation and dark matter dynamics) may also introduce scale-dependence into transfer functions, make it challenging to test GR using growth of structure and weak gravitational lensing. It is likely that a combination of galaxy clustering, peculiar velocities, and weak lensing will be needed to obtain strong constraints on scale-dependent modified gravity theories. Modified gravity versus shear stress {#sec:shear} ==================================== A difference between the two longitudinal potentials $\Phi$ and $\Psi$ need not signal modified gravity; it might arise from shear stress [@Amendola07; @Mota07]. For scalar mode fluctuations, the shear stress is fully characterized by a scalar potential $\pi$, such that the spatial stress tensor components are $$\label{shearstress} T^i_{\ \,j}=p\delta^i_{\ \,j}+\frac{3}{2}(\bar\rho+\bar p)\left( \nabla^i\nabla_j-\frac{1}{3}\delta^i_{\ \,j}\Delta\right)\pi$$ where $(\bar\rho+\bar p)$ is the background enthalpy and $\Delta=\nabla^i\nabla_i$. In linearized GR, one of the Einstein field equations yields $$\label{GRshear} \Psi-\Phi=12\pi Ga^2(\bar\rho+\bar p)\pi\ .$$ All of the results obtained in Sections \[sec:long\] and \[sec:observe\] for modified gravity apply equally to GR with shear stress if $\gamma$ is replaced by $1+12\pi Ga^2(\bar\rho+\bar p)\pi/\Phi$.
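The correspondence between $\gamma$ and the shear potential follows in one line from eq. (\[GRshear\]) together with the Eddington parameterization $\Psi=\gamma\Phi$; as a sketch:

```latex
% From eq. (GRshear) and Psi = gamma Phi:
\gamma \;\equiv\; \frac{\Psi}{\Phi}
\;=\; 1+\frac{\Psi-\Phi}{\Phi}
\;=\; 1+\frac{12\pi G a^2(\bar\rho+\bar p)\,\pi}{\Phi}\ .
```

Thus a scale-independent $\gamma(a)\ne1$ can be mimicked within GR by a shear stress whose potential tracks the Newtonian potential, $\pi\propto\Phi$.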
In standard cosmology, the only significant source of shear stress is relativistic neutrinos after neutrino decoupling during the radiation-dominated era. For long wavelengths during the radiation-dominated era, neutrino shear stress gives [@MaBert95] $$\label{nushear} \gamma-1=\frac{2}{5}\left(\frac{\rho_\nu+p_\nu}{\bar\rho+\bar p} \right)\ .$$ During the matter-dominated era, $\gamma-1\propto a^{-1}$ and shear stress is unimportant at late times in the $\Lambda$CDM model. It is also unimportant in simple quintessence models because shear stress vanishes for linear perturbations of a minimally-coupled scalar field. Shear stress might nonetheless be important if cosmic acceleration is driven by an imperfect fluid. Without specifying the dynamics of this fluid, few constraints can be placed on $\pi$. One possible bound comes from the dominant energy condition, which states that each of the eigenvalues of $T^i_{\ \,j}$ must be smaller in absolute value than the energy density. If this condition holds, then eqs.(\[shearstress\]) and (\[GRshear\]) can be combined to give rather weak bounds on $\Psi/\Phi-1$. Additional constraints follow from the initial-value constraints of GR and energy-momentum conservation, which for a spatially flat background become [@Bertschinger06] \[ivcon\] $$\begin{aligned} -k^2\Psi&=&4\pi Ga^2(\bar\rho+\bar p)(\delta+3{\cal H}u)\ , \qquad\label{Energycon}\\ \dot\Psi+{\cal H}\Phi&=&4\pi Ga^2(\bar\rho+\bar p)u\ , \label{Momcon}\\ \dot\delta+3{\cal H}\sigma&=&3\dot\Psi-k^2u\ , \label{Econs}\\ \dot u+(1-3c_s^2){\cal H}u&=&c_s^2\delta+\sigma+\Phi-k^2\pi\ . \label{Momcons}\end{aligned}$$ The first of these equations is the usual Poisson equation in conformal Newtonian gauge. 
The density and velocity potential perturbations are defined from the energy-momentum tensor components by $T^0_{\ \,0}=-\bar\rho-(\bar\rho+\bar p)\delta$, $T^0_{\ \,i}= -(\bar\rho+\bar p)\nabla_iu$, while the entropy perturbation is defined by $\sigma\equiv(\delta p-c_s^2\delta\rho)/(\bar\rho+\bar p)$ with $c_s^2=d\bar p/d\bar\rho$. For a single perfect fluid like cold dark matter before its trajectories intersect, $\sigma=0$. However, in general $\sigma\ne0$ for a multi-component fluid, e.g. dark matter and a non-constant dark energy. The freedom introduced by entropy and shear stress perturbations is, unfortunately, sufficient in principle to reproduce any observations of large-scale structure and gravitational lensing. Consider, for example, perfect measurements of galaxy peculiar velocities everywhere and at all times assuming that galaxies exactly trace cold dark matter. Then, eq. (\[Momcons\]) with $c_s^2=\sigma=\pi=0$ for CDM yields $\Phi({\bf k},t)$. Assume furthermore that complete and ideal gravitational lensing measurements are available to yield $\Phi({\bf k},t)+\Psi({\bf k},t)$. Now, the GR initial-value constraints (\[Energycon\])–(\[Momcon\]) suffice to yield $\delta({\bf k},t)$ and $u({\bf k},t)$ for the multi-component fluid of dark matter and dark energy. Requiring this fluid to obey energy-momentum conservation (\[Econs\])–(\[Momcons\]) yields $\sigma$ and $\pi$. In short, perfect measurements of $\Phi$ and $\Psi$ can be used to determine $\sigma$ and $\pi$ of the combined fluid of dark matter and dark energy. When combined with measurements of the dark matter density and velocity fields (assuming galaxies trace dark matter), one can, in principle, determine the energy-momentum tensor components for dark energy. One can think of this energy-momentum tensor for dark energy as the difference between the Einstein tensor for the observed metric and the energy-momentum tensor of the ordinary matter [@HuSawicki07b]. 
This approach can be used, in principle, to determine the dark energy entropy and shear stress needed to explain any observations of large-scale structure (including peculiar velocities) and weak lensing — even if there is no dark energy, but instead gravity is modified. In effect, the two observable metric fields $\Phi$ and $\Psi$ can be exchanged for either $\sigma$ and $\pi$ (GR with exotic dark energy) or $G_\Phi$ and $\gamma$ (modified gravity). Although the initial-value constraints of GR can be used to determine the properties of dark energy, they cannot be used to model modified gravity. As we have seen in previous sections, modified gravity leads generically to a modified Poisson equation with a variable gravitational coupling $G_\Phi(k,t)$. In order not to break local Lorentz invariance by the selection of a preferred frame, the other components of the Einstein equation should also be modified. For example, in $f(R)$ theories the left-hand side of (\[Momcon\]) is multiplied by $(1+f_R)$ while the right-hand side acquires an extra term $\frac{1}{2}\dot f_R$. Neglecting these modifications leads to violations of eq. (\[consistent\]) on large scales. For example, Caldwell et al. [@Caldwell07] modeled the dynamics of modified gravity with three recipes that assume the validity of some combination of eqs. (\[Energycon\]) and (\[Momcon\]). While recipes R1 and R2 satisfy the conservation of $\zeta$, recipe R3 does not. As a result, it leads to a different prediction for the gravitational potentials and therefore the ISW effect. Discussion {#sec:discuss} ========== Parameterizing modified gravity theories in cosmology is much more difficult than parameterizing post-Newtonian gravity in the solar system because the gravitational potentials $\Phi$ and $\Psi$ in the GR limit are not static Coulomb potentials in cosmology. 
In GR, on scales larger than the Jeans or nonlinear length scales, $\Psi=\Phi$ factorizes into a product of a time-dependent growth function and a spatially-varying curvature perturbation. If this factorization persists in modified gravity theories, then a simple post-Newtonian parameterization can be obtained. This is the approach we introduced in Section \[sec:long\]. It has the practical virtue of yielding easily calculated predictions for all observables in the linear regime, including the growth of matter clustering, peculiar velocities, microwave background anisotropy, and weak gravitational lensing. Introducing the Eddington parameterization $\Psi=\gamma\Phi$, where $\gamma$ depends on time but not on space, we showed that structure grows faster (gravity is stronger) in models with $\gamma<1$. However, the shape of the transfer functions (i.e., their dependence on spatial wavenumber) for these scale-independent models is unchanged compared with GR. Unfortunately, realistic modified gravity theories such as the $f(R)$ models have scale-dependent effects and can no longer be described by only one post-Newtonian quantity $\gamma$. Instead, the strength of gravity, described by $G_\Phi$ in the Poisson equation, can vary independently of $\gamma$. Even so, because galaxy clustering and dynamics depend on $G_\Phi$ but not on $\gamma$, while weak lensing depends on $\gamma$, observations could, in principle, measure modified gravity parameters assuming there is no dark energy. If dark energy is complex enough to require two additional fields to characterize its stress tensor (e.g., shear stress potential and entropy), then it appears that there is enough information in the dark energy model to account for any $\Phi$ and $\Psi$ without modifying gravity. One cannot prove gravity is modified unless one can account for all significant contributions to the stress-energy tensor.
Thus, our hope to describe all modified gravity models with two parameters, yielding predictions measurably different from all dark energy models, has not been realized. Distinguishing modified gravity from dark energy will require making additional assumptions. Nevertheless, the $(\beta,s)$ parameterization of scale-independent modified gravity presented in Section \[sec:long\], and the $(\alpha_1,\alpha_2,\beta_1,\beta_2)$ parameterization of scale-dependent modified gravity models presented in Section \[sec:scaledep\], are still useful for characterizing observational data. If measurements of galaxy clustering, peculiar velocities, and weak lensing are all consistent with $\beta=0$ and $\alpha_1=\alpha_2=\beta_1=\beta_2=0$, for example, then modified gravity and exotic dark energy models can both be excluded. If measurements require nonzero parameters, however, dark energy and modified gravity remain viable explanations until additional assumptions are made to distinguish them, e.g., restriction of the Lagrangian to a particular form. A generic prediction of modified gravity theories in cosmology is that the gravitational coupling $G_\Phi$ in the Poisson equation should vary with time and with length scale. Departures from GR could be important not only in the linear regime of cosmological perturbations but perhaps also in the nonlinear regime (albeit on scales much larger than the solar system). Nonlinear effects may allow modified gravity to be distinguished from exotic dark energy, assuming that the dark energy fluctuations are small. For this reason it would be valuable to perform N-body simulations of structure formation using variable $G_\Phi$, extending previous work [@Nbody] to the scale-independent and scale-dependent modified gravity models discussed in the current paper. We thank Richard Gott for helpful comments and Scott Tremaine for the hospitality of the Institute for Advanced Study.
**A note on the holography of\ Chern-Simons matter theories with flavour**

Stefan Hohenegger and Ingo Kirsch$^a$

*${}^a$ Institut für Theoretische Physik, ETH Zürich*\
*CH-8093 Zürich, Switzerland*

**Abstract**

[We study a three-dimensional ${\mathcal{N}}=3$ $U(N)_k \times U(N)_{-k}$ Chern-Simons matter theory with flavour, corresponding to the ${\mathcal{N}}=6$ Aharony-Bergman-Jafferis-Maldacena CSM theory coupled to $2N_f$ fundamental fields. The dual holographic description is given by the near-horizon geometry of $N$ M2-branes at a particular hypertoric geometry ${\cal M}_8$. We explicitly construct the space ${\cal M}_8$ and match its isometries to the global symmetries of the field theory. We also discuss the model in the quenched approximation by embedding probe D6-branes in $AdS_4 \times {{\mathbb{CP}^3}}$.]{}

Introduction
============

Recently, there has been renewed interest in three-dimensional superconformal Chern-Simons-matter (CSM) theories. Unlike their purely topological cousins, this type of Chern-Simons theory exhibits non-trivial dynamics due to the coupling to matter fields. Bagger and Lambert [@BL] as well as Gustavsson [@G] (BLG) constructed a three-dimensional ${\mathcal{N}}=8$ superconformal Chern-Simons gauge theory with manifest $SO(8)$ R-symmetry. A unitary realization of the involved three-algebra restricted the gauge group to $SO(4)$. After the reformulation of the BLG theory as an $SU(2) \times SU(2)$ CSM theory [@Raamsdonk], Aharony, Bergman, Jafferis and Maldacena (ABJM) [@ABJM] constructed an ${\mathcal{N}}=6$ CSM theory with gauge group $U(N) \times U(N)$ at level $k$ as the world-volume theory of $N$ M2-branes at a ${\mathbb{C}}^4/{\mathbb{Z}}_k$ orbifold. A prerequisite for making the above theory interesting for more realistic applications, e.g. in condensed matter physics, is the introduction of light matter fields in the [*fundamental*]{} representation of the gauge group.
The fundamentals could serve, for instance, as a prototype for strongly-coupled electrons. First steps in this direction have been taken in [@Giveon2008; @Niarchos; @Niarchos:2009aa], which discussed ${\mathcal{N}}=2$ supersymmetric CSM theories with fundamental matter and discovered an interesting strong-weak coupling Seiberg-type duality. However, Refs. [@Giveon2008; @Niarchos; @Niarchos:2009aa] have not yet addressed a possible holographic description of CSM theories with flavour, an approach which has in recent years proven remarkably successful for Yang-Mills theories (see e.g. [@Erdmenger:2007cm] for a review). In this note we fill this gap by proposing a holographic description of the ABJM model coupled to $2N_f$ light fundamental fields. We show that the field theory, whose action will be written in the ${\mathcal{N}}=2$ superspace formalism, preserves ${\mathcal{N}}=3$ supersymmetry for particular values of the coupling constants. We find that, unlike in the (unflavoured) ABJM model, where supersymmetry is enhanced to ${\mathcal{N}}=6$ [@ABJM], the supersymmetry of the present model remains ${\mathcal{N}}=3$ in the infrared. This theory describes the low-energy region of the open-string sector of the web-deformed type IIB configuration studied in [@ABJM] with two additional stacks of $N_f$ D5-branes. The T-dual type IIA setup, now involving $2N_f$ D6-branes, lifts to $N$ M2-branes at the origin of a toric hyperkähler geometry ${\cal M}_8$. We explicitly construct ${\cal M}_8$ and compare its isometry group to the global symmetries of the dual ${\mathcal{N}}=3$ field theory. The corresponding near-horizon geometry includes the information of the (uplifted) flavour D6-branes and therefore their backreaction on the geometry. However, the complicated structure of the near-horizon metric impedes further progress along these lines.
We therefore continue to discuss flavours in the quenched approximation, using holographic methods as initiated in [@KarchKatz; @Kruczenski; @Babington]. This requires the embedding of probe D6-branes in $AdS_4 \times {{\mathbb{CP}^3}}$, which is the near-horizon geometry of the ABJM setup in type-IIA string theory [@ABJM]. The D6-branes fill the $AdS_4$ space and wrap around a special Lagrangian submanifold inside the ${{\mathbb{CP}^3}}$. We show that the real projective space ${\mathbb{R}}{\mathbb{P}}^3$ is such a submanifold inside the ${{\mathbb{CP}^3}}$, and thus the corresponding embedding of the D6-branes is stable and supersymmetric. The paper is organized as follows. In section \[secft\] we present the ${\mathcal{N}}=3$ Chern-Simons Yang-Mills theory with matter in the fundamental representation of the $U(N)_k \times U(N)_{-k}$ gauge group. In section \[secsetup\] we discuss the corresponding brane setup in type IIB string theory, its lift to M-theory and the corresponding near-horizon geometry. In section \[secprobe\] we discuss the embedding of probe D6-branes in $AdS_4 \times {\mathbb{C}}{\mathbb{P}}^3$. After publication of the first version of this work, two further papers [@Gaiotto2; @Hikida] appeared on the arXiv, which have considerable overlap with the present work. In particular, taking into account a comment in the introduction of [@Gaiotto2], we clarified the discussion of our brane setup in section 3.

Chern-Simons Yang-Mills theory with fundamental matter {#secft}
======================================================

In this section we study a three-dimensional ${\mathcal{N}}=3$ superconformal $U(N)\times U(N)$ Chern-Simons-matter theory with flavour in the fundamental representation of the gauge group. This theory will be obtained by coupling $2N_f$ fundamental hypermultiplets to the ABJM theory [@ABJM].
The action
----------

The ABJM theory has gauge group $U(N)_k\times U(N)_{-k}$ and its action can be written in manifest ${\mathcal{N}}=2$ language [@ABJM]. Let us briefly review its field content. There are two bifundamental ${\mathcal{N}}=4$ hypermultiplets $(A, B^\dagger)_{1,2}$ and two adjoint ${\mathcal{N}}=4$ vector multiplets consisting of the ${\mathcal{N}}=2$ vector fields $V_{1,2}$ and the chiral fields $\Phi_{1,2}$. Formally, there are also $k$ chiral multiplets ($q_{1,2}$) in the fundamental and $k$ chiral multiplets ($\tilde q_{1,2}$) in the anti-fundamental representation of each gauge group. These are assumed to be massive and, when integrated out, produce a Chern-Simons term via the parity anomaly. Thus at low energies all fundamental fields are integrated out, leaving only fields in the adjoint or bifundamental representation. In order to also have massless fundamental fields in the far infrared, we introduce $2N_f$ fundamental hypermultiplets $(Q^r, \tilde Q^r{}^\dagger)_{1,2}$ with $r=1,...,N_f$. The ${\mathcal{N}}=2$ superfields and their quantum numbers are summarized in the upper part of table \[table1\].
|                             | $U(N)$         | $U(N)$         | $U(k)$         | $U(k)$         | $U(N_f)$ | $U(N_f)$ | $\Delta$      |
|-----------------------------|----------------|----------------|----------------|----------------|----------|----------|---------------|
| $A_1$, $A_2$                | $N$            | $\overline{N}$ | $\mathbf{1}$   | $\mathbf{1}$   | $\mathbf{1}$ | $\mathbf{1}$ | $\frac{1}{2}$ |
| $B_1$, $B_2$                | $\overline{N}$ | $N$            | $\mathbf{1}$   | $\mathbf{1}$   | $\mathbf{1}$ | $\mathbf{1}$ | $\frac{1}{2}$ |
| $\Phi_1$, $V_1$             | adjoint        | $\mathbf{1}$   | $\mathbf{1}$   | $\mathbf{1}$   | $\mathbf{1}$ | $\mathbf{1}$ | $1, 0$        |
| $\Phi_2$, $V_2$             | $\mathbf{1}$   | adjoint        | $\mathbf{1}$   | $\mathbf{1}$   | $\mathbf{1}$ | $\mathbf{1}$ | $1, 0$        |
| $q_1$                       | $N$            | $\mathbf{1}$   | $\mathbf{1}$   | $\overline{k}$ | $\mathbf{1}$ | $\mathbf{1}$ | $\frac{1}{2}$ |
| $\tilde q_1$                | $\overline{N}$ | $\mathbf{1}$   | $k$            | $\mathbf{1}$   | $\mathbf{1}$ | $\mathbf{1}$ | $\frac{1}{2}$ |
| $q_2$                       | $\mathbf{1}$   | $N$            | $\overline{k}$ | $\mathbf{1}$   | $\mathbf{1}$ | $\mathbf{1}$ | $\frac{1}{2}$ |
| $\tilde q_2$                | $\mathbf{1}$   | $\overline{N}$ | $\mathbf{1}$   | $k$            | $\mathbf{1}$ | $\mathbf{1}$ | $\frac{1}{2}$ |
| $Q_1$, $\tilde Q_1^\dagger$ | $N$            | $\mathbf{1}$   | $\mathbf{1}$   | $\mathbf{1}$   | $N_f$    | $\mathbf{1}$ | $\frac{1}{2}$ |
| $Q_2$, $\tilde Q_2^\dagger$ | $\mathbf{1}$   | $N$            | $\mathbf{1}$   | $\mathbf{1}$   | $\mathbf{1}$ | $N_f$    | $\frac{1}{2}$ |

: ${\mathcal{N}}=2, d=3$ superfields in the field theory.[]{data-label="table1"}

In ${\mathcal{N}}=2$ superspace, the action can be written as a sum of three terms ${\cal S}={\cal S}_{\text{mat}}+ {\cal S}_{\text{CS}}+{\cal S}_{\text{pot}}$, a matter part, a Chern-Simons part and a superpotential given by $$\begin{aligned} {\cal S}_{\text{mat}}&= \int d^3 x d^4 \theta {\,\rm Tr} \left(-\bar A_i e^{-V_1} A_i e^{V_2} - \bar B_i e^{-V_2} B_i e^{V_1}\right) - \bar Q^r_i e^{-V_i} Q^r_i - {\tilde Q}^r_i e^{V_i} \bar{\tilde Q}^r_i \,,\label{ActPart1}\\ {\cal S}_{\text{CS}}&= -i \frac{k}{4\pi} \int d^3 x d^4 \theta \int_0^1 dt {\,\rm Tr}\left(V_1 \bar D^\alpha(e^{t V_1} D_\alpha e^{-tV_1})- V_2 \bar D^\alpha(e^{t V_2}
D_\alpha e^{-tV_2}) \right)\,,\label{ActPart2}\\ {\cal S}_{\text{pot}} &=\int d^3 x d^2 \theta\, \left(W_{\text{ABJM}} + W_{\text{flavour}}\right) + c.c. \,,\label{ActPart3} \end{aligned}$$ where $$\begin{aligned} \label{WABJM} W_{\text{ABJM}} =-\frac{k}{8\pi}{\,\rm Tr\,} (\Phi_1^2-\Phi_2^2) + {\,\rm Tr\,} (B_i \Phi_1 A_i) + {\,\rm Tr\,} (A_i \Phi_2 B_i)\end{aligned}$$ and $$\begin{aligned} \label{Wflavor} W_{\text{flavour}} = \alpha_1\tilde Q^r_1 \Phi_1 Q^r_1 +\alpha_2 \tilde Q^r_2 \Phi_2 Q^r_2 \,.\end{aligned}$$ The first term in the ABJM superpotential $W_{\text{ABJM}}$ [@ABJM] involving $\Phi^2_1$ and $\Phi^2_2$ is the ${\mathcal{N}}= 3$ supersymmetry completion of the Chern-Simons action ${\cal S}_{\text{CS}}$, while the remaining two terms include the coupling to the bifundamentals $A_{1,2}$ and $B_{1,2}$. The superpotential $W_{\text{flavour}}$ describes the coupling of the new flavour fields $\tilde Q^r_{1,2}, Q^r_{1,2}$ to the adjoints $\Phi_{1,2}$. The action preserves ${\mathcal{N}}=2$ supersymmetry for arbitrary values of the coupling constants $\alpha_{1,2}$. There are no kinetic terms for the fields of the ${\mathcal{N}}=4$ vector multiplet, which contains the ${\mathcal{N}}=2$ superfields $V_{1,2}$ and $\Phi_{1,2}$. These fields are massive and will be integrated out at low energies. Upon integrating out the adjoint fields $\Phi_{1,2}$, we get the superpotential $$\begin{aligned} W&=W_{\text{ABJM}} + W_{\text{flavour}} \nonumber \\ &= \frac{4\pi}{k} {\,\rm Tr\,} (A_1B_1A_2B_2-A_2B_1A_1B_2) +\frac{4\pi\alpha_1}{k}\,\tilde Q_1(A_1B_1+A_2B_2)Q_1\nonumber\\ &\quad-\frac{4\pi\alpha_2}{k}\,\tilde Q_2 (B_1 A_1 +B_2A_2) Q_2+ \frac{2\pi\alpha_1^2}{k}\, Q_1 \tilde Q_1 Q_1 \tilde Q_1-\frac{2\pi\alpha_2^2}{k}\,\tilde Q_2Q_2 \tilde Q_2 Q_2 \,. \label{superpot}\end{aligned}$$ The first term is exactly the same as in the Klebanov-Witten theory associated with the conifold [@conifold]. 
The remaining terms proportional to $\alpha_1$ and $\alpha_2$ describe the coupling of the fundamentals to the ABJM model. Similar terms appear when fundamental matter is coupled to the Klebanov-Witten theory, see for instance [@Kuperstein; @Ouyang]. The field content and the superpotential of the low-energy theory can best be represented by the quiver diagram shown in figure \[quiverfig\].

${\mathcal{N}}=3$ supersymmetric theory and conformal invariance
----------------------------------------------------------------

So far we have considered generic coupling constants $\alpha_1$ and $\alpha_2$. However, it turns out that upon choosing the particular values $\alpha_1=-\alpha_2=1$, the amount of supersymmetry preserved by the action (\[ActPart1\])–(\[ActPart3\]) is enhanced to ${\mathcal{N}}=3$. This is accompanied by an enhancement of the $U(1)_R$ R-symmetry to $SU(2)_R$, which is explicitly shown in appendix \[Sect:InvCompAct\], where we write the bosonic part of the action in a manifest $SU(2)_R$ invariant way. As shown in appendix \[Sect:InvCompAct\], apart from the $SU(2)_R$ symmetry, the action is also invariant under an additional $SU(2)_D$ symmetry.[^1] It is important to notice that the latter is a global symmetry which, in particular, commutes with $SU(2)_R$. Therefore, there is no enhancement of the R-symmetry group (or supersymmetry), in contrast to the ABJM model [@ABJM] and related theories, e.g. [@Benna; @Jafferis]. In addition to the $SU(2)_R \times SU(2)_D$ symmetry of (\[superpot\]) there is finally also the “baryonic” $U(1)$ symmetry $$\begin{aligned} &U(1)_b: && A_i \rightarrow e^{i \alpha} A_i \,,&& B_i \rightarrow e^{-i \alpha} B_i \,,&& Q^r_i, \tilde Q^r_i \,\,\, \textmd{inert} \,, \label{baryonU1}\end{aligned}$$ which has already been discussed in detail in [@ABJM]. This symmetry has to be distinguished from the baryonic $(U(1) \times U(1))_{B}$ subgroup of the $U(N_f) \times U(N_f)$ flavour group.
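The elimination of the adjoints leading to (\[superpot\]) can also be checked numerically: since $\Phi_{1,2}$ appear only quadratically, one may evaluate $W_{\text{ABJM}}+W_{\text{flavour}}$ at the stationary point $\partial W/\partial\Phi_i=0$ and compare with the quoted effective superpotential. The following Python sketch does this (an illustration added here, not part of the original text; the matrix size $N$, the level $k$ and the couplings are arbitrary sample values, with $N_f=1$ so that $Q_i$, $\tilde Q_i$ are column and row vectors):

```python
import numpy as np

# hypothetical sample data standing in for the superfields' scalar components
rng = np.random.default_rng(1)
N, k, a1, a2 = 4, 3.0, 1.0, -1.0          # a1 = -a2 = 1 is the N=3 point
A1, A2, B1, B2 = [rng.normal(size=(N, N)) for _ in range(4)]
Q1, Q2 = rng.normal(size=(N, 1)), rng.normal(size=(N, 1))
Qt1, Qt2 = rng.normal(size=(1, N)), rng.normal(size=(1, N))
tr = np.trace

def W(P1, P2):
    """W_ABJM + W_flavour before integrating out the adjoints Phi_{1,2}."""
    return (-k/(8*np.pi)*tr(P1 @ P1 - P2 @ P2)
            + tr(B1 @ P1 @ A1) + tr(B2 @ P1 @ A2)
            + tr(A1 @ P2 @ B1) + tr(A2 @ P2 @ B2)
            + a1*(Qt1 @ P1 @ Q1).item() + a2*(Qt2 @ P2 @ Q2).item())

# stationary points dW/dPhi_i = 0 of the quadratic adjoint action
P1 = (4*np.pi/k)*(A1 @ B1 + A2 @ B2 + a1*Q1 @ Qt1)
P2 = -(4*np.pi/k)*(B1 @ A1 + B2 @ A2 + a2*Q2 @ Qt2)

# effective superpotential quoted in eq. (superpot)
W_eff = ((4*np.pi/k)*tr(A1 @ B1 @ A2 @ B2 - A2 @ B1 @ A1 @ B2)
         + (4*np.pi*a1/k)*(Qt1 @ (A1 @ B1 + A2 @ B2) @ Q1).item()
         - (4*np.pi*a2/k)*(Qt2 @ (B1 @ A1 + B2 @ A2) @ Q2).item()
         + (2*np.pi*a1**2/k)*(Qt1 @ Q1).item()**2
         - (2*np.pi*a2**2/k)*(Qt2 @ Q2).item()**2)

assert np.isclose(W(P1, P2), W_eff)
```

The agreement holds for arbitrary sample values of $\alpha_{1,2}$, since the adjoints enter the action only quadratically.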
All couplings in (\[superpot\]) are marginal, and the theory is classically conformal. The standard non-renormalization theorem for 2+1-dimensional Yang-Mills theories coupled to matter fields does not apply to CSM theories [@Gaiotto]. Nevertheless, there are good reasons to believe that, similarly to the ABJM theory [@ABJM] and the general class of CSM theories studied in [@Gaiotto], the present ${\mathcal{N}}=3$ CSM theory ($\alpha_1=-\alpha_2=1$) is also conformally invariant at the quantum level. Note first that the Chern-Simons level $k$ is not renormalized beyond a possible one-loop shift [@Kapustin:1994mt]. Moreover, as found in [@Gaiotto], possible corrections to the classical Kähler potential are either irrelevant or absorbed by a wave function renormalization. However, since for ${\mathcal{N}}=3$ supersymmetry, $U(1)_R$ is part of the (non-anomalous) $SU(2)_R$ R-symmetry, the conformal dimensions of all fields are protected from quantum corrections. Therefore, there is neither $U(1)_R$ charge renormalization nor wave function renormalization, excluding relevant or marginal corrections to the Kähler potential. Non-renormalization of the coupling constants in the superpotential has explicitly been shown to two-loop order in [@Avdeev] for CSM theories with matter fields in the fundamental representation. We expect that the coupling to the ABJM term does not destroy the non-renormalization. This strongly suggests conformal invariance of the action at the quantum level.

Brane construction {#secsetup}
==================

In this section we make a proposal for the gravitational theory dual to the Chern-Simons Yang-Mills theory with fundamental matter discussed in the previous section. The gravitational theory corresponds to the near-horizon geometry of the following brane construction. We start from the type IIB setup of [@ABJM], which consists of two NS5-branes along 012345, separated in the compact direction 6, and $N$ D3-branes along 0126.
In addition, there are $k$ D5-branes along 012349 which intersect the D3-branes along 012 and one of the two NS5-branes along 012345, as shown in figure \[setup\].

![Type IIB brane setup of [@ABJM] plus two stacks of $N_f$ “flavour” D5-branes before (lhs.) and after the web deformation (rhs.). []{data-label="setup"}](setup.eps)

This setup preserves ${\mathcal{N}}=2$ supersymmetry and gives rise to the following field content [@ABJM]: The NS5-branes divide the D3-brane worldvolume into two intervals along 6. The 3-3 open strings therefore give rise to two $U(N)$ ${\mathcal{N}}=4$ vector multiplets $(V, \Phi)_{1,2}$ consisting of ${\mathcal{N}}=2$ vector and chiral multiplets. They also give rise to two complex bifundamental ${\mathcal{N}}=4$ hypermultiplets $(A, B^\dagger)_{1,2}$. Furthermore, we note that the (left) NS5-brane splits the $k$ D5-branes into two stacks of $k$ half-D5-branes along the direction 9. This phenomenon is dubbed [*flavour doubling*]{}, see [*e.g.*]{} [@Uranga]: Each stack of half D5-branes gives rise to a $U(k)$ global symmetry and provides $2 k$ fundamental flavours, [*i.e.*]{} $k$ flavours for each gauge factor. At low energies the 3-5 and 5-3 open string modes therefore generate $k$ fundamental chiral fields $q_{1,2}$, $\tilde q_{1,2}$. These fields transform under the $U(k) \times U(k)$ global symmetry as indicated in table \[table1\]. The remaining modes coming from strings with both ends on 5-branes are assumed to be decoupled at low energies. We may now add another class of fundamental fields by introducing $2N_f$ “flavour” D$5'$-branes along 012789.[^2] These branes intersect with the $k$ D5-branes on a three-brane along 0129 and overlap with the D3-branes along 012. This does not break any further supersymmetries, [*i.e.*]{} the total configuration still preserves ${\mathcal{N}}=2$. At low energies the 3-$5'$ and $5'$-3 strings give rise to $2 N_f$ additional fundamental hypermultiplets: $(Q^r, \tilde Q^r{}^\dagger)_{1,2}$ with $r=1,...,N_f$.
The corresponding $U(N_f) \times U(N_f)$ flavour symmetry is non-chiral. We now perform a [*web deformation*]{}, in which the $k$ D5-branes and the NS5-brane merge into an intermediate $(1,\pm k)5$-brane along $012[3,7]_{\theta_1} [4,8]_{\theta_2} [5,9]_{\theta_3}$, as explained in detail in [@ABJM]. This notation means that the $(1,\pm k)5$-brane is aligned along 012 and stretched along directions mixing 345 and 789. Only if the $\theta_{i}$ ($i=1,2,3$) are all equal and satisfy $\tan \theta_i =k$ is supersymmetry enhanced from ${\mathcal{N}}=2$ to ${\mathcal{N}}=3$ [@Kitao:1998mf; @Gauntlett].

![Three types of 5-branes in $\mathbb{E}^6$.[]{data-label="fivebrane"}](fivebrane.eps)

The result of this deformation is a triple 5-brane intersection of a $(1, 0)5$-brane (NS5-brane), a $(1, k)5$-brane, and two $(0, N_f)5$-branes ($N_f$ D5-branes). These branes overlap over 012, and the remaining directions of the 5-branes form three-planes in the $\mathbb{E}^6$ parameterized by $(x^3, x^4,x^5, x^7, x^8,x^9)$. The angle $\theta_{p, q}$ between two of these three-planes is given by [@Gauntlett] $$\begin{aligned} \cos \theta_{p, q} = \frac{p \cdot q}{\sqrt{p^2 {q}^2}} \,,\end{aligned}$$ where $p\cdot q = p_i q_i$, and $p, q$ are two of the three $SL(2, {\mathbb{Z}})$ charge vectors $p=(1, 0)$, $p'=(1, k)$, $p''=(0, N_f)$. We obtain the angles $$\begin{aligned} &\tan \theta_{p, p'} = k \,, && \tan \theta_{p', p''} = \frac{1}{k} \,,&& \theta_{p, p''} = \frac{\pi}{2} \,,\end{aligned}$$ which satisfy $\theta_{p, p'} +\theta_{p', p''} = \theta_{p, p''} =\frac{\pi}{2}$. The 5-branes and their intersection angles are shown in figure \[fivebrane\]. The rotations in the three three-planes by the same element of $SO(3)_R$ correspond to the R-symmetry transformations of the ${\mathcal{N}}=3$ ultraviolet field theory (\[ActPart1\])–(\[Wflavor\]) (with $\alpha_1=-\alpha_2=1$).
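The intersection angles follow directly from the charge vectors. As a quick numerical illustration (added here for convenience; the values of $k$ and $N_f$ are arbitrary samples):

```python
import numpy as np

def angle(p, q):
    # cos(theta_{p,q}) = p.q / sqrt(p^2 q^2) for SL(2,Z) charge vectors
    return np.arccos(np.dot(p, q) / np.sqrt(np.dot(p, p) * np.dot(q, q)))

k, Nf = 3, 2                                  # sample level and flavour number
p, pp, ppp = (1, 0), (1, k), (0, Nf)          # NS5, (1,k)5 and D5 charges

assert np.isclose(np.tan(angle(p, pp)), k)         # tan theta_{p,p'}  = k
assert np.isclose(np.tan(angle(pp, ppp)), 1 / k)   # tan theta_{p',p''} = 1/k
assert np.isclose(angle(p, ppp), np.pi / 2)        # theta_{p,p''} = pi/2
# the two non-trivial angles indeed add up to pi/2
assert np.isclose(angle(p, pp) + angle(pp, ppp), angle(p, ppp))
```

The last assertion checks the relation $\theta_{p, p'}+\theta_{p', p''}=\theta_{p, p''}=\pi/2$ quoted in the text, and holds for any positive $k$ and $N_f$.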
We finally note that our setup differs from those considered in [@Giveon2008; @Niarchos], which study ${\mathcal{N}}=2$ supersymmetric Chern-Simons theories with fundamental fields. There the $(1, k)5$-brane is rotated in the 3-7 plane, but not in the 4-8 and 5-9 planes, i.e. $\theta_1 = \theta$, $\theta_2=\theta_3=0$. Because of this, supersymmetry is reduced to ${\mathcal{N}}=2$ there.

T-dual setups and lift to M-theory {#secIIA}
----------------------------------

As in [@ABJM] we begin by T-dualizing along the direction 6. The resulting [*web-deformed*]{} type IIA setup then consists of the following branes: The $N$ D3-branes map to $N$ D2-branes along 012, and the NS5-brane turns into a single Kaluza-Klein monopole with world-volume along 012345. The $(1, k)5$-brane is T-dual to an object along $0126[3,7]_\theta[4,8]_\theta[5,9]_\theta$ (with $\tan \theta=k$), which consists of $k$ D6-branes and $1$ KK monopole associated with the 6 direction [@ABJM]. In addition we now have $2N_f$ D6$'$-branes along $0126789$ descending from the flavour D5$'$-branes in the type IIB setup. The type IIA setup may now be lifted to M-theory, where the D2-branes naturally become M2-branes along $012$. The object along $0126[3,7]_\theta[4,8]_\theta[5,9]_\theta$ and the D6$'$-branes become KK monopoles with circular direction along 6, 10 and a linear combination of both [@ABJM]. The resulting M-theory configuration will be a stack of $N$ M2-branes located at the origin of a [*toric hyperkähler manifold*]{} [@Gauntlett]. This is an eight-dimensional space ${\cal M}_8$ with $sp(2)$ holonomy, which preserves $3/16$ of the supersymmetries of eleven-dimensional supergravity; this is precisely the amount of supersymmetry expected for the dual of theories in 2+1 dimensions with ${\mathcal{N}}= 3$ supersymmetry. Adding a stack of $N$ M2-branes at the origin of ${\cal M}_8$ does not break any additional supersymmetry [@Gauntlett].
The metric of ${\cal M}_8$ is given by $$\begin{aligned} ds^2_{{\cal M}_8}&=U_{ij} d \vec x^i \cdot d\vec x^j + U^{ij} (d\varphi_i +A_i) (d\varphi_j +A_j) \,, \label{metricX8}\end{aligned}$$ with the following quantities $$\begin{aligned} &A_i = d \vec x^j \cdot \vec \omega_{ji} = d x^j_a \omega^a_{ji} \,,&& \partial_{x^j_a} \omega^b_{ki} - \partial_{x^k_b} \omega^a_{ji} = \epsilon^{abc} \partial_{x^j_c} U_{ki} \,, \label{relX8}\end{aligned}$$ with $i,j=1,2$. The three-vectors $\vec x_1$ and $\vec x_2$ describe positions in two three-planes parameterized by $(x^7,x^8,x^9)$ and $(x^3, x^4,x^5)$, respectively. The two circular directions of the toric geometry are in the directions 6 and 10. The two-dimensional matrix $U_{ij}$ contains the information of the uplifted five-branes of the IIB setup [@Gauntlett]. Here it is given by $$\begin{aligned} \label{U} U= \mathbf{1}+\begin{pmatrix} h_1 &0\\0 & 0 \end{pmatrix} +\begin{pmatrix} h_2 & kh_2\\ kh_2 & k^2h_2 \end{pmatrix} +\begin{pmatrix} 0 &0\\0&N^2_f h_3 \end{pmatrix}\,,\end{aligned}$$ with $$\begin{aligned} &h_1= \frac{1}{2|\vec x_1|}\,, &&h_2= \frac{1}{2|\vec x_1 + k \vec x_2|}\,, &&h_3= \frac{1}{|N_f \vec x_2|}\,.\end{aligned}$$ The first three terms in (\[U\]) are as in the ABJM case without flavours [@ABJM], while the last term contains the information of the uplifted flavour branes. In the type IIB setup the functions $h_{1,2,3}$ stem from the $(1, 0)5$-brane (NS5-brane), the $(1, k)5$-brane, and the (two stacks of) $(0, N_f)5$-branes (D5-branes), respectively. An appropriate ansatz for $N$ M2-branes at the origin of ${\cal M}_8$ is $$\begin{aligned} ds^2 &= H^{-2/3} (-dX_0^2 + dX_1^2 +dX_2^2 )+ H^{1/3} ds_{{\cal M}_8}^2 \,,\\ F &= dX_0 \wedge dX_1 \wedge dX_2 \wedge dH^{-1} \,,\end{aligned}$$ where the scalar function $H$ only depends on the coordinates of ${\cal M}_8$. 
The supergravity equations of motion then require that $H$ satisfies the Laplace equation on ${\cal M}_8$, $$\begin{aligned} \partial_\mu (\sqrt{g}g^{\mu\nu} \partial_\nu H) = 0 \,, \label{Laplace}\end{aligned}$$ with $g_{\mu\nu}$ given by (\[metricX8\]).

Near-horizon geometry {#nhg}
---------------------

Here we do not attempt to explicitly solve (\[Laplace\]) but instead explore the hypertoric geometry of the manifold ${\cal M}_8$ in more detail. Given the form (\[U\]) of the matrix $U_{ij}$, we see that the metric (\[metricX8\]) develops a physical singularity at the point $\vec{x}_1=\vec{x}_2=0$. In this near-core region the constant piece of the matrix (\[U\]) is subdominant and can henceforth be dropped. In the following we study this region more closely. We begin by presenting the solution to the equations (\[relX8\]) in the form of explicit expressions for the gauge-field one-forms $$\begin{aligned} &A_1=\frac{(x_{12}^2dx_1^1-x_{12}^1dx_1^2)+k(x_{12}^2 dx_2^1-x_{12}^1dx_2^2)}{2|\vec{x}_{12}|(|\vec{x}_{12}|+x_{12}^3)}+\frac{x_1^2dx_1^1-x_1^1dx_1^2}{2|\vec{x}_1|(|\vec{x}_1|+x_1^3)}\,,\nonumber\\ &A_2=\frac{k(x_{12}^2 dx_1^1-x_{12}^1dx_1^2)+k^2(x_{12}^2dx_2^1-x_{12}^1dx_2^2)}{2|\vec{x}_{12}|(|\vec{x}_{12}|+x_{12}^3)}+\frac{N_f(x_2^2dx_2^1-x_2^1dx_2^2)}{|\vec{x}_2|(|\vec{x}_2|+x_2^3)}\,,\label{NfsolOmeg}\end{aligned}$$ where we have introduced the shorthand notation $\vec{x}_{12}=\vec{x}_1+k\vec{x}_2$. Inserting (\[NfsolOmeg\]) into (\[metricX8\]) explicitly determines the metric. However, due to the complicated form of (\[NfsolOmeg\]) the complete metric becomes rather difficult to handle, and we will therefore not work with it directly. Instead we want to discuss the isometry group of the geometry (\[metricX8\]) with solution (\[NfsolOmeg\]). First of all, there are two global $U(1)$ symmetries since (\[metricX8\]) is invariant under a shift of each of the $\varphi_i$ by a constant.
We choose to parameterize these $U(1)$s in the following manner $$\begin{aligned} &U(1)_\text{gauge}:\ \left\{\begin{array}{ll}\varphi_1\longmapsto \varphi_1 + \lambda_1 \\ \varphi_2\longmapsto \varphi_2 + \lambda_1\end{array}\right.\,,&&\text{with} &&\lambda_1\in[0,2\pi)\,,\label{U1diag} \\ &\nonumber\\ &U(1)_b:\ \left\{\begin{array}{ll}\varphi_1\longmapsto \varphi_1 + \lambda_2 \\ \varphi_2\longmapsto \varphi_2 - \lambda_2\end{array}\right.\, ,&&\text{with} &&\lambda_2\in[0,2\pi)\,,\label{U1glob}\end{aligned}$$ where $\lambda_1$ and $\lambda_2$ are just two constant parameters.[^3] The diagonal $U(1)_{\text{gauge}}$ can be promoted to a local symmetry provided that we also transform the gauge potential (\[NfsolOmeg\]). In fact, $U(1)_{\text{gauge}}$ is part of a larger $SU(2)_{\text{gauge}}$ “gauge” symmetry,[^4] which acts in the usual way on (\[NfsolOmeg\]) $$\begin{aligned} SU(2)_{\text{gauge}}:\,\varphi_i\longmapsto \varphi_i +\Lambda (\vec{x}_1,\vec{x}_2)\,,&&\text{and} &&A_i\longmapsto A_i-\partial_i \Lambda (\vec{x}_1,\vec{x}_2)\, .\label{SU2gauge}\end{aligned}$$ So far we have only been considering invariances of (\[metricX8\]) involving $\varphi_i$ and $A_i$, while there is additionally also an $SO(3)$ symmetry which acts diagonally on $\vec{x}_1$ and $\vec{x}_2$. As we can see[^5] from (\[NfsolOmeg\]), in order to keep the metric (\[metricX8\]) invariant, such an $SO(3)$ rotation will only close up to a gauge transformation of (\[NfsolOmeg\]). 
For $\Omega\in SO(3)$, for example, we can write a transformation which leaves the metric invariant in the following manner $$\begin{aligned} SO(3):\,\left\{\begin{array}{ccl}(\vec{x}_1,\vec{x}_2) & \longmapsto & (\Omega \vec{x}_1,\Omega \vec{x}_2) \\ A_i & \longmapsto & A_i-\partial_i h(\Omega,\vec{x}_1,\vec{x}_2) \\ \varphi_i & \longmapsto & \varphi_i+h(\Omega,\vec{x}_1,\vec{x}_2)\end{array}\right.\,.\label{SO3rotat}\end{aligned}$$ Here, according to [@Cotaescu:2003gx], $h$ is a function of $\Omega$ and $\vec{x}_{1,2}$, which needs to satisfy $$\begin{aligned} &h(\Omega_1\Omega_2,\vec{x}_1,\vec{x}_2)=h(\Omega_1,\Omega_2\vec{x}_1,\Omega_2\vec{x}_2)+h(\Omega_2,\vec{x}_1,\vec{x}_2)\,, &&\text{and} &&h(\mathbbm{1},\vec{x}_1,\vec{x}_2)=0\,.\end{aligned}$$ We can therefore summarize that the complete isometry group of the near-horizon geometry is given by $$\begin{aligned} SO(3)\times SU(2)_{\text{gauge}}\times U(1)_b\, .\label{isometries}\end{aligned}$$ This fits nicely with our analysis of the symmetries present in the field theory (see section \[secft\]). Indeed, the $SO(3)$ symmetry takes over the role of the $SU(2)$ R-symmetry group, while we can identify $SU(2)_{\text{gauge}}$ with the additional global $SU(2)_D$ symmetry present in the dual CFT. As we have already remarked, the global $U(1)_b$ is identified with $U(1)_b$ of (\[baryonU1\]). We should finally also mention that in the limit $N_f=0$ the isometry group (\[isometries\]) is in fact enhanced. Most prominently, $SO(3)$, which in (\[SO3rotat\]) acts diagonally on $(\vec{x}_1,\vec{x}_2)$, is enhanced to $SO(3)\times SO(3)$ acting separately on $\vec{x}_1$ and $\vec{x}_2$. This in turn means that (\[isometries\]) becomes isomorphic to $SU(4)\times U(1)_b$, which ties in nicely with the analysis of the symmetries in the dual ABJM model (see [@ABJM]). There it was found that the three-dimensional Chern-Simons matter theory has an $SU(4)_R$ R-symmetry and an additional global baryonic $U(1)$.
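The building blocks of the solution (\[NfsolOmeg\]) are Dirac-monopole one-forms. For instance, the $h_1$-piece $\omega = (x_1^2\,dx_1^1 - x_1^1\,dx_1^2)/\big(2|\vec x_1|(|\vec x_1|+x_1^3)\big)$ should satisfy $\partial_a\omega_b-\partial_b\omega_a=\epsilon_{abc}\,\partial_c h_1$, i.e. $\nabla\times\vec\omega=\nabla h_1$ with $h_1=1/(2|\vec x_1|)$, away from the centre and the Dirac string. A finite-difference sketch of this consistency check (illustrative only; the sample point is arbitrary):

```python
import numpy as np

def h(x):                      # single-centre harmonic function h = 1/(2|x|)
    return 1.0 / (2.0 * np.linalg.norm(x))

def omega(x):                  # monopole one-form components from (NfsolOmeg)
    r = np.linalg.norm(x)
    return np.array([x[1], -x[0], 0.0]) / (2.0 * r * (r + x[2]))

def grad(f, x, eps=1e-5):      # central finite differences
    return np.array([(f(x + e) - f(x - e)) / (2 * eps)
                     for e in eps * np.eye(3)])

def curl(w, x, eps=1e-5):
    # J[b, a] = d_a w_b ;  curl_c = eps_{cab} d_a w_b
    J = np.column_stack([(w(x + e) - w(x - e)) / (2 * eps)
                         for e in eps * np.eye(3)])
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

x0 = np.array([0.3, -0.4, 0.5])            # away from the centre and the string
assert np.allclose(curl(omega, x0), grad(h, x0), atol=1e-6)
```

The same relation, with the obvious substitutions, holds for the $h_2$- and $h_3$-pieces built from $\vec x_{12}$ and $N_f\vec x_2$.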
D6-branes in $AdS_4 \times {{\mathbb{CP}^3}}$ {#secprobe}
=============================================

In the previous section we discussed the fully backreacted solution of the dual gravitational theory. The structure of the corresponding near-horizon geometry is quite involved, which makes a full discussion of the supergravity fluctuations of this background technically difficult. A simpler approach towards a gravity dual of the ABJM theory with flavour is to treat the fundamental fields in the quenched approximation. On the gravity side this corresponds to taking the [*probe*]{} limit [@KarchKatz], in which it is assumed that for a small number of flavours the backreaction of the D6-branes may be ignored. We will therefore embed probe D6-branes into the (type-IIA) near-horizon geometry of the ABJM model, which is $AdS_4 \times {{\mathbb{CP}^3}}$ [@ABJM]. Since flavour branes are spacetime-filling, the D6-branes extend along all the directions of the $AdS_4$ space and wrap a Lagrangian (codimension-three) cycle inside ${{\mathbb{CP}^3}}$. For consistency of the probe approximation, we need to make sure that this cycle is stable and supersymmetric.[^6] The most natural guess for such a Lagrangian subcycle is a real projective space ${{\mathbb{RP}^3}}\subset{{\mathbb{CP}^3}}$. This is due to the well-known mathematical fact that an ${{\mathbb{RP}^3}}$ is among the simplest codimension-three cycles inside ${{\mathbb{CP}^3}}$ [@Chiang]. Moreover, it is also known [@Oh] that ${{\mathbb{RP}^3}}$ fulfills certain mathematical stability criteria under special Hamiltonian deformations. This already points towards the fact that ${{\mathbb{RP}^3}}$ is indeed a stable cycle for the probe D6-brane to wrap. In the following we will show that the configuration of a D6-brane wrapping an ${{\mathbb{RP}^3}}\subset{{\mathbb{CP}^3}}$ is not only stable but also supersymmetric. We will prove stability by showing that this cycle gives rise to a generalized calibration 3-form.
As a by-product of this computation we will find explicit expressions for the Killing spinors of ${{\mathbb{CP}^3}}$.

Geometry of ${{\mathbb{CP}^3}}$
-------------------------------

### Metric and curvature

Let us start by reviewing some basic facts about the $AdS_4\times {{\mathbb{CP}^3}}$ supergravity solution. According to [@ABJM] the metric, the dilaton and the 2- and 4-form field-strengths are given by $$\begin{aligned} &ds^2=\frac{R^3}{k}\left(\frac{1}{4}ds_{AdS_4}^2+ds_{{{\mathbb{CP}^3}}}^2\right), &&e^{2\phi}=\frac{R^3}{k^3}\,, \nonumber\\ &F^{(2)}_{mn}=k J_{mn}\,, && F^{(4)}_{\mu\nu\rho\tau}=\frac{3R^3}{8}\epsilon_{\mu\nu\rho\tau}\,.\end{aligned}$$ Here we have used Greek indices $\mu,\nu=1,\ldots,4$ to denote the directions of $AdS_4$ and Latin indices $m,n=1,\ldots,6$ for the ${{\mathbb{CP}^3}}$. Moreover, $ds_{{{\mathbb{CP}^3}}}^2$ is the standard Fubini-Study metric given by $$\begin{aligned} ds_{{{\mathbb{CP}^3}}}^2=\frac{d\bar{\zeta}_\alpha d\zeta^\alpha} {(1+\bar{\zeta}_\gamma\zeta^\gamma)^2}+ \frac{\zeta^\alpha\bar{\zeta}_{\beta} d\bar{\zeta}_{\alpha}d\zeta^\beta} {(1+\bar{\zeta}_\gamma\zeta^\gamma)^4}\,,\label{FubiniStudyGeneral}\end{aligned}$$ with $$\begin{aligned} &\zeta_1=\tan\mu\sin\alpha\sin\frac{\vartheta}{2}e^{\frac{i}{2} (\psi-\varphi+\chi)}\,, \nonumber\\ &\zeta_2=\tan\mu\cos\alpha e^{\frac{i}{2}\chi}\,, \nonumber\\ &\zeta_3=\tan\mu\sin\alpha\cos\frac{\vartheta}{2}e^{\frac{i}{2} (\psi+\varphi+\chi)}\,,\label{coord3}\end{aligned}$$ and $0\leq \mu, \alpha \leq \pi/2$, $0 \leq \vartheta \leq \pi$, $0 \leq \varphi \leq 2\pi$ [@PopeWarner].
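Two elementary properties of the parameterization (\[coord3\]) can be verified numerically (an illustration added here, with arbitrary sample angles): the radius $|\zeta_1|^2+|\zeta_2|^2+|\zeta_3|^2=\tan^2\mu$ of the inhomogeneous coordinates depends on $\mu$ alone, and all three coordinates become real once the phases $\psi$, $\varphi$, $\chi$ are switched off:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, alpha = 0.7, 0.5                           # sample values in [0, pi/2]
th, psi, phi, chi = rng.uniform(0, np.pi, 4)   # theta and the three phases

def zetas(mu, alpha, th, psi, phi, chi):
    # inhomogeneous CP^3 coordinates, eq. (coord3)
    t = np.tan(mu)
    return np.array([
        t*np.sin(alpha)*np.sin(th/2)*np.exp(0.5j*(psi - phi + chi)),
        t*np.cos(alpha)*np.exp(0.5j*chi),
        t*np.sin(alpha)*np.cos(th/2)*np.exp(0.5j*(psi + phi + chi)),
    ])

z = zetas(mu, alpha, th, psi, phi, chi)
assert np.isclose(np.sum(np.abs(z)**2), np.tan(mu)**2)   # radius set by mu alone

# at psi = phi = chi = 0 all three coordinates are real
assert np.allclose(zetas(mu, alpha, th, 0, 0, 0).imag, 0)
```

The second assertion anticipates the reality condition used below to carve out the Lagrangian ${{\mathbb{RP}^3}}$ submanifold.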
More explicitly, in terms of the left-invariant $SU(2)$ one-forms $$\begin{aligned} &\sigma_1=\cos\psi d\vartheta+\sin\vartheta\sin\psi d\varphi\,, &&\sigma_2=\sin\psi d\vartheta-\sin\vartheta\cos\psi d\varphi\,, &&\sigma_3=d\psi+\cos\vartheta d\varphi\,,\nonumber\end{aligned}$$ the metric $ds_{{{\mathbb{CP}^3}}}^2$ can be written as $$\begin{aligned} ds_{{{\mathbb{CP}^3}}}^2=d\mu^2+\sin^2\mu&\left[d\alpha^2+\frac{1}{4}\sin^2\alpha(\sigma_1^2+\sigma_2^2+\cos^2\alpha\sigma_3^2)+\frac{1}{4}\cos^2\mu(d\chi+\sin^2\alpha\sigma_3)^2\right].\end{aligned}$$ The Kähler form $J_{mn}$ is given by $$\begin{aligned} J&=\frac{1}{2}dA=\frac{1}{4}d\left[\sin^2\mu(d\chi+\sin^2\alpha\sigma_3)\right] \nonumber\\ &=\frac{1}{4}\left[\sin2\mu\, d\mu\wedge(d\chi+\sin^2\alpha\sigma_3)+\sin^2\mu\sin2\alpha\, d\alpha\wedge\sigma_3-\sin^2\mu\sin^2\alpha\sin\vartheta\, d\vartheta\wedge d\varphi\right].\end{aligned}$$ For the choice (\[FubiniStudyGeneral\]) of the metric, ${{\mathbb{CP}^3}}$ is Einstein, satisfying the relation $$\begin{aligned} R_{mn}=8 g_{mn}\,.\label{Einsteinmetric}\end{aligned}$$ ### Lagrangian submanifold {#Sect:LagSub} Given the (complex) parameterization (\[coord3\]), an ${{\mathbb{RP}^3}}\subset{{\mathbb{CP}^3}}$ is easily found by demanding that $\zeta_{1,2,3}$ all have the same (fixed) complex phase $\omega$. A quick inspection shows that this is achieved by solving $$\begin{aligned} &\psi-\varphi+\chi=\omega, &&\chi=\omega, &&\psi+\varphi+\chi=\omega\,.\end{aligned}$$ For the simplest choice $\omega=0$ the solution $\psi=\varphi=\chi=0$ yields a manifestly real parameterization of ${{\mathbb{RP}^3}}=S^3/{\mathbb{Z}}_2$ with the standard metric $$\begin{aligned} ds_{{{\mathbb{RP}^3}}}^2=d\mu^2+\sin^2\mu d\alpha^2+\frac{1}{4}\sin^2\alpha\sin^2\mu d\vartheta^2\,.\label{metricRP}\end{aligned}$$ The isometry group of ${{\mathbb{RP}^3}}$ is ${\mathbb{Z}}_2 \ltimes (SO(3) \times SO(3))$, where $\ltimes$ denotes a semi-direct product. 
This means the isometry group consists of two $SO(3)$ groups and a discrete symmetry exchanging the two $SO(3)$ groups, see e.g. [@0308022]. The occurrence of two $SO(3)$’s reflects the $SU(2)$ R-symmetry and the global $SU(2)_D$ symmetry of the field theory. In the following we will show that a D6-brane wrapped around this submanifold is indeed a stable and supersymmetric configuration. Killing spinors of ${{\mathbb{CP}^3}}$ {#Sect:KillSpin} -------------------------------------- For the construction of a bispinor 3-form in the next subsection, we will need the Killing spinors of ${{\mathbb{CP}^3}}$. It is a well-known fact [@Nilsson] that there are six Killing spinors on ${{\mathbb{CP}^3}}$. They can be found as solutions of the following two equations, which stem from the supervariations of the fermionic degrees of freedom in supergravity: $$\begin{aligned} &D_m\epsilon-\frac{1}{32}\left(\Gamma_mQ+16\tilde{\Gamma}_m\Gamma_0\right)\epsilon-\frac{9}{16}\Gamma_m\epsilon=0\,,\label{diffkilling}\\ &\frac{3}{8\sqrt{2}}Q\Gamma_0\epsilon+\frac{3}{4\sqrt{2}}\Gamma_0\epsilon=0\,. \label{eigenspinorrel}\end{aligned}$$ Here we have introduced the quantities $$\begin{aligned} &Q=J^{mn}\Gamma_{mn}\Gamma_0\,, &&\text{and} &&\tilde{\Gamma}_m={J_m}^n\Gamma_n\,.\label{matQ}\end{aligned}$$ Moreover, $\Gamma_m$ are the six-dimensional Dirac matrices. Relation (\[eigenspinorrel\]) is an eigenspinor relation for the operator $Q$, and it was shown in [@Nilsson] that $Q$ has the following eigenvalues $$\begin{aligned} \{-2,-2,-2,-2,-2,-2,6,6\}\,.\end{aligned}$$ Since we know that the background $AdS_4\times {{\mathbb{CP}^3}}$ preserves ${\mathcal{N}}=6$ supersymmetry, we conclude that we need to look for eigenspinors with eigenvalue $-2$, since the latter has the required 6-fold degeneracy. 
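The spectrum of $Q$ can be verified numerically. In the sketch below we build six-dimensional Dirac matrices from Pauli-matrix tensor products and take the Kähler form in an orthonormal frame, $J_{12}=J_{34}=J_{56}=1$. Representing $\Gamma_0$ by the product $\Gamma_1\cdots\Gamma_6$ is a convention assumption of this sketch (it is not fixed by the text), chosen such that $Q$ comes out Hermitian:

```python
import numpy as np
from functools import reduce

# Pauli matrices and identity
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

kron = lambda *ms: reduce(np.kron, ms)

# Six 8x8 Dirac matrices with {Gamma_m, Gamma_n} = 2 delta_mn
G = [kron(X, I2, I2), kron(Y, I2, I2),
     kron(Z, X, I2), kron(Z, Y, I2),
     kron(Z, Z, X), kron(Z, Z, Y)]

# Kaehler form in an orthonormal frame: J_12 = J_34 = J_56 = 1
J = np.zeros((6, 6))
for m in (0, 2, 4):
    J[m, m+1], J[m+1, m] = 1.0, -1.0

G0 = reduce(np.matmul, G)                       # assumption: Gamma_0 = Gamma_1...Gamma_6
Gmn = lambda m, n: 0.5*(G[m] @ G[n] - G[n] @ G[m])

Q = sum(J[m, n]*Gmn(m, n) for m in range(6) for n in range(6)) @ G0

eigs = np.sort(np.linalg.eigvals(Q).real)
print(np.round(eigs, 6))   # -> [-2. -2. -2. -2. -2. -2.  6.  6.]
```

The six-fold degenerate eigenvalue $-2$ matches the ${\mathcal{N}}=6$ counting quoted above.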
The most general spinor compatible with this eigenvalue is given by $$\begin{aligned} \epsilon=\left(\begin{array}{c}f_1+f_2+f_6 \\ -f_6 \\ -f_3+f_4+f_5 \\ f_5 \\ f_4 \\ f_3 \\ f_2 \\ f_1\end{array}\right),\label{killansatz}\end{aligned}$$ where $f_{i=1,\ldots,6}$ are six arbitrary functions of the coordinates $(\mu,\alpha,\vartheta,\varphi,\psi,\chi)$. The exact functional dependence can be fixed by inserting this ansatz into the remaining Killing spinor equation (\[diffkilling\]). This yields a system of coupled first-order partial differential equations, which can be solved analytically. Since the computations are rather tedious, we will not relate all the details here, but content ourselves with giving the explicit solution in appendix \[Sect:KillingSpinors\]. Embedding of D6-branes ---------------------- The Killing spinors of ${{\mathbb{CP}^3}}$ can now be used to compute the bispinor 3-form $$\begin{aligned} \Omega_{mnp}=\bar{\epsilon}\Gamma_{mnp}\epsilon \,.\end{aligned}$$ Since the full $\Omega$ is a rather lengthy expression, we refrain from writing it down completely, and only display the relevant component. Following the logic of section \[Sect:LagSub\] concerning the Lagrangian submanifold, the latter is parameterized by $(\mu,\alpha,\vartheta)$, while $(\psi,\varphi,\chi)$ are set to constant values. 
Therefore, the relevant component of $\Omega$ is $\Omega_{\mu\alpha\vartheta}$, given by $$\begin{aligned} \Omega_{\mu\alpha\vartheta}=&\frac{1}{2} e^{-\frac{i}{2} (2 \varphi +\chi +2 \psi )}\left(e^{i (\varphi +\chi +2\psi )} \lambda_1^2+e^{i\varphi }\lambda_2^2+e^{i \psi }\left(\lambda_4^2+e^{2 i\varphi }\lambda_6^2+e^{i\chi }\left(e^{2 i\varphi}\lambda_3^2+\lambda_5^2\right)\right)\right)\nonumber\\ &\times\sin\alpha \sin^2\mu \,.\end{aligned}$$ Notice that this is a complex expression, which we can separate into its real and imaginary parts $$\begin{aligned} &\text{Re}(\Omega_{\mu\alpha\vartheta})=\frac{1}{2}\sin\alpha\sin^2\mu \label{REcalibration}\\ &\hspace{0.3cm}\times \left(\left(\lambda_5^2+\lambda_6^2\right)\cos\left(\varphi-\frac{\chi}{2}\right)+\left(\lambda_3^2+\lambda_4^2\right)\cos\left(\varphi+\frac{\chi}{2}\right)+\left(\lambda_1^2+\lambda_2^2\right)\cos\left(\frac{\chi}{2}+\psi\right)\right),\nonumber\\ &\text{Im}(\Omega_{\mu\alpha\vartheta})=-\frac{1}{2}\sin\alpha\sin^2\mu \label{IMcalibration}\\ &\hspace{0.3cm}\times\left((\lambda_5^2-\lambda_6^2)\sin\left(\varphi-\frac{\chi}{2}\right)+\left(\lambda_4^2-\lambda_3^2\right)\sin\left(\varphi +\frac{\chi}{2}\right)+\left(\lambda_2^2-\lambda_1^2\right)\sin\left(\frac{\chi}{2}+\psi\right)\right). \nonumber\end{aligned}$$ Following [@Cascales2004],[^7] for $\Omega$ to define a real calibration form, its restriction to the Lagrangian submanifold needs to satisfy $$\begin{aligned} &\text{Im}(\Omega)_{\big|\psi=\varphi=\chi=0}=0\,, &&\text{and} &&\text{Re}(\Omega)_{\big|\psi=\varphi=\chi=0}\simeq\text{Vol}_{{{\mathbb{RP}^3}}}\,, \label{SLcond}\end{aligned}$$ where $\text{Vol}_{{{\mathbb{RP}^3}}}$ is the volume form of the submanifold. 
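The conditions (\[SLcond\]) can also be checked symbolically from (\[REcalibration\]) and (\[IMcalibration\]); a small sympy sketch, treating the integration constants $\lambda_i$ as real:

```python
import sympy as sp

lam = sp.symbols('lambda1:7', positive=True)
l1, l2, l3, l4, l5, l6 = lam
mu, alpha, phi, psi, chi = sp.symbols('mu alpha varphi psi chi', real=True)

pre = sp.Rational(1, 2)*sp.sin(alpha)*sp.sin(mu)**2

# (REcalibration) and (IMcalibration) as written in the text
re_omega = pre*((l5**2 + l6**2)*sp.cos(phi - chi/2)
                + (l3**2 + l4**2)*sp.cos(phi + chi/2)
                + (l1**2 + l2**2)*sp.cos(chi/2 + psi))
im_omega = -pre*((l5**2 - l6**2)*sp.sin(phi - chi/2)
                 + (l4**2 - l3**2)*sp.sin(phi + chi/2)
                 + (l2**2 - l1**2)*sp.sin(chi/2 + psi))

# Restriction to the Lagrangian slice psi = varphi = chi = 0
slice0 = {psi: 0, phi: 0, chi: 0}
assert im_omega.subs(slice0) == 0
# Re(Omega) restricted to the slice is (sum of lambda_i^2)/2 * sin(alpha) sin^2(mu)
assert sp.simplify(re_omega.subs(slice0) - pre*sum(l**2 for l in lam)) == 0
```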
Inserting (\[REcalibration\]) and (\[IMcalibration\]) into (\[SLcond\]) we indeed find $$\begin{aligned} &\text{Im}(\Omega)_{\big|\psi=\varphi=\chi=0}=0\,,\\ &\text{Re}(\Omega)_{\big|\psi=\varphi=\chi=0}= \frac{1}{2}(\lambda_1^2+\lambda_2^2+\lambda_3^2 +\lambda_4^2+\lambda_5^2+\lambda_6^2)\sin\alpha\sin^2\mu\,.\end{aligned}$$ Comparing with (\[metricRP\]), we see that this is indeed proportional to the volume form of ${{\mathbb{RP}^3}}$, showing that $\Omega$ is a calibration form. The chosen embedding of the D6-branes is thus stable and supersymmetric. Naively, we expect that the D6-brane embedding breaks half of the supersymmetries of the $AdS_4 \times {{\mathbb{CP}^3}}$ background, which ties in with the ${\mathcal{N}}=3$ supersymmetry of the field theory. Clearly, showing this precisely would require a more careful analysis. Conclusions and open questions ============================== In this note we discussed an ${\mathcal{N}}=3$ version of the ABJM model with $2N_f$ fields in the fundamental representation of the $U(N)_k \times U(N)_{-k}$ gauge group. The theory has a dual description in terms of $N$ M2-branes at a hypertoric geometry ${\cal M}_8$, given by the metric (\[metricX8\]) with (\[U\]) and (\[NfsolOmeg\]). We argued that the isometry of ${\cal M}_8$ is $SU(2) \times SU(2) \times U(1)$, which matches the $SU(2)$ R-symmetry of ${\mathcal{N}}=3$ supersymmetry, a global $SU(2)_D$ symmetry and a $U(1)_b$ “baryonic” symmetry in the dual field theory. Of course, a complete construction of the near-horizon geometry would also require determining the harmonic function $H$ of the geometry by solving the Laplace equation (\[Laplace\]), possibly along the lines of [@Hashimoto:2008iv]. Another approach outlined in this paper is to consider the “flavour” D6-branes in the probe approximation, which corresponds to quenched flavours in the field theory. 
This requires the embedding of the D6-branes into the $AdS_4 \times {{\mathbb{CP}^3}}$ near-horizon geometry of the ABJM model. We showed that ${{\mathbb{RP}^3}}$ is a special Lagrangian three-cycle in ${{\mathbb{CP}^3}}$ and that D6-branes wrapping $AdS_4 \times {{\mathbb{RP}^3}}$ are stable and supersymmetric. Since the isometries of ${{\mathbb{RP}^3}}$ again match the global symmetries of the field theory, we expect that fluctuations of the D6-branes are dual to bound-state operators of (massless) fundamentals. It would be interesting to verify this by an explicit calculation. This could possibly be done (numerically) using the Dirac-Born-Infeld action evaluated on the world-volume of the D6-branes. We leave this for future work. Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank Martin Ammon, Johanna Erdmenger, Matthias Gaberdiel and V. Ramallo for useful discussions and invaluable comments on a preliminary version of this paper. Moreover, we are grateful to Stanislav Kuperstein for helpful correspondence. This research has been supported by the Swiss National Science Foundation. Symmetries of the action {#Sect:InvCompAct} ======================== In this section we will explore the symmetry content of the action (\[ActPart1\])–(\[ActPart3\]) for the particular values $\alpha_1=-\alpha_2=1$. In the form in which (\[ActPart1\])–(\[ActPart3\]) is written, only ${\mathcal{N}}=2$ supersymmetry is manifest. In order to exhibit a possible enhancement of the latter we need to eliminate the auxiliary fields, as they are intimately tied to the ${\mathcal{N}}=2$ superspace formulation. We therefore need to work out the action in components. To this end, we recall the Grassmann expansion of all superfields involved. 
The chiral superfields are of the form $$\begin{aligned} &A_i={a_{i} }+\sqrt{2}\theta\psi^{(A)}_{i}+\theta^2 F^{(A)}_{i}\,, &&B_i={b_{i} }+\sqrt{2}\theta\psi^{(B)}_{i}+\theta^2 F^{(B)}_{i}\,,\\ &Q_i^r={q_{i} }^r+\sqrt{2}\theta\zeta^r_{i}+\theta^2 G^r_i\,, &&\tilde{Q}^r_i={\tilde{q}_{i} }^r+\sqrt{2}\theta\tilde{\zeta}^r_{i}+\theta^2 \tilde{G}^r_i\,, \end{aligned}$$ while the vector superfields have the expansion $$\begin{aligned} V_i=2i\theta\bar{\theta}\sigma_i+2\theta\gamma^\mu\bar{\theta}A_{\mu,i}+\sqrt{2}i\theta^2\bar{\theta}\bar{\chi}_i-\sqrt{2}i\bar{\theta}^2\theta\chi_i+\theta^2\bar{\theta}^2D_i \quad(i=1,2)\,.\end{aligned}$$ Although we have also explicitly written down fermionic components in the Grassmann expansion, we will in the following only focus on the scalar fields, namely $({a_{i} },{b_{i} },{q_{1,2} }^r,{\tilde{q}_{1,2} }^r)$.[^8] It is then straight-forward to eliminate the auxiliary fields $(F^{(A,B)}_i,G^r_{1,2},\tilde{G}^r_{1,2},\sigma_i,D_i)$ from the action (\[ActPart1\])–(\[ActPart3\]), after which the pure scalar part becomes $$\begin{aligned} S=\frac{4\pi^2}{3k^2}\big[&{q_{a}^{1}}{\bar{q}^{a}_{1}}{q_{b}^{1}}{\bar{q}^{b}_{1}}{q_{c}^{1}}{\bar{q}^{c}_{1}}+{q_{a}^{2}}{\bar{q}^{a}_{2}}{q_{b}^{2}}{\bar{q}^{b}_{2}}{q_{c}^{2}}{\bar{q}^{c}_{2}}-4{q_{a}^{1}}{\bar{q}^{b}_{1}}{q_{c}^{1}}{\bar{q}^{a}_{1}}{q_{b}^{1}}{\bar{q}^{c}_{1}}-4{q_{a}^{2}}{\bar{q}^{b}_{2}}{q_{c}^{2}}{\bar{q}^{a}_{2}}{q_{b}^{2}}{\bar{q}^{c}_{2}}+\nonumber\\ +&{a_{a}^{i}}{\bar{a}^{a}_{i}}{a_{b}^{j}}{\bar{a}^{b}_{j}}{a_{c}^{k}}{\bar{a}^{c}_{k}}+{\bar{a}^{a}_{i}}{a_{a}^{i}}{\bar{a}^{b}_{j}}{a_{b}^{j}}{\bar{a}^{c}_{k}}{a_{c}^{k}}+4{a_{a}^{i}}{\bar{a}^{b}_{j}}{a_{c}^{k}}{\bar{a}^{a}_{i}}{a_{b}^{j}}{\bar{a}^{c}_{k}}-6{a_{a}^{i}}{\bar{a}^{b}_{j}}{a_{b}^{j}}{\bar{a}^{a}_{i}}{a_{c}^{k}}{\bar{a}^{c}_{k}}\nonumber\\ 
+&3{a_{a}^{i}}{\bar{a}^{a}_{i}}{a_{b}^{j}}{\bar{a}^{b}_{j}}{q_{c}^{1}}{\bar{q}^{c}_{1}}+3{\bar{a}^{a}_{i}}{a_{a}^{i}}{\bar{a}^{b}_{j}}{a_{b}^{j}}{\bar{q}^{c}_{2}}{q_{c}^{2}}-6{a_{a}^{i}}{\bar{a}^{b}_{j}}{a_{b}^{j}}{\bar{a}^{a}_{i}}{q_{c}^{1}}{\bar{q}^{c}_{1}}-6{\bar{a}^{a}_{i}}{a_{b}^{j}}{\bar{a}^{b}_{j}}{a_{a}^{i}}{\bar{q}^{c}_{2}}{q_{c}^{2}}\nonumber\\ +&9{a_{a}^{i}}{\bar{a}^{a}_{i}}{q_{b}^{1}}{\bar{q}^{b}_{1}}{q_{c}^{1}}{\bar{q}^{c}_{1}}+9{\bar{a}^{a}_{i}}{a_{a}^{i}}{\bar{q}^{b}_{2}}{q_{b}^{2}}{\bar{q}^{c}_{2}}{q_{c}^{2}}-6{a_{a}^{i}}{\bar{a}^{a}_{i}}{q_{b}^{1}}{\bar{q}^{c}_{1}}{q_{c}^{1}}{\bar{q}^{b}_{1}}-6{\bar{a}^{a}_{i}}{a_{a}^{i}}{\bar{q}^{b}_{2}}{q_{c}^{2}}{\bar{q}^{c}_{2}}{q_{b}^{2}}\nonumber\\ -&6{a_{a}^{i}}{\bar{a}^{b}_{i}}{q_{b}^{1}}{\bar{q}^{a}_{1}}{q_{c}^{1}}{\bar{q}^{c}_{1}}-6{\bar{a}^{a}_{i}}{a_{b}^{i}}{\bar{q}^{b}_{2}}{q_{a}^{2}}{\bar{q}^{c}_{2}}{q_{c}^{2}}+6{a_{a}^{i}}{\bar{a}^{b}_{i}}{q_{b}^{1}}{\bar{q}^{c}_{1}}{q_{c}^{1}}{\bar{q}^{a}_{1}}+6{\bar{a}^{a}_{i}}{a_{b}^{i}}{\bar{q}^{b}_{2}}{q_{c}^{2}}{\bar{q}^{c}_{2}}{q_{a}^{2}}\nonumber\\ -&6{a_{a}^{i}}{\bar{a}^{b}_{i}}{q_{c}^{1}}{\bar{q}^{a}_{1}}{q_{b}^{1}}{\bar{q}^{c}_{1}}-6{\bar{a}^{a}_{i}}{a_{b}^{i}}{\bar{q}^{c}_{2}}{q_{a}^{2}}{\bar{q}^{b}_{2}}{q_{c}^{2}}-6{a_{a}^{i}}{\bar{a}^{b}_{i}}{q_{c}^{1}}{\bar{q}^{c}_{1}}{q_{b}^{1}}{\bar{q}^{a}_{1}}-6{\bar{a}^{b}_{i}}{a_{a}^{i}}{\bar{q}^{c}_{2}}{q_{c}^{2}}{\bar{q}^{a}_{2}}{q_{b}^{2}}-\nonumber\\ -&6{\bar{a}^{a}_{i}}{q_{b}^{1}}{\bar{q}^{b}_{1}}{a_{a}^{i}}{\bar{q}^{c}_{2}}{q_{c}^{2}}+12{\bar{a}^{a}_{i}}{q_{b}^{1}}{\bar{q}^{c}_{1}}{a_{a}^{i}}{\bar{q}^{b}_{2}}{q_{c}^{2}}+12\epsilon_{ij}\epsilon^{kl}{a_{c}^{i}}{\bar{a}^{b}_{k}}{a_{a}^{j}}{\bar{a}^{c}_{l}}{q_{b}^{1}}{\bar{q}^{a}_{1}}\nonumber\\ +&12\epsilon^{ij}\epsilon_{kl}{\bar{a}^{c}_{i}}{a_{b}^{k}}{\bar{a}^{a}_{j}}{a_{c}^{l}}{\bar{q}^{b}_{2}}{q_{a}^{2}}\big]\,,\label{FullCovariantAction}\end{aligned}$$where flavour indices are suppressed. 
Here we have arranged all fields in the following doublet form $$\begin{aligned} &{a_{a}^{i}}=\left(\begin{array}{c} a_i \\ \bar{b}_i \end{array} \right)\,,&&\text{and} && {q_{a}^{1}}=\left(\begin{array}{c} q_1 \\ \bar{\tilde{q}}_1 \end{array}\right)\,, &&\text{and} && {q_{a}^{2}}=\left(\begin{array}{c} \tilde{q}_2 \\ \bar{q}_2 \end{array}\right)\, ,\end{aligned}$$ which exhibits invariance of (\[FullCovariantAction\]) under two types of $SU(2)$ symmetries. First of all, we find an $SU(2)_R$ R-symmetry, which acts on the indices $a,b=1,2$. This symmetry acts on the bifundamental fields ${a_{a}^{i}}$ as well as on the flavours ${q_{a}^{1,2}}$. Apart from this, there is yet another $SU(2)$, which acts on the indices $i,j=1,2$. We call this symmetry $SU(2)_D$ as it can be understood to be the diagonal of the $SU(2)_A\times SU(2)_B$ global symmetry, which has already been described in [@ABJM]. It is, however, important to notice that in contrast to the pure ABJM case this $SU(2)_D$ commutes with the R-symmetry group $SU(2)_R$ and therefore does not lead to an additional enhancement of the $SU(2)_R$ R-symmetry group. As a side remark, we note that the first four lines of (\[FullCovariantAction\]) are invariant under a larger $SU(2)_a \times SU(2)_q$ symmetry rotating separately $a_a^i$ and $q^{1,2}_a$. However, the last four lines of (\[FullCovariantAction\]) contain contractions between the $a_a^i$ and $q^i_a$ fields breaking $SU(2)_a \times SU(2)_q$ down to $SU(2)_R$. Killing spinors of ${{\mathbb{CP}^3}}$ {#Sect:KillingSpinors} ====================================== According to the reasoning of section \[Sect:KillSpin\], we expect to find a six-dimensional solution space to the Killing spinor equation, which is parameterized by the integration constants $\lambda_1,\ldots, \lambda_6$. 
With these parameters, we can state the final result for the general Killing spinor defined in (\[killansatz\]): [$$\begin{aligned} f_1&=\frac{1}{2}e^{-\frac{i}{4} (2 \varphi +\chi +2 \psi )}\bigg[e^{\frac{i}{2} (\varphi +\chi +2 \psi )} {\lambda_5} (\cos\alpha+i\cos\mu\sin\alpha)-e^{\frac{i}{2} \varphi} {\lambda_6} \sin\alpha\sin\mu\nonumber\\* &\hspace{0.5cm}+e^{\frac{i}{2}\psi} \bigg(e^{\frac{i}{2}\chi}\bigg(\cos(\vartheta/2)\left(\left(e^{i\varphi}{\lambda_1}-i{\lambda_3}\cos\alpha\right)\cos\mu+{\lambda_3}\sin\alpha\right)\\* &\hspace{0.5cm}+\bigg(e^{i \varphi } {\lambda_1} (\sin\alpha-i\cos\alpha\cos\mu)-{\lambda_3}\cos\mu\bigg)\sin(\vartheta/2)\bigg)\nonumber\\* &\hspace{0.5cm}+\sin\mu\bigg(\left(i{\lambda_2}+e^{i\varphi}{\lambda_4} \cos\alpha\right) \cos (\vartheta/2)+\sin(\vartheta/2)({\lambda_2}\cos\alpha+{\lambda_4}(\sin\varphi-i\cos\varphi ))\bigg)\bigg)\bigg],\nonumber\\ &\nonumber\\ f_2&=\frac{1}{2} e^{-\frac{i}{4} (2 \varphi +\chi +2 \psi )}\bigg[-e^{\frac{i}{2}(\varphi+\chi+2\psi)}{\lambda_5}(\cos\alpha+i\cos\mu\sin\alpha)-e^{\frac{i}{2}\varphi}{\lambda_6}\sin\alpha\sin\mu\nonumber\\* &\hspace{0.5cm}+e^{\frac{i}{2}\psi}\bigg(e^{\frac{i}{2}\chi}\bigg(\cos(\vartheta/2)\left(\left(e^{i\varphi}{\lambda_1}+i {\lambda_3}\cos\alpha\right)\cos\mu-{\lambda_3}\sin\alpha\right)\nonumber\\* &\hspace{0.5cm}-\left({\lambda_3}\cos\mu+e^{i\varphi}{\lambda_1}(\sin\alpha-i \cos\alpha\cos\mu)\right)\sin(\vartheta/2)\bigg)\nonumber\\* &\hspace{0.5cm}+\left(\left(e^{i\varphi}{\lambda_4}\cos\alpha-i{\lambda_2}\right)\cos(\vartheta/2)+\left(ie^{i\varphi}{\lambda_4}+{\lambda_2}\cos\alpha\right)\sin(\vartheta/2)\right)\sin\mu\bigg)\bigg],\\ &\nonumber\\ f_3&=\frac{1}{2} e^{-\frac{i}{2}(\varphi +\psi )}\bigg[e^{\frac{i}{4}(2\varphi-\chi)}{\lambda_6}(\cos\alpha+\cos\mu\sin\alpha)-e^{\frac{i}{4}(2\varphi+\chi+4\psi)}{\lambda_5}\sin\alpha\sin\mu\nonumber\\* &\hspace{0.5cm}+e^{-\frac{i}{4}(\chi-2\psi)}\bigg(\sin(\vartheta/2)\bigg(\left(-e^{i\varphi}{\lambda_4}-i{\lambda_2} 
\cos\alpha\right)\cos\mu+{\lambda_2}\sin\alpha\nonumber\\* &\hspace{0.5cm}+e^{\frac{i}{2}\chi}\left(e^{i\varphi}{\lambda_1}\cos\alpha-i{\lambda_3}\right)\sin\mu\bigg)+\cos(\vartheta/2)\big(e^{i\varphi}{\lambda_4}\sin\alpha\nonumber\\* &\hspace{0.5cm}+e^{\frac{i}{2}\chi}\left(ie^{i\varphi}{\lambda_1}+{\lambda_3}\cos\alpha\right)\sin\mu+\cos\mu({\lambda_2}-ie^{i\varphi}{\lambda_4}\cos\alpha)\big)\bigg)\bigg],\\ &\nonumber\\ f_4&=\frac{1}{2} e^{-\frac{i}{2}(\varphi+\psi)}\bigg[e^{\frac{i}{4}(2\varphi-\chi)}{\lambda_6}(\cos\alpha+i\cos\mu\sin\alpha)+e^{\frac{i}{4}(2\varphi+\chi+4\psi)}{\lambda_5}\sin\alpha\sin\mu\nonumber\\* &\hspace{0.5cm}+e^{-\frac{i}{4}(\chi-2\psi)}\bigg(\sin(\vartheta/2)\bigg(\left(e^{i \varphi}{\lambda_4}-i{\lambda_2}\cos\alpha\right)\cos\mu+{\lambda_2}\sin\alpha\nonumber\\* &\hspace{0.5cm}-e^{\frac{i}{2}\chi}\left(i{\lambda_3}+e^{i\varphi}{\lambda_1}\cos\alpha\right)\sin\mu\bigg)+\cos(\vartheta/2)\bigg(\left(-{\lambda_2}-ie^{i\varphi}{\lambda_4}\cos\alpha\right)\cos\mu\nonumber\\ &\hspace{0.5cm}+e^{i\varphi}{\lambda_4}\sin\alpha-e^{\frac{i}{2}\chi}\sin\mu({\lambda_3}\cos\alpha-ie^{i\varphi}{\lambda_1})\bigg)\bigg)\bigg],\\ &\nonumber\\ f_5&=\frac{1}{2}e^{-\frac{i}{2}(\varphi+\psi)}\bigg[-e^{\frac{i}{4}(2\varphi-\chi)}{\lambda_6}(\cos\alpha-i\cos\mu\sin\alpha)-e^{\frac{i}{4}(2\varphi+\chi+4\psi)}{\lambda_5}\sin\alpha\sin\mu\nonumber\\* &\hspace{0.5cm}+e^{-\frac{i}{4}(\chi-2\psi)}\bigg(\sin(\vartheta/2)\bigg(\left(-e^{i\varphi}{\lambda_4}-i{\lambda_2}\cos\alpha\right)\cos\mu-{\lambda_2}\sin\alpha\nonumber\\* &\hspace{0.5cm}+e^{\frac{i}{2}\chi}\left(e^{i\varphi}{\lambda_1}\cos\alpha-i{\lambda_3}\right)\sin\mu\bigg)+\cos(\vartheta/2)\bigg(-e^{i\varphi}{\lambda_4}\sin\alpha\nonumber\\* &\hspace{0.5cm}+e^{\frac{i}{2}\chi}\left(ie^{i\varphi}{\lambda_1}+{\lambda_3}\cos\alpha\right)\sin\mu+\cos\mu({\lambda_2}-ie^{i\varphi}{\lambda_4}\cos\alpha)\bigg)\bigg)\bigg],\\ &\nonumber\\ 
f_6&=-\frac{1}{2}e^{-\frac{i}{4}(2\varphi+\chi+2\psi)}\bigg[-e^{\frac{i}{2}(\varphi+\chi+2\psi)}{\lambda_5}(\cos\alpha-i \cos\mu\sin\alpha)-e^{\frac{i}{2}\varphi}{\lambda_6}\sin\alpha\sin\mu\nonumber\\* &\hspace{0.5cm}+e^{\frac{i}{2}\psi}\bigg(e^{\frac{i}{2}\chi}\big(\cos(\vartheta/2)\left(\left(e^{i\varphi}{\lambda_1}-i{\lambda_3}\cos\alpha\right)\cos\mu-{\lambda_3}\sin\alpha\right)\\* &\hspace{0.5cm}+\left({\lambda_3}\cos\mu+e^{i\varphi}{\lambda_1}(i\cos\alpha\cos\mu+\sin\alpha)\right)\sin(\vartheta/2)\big)\nonumber\\* &\hspace{0.5cm}+\sin\mu\left(\left(i{\lambda_2}+e^{i\varphi}{\lambda_4}\cos\alpha\right)\cos(\vartheta/2)+\sin(\vartheta/2)({\lambda_2}\cos\alpha-ie^{i\varphi}{\lambda_4})\right)\bigg)\bigg].\nonumber\end{aligned}$$]{} [99]{} J. Bagger and N. Lambert, [*Modeling multiple M2’s,*]{} Phys. Rev.  D [**75**]{}, 045020 (2007) \[arXiv:hep-th/0611108\]; J. Bagger and N. Lambert, [*Gauge Symmetry and Supersymmetry of Multiple M2-Branes,*]{} Phys. Rev.  D [**77**]{}, 065008 (2008) \[arXiv:0711.0955 \[hep-th\]\]; J. Bagger and N. Lambert, [*Comments On Multiple M2-branes,*]{} JHEP [**0802**]{}, 105 (2008) \[arXiv:0712.3738 \[hep-th\]\]; A. Gustavsson, [*Algebraic structures on parallel M2-branes,*]{} Nucl. Phys.  B [**811**]{}, 66 (2009) \[arXiv:0709.1260 \[hep-th\]\]. A. Gustavsson, [*Selfdual strings and loop space Nahm equations,*]{} JHEP [**0804**]{}, 083 (2008) \[arXiv:0802.3456 \[hep-th\]\]; M. Van Raamsdonk, [*Comments on the Bagger-Lambert theory and multiple M2-branes,*]{} JHEP [**0805**]{}, 105 (2008) \[arXiv:0803.3803 \[hep-th\]\]. O. Aharony, O. Bergman, D. L. Jafferis and J. Maldacena, [*N=6 superconformal Chern-Simons-matter theories, M2-branes and their gravity duals,*]{} JHEP [**0810**]{}, 091 (2008) \[arXiv:0806.1218 \[hep-th\]\]. A. Giveon and D. Kutasov, [*Seiberg Duality in Chern-Simons Theory,*]{} Nucl. Phys.  B [**812**]{}, 1 (2009) \[arXiv:0808.0360 \[hep-th\]\]. V. 
Niarchos, [*Seiberg Duality in Chern-Simons Theories with Fundamental and Adjoint Matter,*]{} JHEP [**0811**]{}, 001 (2008) \[arXiv:0808.2771 \[hep-th\]\]. V. Niarchos, [*R-charges, Chiral Rings and RG Flows in Supersymmetric Chern-Simons-Matter Theories,*]{} arXiv:0903.0435 \[hep-th\]. J. Erdmenger, N. Evans, I. Kirsch and E. Threlfall, [*Mesons in Gauge/Gravity Duals - A Review*]{}, Eur. Phys. J.  A [**35**]{}, 81 (2008) \[arXiv:0711.4467 \[hep-th\]\]. A. Karch and E. Katz, [*Adding flavor to AdS/CFT,*]{} JHEP [**0206**]{}, 043 (2002) \[arXiv:hep-th/0205236\]. M. Kruczenski, D. Mateos, R. C. Myers and D. J. Winters, [*Meson spectroscopy in AdS/CFT with flavour*]{}, JHEP [**0307**]{}, 049 (2003) \[arXiv:hep-th/0304032\]. J. Babington, J. Erdmenger, N. J. Evans, Z. Guralnik and I. Kirsch, [*Chiral symmetry breaking and pions in non-supersymmetric gauge / gravity duals,*]{} Phys. Rev. D [ **69**]{}, 066007 (2004) \[arXiv:hep-th/0306018\]. D. Gaiotto and D. L. Jafferis, [*Notes on adding D6 branes wrapping RP3 in AdS4 x CP3,*]{} arXiv:0903.2175 \[hep-th\]. Y. Hikida, W. Li and T. Takayanagi, [*ABJM with Flavors and FQHE,*]{} arXiv:0903.2194 \[hep-th\]. I. R. Klebanov and E. Witten, [*Superconformal field theory on threebranes at a Calabi-Yau singularity,*]{} Nucl. Phys.  B [**536**]{}, 199 (1998) \[arXiv:hep-th/9807080\]. S. Kuperstein, [*Meson spectroscopy from holomorphic probes on the warped deformed conifold*]{}, JHEP [**0503**]{}, 014 (2005) \[arXiv:hep-th/0411097\]. P. Ouyang, [*Holomorphic D7-branes and flavored N = 1 gauge theories,*]{} Nucl. Phys.  B [**699**]{}, 207 (2004) \[arXiv:hep-th/0311084\]; T. S. Levi and P. Ouyang, [*Mesons and Flavor on the Conifold,*]{} Phys. Rev. D [**76**]{}, 105022 (2007) \[arXiv:hep-th/0506021\]. M. Benna, I. Klebanov, T. Klose and M. Smedback, [*Superconformal Chern-Simons Theories and $AdS_4/CFT_3$ Correspondence,*]{} JHEP [**0809**]{}, 072 (2008) \[arXiv:0806.1519 \[hep-th\]\]. D. L. Jafferis and X. 
Yin, [*Chern-Simons-Matter Theory and Mirror Symmetry,*]{} arXiv:0810.1243 \[hep-th\]. D. Gaiotto and X. Yin, [*Notes on superconformal Chern-Simons-matter theories,*]{} JHEP [**0708**]{}, 056 (2007) \[arXiv:0704.3740 \[hep-th\]\]. A. N. Kapustin and P. I. Pronin, [*Nonrenormalization theorem for gauge coupling in (2+1)-dimensions*]{}, Mod. Phys. Lett.  A [**9**]{}, 1925 (1994) \[arXiv:hep-th/9401053\]. L. V. Avdeev, D. I. Kazakov and I. N. Kondrashuk, [*Renormalizations in supersymmetric and nonsupersymmetric nonAbelian Chern-Simons field theories with matter*]{}, Nucl. Phys.  B [**391**]{}, 333 (1993); L. V. Avdeev, G. V. Grigorev and D. I. Kazakov, [*Renormalizations in Abelian Chern-Simons field theories with matter*]{}, Nucl. Phys.  B [**382**]{}, 561 (1992). J. Park, R. Rabadan and A. M. Uranga, [*N = 1 type IIA brane configurations, chirality and T-duality,*]{} Nucl. Phys.  B [**570**]{}, 3 (2000) \[arXiv:hep-th/9907074\]. T. Kitao, K. Ohta and N. Ohta, [*Three-dimensional gauge dynamics from brane configurations with (p,q)-fivebrane,*]{} Nucl. Phys.  B [**539**]{}, 79 (1999) \[arXiv:hep-th/9808111\]. J. P. Gauntlett, G. W. Gibbons, G. Papadopoulos and P. K. Townsend, [*Hyper-Kaehler manifolds and multiply intersecting branes,*]{} Nucl. Phys.  B [**500**]{}, 133 (1997) \[arXiv:hep-th/9702202\]. I. I. Cotaescu and M. Visinescu, [*The induced representation of the isometry group of the Euclidean Taub-NUT space and new spherical harmonics,*]{} Mod. Phys. Lett.  A [**19**]{} (2004) 1397 \[arXiv:hep-th/0312129\]. D. Arean, D. E. Crooks and A. V. Ramallo, [ *Supersymmetric probes on the conifold,*]{} JHEP [**0411**]{}, 035 (2004) \[arXiv:hep-th/0408210\]. R. Chiang, [*Nonstandard Lagrangian Submanifolds in ${\mathbb{C}}{\mathbb{P}}^n$*]{}, arXiv:math/0303262 \[math.SG\] Yong-Geun Oh, [*Second variation and stabilities of minimal lagrangian submanifolds in Kähler manifolds,*]{} Invent. math. 
[**101**]{}, 501 (1990); Yong-Geun Oh, [*Volume minimization of Lagrangian submanifolds under Hamiltonian deformations,*]{} Math. Z. [**212**]{}, 175 (1993). C. N. Pope and N. P. Warner, [*An $SU(4)$ invariant compactification of the $d=11$ Supergravity on a stretched seven-sphere,*]{} Phys. Lett. B [**150**]{}, 352 (1985). B. McInnes, [*De Sitter and Schwarzschild-de Sitter according to Schwarzschild and de Sitter,*]{} JHEP [**0309**]{}, 009 (2003) \[arXiv:hep-th/0308022\]. B. E. W. Nilsson and C. N. Pope, [*Hopf Fibration Of Eleven-Dimensional Supergravity,*]{} Class. Quant. Grav.  [**1**]{}, 499 (1984). J.F.G. Cascales and A.M. Uranga, [*Branes on Generalized Calibrated Submanifolds,*]{} JHEP [**0411**]{}, 083 (2004) \[arXiv:hep-th/0407132\]; J. Dadok and F. R. Harvey, [*Calibrations and spinors,*]{} Acta Math. [**170**]{}, 83 (1993); J. Gutowski, G. Papadopoulos and P. K. Townsend, [ *Supersymmetry and generalized calibrations,*]{} Phys. Rev.  D [**60**]{}, 106006 (1999) \[arXiv:hep-th/9905156\]. P. Koerber and L. Martucci, JHEP [**0801**]{}, 047 (2008) \[arXiv:0710.5530 \[hep-th\]\]; P. Koerber, [*Stable D-branes, calibrations and generalized Calabi-Yau geometry,*]{} JHEP [**0508**]{}, 099 (2005) \[arXiv:hep-th/0506154\]; L. Martucci and P. Smyth, [*Supersymmetric D-branes and calibrations on general backgrounds,*]{} JHEP [**0511**]{}, 048 (2005) \[arXiv:hep-th/0507099\]. A. Hashimoto and P. Ouyang, [*Supergravity dual of Chern-Simons Yang-Mills theory with N=6,8 superconformal IR fixed points,*]{} JHEP [**0810**]{}, 057 (2008) \[arXiv:0807.1500 \[hep-th\]\]. [^1]: This symmetry simultaneously exchanges $A_1$ with $A_2$ and $B_1$ with $B_2$ and can be thought of as the diagonal $SU(2)$ of the global $SU(2)_A \times SU(2)_B$ group of the ABJM model [@ABJM]. [^2]: The D$5'$-branes are actually grouped along $x^6$ in two stacks of $N_f$ D$5'$-branes, one in each D3-brane sector. [^3]: Notice that we have called the second $U(1)$ symmetry $U(1)_b$. 
This is not by chance, since we will see later on that this $U(1)$ maps precisely to the “baryonic” $U(1)_b$ symmetry (\[baryonU1\]) in the dual gauge theory. [^4]: This symmetry is local w.r.t. the [*internal*]{} coordinates $\vec x_1, \vec x_2$ and therefore global w.r.t. the spacetime coordinates $X_{0,1,2}$. [^5]: For a similar discussion in the context of the Taub-NUT space see e.g. [@Cotaescu:2003gx]. [^6]: The analogue in Klebanov-Witten theory with flavour corresponds to the embedding of probe D7-branes in $AdS_5 \times T^{1,1}$ studied in [@Kuperstein; @Ouyang; @Arean]. [^7]: For generalized calibrations in backgrounds containing $AdS_4$ see also [@Koerber]. [^8]: The calculations for all other components follow in exactly the same manner. However, since they are rather tedious, we will not discuss them here.
--- author: - | , on behalf of the COMPASS Collaboration\ University and INFN Trieste, Italy\ E-mail: title: 'Weighted transverse spin asymmetries in 2015 COMPASS Drell–Yan data' ---
--- bibliography: - 'AsymmetricAgentbasedModel.bib' --- [**Agent-based model with asymmetric trading and herding for complex financial systems** ]{}\ Jun-Jie Chen, Bo Zheng$^{\ast}$, Lei Tan\ Department of Physics, Zhejiang University, Hangzhou, Zhejiang, China\ $\ast$ E-mail: zheng@zimp.zju.edu.cn Abstract {#abstract .unnumbered} ======== ***Background:*** For complex financial systems, the negative and positive return-volatility correlations, i.e., the so-called leverage and anti-leverage effects, are particularly important for the understanding of the price dynamics. However, the microscopic origination of the leverage and anti-leverage effects is still not understood, and how to produce these effects in agent-based modeling remains open. On the other hand, in constructing microscopic models, it is a promising approach to determine model parameters from empirical data rather than from statistical fitting of the results.\ \ ***Methods:*** To study the microscopic origination of the return-volatility correlation in financial systems, we take into account the individual and collective behaviors of investors in real markets, and construct an agent-based model. The agents are linked with each other and trade in groups, and particularly, two novel microscopic mechanisms, i.e., investors’ asymmetric trading and herding in bull and bear markets, are introduced. Further, we propose effective methods to determine the key parameters in our model from historical market data.\ \ ***Results:*** With the model parameters determined for six representative stock-market indices in the world respectively, we obtain the corresponding leverage or anti-leverage effect from the simulation, and the effect is in agreement with the empirical one in both amplitude and duration. 
At the same time, our model produces other features of the real markets, such as the fat-tail distribution of returns and the long-term correlation of volatilities.\ \ ***Conclusions:*** We reveal that for the leverage and anti-leverage effects, both the investors’ asymmetric trading and herding are essential generation mechanisms. Among the six markets, however, the investors’ trading is approximately symmetric for the five markets which exhibit the leverage effect, thus contributing very little. These two microscopic mechanisms and the methods for the determination of the key parameters can be applied to other complex systems with similar asymmetries. Introduction {#introduction .unnumbered} ============ In recent years, the understanding of complex systems has been undergoing rapid development. Financial markets are important examples of complex systems with many-body interactions. The possibility of accessing large amounts of historical financial data has spurred the interest of scientists in various fields, including physics. Plenty of results have been obtained with physical concepts, methods and models [@man95; @gop99; @liu99; @bou01; @gab03; @qiu06; @she09; @qiu10; @zha11; @pre11; @zho11; @jia12; @jia13]. There are several stylized facts in financial markets. Besides the fat tail in the probability distribution of price returns, it is well-known that the volatilities are long-range correlated in time, which is the so-called volatility clustering [@yam05]. However, our knowledge of the dynamics of the price itself is still limited. Since the auto-correlation of returns is extremely weak [@gop99; @liu99], nonzero higher-order time correlations become important, especially the lowest-order one among them. In financial markets, this lowest-order nonzero correlation turns out to be the return-volatility correlation, on which we lay emphasis in this paper. In 1976, a negative return-volatility correlation was first discovered by Black [@bla76]. 
This is the so-called leverage effect, which implies that past negative returns increase future volatilities. The leverage effect is actually observed in various financial systems, such as stock markets, futures markets, bank interest rates and foreign exchange rates [@bla76; @eng01; @bol06; @qiu06; @qiu07; @she09a; @par11; @pre12]. We have studied about thirty stock-market indices, and all of them exhibit the leverage effect. To the best of our knowledge, the leverage effect exists in almost all stock markets in the world. In Chinese stock markets, however, a positive return-volatility correlation is detected, which is called the anti-leverage effect [@qiu06; @she09a]. This effect is also observed in other economic systems, such as bank interest rates of early years and spot markets of non-ferrous metals. The leverage and anti-leverage effects are crucial for the understanding of the price dynamics [@bla76; @qiu06; @she09a; @par11], and important for risk management and optimal portfolio choice [@bou01a; @bur10]. However, the origination of the return-volatility correlation is still disputed, even at the macroscopic level [@hau91; @bek00; @gir04; @ahl07; @rom08; @she09a; @par11; @li11]. According to Black, the leverage effect arises because a price drop increases the risk of a company to go bankrupt and leads the stock to fluctuate more. So far, various macroscopic models have been proposed to understand the return-volatility correlation [@bai96; @bou01; @tan06; @mas06; @rui08; @she09a]. The retarded volatility model is an enlightening one, which can produce both the leverage and anti-leverage effects [@bou01]. However, it is a model with only one degree of freedom, and both the initial time series of returns and the function of the feedback return-volatility interaction, are actually input. Hence, the model is phenomenological in essence, and the generation mechanism of the leverage and anti-leverage effects is macroscopic. 
In recent years, much research has been devoted to the return-volatility correlation, but how to produce this correlation with a microscopic model remains open. Agent-based modeling is a powerful simulation technique that is widely applied in various fields [@gia01; @cha01; @bon02; @eba04; @ren06a; @far09; @sch09; @fen12]. More recently, an agent-based model was proposed for reproducing the cumulative distribution of empirical returns and trades in stock markets [@fen12]. It is an outstanding model with key parameters determined from empirical findings rather than set artificially. In this paper, we construct an agent-based model with asymmetric trading and herding to explore the microscopic origination of the leverage and anti-leverage effects. Although the asymmetric trading and herding behaviors may have been touched upon macroscopically in past decades, they have not yet been taken into account in microscopic modeling. In particular, we propose effective methods to determine the key parameters in our model from historical market data. Methods {#methods .unnumbered} ======= To study the microscopic origination of the return-volatility correlation in stock markets, we take into account the individual and collective behaviors of investors, and construct a microscopic model with multi-agent interactions. Further, we determine the key parameters in our model from historical market data rather than from statistical fitting of the results. Our model is basically built on agents’ daily trading, i.e., buying, selling and holding stocks. Empirical studies indicate that investors make decisions according to the previous stock performance over different time windows [@men10], which suggests that their investment horizons vary. This investment horizon is introduced into our model for a better description of agents’ market behavior.
Most crucially, two important behaviors of investors are taken into account for understanding the return-volatility correlation. **1. Two important behaviors of investors** \(a) Investors’ asymmetric trading in bull and bear markets. There are various definitions of bull and bear markets [@pag02; @jan10]. The usual definition is that in stock markets, bull and bear markets correspond to the periods of generally increasing and decreasing stock prices respectively [@pag02]. In this paper we adopt this definition, and simply define a market to be bullish on one day if the price return is positive, and bearish if the price return is negative. The asymmetric trading in bull and bear markets is an individual behavior, which is induced by investors’ different trading desire when the price drops and rises. To be more specific, an investor’s willingness to trade is affected by the previous price returns, leading the trading probability to be distinct in bull and bear markets. \(b) Investors’ asymmetric herding in bull and bear markets. Herding, as one of the collective behaviors, is that investors cluster in groups when making decisions, and these groups can be large in financial markets [@egu00; @con00; @hwa04; @zhe04; @ken11; @ken12; @ken13]. Actually, the herding behavior in bull markets is not the same as that in bear ones [@hwa04; @kim05; @wal06]. For instance, previous study has shown that in the recent US market, the herding behavior in bear markets appears much more significant than that in bull ones [@hwa04]. Generally, investors may cluster more intensively in either bull or bear markets, leading the herding to be asymmetric. **2. Microscopic model with multi-agent interactions** The stock price on day $t$ is denoted as $Y(t)$, and the logarithmic price return is $R(t)=\ln [Y(t)/Y(t-1)]$. In stock markets, the information for investors is highly incomplete, therefore an agent’s decision of *buy*, *sell* or *hold* is assumed to be random. 
Since intraday trading is not persistent in empirical trading data [@eis07], we consider that only one trading decision is made by each agent in a single day. In our model, there are $N$ agents, and each operates one share every day. On day $t$, each agent $i$ makes a trading decision $S_{i}(t)$, $$S_{i}(t)=\begin{cases} \:1 & \textnormal{buy}\\ -1 & \textnormal{sell}\\ \:0 & \textnormal{hold} \end{cases}\label{eq:si},$$ and the probabilities of buy, sell and hold decisions are denoted as $P_{buy}(t)$, $P_{sell}(t)$ and $P_{hold}(t)$, respectively. The price return $R(t)$ in our model is defined by the difference of the demand and supply of the stock, i.e., the difference between the number of buy agents and sell ones, $$R(t)=\sum_{i=1}^{N}S_{i}(t).\label{eq:return}$$ The volatility is defined as the absolute return $|R(t)|$. The investment horizon is introduced since agents’ decision makings are based on the previous stock performance of different time horizons. It has been found that the relative portion $\gamma_{i}$ of agents with $i$ days investment horizon follows a power-law decay, $\gamma_{i}\varpropto i^{- \eta}$ with $\eta=1.12$ [@fen12]. The maximum investment horizon is denoted as $M$, thus $i_{max}=M$. With the condition of $\sum_{i=1}^{M}\gamma_{i}=1$, we normalize $\gamma_i$ to be $\gamma_{i}=i^{-\eta}/\sum_{i=1}^{M}i^{-\eta}$. Agents’ trading decisions are made according to the previous price returns. For an agent having investment horizon of $i$ days, $\sum_{j=0}^{i-1}R(t-j)$ represents a simplified investment basis for decision making on day $t+1$. We introduce a weighted average return $R'(t)$ to describe the integrated investment basis of all agents. Taking into account that $\gamma_{i}$ is the weight of $\sum_{j=0}^{i-1}R(t-j)$, $R'(t)$ is defined as $$R'(t)=k\cdot\sum_{i=1}^{M}\left[\gamma_{i}\sum_{j=0}^{i-1}R(t-j)\right],\label{eq:fR}$$ where $k$ is a proportional coefficient. 
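As an illustration, the normalized power-law horizon weights $\gamma_{i}=i^{-\eta}/\sum_{i=1}^{M}i^{-\eta}$ can be computed directly. The following Python sketch is ours (the function name is not from the paper):

```python
import numpy as np

def horizon_weights(M, eta=1.12):
    """Normalized weights gamma_i = i^(-eta) / sum_i i^(-eta), i = 1..M."""
    i = np.arange(1, M + 1)
    w = i ** (-eta)
    return w / w.sum()

# With M = 150 as estimated in the paper, the weights sum to one and
# decay as a power law, so short investment horizons dominate.
gamma = horizon_weights(150)
```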
We set $k=1/(\sum_{i=1}^{M}\sum_{j=i}^{M}\gamma_{j})$, such that $|R'(t)|_{max}=N=|R(t)|_{max}$ to ensure that the fluctuation scale of $R'(t)$ remains consistent with the one of $R(t)$ (see Appendix S1). If $M=1$, $R'(t)$ is just identical to $R(t)$. Actually, $M$ varies from market to market, and from time period to time period for a market. According to Ref. [@men10], the investment horizons of investors range from a few days to several months. We estimate the maximum investment horizon $M$ to be $150$ in our model. For $M$ between $50$ and $500$, the simulated results remain qualitatively robust. **(i) Asymmetric trading.** In Ref. [@fen12], investors’ probabilities of buy and sell are assumed to be equal, i.e., $P_{buy}=P_{sell}=p$, and $p$ is a constant. In our model, we adopt the value of $p$ estimated in Ref. [@fen12], $p=0.0154$. We assume $P_{buy}(t)=P_{sell}(t)$ as well, but now $P_{buy}(t)$ and $P_{sell}(t)$ evolve with time since the agents’ trading is asymmetric in bull and bear markets. As the trading probability $P_{trade}(t)=P_{buy}(t)+P_{sell}(t)$, we set its average over time $\langle P_{trade}(t)\rangle =2p$. From the investors’ behavior (a) described in Subsec. 1 in Sect. Methods, we define the market performance of the previous $M$ days to be bullish if $R'(t)>0$, and bearish if $R'(t)<0$. The investors’ asymmetric trading in bull and bear markets gives rise to the distinction between $P_{trade}(t+1)|_{R'(t)>0}$ and $P_{trade}(t+1)|_{R'(t)<0}$. Thus, $P_{trade}(t+1)$ should take the form $$\begin{cases} P_{trade}(t+1)=2p\cdot\alpha & \; R'(t)>0\\ P_{trade}(t+1)=2p & \; R'(t)=0\\ P_{trade}(t+1)=2p\cdot\beta & \; R'(t)<0 \end{cases}\label{eq:fp}.$$ Here $\alpha$ and $\beta$ are constants, and $\langle P_{trade}(t)\rangle =2p$ requires $\alpha+\beta=2$, i.e., $\alpha$ and $\beta$ are not independent. **(ii) Asymmetric herding.** The herding behavior implies that investors can be divided into groups. 
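Before developing the herding mechanism further, the trading-side quantities defined so far — the weighted average return $R'(t)$ and the asymmetric trading probability of Eq. (\[eq:fp\]) — can be sketched in Python. This is an illustrative implementation, not the authors’ code; it uses the simplification $k=1/(\sum_{i=1}^{M}\sum_{j=i}^{M}\gamma_{j})=1/\sum_{j=1}^{M}j\gamma_{j}$, since $\gamma_{j}$ is counted once for each $i\le j$:

```python
import numpy as np

def weighted_average_return(R_hist, eta=1.12):
    """R'(t) from the last M returns R(t-M+1), ..., R(t) (most recent last)."""
    M = len(R_hist)
    i = np.arange(1, M + 1)
    gamma = i ** (-eta)
    gamma /= gamma.sum()
    # R(t-j) enters every horizon i > j, so its effective weight is
    # tail[j] = sum_{i=j+1}^{M} gamma_i.
    tail = gamma[::-1].cumsum()[::-1]
    # k = 1 / sum_j j*gamma_j guarantees |R'|_max = |R|_max (Appendix S1).
    k = 1.0 / np.dot(i, gamma)
    return k * np.dot(tail, np.asarray(R_hist, float)[::-1])

def trade_probability(R_prime, p=0.0154, alpha=1.0):
    """P_trade(t+1) of Eq. (fp); beta = 2 - alpha keeps the average at 2p."""
    if R_prime > 0:
        return 2 * p * alpha
    if R_prime < 0:
        return 2 * p * (2 - alpha)
    return 2 * p
```

For $M=1$ the helper reduces to $R'(t)=R(t)$, as stated in the text.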
Here a herding degree $D(t)$ is introduced to quantify the clustering degree of the herding behavior, $$D(t)=n_A(t)/N,\label{eq:dt}$$ where $n_A(t)$ is the average number of agents in each group on day $t$. Herding should be related to previous volatilities [@con00; @bla12], and we set $n_A(t+1)=|R'(t)|$. Hence the herding degree on day $t+1$ is $$D(t+1)=|R'(t)|/N.\label{eq:od}$$ This herding degree is symmetric for $R'(t)>0$ and $R'(t)<0$. According to the investors’ behavior (b) described in Subsec. 1 in Sect. Methods, however, investors’ herding behaviors in bull and bear markets are asymmetric, i.e., herding is stronger in either bull markets or bear ones. More specifically, $D(t+1)$ is not symmetric for $R'(t)>0$ and $R'(t)<0$, and should be redefined as $$D(t+1)=|R'(t)-\Delta R|/N.\label{eq:fd}$$ Here $\Delta R$ is the degree of asymmetry, and as the magnitude of $\Delta R$ grows, herding becomes more asymmetric. According to Eq. (\[eq:dt\]), $D(t+1)=n_A(t+1)/N$. Therefore $N\cdot D(t+1)$ is the average number of agents in the same group. Thus we randomly divide $N$ agents into $1/D(t+1)$ groups on day $t+1$. Every day, the agents in a group make the same trading decision (buy, sell or hold) with the same probability ($P_{buy}$, $P_{sell}$ or $P_{hold}$). **3. Determination of $\alpha$ and $\Delta R$** This is the key step in the construction of our model. We emphasize that $\alpha$ and $\Delta R$ are determined from the historical market data rather than from statistical fitting of the simulated results. Six representative stock-market indices are studied with our model, including the S&P 500, Shanghai, Nikkei 225, FTSE 100, Hangseng and DAX indices.
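As an illustration of the herding mechanism just defined, the daily group formation can be sketched as follows. The helper is our own; the random assignment strategy is one of several possibilities, constrained only by the average group size $N\cdot D(t+1)$:

```python
import numpy as np

def herding_groups(R_prime, delta_R, N, rng):
    """Randomly split N agents into ~1/D(t+1) groups, D(t+1) = |R'-dR|/N."""
    n_A = max(1.0, abs(R_prime - delta_R))   # average group size N * D(t+1)
    n_groups = max(1, int(round(N / n_A)))
    # Each agent is assigned a group label; all agents sharing a label
    # will make the same buy/sell/hold decision on that day.
    return rng.integers(0, n_groups, size=N), n_groups
```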
We collect the daily data of closing price and trading volume, both of which are from 1950 to 2012 with 15775 data points for the S&P 500 Index, from 1991 to 2006 with 3928 data points for the Shanghai Index, from 2003 to 2012 with 2367 data points for the Nikkei 225 Index, from 2004 to 2012 with 1801 data points for the FTSE 100 Index, from 2001 to 2012 with 2787 data points for the Hangseng Index and from 2008 to 2012 with 1016 data points for the DAX Index. These data are obtained from “Yahoo$!$ Finance” (http://finance.yahoo.com). For comparison of different time series of returns, the normalized return $r(t)$ is introduced, $$r(t)=[R(t)-\langle R(t)\rangle]/\sigma,\label{eq:norm}$$ where $\langle \cdots\rangle $ represents the average over time $t$, and $\sigma=\sqrt{\langle R^{2}(t)\rangle -\langle R(t)\rangle ^{2}}$ is the standard deviation of $R(t)$. The stock market is assumed to be bullish if $r(t)>0$, and bearish if $r(t)<0$. To determine $\alpha$, we first define an average trading volume $V_{+}$ for the bull markets, and $V_{-}$ for the bear ones, $$\left\{ \begin{array}{c} V_{+}=[\sum_{r(t)>0}V(t)]/n_{r(t)>0}\\ V_{-}=[\sum_{r(t)<0}V(t)]/n_{r(t)<0} \end{array}\right..$$ Here $n_{r(t)>0}$ and $n_{r(t)<0}$ represent the number of positive and negative returns respectively, and $V(t)$ is the trading volume on day $t$. As displayed in Table \[tab:value\], the ratio $V_{+}/V_{-}$ is $1.03$ for the S&P 500 Index and $1.21$ for the Shanghai Index. In our model, since the average trading volumes for bull markets ($R'(t)>0$) and bear markets ($R'(t)<0$) are $N\cdot P_{trade}(t+1)|_{R'(t)>0}$ and $N\cdot P_{trade}(t+1)|_{R'(t)<0}$, the ratio of these two average trading volumes is $$\frac{P_{trade}(t+1)|_{R'(t)>0}}{P_{trade}(t+1)|_{R'(t)<0}}=\alpha/\beta=V_{+}/V_{-}.$$ Together with the condition $\alpha+\beta=2$, we determine $\alpha=1.01$ from $V_{+}/V_{-}$ for the S&P 500 Index and $\alpha=1.09$ for the Shanghai Index. 
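The determination of $\alpha$ from daily data can be sketched as follows (an illustrative helper, not the authors’ code). Solving $\alpha/\beta=V_{+}/V_{-}$ together with $\alpha+\beta=2$ gives $\alpha=2(V_{+}/V_{-})/(1+V_{+}/V_{-})$:

```python
import numpy as np

def estimate_alpha(r, V):
    """alpha from normalized returns r and volumes V via the ratio V+/V-."""
    r, V = np.asarray(r, float), np.asarray(V, float)
    ratio = V[r > 0].mean() / V[r < 0].mean()   # V+ / V-
    return 2.0 * ratio / (1.0 + ratio)
```

For instance, $V_{+}/V_{-}=1.21$ yields $\alpha\approx1.09$, as reported for the Shanghai Index.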
Table \[tab:value\] also shows the values of $V_{+}/V_{-}$ and $\alpha$ for the Nikkei 225, FTSE 100, Hangseng and DAX indices. Several data series of different time periods are sampled from the historical market data, and the error is given for $\alpha$ in this table. Student’s *t*-test is performed to analyze the statistical significance for $\alpha$ deviating from $1.0$, and a *p*-value less than 0.05 is considered statistically significant. The analysis shows that only the value $\alpha=1.09$ of the Shanghai Index is significantly deviating from $1.0$, with the $\textnormal{\emph{p}-value}=8.4\times10^{-4}$. In our simulation, for simplicity, we approximate $\alpha$ to be $1.0$ for the S&P 500, Nikkei 225, FTSE 100, Hangseng and DAX indices, and $1.1$ for the Shanghai Index. Now we turn to $\Delta R$. In real markets, herding is related to volatilities [@con00; @bla12]. Thus we introduce the average $|r(t)|$ with the weight $V(t)$ to describe the herding degree in a specific period. Thus the herding degrees of bull markets ($r(t)>0$) and bear markets ($r(t)<0$) are defined as $$\left\{ \begin{array}{c} d_{bull}(r(t))=\sum_{t,r(t)>0}[V(t)\cdot r(t)]/\sum_{t,r(t)>0}V(t)\\ d_{bear}(r(t))=\sum_{t,r(t)<0}[V(t)\cdot|r(t)|]/\sum_{t,r(t)<0}V(t) \end{array}\right.\label{eq:reald}.$$ From empirical findings, the herding degrees of bull and bear stock markets are not equal, i.e., $d_{bull}\neq d_{bear}$. In order to equalize $d_{bull}$ and $d_{bear}$, we introduce a shifting to $r(t)$, denoted by $\Delta r$, such that $d_{bull}(r'(t))=d_{bear}(r'(t))$ with $r'(t)=r(t)+\Delta r$. From this definition of $\Delta r$, we derive (see Appendix S2) $$\Delta r=\frac{1}{2}[d_{bear}(r(t))-d_{bull}(r(t))].$$ Thus we obtain $\Delta r=0.067$ for the S&P 500 Index and $\Delta r=-0.043$ for the Shanghai Index. 
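The computation of $\Delta r$ from the volume-weighted herding degrees of Eq. (\[eq:reald\]) can be sketched as follows (hypothetical helper name, illustrative only):

```python
import numpy as np

def delta_r_from_data(r, V):
    """Delta_r = (d_bear - d_bull) / 2, with volume-weighted herding degrees."""
    r, V = np.asarray(r, float), np.asarray(V, float)
    bull, bear = r > 0, r < 0
    d_bull = np.dot(V[bull], r[bull]) / V[bull].sum()
    d_bear = np.dot(V[bear], -r[bear]) / V[bear].sum()
    return 0.5 * (d_bear - d_bull)
```

A symmetric return series gives $\Delta r=0$; a positive $\Delta r$ indicates stronger herding in bear markets.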
In our model, we similarly compute the shifting to the time series $R(t)$, which equalizes the herding degree $D(t+1)=|R'(t)-\Delta R|/N$ in bull markets ($R'(t)>0$) and bear markets ($R'(t)<0$). Actually, one may prove that the shifting to $R(t)$ is equivalent to the shifting to $R'(t)$ (see Appendix S3). If $R'(t)$ is replaced by $R''(t)=R'(t)+\Delta R$, $D(t+1)$ turns into $D(t+1)=|R''(t)-\Delta R|/N=|R'(t)|/N$, which is symmetric for bull and bear markets. Therefore, $\Delta R$ is the shifting to $R'(t)$, and it is just the shifting to $R(t)$. The time series of returns in different real markets and in our model fluctuate at different levels. For comparison, we normalize the returns with Eq. (\[eq:norm\]). Similarly, $\Delta R$, the shifting to returns, should also be normalized to $\Delta r$. However, in simulating the stock markets with our model, the parameter we need is $\Delta R$. Therefore, we should first derive the relation between $\Delta R$ and $\Delta r$. With the normalization of the time series $R(t)$, $\Delta R$ should be normalized to $\Delta r$, $$[\Delta R-\langle R(t)\rangle]/\sigma=\Delta r,$$ where $\langle \cdots\rangle $ represents the average over time $t$, and $\sigma$ is the standard deviation of $R(t)$. To determine the relation between $\Delta R$ and $\Delta r$, $\Delta R$ is set to be $-4$, $-3$, $-2$, $-1$, $0$, $1$, $2$, $3$, $4$, respectively, and $\alpha$ is set to be $1.0$ to produce the time series $R(t)$. With $R(t)$ simulated $100$ times for each $\Delta R$, we compute $\Delta r$ and average the results. As displayed in Fig. \[fig:Delatr\], the relation between $\Delta R$ and $\Delta r$ is linear, with $\Delta R=38.2\Delta r$. For $\alpha$ between $0.9$ and $1.1$, the results remain robust. Thus, we determine $\Delta R=3$ for the simulation of the S&P 500 Index and $\Delta R=-2$ for the simulation of the Shanghai Index.
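The conversion from $\Delta r$ to the integer parameter $\Delta R$ can be sketched as follows (our own helper; the calibration slope $38.2$ holds for $N=10000$, and the rounding away from zero matches the values reported for the six indices, e.g., $38.2\times0.067\approx2.56\rightarrow3$ for the S&P 500):

```python
import math

def delta_R_from_delta_r(delta_r, slope=38.2):
    """Delta_R = 38.2 * Delta_r (calibrated for N = 10000), rounded away from zero."""
    x = slope * delta_r
    return math.ceil(x) if x > 0 else math.floor(x)
```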
Table \[tab:value\] shows the values of $\Delta r$ and $\Delta R$, as well as the error of $\Delta r$, for the Nikkei 225, FTSE 100, Hangseng and DAX indices. Due to the fluctuation of the empirical data, the error of $\Delta r$ is about $10$ percent. Since the sign of $\Delta r$ determines whether the simulation yields the leverage or the anti-leverage effect, we perform Student’s *t*-test to analyze the statistical significance of $\Delta r$, and the corresponding *p*-value is listed in Table \[tab:value\]. A *p*-value less than 0.05 is considered statistically significant. To further validate the methods for the determination of the key parameters and the simulations of the leverage and anti-leverage effects, eight more indices are studied (see Appendix S4). The simulation of each index correctly produces the leverage or anti-leverage effect. **4. Simulation** The number of agents in our simulations is $10000$, i.e., $N=10000$. With $\alpha$ and $\Delta R$ determined for each index, our model produces the time series of returns $R(t)$ by the following procedure. Initially, the returns of the first $150$ time steps are set to 0. On day $t+1$, we calculate $R'(t)$ according to Eq. (\[eq:fR\]), then $P_{trade}(t+1)$ and $D(t+1)$ according to Eq. (\[eq:fp\]) and Eq. (\[eq:fd\]), respectively. Next, we randomly divide all agents into $1/D(t+1)$ groups. The agents in a group make the same trading decision (buy, sell or hold) with the same probability ($P_{buy}$, $P_{sell}$ or $P_{hold}$). After all agents have made their decisions, we calculate the return $R(t+1)$ with Eq. (\[eq:si\]) and Eq. (\[eq:return\]). Repeating this procedure, we obtain the return time series $R(t)$. $20000$ data points of $R(t)$ are produced in each simulation, but the first $10000$ data points are discarded for equilibration.
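The whole procedure can be sketched as a single routine. This is a simplified, unofficial Python implementation of the steps above (function name, random-number handling and vectorization are our own choices); smaller $N$, $M$ and $T$ than in the paper can be used for quick experiments:

```python
import numpy as np

def simulate(alpha, delta_R, N=10000, M=150, p=0.0154, eta=1.12,
             T=20000, burn_in=10000, seed=0):
    """Generate a return series R(t) with asymmetric trading and herding."""
    rng = np.random.default_rng(seed)
    i = np.arange(1, M + 1)
    gamma = i ** (-eta)
    gamma /= gamma.sum()
    tail = gamma[::-1].cumsum()[::-1]     # weight of R(t-j) in R'(t)
    k = 1.0 / np.dot(i, gamma)            # normalization of Appendix S1
    R = np.zeros(T + M)                   # first M returns initialized to 0
    for t in range(M, T + M):
        R_prime = k * np.dot(tail, R[t - M:t][::-1])
        # asymmetric trading probability, Eq. (fp)
        if R_prime > 0:
            P_trade = 2 * p * alpha
        elif R_prime < 0:
            P_trade = 2 * p * (2 - alpha)
        else:
            P_trade = 2 * p
        # asymmetric herding, Eq. (fd): agents act in groups
        n_A = max(1.0, abs(R_prime - delta_R))
        n_groups = max(1, int(round(N / n_A)))
        sizes = np.bincount(rng.integers(0, n_groups, N), minlength=n_groups)
        u = rng.random(n_groups)          # one decision per group
        S = np.where(u < P_trade / 2, 1, np.where(u < P_trade, -1, 0))
        R[t] = np.dot(sizes, S)           # Eq. (return): demand minus supply
    return R[M + burn_in:]
```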
Results {#results .unnumbered} ======= To describe how past returns affect future volatilities, the return-volatility correlation function $L(t)$ is defined, $$L(t)=[\langle r(t')\cdot|r(t'+t)|^{2}\rangle -L_{0}] /Z,$$ with $Z=\langle |r(t')|^{2}\rangle ^{2}$ and $L_{0}=\langle r(t')\rangle \langle |r(t')|^{2}\rangle $ [@bou01]. Here $\langle \cdots\rangle $ represents the average over time $t'$. As displayed in Fig. \[fig:L\], $L(t)$ calculated with the empirical data of the S&P 500 Index shows negative values up to at least 15 days, and this is the well-known leverage effect [@bou01; @bla76; @qiu06]. On the other hand, $L(t)$ for the Shanghai Index remains positive for about 10 days. That is the so-called anti-leverage effect [@qiu06; @she09a]. Fitting $L(t)$ to an exponential form $L(t)=c\cdot exp(-t/\tau)$, we obtain $\tau=19$ and $8$ days for the leverage and anti-leverage effects, respectively. Compared with the short correlation time of the returns, which is on the order of minutes [@gop99; @liu99], both the leverage and anti-leverage effects are prominent. As the lowest-order nonzero correlations of returns, the leverage and anti-leverage effects are theoretically crucial for the understanding of the price dynamics [@bla76; @qiu06; @she09a; @par11]. In practical applications, these effects are important for risk management and optimal portfolio choice [@bou01a; @bur10]. After the time series $R(t)$ produced in our model is normalized to $r(t)$, we compute the return-volatility correlation function, and the result is in agreement with that calculated from empirical data on amplitude and duration for both the S&P 500 and Shanghai indices, as shown in Fig. \[fig:L\]. This is the first time that the leverage and anti-leverage effects have been produced with a microscopic model. For the Nikkei 225, FTSE 100, Hangseng and DAX indices, the volume data of early years are not available to us. However, $L(t)$ can be computed from price data alone.
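For concreteness, the correlation function $L(t)$ defined above can be computed from a normalized return series as follows (illustrative sketch; the function name is ours):

```python
import numpy as np

def leverage_correlation(r, t):
    """L(t) = [<r(t')|r(t'+t)|^2> - L0] / Z, Z = <|r|^2>^2, L0 = <r><|r|^2>."""
    r = np.asarray(r, float)
    Z = np.mean(np.abs(r) ** 2) ** 2
    L0 = r.mean() * np.mean(np.abs(r) ** 2)
    return (np.mean(r[:-t] * np.abs(r[t:]) ** 2) - L0) / Z
```

Negative $L(t)$ at small lags corresponds to the leverage effect; positive values to the anti-leverage effect.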
In order to reduce the fluctuation of $L(t)$, we collect the price data of a longer period, which are from 1984 to 2012 with 7092 data points for the Nikkei 225 Index, from 1984 to 2012 with 7227 data points for the FTSE 100 Index, from 1988 to 2012 with 6181 data points for the Hangseng Index and from 1990 to 2012 with 5514 data points for the DAX Index. As displayed in Fig. \[fig:FandNL\], $L(t)$ for the simulations is in agreement with that for the corresponding indices. Table \[tab:test\] shows the values of $c$ and $\xi$ of the exponential fit $L(t)=c\cdot exp(\xi t)$ for the six indices and the corresponding simulations. Since $c$ is obviously non-zero, the *p*-value of Student’s *t*-test is only listed for $\xi$. Our model also produces other features of the real markets, such as the long-term correlation of volatilities and the fat-tail distribution of the returns. Here we take the S&P 500 and Shanghai indices as examples. The auto-correlation function of volatilities is defined as $$A(t)=[\langle |r(t')||r(t'+t)|\rangle -\langle |r(t')|\rangle ^{2}]/A_{0},$$ where $A_{0}=\langle |r(t')|^{2}\rangle -\langle |r(t')|\rangle ^{2}$ [@she09a], and $\langle \cdots\rangle $ represents the average over time $t'$. As shown in Fig. \[fig:A\], $A(t)$ for the simulations is consistent with that for the empirical data. The cumulative distributions $P(|r(t)|>x)$ of absolute returns are shown in Fig. \[fig:P\], where the fat tail in the distribution of empirical returns can be observed in that of the simulated returns as well. By the definitions, both $\alpha$ and $\Delta r$ are not dependent on the number of agents (denoted by $N$) in the model. However, the slope of the linear relation between $\Delta R$ and $\Delta r$ increases with $N$. Therefore, the magnitude of $\Delta R$ becomes larger as $N$ grows. For the simulation results, the amplitude of $L(t)$ increases with $N$, but gradually converges for larger $N$ (see Appendix S5). 
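The auto-correlation function $A(t)$ of volatilities can be computed analogously (illustrative sketch):

```python
import numpy as np

def volatility_autocorrelation(r, t):
    """A(t) = [<|r(t')||r(t'+t)|> - <|r|>^2] / (<|r|^2> - <|r|>^2)."""
    a = np.abs(np.asarray(r, float))
    A0 = np.mean(a ** 2) - a.mean() ** 2
    return (np.mean(a[:-t] * a[t:]) - a.mean() ** 2) / A0
```

By construction $A(0)=1$, and a slow decay of $A(t)$ reflects volatility clustering.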
For $A(t)$ and $P(|r(t)|>x)$, the cases are similar. Discussion {#discussion .unnumbered} ========== In our model, the crucial generation mechanisms of the return-volatility correlation are the agents’ asymmetric trading and herding behaviors in bull and bear markets. Now we discuss how these two mechanisms contribute to the leverage and anti-leverage effects, and which one is more significant. According to Eq. (\[eq:fp\]) and $\alpha+ \beta=2$, $P_{trade}$ is symmetric about $R'(t)=0$ if $\alpha=1.0$, and asymmetric if $\alpha \neq 1.0$. On the other hand, $D(t+1)$ in Eq. (\[eq:fd\]) is asymmetric about $R'(t)=0$ if $\Delta R \neq 0$. In our model, the S&P 500 and Shanghai indices are simulated with $(\alpha,\Delta R)=(1.0,3)$ and $(\alpha,\Delta R)=(1.1,-2)$, respectively. Therefore, $P_{trade}$ is symmetric in the simulation of the S&P 500 Index, but asymmetric in the simulation of the Shanghai Index. $D(t+1)$ is asymmetric in the simulations of both the S&P 500 and Shanghai indices. With the other parts of the model remaining unchanged, we consider the following controls: (a) $P_{trade}$ is replaced by a symmetric one in the simulation of the Shanghai Index; (b) $D(t+1)$ is replaced by a symmetric one in the simulations of both the S&P 500 and Shanghai indices; (c) both $P_{trade}$ and $D(t+1)$ are replaced by the symmetric ones in the simulation of the Shanghai Index. The simulations are performed 100 times and averaged. We conclude that for the leverage and anti-leverage effects, both the investors’ asymmetric trading and herding are essential generation mechanisms. As displayed in Fig. \[fig:SPSHL\], the anti-leverage effect is weakened significantly and the leverage effect disappears after we replace the asymmetric $D(t+1)$ with the symmetric one. On the other hand, the anti-leverage effect recedes after the asymmetric $P_{trade}$ is replaced by the symmetric one.
It is worth mentioning that for the five stock markets exhibiting the leverage effect, the S&P 500, Nikkei 225, FTSE 100, Hangseng and DAX, $P_{trade}$ is approximately symmetric, thus contributing very little to the leverage effect. The investors’ asymmetric trading in the Shanghai market may result from the fact that the Shanghai market is an emerging market. Investors are somewhat speculative, and rush for trading as the stock price increases [@qiu06]. **Conclusion** Based on investors’ individual and collective behaviors, we construct an agent-based model to investigate how the return-volatility correlation arises in stock markets. In our model, agents are linked with each other and trade in groups. In particular, two novel mechanisms, investors’ asymmetric trading and herding behaviors in bull and bear markets, are introduced. There are four parameters in our model, i.e., $p$, $M$, $\alpha$ and $\Delta R$. We adopt $p$ estimated in Ref. [@fen12], and estimate the only tunable parameter $M$ to be $150$. $\alpha$ and $\Delta R$, the key parameters, are induced by the asymmetries in trading and herding, respectively. Specifically, we determine $\alpha$ from the ratio of the average trading volume when stock price is rising and that when price is dropping, and $\Delta R$ from investors’ different herding degrees in bull and bear markets. We collect the daily price and volume data of six representative stock-market indices in the world, including the S&P 500, Shanghai, Nikkei 225, FTSE 100, Hangseng and DAX indices. With $\alpha$ and $\Delta R$ determined for these indices respectively, we obtain the corresponding leverage or anti-leverage effect from the simulation, and the effect is in agreement with the empirical one on amplitude and duration. Other features, such as the long-range auto-correlation of volatilities and the fat-tail distribution of returns, are produced at the same time. 
Further, it is quantitatively demonstrated in our model that both the investors’ asymmetric trading and herding are essential generation mechanisms for the leverage and anti-leverage effects at the microscopic level. However, the investors’ trading is approximately symmetric for the five stock markets exhibiting the leverage effect, thus contributing very little to the effect. These two microscopic mechanisms and the methods for the determination of $\alpha$ and $\Delta R$ can also be applied to other complex economic systems with similar asymmetries in individual and collective behaviors, e.g., to futures markets, bank interest rates, foreign exchange rates and spot markets of non-ferrous metals. Supporting Information {#supporting-information .unnumbered} ====================== **Appendix S1** Derivation of $k$ (PDF)  \ **Appendix S2** Derivation of $\Delta r$ (PDF)  \ **Appendix S3** Equivalence of the shifting to $R(t)$ and that to $R'(t)$ (PDF)  \ **Appendix S4** The values of $\alpha$, $\Delta r$ and $\Delta R$ for eight more indices (PDF)  \ **Appendix S5** How $N$ affects the model parameters and simulation results (PDF) Acknowledgments {#acknowledgments .unnumbered} =============== Figure Legends {#figure-legends .unnumbered} ============== ![**The relation of $\Delta R$ and $\Delta r$.** With $\Delta R$ set to be $-4$, $-3$, $-2$, $-1$, $0$, $1$, $2$, $3$ and $4$ respectively, time series $R(t)$ is simulated $100$ times for $\alpha=1.0$. The corresponding $\Delta r$ is computed and averaged for each $\Delta R$. This plot shows a linear relation of $\Delta R$ and $\Delta r$, i.e., $\Delta R=38.2\Delta r$, and this result remains robust for $\alpha$ between $0.9$ and $1.1$. 
[]{data-label="fig:Delatr"}](Deltar.pdf){width="4in"} ![**The return-volatility correlation functions for the S&P 500 and Shanghai indices, and for the corresponding simulations.** The S&P 500 and Shanghai indices are simulated with $(\alpha,\Delta R)=(1.0,3)$ and $(\alpha,\Delta R)=(1.1,-2)$, respectively. Dashed lines show an exponential fit $L(t)=c\cdot exp(-t/\tau)$ with $(c,\tau)=(-0.36,19)$ and $(0.61,8)$ for the S&P 500 Index and the Shanghai Index. []{data-label="fig:L"}](L.pdf){width="4in"} ![**The return-volatility correlation functions for the four indices and the corresponding simulations.** The Nikkei 225, FTSE 100, Hangseng and DAX indices are simulated with $(\alpha,\Delta R)=(1.0,2)$, $(1.0,2)$, $(1.0,2)$ and $(1.0,1)$, respectively. Dashed lines show an exponential fit $L(t)=c\cdot exp(-t/\tau)$ with $(c,\tau)=(-0.25,26)$ for the Nikkei 225 Index, $(-0.33,18)$ for the FTSE 100 Index, $(-0.50,10)$ for the Hangseng Index and $(-0.20,39)$ for the DAX Index. []{data-label="fig:FandNL"}](OtherL.pdf){width="4in"} ![**The auto-correlation functions of volatilities for the S&P 500 and Shanghai indices, and for the corresponding simulations.** For clarity, the curves for the S&P 500 Index have been shifted down by a factor of 10. []{data-label="fig:A"}](A.pdf){width="4in"} ![**The cumulative distributions of absolute returns for the S&P 500 and Shanghai indices, and for the corresponding simulations.** For clarity, the curves for the S&P 500 Index have been shifted left by a factor of 8.5. []{data-label="fig:P"}](P.pdf){width="4in"} ![**The return-volatility correlation functions for the simulated results of the S&P 500 and Shanghai indices, and for those of the controls.** The S&P 500 and Shanghai indices exhibit the leverage and anti-leverage effects, respectively. For the leverage effect, we consider two cases: $D$ is asymmetric; $D$ is symmetric. The latter is the control. 
For the anti-leverage effect, we consider the following cases: both $P_{trade}$ and $D$ are asymmetric; only $D$ is asymmetric; only $P_{trade}$ is asymmetric; both $P_{trade}$ and $D$ are symmetric. The last three cases are controls. For each case, the simulation is performed for $100$ times, and the average $L(t)$ is displayed. []{data-label="fig:SPSHL"}](SPSHL.pdf){width="4in"} Tables {#tables .unnumbered} ====== Index $V_{+}/V_{-}$ $d_{bull}$ $d_{bear}$ $\alpha$ $\Delta r$ *p*-value $\Delta R$ ------------------------ --------------- ------------ ------------ --------------- ------------------ -------------------- ------------ S&P 500 (1950-2012) $1.03$ $0.993$ $1.127$ $1.01\pm0.01$ $0.067\pm0.007$ $6.7\times10^{-4}$ $3$ Shanghai (1991-2006) $1.21$ $0.533$ $0.447$ $1.09\pm0.01$ $-0.043\pm0.005$ $1.0\times10^{-3}$ $-2$ Nikkei 225 (2003-2012) $1.01$ $0.729$ $0.807$ $1.01\pm0.01$ $0.039\pm0.005$ $1.5\times10^{-3}$ $2$ FTSE 100 (2004-2012) $0.98$ $0.673$ $0.729$ $0.99\pm0.01$ $0.028\pm0.003$ $7.3\times10^{-4}$ $2$ Hangseng (2001-2012) $1.04$ $0.966$ $1.029$ $1.02\pm0.02$ $0.032\pm0.003$ $4.4\times10^{-4}$ $2$ DAX (2008-2012) $0.96$ $0.797$ $0.822$ $0.98\pm0.02$ $0.013\pm0.002$ $2.9\times10^{-3}$ $1$ : **The values of $V_{+}/V_{-}$, $d_{bull}$, $d_{bear}$, $\alpha$, $\Delta r$ and $\Delta R$ for the six indices.** $V_{+}/V_{-}$, $d_{bull}$ and $d_{bear}$ are determined from the historical data for each index. We calculate $\alpha$ from $\alpha+ \beta=2$ and $\alpha/\beta=V_{+}/V_{-}$, and $\Delta r$ from $\Delta r = \frac{1}{2}(d_{bear}-d_{bull})$. Student’s *t*-test is performed to analyze the statistical significance of $\Delta r$. A *p*-value less than 0.05 is considered statistically significant. We compute $\Delta R$ from the linear relation between $\Delta r$ and $\Delta R$ for all these indices. 
As $\Delta R$ for the Shanghai Index is negative, it is rounded down to the nearest integer, while $\Delta R$ for the other indices is positive and rounded up to the nearest integer. \[tab:value\]

               $c$              $\xi$              *p*-value
  ------------ ---------------- ------------------ --------------------
  S&P 500      $-0.36\pm0.02$   $-0.053\pm0.005$   $4.5\times10^{-4}$
  simulation   $-0.30\pm0.01$   $-0.032\pm0.001$   $5.7\times10^{-6}$
  Shanghai     $0.61\pm0.12$    $-0.133\pm0.014$   $6.9\times10^{-4}$
  simulation   $0.30\pm0.02$    $-0.066\pm0.004$   $7.9\times10^{-5}$
  Nikkei 225   $-0.25\pm0.01$   $-0.038\pm0.004$   $6.9\times10^{-4}$
  simulation   $-0.27\pm0.01$   $-0.042\pm0.001$   $1.9\times10^{-6}$
  FTSE 100     $-0.33\pm0.03$   $-0.055\pm0.007$   $1.4\times10^{-3}$
  simulation   $-0.26\pm0.01$   $-0.036\pm0.001$   $3.6\times10^{-6}$
  Hangseng     $-0.50\pm0.06$   $-0.098\pm0.012$   $1.2\times10^{-3}$
  simulation   $-0.22\pm0.01$   $-0.027\pm0.001$   $1.1\times10^{-5}$
  DAX          $-0.20\pm0.01$   $-0.026\pm0.002$   $2.0\times10^{-4}$
  simulation   $-0.22\pm0.01$   $-0.031\pm0.001$   $6.5\times10^{-6}$

  : **The values of $c$ and $\xi$ of the exponential fit $L(t)=c\cdot \exp(\xi t)$ for the six indices and the corresponding simulations.** Student’s *t*-test is performed to analyze the statistical significance of $\xi$. A *p*-value less than 0.05 is considered statistically significant. \[tab:test\]
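The two fit parameterizations used above, $L(t)=c\cdot \exp(\xi t)$ in Table \[tab:test\] and $L(t)=c\cdot \exp(-t/\tau)$ in Figure \[fig:L\], are related by $\tau=-1/\xi$. A minimal sketch (not part of the original analysis) checking this consistency against the tabulated values:

```python
# (c, xi) for the two empirical indices, taken from Table [tab:test]
fits = {
    "S&P 500":  (-0.36, -0.053),
    "Shanghai": ( 0.61, -0.133),
}

def decay_time(xi):
    """Convert the rate xi of L(t) = c*exp(xi*t) into the characteristic
    time tau of the equivalent form L(t) = c*exp(-t/tau)."""
    return -1.0 / xi

taus = {name: decay_time(xi) for name, (c, xi) in fits.items()}
```

For the S&P 500, $\xi=-0.053$ gives $\tau\approx18.9$, matching the quoted $(c,\tau)=(-0.36,19)$; for the Shanghai Index, $\xi=-0.133$ gives $\tau\approx7.5$, consistent with the quoted $\tau=8$.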
---
abstract: 'In this paper, we introduce a new problem of manipulating a given video by inserting other videos into it. Our main task is, given an object video and a scene video, to insert the object video at a user-specified location in the scene video so that the resulting video looks realistic. We aim to handle different object motions and complex backgrounds without expensive segmentation annotations. As it is difficult to collect training pairs for this problem, we synthesize fake training pairs that can provide helpful supervisory signals when training a neural network with unpaired real data. The proposed network architecture can take both real and fake pairs as input and perform both supervised and unsupervised training in an adversarial learning scheme. To synthesize a realistic video, the network renders each frame based on the current input and previous frames. Within this framework, we observe that injecting noise into previous frames while generating the current frame stabilizes training. We conduct experiments on real-world videos in object tracking and person re-identification benchmark datasets. Experimental results demonstrate that the proposed algorithm is able to synthesize long sequences of realistic videos with a given object video inserted.'
author:
- |
  Donghoon Lee$^{1,2}$ Tomas Pfister$^2$ Ming-Hsuan Yang$^{2,3}$\
  $^1$Electrical and Computer Engineering and ASRI, Seoul National University\
  $^2$Google Cloud AI\
  $^3$Electrical Engineering and Computer Science, University of California at Merced
bibliography:
- 'videoinvideo.bib'
title: 'Inserting Videos into Videos'
---

![image](fig1.jpg){width=".93\linewidth"}

\[fig:teaser\]

Introduction
============

Object insertion in images aims to insert a new object into a given scene such that the manipulated scene looks realistic.
In recent years, there has been increasing interest in this problem as it can be applied to numerous vision tasks, including but not limited to training data augmentation for object detection [@ouyang2018pedestrian], interactive image editing [@hong2018learning], and manipulating semantic layouts [@lee2018context]. However, there remains a significant gap between its potential and real-world applications, since existing methods focus on modifying a single image while either requiring carefully pre-processed inputs, e.g., segmented objects without backgrounds [@lin2018stgan], or generating objects from a random vector, which makes it difficult to control the resulting appearance of the object directly [@hong2018learning; @lee2018context; @ouyang2018pedestrian]. In this paper, we introduce a new problem of inserting existing videos into other videos. More specifically, as shown in Figure \[fig:teaser\], a user can select a video of an object of interest, e.g., a walking pedestrian, and put it at a desired location in other videos, e.g., surveillance scenes. Then, an algorithm composes the object seamlessly while it moves in the scene video. Note that unlike previous approaches [@hong2018learning; @lee2018context; @ouyang2018pedestrian], we do not assume that the input videos have expensive segmentation annotations. This not only allows users to edit videos more directly and intuitively, but also opens the door to numerous applications, from training data augmentation for object tracking, video person re-identification, and video object segmentation, to video content generation for virtual reality or movies. We pose the problem as a video-to-video synthesis task where the synthesized video containing an object of interest should follow the distribution of existing objects in the scene video.
This falls into an unsupervised video-to-video translation problem since we do not have paired data in general, i.e., we do not observe exactly the same motion of the same object at the location we want to insert in different videos. Nevertheless, without any supervision, we face challenging issues such as handling different backgrounds, occlusions, lighting conditions and object sizes. Existing methods are limited in addressing such issues when there exist a number of moving objects and complex backgrounds. For example, the performance of an algorithm that relies on object segmentation methods, which often fail to crop foreground objects accurately in a complex scene, will be bounded by the accuracy of the segmentation algorithm. To address the problem, we first study its counterpart in the image domain, i.e., how to insert a given object image into frames from different videos. To alleviate the issue of unpaired data, we propose a simple yet effective way to synthesize fake data that can provide supervisory signals for object insertion. The key idea of this supervision approach using the fake data is that, when training a network, the fake data is carefully rendered to closely match the distribution of real data so that back-propagated gradient signals from the supervised fake data can help train the network with the unsupervised real data. In this work, the fake data is generated by blending an object image and a random background patch from each video. Then, the network learns how to reconstruct the object from the blended data. As the reconstruction errors provide strong supervisory signals, this approach facilitates the learning process of the generative adversarial framework [@goodfellow2014generative] using unpaired real data. During inference, a new object is blended into a target location of the scene video and then fed to the trained network.
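The fake-pair construction can be sketched as follows. This is not the authors' implementation: the mask geometry is an illustrative assumption, while the blending operator $u\oplus r = um/2 + r(1-m/2)$ and the $256\times128$ patch size are taken from the method and implementation sections below.

```python
import numpy as np

def blend(u, r, m):
    """Blending operator u (+) r = u*m/2 + r*(1 - m/2) with a fixed
    binary mask m (1 inside the object region, 0 outside)."""
    return u * m / 2.0 + r * (1.0 - m / 2.0)

def make_fake_pair(object_patch, background_patch, m):
    """A supervised fake pair: the input is the object patch blended with
    a random background patch, and the target is the clean object patch
    that the network must learn to reconstruct."""
    return blend(object_patch, background_patch, m), object_patch

rng = np.random.default_rng(0)
H, W = 256, 128                        # patch size from the implementation details
m = np.zeros((H, W, 3))
m[32:224, 16:112] = 1.0                # illustrative mask geometry
u_B = rng.random((H, W, 3))            # object patch cropped from video B
r_A = rng.random((H, W, 3))            # random background region from video A
x, y = make_fake_pair(u_B, r_A, m)     # the pair (u_B (+) r_A, u_B)
```

Inside the mask the input is an even mix of object and background, outside it is pure background, so recovering the clean target forces the network to both keep the object and suppress the mismatched backdrop.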
To extend the above-described algorithm to videos, we discuss how to utilize a history of synthesized frames to obtain a temporally consistent video. We observe that if we simply add a history of previous frames as a new source of input to the object insertion network trained on images, the network easily collapses by relying only on the (clean) previous frames instead of the (blended) current frame. To avoid this pitfall, we use an idea from the denoising autoencoder [@vincent2008extracting]: random noise is injected into previous frames before synthesizing the current frame. It forces the network to learn the semantic relationship between previous frames and the current input instead of blindly copying most of the information from the previous frames. We conduct extensive experiments with strong baseline methods to evaluate the effectiveness of the proposed algorithm on real-world data. Experimental results show that the proposed algorithm can insert challenging objects, e.g., moving pedestrians against cluttered backgrounds, into other videos. For quantitative evaluation, we carry out three experiments. First, we measure the recall of the state-of-the-art object detector [@redmon2018yolov3] for the inserted object. It assesses the overall appearance of the inserted object given the surrounding context. Second, given the state-of-the-art segmentation algorithm [@deeplabv3plus2018], we measure pixel-level precision and recall of the inserted object. Third, we perform a human subjective study to evaluate the realism of inserted objects. The main contributions of this work are summarized as follows: We introduce an important and challenging problem which broadens the domain of object insertion from images to videos. We propose a novel approach to synthesize supervised fake training pairs that can help a deep neural network learn to insert objects without supervised real pairs.
We develop a new conditional GAN model to facilitate the joint training of both unsupervised real and supervised fake training pairs. We demonstrate that the proposed algorithm can synthesize realistic videos based on challenging real-world input videos.

Related Work
============

#### Inserting objects into images.

Given a pair of an object image and a scene image, the ST-GAN approach [@lin2018stgan] learns a warping of the object conditioned on the scene. Based on the warping, the object is transformed to a new location without changing its appearance. As this approach focuses on geometric realism, it uses a carefully segmented object as input. Other approaches aim to insert an object by rendering its appearance. In [@hong2018learning], an object in a target category is inserted into a scene given the location and size of a bounding box. The method first predicts the shape of the object in the semantic space, after which an output image is generated from the predicted semantic label map and an input image. A similar approach is proposed in [@ouyang2018pedestrian] without using a semantic label map. A bounding box of a pedestrian is replaced by random noise and then infilled with a new pedestrian based on the surrounding context. To learn both the placement and shape of a new object, the method in [@chien2017detecting] removes existing objects from the scene using an image in-painting algorithm. Then, a network is trained to recover the existing objects. The results of this method rely significantly on whether the adopted image in-painting algorithm performs well, i.e., does not generate noisy pixels. This issue is alleviated in [@lee2018context] by learning the joint distribution of the location and shape of an object conditioned on the semantic label map. This method aims to find plausible locations and sizes of a bounding box by learning diverse affine transforms that warp a unit bounding box into the scene.
Then, objects of different shapes are synthesized conditioned on the predicted location and its surrounding context. In contrast to existing methods, our algorithm allows a user to specify both the appearance of an object to insert and its location. In addition, our algorithm does not require a segmentation map for training or testing.

#### Conditional video synthesis.

The future frame prediction task conditions on previous frames to synthesize image content [@mathieu2015deep; @finn2016unsupervised; @walker2016uncertain; @denton2017unsupervised; @liang2017dual; @villegas2017decomposing; @villegas2017learning]. Due to future uncertainty and the accumulated error in the prediction process, it can typically generate only short video sequences. On the other hand, we synthesize long video sequences by inserting one video into other videos. The contents of a video can be transferred to other videos to synthesize new videos. In [@chan2018everybody], given a source video of a person, the method transfers that person’s motion to another person in the target video. This method estimates the object motion using a detected body pose and trains a network to render a person conditioned on the pose. The trained network renders a new video as if the target subject follows the motion of the source video. Instead of following exactly the same motion, the approach in [@bansal2018recycle] transfers the abstract content of the source video while the style of the target video is preserved. A cyclic spatio-temporal constraint is proposed to address the task in an unsupervised manner. It translates a source frame to the target domain and predicts the next frame. Then, the predicted frame is translated back to the source domain. This work also forms a cyclic loop which can improve the video quality. The dynamic contents/textures in a video can also be used for conditional video synthesis.
In [@tesfaldet2018], dynamic textures in a video, such as water flow or fire flame, are captured by learning a two-stream network. The work then animates an input image into a video with realistic dynamic motions. Artistic styles of a video are transferred to edit a target video while preserving its contents [@huang2017real; @ruder2018artistic]. For more generic video-to-video translations, the scheme in [@wang2018video] formulates conditional generative adversarial networks (GANs) to synthesize photorealistic videos given a sequence of semantic label maps, sketches or human poses as input. During training, the network takes paired data as input, e.g., sequences of semantic label maps and the corresponding RGB image sequences. The network is constrained to preserve the content of the input sequence in the output video.

![image](fig22.jpg){width=".85\linewidth"}

Proposed Algorithm
==================

In this work, we consider the problem where a user selects an object in video $A$ and wants to insert it at a desired location in video $B$. We assume that each video has annotations for bounding boxes and IDs of objects at every frame. From the bounding boxes of the selected object in $A$, we obtain a video $\mathbf{u}_A$ consisting of cropped images. The goal is to translate $\mathbf{u}_A$ to $\mathbf{v}_A$ so that the translated video is realistic when inserted into $B$. We first tackle this problem’s image counterpart and then extend it to videos.

Inserting images into images {#sec:i2i}
----------------------------

Let $u_A$ denote a frame in $\mathbf{u}_A$ which will be inserted into a user-defined region $r_B$ in $B$. We train a generator network $G_I$ which takes $u_A$ and $r_B$ as inputs to render an output $v_A$.
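The input preparation can be sketched as follows; the nearest-neighbor `resize` helper and the `(x, y, w, h)` box format are illustrative assumptions, while the fixed $256\times128$ patch size follows the implementation details given later.

```python
import numpy as np

PATCH_H, PATCH_W = 256, 128   # patch size from the implementation details

def resize(img, h, w):
    """Nearest-neighbor resize; a simple stand-in for a proper resizer."""
    ys = (np.arange(h) * img.shape[0] / h).astype(int)
    xs = (np.arange(w) * img.shape[1] / w).astype(int)
    return img[ys][:, xs]

def crop_object_video(frames, boxes):
    """Build u_A: crop the selected object's bounding box (x, y, w, h)
    from every frame of A and resize to the fixed patch size."""
    patches = []
    for frame, (x, y, w, h) in zip(frames, boxes):
        patches.append(resize(frame[y:y + h, x:x + w], PATCH_H, PATCH_W))
    return np.stack(patches)

frames = [np.random.rand(512, 512, 3) for _ in range(3)]
boxes = [(40, 60, 64, 128), (44, 60, 64, 128), (48, 60, 64, 128)]
u_A = crop_object_video(frames, boxes)   # shape (3, 256, 128, 3)
```

The user-specified region $r_B$ is cropped and resized the same way from the frames of $B$.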
Note that this is different from existing image-to-image translation tasks [@huang2018multimodal; @isola2017image; @liu2016unsupervised; @CycleGAN2017; @zhu2017toward] since they aim to preserve the content of an input image while changing its attributes or style, e.g., a semantic map is translated to RGB images that have the same semantic layout. In contrast, we need to translate two different images into a single image while learning which part of the content in each image should be preserved. One challenging issue is that we do not have a training tuple $(u_A, r_B, v_A)$. To address this issue, we first cast the problem as a conditional image in-painting task. More specifically, we corrupt $r_B$ by blending $u_A$ into it using pixel-wise multiplications with a fixed binary mask $m$, i.e., $u_A \oplus r_B = u_Am/2 + r_B(1-m/2)$, as shown in Figure \[fig:insert\_img\]. Then, the generator learns a mapping $G_I\colon(u_A\oplus r_B)\to v_A$ to synthesize a realistic $v_A$. To this end, the generator learns how to render the object while suppressing mismatched backgrounds based on the context of surrounding non-blended regions. The key advantage of this formulation is that it is easy to synthesize fake training pairs that are similar to $(u_A\oplus r_B, v_A)$. In this paper, we propose two types of fake pairs, $(u_B\oplus r_A, u_B)$ and $(u_B\oplus r_B, u_B)$, to learn object insertion. The intuition behind them is that these pairs contain two separate tasks that the generator has to perform during inference: rendering consistent backgrounds based on the context, and recovering the object region overlapped with $r_B$. We design two objective functions for fake pairs using $G_I$ and an image discriminator $D_I$.
First, $$\label{eq:fake_pair_adv} \begin{split} & \mathcal{L}_{\mathcal{A}}^{fake}(G_I, D_I) = \mathbb{E}_{(u_B,r_A)}[\log D_I(u_B, u_B \oplus r_A)] \\ & \ + \mathbb{E}_{(u_B,r_B)}[\log D_I(u_B, u_B \oplus r_B)] \\ & \ + \mathbb{E}_{(u_B,r_A)}[\log(1-D_I(G_I(u_B \oplus r_A), u_B \oplus r_A))] \\ & \ + \mathbb{E}_{(u_B,r_B)}[\log(1-D_I(G_I(u_B \oplus r_B), u_B \oplus r_B))], \end{split}$$ is a conditional adversarial loss to make the reconstructed image sharper and more realistic[^1]. Second, $$\label{eq:fake_pair_recon} \mathcal{L}_{\mathcal{R}}(G_I) = \|u_B - G_I(u_B\oplus r_A)\| + \|u_B - G_I(u_B\oplus r_B)\|,$$ is a content loss to reconstruct $u_B$. We present results on the real pair using a network trained only with fake pairs in Figure \[fig:result\_different\_objective\](c). Although some parts are blurry, the overall shape and appearance of inserted objects are preserved. In addition, most of the background pixels from $A$ are removed and replaced by $r_B$, showing that fake pairs provide meaningful signals to the network for inserting unseen objects. Thus, we expect that the network can be trained well with both real and fake pairs. We update the adversarial loss to consider real pairs as follows: $$\label{eq:both_pair_adv1} \begin{split} & \mathcal{L}_{\mathcal{A}}(G_I,D_I) = \mathcal{L}_{\mathcal{A}}^{fake}(G_I, D_I) \\ & \ + \mathbb{E}_{(u_A,r_B)}[\log(1-D_I(G_I(u_A \oplus r_B), u_A \oplus r_B))]. \end{split}$$ However, as shown in Figure \[fig:result\_different\_objective\](d), the synthesized results become unstable when we naively train the network using (\[eq:fake\_pair\_recon\]) and (\[eq:both\_pair\_adv1\]). We attribute this to the different distributions of the fake and real pairs. Although their similar distributions make it possible to generalize the network to unseen images, when the network actually learns with both pair types, it is able to distinguish between them, thus limiting generalization.
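How the fake-pair losses (\[eq:fake\_pair\_adv\]) and (\[eq:fake\_pair\_recon\]) fit together can be sketched as below; this is only an illustration of the loss wiring, with trivial stand-ins for the generator and discriminator and the mean absolute error substituted for the unspecified norm.

```python
import numpy as np

def blend(u, r, m):
    """u (+) r = u*m/2 + r*(1 - m/2), the blending operator defined above."""
    return u * m / 2.0 + r * (1.0 - m / 2.0)

def content_loss(G, u_B, r_A, r_B, m):
    """L_R(G) of (eq:fake_pair_recon): the generator must recover the
    clean patch u_B from both blends; mean absolute error as the norm."""
    return (np.abs(u_B - G(blend(u_B, r_A, m))).mean()
            + np.abs(u_B - G(blend(u_B, r_B, m))).mean())

def fake_pair_adversarial_loss(D, G, u_B, r_A, r_B, m):
    """The fake-pair terms of (eq:fake_pair_adv): for each blend x, the
    discriminator scores the clean target pair (u_B, x) high and the
    generated pair (G(x), x) low."""
    total = 0.0
    for r in (r_A, r_B):
        x = blend(u_B, r, m)
        total += np.log(D(u_B, x)) + np.log(1.0 - D(G(x), x))
    return total

# Trivial stand-ins, only to exercise the loss computation:
G = lambda x: x                                          # identity "generator"
D = lambda y, x: 1.0 / (1.0 + np.exp(-(y - x).mean()))   # toy critic in (0, 1)

rng = np.random.default_rng(1)
m = np.ones((256, 128, 3))               # fully-masked case for simplicity
u_B, r_A, r_B = (rng.random((256, 128, 3)) for _ in range(3))
l_r = content_loss(G, u_B, r_A, r_B, m)
l_a = fake_pair_adversarial_loss(D, G, u_B, r_A, r_B, m)
```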
We address this issue by making it more difficult for the network to distinguish these pairs. In particular, we make it uncertain whether the input is sampled from the fake pair or the real pair. To this end, we add a discriminator $D_E$ that aims to distinguish the input type based on its embedded vector as follows: $$\label{eq:embed_adv} \begin{split} \mathcal{L}_{\mathcal{A}}(G_I, D_E) &= \mathbb{E}_{(u_B,r_A)}[\log D_E(e_{u_B \oplus r_A})] \\ & + \mathbb{E}_{(u_B,r_B)}[\log D_E(e_{u_B \oplus r_B})] \\ & + \mathbb{E}_{(u_A,r_B)}[\log (1-D_E(e_{u_A \oplus r_B}))], \end{split}$$ where $e_x$ denotes an embedded vector from the encoder in $G_I$ with an input $x$. The encoder is trained to fool the discriminator by embedding the fake pair and real pair into the same space. This embedding vector is fed to discriminators as a conditional input. We tile the vector to the same size as the input image and concatenate them along the input channel. The objective function $\mathcal{L}_{\mathcal{A}}(G_I,D_I)$ is modified as follows: $$\label{eq:both_pair_adv2} \begin{split} & \mathcal{L}_{\mathcal{A}}(G_I, D_I) = \mathbb{E}_{(u_B,r_A)}[\log D_I(u_B, e_{u_B \oplus r_A})] \\ & \ + \mathbb{E}_{(u_B,r_B)}[\log D_I(u_B, e_{u_B \oplus r_B})] \\ & \ + \mathbb{E}_{(u_B,r_A)}[\log(1-D_I(G_I(u_B \oplus r_A), e_{u_B \oplus r_A}))] \\ & \ + \mathbb{E}_{(u_B,r_B)}[\log(1-D_I(G_I(u_B \oplus r_B), e_{u_B \oplus r_B}))] \\ & \ + \mathbb{E}_{(u_A,r_B)}[\log(1-D_I(G_I(u_A \oplus r_B), e_{u_A \oplus r_B}))]. \end{split}$$ Finally, the overall objective function for object insertion in the image domain is formulated as follows: $$\label{eq:img_all_loss} \mathcal{L}(G_I, D_I, D_E) = \mathcal{L}_{\mathcal{A}}(G_I, D_I) + \mathcal{L}_{\mathcal{A}}(G_I, D_E) + \mathcal{L}_{\mathcal{R}}(G_I).$$ Figure \[fig:result\_different\_objective\](e) shows that the inserted objects obtained with the loss function in (\[eq:img\_all\_loss\]) are sharp and realistic.

![Network structure of the video insertion network $G_V$.
As an illustrative example, we show the case where the number of layers is four. The network takes previous frames $(v_A^{t-N},\dots,v_A^{t-1})$ and a blended image $u_A^t\oplus r_B^t$ as input to render $v_A^t$. Each square denotes a layer in the network. The dashed lines indicate shared weights, and layers next to each other represent channel concatenations. []{data-label="fig:Gv"}](fig4.png){width=".9\linewidth"}

Inserting videos into videos
----------------------------

In this section, we discuss how to extend the object insertion model from images to videos. To this end, we make two major modifications. First, when rendering the current frame, we also look up previous frames. Second, we add a new term to the objective function to synthesize temporally consistent videos. Let $G_V$ denote a video generator that learns a mapping $G_V\colon (\mathbf{u}_A\oplus \mathbf{r}_B) \to \mathbf{v}_A$[^2]. One simple mapping is to apply $G_I$ to each frame. However, as the mapping of a frame is independent of neighboring frames, the resulting sequence becomes temporally inconsistent. Therefore, we let $G_V$ additionally look up $N$ previous frames while synthesizing each frame from the blended input. This Markov assumption is useful for generating long video sequences [@wang2018video]. Figure \[fig:Gv\] shows the proposed U-net [@ronneberger2015u] style encoder-decoder network architecture. If the network operates without the blue layers, which correspond to the feature maps of previous frames, it is identical to $G_I$ in Section \[sec:i2i\]. The network encodes all previous frames using a shared encoder. Then, the feature maps are linearly combined with scalar weights $w^n$ which represent the importance of each frame. We use $N=2$ and $w^1=w^2=0.5$ for the experiments in this work.
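The weighted combination of previous-frame encodings, with $N=2$ and $w^1=w^2=0.5$ as above, can be sketched as follows; the encoder stub and the noise scale are illustrative assumptions, while the noise injection itself is the denoising trick introduced earlier.

```python
import numpy as np

N = 2                      # number of previous frames looked up
W = [0.5, 0.5]             # importance weights w^1 = w^2 = 0.5 (from the paper)
NOISE_STD = 0.1            # noise scale: an illustrative assumption

def encode(frame):
    """Stub for the shared encoder; a real model applies conv layers."""
    return 0.5 * frame

def combine_history(prev_frames, rng):
    """Encode the N most recent output frames after injecting random noise
    (so the network cannot simply copy them), then linearly combine the
    resulting feature maps with the scalar weights w^n."""
    feats = []
    for frame in prev_frames[-N:]:
        noisy = frame + rng.normal(0.0, NOISE_STD, frame.shape)
        feats.append(encode(noisy))
    return sum(w * f for w, f in zip(W, feats))

rng = np.random.default_rng(0)
history = [np.zeros((256, 128, 3)) for _ in range(3)]
h = combine_history(history, rng)   # combined feature map for the decoder
```

In the full model, `h` is concatenated channel-wise with the encoding of the blended current input $u_A^t\oplus r_B^t$ before decoding $v_A^t$.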
To learn $G_V$, we calculate an error signal for the generated sequence using the following objective function: $$\label{eq:vid_all_loss} \begin{split} \mathcal{L}(G_V, D_I, D_V, D_E) & = \mathcal{L}_{\mathcal{A}}(G_V, D_I) + \mathcal{L}_{\mathcal{A}}(G_V,D_V)\\ & + \mathcal{L}_{\mathcal{A}}(G_V, D_E) + \mathcal{L}_{\mathcal{R}}(G_V), \end{split}$$ where $D_V$ is a video discriminator. The first term is defined similarly to (\[eq:both\_pair\_adv2\]), while we select a random frame from the generated sequence to calculate the loss; this term focuses on the realism of the selected frame. The second term assesses the rendered sequence as follows: $$\label{eq:GvDv} \begin{split} & \mathcal{L}_{\mathcal{A}}(G_V, D_V) = \mathbb{E}_{(\mathbf{u}_B,\mathbf{r}_A)}[\log D_V(\mathbf{u}_B, e_{\mathbf{u}_B \oplus \mathbf{r}_A})] \\ & \ + \mathbb{E}_{(\mathbf{u}_B,\mathbf{r}_B)}[\log D_V(\mathbf{u}_B, e_{\mathbf{u}_B \oplus \mathbf{r}_B})] \\ & \ + \mathbb{E}_{(\mathbf{u}_B,\mathbf{r}_A)}[\log(1-D_V(G_V(\mathbf{u}_B \oplus \mathbf{r}_A), e_{\mathbf{u}_B \oplus \mathbf{r}_A}))] \\ & \ + \mathbb{E}_{(\mathbf{u}_B,\mathbf{r}_B)}[\log(1-D_V(G_V(\mathbf{u}_B \oplus \mathbf{r}_B), e_{\mathbf{u}_B \oplus \mathbf{r}_B}))] \\ & \ + \mathbb{E}_{(\mathbf{u}_A,\mathbf{r}_B)}[\log(1-D_V(G_V(\mathbf{u}_A \oplus \mathbf{r}_B), e_{\mathbf{u}_A \oplus \mathbf{r}_B}))]. \end{split}$$ The third and fourth terms are defined similarly to (\[eq:embed\_adv\]) and (\[eq:fake\_pair\_recon\]), respectively. In addition, while training the network, we observe that the predicted frame $v_A^t$ relies heavily on the previous frames rather than the current input. The main reason is that the current input is corrupted by the blending $u_A^t\oplus r_B^t$, which makes it more difficult to process. Therefore, instead of learning to recover the current frame, the network gradually ignores the current input and depends more on the previous frame.
This is a critical problem when generating long videos as the error from the previous frame accumulates. As a result, the generated sequence contains severe artifacts after a number of frames. To address this issue, we degrade the previous frames as well using random noise before rendering the current frame. By blocking this easy shortcut, the network has to learn the semantic relationship between the two inputs instead of relying on one side. This makes the network significantly more stable during training.

Experimental Results {#sec:experiments}
====================

We evaluate our method on multi-target tracking and person re-identification datasets such as DukeMTMC [@ristani2016MTMC], TownCenter [@benfold2011stable], and UA-DETRAC [@wen2015ua] to show the applicability of our algorithm to real-world examples. These datasets record challenging scenarios where pedestrians or cars move naturally. We split 20% of the data as a test set and present experimental results on the test set. Additional results, including sample generated videos and a user study, are included in the supplementary material.

#### Implementation details.

For all experiments, the network architecture, parameters, and initialization are similar to DCGAN [@radford2015unsupervised]. We use transposed convolutional layers with 64 as the base number of filters for both the generator and discriminator. The batch size is set to 1 and instance normalization is used instead of batch normalization. Input videos are resized to $1024\times2048$ pixels. We crop $u_{(\cdot)}$ and $r_{(\cdot)}$ from the video and resize them to $256\times128$ pixels. Then, we render an object on the $256\times128$-pixel patch, which is transformed to a $512\times256$-pixel image or video for visualization. For each iteration, we pick a random location in $A$ to put a new object since we want to cover the various locations and sizes a user may input.

#### Baseline models and qualitative evaluations.
As the problem introduced in this paper is new, we design strong baselines for performance evaluation. For object insertion in images, we present six baseline models. First, we apply the state-of-the-art semantic segmentation algorithm [@deeplabv3plus2018] to segment the object region of interest in video $A$, e.g., a pedestrian in the DukeMTMC dataset. Then, object pixels are copied to a region in video $B$ using the predicted segmentation mask, as shown in Figure \[fig:baseline\](c). However, the predicted segmentation mask is inaccurate due to the complex background and articulated human pose. Therefore, some parts of the object are often missing and undesired background pixels from video $A$ are included in the synthesized frame. In addition, the brightness of the inserted pixels does not match the surrounding pixels in video $B$. Second, we apply the Poisson blending [@perez2003poisson] method to the predicted object mask, as shown in Figure \[fig:baseline\](d). Although the boundary of the object becomes smoother, the blended image still contains artifacts. In addition, the results depend on the performance of the segmentation algorithm. Third, we design four GAN-based methods. One naive approach focuses on synthesizing a realistic example using the following objective function: $$\label{eq:base_gan1} \begin{split} \mathcal{L}_{\mathcal{A}}^{base}(G, D) &= \mathbb{E}_{u_B}[\log D(u_B)] \\ & + \mathbb{E}_{(u_A,r_B)}[\log D(G(u_A \oplus r_B))]. \end{split}$$ In this case, the generator easily collapses as it is not guided to preserve the content of the input object, as shown in Figure \[fig:baseline\](e). To alleviate this issue, we add an objective function that checks the content in the generated image, e.g., a pixel-wise reconstruction loss or the perceptual loss [@gatys2016image], as shown in Figure \[fig:baseline\](f) and Figure \[fig:baseline\](g).
The objective functions are defined as follows: $$\label{eq:base_gan2} \begin{split} \mathcal{L}_{pixel}^{base}(G, D) &= \mathcal{L}_{\mathcal{A}}^{base}(G,D) + \|u_Am - v_Am\|, \end{split}$$ $$\label{eq:base_gan3} \begin{split} & \mathcal{L}_{perceptual}^{base}(G, D) = \mathcal{L}_{\mathcal{A}}^{base}(G,D) \\ & \qquad\qquad + \sum_l \frac{1}{C_l H_l W_l}\|\phi_l(u_Am)-\phi_l(v_Am)\|^2_2, \end{split}$$ where $\phi_l$ is the $l$-th activation map of the VGG19 network [@simonyan15] with a shape of $C_l\times H_l\times W_l$. We use the activation maps of the relu2\_2 and relu3\_3 layers of the VGG19 network, which is pre-trained on the ImageNet dataset [@russakovsky2015imagenet], to calculate the perceptual loss. The main limitation of these approaches is that the network is trained to preserve all pixels around the object in $u_A$. As a result, a large number of undesired background pixels appear in $v_A$. The final baseline model uses the cycle consistency loss [@CycleGAN2017], which has been used to train networks with unpaired training data. For the cyclic loss, we learn two mapping functions $G\colon (u_A, r_B) \to v_A$ and $F\colon (u_B, r_A) \to v_B$. By taking the conditional inputs into account, the objective function is defined by: $$\label{eq:base_gan4} \begin{split} & \mathcal{L}_{cyc}^{base}(G, F, D_A, D_B) = \mathcal{L}_{\mathcal{A}}(G,D_B) + \mathcal{L}_{\mathcal{A}}(F,D_A) \\ & \ + \mathbb{E}_{(u_A,r_B)}[\|F(G(u_A,r_B),u_A(1-m))-u_A\|_1] \\ & \ + \mathbb{E}_{(u_B,r_A)}[\|G(F(u_B,r_A),u_B(1-m))-u_B\|_1] \\ & \ + \mathbb{E}_{(u_A,r_B)}[\|G(u_A,r_B)(1-m)-r_B(1-m)\|_1] \\ & \ + \mathbb{E}_{(u_B,r_A)}[\|F(u_B,r_A)(1-m)-r_A(1-m)\|_1], \end{split}$$ where $D_A$ and $D_B$ are discriminators for each video and $\mathcal{L}_{\mathcal{A}}(G,D_B)$ and $\mathcal{L}_{\mathcal{A}}(F,D_A)$ are typical adversarial losses. The last two terms are added to force the network to insert an object at the given $r_A$ or $r_B$.
Although the formulation has the potential to learn unpaired mappings, it still cannot guide the network to preserve the same object while translating images, as shown in Figure \[fig:baseline\](h). In addition, we observe that this makes the network unstable during training. In contrast, the proposed algorithm inserts an object with a sharp shape and renders fewer noisy background pixels, as shown in Figure \[fig:baseline\](i). For video object insertion, we consider two baseline models. First, frames are synthesized without using previous frames. As the model only processes the current frame as input, the overall video may contain flickering or inconsistent content. Second, a video is generated without injecting noise into previous frames. In such cases, as small errors in each frame accumulate over frames, the synthesized images become noisy.

[![image](video3/frame0100.jpg){width="0.86\linewidth"}](https://youtu.be/-lL8zPYYNV4)

[![ Results of cross dataset pedestrian insertion (from the DukeMTMC dataset to TownCenter dataset) and a car video insertion on the UA-DETRAC dataset. []{data-label="fig:vid2"}](video25/frame0041.jpg "fig:"){width="0.95\linewidth"}](https://youtu.be/iOJcp-JubWA) [![ Results of cross dataset pedestrian insertion (from the DukeMTMC dataset to TownCenter dataset) and a car video insertion on the UA-DETRAC dataset. []{data-label="fig:vid2"}](rebuttal_img/001.jpg "fig:"){width="0.95\linewidth"}](https://youtu.be/v38b-uD0t3o)

Figure \[fig:vid1\] shows video object insertion results with baseline comparisons. We use the automatic blending mode of a commercial video editing software (Adobe Premiere Pro CC) as one baseline. The other baseline uses DeepLabv3+ [@deeplabv3plus2018] to copy and paste the predicted segment along frames. The results show that the proposed algorithm can synthesize more realistic videos than the baseline methods.
In addition, as shown in Figure \[fig:vid2\], our algorithm is capable of inserting videos across datasets and for different objects such as a car.

  Method    B1     B2     (\[eq:base\_gan1\])   (\[eq:base\_gan2\])   (\[eq:base\_gan3\])   Our
  -------- ------ ------ --------------------- --------------------- --------------------- ----------
  Recall    0.39   0.76   0.73                  0.80                  0.78                  **0.86**

  : Recall of the state-of-the-art object detector [@redmon2018yolov3] on the DukeMTMC dataset. B1: Adobe Premiere blending mode. B2: Segmentation-based composition [@deeplabv3plus2018]. \[tab:det\_recall\]

  Method       B1     (\[eq:base\_gan1\])   (\[eq:base\_gan2\])   (\[eq:base\_gan3\])   Our
  ----------- ------ --------------------- --------------------- --------------------- ----------
  Precision    0.32   0.61                  0.70                  0.76                  **0.85**
  Recall       0.28   0.26                  0.47                  0.61                  **0.72**
  OIS          0.30   0.36                  0.56                  0.68                  **0.78**

  : Object insertion score on the DukeMTMC dataset. B1: Adobe Premiere blending mode. \[tab:seg\_ois\]

#### Quantitative evaluations.

To quantify the realism of the inserted object, an object detector is often used to locate the inserted object [@lee2018context; @ouyang2018pedestrian; @chien2017detecting]. The premise is that a detector is likely to locate only well-inserted objects since state-of-the-art methods take both the object and its surrounding context into account. We use the YOLOv3 detector [@redmon2018yolov3] to determine whether it can correctly detect the inserted object. We fix the detection threshold and measure the recall of the detector by calculating the intersection over union (IoU) between the inserted object and detected bounding boxes, using an IoU threshold of 0.5. Table \[tab:det\_recall\] shows the average recall over networks trained with five different iterations. For each experiment, we sample one thousand images at random. The proposed algorithm achieves the highest recall value on average. In addition, we found an interesting corner case in this experiment.
While (\[eq:base\_gan1\]) generates non-realistic images in a similar mode, as shown in Figure \[fig:baseline\](e), this method achieved the highest recall value in one run. This reveals a limitation of assessing synthesized images with a detector: if a trained detector mistakenly returns a positive detection for a non-realistic fake image, then other non-realistic images in the same mode are highly likely to be detected as positive samples as well. While detection results give an idea of how realistic (or at least, how detectable) the inserted object is, they do not indicate the pixel-level accuracy of the object insertion, i.e., whether the object pixels in the input are preserved in the output. To this end, we introduce a new metric based on pixel-level precision and recall for object insertion. Given a semantic segmentation algorithm, let $s_A$ denote a binary segmentation mask of the input object image. Also let $s_\Delta$ be a binary mask where $s_\Delta(i,j)=1$ when $v_A(i,j)$ is closer to $u_A(i,j)$ than $r_B(i,j)$. Thus, $s_\Delta$ represents pixel locations of the inserted object. We then define the precision $P$, recall $R$, and object insertion score (OIS) as follows: $$\label{eq:precision_recall} P = \frac{|s_A\odot s_\Delta|}{|s_\Delta|}, \ R = \frac{|s_A\odot s_\Delta|}{|s_A|}, \ \text{OIS} = 2\frac{PR}{P+R},$$ where $\odot$ is element-wise multiplication, $|s|$ is the area of the non-zero region in $s$, and OIS is defined as the $F_1$ score of $P$ and $R$. We calculate the score on one thousand randomly generated samples; segmentation masks are obtained with the DeepLabv3+ [@deeplabv3plus2018] method. Table \[tab:seg\_ois\] shows that the proposed algorithm achieves the highest OIS against the other baseline algorithms. We also note that the OIS of the baseline model based on (\[eq:base\_gan1\]) is the lowest. To show a potential application to data augmentation, we train a detector using objects synthesized by our algorithm.
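The precision, recall, and OIS of Eq. (\[eq:precision\_recall\]) can be computed directly from the two binary masks; a minimal sketch with masks flattened to lists:

```python
def insertion_score(s_a, s_delta):
    """Pixel-level precision/recall/OIS.
    s_a: binary mask of the input object; s_delta: binary mask of pixels
    the output actually changed toward the object."""
    inter = sum(a * d for a, d in zip(s_a, s_delta))  # |s_A o s_Delta|
    p = inter / sum(s_delta)                          # precision
    r = inter / sum(s_a)                              # recall
    ois = 2 * p * r / (p + r) if p + r > 0 else 0.0   # F1 of P and R
    return p, r, ois
```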
We detect pedestrians on the DukeMTMC dataset using YOLOv3 initialized on ImageNet. For training and evaluation, we pick 100 and 1,000 frames at random from the video of camera 5 in the dataset. In addition, 3,000 frames are augmented by inserting pedestrians from camera 1. This boosts mAP from 53.1% to 68.3%.

Conclusion
==========

In this paper, we have introduced an algorithm for a new problem: manipulating a given video by inserting other videos into it. It is a challenging task as it is inherently an unsupervised (unpaired) problem. Unlike existing approaches, we propose an algorithm that converts the problem into a paired one by synthesizing fake training pairs and corresponding loss functions. We conducted experiments on real-world videos and demonstrated that the proposed algorithm is able to render long, realistic videos with a given object video inserted. As future work, it would be interesting to make the inserted object interact with the new video, e.g., through path navigation or occlusion handling.

In this supplementary material, we describe additional experimental results.

Qualitative Results
===================

As the problem of inserting videos into videos is new in the field, there are no existing methods that achieve this task. Sample frames from videos are shown in Figure \[fig:ex1\] to Figure \[fig:occ\]. For each figure, at the upper left corner of the footage, we display a frame from video $A$ that contains the target object marked with a red box. Objects inserted into video $B$ using the proposed algorithm are presented at the upper right corner. Rendering results from a video editing tool, Adobe Premiere Pro CC, are located at the bottom left corner as the first strong baseline method. We use the blending mode of the software to automatically overlay two videos. The second strong baseline, shown at the bottom right corner, is based on the state-of-the-art segmentation algorithm [@deeplabv3plus2018].
It often segments the target object incorrectly, e.g., some parts are missing (Figure \[fig:ex1\](a)) or background pixels are included (Figure \[fig:ex1\](b)). Experimental results show that the proposed algorithm synthesizes more realistic videos in most cases. We discuss two failure cases, shown in Figure \[fig:fail\] and Figure \[fig:occ\]. If the image patch of the target object contains other objects or a rare background, then the synthesized object is less realistic, as shown in Figure \[fig:fail\]. This issue can be alleviated by collecting more data. Occlusions caused by other pedestrians or objects in the scene are another challenging case. If the object is occluded in video $A$, as shown in Figure \[fig:occ\](a), then ideally the algorithm has to infer the occluded part and infill the missing pixels. In Figure \[fig:occ\](b), the object has to be inserted behind an existing object in video $B$. This is a particularly challenging case since the algorithm has to decide whether the new object should be inserted in front of the existing object or behind it. In addition, if the new object needs to be inserted behind the existing object, the algorithm also has to determine which part should be visible. We note that this requires scene parsing and an understanding of 3D geometry to better infer how to seamlessly insert objects in videos, which will be our future work. It is also worth mentioning that our long-term goal is video forensics (i.e., to detect fake or tampered videos), although we focus on inserting videos into videos in this work.

User Study
==========

  Method       Baseline 1   Baseline 2   Ours
  ------------ ------------ ------------ -----------
  Avg. Score   2.35         2.27         **3.67**
  Preference   17.3%        13.7%        **70.0%**

  : User study results on synthesized videos. Baseline 1 renders a video using the blending mode of Adobe Premiere Pro CC. Baseline 2 is based on a segmentation algorithm [@deeplabv3plus2018].
\[tab:user\]

We perform a human subject study to evaluate the realism of synthesized videos. We conduct the experiments with 22 test videos and 13 human workers. Each video contains 300 frames (5 seconds), and the identities of the algorithms are anonymized as method 1, method 2, and method 3, as shown in Figure \[fig:user\]. We ask workers to score each method from 1 to 5 (a higher score indicates better visual quality). Therefore, each worker needs to assess 66 different results. For more accurate evaluation, we also provide videos slowed down by factors of two and three along with the original video. Table \[tab:user\] shows the average score and the percentage of cases in which workers gave a method the highest score. We find that workers preferred our approach over the baseline methods 70% of the time. In addition, the proposed algorithm achieves significantly higher average scores.

More Implementation Details
===========================

#### Data preparation.

The DukeMTMC dataset provides a region of interest (ROI) for tracking pedestrians. We use bounding boxes of pedestrians in the ROI as training and test data. For $r_A$ and $r_B$, we pick a random location and size around the ROI. Then, we move $r_A$ by following the movement of a random pedestrian in video $A$. We also scale the trajectory of the target object when it is inserted into video $B$ based on the height ratio between $r_B$ and $u_A$. This is based on our assumption that the length of each step is approximately proportional to the height of a person. For the TownCenter dataset, we use bounding boxes that do not cross the boundary of the image. As the dataset does not provide an ROI, we randomly sample a location to insert an object around the center of the image.

![ An example of our user study layout. In the middle, a user can play the video. []{data-label="fig:user"}](userstudy.jpg){width="1\linewidth"}

![image](nonoise.jpg){width=".9\linewidth"}

#### Network training.
During training, we use a parameter $\lambda$ to balance the importance of the real and fake pairs. It is multiplied with the loss terms related to the fake pair. Empirically, we find that $\lambda=0.1$ makes the training process stable. To further stabilize training, we inject noise into the previous frames when generating the current frame, as discussed in the paper. Without the noise injection, the network blindly uses the information in the previous frame, which may propagate wrong pixel values over time, as shown in Figure \[fig:noise\]. To address this issue, we add $0.01\times z$ at each pixel, where $z$ is sampled from a standard normal distribution.

[^1]: We denote $\mathbb{E}_{(\cdot)} \triangleq \mathbb{E}_{(\cdot)\sim p_{data}(\cdot)}$ for notational simplicity.

[^2]: We denote $(\mathbf{u}_A\oplus \mathbf{r}_B)$ as a sequence of blended inputs $((u_A^{1}\oplus r_B^{1}),\dots,(u_A^{T}\oplus r_B^{T}))$ where $T$ is the number of frames.
---
address:
- 'Department of Mathematics, University of Illinois Urbana Champaign, Urbana, IL 61820.'
- 'Department of Mathematics, University of North Carolina, Chapel Hill, NC 27599.'
abstract: 'In previous work Majda[@Majda1; @Majda2] and McLaughlin[@McLMa; @Rico] computed explicit expressions for the $2N$th moments of a passive scalar advected by a linear shear flow in the form of an integral over ${\bf R}^N$. In this paper we first compute the asymptotics of these moments for large moment number. We are able to use this information about the large $N$ behavior of the moments, along with some basic facts about entire functions of finite order, to compute the asymptotics of the tails of the probability distribution function. We find that the probability distribution has Gaussian tails when the energy is concentrated in the largest scales. As the initial energy is moved to smaller and smaller scales we find that the tails of the distribution grow longer, and the distribution moves smoothly from Gaussian through exponential and “stretched exponential”. We also show that the derivatives of the scalar are increasingly intermittent, in agreement with experimental observations, and relate the exponents of the scalar derivative to the exponents of the scalar.'
author:
- 'Jared C. Bronski'
- 'Richard M. McLaughlin'
title: '**Rigorous estimates of the tails of the probability distribution function for the random linear shear model.**'
---

Background
==========

It is a well documented experimental fact that, while the statistics of the velocity field in a turbulent flow are roughly Gaussian, the statistics of other quantities like the pressure, derivatives of velocity and a passively advected scalar are generally far from Gaussian.[@CGH; @CGHKLTWZZ; @Ching; @GCGLN; @KSS; @ThVanA] For example Castaing et
al.[@CGHKLTWZZ] observed in experiments in a Rayleigh-Bénard convection cell that for Rayleigh number $Ra<10^7$ the distribution of temperature appeared to be roughly Gaussian, while for larger Rayleigh numbers, $Ra>10^8$, the temperature distribution appeared to be closer to exponential. In related work Ching[@Ching] studied the probability distribution functions (pdfs) for temperature differences at different scales, again in a Rayleigh-Bénard cell, and found that the pdfs over a wide range of scales were well approximated by ‘stretched exponential’ distributions of the form $$P(T) = e^{-C |T|^\beta}.$$ At the smallest scales the observed value of the exponent was $\beta\approx .5$, while at the largest scales the observed exponent was roughly $\beta \approx 1.7$. Kailasnath, Sreenivasan and Stolovitzky[@KSS] measured the pdfs of velocity differences in the atmosphere for a wide range of separation scales. They found similar distributions to the ones found by Ching, with exponents ranging from $\beta\approx.5$ for separation distances in the dissipative range to $\beta \approx 2$ on the integral scale. Finally, Thoroddsen and Van Atta[@ThVanA] studied thermally stratified turbulence in a wind tunnel and found the probability distributions of the density to be roughly Gaussian, while the distributions of the density gradients were exponential. A complete understanding of such intermittency lies at the heart of understanding fluid turbulence, and would certainly require a detailed understanding of the creation of small scale fluid structures involving both patchy regions of strong vorticity and intense gradients [@Chorin; @Sreenivasan2]. An alternative starting point is to assume the statistics of the flow are known a priori and to determine how these statistics are manifest in a passively evolving quantity.
This question of inherited statistics is significantly easier than the derivation of a complete theory for fluid turbulence, though it still retains many inherent difficulties such as problems of closure. Motivated by the Chicago experiments of the late 80’s [@CGHKLTWZZ], and earlier work[@AS; @LL; @PS; @Sre2], there has been a tremendous effort towards understanding the origin of the intermittent temperature probability distribution function in passive scalar models with prescribed (usually Gaussian) velocity statistics. For a very complete review of the subject of turbulent diffusion, including a full discussion of scalar intermittency, see the recent survey article of Majda and Kramer [@MajdaKramer]. Most of the work on the scalar statistics has either been directed at understanding the anomalous scaling of temperature structure functions, or at understanding the shape of the tail of the limiting scalar pdf. There has been a wealth of theoretical efforts addressing this last issue of the tail[@BaFa; @Us; @CGHKLTWZZ; @CCK; @Ch; @CFKL; @ChiTu; @Deu; @HoSig1; @pierrehumbert; @K; @Kerstein; @Majda1; @Majda2; @McLMa; @Pumir; @PumirShraimanSiggia; @ShrSig; @SiYak; @Son; @YOBJSS]. A somewhat common theme, particularly in the pumped case, is the prediction that the scalar pdf should develop an exponential tail. For example Kraichnan[@K], Shraiman and Siggia[@ShrSig] and Balkovsky and Falkovich[@BaFa] all find exponential tails. Another important question is to understand the pdf of the scalar gradient. Naturally, gradient information may be expected to amplify contributions from small scales, and a general theory relating the scalar tail with the gradient tail, even for passively evolving quantities, would be quite valuable. There has been somewhat less theoretical effort aimed at exploring the difference in statistics between the scalar and the scalar gradient.
Chertkov, Falkovich and Kolokolov[@CFK], Chertkov, Kolokolov and Vergassola[@CKV] and Balkovsky and Falkovich[@BaFa] have explored this question and have found a stretched exponential distribution of the scalar gradient in situations for which the scalar has an exponential tail. Holzer and Siggia[@HoSig1; @HoSig2], and Chen and Kraichnan[@CK] have observed similar phenomena numerically. In this paper we examine the scalar and scalar gradient pdf tail in an exactly solvable model first studied by Majda [@Majda1] and McLaughlin and Majda [@McLMa], who were able to construct explicit formulas for the moments of a passive scalar advected by a rapidly fluctuating linear shear flow in terms of $N$-dimensional integrals. In that work, it was established that the degree of length scale separation between the initial scalar field and the fluid flow is central to the development of a broader than Gaussian pdf. Here, we explicitly calculate the tails of the pdf for this model. We begin by analyzing the expression derived by Majda for the large time $2N$th moment of the pdf for the random uniform shear model, which is given by an integral over ${\bf R}^N$. From these normalized moments, we will construct the tail of the associated pdf. We point out that in this calculation the convergence of the pdf for finite time to the pdf for infinite time is weak: for fixed moment number the finite time moment converges to the limiting moment. The convergence is almost certainly not uniform in the moments. For a more thorough investigation of the uniformity of this limiting process in the context of general, bounded periodic shear layers, see Bronski and McLaughlin[@Us]. The tail is calculated in two steps. First, using direct calculation and gamma function identities we are able to reduce the $N$-dimensional integral to a [*single*]{} integral of Laplace type, from which the asymptotic behavior of the $2N$th moment follows easily.
The asymptotic behavior of the moments is important for determining the tails of the probability distribution function, as we establish below. Second, we consider the problem of reconstructing the probability measure from the moments. Using ideas from complex analysis, mainly some basic facts about entire functions of finite order and type, we are able to provide rigorous estimates for the rate of decay of the tails of the measure. We find that the tails decay like $$\exp(-c_\alpha|T|^{\frac{4}{3+\alpha}})$$ so depending on the precise value of the parameter $\alpha$ (defined in section II, below, which sets the degree of scale separation between the scalar and flow field) the model admits tails which are Gaussian, exponential, or stretched exponential. We also show that in this model higher order derivatives of the scalar in the shear direction are always more intermittent, with a very simple relationship between the exponents of the scalar and its derivative. The distributions of derivatives in the cross-shear direction, however, display the same tails as the scalar itself. We remark that, while the stream-line topology for shear profiles is admittedly much simpler than that in fully developed turbulence, the fact that the exact limiting tail for the decaying scalar field may be explicitly and rigorously constructed suggests such models to be exceptionally attractive for testing the validity of different perturbation schemes. It is also extremely interesting because it demonstrates that, at least for unbounded flows, a positive Lyapunov exponent (as would typically occur for a general Batchelor flow) is [*not necessary*]{} for intermittency. For an interesting discussion of the role of Lyapunov exponents in producing intermittency see the work of Chertkov, Falkovich, Kolokolov and Lebedev.[@CFKL] The random shear model ---------------------- Here, we briefly review the framework of the random shear model[@Majda1; @Majda2; @McLMa; @Us]. 
We follow Majda, and consider the free evolution of a passive scalar field in the presence of a rapidly fluctuating shear profile: $$\begin{aligned} \label{scalar} {\frac{\partial T}{\partial t}} + \gamma(t) v(x) \frac{\partial T}{\partial y} &=& \bar \kappa \Delta T.\end{aligned}$$ The random function, $\gamma(t)$, represents multiplicative, mean zero Gaussian white noise, delta correlated in time: $$\begin{aligned} \left<\gamma(t) \gamma(s)\right> = \delta(|t-s|)\end{aligned}$$ where the brackets, $\left<\cdot\right>$, denote the ensemble average over the statistics of $\gamma(t)$. The original model considered by Majda involved the case of a uniform shear layer, $v(x)=x$, which leads to the moments considered below [@Majda1]. It is a quite general fact, not special to shear profiles, that a closed evolution equation for the arbitrary N-point correlator is available for the special case of rapidly fluctuating Gaussian noise; see the work of Majda [@Majda2] for a path integral representation of this fact for the special case of random shear layers. For the scalar evolving in (\[scalar\]), the N point correlator, defined as: $$\begin{aligned} \label{correlator} \psi_N({{\bf x}},{{\bf y}},t) &=& \left<\prod_{j=1}^N T(x_j,y_j,t)\right>\\ {{\bf x}}&=& (x_1,x_2,x_3,...,x_N) \nonumber \\ {{\bf y}}&=& (y_1,y_2,y_3,...,y_N) \nonumber\end{aligned}$$ is a function: $\psi_N : R^{2N}\times [0,\infty) \rightarrow R^1$ satisfying $$\begin{aligned} \label{corevolve} {\frac{\partial \psi_N}{\partial t}} &=& \bar \kappa \Delta_{2N} \psi_N + \frac{1}{2} \sum_{i,j=1}^N v(x_i)v(x_j) \frac{\partial^2 \psi_N} {\partial y_i \partial y_j}\end{aligned}$$ where $\Delta_{2N}$ denotes the $2N$ dimensional Laplacian. We next describe the initial scalar field.
Following Majda[@Majda1], we assume that the scalar is initially a mean zero, Gaussian random function depending only upon the variable, $y$: $$\label{data} T|_{t=0} = \int_{R^1} dW(k) e^{2 \pi i k y} |k|^{\frac{\alpha}{2}} \hat \phi_0(k)\qquad \alpha > -1$$ Here, $\hat \phi_0(k)$ denotes a rapidly decaying (large $k$) cut-off function satisfying $\hat\phi_0(k)=\hat\phi_0(-k), \hat\phi_0(0)\ne 0$ and $dW$ denotes complex Gaussian white noise with $$\begin{aligned} \left<dW\right>_W &=& 0\\ \left<dW(k)dW(\eta)\right>_W &=& \delta(k+ \eta) dk d\eta\end{aligned}$$ The spectral parameter $\alpha$ appearing in (\[data\]) is introduced to adjust the excited length scales of the initial scalar field, with increasing $\alpha$ corresponding to initial data varying on smaller scales. We remark that the more general case involving initial data depending upon both $x$ and $y$, and data possessing both mean and fluctuating components, was analyzed by McLaughlin and Majda [@McLMa]. For this case involving shear flows, the evolution of this $N$ point correlator may be immediately converted to parabolic quantum mechanics through partial Fourier transformation in the ${{\bf y}}$ variable.
For the particular initial data presented in (\[data\]), this yields the following solution formula: $$\begin{aligned} \psi_N = \int_{R^N} e^{2 \pi i {{\bf k}}\cdot {{\bf y}}} \hat \psi_N({{\bf x}},{{\bf k}},t) \prod_{j=1}^N \hat \phi_0(k_j) |k_j|^{\frac{\alpha}{2}}dW(k_j)\end{aligned}$$ where the N-body wavefunction, $\hat \psi_N({{\bf x}},{{\bf k}},t)$ satisfies the following parabolic Schrödinger equation: $$\begin{aligned} \label{schrod} {\frac{\partial \hat \psi_N}{\partial t}} &=& \bar \kappa \Delta_{{{\bf x}}} \hat \psi_N - V_{int}({{\bf k}},{{\bf x}}) \hat \psi_N\\ \hat \psi_N|_{t=0}&=&1 \nonumber\end{aligned}$$ The interaction potential, $V_{int}({{\bf k}},{{\bf x}})$, is $$\begin{aligned} V_{int}&=& 4 \pi^2 |{{\bf k}}|^2 +2 \pi^2 (\sum_{j=1}^N k_j v(x_j))^2 .\end{aligned}$$ For the special case of a uniform, linear shear profile, with $v(x)=x$, the quantum mechanics problem in (\[schrod\]) is exactly solvable in any spatial dimension. Taking the ensemble average over the initial Gaussian random measure using a standard cluster expansion, the general solution formula for $\left<\psi_N({{\bf x}},{{\bf y}},t)\right>_W$ is obtained [@Majda1; @McLMa] in terms of $N$ dimensional integrals. The normalized, long time flatness factors, $\mu^\alpha_{2N}= \lim_{t\rightarrow \infty}\frac{\left<T^{2N}\right>} {\left<T^2\right>^N}$, are calculated by evaluating the correlator along the diagonal, $$\begin{aligned} {{\bf x}}&=& (x,x,x,\cdots,x)\\ {{\bf y}}&=& (y,y,y,\cdots,y)\end{aligned}$$ and utilizing the explicit long time asymptotics available through Mehler’s formula. This leads to the following set of normalized moments for the decaying scalar field, $T$: $$\begin{aligned} \mu^\alpha_{2N}&=& \frac{(2N)!}{2^N N! \sigma^N} \int_{R^N} d{{\bf k}}\frac{\prod_{j=1}^N |k_j|^{\alpha}}{\sqrt{\cosh(|{{\bf k}}|)}}\\ \sigma &=& \int_{R^1} dk \frac{|k|^{\alpha} }{\sqrt{\cosh{|k|}}}. \nonumber\end{aligned}$$ Observe that these normalized moments depend upon the parameter $\alpha$.
By varying this parameter Majda and McLaughlin established that the degree of scale separation between the initial scalar and flow field is important in the development of a broader than Gaussian pdf [@Majda1; @McLMa]. They demonstrated through numerical quadrature of these integrals for low order moments that as the initial scalar field develops an infrared divergence (with $\alpha\rightarrow -1$, corresponding to the loss of scale separation between the initial scalar field, and the infinitely correlated linear shear profile) the limiting single point scalar distribution has Gaussian moments[@Majda1]. Conversely they showed that as the length scale of the initial scalar field is reduced, corresponding to increasing values of $\alpha$, the limiting distribution shows growing moments indicative of a broad tailed distribution[@McLMa]. On the basis of these low order moment comparisons, these studies suggest that within these models, the limiting pdf should be dependent upon the scale separation between the scalar and flow field. A fundamental issue concerns whether and how this scale dependence is manifest in the pdf [*tail*]{}. Below, we address precisely this issue, and rigorously establish that the intuition put forth by Majda and McLaughlin is correct through the explicit calculation of the limiting pdf tail. Asymptotics of the probability distribution =========================================== Notation -------- Recall from the previous section that the work of Majda derived exact expressions for the moments of a one parameter family of models indexed by the exponent $\alpha$. In the remainder of the paper $d\mu^\alpha(T)$ will denote the probability measure for the passive scalar $T$ in the Majda model with exponent $\alpha$. The $i^{th}$ moment of the probability measure $d\mu^\alpha(T)$ will be denoted by $\mu^\alpha_i$. In this particular model the distribution is symmetric and thus all odd moments vanish. 
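As a numerical illustration of the low-order moment comparisons referenced above (our own quadrature sketch, not code from the original studies), the normalized flatness factor $\mu^\alpha_4 = \frac{3}{\sigma^2}\int\!\!\int \frac{|k_1 k_2|^{\alpha}}{\sqrt{\cosh(|{\bf k}|)}}\,dk_1\,dk_2$ can be evaluated with midpoint Riemann sums; for $\alpha=0$ it exceeds the Gaussian value of $3$, consistent with the inequality $\cosh(|{\bf k}|)\le\cosh(k_1)\cosh(k_2)$:

```python
import math

def flatness_mu4(alpha=0.0, kmax=40.0, n=600):
    """Normalized flatness mu_4 = 3/sigma^2 * I2, where
    sigma = int_R |k|^alpha / sqrt(cosh k) dk  and
    I2 = int int_{R^2} |k1 k2|^alpha / sqrt(cosh |k|) dk1 dk2.
    Midpoint Riemann sums; symmetry restricts the sums to one quadrant."""
    h = kmax / n
    ks = [(i + 0.5) * h for i in range(n)]
    sigma = 2.0 * h * sum(k ** alpha / math.sqrt(math.cosh(k)) for k in ks)
    i2 = 4.0 * h * h * sum(
        (k1 * k2) ** alpha / math.sqrt(math.cosh(math.hypot(k1, k2)))
        for k1 in ks for k2 in ks)
    return 3.0 * i2 / sigma ** 2
```

The truncation at `kmax` is harmless since the integrand decays like $e^{-|{\bf k}|/2}$.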
Large $N$ asymptotics of the moments
------------------------------------

In this model the exact expression for the $2N$th moment is given by $$\begin{aligned} \mu^\alpha_{2N} &=& {\frac{(2N)!}{\sigma^N 2^N N!}} \int \frac{\prod_{j=1}^N |k_j|^\alpha} {\sqrt{\cosh(|\vec k|)}} dk_1 dk_2 dk_3\dots dk_N \\ \sigma &=& \int \frac{|k|^\alpha d\!k}{\sqrt{\cosh(k)}} \end{aligned}$$ As noted by Majda, $\cosh(|\vec k|)\le \prod \cosh(k_i)$, which implies that the normalized flatness factors are strictly larger than those of a Gaussian, and hence that the tails are broad. The simplest way to analyze this integral, and in particular to understand the behavior for large $N$, is to introduce spherical coordinates. Spherical coordinates in $N$ dimensions can easily be constructed iteratively in terms of spherical coordinates in $N-1$ dimensions as follows. The coordinates in $N$ dimensional spherical coordinates are $\{r,\theta_1,\theta_2,\theta_3\dots\theta_{N-1}\}.$ If $\{x^{N-1}_1,x^{N-1}_2\dots x^{N-1}_{N-1}\}$ are coordinates on ${\bf R}^{N-1}$ then coordinates on ${\bf R}^N$ are given by $$\begin{aligned} x^N_j &=& x^{N-1}_j \sin(\theta_{N-1}) \qquad j \in 1\dots N-1 \\ x^N_N &=& r \cos{\theta_{N-1}} \end{aligned}$$ Using this construction it is simple to calculate that the volume element in $N$ dimensional spherical coordinates is given by $$dx_1 dx_2 \dots dx_N = r^{N-1} dr \prod_{j=1}^{N-1} \sin^{j-1}(\theta_j) d\theta_j \qquad \theta_1 \in [0,2\pi] \qquad \theta_{i>1} \in [0,\pi].$$ Since the volume element is a product measure the $N$ dimensional integral factors as a product of $N$ one dimensional integrals and we are left with the expression $$\mu^\alpha_{2N} = {\frac{(2N)!}{\sigma^N 2^N N!}} I_0(N) \prod_{j=1}^{N-1} I_j,$$ where the $I_j$ are given by $$\begin{aligned} I_0(N) &=& \int_0^\infty r^{N(\alpha+1)-1} {\frac{dr}{\sqrt{\cosh(r)}}} \nonumber \\ I_1 &=& \int_0^{2\pi} |\sin(\theta)|^{\alpha} |\cos(\theta)|^{\alpha} d\theta \nonumber \\ I_j &=& \int_0^\pi |\sin(\theta)|^{j(\alpha +
1)-1} |\cos(\theta)|^{\alpha} d\theta \qquad j > 1. \end{aligned}$$ The angular integrals can be done explicitly in terms of gamma functions, using the beta function identity $$2\int_0^{\pi/2} |\sin(\theta)|^{2z-1}|\cos(\theta)|^{2w-1} d\theta = \beta(z,w) = {\frac{\Gamma(z)\Gamma(w)}{\Gamma(z+w)}}$$ which leads to the expression $$\begin{aligned} \mu^\alpha_{2N} &=& 2{\frac{(2N)!}{\sigma^N 2^N N!}} I_0(N) \prod_{j=1}^{N-1} \frac{\Gamma(\frac{\alpha+1}{2})\Gamma(j\frac{\alpha+1}{2})} {\Gamma((j+1)\frac{\alpha+1}{2})} \nonumber\\ &=& 2{\frac{(2N)!(\Gamma(\frac{\alpha+1}{2}))^{N-1}}{\sigma^N 2^N N!}} I_0(N) \prod_{j=1}^{N-1} \frac{ \Gamma(j\frac{\alpha+1}{2})}{\Gamma((j+1)\frac{\alpha+1}{2})}.\end{aligned}$$ Observe that the product telescopes: the numerator of one term is the denominator of the next, giving the final expression $$\begin{aligned} \mu^\alpha_{2N} &=& 2{\frac{(2N)!}{\sigma^N 2^N N!}} \frac{(\Gamma(\frac{\alpha+1}{2}))^N}{\Gamma(N\frac{\alpha+1}{2})} \int r^{N(\alpha+1)-1} {\frac{dr}{\sqrt{\cosh(r)}}} \nonumber\\ &=& 2{\frac{(2N)!}{\sigma^N 2^N N!}} \frac{(\Gamma(\frac{\alpha+1}{2}))^N}{\Gamma(N\frac{\alpha+1}{2})} I_0(N) \label{eqn:mu2n} \end{aligned}$$ The integral over the radial variable $I_0(N)$ cannot be done explicitly, but the large $N$ asymptotics are given by $$I_0(N) \approx 2^{N(\alpha+1)+\frac{1}{2}}\Gamma(N(\alpha+1)),$$ so that the large $N$ behavior of the moments is given by $$\mu^\alpha_{2N} \approx 2^{N\alpha + \frac{3}{2}}{\frac{(2N)!}{\sigma^N N!}} \frac{\Gamma(N(\alpha+1))(\Gamma(\frac{\alpha+1}{2}))^N }{\Gamma(N(\frac{\alpha+1}{2}))}. \label{eqn:moment_asymp}$$ Note that since $$\frac{\Gamma(N(\alpha+1))}{\Gamma(\frac{N(\alpha+1)}{2})} \rightarrow \infty \quad {\rm as\; }N \rightarrow \infty$$ the moments are strictly larger than the moments of the Gaussian. We will use this to provide rigorous quantitative estimates for the tails of the distribution.
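The beta function identity used in the telescoping step is easy to verify numerically; a quick midpoint-rule check against `math.gamma` (choosing exponents away from the endpoint singularities):

```python
import math

def beta_quadrature(z, w, n=100000):
    """Midpoint rule for 2 * int_0^{pi/2} sin^{2z-1} cos^{2w-1} dtheta."""
    h = (math.pi / 2.0) / n
    return 2.0 * h * sum(
        math.sin((i + 0.5) * h) ** (2 * z - 1)
        * math.cos((i + 0.5) * h) ** (2 * w - 1)
        for i in range(n))

def beta_gamma(z, w):
    """Right-hand side of the identity: Gamma(z) Gamma(w) / Gamma(z + w)."""
    return math.gamma(z) * math.gamma(w) / math.gamma(z + w)
```

For example, $z=3/2$, $w=1/2$ gives $2\int_0^{\pi/2}\sin^2\theta\,d\theta = \pi/2 = \Gamma(3/2)\Gamma(1/2)/\Gamma(2)$.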
The Hamburger moment problem ---------------------------- Having computed simple expressions for the moments of the pdf, as well as asymptotic expressions for large moment number, it is natural to ask the question of whether one can do the inverse problem, and deduce the pdf itself. The problem of determining a measure from its moments is a classical one, known as the Hamburger moment problem[@RS; @ShTa; @Wi]. This problem has a rich theory, and we mention only a very few of the most basic results here. For an overview of the subject, see the book by Shohat and Tamarkin[@ShTa] or the recent electronic preprint by Simon[@BS]. The two most important questions are, of course, existence and uniqueness. There is a necessary and sufficient condition for a set of numbers $\{ \mu_i\}$ to be the moments of some probability measure, namely that the expectation of any positive polynomial be positive. This translates into the following linear algebraic conditions on the diagonal determinants of the Hankel matrix, the matrix with $i,j^{th}$ entry $\mu_{i+j}$: $$\left| \mu_0 \right| > 0, \qquad \left| \begin{array}{cc} \mu_0 & \mu_1 \\ \mu_1 & \mu_2 \end{array}\right| > 0, \qquad \left|\begin{array}{ccc} \mu_0 & \mu_1 & \mu_2 \\ \mu_1 & \mu_2 & \mu_3 \\ \mu_2 & \mu_3 & \mu_4 \end{array} \right| > 0 \ldots$$ These conditions appear to be quite difficult to check in practice. However since the moments considered here are, by construction, the moments of a pdf this condition must hold. A more subtle question is the issue of uniqueness of the measure, usually called determinacy in the literature of the moment problem. One classical sufficient condition for the determinacy of the moment problem is the following condition, due to Carleman[@Ca; @ShTa]: If the moments $\mu_n$ are such that the following sum [*diverges*]{} $$\sum_{j=1}^\infty (\mu_{2j})^{-\frac{1}{2j}} = \infty$$ then the Hamburger moment problem is determinate. 
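The Carleman condition can be checked numerically for this family (our own sketch, not from the original papers), using the large-$N$ moment asymptotics of Equation (\[eqn:moment\_asymp\]) and `math.lgamma` to avoid overflow: the empirical slope of $\log (\mu^\alpha_{2j})^{-\frac{1}{2j}}$ against $\log j$ should approach $-\frac{\alpha+3}{4}$, so the Carleman sum diverges precisely when $\frac{\alpha+3}{4}\le 1$, i.e. $\alpha\le 1$:

```python
import math

def log_mu2n(N, alpha, log_sigma):
    """log of mu_{2N} from the large-N asymptotics, Eq. (eqn:moment_asymp)."""
    a1 = alpha + 1.0
    return ((N * alpha + 1.5) * math.log(2.0)
            + math.lgamma(2 * N + 1)              # log (2N)!
            - N * log_sigma                       # log sigma^{-N}
            - math.lgamma(N + 1)                  # log 1/N!
            + math.lgamma(N * a1)                 # log Gamma(N(alpha+1))
            + N * math.log(math.gamma(a1 / 2.0))  # log Gamma((alpha+1)/2)^N
            - math.lgamma(N * a1 / 2.0))          # log 1/Gamma(N(alpha+1)/2)

def carleman_slope(alpha, j1=200, j2=400, kmax=40.0, n=4000):
    """Empirical exponent in (mu_2j)^(-1/(2j)) ~ c * j^(-(alpha+3)/4)."""
    h = kmax / n
    log_sigma = math.log(2.0 * h * sum(
        ((i + 0.5) * h) ** alpha / math.sqrt(math.cosh((i + 0.5) * h))
        for i in range(n)))
    term = lambda j: -log_mu2n(j, alpha, log_sigma) / (2 * j)
    return (term(j2) - term(j1)) / (math.log(j2) - math.log(j1))
```

Note that $\sigma$ only shifts the linear-in-$N$ part of $\log\mu^\alpha_{2N}$, so the fitted slope is insensitive to the quadrature accuracy.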
Given the asymptotic expression for the moments given in Equation (\[eqn:moment\_asymp\]) it is easy to check that $$(\mu_{2j}^\alpha)^{-\frac{1}{2j}} \approx c j^{-\frac{\alpha+3}{4}}$$ and thus there is a unique measure with these moments for $-1 \le \alpha \le 1$. We will see later that this corresponds to probability distribution functions with tails that range from Gaussian through exponential. In the case $\alpha > 1$ which, as we will see later, corresponds to stretched exponential tails, the problem probably does not have a unique solution. Indeed there are classical examples of collections of moments with the same asymptotic behavior as the stretched exponential distribution for which the moment problem has a whole family of solutions. Given this we come to the question of actually calculating the measure given the moments. There is a rather involved theory for this in the determinate case involving, among other things, orthogonal polynomials and continued fractions[@KrMcL; @ShTa], but in general this problem is extremely difficult. However we show in the next section that it is relatively straightforward to reconstruct the [*tails of the measure*]{} from the moments. Asymptotics of the tails of the distribution -------------------------------------------- Recall that $\mu^\alpha_{2N}$ is the $2N$th moment of some probability measure $d\!\mu^\alpha(T)$, $$\mu_{2N}^\alpha = \int T^{2N} d\!\mu^\alpha(T).$$ We are interested in calculating the asymptotic rate of decay of the tails of the probability measure $d\!\mu^\alpha(T)$. The information about the behavior of the tails of the distribution is contained in the asymptotic behavior of the large moments. We study the tails of the measure $d\!\mu^\alpha(T)$ by introducing the function $$f^\alpha(z) = \sum_{j=0}^\infty \frac{\mu_{2j}^\alpha z^{2j}} {\Gamma(\frac{j(3+\alpha)}{2})C^{2j}},$$ where $C$ is some as yet unspecified constant. 
The factor of $\Gamma(\frac{j(3+\alpha)}{2})$ is chosen so that the series for $f^\alpha$ has a finite but non-zero radius of convergence. This will give us the sharpest control over the tails of $d\!\mu^\alpha(T)$. It is convenient to demand that the radius of convergence of the series be one. Using the root test it is straightforward to check that the radius of convergence of the sum is given by $$r^*= C 2^{-(\alpha+2)}\frac{(\alpha+3)^{\frac{\alpha+3}{4}}} { (\alpha+1)^{\frac{\alpha+1}{4}}} \sqrt{\frac{\sigma}{\Gamma(\frac{\alpha+1}{2})}},$$ so we choose $$C = 2^{\alpha + 2} \frac{(\alpha+1)^{\frac{\alpha+1}{4}}}{(\alpha+3)^{\frac{\alpha+3}{4}}} \sqrt{\frac{\Gamma(\frac{\alpha+1}{2})}{\sigma}}.$$ Since the coefficients $\mu_{2N}^\alpha$ are the moments of a probability measure $d\!\mu^\alpha(T)$ we have the alternative expression $$f^\alpha(z) = \sum_{j=0}^\infty \frac{z^{2j}}{C^{2j} \Gamma(\frac{j(3+\alpha)}{2})} \int T^{2j} d\!\mu^\alpha(T).$$ When $z$ is inside the radius of convergence of the sum (i.e. $|z|<1$) we can switch the integration and the summation and get the following expression for $f^\alpha$ $$\begin{aligned} f^\alpha(z) &=& \int \sum_{j=0}^\infty \frac{T^{2j}z^{2j}} {C^{2j} \Gamma(\frac{j(3+\alpha)}{2})} d\!\mu^\alpha(T)\\ &=& \int F^\alpha(zT) d\!\mu^\alpha(T) \label{eqn:int}.\end{aligned}$$ We note a few simple facts. First notice that the function $f^\alpha(z)$ is a kind of generalized Laplace transform of the measure $d\!\mu^\alpha(T)$. The quantity inside the integral, $F^\alpha(zT)=\sum\frac{ T^{2j}z^{2j}}{C^{2j}\Gamma(\frac{j(3+\alpha)}{2})}$, converges absolutely for all $z$ and thus $F^\alpha(zT)$ is an entire function of the complex variable $z$. Further we know that the integral must converge for $|z|<1$ and diverge for some $|z|>1$, since the original series converged in a circle of unit radius.
We note that the entire function $F^\alpha(z)$ satisfies $$\begin{aligned} |F^\alpha(z)| &=& |\sum_{j=0}^\infty \frac{z^{2j}} {C^{2j}\Gamma(\frac{j(3+\alpha)}{2})}| \\ &\le&\sum_{j=0}^\infty \frac{|z|^{2j}}{|C^{2j}\Gamma(\frac{j(3+\alpha)}{2})|} \\ &\le& F^\alpha(|z|),\label{eqn:bound} \end{aligned}$$ so the function $F^\alpha(z)$ grows fastest along the real axis. Thus we know that the integral in Equation (\[eqn:int\]) converges for $-1<z<1$ and diverges for $z>1$ or $z<-1$. Hence the problem of understanding the rate of decay of the tails of the probability measure $d\!\mu^\alpha(T)$ has been reduced to that of determining the rate of growth of the function $F^\alpha(zT)$. There is a well-developed theory for studying the rate of growth of entire functions, the theory of entire functions of finite order. We recall only the basic facts here; the interested reader is referred to the texts of Ahlfors[@Ahlfors] and Rubel with Colliander[@RuCo]. The radial maximal function $M_F(r)$ of an entire function $F(z)$ is defined to be the maximum of the absolute value of $F$ over a ball of radius $r$ centered on the origin: $$M_F(r) = \max_{|z|=r}|F(z)|.$$ The order $\rho$ of a function $F$ is defined to be $$\rho = \limsup_{r\rightarrow\infty} \frac{\log_+\log_+ M_F(r)}{\log_+(r)},$$ where $\log_+(x) = \max(0,\log(x))$, if this limit exists. It is easy to see from this definition that saying $F$ is of order $\rho$ means that $F$ grows asymptotically like $\exp(A(z) |z|^\rho)$ along the direction of maximum growth in the complex plane, where $A(z)$ grows more slowly than any power of $z$. A related notion is the type of a function of finite order. If $F$ is of order $\rho$ then the type $\tau$ is defined to be $$\tau = \limsup_{r\rightarrow\infty} \frac{\log_+ M_F(r)}{r^\rho}$$ when this limit exists.
Again, speaking very roughly, the type $\tau$ gives the next order asymptotics: if $F$ is of order $\rho$ and type $\tau$ then $F$ grows like $B(z)\exp(\tau |z|^\rho)$, where $B(z)$ is subdominant to the exponential term. Note that by Equation (\[eqn:bound\]) the function $F^\alpha$ grows fastest along the real axis, and thus the maximal rate of growth in the complex plane is exactly the rate of growth along the real axis. There exist alternate characterizations of the order and type of a function in terms of the Taylor coefficients $A_n$ which are very useful for our purposes. These are given as follows: $$\begin{aligned} \rho &=& \limsup_{r\rightarrow\infty} \frac{\log_+\log_+ M_F(r)}{\log_+(r)} = \limsup_{n\rightarrow\infty} \frac{n\log(n)}{-\log(|A_n|)} \label{eqn:order}\\ \tau &=& \limsup_{r\rightarrow\infty} \frac{\log_+M_F(r)}{r^\rho} = \frac{1}{\rho e} \limsup_{n\rightarrow\infty} n|A_n|^{\rho/n}\label{eqn:type}.\end{aligned}$$ For the proofs we refer to the text of Rubel with Colliander[@RuCo]. Using the expressions given in Equations (\[eqn:order\]) and (\[eqn:type\]) we find that the order $\rho$ and type $\tau$ of $F^\alpha(z)$ are given by $$\begin{aligned} \rho^\alpha = \limsup_{n\rightarrow\infty} {\frac{2n \log(2n)}{\log(C^{2n}\Gamma(\frac{(3+\alpha)n}{2}))}} = {\frac{4}{3+\alpha}} \\ \tau^\alpha ={\frac{1}{\rho e}} \limsup_{n\rightarrow\infty} n |\Gamma(\frac{(3+\alpha)n}{2})|^{\frac{-\rho}{n}} = \frac{1}{C^\rho}.\end{aligned}$$ Thus we know that $F^\alpha(zT)$ grows like $A(zT)\exp(C^{-\rho}|z|^{\frac{4}{3+\alpha}}|T|^{\frac{4}{3+\alpha}})$ along the real axis, where $A(zT)$ grows more slowly than $\exp(D|T|^\frac{4}{3+\alpha})$ for any $D$. Further we know that the integral $$\int F^\alpha(zT) d\!\mu^\alpha(T)$$ converges for $|z|<1$ and diverges for $z>1$ or $z<-1$, so to leading order the rate of decay of the measure $d\mu^\alpha(T)$ is given by $\exp(-|C|^{-4/(3+\alpha)}|T|^{4/(3+\alpha)})$.
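As a numerical sanity check on the formula $\rho^\alpha = 4/(3+\alpha)$ (our own sketch, with $\sigma = 1$ chosen for concreteness), the Taylor coefficient characterization of the order can be evaluated at large but finite index:

```python
import math

def order_from_coefficients(alpha, sigma=1.0, j=200_000):
    """Numerical estimate of the order rho of F^alpha from its Taylor
    coefficients A_{2j} = 1/(C^{2j} Gamma(j(3+alpha)/2)), using
    rho = lim 2j log(2j) / (-log|A_{2j}|).  Valid for -1 < alpha."""
    C = (2 ** (alpha + 2)
         * (alpha + 1) ** ((alpha + 1) / 4)
         / (alpha + 3) ** ((alpha + 3) / 4)
         * math.sqrt(math.gamma((alpha + 1) / 2) / sigma))
    neg_log_A = 2 * j * math.log(C) + math.lgamma(j * (3 + alpha) / 2)
    return 2 * j * math.log(2 * j) / neg_log_A

print(order_from_coefficients(1.0))  # close to 4/(3+1) = 1
print(order_from_coefficients(0.0))  # close to 4/(3+0) = 4/3
```

The convergence in $j$ is logarithmically slow, so the finite-index estimate agrees with the closed form only to a few percent.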
It is easy to check that as $\alpha \rightarrow -1$ this estimate becomes $\exp(-\frac{T^2}{4})$, recovering the normalized Gaussian. This result is probably best restated in terms of the cumulative distribution function, rather than the probability measure. If $P(T,T') = \int_T^{T'} d\mu(T)$, with $T'>T$, then it is easy to show that the above implies that $$\begin{aligned} \lim_{T\rightarrow\infty} \exp(c |T|^{\frac{4}{3+\alpha}}) P(T,T') &=& 0 \qquad c < |C|^{\frac{-4}{3+\alpha}} \\ &=& \infty \qquad c > |C|^{\frac{-4}{3+\alpha}}\end{aligned}$$ Interpretation and concluding remarks ===================================== Physically, the Majda model can be thought of as a model for the behavior of a passive scalar at small scales, when the scale of the flow field is [*much larger than the scale of the variations of the scalar.*]{} Recall that the random scalar is given by $$\begin{aligned} T(y) &=& \int |k|^{\frac{\alpha}{2}} \hat\phi_0(k) e^{2 \pi i k y} dW(k) \\ <T(y) T(y')> &=& \int |k|^\alpha |\hat\phi_0(k)|^2 e^{2 \pi i k(y-y')} dk.\end{aligned}$$ In the limit as $\alpha$ approaches $-1$ there is an infrared divergence, so that the energy of the scalar is concentrated at larger and larger scales. In this case $\frac{4}{3+\alpha}\rightarrow 2$, so the normalized distribution function becomes Gaussian, as was originally observed by Majda. One important fact about this model which we would like to emphasize is that it predicts that higher derivatives of the advected scalar should be [*increasingly intermittent*]{}, a fact which was not strongly emphasized in previous work. Observe that due to the special nature of shear flows the scalar derivative $\partial T/\partial y$ satisfies the same equation as the scalar $T$, with [*no additional terms*]{}!
We further note that the initial condition for the derivative of the scalar is given by $$\begin{aligned} \frac{\partial T}{\partial y} &=& \int 2 \pi i |k|^{\frac{\alpha}{2}} k \hat\phi_0(k) e^{2 \pi i k y}dW(k) \\ <\frac{\partial T}{\partial y}\frac{\partial T}{\partial y'}> &=& 4 \pi^2 \int |k|^{\alpha+2} |\hat\phi_0(k)|^2 e^{2 \pi i k(y-y')} dk,\end{aligned}$$ so the derivative of the scalar has a representation of the same form as the representation of the scalar itself, but with the exponent $\alpha$ increased by two, and a slightly modified $\hat\phi_0(k)$. Recall that the exponent $\alpha$ determines the amount of energy at the largest scales and thus the degree of intermittency, with the tails decaying as $\exp(-T^{4/(3+\alpha)})$. Our calculation shows that increasing $\alpha$ increases the width of the tails of the probability distribution function, [*implying that derivatives are more intermittent!*]{} These predictions for the behavior of the tails of the scalar as compared with the scalar gradient are in extremely good agreement with experimental and numerical results. For instance, our calculation shows that if the scalar has exponent $\alpha=-1$, so that the probability distribution function of the scalar has Gaussian tails, then the derivative of the scalar has exponent $\alpha =1$, implying that the distribution of the derivative has [*exponential tails.*]{} This agrees quite well with the experiments of Thoroddsen and Van Atta[@ThVanA], as just one example, who observe that in turbulent thermally stratified flow the pdf for the density has Gaussian tails, while the pdf of the density gradient has exponential tails. Similarly, if the scalar has exponent $\alpha=1$, so that the distribution of the scalar itself is exponential, then the derivative of the scalar should have exponent $\frac{2}{3}$.
This agrees with the calculations of Chertkov, Falkovich and Kolokolov[@CFK]; Balkovsky and Falkovich[@BaFa] also predict exponential tails for the scalar and stretched exponential tails with exponent $\frac{2}{3}$ for the scalar gradient in the Batchelor regime. This also shows reasonably good agreement with the numerical experiments of Holzer and Siggia[@HoSig1; @HoSig2]. In their experiments Holzer and Siggia find that a scalar with exponential tails has a gradient with stretched exponential tails. For large Peclet number the exponent of these stretched exponential tails lies between $0.563$ and $0.661$. Of course one can eliminate $\alpha$ entirely, and one finds the following relationship between the distribution of the scalar and the scalar gradient within this model. If $T$ is distributed according to a stretched exponential pdf with exponent $\rho$, and the gradient $T_y$ according to a stretched exponential pdf with exponent $\rho'$, then $\rho,\rho'$ are related by $$\frac{1}{2} + \frac{1}{\rho} = \frac{1}{\rho'}.$$ It would be extremely interesting to check if this relationship, or some generalization of it, holds in greater generality than shear flows. The above numerical and experimental evidence suggests that this might not be an unreasonable hope. The distribution of the $x$, or cross-shear, derivatives can also be calculated using the same explicit representations derived by Majda. Calculations by the authors for deterministic initial data have shown that derivatives in the cross-shear direction have a distribution with the [*same*]{} asymptotic behavior as the scalar itself. This should be compared to and contrasted with the papers of Son[@Son], and Balkovsky and Fouxon[@BF], which predict distributions with very broad tails (all of the higher moments diverge as $t \rightarrow \infty$) and which predict the same distribution for derivatives of the scalar as for the scalar itself.
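The stated relation between the scalar and gradient exponents follows directly from the tail formula $\rho(\alpha) = 4/(3+\alpha)$ together with the shift $\alpha \to \alpha + 2$ for the gradient; a minimal check (our own sketch, with an illustrative function name):

```python
def tail_exponent(alpha):
    """Tail exponent of the scalar pdf: tails decay like
    exp(-const * T**(4/(3+alpha)))."""
    return 4 / (3 + alpha)

for alpha in (-1.0, -0.5, 0.0, 0.5, 1.0):
    rho = tail_exponent(alpha)            # scalar
    rho_prime = tail_exponent(alpha + 2)  # gradient: alpha -> alpha + 2
    # The relation 1/2 + 1/rho = 1/rho' holds identically in alpha.
    assert abs(0.5 + 1 / rho - 1 / rho_prime) < 1e-12

print(tail_exponent(-1.0))  # 2.0: Gaussian tails for the scalar
print(tail_exponent(1.0))   # 1.0: exponential tails
```

The special cases quoted in the text follow at once: a Gaussian scalar ($\alpha=-1$, exponent 2) has a gradient with exponential tails (exponent 1), and an exponential scalar ($\alpha=1$) has a gradient with stretched exponential tails of exponent $2/3$.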
We would also like to comment on the relationship between intermittency and the Lyapunov exponents of the underlying flow field. A number of papers have addressed the problem of intermittency in the large Peclet number limit by attempting to relate broader than Gaussian tails to the Lyapunov exponents of the flow field[@CFKL]. It is worth noting that a shear flow does not possess a positive Lyapunov exponent, but as we have shown here a shear flow can generate exponential and stretched exponential tails in the passive scalar. This shows that chaotic behavior in the underlying flow, while probably an important effect in realistic flows, is not necessary for the generation of broad tails and intermittency. Finally we would like to comment on the rate of approach to the limiting measure in time. The results presented here analyze the infinite time limit of the measure. As mentioned earlier, the convergence to this limiting measure is expected to be highly non-uniform. A preliminary calculation by the authors for a special choice of the cut-off function $\hat\phi_0(k)$ suggests that for large but finite times the pdf looks like the pdf for the infinite time problem in some core region, with Gaussian tails outside this core region. As time increases the size of this core region demonstrating non-Gaussian statistics grows, and the Gaussian tails get pushed out to infinity. We believe this same picture to hold for any choice of the cut-off function $\hat\phi_0(k)$, but more work is needed to establish this fact. [**Acknowledgements:**]{} Jared C. Bronski would like to acknowledge support from the National Science Foundation under grant DMS-9972869. Richard M. McLaughlin would like to acknowledge support from NSF Career Grant DMS-97019242, and would like to thank L. Kadanoff and the James Franck Institute for support during the writing of this paper, and Raymond T. Pierrehumbert for several useful conversations.
The authors would like to thank Misha Chertkov, Leo Kadanoff and Kenneth T-R. McLaughlin for several conversations, and Pete Kramer for an extremely thorough reading of the original manuscript. [99]{} L. V. Ahlfors, “Complex analysis: an introduction to the theory of analytic functions of one complex variable.” 3d ed, New York, McGraw-Hill, (1979). R.A. Antonia and K.R. Sreenivasan, “Log-normality of temperature dissipation in a turbulent boundary layer,” Phys. Fluids, [**20**]{}, 1800 (1977). E. Balkovsky and G. Falkovich, “Two complementary descriptions of intermittency.”, Phys. Rev. E [**57**]{}, R1231-R1234, (1998). E. Balkovsky and A. Fouxon, “Universal long-time properties of Lagrangian statistics in the Batchelor regime and their application to the passive scalar problem”, [*Electronic preprint chao-dyn/9905020v2*]{} J.C. Bronski and R.M. McLaughlin, “Passive scalar intermittency and the ground state of Schrödinger operators”, Phys. Fluids [**9**]{}, 181-190, (1997). T. Carleman, “Sur le problème des moments.”, Comptes Rendus [**174**]{}, 1680-1682, (1922). B. Castaing, Y. Gagne and E. J. Hopfinger, “Velocity probability distribution functions of high Reynolds number turbulence”, Physica D [**46**]{}, 177-200 (1990). B. Castaing, G. Gunaratne, F. Heslot, L. Kadanoff, A. Libchaber, S. Thomae, X-Z. Wu, S. Zaleski, and G. Zanetti, “Scaling of hard thermal turbulence in Rayleigh-Bénard convection”, J. Fluid Mech. [**204**]{}, 1-30 (1989). S. Chen and R.H. Kraichnan, “Simulations of a randomly advected passive scalar field”, Physics of Fluids, [**10**]{}, 2867-2884, (1998). H. Chen, S. Chen and R.H. Kraichnan, “Probability Distribution of a stochastically advected scalar field”, Phys. Rev Lett. [**63**]{}, 2657-2660, (1989). M. Chertkov, “Instanton for random advection”, Phys. Rev. E [**55**]{}, 2722-2735, (1997). M. Chertkov, G. Falkovich and I. Kolokolov, “Intermittent dissipation of a scalar in turbulence”, Phys. Rev. Lett. [**80**]{}, 2121-2124, (1998). M. 
Chertkov, I. Kolokolov and M. Vergassola, “Inverse cascade and intermittency of passive scalar in one-dimensional smooth flow”, Phys. Rev. E [**56**]{}, 5483-5499, (1997). M. Chertkov, G. Falkovich, I. Kolokolov and V. Lebedev, “Statistics of a passive scalar advected by a large-scale two-dimensional velocity field: analytic solution”, Phys. Rev. E [**51**]{}, 5609-5627, (1995). E.S.C. Ching, “Probabilities for temperature differences in Rayleigh-Bénard convection”, Phys. Rev. A [**44**]{}, 3622-3629, (1991). E.S.C. Ching and Y. Tu, “Passive scalar fluctuations with and without a mean gradient: A numerical study”, Phys. Rev. E [**49**]{}, 1278-1282, (1994). A. J. Chorin, “Vorticity and Turbulence”, Number 103 in Applied Mathematical Science. Springer-Verlag, New York, 1994. J. M. Deutsch, “Generic behavior in linear systems with multiplicative noise”, Phys. Rev. E [**48**]{}, R4179-R4182, (1993). J.P. Gollub, J. Clarke, M. Gharib, B. Lane, and O.N. Mesquita, “Fluctuations and transport in a stirred fluid with a mean gradient”, Phys. Rev. Lett. [**67**]{}, 3507-3510, (1991). M. Holzer and E. Siggia, “Skewed, exponential pressure distributions from Gaussian velocities”, Phys. Fluids A [**5**]{}, 2525-2532, (1993). M. Holzer and E. Siggia, “Turbulent mixing of a passive scalar.”, Phys. Fluids [**6**]{}, 1820-1837, (1994). M. Holzer and E. Siggia, “Erratum:‘Turbulent mixing of a passive scalar.’”, Phys. Fluids [**7**]{}, 1519 (1995). P. Kailasnath, K.R. Sreenivasan, and G. Stolovitzky, “Probability density of velocity increments in turbulent flows.”, Phys. Rev. Lett. [**68**]{}, 2766-2769, (1992). A. R. Kerstein, “Linear-eddy modelling of turbulent transport. Part 6. Microstructure of diffusive scalar mixing fields,” [*J. Fluid Mech.*]{} [**231**]{}, 361-394, 1991. R.H. Kraichnan, “Models of intermittency in hydrodynamic turbulence”, Phys. Rev. Lett. [**65**]{}, 575-578, (1990). R. H. Kraichnan, Phys. Fluids [**11**]{}, 945 (1968). T. Kriecherbauer and K. T-R.
McLaughlin, “Strong asymptotics of polynomials orthogonal with respect to Freud weights”, Preprint. J.C. Larue and P.A. Libby, “Temperature fluctuations in a plane turbulent wake,” Phys. Fluids [**17**]{}, 1956 (1974). A.J. Majda, “The random uniform shear layer: an explicit example of turbulent diffusion with broad tail probability distributions”, Phys. Fluids A [**5**]{}, 1963-1970 (1993). A.J. Majda, “Explicit inertial range renormalization theory in a model for turbulent diffusion.”, J. Statist. Phys. [**73**]{}, 515-542, (1993). A. Majda and P. Kramer, “Simplified models for turbulent diffusion: Theory, numerical modelling, and physical phenomena,” [*Physics Reports*]{} [**314**]{}, 237-574, (1999). R.M. McLaughlin and A.J. Majda, “An explicit example with non-Gaussian probability distribution for nontrivial scalar mean and fluctuation”, Phys. Fluids [**8**]{}, 536 (1996). R.M. McLaughlin, “Turbulent Diffusion”, Ph.D. thesis, Program in Applied and Computational Mathematics, Princeton University, (1994). R.T. Pierrehumbert, Personal Communications. R.T. Pierrehumbert, “Lattice models of advection-diffusion”, preprint. R.R. Prasad and K.R. Sreenivasan, “Quantitative three-dimensional imaging and the structure of passive scalar fields in fully turbulent flows,” J. Fluid Mech. [**216**]{}, 1 (1990). A. Pumir, “A numerical study of the mixing of a passive scalar in three dimensions in the presence of a mean gradient.”, Phys. Fluids [**6**]{}, 2118-2132, (1994). A. Pumir, B. Shraiman and E. Siggia, “Exponential tails and random advection,” [*Phys. Rev. Lett.*]{} [**66**]{}, 2984 (1991). M. Reed and B. Simon, “Mathematical methods in physics”, San Diego, Academic Press, (1980). L. Rubel (with J. Colliander), “Entire and meromorphic functions”, New York, Springer, (1996). Z.S. She and S.A. Orszag, “Physical model of intermittency in turbulence: Inertial range non-Gaussian statistics”, Phys. Rev. Lett. [**66**]{}, 1701-1704, (1991). J.A. Shohat and J.D.
Tamarkin, “The Problem of Moments”, New York, American Mathematical Society, (1943). B. I. Shraiman and E. Siggia, “Lagrangian path integrals and fluctuations in random flow.”, Phys. Rev. E [**49**]{}, 2912-2927, (1994). B. Simon, “The classical moment problem as a self-adjoint finite difference operator”, electronic preprint, http://front.math.ucdavis.edu/math-ph/9906008, (1999). Ya. G. Sinai and V. Yakhot, “Limiting probability distributions of a passive scalar in a random velocity field”, Phys. Rev. Lett. [**63**]{}, 1962-1964, (1989). D.T. Son, “Turbulent decay of a passive scalar in the Batchelor limit: Exact results from a quantum mechanical approach”, Phys. Rev. E [**59**]{}, R3811-R3814, (1999). K.R. Sreenivasan, “Fluid Turbulence”, Rev. Mod. Phys. [**71**]{}, 383-395, (1999). K.R. Sreenivasan, “Evolution of the centerline probability density function of temperature in a plane turbulent wake”, Phys. Fluids [**24**]{}, 1232 (1981). K.R. Sreenivasan and R.A. Antonia, “The phenomenology of small-scale turbulence”, Ann. Rev. Fluid Mech. [**29**]{}, 435-472, (1997). S.T. Thoroddsen and C.W. Van Atta, “Exponential tails and skewness of density-gradient probability density functions in stably stratified turbulence”, J. Fluid Mech. [**244**]{}, 547-566, (1992). D.V. Widder, “The Laplace Transform”, Princeton, Princeton University Press, (1972). V. Yakhot, S. Orszag, S. Balachandar, E. Jackson, Z-S. She and L. Sirovich, “Phenomenological theory of probability distributions in turbulence”, J. Sci. Comp. [**5**]{}, 199-221, (1990).
--- abstract: 'Understanding of the parton distributions at small $x(g)$ is one of the most important issues for clarifying the basics of QCD. In this paper the potential of the LHeC for probing the small $x(g)$ region via $c\bar{c}$ and $b\bar{b}$ production has been investigated. Comparison of the $ep$ and real $\gamma p$ options of the LHeC clearly shows the advantage of the $\gamma p$ collider option. Measurement of $x(g)$ down to $3\times10^{-6}$ with high statistics, especially with the $\gamma p$ option, seems to be reachable, which is two orders of magnitude smaller than the HERA coverage.' author: - | U. Kaya$^{1}$, S. Sultansoy$^{1,2}$, G. Unel$^{3}$\ *$^{1}$TOBB University of Economics and Technology, Ankara, Turkey*\ *$^{2}$ANAS Institute of Physics, Baku, Azerbaijan*\ *$^{3}$UC Irvine, USA* date: '$\,$' title: 'Probing small $x(g)$ region with the LHeC based $\gamma p$ colliders' --- Introduction ============ The problem of precise measurement of parton distribution functions (PDF) is yet to be solved for the energy scales relevant to the LHC results. On the other hand, precision knowledge of the parton distributions at small $x_{B}$ and sufficiently large $Q^{2}$ is crucial for enlightening the QCD basics at all levels, from partons to nuclei. Besides, with the recent discovery of the 125 GeV scalar particle [@ATLAS; @Higgs], [@CMS; @Higgs] at the LHC, the basic components of the electroweak part of the Standard Model (SM) have been completed. However, the Higgs mechanism provides less than 2% of the mass of the visible universe. The remaining 98% is provided by the QCD part of the SM. Therefore, clarifying the basics of QCD is important for a better understanding of our universe. That is why the QCD explorer was proposed ten years ago (see review [@QCD; @2004] and references therein). One of the required measurements is the gluon PDF at low momentum fraction: small $x(g)$. The last machine to probe small $x(g)$ was HERA, which had a reach of about $x(g)>10^{-4}$.
The Large Hadron-Electron Collider (LHeC) project [@J.; @Phys.; @G:; @Nucl.; @Part.; @Phys.; @39; @(2012); @075001], the most powerful microscope ever designed, will provide a unique opportunity to probe the extremely small $x(g)$ region. In this project, which aims at proton-electron collisions, the $e$-beam can be obtained from a new circular or linear machine. Today, the linac-ring (LR) option is considered the baseline for the LHeC [@arXiv:1211.4831v1; @[hep-ex]; @20; @Nov; @2012]. Actually this decision was almost obvious from the beginning due to the complications in constructing by-pass tunnels around the existing experimental caverns and installing the $e$-ring in the already commissioned tunnel. Let us recall that the CDR stage of the LHC also assumed $ep$ collisions using the then existing LEP ring, but it turned out that the LHC installation required dismantling LEP from the tunnel. Within the linac-ring option of the LHeC, a proton beam from the LHC collides with a high energy electron or photon beam. The photons may be virtual ones from the electron beam, resulting in a typical DIS event, or they can be real photons originating from the Compton backscattering process. In the latter case, the photon spectrum consists of high energy photons peaking at about 80% of the electron beam energy, on top of the continuum of Weizsacker-Williams photons. The present study aims to investigate the feasibility of a small $x(g)$ measurement with such a machine. Main parameters of the $ep$ and $\gamma p$ options of the LHeC are presented in Section 2. Section 3 is devoted to the investigation of the small $x(g)$ region using the processes $\gamma p$ $\rightarrow$ $c\bar{{c}}X$ and $\gamma p$ $\rightarrow$ $b\bar{{b}}X$. The generator level results are obtained using the CompHEP [@CompHep] software package.
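The "about 80%" figure quoted above can be sketched from standard Compton backscattering kinematics: the maximum photon energy fraction is $y_{max} = x_0/(1+x_0)$, where $x_0 = 4E_e\omega_0/m_e^2$ for laser photon energy $\omega_0$, and $x_0 \approx 4.8$ is the value usually quoted in the photon collider literature to avoid $e^+e^-$ pair production in laser photon collisions. This is our own illustrative sketch, not code from the paper.

```python
def cbs_max_photon_fraction(E_e_GeV, omega0_eV):
    """Maximum energy fraction y_max = x0/(1+x0) carried by a
    Compton backscattered photon, with x0 = 4 E_e omega0 / m_e^2."""
    m_e = 0.511e-3  # electron mass in GeV
    x0 = 4 * E_e_GeV * (omega0_eV * 1e-9) / m_e ** 2
    return x0 / (1 + x0)

# For the commonly quoted x0 = 4.8 the peak sits at ~83% of E_e:
x0 = 4.8
print(x0 / (1 + x0))  # ~0.828
# For E_e = 60 GeV with a ~1.3 eV laser, x0 ~ 1.2 and the peak is lower:
print(cbs_max_photon_fraction(60, 1.3))  # ~0.54
```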
Comparison with the processes $ep$ $\rightarrow$ $ec\bar{{c}}X$ and $ep$ $\rightarrow$ $eb\bar{{b}}X$ shows an obvious advantage of the LHeC $\gamma p$ option, which will provide more than one order of magnitude higher cross sections in the small $x(g)$ region compared to the $ep$ option. Finally, Section 4 provides a summary of the conclusions together with some suggestions. Main parameters of $ep$ and $\gamma p$ options of the LHeC ========================================================== It should be emphasized that real $\gamma p$ collisions can be achieved only on the basis of linac-ring type $ep$ colliders (see review [@A.; @N.; @Akay] for the history and status of linac-ring type collider proposals). A real $\gamma$ beam for the $\gamma p$ collider [@S.; @I.; @Alekhin], [@A.; @K.; @Ciftci; @et; @al.], [@TESLA; @*; @HERA; @based], [@Conversion; @efficiency] will be produced using Compton backscattering of a laser beam off the high energy electron beam [@I.; @F.; @Ginzburg], [@Principles; @of; @photon; @colliders]. Possible application of this mechanism to the other LHeC option under consideration, namely to ring-ring type $ep$ colliders, results in negligible $\gamma p$ luminosities, $L_{\gamma p}<10^{-7}L_{ep}$. Currently, two versions of the $ep$ option of the LHeC are under consideration: a multi-pass energy recovery linac (ERL) yielding $L_{ep}=10^{33}\, cm^{-2}s^{-1}$ and a pulsed single pass linac yielding $L_{ep}=10^{32}\, cm^{-2}s^{-1}$. In the first case, $E_{e}=60\, GeV$ has been chosen as the baseline electron energy, since higher energies are not available because of the synchrotron radiation loss in the arcs. In the second case, beam energies above $140\, GeV$ would be available [@J.; @Phys.; @G:; @Nucl.; @Part.; @Phys.; @39; @(2012); @075001]. These two options will be denoted as LHeC-1 and LHeC-2. Main parameters of the LHeC $ep$ collisions in the different options are presented in Table 1.
         $E_{e},\, GeV$   $E_{p},\, TeV$   $\sqrt{s},\, TeV$   $L,\, cm^{-2}s^{-1}$
-------- ---------------- ---------------- ------------------- ----------------------
ERL      $60$             $7$              $1.30$              $10^{33}$
LHeC-1   $60$             $7$              $1.30$              $9\times10^{31}$
LHeC-2   $140$            $7$              $1.98$              $4\times10^{31}$
-------- ---------------- ---------------- ------------------- ----------------------

: Main parameters of ep collisions. \[TABLE1\]

In the $\gamma p$ option the luminosity of $\gamma p$ collisions will be similar to the luminosity of $ep$ collisions for the pulsed single pass linac. In the ERL case, L$_{\gamma p}$ will be 10 times lower than L$_{ep}$, as the energy recovery does not work after Compton backscattering. Inclusive processes yielding $c\bar{c}$ and $b\bar{b}$ final states at LHeC =========================================================================== The final states that can be easily distinguished from the background events and that would give a good measure of $x(g)$ are $eg\to eq\bar{q}$ and/or $\gamma g\to q\bar{q}$, where the gluon ($g$) is from the LHC protons, the electrons and photons are from a new accelerator (namely, an electron linac providing beams tangential to the LHC) to be built, and the letter $q$ stands for a heavy quark flavour, such as the $b$ quark and possibly the $c$ as well. The $b$ quark final states are easier to identify due to the $b$-tagging possibility using currently available technologies: for example, the ATLAS silicon detectors have about 70% $b$-tagging efficiency. In Table \[TABLE2\] we present the cross sections for heavy quark pair production via DIS, quasi real photons (WWA) and Compton Back Scattering (CBS) photons at the LHeC with $E_{e}=60\, GeV$ and $E_{e}=140\, GeV$. For comparison, we also give values for DIS and WWA processes at HERA. It is seen that WWA quasi real photons are advantageous compared to DIS, and CBS photons are advantageous compared to WWA. All numerical calculations are performed using CompHep [@CompHep] with CTEQ6L1 [@CTEQ] PDF distributions.
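The $\sqrt{s}$ values in Table 1 follow from the standard asymmetric-collider formula $\sqrt{s} \approx 2\sqrt{E_e E_p}$; a quick check (our own sketch):

```python
import math

def cm_energy_TeV(E_e_GeV, E_p_TeV):
    """Approximate center-of-mass energy of an asymmetric ep collider,
    sqrt(s) ~ 2 sqrt(E_e * E_p), with beam masses neglected."""
    return 2 * math.sqrt(E_e_GeV * E_p_TeV * 1000) / 1000

print(cm_energy_TeV(60, 7))   # ~1.30 TeV (ERL and LHeC-1)
print(cm_energy_TeV(140, 7))  # ~1.98 TeV (LHeC-2)
```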
In Figure \[FIGURE1\], the differential cross section as a function of $x(g)$ is shown for WWA photons at HERA and at the LHeC. As expected, the LHeC will give the opportunity to investigate an order of magnitude smaller $x(g)$ than HERA.

------------------------------ -------------------- -------------------- -------------------- -------------------- -------------------- --------------------
Machine                        DIS                  WWA                  CBS                  DIS                  WWA                  CBS
HERA                           $6.07\times10^{2}$   $4.57\times10^{3}$   -                    $4.66\times10^{4}$   $7.29\times10^{5}$   -
LHeC-1 ($E_{e}=60\, GeV$)      $4.26\times10^{3}$   $2.99\times10^{4}$   $2.41\times10^{5}$   $2.38\times10^{5}$   $3.44\times10^{6}$   $2.38\times10^{7}$
LHeC-2 ($E_{e}=140\, GeV$)     $7.07\times10^{3}$   $4.91\times10^{4}$   $3.70\times10^{5}$   $3.72\times10^{5}$   $5.27\times10^{6}$   $3.46\times10^{7}$
------------------------------ -------------------- -------------------- -------------------- -------------------- -------------------- --------------------

: Heavy quark pair production cross sections via DIS, WWA, and CBS mechanisms. \[TABLE2\]

The advantage of the CBS photons becomes even more obvious if one analyzes the $x(g)$ distribution of the differential cross sections for CBS, WWA and DIS. In Figure \[FIGURE2\], we show $d\sigma/dx(g)$ at the LHeC-1 for $c\bar{c}$ production. It is seen that in the small $x(g)$ region CBS provides more than one (two) order(s) of magnitude higher cross sections compared to WWA (DIS). For example, the differential cross section of $c\bar{c}$ pair production at the LHeC-1 reaches its maximum value of $94\,\mu b$ at $x(g)=1.44\times10^{-5}$ for CBS, whereas the maximum values for WWA and DIS are $4\,\mu b$ at $x(g)=1.54\times10^{-5}$ and $0.15\,\mu b$ at $x(g)=3.89\times10^{-5}$, respectively. Similar distributions for $b\bar{b}$ at the LHeC-1, $c\bar{c}$ at the LHeC-2 and $b\bar{b}$ at the LHeC-2 are shown in Figures \[FIGURE3\], \[FIGURE4\] and \[FIGURE5\], respectively.
Maximum values of differential cross sections and corresponding $x(g)$ values for DIS, WWA, and CBS at the LHeC-1 (2) are given in Table \[TABLE 3\] (\[TABLE 4\]). The advantage of CBS due to its large cross sections is obvious from the comparison.

----- ---------------- --------------------- ---------------- ---------------------
      $d\sigma/dx$     $x$                   $d\sigma/dx$     $x$
DIS   $0.15\,\mu b$    $3.89\times10^{-5}$   $0.47\, nb$      $1.99\times10^{-4}$
WWA   $4.0\,\mu b$     $1.54\times10^{-5}$   $5.02\, nb$      $1.25\times10^{-4}$
CBS   $94\,\mu b$      $1.44\times10^{-5}$   $117\, nb$       $1.23\times10^{-4}$
----- ---------------- --------------------- ---------------- ---------------------

: Maximum values of differential cross sections and corresponding $x(g)$ values for DIS, WWA, and CBS at the LHeC-1. \[TABLE 3\]

----- ---------------- --------------------- ---------------- ---------------------
      $d\sigma/dx$     $x$                   $d\sigma/dx$     $x$
DIS   $0.44\,\mu b$    $1.54\times10^{-5}$   $1.73\, nb$      $9.12\times10^{-5}$
WWA   $13.2\,\mu b$    $6.45\times10^{-6}$   $17\, nb$        $5.88\times10^{-5}$
CBS   $312\,\mu b$     $6.02\times10^{-6}$   $408\, nb$       $5.01\times10^{-5}$
----- ---------------- --------------------- ---------------- ---------------------

: Maximum values of differential cross sections and corresponding $x(g)$ values for DIS, WWA, and CBS at the LHeC-2. \[TABLE 4\]

![The x(g) reach and differential cross sections in $c\bar{{c}}$ (left) and $b\bar{{b}}$ (right) final states for the HERA and the LHeC. []{data-label="FIGURE1"}](Figure1_Left "fig:"){width="0.4\paperwidth"}![The x(g) reach and differential cross sections in $c\bar{{c}}$ (left) and $b\bar{{b}}$ (right) final states for the HERA and the LHeC.
[]{data-label="FIGURE1"}](Figure1_Right "fig:"){width="0.4\paperwidth"} ![Differential cross sections for $c\bar{{c}}$ final states produced via CBS, WWA and DIS at the LHeC-1.[]{data-label="FIGURE2"}](Figure2_Left "fig:"){width="0.4\paperwidth"}![Differential cross sections for $c\bar{{c}}$ final states produced via CBS, WWA and DIS at the LHeC-1.[]{data-label="FIGURE2"}](Figure2_Right "fig:"){width="0.4\paperwidth"} ![Differential cross sections for $b\bar{{b}}$ final states produced via CBS, WWA and DIS at the LHeC-1.[]{data-label="FIGURE3"}](Figure3_Left "fig:"){width="0.4\paperwidth"}![Differential cross sections for $b\bar{{b}}$ final states produced via CBS, WWA and DIS at the LHeC-1.[]{data-label="FIGURE3"}](Figure3_Right "fig:"){width="0.4\paperwidth"} ![Differential cross sections for $c\bar{{c}}$ final states produced via CBS, WWA and DIS at the LHeC-2.[]{data-label="FIGURE4"}](Figure4_Left "fig:"){width="0.4\paperwidth"}![Differential cross sections for $c\bar{{c}}$ final states produced via CBS, WWA and DIS at the LHeC-2.[]{data-label="FIGURE4"}](Figure4_Right "fig:"){width="0.4\paperwidth"} ![Differential cross sections for $b\bar{{b}}$ final states produced via CBS, WWA and DIS at the LHeC-2 .[]{data-label="FIGURE5"}](Figure5_Left "fig:"){width="0.4\paperwidth"}![Differential cross sections for $b\bar{{b}}$ final states produced via CBS, WWA and DIS at the LHeC-2 .[]{data-label="FIGURE5"}](Figure5_Right "fig:"){width="0.4\paperwidth"} The angular dependency of the relevant processes is important to estimate the necessary $\eta$ coverage of the detector to be built and also to estimate the eventual electron machine selection. For illustration we consider $d\sigma/d\theta$ distribution where $\theta$ is the angle between $c$ $(b)$ quark and proton beam direction. These distributions for CBS at the LHeC-1 and LHeC-2 are presented in Figures \[FIGURE6\] and \[FIGURE7\], respectively. 
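The order of magnitude of the smallest reachable $x(g)$ can be sketched from the heavy quark pair production threshold, $x_{min} \sim 4m_q^2/s_{\gamma p}$, with $s_{\gamma p} \approx 0.83\, s_{ep}$ for CBS photons. This back-of-the-envelope estimate is our own, with illustrative quark mass values, and is not part of the CompHEP calculation:

```python
def x_min_estimate(m_q_GeV, E_e_GeV, E_p_TeV, y_max=0.83):
    """Threshold estimate of the smallest probed gluon momentum
    fraction: x_min ~ 4 m_q^2 / (y_max * s_ep)."""
    s_ep = 4 * E_e_GeV * E_p_TeV * 1000  # GeV^2
    return 4 * m_q_GeV ** 2 / (y_max * s_ep)

# LHeC-1 (E_e = 60 GeV) with m_c ~ 1.5 GeV and m_b ~ 4.8 GeV:
print(x_min_estimate(1.5, 60, 7))  # ~6e-6 for c cbar
print(x_min_estimate(4.8, 60, 7))  # ~7e-5 for b bbar
```

These numbers reproduce the scale of the full-coverage reach obtained from the simulation.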
In Table \[TABLE 5\], we present the reachable $x(g)$ for different $\theta$ coverages. One can notice that even for an angular loss of about 5 degrees, there is a considerable drop in both the cross section and the $x(g)$ reach. This effect can be understood by considering the $\eta$ dependence of the heavy quark pair production cross section in $\gamma p$ collisions, which is shown in Figures \[Fig: eta\_dependency\_gp\] and \[FIGURE9\]. The solid vertical line corresponds to a detector coverage of 1 degree, the dashed line to 5 degrees, and the dot-dashed line to 10 degrees. Therefore, in order to have a good experimental reach, the tracking should have an $\eta$ coverage up to 5.

---------- --------------------- --------------------- --------------------- ---------------------
 $\theta$   $c\bar{c}$ (LHeC-1)   $b\bar{b}$ (LHeC-1)   $c\bar{c}$ (LHeC-2)   $b\bar{b}$ (LHeC-2)
 $0-180$    $7.94\times10^{-6}$   $6.91\times10^{-5}$   $3.16\times10^{-6}$   $3.02\times10^{-5}$
 $1-179$    $8.31\times10^{-6}$   $6.91\times10^{-5}$   $3.36\times10^{-6}$   $4.36\times10^{-5}$
 $5-175$    $1.44\times10^{-5}$   $7.94\times10^{-5}$   $1.20\times10^{-5}$   $4.78\times10^{-5}$
 $10-170$   $2.39\times10^{-5}$   $1.00\times10^{-4}$   $2.28\times10^{-5}$   $7.58\times10^{-5}$
---------- --------------------- --------------------- --------------------- ---------------------

: Reachable $x(g)$ for different $\theta$ coverage.
\[TABLE 5\] ![The effect of angular reach for $c\bar{c}$ (left) and $b\bar{b}$ (right) final states produced via CBS at the LHeC-1.[]{data-label="FIGURE6"}](Figure6_Left "fig:"){width="0.4\paperwidth"}![The effect of angular reach for $c\bar{c}$ (left) and $b\bar{b}$ (right) final states produced via CBS at the LHeC-1.[]{data-label="FIGURE6"}](Figure6_Right "fig:"){width="0.4\paperwidth"} ![The effect of angular reach for $c\bar{c}$ (left) and $b\bar{b}$ (right) final states produced via CBS at the LHeC-2.[]{data-label="FIGURE7"}](Figure7_Left "fig:"){width="0.4\paperwidth"}![The effect of angular reach for $c\bar{c}$ (left) and $b\bar{b}$ (right) final states produced via CBS at the LHeC-2.[]{data-label="FIGURE7"}](Figure7_Right "fig:"){width="0.4\paperwidth"} ![The $\eta$ dependency of the $c\bar{c}$ (left) and $b\bar{b}$ (right) production cross section via CBS at the LHeC-1. Vertical lines represent $1{}^{o}$ (solid line), $5{}^{o}$ (dashed line) and $10{}^{o}$ (dot-dashed line) detector cuts.[]{data-label="Fig: eta_dependency_gp"}](Figure8_Left "fig:"){width="0.4\paperwidth"}![The $\eta$ dependency of the $c\bar{c}$ (left) and $b\bar{b}$ (right) production cross section via CBS at the LHeC-1. Vertical lines represent $1{}^{o}$ (solid line), $5{}^{o}$ (dashed line) and $10{}^{o}$ (dot-dashed line) detector cuts.[]{data-label="Fig: eta_dependency_gp"}](Figure8_Right "fig:"){width="0.4\paperwidth"} ![The $\eta$ dependency of the $c\bar{c}$ (left) and $b\bar{b}$ (right) production cross section via CBS at the LHeC-2. Vertical lines are same as in Fig. 8.[]{data-label="FIGURE9"}](Figure9_Left "fig:"){width="0.4\paperwidth"}![The $\eta$ dependency of the $c\bar{c}$ (left) and $b\bar{b}$ (right) production cross section via CBS at the LHeC-2. Vertical lines are same as in Fig. 
8.[]{data-label="FIGURE9"}](Figure9_Right "fig:"){width="0.4\paperwidth"}

Conclusions
===========

Measurements of $x(g)$ down to $3\times10^{-6}$, two orders of magnitude below the HERA coverage, seem to be reachable in $\gamma p$ collisions. These collisions provide higher cross sections and a better $x(g)$ reach than $ep$ collisions with the same electron beam energy. In the low-$x(g)$ region, the enhancement factor relative to DIS $ep$ collisions at the LHeC-2 is about 700 for $c\bar{c}$ final states and about 230 for $b\bar{b}$ final states. Therefore, for final states with heavy quarks, even if the $\gamma p$ luminosity is 10 times smaller than the $ep$ luminosity (ERL option), the expected number of events in $\gamma p$ collisions would be 70 and 20 times higher than in $ep$ collisions for $c\bar{c}$ and $b\bar{b}$ final states, respectively. The enhancement factor relative to WWA $ep$ collisions is about 24 for both final states. The angular acceptance is very important for reaching the smallest $x(g)$ with either $e$ or $\gamma$ beams; therefore, a detector with a pseudorapidity coverage up to $\eta=5$ is required. Such coverage has already been achieved at the ATLAS and CMS experiments using forward detector components. Finally, the $ep$ option of the LHeC will give an opportunity to shed light on small-$x(g)$ dynamics, which is crucial for clarifying the foundations of QCD. The $\gamma p$ option of the LHeC, on the other hand, will substantially enlarge the LHeC capacity on this subject. Therefore, a pulsed linac should be considered as the baseline for the LHeC design. In this case, higher center-of-mass energies can be achieved by lengthening the electron linac, which will provide an opportunity to investigate the smaller-$x(g)$ region. The luminosity loss can be compensated by using an energy recovery linac without re-circulating arcs [@Litvinenko], which may provide luminosity values exceeding $L=10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ even with a multi-hundred-GeV electron linac.
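The enhancement factors quoted above can be cross-checked directly from the peak differential cross sections in Table \[TABLE 4\] (a quick sketch; the numbers are read off the table, and only the ratios enter):

```python
# Peak d(sigma)/dx at the LHeC-2, from Table 4
# (microbarns for c-cbar, nanobarns for b-bbar; units cancel in ratios).
peak = {
    "DIS": {"cc": 0.44, "bb": 1.73},
    "WWA": {"cc": 13.2, "bb": 17.0},
    "CBS": {"cc": 312.0, "bb": 408.0},
}

for q in ("cc", "bb"):
    print(q,
          "CBS/DIS =", round(peak["CBS"][q] / peak["DIS"][q]),
          "CBS/WWA =", round(peak["CBS"][q] / peak["WWA"][q]))
# Gives ~709 and ~236 versus DIS, and ~24 versus WWA for both final
# states, matching the factors of ~700, ~230 and ~24 quoted above.
```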
ATLAS Collaboration, G. Aad et al., *Combined search for the Standard Model Higgs boson using up to 4.9 fb$^{-1}$ of $pp$ collision data at $\sqrt{s}=7$ TeV with the ATLAS detector at the LHC*, Phys. Lett. **B 710** (2012) 49 [arXiv:1202.1408].

CMS Collaboration, S. Chatrchyan et al., *Combined results of searches for the standard model Higgs boson in $pp$ collisions at $\sqrt{s}=7$ TeV*, Phys. Lett. **B 710** (2012) 26 [arXiv:1202.1488].

S. Sultansoy, *Linac-ring type colliders: Second way to TeV scale*, Eur. Phys. J. C **33**, s01, s1064-s1066 (2004).

J. L. Abelleira Fernandez, C. Adolphsen, A. N. Akay et al., *A Large Hadron Electron Collider at CERN: Report on the Physics and Design Concepts for Machine and Detector*, J. Phys. G: Nucl. Part. Phys. **39** (2012) 075001.

J. L. Abelleira Fernandez, C. Adolphsen, P. Adzic et al., *A Large Hadron Electron Collider at CERN*, arXiv:1211.4831 [hep-ex], 20 Nov 2012.

E. Boos et al. (CompHEP Collaboration), *Automatic computations from Lagrangians to events*, Nucl. Instrum. Meth. A **534**, 250 (2004).

A. N. Akay, H. Karadeniz, and S. Sultansoy, *Review of Linac-Ring Type Collider Proposals*, Int. J. Mod. Phys. A **25**, 4589 (2010).

S. I. Alekhin et al., *Physics at $\gamma p$ colliders of TeV energies*, Int. J. Mod. Phys. A **6**, 21 (1991).

A. K. Ciftci et al., *Main parameters of TeV energy $\gamma p$ colliders*, Nucl. Instrum. Methods Phys. Res., Sect. A **365**, 317 (1995).

A. K. Ciftci, S. Sultansoy, and O. Yavas, *TESLA$\ast$HERA based $\gamma p$ and $\gamma A$ colliders*, Nucl. Instrum. Methods Phys. Res., Sect. A **472**, 72 (2001).

H. Aksakal et al., *Conversion efficiency and luminosity for $\gamma p$ colliders based on the LHC-CLIC or LHC-ILC QCD explorer scheme*, Nucl. Instrum. Methods Phys. Res., Sect. A **576**, 287 (2007).

I. F. Ginzburg et al., *Colliding $\gamma e$ and $\gamma\gamma$ beams based on the single-pass $e^{+}e^{-}$ accelerators. 2. Polarization effects. Monochromatization improvement*, Nucl. Instrum. Methods Phys. Res., Sect. A **219**, 5 (1984).

V. I. Telnov, *Principles of photon colliders*, Nucl. Instrum. Methods Phys. Res., Sect. A **355**, 3 (1995).

J. Pumplin et al., *New generation of parton distributions with uncertainties from global QCD analysis*, JHEP **0207** (2002) 012.

V. Litvinenko, *LHeC with $\sim$100% energy recovery linac*, 2nd CERN-ECFA-NuPECC workshop on LHeC, Divonne-les-Bains, 1-3 September 2009.
--- abstract: | 1. Text S1 to S6 2. Figures S1 to S5 3. Tables S1 to S3 Additional supporting information (files uploaded separately) {#additional-supporting-information-files-uploaded-separately .unnumbered} ============================================================= 1. GADGET-2 cooling routine 2. Condensate orbital evolution model 3. GRAINS code and corresponding thermodynamic data bibliography: - 'References.bib' title: 'Supplementary materials for "The origin of the Moon within a terrestrial synestia"' ---

The canonical giant impact model {#sup:sec:canonical}
================================

In this section, we support our statements in the introduction about the difficulty of forming a lunar mass satellite from a canonical Moon-forming giant impact. In previous giant impact studies [@Canup2001; @Canup2004; @Canup2008; @Canup2012; @Cuk2012; @Reufer2012; @Rufu2017], the mass of the moon formed by a given impact has been approximated based on the mass and specific AM of the material injected into orbit using scaling laws derived from $N$-body lunar accretion studies. $N$-body simulations treat all the mass in orbit as condensed particles that interact by gravity and accrete to form a moon. However, giant impact simulations have shown that the material injected into orbit in the impact is a mixture of liquid and vapor. In condensate-dominated disks, like those considered in models of the canonical case, the condensate is thought to settle to the midplane and form a liquid layer overlain by a vapor atmosphere [@Thompson1988; @Ward2012; @Ward2014; @Ward2017; @Charnoz2015]. Material in orbit beyond the Roche limit would likely condense rapidly and accrete in a manner similar to an $N$-body disk. @Salmon2012 used a hybrid code to simultaneously model a simplified one-dimensional, Roche-interior multiphase disk and an $N$-body Roche-exterior disk.
They produced scaling laws relating the mass and specific AM of the disk to the mass of the final moon and showed that lunar accretion from a multiphase disk is much less efficient than from a pure $N$-body disk. The growing Moon interacts with the multiphase disk, truncating the edge of the disk. Thus, the Moon must migrate and the disk viscously spread beyond the Roche limit before more mass can be accreted to the moon. These processes bottleneck lunar accretion. To date, a determination of the mass of the moon formed by canonical Moon-forming giant impacts using the more conservative hybrid scaling laws has not been published. We use the results of 105 published canonical Moon-forming impacts [@Canup2001; @Canup2004; @Canup2008a] to consider the likelihood of forming a lunar mass moon from such impacts. Figure \[sup:fig:can\_hist\] shows the range of satellite masses predicted using both $N$-body [@Ida1997 black] and hybrid [@Salmon2012 red] scaling laws. Both scaling laws require an assumption about the mass of material that is ejected from the system during satellite accretion, $M_{\infty}$. A good fit between accretion simulations and the scaling laws is generally achieved for $0$ $<$ $M_{\infty}$ $\le$ $0.05 M_{\rm d}$, where $M_{\rm d}$ is the initial mass of the disk. The two panels of Figure \[sup:fig:can\_hist\] show the predicted satellite mass for loss of 0 and 5% of the disk mass in A and B respectively. The number of impacts that are expected to form a greater than lunar mass moon is significantly lower when using the hybrid scaling laws. Assuming $M_{\infty}=0$, only 29% of the published impacts would be able to produce a lunar mass moon and only one impact formed a lunar mass moon using $M_{\infty}$ $=$ $0.05 M_{\rm d}$. 
Furthermore, the disks that do form greater than lunar mass moons using the hybrid scaling laws tend to have $L_{\rm d}/M_{\rm d}$ $>$ $\left( G M_{\rm Earth} a_{\rm R} \right )^{1/2}$, where $L_{\rm d}$ is the AM of the disk, $G$ is the gravitational constant, $M_{\rm Earth}$ is the mass of the Earth, and $a_{\rm R}$ is the radius of the Roche limit. The accretion behavior of such high specific AM disks is best captured by scaling laws with higher $M_{\infty}$ and lower accretion efficiencies [@Ida1997; @Kokubo2000; @Salmon2012; @Salmon2014]. Given the lower efficiency of moon formation in multiphase disks, it is uncertain whether canonical-style impacts can inject enough mass and AM into orbit to form a lunar mass moon. Recent work by @Charnoz2015 has also shown that the spreading of the inner disk is slower than assumed by @Salmon2012 and that more mass is lost to the planet, further impeding mass addition to a moon from a Roche-interior disk. @Charnoz2015 suggested that the issue of spreading mass beyond the Roche limit could be circumvented if the Moon accreted mostly from mass that was injected beyond the Roche limit in the impact. For the published studies that reported the mass injected beyond the Roche limit [@Canup2001; @Canup2004], $\sim$65% of impacts injected more than a lunar mass of material beyond the Roche limit. The accretion efficiency of this material is uncertain, as it depends on the surface density profile of the disk. The simulations of @Salmon2012 showed a wide range of accretion efficiencies (0 to 98%) for mass initially outside the Roche limit for the idealized disks they initialized. If a moon did accrete from material emplaced beyond the Roche limit, it would still have to meet the other observational (chemical and isotopic) constraints. More work is needed to ascertain whether the multiphase disks produced by canonical giant impact simulations can form lunar mass moons while satisfying the other observational constraints.
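For reference, the $N$-body scaling law of @Ida1997 referred to above is commonly quoted as $M_{\rm s}/M_{\rm d} \approx 1.9\,j_{\rm d} - 1.1 - 1.9\,M_{\infty}/M_{\rm d}$, with $j_{\rm d} = L_{\rm d}/(M_{\rm d}\sqrt{G M_{\rm Earth} a_{\rm R}})$. A minimal sketch of this law (rounded constants; this is the $N$-body law only, not the hybrid law of @Salmon2012):

```python
import math

G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
M_EARTH = 5.97e24        # mass of the Earth [kg]
A_ROCHE = 2.9 * 6.371e6  # Roche limit, ~2.9 Earth radii [m]
M_MOON = 7.35e22         # lunar mass [kg]

def satellite_mass_nbody(M_d, L_d, M_inf=0.0):
    """N-body accretion scaling law (Ida et al. 1997, as commonly quoted):
    predicted satellite mass [kg] from disk mass M_d [kg], disk angular
    momentum L_d [kg m^2 s^-1], and mass M_inf [kg] ejected during accretion."""
    j_d = L_d / (M_d * math.sqrt(G * M_EARTH * A_ROCHE))  # normalized specific AM
    return M_d * (1.9 * j_d - 1.1 - 1.9 * M_inf / M_d)

# Example: a two-lunar-mass disk whose specific AM equals that of a
# circular orbit at the Roche limit (j_d = 1), with no ejected mass:
M_d = 2.0 * M_MOON
L_d = M_d * math.sqrt(G * M_EARTH * A_ROCHE)  # gives j_d = 1
print(round(satellite_mass_nbody(M_d, L_d) / M_MOON, 3))  # -> 1.6 lunar masses
```

Increasing `M_inf` lowers the predicted satellite mass, which is why assuming 5% disk-mass loss sharply reduces the number of impacts predicted to form a lunar mass moon.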
![Hybrid scaling laws for satellite accretion from a circumterrestrial disk predict the formation of less than a lunar mass moon for most published canonical impact events. Histograms show the predicted lunar mass from different scaling laws applied to published canonical impact simulations [@Canup2001; @Canup2004; @Canup2008a]. In both panels, satellite masses calculated using the $N$-body accretion scaling law [@Ida1997] are shown in black and masses calculated using the scaling law from the hybrid disk evolution models of [@Salmon2012] are in red. Ejected masses of $M_{\infty}=0$ (A) and $M_{\infty}=0.05M_{\rm d}$ (B) were assumed, where $M_{\rm d}$ is the initial disk mass.[]{data-label="sup:fig:can_hist"}](Figures/Canonical_disk_masses_hist.pdf) Determination of the photic surface {#sup:sec:photosphere} =================================== Cooling of a synestia is controlled by radiation of energy from the effective radiating surface of the structure, where the material is optically thin. We refer to the layer from which the structure radiates as the photosphere. The thermal structure is determined in part by radiative transfer through a gas-condensate mixture. A full combined thermal and radiative calculation is beyond the scope of this paper. Here, we approximate the photospheric pressure, and hence temperature, using a simple calculation of radiative transfer through a fixed hydrostatic structure. As we show in §\[sup:sec:adiabats\], adiabats in a synestia are mostly vapor in the high-pressure midplane of the structure but condense a few tens of percent of condensate at lower pressures. The radius at which the structure becomes optically thin therefore depends both on the absorption of the vapor and condensate. 
The probability of a photon being absorbed as it traverses a distance $\mathrm{d}r$ is given by $$\mathcal{P}(r)=\alpha(r) \mathrm{d}r = \frac{\mathrm{d}r}{L_{\rm MFP}(r)} + \alpha_{\rm vap}(r) \mathrm{d}r \, ,$$ where $\alpha (r)$ is the average absorption coefficient at radius $r$, $L_{\rm MFP}$ is the mean free path of a photon traveling through a droplet suspension, and $\alpha_{\rm vap} (r)$ is the absorption coefficient of the vapor, which is also a function of $r$. The absorption of silicate vapor in the conditions relevant for post-impact states is poorly known. Here we use the semiconductor-type Drude model for the absorption of silica vapor constructed by [@Kraus2012] and constrained by them using first-principles molecular dynamics simulations. In our calculations, the droplet absorption dominates over the vapor absorption, so errors in $\alpha_{\rm vap}$ will have only a small effect on our conclusions. $L_{\rm MFP}$ for a photon passing through a cloud of spherical condensates of diameter $D_0$ is $$L_{\rm MFP} = \frac{4 D_0}{6} \frac{V_{\rm avg}}{V_{\rm cond}} \, ,$$ where $V_{\rm cond}/V_{\rm avg}$ is the volume fraction of condensate. The inner edge of the photosphere is defined as the radius at which, integrating from the outside of the structure inwards, the optical depth reaches unity, $$\int_{\infty}^{r_{\rm rad}} \! \alpha (r) \mathrm{d}r = 1 \, ,$$ where $r_{\rm rad}$ is the radius of the photic surface. We approximate the photosphere for post-impact and thermally equilibrated structures. The low-pressure regions of the structure are not resolved in SPH. To overcome this, we integrate radially outwards along the rotation axis from the lowest resolved pressure along an isentrope to find the hydrostatic profile to low pressures. For simplicity, we used the single-phase M-ANEOS forsterite EOS [@Melosh2007; @Canup2012]. We then integrate back along the same profile to find where the optical depth is unity.
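This procedure can be sketched numerically. The following is a minimal illustration using the droplet opacity only, with a made-up exponential condensate profile standing in for the M-ANEOS-based hydrostatic profile of the actual calculation:

```python
import numpy as np

def mean_free_path(D0, f_vol):
    """Photon mean free path through spherical droplets of diameter D0 [m]
    at condensate volume fraction f_vol = V_cond / V_avg."""
    return (4.0 * D0 / 6.0) / f_vol

def photosphere_radius(r, f_vol, D0):
    """Return the radius where the optical depth, integrated inward from
    the outer edge, reaches unity. r and f_vol are inside-out arrays."""
    alpha = 1.0 / mean_free_path(D0, np.maximum(f_vol, 1e-300))
    dtau = 0.5 * (alpha[1:] + alpha[:-1]) * np.diff(r)  # trapezoid rule
    # cumulative optical depth measured from the outer edge inward
    tau = np.concatenate([np.cumsum(dtau[::-1])[::-1], [0.0]])
    return float(np.interp(1.0, tau[::-1], r[::-1]))

# Toy profile (assumed, not from the paper): volume fraction decaying
# over a 5000 km scale height above r = 1e7 m, with 1 mm droplets.
r = np.linspace(1e7, 5e7, 2000)          # radius along the pole [m]
f_vol = 1e-7 * np.exp(-(r - 1e7) / 5e6)
print(photosphere_radius(r, f_vol, D0=1e-3))  # ~4.2e7 m for these numbers
```

Larger droplets or lower condensate fractions push the unit-optical-depth surface inward to higher pressures, which is the dependence explored in the next paragraph.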
The location of the photosphere depends strongly on the mass fraction of condensate. In §\[sup:sec:adiabats\] we calculate the mass fraction of condensate that is present along adiabats in bulk BSE material. However, the true mass fraction of condensate at a point in the synestia depends on how efficiently condensates separate from the vapor, which is poorly constrained. To remove this complexity, we assume a constant mass fraction of condensate, $f_{\rm cond}$, in the mixed-phase region when calculating both the hydrostatic profile and the optical depth. We now explore the range of possible photospheric pressures, and hence temperatures, by varying $f_{\rm cond}$ and the diameter of condensates. Synestias typically become optically thin at low pressures. For example, Figure \[sup:fig:photo\] shows the photospheric pressure at the poles for the synestia shown in Figure \[fig:contourstructures\]E-G calculated using different $f_{\rm cond}$ and $D_0$. For $f_{\rm cond}=0.1$, the photospheric pressure ranges from 10$^{-6}$ - 10$^{-2}$ bar, assuming condensates of diameter 10$^{-6}$ - 10$^{-2}$ m. A few times $10^{-2}$ m is the largest size of falling condensates that we consider in §\[sec:dynamics\_cooling\]. The corresponding temperatures on an adiabat starting at the dew point in the midplane (§\[sup:sec:adiabats\]) range from approximately 1900 to 2600 K. On a saturated adiabat the range is from 2200 to 2700 K. Increasing the mass fraction of condensate decreases the pressure at the photosphere. The relatively narrow range of photospheric temperatures means that variation in condensate size has a limited effect on our calculations, with a maximum change in radiative flux of a factor of two. Away from the poles, the effective gravity in the synestia can be substantially lower. A lower gravity, and the associated larger scale height, results in a lower-pressure photosphere. The effect of changing gravity is similar in magnitude to increasing the mass fraction of condensates.
The photospheric pressure, and hence temperature, that we calculate remains relatively constant as a synestia cools. For the calculations in this work, we use a photospheric temperature of 2300 K. This radiative temperature is similar to that used for studies of canonical post-impact states [e.g., @Thompson1988; @Pahlevan2007].

![The photospheric pressure is low but depends on the size of condensates. Shown is the photospheric pressure at the pole as a function of condensate diameter assuming varying mass fractions of condensate in the vapor (colored lines) calculated for the thermally equilibrated synestia shown in Figure \[fig:contourstructures\]E-G.[]{data-label="sup:fig:photo"}](New_Figures/Photosphere_paper.pdf)

Because synestias are extended structures that radiate at a high temperature, a substantial amount of power is radiated from the photosphere. The surface area of the photosphere of the initial post-impact state is on the order of 10$^{16}$ - 10$^{17}$ m$^2$, which, assuming that the structure radiates as a black body, corresponds to a radiated power of 10$^{22}$ - 10$^{23}$ W. Most of the energy radiated at the photosphere goes into condensing silicates, and the radiated power corresponds to an initial rate of condensation of about $1 M_{\rm Moon}$ yr$^{-1}$. This is a significant mass of condensate, providing a large driver for mass transport and mixing. As the structure evolves, it contracts, reducing the surface area of the photosphere and hence the rate of production of condensate. However, a significant mass of condensate is produced throughout the evolution of synestias (see green histograms in Figures \[fig:SPH\_cooling\], \[sup:fig:coolingA\], \[sup:fig:coolingB\] and \[sup:fig:coolingC\]).

Mixing in a terrestrial synestia {#sup:sec:mixing}
================================

In §\[sec:dynamics\_cooling\], we argue that vertical convection would rapidly mix vertical columns of material in a terrestrial synestia.
Here we present supporting calculations for this argument and address some of the finer points. Convection within a synestia is unlike any system that has been studied to date. The main component of a terrestrial synestia, silicate vapor, is condensible. Unlike most studies of planetary atmospheres, where condensible species make up a small mass fraction of the system, convection in a synestia is governed by the phase boundary. Furthermore, unlike most planets and astrophysical bodies, convection is driven by cooling from the top rather than heating from below. The thermal structure of post-impact states is highly stratified \[LS17\], and the densities and pressures in the structure also vary by many orders of magnitude. Hence, we are not able to make direct analogies to convection in other well studied systems. Here we describe the basic aspects of convection in a synestia and approximate the mixing timescale. Synestias rotate rapidly which has a significant effect on convection. Convection in rapidly rotating systems tends to organize into columns parallel to the rotation axis [as in giant planets, see e.g., @Vasavada2005; @Kaspi2009], as dictated by the Taylor-Proudman theorem. In rotating systems with shear similar to the disk-like regions of synestias, such as astrophysical disks, similar columns are formed but they can be transient as they are destroyed by the shear stresses in the bulk flow [see discussion in @Shariff2009]. Given the small Rossby number of the system, we expect synestias to form similar columnar flow patterns in regions of the structure where there is a strong AM gradient with radius. Such a flow pattern can significantly impede the ability of fluid convection to transport mass radially as this requires exchange of mass between columns, which is slow relative to the vertical convective velocities [@Kaspi2009]. In the outer regions of synestias, the specific AM gradient with radius can be small and radial fluid convection may be possible. 
Exploration of this possibility is left to future work. Despite the uncertainty in the convective pattern, we wish to be able to estimate the timescale for vertical mixing in a synestia. To do this, we used mixing length theory (MLT), a technique that has been used extensively in stellar astrophysics [see e.g., @Kippenhahn2012]. MLT has also been applied to magma oceans on terrestrial planets [e.g., @Solomatov2000] and previously to the canonical lunar disk [@Pahlevan2007]. MLT considers the movement of convecting parcels of material that are able to travel a mixing length (or mean free path), $\ell_{\rm m}$, before becoming indistinguishable from their surroundings. From MLT, it is possible to estimate an average convective velocity, $$v_{\rm conv} \sim \left ( \frac{F_{\rm conv} \ell_{\rm m}}{\rho H} \right )^{\frac{1}{3}} \, , \label{sup:eqn:MLT}$$ where $F_{\rm conv}$ is the convective flux, $\rho$ is the density of the fluid, and $H$ is the scale height [see e.g., @Priestley1959; @Kraichnan1962; @Stevenson1979]. In equilibrium, the convective flux and the radiative flux are equal, $F_{\rm conv} $ $\sim$ $\sigma T_{\rm rad}^4 $, and $v_{\rm conv}$ is determined by the mixing length parameter, $\Lambda_{\rm m} = \ell_{\rm m} / H $. In astrophysics, $\Lambda_{\rm m}$ is typically taken to be of order unity [e.g. @Spiegel1971; @Kippenhahn2012], but in the post-impact structure, rapid rotation could shorten the convective length scale due to the Coriolis effect. In MLT, rotation is often compensated for by simply using a smaller value for the mixing length parameter. Here we use $\Lambda_{\rm m} =0.1$. 
Alternatively, [@Stevenson1979] considered the effect of rotation on convection in the rapidly rotating limit and found that the convective velocity scales as $$v_{\rm conv} \sim 1.5 \left ( 2 \Omega \ell_{\rm m} \right )^{-\frac{1}{5}} \left (v_{\rm conv}^0 \right )^{\frac{6}{5}} \, , \label{sup:eqn:MLT_stevenson}$$ where $\Omega$ is the rotational angular velocity, and $v_{\rm conv}^0$ is the convective velocity in the absence of rotation as defined by Equation \[sup:eqn:MLT\]. The description of [@Stevenson1979] has been shown to match well the results of numerical simulations of thermal convection between fixed-temperature plates in a rotating system [@Barker2014]. A third approach to accounting for rotation was suggested by @Solomatov2000, based on experimental results. With rotation, the mixing length scales as $$\ell_{\rm m} \sim \frac{v_{\rm conv}}{\Omega} \, .$$ For an adiabatic scale height, $H = c_{p} / (\alpha_{p} g)$, the convective velocity is then $$v_{\rm conv} \sim \left ( \frac{\alpha_{p} g F_{\rm conv} }{\rho c_{p} \Omega} \right )^{\frac{1}{2}} \, ,$$ where $\alpha_{p}$ is the coefficient of thermal expansion, $g$ is the gravitational acceleration, and $c_{p}$ is the specific heat capacity at constant pressure. These three estimates of the convective velocity were used to calculate a convective mixing timescale $\tau_{\rm mix} \sim L / v_{\rm conv}$, where $L$ is the length scale over which mixing occurs. MLT has been shown to work well for purely thermal convection [e.g., @Barker2014], but its applicability to thermochemical convection has not been demonstrated. However, in the limit where the condensates do not separate from the vapor, the buoyancy forcing of the system is similar for purely thermal and purely condensation-driven convection.
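The three velocity estimates, together with $\tau_{\rm mix} \sim L/v_{\rm conv}$, are simple enough to evaluate directly. The sketch below is illustrative only: the parameter values are representative of the low-density midplane discussed later in this section, and the flux is taken as $F_{\rm conv} = \sigma T_{\rm rad}^4$ with $T_{\rm rad} = 2300$ K:

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def mixing_estimates(g, rho, alpha_p, c_p, Omega, L,
                     T_rad=2300.0, Lambda_m=0.1):
    """Convective velocities [m/s] and mixing times [s] from mixing
    length theory, with three different treatments of rotation."""
    F = SIGMA * T_rad**4         # convective flux balancing radiation
    H = c_p / (alpha_p * g)      # adiabatic scale height
    ell = Lambda_m * H           # mixing length
    v0 = (F * ell / (rho * H)) ** (1.0 / 3.0)  # non-rotating MLT
    v_stevenson = 1.5 * (2.0 * Omega * ell) ** (-0.2) * v0 ** 1.2
    v_solomatov = math.sqrt(alpha_p * g * F / (rho * c_p * Omega))
    return {name: (v, L / v) for name, v in
            [("MLT", v0), ("Stevenson", v_stevenson),
             ("Solomatov", v_solomatov)]}

# Illustrative low-density midplane values (g in m/s^2, rho in kg/m^3):
for name, (v, tau) in mixing_estimates(g=1.0, rho=10.0, alpha_p=1e-4,
                                       c_p=1e3, Omega=1e-4, L=1e7).items():
    print(f"{name:9s} v ~ {v:5.1f} m/s, mixing time ~ {tau / 86400:.1f} days")
```

For these inputs the three treatments give velocities of tens of meters per second and mixing times of days, with the @Solomatov2000 formulation the slowest, consistent with the estimates quoted later in this section.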
The change in density of a parcel of material due to thermal contraction alone is given by $$\Delta \rho_{\rm therm} = \rho_{\rm vap} \alpha_{p} \Delta T \, ,$$ where $\rho_{\rm vap}$, $\alpha_{p}$ and $\Delta T$ are the density of the vapor, the thermal expansivity at constant pressure, and the change in temperature of the parcel respectively. If we assume that the temperature change of the parcel is due to an energy change $\Delta E$, we can rewrite the change in density as $$\Delta \rho_{\rm therm} = \rho_{\rm vap} \alpha_{p} \left ( \frac{\Delta E}{m c_{p}} \right ) \, ,$$ where $m$ is the mass of the parcel. For comparison, we calculate the density change of a parcel if we instead assume that $\Delta E$ was doing work, not to cool the parcel, but to condense silicate vapor. The density of a mixed phase with mass fraction $f$ of condensates is $$\rho = \frac{\rho_{\rm vap} \rho_{\rm cond}}{f \rho_{\rm vap} + (1-f)\rho_{\rm cond}} \, ,$$ where $\rho_{\rm cond}$ is the density of the condensate. We assume that the parcel is initially entirely vapor and calculate the change in density with a change in mass fraction of condensate $\Delta f$. To first order in $\Delta f$, $$\Delta \rho_{\rm cond} \sim \Delta f \rho_{\rm vap} \left ( 1- \frac{\rho_{\rm vap}}{\rho_{\rm cond}} \right ) \, .$$ Since $\rho_{\rm vap} \ll \rho_{\rm cond}$, we approximate this as $\Delta \rho_{\rm cond} \sim \Delta f \rho_{\rm vap}$. 
$\Delta f$ is determined by the energy lost and the latent heat of vaporization, $\ell$, where $$\Delta \rho_{\rm cond} \sim \rho_{\rm vap} \left ( \frac{\Delta E}{m \ell} \right ) \, .$$ The ratio of the negative buoyancy produced by purely thermal and purely condensation-driven convection is therefore given by $$\frac{\Delta \rho_{\rm therm}}{\Delta \rho_{\rm cond}} \sim \frac{\alpha_{p} \ell }{c_{p}} \, .$$ For silicates, the latent heat of vaporization is $\sim$$10^{7}$ J kg$^{-1}$, and we assume that the vapor is an ideal gas; $\alpha_{p} \sim 10^{-4}$ K$^{-1}$ and $c_{p} \sim 10^{3}$ J K$^{-1}$ kg$^{-1}$ (see §\[sup:sec:adiabats\]). The ratio of the buoyancies for the purely thermal and purely condensate end members is then of order unity. The work that would need to be done to heat up a downwelling parcel and re-equilibrate it thermally with its surroundings would also be similar in each case. We suggest that, in the limiting case of perfect condensate-vapor coupling, the system will behave similarly to thermally driven convection, and so we can use MLT to approximate the mixing timescale in a synestia. Our estimates based on MLT do not include the effect of falling condensates transferring mass radially and only give an estimate of the mixing velocity and timescale parallel to the rotation axis. Because of the range of material properties in the post-impact structure, we calculate the mixing timescale separately for convective mixing in the low-density and high-density regions of a synestia. For the high-density regions, we use $g \sim 5$-$10$ m s$^{-2}$, $\Omega \sim 10^{-4}$ rad s$^{-1}$, $L \sim 10^7$ m, and assume the fluid has properties comparable to a silicate liquid, where $\alpha_{p} \sim 10^{-5}$ K$^{-1}$, $\rho \sim 10^3$ kg m$^{-3}$ and $c_{p} \sim 10^{3}$ J K$^{-1}$ kg$^{-1}$ [@Lange1987; @Rivers1987].
For the low-density regions of the structure, we use $g \sim 0.1$-$5$ m s$^{-2}$, $\Omega \sim 10^{-4}$ rad s$^{-1}$, $\alpha_{p} \sim 10^{-4}$ K$^{-1}$ (comparable to that of an ideal gas), $c_{p} \sim 10^{3}$ J K$^{-1}$ kg$^{-1}$, and $L \sim 10^7$ m. We considered two different densities, for the midplane ($\rho \sim 10$ kg m$^{-3}$) and the photosphere ($\rho \sim 10^{-3}$ kg m$^{-3}$). In the high-density regions, the convective velocity is about $10$ m s$^{-1}$ without rotation, and the timescales for mixing are on the order of a week. Including the effect of rotation decreases the convective velocity to a few meters per second and increases the mixing time to weeks. The low-density outer regions of a structure can mix faster. The convective velocities in the midplane are on the order of tens of meters per second, accounting for rotation. The corresponding mixing times are on the order of days. At the photosphere, the convective velocities are hundreds of meters per second, and the mixing timescale is less than a day. The three different methods for including the effect of rotation give mixing timescales within the same order of magnitude for the regime considered here, with the formulation of [@Solomatov2000] giving slower convective velocities and longer mixing times than the other two methods. Thus, we expect that vertical mixing in synestias is efficient.

Synestia cooling calculations {#sup:sec:SPH_cooling}
=============================

Here, we provide details of the implementation of the calculation summarized in §\[sec:cooling\_methods\].

Cooling method
--------------

We developed a simple model to estimate the shortest timescale possible for lunar accretion and separation from an impact-generated terrestrial synestia. We focused on the process of condensation of the silicate vapor by radiative cooling and neglected internal heating by viscous dissipation.
We assumed that the quasi-isentropic vapor region of the synestia is well mixed, with a constant specific entropy down to pressures at which the isentrope intersects the vapor dome. Then, at lower pressures, the specific entropy follows the vapor side of the vapor dome. The size and shape of a synestia were estimated by removing the condensate fraction in the disk-like region and only calculating the pressure field for the remaining vapor and high-pressure fluid. The mass and orbit of a single moon were estimated assuming perfect accretion of Roche-exterior condensates. Based on the estimated circular orbit of the growing moon, we determined the vapor pressure at that location in the synestia. We consider this a minimum vapor pressure around the moon because it does not take into account the gravitational field of the moon. First, giant impact simulations using the GADGET-2 code were run for 24 to 48 hours of simulation time, by which point most structures were nearly axisymmetric and had reached a quasi-hydrostatic equilibrium. Escaping particles were removed, and the system was truncated at a radius of $1.5\times10^8$ m for the cooling calculation. Any iron particles remaining in the disk-like region were removed. In a few cases, clumps of self-gravitating pure liquid silicate particles (small moonlets) were present in the inner disk-like region and in the process of falling into the corotating region; these clumps were also removed.

![image](New_Figures/FigureS4_thermal_equilibration.pdf)

Second, the post-impact structure was thermally equilibrated, and the pressure field was recalculated. Most post-impact structures had approximately constant-entropy particles in the inner disk-like region, as shown in Figure \[sup:fig:thermalequil\]C. In this step, the masses of the SPH particles that were a mixture of liquid and vapor were modified to remove the condensed mass fraction from the particle.
The specific entropy and density of the particle were then set to the value for vapor on the vapor dome at the same pressure (Figure \[sup:fig:thermalequil\]F). The particles retained the same specific angular momentum. Based on the unique entropy distribution for each post-impact structure, we chose a value for the specific entropy, $S_{\rm inner}$, that divided the structure between a thermally stratified inner region and a quasi-isentropic outer region. All the fully vapor particles in the quasi-isentropic region were averaged to a constant specific entropy. This modified structure was evolved in GADGET-2 to attain quasi-hydrostatic equilibrium. As the pressure field equilibrated, any particles that partially condensed had the condensed mass fraction removed and the remaining mass was set to pure vapor at the same pressure. Most of the condensate was removed in the first few time steps, and the structure attained quasi-hydrostatic equilibrium on dynamical timescales (hours). The total mass and AM vector of condensate with specific AM greater than the value for a circular orbit at the Roche radius, $j_{\rm Roche}$, were recorded. We assume that this material quickly accretes into a single moonlet that we refer to as the seed of the moon. In the high-energy post-impact structures considered here, the midplane of the Roche-interior region was fully vaporized. Condensates with specific AM below $j_{\rm Roche}$ were redistributed into the vapor structure in the same manner as described in step 3 below. The structure becomes less flared upon thermal equilibration because high-entropy particles above and below the midplane are averaged to an intermediate value (Figure \[sup:fig:thermalequil\]). Note that the overall pressure field is similar before and after thermal equilibration, and thus this step does not affect our inference of the vapor pressure around the growing moon.
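The condensate-removal step described above can be illustrated with the standard lever rule on the vapor dome. The function name and the dome-entropy inputs below are illustrative assumptions for this sketch, not quantities taken from the paper:

```python
# Sketch of removing the condensed mass fraction from a mixed-phase SPH
# particle (thermal-equilibration step above). The vapor mass fraction
# follows the standard lever rule; S_liq and S_vap are the liquid- and
# vapor-side specific entropies of the dome at the particle's pressure
# (illustrative values, not from the paper).

def strip_condensate(m, S, S_liq, S_vap):
    """Return (new_mass, new_entropy) after removing the condensate.

    m, S         : particle mass and specific entropy (mixed phase)
    S_liq, S_vap : dome entropies at the particle's pressure
    """
    w = (S - S_liq) / (S_vap - S_liq)   # lever rule: vapor mass fraction
    w = min(max(w, 0.0), 1.0)           # clamp to the two-phase region
    return m * w, S_vap                 # remaining mass is pure vapor

# A particle halfway across the dome keeps half its mass as vapor:
m_new, S_new = strip_condensate(1.0e18, 5500.0, 4000.0, 7000.0)
```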
However, in the initial unmodified structure, the concentration of condensates in the midplane reduces the pressure compared to the pressure above and below the midplane because of the local reduction in vapor pressure support. Therefore, in the direct output of SPH calculations, the midplane pressures in mixed-phase disk-like regions are biased to lower pressures. At the end of thermal equilibration, any escaping particles were removed. Third, the thermally equilibrated synestia was evolved in GADGET-2 using a simple treatment of radiative cooling. The steps included in this process are illustrated in Figure \[fig:methods\_cartoon\]. Because the time steps in an SPH code are set by the Courant criterion, a direct calculation of the cooling time is not possible. Thus, we implement an artificially large effective radiative temperature, $T_{\rm eff}$, and scale the simulation time by the factor $\left(T_{\rm eff}/T_{\rm rad}\right)^4$, where $T_{\rm rad}$ is the true photospheric temperature. The expected photospheric temperature is about 2300 K (§\[sec:thermo\]). We typically used $T_{\rm eff}=15000$ K or $20000$ K. For each full time step: 1. The structure is centered on the iron core. 2. Each silicate particle is assigned to a group: inner, isentropic, and vapor-dome. The mass-weighted average specific entropy of the isentropic group particles is calculated and assigned to all particles in the isentropic region. 3. For each radial bin $k$ with annulus area $A_{\rm k}$, the radiative energy loss is $dQ_{k}=2A_{\rm k}\sigma T_{\rm eff}^4dt$, where $\sigma$ is the Stefan-Boltzmann constant and $dt$ is the time step. The factor of 2 accounts for radiation from the top and bottom of the structure. Radiative cooling is implemented by reducing the total enthalpy of the structure under the assumption that the material in vertical columns in the cooling regions is well mixed.
For each radial bin, the specific entropy of each isentropic or vapor-dome group particle $i$ is reduced by $dS_{k}$ such that $\sum_{i}^{} m_i T_i dS_k = dQ_{k}$. 4. After reducing the enthalpy of the system, each silicate particle is re-assigned to a group. For particles that partially condense, the mass fraction of condensate is removed from the particle and the specific entropy of the remaining mass is set to that of pure vapor. The mass-weighted average specific entropy of the isentropic region is recalculated, and all particles in the isentropic group are assigned the new mean value. The initial location and AM of all condensate are recorded. For condensate with specific AM that exceeds $j_{\rm Roche}$, the mass is removed from the system and the mass and AM vector are recorded to estimate the mass and location of the growing moon. For condensate with specific AM less than $j_{\rm Roche}$, the mass is evenly distributed in radial bins between the initial location of the condensate and the radius corresponding to the circular Keplerian orbit for the specific AM. This redistribution of mass is a simple approximation used to assess the influence of falling condensates on the evolving structure. 5. The component of falling condensate that is redistributed into the Roche-interior region is added only to the isentropic group particles in each bin. For the total condensate added to each radial bin, the total enthalpy of the isentropic particles in that bin is reduced by the corresponding latent heat of vaporization. The density of the particle is then updated for the change in specific entropy at the same pressure. The mass of each particle in the isentropic group is incremented to accommodate the additional mass, increasing the total AM of the bin. The redistribution of mass from this simple cooling model typically led to a 1 to 2% error (reduction) in tracking total AM over the duration of the cooling calculation. 6.
In order to estimate the fastest cooling rate for the structure, if falling condensate is redistributed to a Roche-interior radial bin that is only occupied by particles in the vapor-dome group, the falling condensate mass is removed from the system. This simple calculation does not attempt to model the dynamics of Roche-interior condensates and the potential accretion of Roche-interior material onto a growing satellite. Typically, the cooling simulations were run until the edge of the structure in the midplane receded to the Roche radius. In most cases, the structure still contained inner, isentropic, and vapor-dome regions. In some cases, the isentropic region cooled to the value of $S_{\rm inner}$ and the simulation was stopped. The disk-like regions of thermally equilibrated synestias were vertically hydrostatic and radially expanding due to viscous spreading. To estimate the magnitude of the effect of viscous spreading, we calculated the spreading of synthetic, constant-entropy synestias without cooling, i.e., the specific entropy of the system remained constant. We found that, over the scaled duration of our cooling calculations, the effect of artificial viscosity in GADGET-2 is comparable to that due to viscosity values previously used for strong thermal turbulence (e.g., $\alpha$ of $10^{-3}$ to $10^{-4}$, as suggested by [@Pahlevan2007]). We estimate that viscous spreading contributed a small fraction of material to the Roche-exterior region (e.g., less than about $0.1M_{\rm Moon}$) during thermal equilibration. Because the outer regions of the synestia condense on a timescale faster than viscous spreading, the initial mass of Roche-exterior condensate is not substantially affected by viscous spreading from artificial viscosity. At later times, there is some variation in the estimated mass of the satellite depending on the details of viscosity in the outer regions of the synestia.
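The per-bin cooling of step 3 above, and the time rescaling by $\left(T_{\rm eff}/T_{\rm rad}\right)^4$, can be sketched as follows; the `(mass, temperature)` particle layout is an illustrative assumption:

```python
# One radiative-cooling step for a radial bin (step 3 above) and the
# time rescaling used to recover the true cooling time. The particle
# representation (list of (mass, temperature) tuples) is illustrative.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def cool_bin(particles, A_k, T_eff, dt):
    """Entropy decrement dS_k for radial bin k.

    particles : (mass, temperature) of the isentropic and vapor-dome
                particles in the bin
    A_k       : annulus area of the bin (m^2)
    Solves sum_i m_i T_i dS_k = dQ_k, with dQ_k = 2 A_k sigma T_eff^4 dt
    (the factor of 2 accounts for the top and bottom surfaces).
    """
    dQ_k = 2.0 * A_k * SIGMA * T_eff**4 * dt
    mT = sum(m * T for m, T in particles)   # sum_i m_i T_i
    return dQ_k / mT

def time_scale_factor(T_eff, T_rad):
    """Simulation time is scaled by (T_eff / T_rad)^4."""
    return (T_eff / T_rad) ** 4
```

With the artificially large $T_{\rm eff}=15000$ K and the expected photospheric temperature of about 2300 K, each simulated second corresponds to roughly $1.8\times10^3$ s of real cooling time.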
In this work, we focus on the initial growth of the moon, and this uncertainty in the late evolution of the synestia is left to future refinements of our proposed lunar origin model. Here, we have neglected some physical processes that were emphasized in previous studies of lunar accretion. We do not include viscous heating of the Roche-interior region because the initial rapid period of condensation dominates satellite accretion in this simple cooling calculation. As a result, we cannot investigate the late stages of cooling of the structure. We also neglect dynamical resonances between the synestia and the growing moon, such as Lindblad resonances. Prior studies of circumterrestrial disks [@Salmon2012; @Salmon2014] implemented Lindblad torques applicable to cool, thin, near-Keplerian disks [@Goldreich1979; @Goldreich1980]; however, the structure of the synestia is unlike any structure for which Lindblad torques have been calculated. Impact-generated synestias have thermal velocities of the same order as orbital velocities, large scale heights compared to the distance to the center of mass, and strong radial pressure support. In some cases, the primary inner Lindblad resonance is located near the boundary between the corotating and disk-like regions of the synestia, calling into question the development of an inner Lindblad density wave. We do not consider the effect of gas drag on the orbit of the growing moon, which will be investigated in future work. Finally, we neglect tides between the synestia and growing satellite because tidal migration is minimal for the short duration of these calculations. In addition, we expect the tidal quality factor of the terrestrial synestia to be very large, e.g., closer to that of present-day gas giant planets than of fully condensed bodies. Estimating the satellite mass ----------------------------- Based on our cooling calculations, we estimated the mass of materials that could accrete to form a moon.
We provide two estimates for potential satellite masses, one that incorporates only material with sufficient AM to remain beyond the Roche radius (moon A) and one that incorporates some condensates that could fall within the Roche radius (moon B). The Roche-exterior mass and total AM of condensate removed in the thermal equilibration step are assumed to quickly accrete into a seed body for the moon. This assumption is motivated by previous $N$-body calculations of efficient Roche-exterior accretion [@Ida1997; @Kokubo2000; @Salmon2012; @Salmon2014]; however, most $N$-body studies began with very compact disks. For the range of giant impact simulations considered here, the radial extent of the Roche-exterior condensates varies widely. For simplicity, we processed all the post-impact structures in the same way, and future work will revisit the accretion of a wider variety of Roche-exterior mass distributions. During the cooling calculation, the mass and AM of condensates with specific AM that exceed $j_{\rm Roche}$ are assumed to be added to the seed. Under the assumption of perfect accretion, the seed and Roche-exterior condensates form our first estimate (moon A) for the mass and orbital location of the satellite formed from a particular synestia. Moon A is shown by the black circles and black lines in Figures \[fig:SPH\_cooling\], \[fig:SPH\_cooling\_time\], \[sup:fig:coolingA\], \[sup:fig:coolingB\], and \[sup:fig:coolingC\]. The pressure support in the synestia leads to the generation of condensates that originate beyond the Roche radius but do not have specific AM exceeding $j_{\rm Roche}$. In most cases, the location of the equivalent circular Keplerian orbit of this material is slightly within the Roche radius. In our model, we assumed that this material would be deposited (either as condensate or revaporized) over the volume between its point of origin and the equivalent Keplerian orbit, and the mass was redistributed evenly between the encompassed radial bins.
The portion of the mass, and its corresponding AM, that was distributed beyond the Roche radius was included in moon B, under the assumption that falling condensate is likely to encounter and accrete to the moon. The falling condensate increased the total mass and decreased the specific AM of moon B compared to moon A. Moon B is shown by the blue diamond and blue lines in Figures \[fig:SPH\_cooling\], \[fig:SPH\_cooling\_time\], \[sup:fig:coolingA\], \[sup:fig:coolingB\], and \[sup:fig:coolingC\]. Condensation calculations {#sup:sec:cond} ========================= Here, we provide supporting information about the physicochemical calculations presented in §\[sec:thermo\] and our comparisons to lunar data (§\[sec:Moon\_comp\]). Bulk composition of the Moon {#sup:sec:BSM} ---------------------------- There is a wide range of estimates for the bulk composition of the Moon due to the difficulties of inferring a bulk composition from a limited number of surface samples and from seismic and gravity data. Different estimates of the bulk Moon (BM) are compared with the bulk silicate Earth (BSE) in Figure \[fig:lunar\_comp\] for the major and minor elements. For Earth, we chose the widely used BSE composition of [@McDonough1995]. The gray band in Figure \[fig:lunar\_comp\] shows a range of estimates of the bulk Moon composition normalized to bulk silicate Earth. Our range of estimates of the bulk Moon composition is based on estimates and discussions provided by [@Ringwood1977; @Waenke1977; @Morgan1978; @Ringwood1979; @Taylor1982; @Wanke1982; @Ringwood1987; @Warren2005; @Longhi2006; @Taylor2009; @Taylor2014], and [@Hauri2015]. All BM estimates are depleted in volatile elements (Na, K, Mn) relative to BSE, but there are substantial differences in Fe enrichment. The estimate of [@Taylor1982] reflects an early view that the Moon, in addition to volatile element depletion, is enriched in refractory elements and Fe (Mg\# of 84).
Other early estimates, such as [@Waenke1977; @Ringwood1979], and [@Ringwood1987] predict similar enrichment in Fe, but not in refractory elements. Because olivines with an Mg\# of 87.5 have been found in two old lunar rocks (troctolite 76535 and dunite 72415), it is difficult to understand how the bulk silicate Moon could have an Mg\# as low as 84. More recently, [@Longhi2006; @Warren2005], and [@Hauri2015] proposed BM compositions with Mg\#’s of 87-90, similar to Earth. Re-evaluation of lunar seismic data [@Lognonne2003; @Weber2011; @Garcia2011], and constraints from the recent GRAIL mission [@Wieczorek2013], now suggest a lunar crustal thickness of 30 to 40 km, half the thickness of Apollo-era estimates. Therefore, [@Taylor2014] no longer supports refractory element (Ca, Ti, Al) enrichment in the Moon. Compositions plotted in Figure \[fig:lunar\_comp\] are normalized to the refractory element Al because Ca, Al, Ti and other refractory elements are believed to have close to chondritic ratios in both Earth and the Moon [cf. @Taylor2014]; i.e., the enrichment factors of the gray band for the selected elements, E, are calculated as (E/Al)$_{\rm BM}$/(E/Al)$_{\rm BSE}$ ratios. The refractory elements have a very small range as they reflect the uncertainty in various estimates of their chondritic ratios. The main planet-building elements Mg and Si have larger uncertainties due to the lack of samples that directly represent the lunar mantle composition. Fe is tied to Mg through the possible range of Mg\#’s. The Fe/Mn ratio of the BSE is about 60 while that of the Moon is about 75. The moderately volatile elements K and Na have depletion factors in the BM of 5 to 10, with K being particularly well established based on K/U ratios [cf. @Taylor2014]. This is a major constraint on the model discussed in this paper. The elements Cr, Co and Ni may also provide further constraints if their bulk lunar compositions can be better established than their current uncertain values.
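The Al-normalized enrichment factors described above are simple abundance ratios; a sketch with illustrative (made-up) abundances, not the published BSE or BM values:

```python
# Enrichment factor of element E in the bulk Moon (BM) relative to the
# bulk silicate Earth (BSE), normalized to the refractory element Al:
#   f(E) = (E/Al)_BM / (E/Al)_BSE
# The abundance dictionaries below are illustrative placeholders.

def enrichment_factor(element, bm, bse):
    return (bm[element] / bm["Al"]) / (bse[element] / bse["Al"])

bse = {"Al": 2.35, "K": 0.024, "Mg": 22.8}   # wt%, illustrative
bm  = {"Al": 2.35, "K": 0.004, "Mg": 22.0}   # wt%, illustrative

# A volatile-depleted Moon shows f(K) well below 1 (depletion factor
# of ~6 here), while refractory ratios stay near 1 by construction.
print(enrichment_factor("K", bm, bse))
```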
Recent work has emphasized the observation of volatile species [e.g., water, see @Hauri2015 and references therein] in the Moon. Thus, a complete model for lunar origin must also address the origin of the most volatile components, which we do not consider in this work. Effect of varying the silicate vaporization temperature buffer {#sup:sec:lunar_si_buffer} -------------------------------------------------------------- In our model, the temperature of equilibration of moonlets is controlled by the onset of silicate vaporization. In §\[sec:buffer\], we argue that the equilibration temperature is buffered near the point where $\sim$10 wt% of the total Si is vaporized. However, it is possible that the buffer varies somewhat due to either highly efficient or highly inefficient vaporization of moonlets. Varying the amount of Si in the gas at which the moonlets equilibrate can significantly affect the composition of the resulting moon (Figure \[sup:fig:lunar\_comp\_buffer\]). However, as noted in §\[sec:Moon\_comp\], the moderately volatile element compositions of the liquid at slightly lower and higher temperatures are somewhat complementary. A better understanding of the dynamics and thermodynamics of boundary layers around moonlets will be needed to determine the exact temperature of equilibration for moonlets in a synestia and hence the composition of the moon formed. ![The composition of the moon formed within a terrestrial synestia depends on the exact position of the silicate vaporization temperature buffer. Colored lines present the composition of the condensate at 25 bar and different equilibration temperatures, corresponding to the indicated mass fraction of Si in the vapor.
The gray band presents the observed estimate and uncertainties on the bulk Moon composition.[]{data-label="sup:fig:lunar_comp_buffer"}](New_Figures/Moon_comp_varying_Si_2.pdf) Pressure constraints on lunar origin {#sup:sec:lunar_comp_press} ------------------------------------ In this work, we present a physical model for the formation of our Moon that predicts a range of vapor pressures around the growing moon. However, our condensation calculations can place an independent, empirical constraint on the pressure of lunar origin assuming the Moon formed by equilibrium condensation from BSE vapor. The lunar depletion in potassium is probably the best known of all the moderately volatile element depletions. Figure \[sup:fig:lunar\_comp\_press\] shows the composition of the condensate at different pressures where the temperature is set by requiring a depletion in potassium of a factor of 5 compared to BSE. For pressures that are too low ($<10$ bar) or too high ($>\sim50$ bar), the composition of the condensate cannot simultaneously satisfy the potassium depletion and the depletion of other moderately volatile elements. As we saw in Figure \[fig:lunar\_comp\], copper is a particularly good pressure indicator. Therefore, any lunar origin model that relies on equilibrium condensation, or equilibration between terrestrial and lunar material, requires modest vapor pressures (tens of bar) to be able to produce the observed bulk lunar composition. ![The composition of the bulk Moon can constrain the pressure of lunar origin. We used potassium depletion as a proxy for accretion temperature and calculated the composition of condensate at different pressures and temperatures set by requiring a depletion factor of five for potassium. The range of published estimates for the bulk Moon composition are shown by the gray band. The lunar composition can be matched by equilibration at tens of bar. 
Equilibration at all pressures can match the depletion of potassium, but the depletions of other moderately volatile elements, in particular copper and sodium, are not satisfied simultaneously.[]{data-label="sup:fig:lunar_comp_press"}](New_Figures/Moon_comp_fixedK_3.pdf) Calculation of adiabats {#sup:sec:adiabats} ======================= In order to determine the composition of parcels of material traveling through different pressure-temperature paths within a terrestrial synestia (§\[sec:paths\]), we calculated adiabats incorporating the effect of condensation. Entropy cannot be calculated directly from our condensation calculations, but an adiabat can be approximated by treating the vapor and condensate as two phases with a single latent heat of vaporization, $l$. This approach is commonly used in atmospheric physics to consider the effect of water vapor on moving parcels of air [@Chamberlain1987]; however, various assumptions are made that are not suitable for our purposes. Therefore, we derived the adiabats for a parcel of material based on our calculated BSE phase diagram and the assumption that the two phases are in equilibrium and there is no dynamical phase separation. The differential of the specific entropy, assuming $S = S(p,T,w)$, can be written $$TdS = T \frac{\partial S}{\partial T} \bigg |_{p,w} dT + T \frac{\partial S}{\partial p} \bigg |_{T,w} dp +T \frac{\partial S}{\partial w} \bigg |_{p,T} dw \, , \label{sup:eqn:dS}$$ where $S$ is specific entropy, $T$ is temperature, $p$ is pressure, and $w$ is the mass fraction of vapor.
The first term can be simplified using the definition for specific heat capacity at constant pressure (and $w$), $$c_{p,w} = T \frac{\partial S}{\partial T} \bigg |_{p,w} \, .$$ The pre-factor to the second term is converted using a Maxwell relation to $$T \frac{\partial S}{\partial p} \bigg |_{T,w} = - T \frac{\partial }{\partial T} \left ( \frac{1}{\rho} \right)\bigg |_{p,w} \, ,$$ which we write in terms of the coefficient of thermal expansion at constant pressure and $w$, $$T \frac{\partial S}{\partial p} \bigg |_{T,w} = - \frac{T \alpha_{p,w}}{\rho} \, .$$ The final term can be simplified to $l dw$ assuming that the amount of latent heat released during condensation is linear with the mass condensed. Combining these expressions into Equation \[sup:eqn:dS\] gives $$TdS = c_{p,w} dT - \frac{T \alpha_{p,w}}{\rho} dp +l dw \, . \label{sup:eqn:dS2}$$ To determine the adiabat in $p$-$T$ space, we set $dS=0$ and expand $dw$, $$0 = c_{p,w} dT - \frac{T \alpha_{p,w}}{\rho} dp +l \left (\frac{\partial w}{\partial T} \bigg |_{p} dT +\frac{\partial w}{\partial p} \bigg |_{T} dp \right ) \, .$$ Collecting terms and dividing through, $$dp= \left (c_{p,w} +l \frac{\partial w}{\partial T} \bigg |_{p} \right ) \left ( \frac{T \alpha_{p,w}}{\rho} - l \frac{\partial w}{\partial p} \bigg |_{T} \right )^{-1} dT \, .$$ Therefore, $$\frac{\partial p}{\partial T} \bigg |_{S}= \left (c_{p,w} +l \frac{\partial w}{\partial T} \bigg |_{p} \right ) \left ( \frac{T \alpha_{p,w}}{\rho} - l \frac{\partial w}{\partial p} \bigg |_{T} \right )^{-1} \, . \label{sup:eqn:adiabat}$$ This equation can be integrated to find an adiabat. The behavior of Equation \[sup:eqn:adiabat\] is as expected in two commonly used limits. First, if the pressure dependence of $w$ is neglected and an ideal gas is assumed (i.e.
$\alpha_{p,w}^{\rm vap}=1/T$), Equation \[sup:eqn:adiabat\] becomes $$\frac{\partial p}{\partial T} \bigg |_{S}=\rho \left (c_{p,w} +l \frac{\partial w}{\partial T} \bigg |_{p} \right ) \, , \label{sup:eqn:wet_lapse}$$ which is the expression commonly used in atmospheric physics for moist convection [e.g., @Chamberlain1987]. Secondly, in the limit of $l=0$, the equation for a standard adiabat is recovered, $$\frac{\partial p}{\partial T} \bigg |_{S}=\frac{\rho c_{p}}{T \alpha_p} \, .$$ This expression has been widely used in geophysics [e.g., @Solomatov2000]. Before we can use Equation \[sup:eqn:adiabat\] to calculate adiabats, we must make some further assumptions. The vapor is treated as a monatomic ideal gas. Therefore, the molar heat capacity is $C_{p,w}^{\rm vap}=5R/2$ and the coefficient of thermal expansion is $\alpha_{p,w}^{\rm vap}=1/T$. $R$ is the universal gas constant. The molar heat capacity of the condensate is given by the high temperature limit of the Debye model: $C_{p,w}^{\rm cond}$ $\sim$ $ C_{V,w}^{\rm cond} = 3 R$. Note that we have assumed that the constant-pressure and constant-volume heat capacities of the condensate are identical; we will discuss the validity of this assumption shortly. The coefficient of thermal expansion for the condensate is taken to be a constant $ \alpha_{p,w}^{\rm cond}= 5 \times 10^{-5}$ K$^{-1}$, consistent with the properties of silicate melts [@Lange1987; @Rivers1987]. The densities of the condensate and vapor were taken from the M-ANEOS forsterite vapor dome [@Melosh2007] at the relevant pressure. The bulk density is then given by $$\frac{1}{\rho}=\frac{w}{\rho_{\rm vap}}+\frac{1-w}{\rho_{\rm cond}} \, , \label{sup:eqn:density}$$ where $\rho_{\rm vap}$ and $\rho_{\rm cond}$ are the density of the vapor and condensate, respectively. The latent heat is also taken from the M-ANEOS forsterite vapor dome [@Melosh2007].
The latent heat is calculated as $$l= T \Delta S \label{sup:eqn:latent_heat} \, ,$$ where $\Delta S$ is the entropy difference between liquid and vapor at the required pressure. The latent heat calculated from M-ANEOS at the pressures of our calculations is similar to the value widely used in studies of the canonical lunar disk, $l = 1.7 \times 10^{7}$ J kg$^{-1}$ [e.g., @Thompson1988; @Ward2012]. In order to integrate Equation \[sup:eqn:adiabat\], we need to combine the molar heat capacities to give a specific heat capacity of the mixed phase. The specific heat capacity is defined as $$c_{p,w}=\frac{\partial U }{\partial T} \bigg |_{p,w} \, , \label{sup:eqn:comb_cp1}$$ where $U$ is the internal energy. For the bulk, $$U=n_{\rm vap} U_{\rm vap} + n_{\rm cond}U_{\rm cond} \, , \label{sup:eqn:U_expanded}$$ where $U_{\rm vap}$ and $U_{\rm cond}$ are the internal energy per mol of the vapor and condensate, respectively. $n_{\rm vap}$ and $n_{\rm cond}$ are the number of moles of the vapor and condensate in a unit mass of bulk material. Substituting Equation \[sup:eqn:U\_expanded\] in Equation \[sup:eqn:comb\_cp1\] gives $$c_{p,w}= n_{\rm vap} \frac{\partial U_{\rm vap} }{\partial T} \bigg |_{p,w}+ n_{\rm cond} \frac{\partial U_{\rm cond} }{\partial T} \bigg |_{p,w} \, , \label{sup:eqn:comb_cpw}$$ since the fraction of vapor, $w$, and therefore $n_{\rm i}$, is held constant. Simplifying, $$c_{p,w}=n_{\rm vap} C_{p,w}^{\rm vap} + n_{\rm cond} C_{p,w}^{\rm cond} \, ,$$ where $C_{p,w}^{\rm vap}$ and $C_{p,w}^{\rm cond}$ are the molar heat capacities for the vapor and condensate. The number of moles of vapor and condensate per unit mass of bulk are given by $$n_{\rm vap} = \frac{w}{\bar{m}_{\rm a}^{\rm vap}} \, \,\,\,\,\,\, \& \,\,\,\,\,\,\, n_{\rm cond} = \frac{1-w}{\bar{m}_{\rm a}^{\rm cond}} \, ,$$ where $\bar{m}_{\rm a}$ is the average atomic weight of either the vapor or condensate, which is taken from our condensation calculations.
Note the inherent assumption that the vapor and condensate are both monatomic; relaxing this assumption has minimal effect on our conclusions. In a similar fashion, we combine the coefficients of thermal expansion, $\alpha$, to find the relevant value for the bulk. The thermal expansion coefficient is defined as $$\alpha_{p,w}=\rho \frac{\partial }{\partial T} \left (\frac{1}{\rho}\right ) \bigg |_{p,w} \, . \label{sup:eqn:comb_alpha1}$$ Substituting in Equation \[sup:eqn:density\] and noting that the differential is at constant $w$, $$\alpha_{p,w}=\rho \left \{ \frac{w \alpha_{p,w}^{\rm vap} }{\rho_{\rm vap}}+\frac{(1-w) \alpha_{p,w}^{\rm cond}}{\rho_{\rm cond}} \right \} \, . $$ We now return to consider the effect of assuming that $C_{p} $ $\sim$ $ C_{V}$ for the condensate. The general expression relating the specific heat capacities at constant volume and constant pressure is $$c_{p} = c_{V} + \frac{T \alpha_{p}^2}{\rho \beta_{T}} \, ,$$ where $\beta_{T}$ is the isothermal compressibility. Given that $T $ $\sim$ $ 10^3$ K, $\alpha_{p} $ $\sim$ $ 10^{-5}$ K$^{-1}$ [@Lange1987; @Rivers1987], $\rho $ $\sim$ $ 10^3$ kg m$^{-3}$ and $\beta_{T} $ $\sim$ $ 10^{-11}$ Pa$^{-1}$ [@Lange1987; @Rivers1987], the difference between the specific heat capacities is of order 10 J K$^{-1}$ kg$^{-1}$. The specific heat capacities for typical condensates in our calculations are $\sim$ $ 10^3$ J K$^{-1}$ kg$^{-1}$; thus, the assumption of $C_{p,w}^{\rm cond}$ $\sim$ $ C_{V,w}^{\rm cond} $ has only a small effect on our results. Using this formulation, we calculated adiabats for material of BSE composition. For material that is initially entirely vapor, adiabats cool rapidly with decreasing pressure until they encounter the dew point of the BSE system. Adiabats then stay close to the dew point as pressure continues to decrease, with a slowly increasing condensed mass fraction.
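The mixed-phase property combinations above, and the resulting adiabat slope $\partial p/\partial T |_S$, can be assembled into a short numerical sketch. The input values in the check below are illustrative, and $w(p,T)$ and its derivatives would in practice come from the tabulated phase diagram:

```python
# Sketch of the mixed-phase bulk properties and the adiabat slope
# dp/dT at constant S derived above. Monatomic ideal-gas vapor
# (C_p = 5R/2, alpha = 1/T) and Dulong-Petit condensate (C_p ~ 3R),
# as in the text; all example inputs are illustrative.
R = 8.314462618  # J mol^-1 K^-1, universal gas constant

def mixed_properties(w, T, rho_vap, rho_cond, m_vap, m_cond,
                     alpha_cond=5e-5):
    """Return bulk (c_pw, alpha_pw, rho) for vapor mass fraction w.

    m_vap, m_cond : average atomic weights (kg/mol) of vapor/condensate
    """
    n_vap, n_cond = w / m_vap, (1.0 - w) / m_cond   # mol per kg of bulk
    c_pw = n_vap * 2.5 * R + n_cond * 3.0 * R
    rho = 1.0 / (w / rho_vap + (1.0 - w) / rho_cond)
    alpha_pw = rho * (w * (1.0 / T) / rho_vap
                      + (1.0 - w) * alpha_cond / rho_cond)
    return c_pw, alpha_pw, rho

def adiabat_slope(T, l, dw_dT, dw_dp, c_pw, alpha_pw, rho):
    """dp/dT at constant S; dw_dT and dw_dp come from the phase diagram."""
    return (c_pw + l * dw_dT) / (T * alpha_pw / rho - l * dw_dp)
```

In the pure-vapor, $l=0$ limit this reduces to the standard adiabat $\partial p/\partial T |_S = \rho c_p / (T\alpha_p) = \rho c_p$ for an ideal gas, as in the limit check above.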
For adiabats that begin close to the dew point at the pressures in the midplane of our example Moon-forming structures (tens of bar), the fraction of condensate is small, but at lower pressures (tenths of a bar to a bar) the condensed fraction can be on the order of tens of percent. Similarly, adiabats that begin within the stability field of condensate follow lines of approximately constant condensed fraction through the phase diagram. Adiabats that begin with a large condensed fraction tend to have less change in the mass fraction of condensate with pressure. The adiabats of chemically and thermally isolated condensate over the pressure range in the disk-like regions of post-impact structures are isothermal, as previously pointed out by @Machida2004. Supplementary References {#supplementary-references .unnumbered} ========================
--- abstract: | When studying safety properties of (formal) protocol models, it is customary to view the scheduler as an adversary: an entity trying to falsify the safety property. We show that in the context of security protocols, and in particular of anonymizing protocols, this gives the adversary too much power; for instance, the contents of encrypted messages and internal computations by the parties should be considered invisible to the adversary. We restrict the class of schedulers to a class of admissible schedulers that better models adversarial behaviour. These admissible schedulers base their decisions solely on the past behaviour of the system that is visible to the adversary. Using this, we propose a definition of anonymity: for all admissible schedulers, the identity of the users and the observations of the adversary are independent stochastic variables. We also develop a proof technique for typical cases that can be used to prove anonymity: a system is anonymous if it is possible to ‘exchange’ the behaviour of two users without the adversary ‘noticing’. author: - 'Flavio D. Garcia' - Peter van Rossum - Ana Sokolova title: Probabilistic Anonymity and Admissible Schedulers --- Introduction ============ Systems that include probabilities and nondeterminism are very convenient for modelling probabilistic (security) protocols. Nondeterminism is a highly desirable feature for modelling implementation freedom, actions of the environment, or incomplete knowledge about the system. It is often useful to analyze probabilistic properties of such systems, for example “in 30% of the cases sending a message is followed by receiving a message” or “the system terminates successfully with probability at least 0.9”. Probabilistic anonymity [@bp_2005_probabilistic] is also such a property. In order to be able to consider such probabilistic properties, we must first eliminate the nondeterminism present in the models.
This is usually done by entities called schedulers or adversaries. It is common in the analysis of probabilistic systems to say that a model with nondeterminism and probability satisfies a probabilistic property if and only if it satisfies it no matter how the nondeterminism is resolved, i.e., for *all possible schedulers*. On the other hand, in security protocols, adversaries or schedulers are malicious entities that try to break the security of the protocol. Therefore, allowing just any scheduler is inadmissible. We show that Chaum’s well-known Dining Cryptographers (DC) protocol [@cha_1988_dining] is not anonymous if we allow for all possible schedulers. Since the protocol is well known to be anonymous, this shows that the general approach to analyzing probabilistic systems does not directly fit the treatment of probabilistic security properties, in particular probabilistic anonymity. We propose a solution based on restricting the class of all schedulers to a smaller class of *admissible schedulers*. Then we say that a probabilistic security property holds for a given model if the property holds after resolving the nondeterminism under *all admissible schedulers*. Probabilistic Automata ====================== In this section we gather preliminary notions and results related to probabilistic automata [@SL94:concur; @Seg95:thesis]. Some of the formulations we borrow from [@Sok05:thesis] and [@Che06:thesis]. We shall model protocols with probabilistic automata. We start with a definition of a probability distribution. \[PrDisDef\] A function $\mu \colon S \to [0,1]$ is a discrete probability distribution, or distribution for short, on a set $S$ if $\sum_{x \in S} \mu(x) = 1$. The set $\{x \in S|\ \mu(x) \gr 0\}$ is the support of $\mu$ and is denoted by $\operatorname{supp}(\mu)$. By $\mathcal{D}(S)$ we denote the set of all discrete probability distributions on the set $S$.
We use the simple probabilistic automata [@SL94:concur; @Seg95:thesis], or MDPs [@Bellman_1957_markov], as models of our probabilistic processes. These models are similar to labelled transition systems, the only difference being that the target of each transition is a distribution over states instead of just a single next state. \[ProbAutDef\] A probabilistic automaton is a triple ${\mbox{$\mathcal A$}}= \langle S , A , \alpha \rangle$ where: - $S$ is a set of states. - $A$ is a set of actions or action labels. - $\alpha$ is a transition function $\alpha: S \to {\mathcal{P}}(A \times {{\operatorname{{\mathcal{D}}}}}S)$. A terminating state of $\mathcal A$ is a state with no outgoing transition, i.e. with $\alpha(s) = \emptyset$. We may sometimes also specify an initial state $s_0 \in S$ of a probabilistic automaton ${\mbox{$\mathcal A$}}$. We write $s \stackrel{a}{\to} \mu$ for $(a,\mu) \in \alpha(s), \ s\in S$. Moreover, we write $s \stackrel{a,\mu}\leadsto t$ for $s, t \in S$ whenever $s \stackrel{a}{\to} \mu$ and $\mu(t) \gr 0$. We will also need the notion of a fully probabilistic automaton. \[FProbAutDef\] A fully probabilistic automaton is a triple ${\mbox{$\mathcal A$}}= \langle S , A , \alpha \rangle$ where: - $S$ is a set of states. - $A$ is a set of actions or action labels. - $\alpha$ is a transition function $\alpha: S \to {{\operatorname{{\mathcal{D}}}}}(A \times S) + 1$. Here $1 = \{*\}$ denotes termination, i.e., if $\alpha(s) = *$ then $s$ is a terminating state. It can also be understood as the zero-distribution, i.e. $\alpha(s)(a,t) = 0$ for all $a \in A$ and $t \in S$. By $s_0 \in S$ we sometimes denote an initial state of ${\mbox{$\mathcal A$}}$. We write $s {\to} \mu$ for $\mu = \alpha(s), \ s\in S$. Moreover, we write $s \stackrel{a}\leadsto t$ for $s, t \in S$ whenever $s {\to} \mu$ and $\mu(a,t) \gr 0$. 
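To make the first definition concrete, here is an illustrative Python encoding (the automaton and all names are invented for the example): the transition function maps each state to its set of outgoing transitions $(a,\mu)$, a terminating state to the empty set, and the derived relation $s \stackrel{a,\mu}\leadsto t$ reads off the targets with positive probability:

```python
from fractions import Fraction

# alpha : S -> P(A x D(S)); each mu is a dict from states to probabilities.
alpha = {
    's0': [('a', {'s1': Fraction(1, 2), 's2': Fraction(1, 2)}),
           ('b', {'s2': Fraction(1)})],   # nondeterministic choice in s0
    's1': [('c', {'s2': Fraction(1)})],
    's2': [],                             # terminating: alpha(s2) is empty
}

def is_terminating(s):
    return alpha[s] == []

def steps(s):
    """All pairs (a, t) with s --a--> mu and mu(t) > 0."""
    return {(a, t) for (a, mu) in alpha[s] for t, p in mu.items() if p > 0}
```

The state `s0` has two outgoing transitions, so the choice between them is nondeterministic; this is exactly what a fully probabilistic automaton cannot express.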
A major difference between the (simple) probabilistic automata and the fully probabilistic ones is that the former can express nondeterminism. In order to reason about probabilistic properties of a model with nondeterminism we first resolve the nondeterminism with the help of schedulers or adversaries – this leaves us with a fully probabilistic model whose probabilistic behaviour we can analyze. We explain this in the sequel.\ \[PathsDef\] A path of a *probabilistic automaton* $\mathcal A$ is a sequence $$\pi = s_0 \stackrel{a_1,\mu_1}{\to} s_1 \stackrel{a_2,\mu_2}{\to} s_2 \dots$$ where $s_i \in S$, $a_i \in A$ and $s_i \stackrel{a_{i+1},\mu_{i+1}}{\leadsto} s_{i+1}$. A path of a *fully probabilistic automaton* $\mathcal A$ is a sequence $$\pi = s_0 \stackrel{a_1}{\to} s_1 \stackrel{a_2}{\to} s_2 \dots$$ where again $s_i \in S$, $a_i \in A$ and $s_i \stackrel{a_{i+1}}{\leadsto} s_{i+1}$. A path can be finite, in which case it ends in a state. A path is complete if it is either infinite or finite ending in a terminating state. We let $\operatorname{last}(\pi)$ denote the last state of a finite path $\pi$, and for an arbitrary path $\pi$, $\operatorname{first}(\pi)$ denotes its first state. The trace of a path is the sequence of actions in $A^{*} \cup A^{\infty}$ obtained by removing the states (and the distributions); hence, for the paths above, $\operatorname{trace}(\pi) = a_1a_2\ldots$. The length of a finite path $\pi$, denoted by $|\pi|$, is the number of actions in its trace. Let ${{\operatorname{{Paths}}}}(\mathcal A)$ denote the set of all paths, ${{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A)$ the set of all finite paths, and ${{\operatorname{{CPaths}}}}(\mathcal A)$ the set of all complete paths of an automaton $\mathcal A$. Paths are ordered by the prefix relation, which we denote by $\leq$. 
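For illustration only, a finite path can be stored as an alternating list of states and actions; trace, length and the prefix order then come out as follows (a sketch under our own encoding, not part of the formal development):

```python
def trace(path):
    """The actions of an alternating path [s0, a1, s1, a2, s2, ...]."""
    return tuple(path[1::2])

def length(path):
    """|pi|: the number of actions in the trace."""
    return len(path) // 2

def is_prefix(pi1, pi2):
    """The prefix order <= on paths."""
    return pi2[:len(pi1)] == pi1
```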
Let $\mathcal A$ be a (fully) probabilistic automaton and let $\pi_i$ for $i \geq 0$ be finite paths of $\mathcal A$ all starting in the same initial state $s_0$ and such that $\pi_i \leq \pi_j$ for $i \leq j$ and $|\pi_i| = i$, for all $i \geq 0$. Then by $\pi = \lim_{i \to \infty}{\pi_i}$ we denote the infinite complete path with the property that $\pi_i \leq \pi$ for all $i\geq 0$. \[ConeDef\] Let $\mathcal A$ be a (fully) probabilistic automaton and let $\pi \in {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A)$ be given. The cone generated by $\pi$ is the set of paths $$C_\pi = \{ \pi'\in {{\operatorname{{CPaths}}}}(\mathcal A) \mid \pi \leq \pi'\}.$$ From now on we fix an initial state. Given a fully probabilistic automaton $\mathcal A$ with an initial state $s_0$, we can calculate the probability-value denoted by ${{\operatorname{\mathbf{P}}}}(\pi)$ of any finite path $\pi$ starting in $s_0$ as follows. $$\begin{aligned} {{\operatorname{\mathbf{P}}}}(s_0) & = & 1\\ {{\operatorname{\mathbf{P}}}}(\pi \stackrel{a}{\to} s) & = & {{\operatorname{\mathbf{P}}}}(\pi)\cdot \mu(a,s) \quad \text{~where~} \operatorname{last}(\pi) \to \mu\end{aligned}$$ Let $\Omega_{\mathcal A} = {{\operatorname{{CPaths}}}}(\mathcal A)$ be the sample space, and let $\mathcal F_{\mathcal A}$ be the smallest $\sigma$-algebra generated by the cones. The following proposition (see [@Seg95:thesis; @Sok05:thesis]) states that ${{\operatorname{\mathbf{P}}}}$ induces a unique probability measure on $\mathcal F_{\mathcal A}$. \[PMeasProp\] Let $\mathcal A$ be a fully probabilistic automaton and let ${{\operatorname{\mathbf{P}}}}$ denote the probability-value on paths. There exists a unique probability measure on $\mathcal F_{\mathcal A}$ also denoted by ${{\operatorname{\mathbf{P}}}}$ such that ${{\operatorname{\mathbf{P}}}}(C_\pi) = {{\operatorname{\mathbf{P}}}}(\pi)$ for every finite path $\pi$. 
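The recursion defining ${{\operatorname{\mathbf{P}}}}$ can be transcribed directly into code. The following Python sketch (with an invented three-state fully probabilistic automaton; paths are encoded as alternating state/action lists) computes the probability-value of a finite path:

```python
from fractions import Fraction

# A fully probabilistic automaton: state -> {(action, target): prob},
# with None playing the role of '*' (termination).
alpha = {
    's0': {('a', 's1'): Fraction(1, 2), ('b', 's2'): Fraction(1, 2)},
    's1': {('c', 's2'): Fraction(1)},
    's2': None,
}

def path_probability(path):
    """P(s0) = 1 and P(pi --a--> s) = P(pi) * mu(a, s), as in the recursion above."""
    prob = Fraction(1)
    for i in range(0, len(path) - 2, 2):
        state, action, nxt = path[i], path[i + 1], path[i + 2]
        mu = alpha[state] or {}
        prob *= mu.get((action, nxt), Fraction(0))
    return prob
```

By the proposition above, this value is also the measure of the cone generated by the path.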
This way we are able to measure the probability of certain events described by sets of paths in an automaton with no nondeterminism. Since our models include nondeterminism, we will first resolve it by means of schedulers or adversaries. Before we define schedulers, note that we can describe the set of all sub-probability distributions on a set $S$ by ${{\operatorname{{\mathcal{D}}}}}(S +1)$. These are functions whose sum of values on $S$ is not necessarily equal to 1, but is bounded by 1. A scheduler for a probabilistic automaton $\mathcal A$ is a function $$\xi \colon {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A) \to {{\operatorname{{\mathcal{D}}}}}(A \times {{\operatorname{{\mathcal{D}}}}}(S) + 1)$$ satisfying $\xi(\pi)(a, \mu) \gr 0$ implies $\operatorname{last}(\pi) \stackrel{a}{\to} \mu$, for each finite path $\pi$. By ${{\operatorname{{Sched}}}}(\mathcal A)$ we denote the set of all schedulers for $\mathcal A$. Hence, a scheduler according to the previous definition imposes a probability distribution on the possible nondeterministic transitions after each finite path; it is therefore randomized. It is history dependent since it takes into account the path (history) and not only the current state. It is partial since it gives a sub-probability distribution, i.e., it may halt the execution at any time. A probabilistic automaton $\mathcal A = \langle S, A, \alpha\rangle$ together with a scheduler $\xi$ determine a fully probabilistic automaton $$\mathcal A_\xi = \langle {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A), A, \alpha_\xi \rangle.$$ Its set of states are the finite paths of $\mathcal A$, its initial state is the initial state of $\mathcal A$ (seen as a path of length 0), its actions are the same as those of $\mathcal A$, and its transition function $\alpha_\xi$ is defined as follows. 
For any $\pi \in {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A)$, we have $\alpha_\xi(\pi) \in {{\operatorname{{\mathcal{D}}}}}(A\times {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A)) + 1$ as $$\alpha_\xi(\pi)(a,\pi') = \left\{\begin{array}{ll} \xi(\pi)(a, \mu)\cdot \mu(s) & \,\,\, \pi' = \pi \stackrel{a,\mu}{\to} s\\ 0 & \,\,\, \text{otherwise} \end{array}\right.$$ Given a probabilistic automaton $\mathcal A$ and a scheduler $\xi$, we denote by ${{\operatorname{\mathbf{P}}}}_\xi$ the probability measure on sets of complete paths of the fully probabilistic automaton $\mathcal A_\xi$, as in Proposition \[PMeasProp\]. The corresponding $\sigma$-algebra generated by cones of finite paths of $\mathcal A_\xi$ we denote by $\Omega_\xi$. The elements of $\Omega_\xi$ are measurable sets. By $\Omega$ we denote the $\sigma$-algebra generated by cones of finite paths of $\mathcal A$ (without fixing the scheduler!) and also call its elements measurable sets, without having a measure in mind. Actually, we will now show that any scheduler $\xi \in {{\operatorname{{Sched}}}}(\mathcal A)$ induces a measure ${{\operatorname{\mathbf{P}}}}^\xi$ on a certain $\sigma$-algebra $\Omega^\xi$ of paths in $\mathcal A$ such that $\Omega \subseteq \Omega^\xi$. Hence, any element of $\Omega$ can be measured by any of these measures ${{\operatorname{\mathbf{P}}}}^\xi$. We proceed with the details. Define a function $f: {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A_\xi) \to {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A)$ by $$\label{PathsFuncEq} f(\hat{\pi}) = \operatorname{last}(\hat{\pi})$$ for any $\hat{\pi} \in {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A_\xi)$. The function $f$ is well-defined since the states of $\mathcal A_\xi$ are the finite paths of $\mathcal A$. Moreover, we have the following property. 
\[PrefFLem\] For any $\hat\pi_1, \hat\pi_2 \in {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A_\xi)$ we have $$\hat\pi_1 \leq \hat\pi_2\quad \iff \quad f(\hat\pi_1) \leq f(\hat\pi_2)$$ where the order on the left is the prefix order in ${{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A_\xi)$ and on the right the prefix order in ${{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A)$. By the definition of $\mathcal A_\xi$ we have that for $\pi, \pi' \in {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A)$, i.e. states of $\mathcal A_\xi$: $\pi \stackrel{a}{\leadsto} \pi'$ if and only if $\pi' = \pi \stackrel{a,\mu}{\to} s$ for some $\mu$ and $s$ such that $\xi(\pi)(a,\mu) \gr 0$ and $\operatorname{last}(\pi) \stackrel{a,\mu}{\leadsto} s$ in $\mathcal A$. In other words, if $\pi \stackrel{a}{\leadsto} \pi'$, then $\pi \leq \pi'$ and $|\pi'| = |\pi| + 1$, i.e. $\pi'$ extends $\pi$ in one step. Therefore, if we have a path $\pi_0 \stackrel{a_0}{\to} \pi_1 \stackrel{a_1}{\to} \pi_2 \stackrel{a_2}{\to}\cdots$ in $\mathcal A_\xi$, then for all its states: if $i \leq j$, then $\pi_i \leq \pi_j$ and $|\pi_j| = |\pi_i| + (j-i)$. So if $\hat\pi_1, \hat\pi_2 \in {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A_\xi)$ are such that $\hat\pi_1 \leq \hat\pi_2$, then $\operatorname{last}(\hat\pi_1)$ is a state in $\hat\pi_2$ and therefore we immediately get $\operatorname{last}(\hat\pi_1)\leq \operatorname{last}(\hat\pi_2)$. For the opposite implication, again from the definition we notice that if a path $\hat\pi \in {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A_\xi)$ contains a state $\pi \in {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A)$, then it also contains all prefixes of $\pi$ as states. Hence, if $\operatorname{last}(\hat\pi_1) \leq \operatorname{last}(\hat\pi_2)$ for $\hat{\pi}_1, \hat{\pi}_2 \in {{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A_\xi)$, then $\operatorname{last}(\hat\pi_1)$ is a state in $\hat\pi_2$ and so are all its prefixes. 
Since all paths start in the initial state (seen as a path), this implies that $\hat\pi_1 \leq \hat\pi_2$. \[InjFCor\] The function $f$ defined by (\[PathsFuncEq\]) is injective. By Lemma \[PrefFLem\] we can extend the function $f$ to $\hat f: {{\operatorname{{CPaths}}}}(\mathcal A_\xi) \to {{\operatorname{{CPaths}}}}(\mathcal A)$ by $$\hat f(\hat\pi) = \begin{cases}f(\hat\pi) & \hat\pi \text{~is~finite}\\ \lim_{i \to \infty} f(\hat\pi_i) & \hat\pi_i \leq \hat\pi,\, |\hat\pi_i| = i\end{cases}$$ The properties from Lemma \[PrefFLem\] and Corollary \[InjFCor\] continue to hold for the extended function $\hat f$ as well. We will write $f$ for $\hat f$ as well.\ Recall that $\Omega_\xi$ denotes the $\sigma$-algebra on which the measure ${{\operatorname{\mathbf{P}}}}_\xi$ is defined. We now define a family $\Omega^{\xi}$ of subsets of ${{\operatorname{{CPaths}}}}({\mbox{$\mathcal A$}})$ by $$\label{OmegaXiEq} \Omega^\xi = \{ \Pi \subseteq {{\operatorname{{CPaths}}}}({\mbox{$\mathcal A$}}) \mid f^{-1}(\Pi) \in \Omega_\xi\}.$$ The following properties are instances of standard measure-theoretic results. \[MeasurePropLem\] The family $\Omega^\xi$ is a $\sigma$-algebra on ${{\operatorname{{CPaths}}}}({\mbox{$\mathcal A$}})$ and by $${{\operatorname{\mathbf{P}}}}^\xi (\Pi) = {{\operatorname{\mathbf{P}}}}_\xi(f^{-1}(\Pi))$$ for $\Pi \in \Omega^\xi$ a measure on $\Omega^\xi$ is defined. Recall that $\Omega$ denotes the $\sigma$-algebra on complete paths of ${\mbox{$\mathcal A$}}$ generated by the cones. We show that for any scheduler $\xi$, $\Omega \subseteq \Omega^\xi$. Hence, the measurable sets (elements of $\Omega$) are indeed measurable by the measure induced by any scheduler. For any scheduler $\xi$, $\Omega \subseteq \Omega^\xi$. Fix a scheduler $\xi$. Since $\Omega$ is generated by the cones it is enough to show that each cone is in $\Omega^\xi$. Let $C_{\pi_0, \mathcal A}$ be the cone in ${{\operatorname{{CPaths}}}}({\mbox{$\mathcal A$}})$ generated by the finite path $\pi_0$, i.e. 
$$C_{\pi_0,\mathcal A} = \{\pi \in {{\operatorname{{CPaths}}}}({\mbox{$\mathcal A$}}) \mid \pi_0 \leq \pi\}.$$ We have $$\hat f^{-1}(C_{\pi_0,\mathcal A}) = \begin{cases}\emptyset & \pi_0 \not\in f({{\operatorname{{Paths}^{\leq\omega}}}}(\mathcal A_\xi))\\ C_{\hat\pi_0, \mathcal A_\xi} & \hat f(\hat\pi_0) = \pi_0\end{cases}.$$ by Lemma \[PrefFLem\]. Indeed, let $\pi_0 = \hat f(\hat\pi_0)$. Then $$\begin{aligned} \hat f^{-1}(C_{\pi_0,\mathcal A}) & = & \{ \hat\pi \in {{\operatorname{{CPaths}}}}(\mathcal A_\xi) \mid \hat f(\hat\pi) \geq \pi_0\}\\ & = & \{ \hat\pi \in {{\operatorname{{CPaths}}}}(\mathcal A_\xi) \mid \hat f(\hat\pi) \geq \hat f(\hat\pi_0)\}\\ (\text{Lem.}~\ref{PrefFLem}) & = & \{ \hat\pi \in {{\operatorname{{CPaths}}}}(\mathcal A_\xi) \mid \hat\pi \geq \hat\pi_0\}\\ & = & C_{\hat\pi_0, \mathcal A_\xi}\end{aligned}$$ We next define two operations on probabilistic automata used for building composed models out of basic ones: parallel composition and restriction. We compose probabilistic automata in parallel in the style of the process algebra ACP, that is, asynchronously, with a communication function given by a semigroup operation on the set of actions. This is the most general way of composing probabilistic automata in parallel (for an overview see [@SV04:voss]). \[ParCompDef\] We fix an action set $A$ and a communication function $\cdot$ on $A$ which is a partial commutative semigroup operation. Given two probabilistic automata $\mathcal A_1 = \langle S_1, A, \alpha_1 \rangle$ and $\mathcal A_2 = \langle S_2, A, \alpha_2 \rangle$ with actions $A$, their parallel composition is the probabilistic automaton $\mathcal A_1 \parallel \mathcal A_2 = \langle S_1 \times S_2, A, \alpha\rangle$ whose states are pairs of states of the original automata, denoted by $s_1 \parallel s_2$, whose actions are the same, and whose transition function is defined as follows. 
$s_1\parallel s_2 \stackrel{a}{\to} \mu$ if and only if one of the following holds - $s_1 \stackrel{b}{\to} \mu_1$ and $s_2 \stackrel{c}{\to} \mu_2$ for some actions $b$ and $c$ such that $a = b\cdot c$ and\ $\mu = \mu_1 \cdot \mu_2$ meaning $\mu(t_1 \parallel t_2) = \mu_1(t_1)\cdot \mu_2(t_2)$. - $s_1 \stackrel{a}{\to} \mu'$ where $\mu'(t_1) = \mu(t_1\parallel s_2)$ for all states $t_1$ of the first automaton. - $s_2 \stackrel{a}{\to} \mu'$ where $\mu'(t_2) = \mu(s_1\parallel t_2)$ for all states $t_2$ of the second automaton. Here, 1. represents a synchronous joint move of both automata, and 2. and 3. represent the possibilities of an asynchronous move of each of the automata. In case $s^0_1$ and $s^0_2$ are the initial states of $\mathcal A_1$ and $\mathcal A_2$, respectively, the initial state of $\mathcal A_1 \parallel \mathcal A_2$ is $s^0_1 \parallel s^0_2$. Often we will use input and output actions like $a?$ and $a!$, respectively, in the style of CCS. In such cases we assume that the communication is defined as hand-shaking $a? \cdot a! = \tau_a$ for $\tau_a$ a special invisible action.\ The operation of restriction is needed to prune out branches of a probabilistic automaton that one need not consider. For example, we will commonly use restriction to get rid of parts of a probabilistic automaton that still wait for synchronization. \[HidingDef\] Fix a subset $I \subseteq A$ of actions to be restricted. Given an automaton $\mathcal A = \langle S, A, \alpha\rangle$, the automaton obtained from $\mathcal A$ by restricting the actions in $I$ is $\mathcal R_I(\mathcal A) = \langle S, A \setminus I, \alpha'\rangle$ where the transitions of $\alpha'$ are defined as follows: $s \stackrel{a}{\to} \mu$ in $\mathcal R_I(\mathcal A)$ if and only if $s \stackrel{a}{\to} \mu$ in $\mathcal A$ and $a \not\in I$. We now define bisimilarity, a behavioural equivalence on the states of a probabilistic automaton. 
For that we first need the notion of relation lifting. Let $R$ be an equivalence relation on the set of states $S$ of a probabilistic automaton ${\mbox{$\mathcal A$}}$. Then $R$ lifts to a relation $\equiv_R$ on the set ${{\operatorname{{\mathcal{D}}}}}(S)$, as follows: $$\mu \equiv_R \nu \iff \sum_{s \in C} \mu(s) = \sum_{s\in C} \nu(s)$$ for any equivalence class $C \in S/R$. Let $\cal A = \langle S, A, \alpha\rangle$ be a probabilistic automaton. An equivalence $R$ on its set of states $S$ is a bisimulation if and only if whenever $\langle s, t \rangle \in R$ we have\ if $s \stackrel{a}{\to} \mu_s$, then there exists $\mu_t$ such that $t \stackrel{a}{\to} \mu_t$ and $\mu_s \equiv_R \mu_t$.\ Two states $s, t \in S$ are bisimilar, notation $s \sim t$, if they are related by some bisimulation relation $R$. Note that bisimilarity $\sim$ is the largest bisimulation on a given probabilistic automaton ${\mbox{$\mathcal A$}}$. Anonymizing Protocols ===================== Dining cryptographers {#sec:dining-crypt} --------------------- The canonical example of an anonymizing protocol is Chaum’s Dining Cryptographers [@cha_1988_dining]. In Chaum’s introduction to this protocol, three cryptographers are sitting down to dine in a restaurant, when the waiter informs them that the bill has already been paid anonymously. They wonder whether one of them has paid the bill in advance, or whether the NSA has done so. Respecting each other’s right to privacy, they carry out the following protocol. Each pair of cryptographers flips a coin, invisible to the remaining cryptographer. Each cryptographer then reveals whether the two coins he saw were equal or unequal. However, if a cryptographer is paying, he states the opposite. An even number of “equals” now indicates that the NSA is paying; an odd number that one of the cryptographers is paying. Formally, Chaum states the result as follows. 
(Here we are restricting to the case with 3 cryptographers; Chaum’s version is more general.) Here, ${{\mathbb F}_2}$ is the field of two elements and $e_i \in {{\mathbb F}_2}^3$ denotes the $i$-th standard basis vector. Let $K$ be a uniformly distributed stochastic variable over ${{\mathbb F}_2}^3$. Let $I$ be a stochastic variable over ${{\mathbb F}_2}^3$, taking only values in $\{ (1,0,0),\allowbreak (0,1,0),\allowbreak (0,0,1),\allowbreak (0,0,0) \}$. Let $A$ be the stochastic variable over ${{\mathbb F}_2}^3$ given by $A = (I_1 + K_2 + K_3, K_1 + I_2 + K_3, K_1 + K_2 + I_3)$. Assume that $K$ and $I$ are independent. Then for every $a \in {{\mathbb F}_2}^3$ with $a_1 + a_2 + a_3 = 1$, $$\forall i \in \{1,2,3\}: {{\mathbb P}}[ I = e_i ] > 0 \implies {{\mathbb P}}[A = a {\;|\;}I = e_i] = \tfrac14$$ and hence $$\forall a \in {{\mathbb F}_2}^3\; \forall i \in \{1,2,3\}: {{\mathbb P}}[ I = e_i ] > 0 \implies {{\mathbb P}}[A = a {\;|\;}I = e_i] = {{\mathbb P}}[A = a {\;|\;}I \neq (0,0,0)]. \tag*{\qed}$$ In terms of the storyline, $K$ represents the coin flips, $I$ represents which cryptographer (if any) is paying, and $A$ represents what every cryptographer says. We will now model this protocol as a probabilistic automaton. We will construct it as a parallel composition of seven components: the Master, who decides who will pay, the three cryptographers Crypt${}_i$, and the three coins Coin$_{i}$. The action $p_i!$ is used by the Master to indicate to Crypt$_{i}$ that he should pay; the action $n_i!$ to indicate that he should not. If no cryptographer is paying, the NSA is paying, which is not explicitly modelled here. The coin Coin${}_i$ is shared by Crypt${}_i$ and Crypt${}_{i-1}$ (taking $i-1$ modulo 3); the action $h_{i,j}!$ represents Coin${}_i$ signalling to Crypt${}_j$ that the coin was heads, and similarly $t_{i,j}!$ signals tails. At the end, the cryptographers state whether the two coins they saw were equal or unequal by means of the actions $a_i!$ (agree) or $d_i!$ (disagree). 
$$\PandocStartInclude{master.tex}\PandocEndInclude{input}{777}{21} \qquad \PandocStartInclude{coin.tex}\PandocEndInclude{input}{779}{19} \qquad \PandocStartInclude{crypt.tex}\PandocEndInclude{input}{781}{20}$$ Now DC is the parallel composition of Master, Coin${}_0$, Coin${}_1$, Coin${}_2$, Crypt${}_0$, Crypt${}_1$, and Crypt${}_2$ with all actions of the form $p_i$, $n_i$, $h_{i,j}$, and $t_{i,j}$ hidden. Note that in Chaum’s version, there is no assumption on the probability distribution of $I$; in our version this is modelled by the fact that the Master makes a non-deterministic choice between the four options. Since we allow probabilistic schedulers, we later recover all possible probability distributions over who is paying, just as in the original version. Independence between the choice of the master and the coin flips ($I$ and $K$ in Chaum’s version) comes for free in the automata model: distinct probabilistic choices are always assumed to be independent. In Section \[sec:PurelyProbabilisticSystems\] we formulate what it means for DC (or, more generally, for an anonymity system) to be anonymous. Voting ------ At a very high level, a voting protocol can be seen as a blackbox that inputs the voters’ votes and outputs the result of the vote. For simplicity, assume that the voters vote yes (1) or no (0), do not abstain, and that the number of voters is known. The result then is the number of yes-votes. $$\PandocStartInclude{voting.tex}\PandocEndInclude{input}{804}{19}$$ In such a setting, it is conceivable that an observer has some a-priori knowledge about which voters are more likely to vote yes and which voters are more likely to vote no. Furthermore, there definitely is a-posteriori knowledge, since the vote result is made public. For instance, in the degenerate case where all voters vote the same way, everybody’s vote is revealed. 
What we expect here from the voting protocol is not that the adversary has no knowledge about the votes (since he might already have a-priori knowledge), and also not that the adversary does not gain any knowledge from observing the protocol (since the vote result is revealed), but rather that observing the protocol does not augment the adversary’s knowledge beyond learning the vote result. For the purely probabilistic case, this notion of anonymity is formalized in Section \[sec:PurelyProbabilisticSystems\]. Anonymity for Purely Probabilistic Systems {#sec:PurelyProbabilisticSystems} ========================================== This section defines anonymity systems and proposes a definition of anonymity in its simplest configuration, i.e., for purely probabilistic systems. Purely probabilistic systems are simpler because there is no need for schedulers. Throughout the following sections, this definition will be incrementally modified towards a more general setting. Let $M = \langle S, {{\text{Act}}}, \alpha \rangle$ be a fully probabilistic automaton. An *anonymity system* is a tuple $\langle M, I, \{ A_i \}_{i \in I}, {{\text{Act}_O}}\rangle$ where 1. $I$ is the set of user identities, 2. $A_i$ is any measurable subset of ${{\operatorname{{CPaths}}}}(M)$ such that $A_i \cap A_j = \emptyset$ for $i \not= j$. 3. ${{\text{Act}_O}}\subseteq {{\text{Act}}}$ is the set of observable actions. 4. ${Otrace}(\pi)$ is the sequence of elements in ${{\text{Act}_O}}$ obtained by removing from $\operatorname{trace}(\pi)$ the elements in ${{\text{Act}}}\setminus {{\text{Act}_O}}$. Define $O$ as the set of observations, i.e., $O = \{{Otrace}(\pi) {\;|\;}\pi \in {{\operatorname{{Paths}}}}(M)\}$. We also define $A = \bigcup_{i \in I} A_i$. Intuitively, the $A_i$s are properties of the executions that the system is meant to hide. 
For example, in the case of the dining cryptographers $A_i$ would be “cryptographer $i$ paid”; in a voting scheme “voter $i$ voted for candidate $c$”, etc. Therefore, for the previous examples, the predicate $A$ would be “one of the cryptographers paid” or “the vote count”, respectively. Next, we propose a definition of anonymity for purely probabilistic systems. We deviate from the definition proposed by Bhargava and Palamidessi [@bp_2005_probabilistic] for what we consider a more intuitive definition: we say that an anonymity system is anonymous if the probability of seeing an observation is independent of who performed the anonymous action ($A_i$), given that some anonymous action took place ($A$ happened). The formal definition follows. A system $\langle M, I, \{ A_i \}_{i \in I}, {{\text{Act}_O}}\rangle$ is said to be anonymous if $$\begin{aligned} \forall i \in I.\forall o \in O. {{\mathbb P}}[\pi \in A] > 0 \implies & {{\mathbb P}}[{Otrace}(\pi) = o \;\land\; \pi \in A_i {\;|\;}\pi \in A] =\\ & {{\mathbb P}}[{Otrace}(\pi) = o {\;|\;}\pi \in A]\, {{\mathbb P}}[\pi \in A_i{\;|\;}\pi \in A].\end{aligned}$$ In the above probabilities, $\pi$ is drawn from the probability space ${{\operatorname{{CPaths}}}}(M)$. The following lemma shows that this definition is equivalent to the one proposed by Bhargava and Palamidessi [@bp_2005_probabilistic]. An anonymity system is anonymous if and only if $$\begin{aligned} \forall i,j \in I.\forall o \in O.({{\mathbb P}}[\pi \in A_i]>0 \land {{\mathbb P}}[\pi \in A_j]>0) \implies & {{\mathbb P}}[{Otrace}(\pi) = o {\;|\;}\pi \in A_i] =\\ & {{\mathbb P}}[{Otrace}(\pi) = o {\;|\;}\pi \in A_j] \;\end{aligned}$$ The only if part is trivial. 
For the if part we have $$\begin{aligned} {{\mathbb P}}[ &{Otrace}(\pi) = o {\;|\;}\pi \in A]\, {{\mathbb P}}[\pi \in A_i {\;|\;}\pi \in A] \\ &= {{\mathbb P}}[\pi \in A_i {\;|\;}\pi \in A] \; \sum_{j \in I} {{\mathbb P}}[{Otrace}(\pi) = o {\;|\;}\pi \in A_j \cap A] \; {{\mathbb P}}[\pi \in A_j {\;|\;}\pi \in A]\\ \intertext{\qquad\quad(since $A_i \cap A_j = \emptyset,i \not= j$)} &= {{\mathbb P}}[\pi \in A_i {\;|\;}\pi \in A] \; \sum_{j \in I} {{\mathbb P}}[{Otrace}(\pi) = o {\;|\;}\pi \in A_j] \; {{\mathbb P}}[\pi \in A_j {\;|\;}\pi \in A]\\ \intertext{\qquad\quad(by definition of $\pi \in A$)} &= {{\mathbb P}}[\pi \in A_i {\;|\;}\pi \in A] \; {{\mathbb P}}[{Otrace}(\pi) = o {\;|\;}\pi \in A_i] \;\sum_{j \in I} {{\mathbb P}}[\pi \in A_j {\;|\;}\pi \in A]\\ \intertext{\qquad\quad(by hypothesis)} &= {{\mathbb P}}[\pi \in A_i {\;|\;}\pi \in A] \; {{\mathbb P}}[{Otrace}(\pi) = o {\;|\;}\pi \in A_i] \;\frac{\sum_{j \in I} {{\mathbb P}}[\pi \in A_j]}{{{\mathbb P}}[\pi \in A]}\\ \intertext{\qquad\quad(since $A_j \subseteq A$)} &= {{\mathbb P}}[\pi \in A_i {\;|\;}\pi \in A] \; {{\mathbb P}}[{Otrace}(\pi) = o {\;|\;}\pi \in A_i] \\ &= \frac{{{\mathbb P}}[\pi \in A_i]}{{{\mathbb P}}[\pi \in A]} \; \frac{{{\mathbb P}}[{Otrace}(\pi) = o \land \pi \in A_i]}{{{\mathbb P}}[\pi \in A_i]}\\ &= {{\mathbb P}}[{Otrace}(\pi) = o \land \pi \in A_i {\;|\;}\pi \in A] \intertext{\qquad\quad(since $A_i \subseteq A$)}\end{aligned}$$ which concludes the proof. Anonymity for Probabilistic Systems =================================== We now try to extend the notion of anonymity to probabilistic automata that are not purely probabilistic, but that still contain some non-deterministic transitions. One obvious try is to say that $M$ is anonymous if $M_\xi$ is anonymous for all schedulers $\xi$ of $M$. The following automaton $M$ and scheduler $\xi$ show that this definition would be problematic. 
$$\PandocStartInclude{choice1.tex}\PandocEndInclude{input}{898}{22} \qquad \PandocStartInclude{choice2.tex}\PandocEndInclude{input}{900}{22}$$ Here $a_1$ and $a_2$ are invisible actions; they represent which user performed the action that was to remain hidden. The actions $x_1$ and $x_2$ are observable. Intuitively, because the adversary cannot see the messages $a_1$ and $a_2$, she cannot learn which user actually performed the hidden action. On the right hand side $M_\xi$ is shown and the branches the scheduler does not take are indicated by dotted arrows. Now ${{\mathbb P}}_\xi[a_1 {\;|\;}x_1] = 1$, but ${{\mathbb P}}_\xi[a_1] = \frac{1}{2}$, showing that with this particular scheduler $M_\xi$ is not anonymous. Note that this phenomenon can easily occur as a consequence of communication non-determinism. For instance, consider the following three automata and their parallel composition in which $c?$ and $c!$ are hidden. In this example the order of the messages $x_1$ and $x_2$ depends on a race-condition, but a scheduler can make it depend on whether $a_1$ or $a_2$ was taken. I.e., there exists a scheduler $\xi$ such that ${{\mathbb P}}_\xi[x_1 x_2 {\;|\;}a_1] = {{\mathbb P}}_\xi[x_2 x_1 {\;|\;}a_2] = 1$ and hence ${{\mathbb P}}_\xi[x_2 x_1 {\;|\;}a_1] = {{\mathbb P}}_\xi[x_1 x_2 {\;|\;}a_2] = 0$. $$\PandocStartInclude{comm1.tex}\PandocEndInclude{input}{916}{20} \qquad\qquad \PandocStartInclude{comm2.tex}\PandocEndInclude{input}{918}{20}$$ In fact, the Dining Cryptographers example from Section \[sec:dining-crypt\] suffers from exactly the same problem. The order in which the cryptographers say ${\operatorname{\it agree}}_i$ or ${\operatorname{\it disagree}}_i$ is determined by the scheduler and it is possible to have a scheduler that makes the paying cryptographer, if any, go last. 
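The computation behind this counterexample is tiny; the following Python fragment (the run distribution is read off from $M_\xi$ in the figure above) reproduces it:

```python
from fractions import Fraction

# Complete runs of M_xi under the leaking scheduler: the probabilistic choice
# picks a1 or a2 with probability 1/2 each, after which the scheduler outputs
# x1 following a1 and x2 following a2.
runs = {('a1', 'x1'): Fraction(1, 2), ('a2', 'x2'): Fraction(1, 2)}

p_a1 = sum(p for (a, _), p in runs.items() if a == 'a1')
p_x1 = sum(p for (_, x), p in runs.items() if x == 'x1')
p_a1_given_x1 = runs.get(('a1', 'x1'), Fraction(0)) / p_x1
```

Although `a1` has prior probability 1/2, observing `x1` pins it down completely: the scheduler has turned an invisible choice into a visible one.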
In [@bp_2005_probabilistic], a system $M$ is called anonymous if for all schedulers $\zeta$, $\xi$, for all observables $o$, and for all hidden actions $a_i$, $a_j$ such that ${{\mathbb P}}_\zeta[a_i] > 0$ and ${{\mathbb P}}_\xi[a_j] > 0$, ${{\mathbb P}}_\zeta[o {\;|\;}a_i] = {{\mathbb P}}_\xi[o {\;|\;}a_j]$. This definition, of course, has the same problems as above; in the Dining Cryptographers example in [@bp_2005_probabilistic] this is solved by fixing the order in which the cryptographers say ${\operatorname{\it agree}}_i$ or ${\operatorname{\it disagree}}_i$. However, under this definition even a non-deterministic choice between two otherwise anonymous systems can become non-anonymous. For instance, let $P$ be some anonymous system. For simplicity, assume that $P$ is fully probabilistic (e.g., the Dining Cryptographers with a probabilistic master and a fixed scheduler) and let $P'$ be a variant of $P$ in which the visible actions have been renamed (e.g., the actions ${\operatorname{\it agree}}_i$ and ${\operatorname{\it disagree}}_i$ are renamed to ${\operatorname{\it equal}}_i$ and ${\operatorname{\it unequal}}_i$). Now consider the probabilistic automaton $M$ which non-deterministically chooses between $P$ and $P'$: $$\xymatrix{ & \circ \ar[dl] \ar[dr] \\ P\phantom{'} \drop\frm{-} && P' \drop\frm{-} }$$ This automaton has only two schedulers: the one that chooses the left branch and then executes $P$ and the one that chooses the right branch and then executes $P'$. Let us call these schedulers $l$ and $r$, respectively. Now pick any hidden action $a_i$ and observable $o$ such that ${{\mathbb P}}_l[o {\;|\;}a_i] > 0$ (e.g., $o = {\operatorname{\it agree}}_1 {\operatorname{\it disagree}}_2 {\operatorname{\it agree}}_3$ and $a_i = {\operatorname{\it pay}}_1$, for which ${{\mathbb P}}_l[o {\;|\;}{\operatorname{\it pay}}_1] = \frac{1}{4}$). Then, nevertheless, ${{\mathbb P}}_r[o {\;|\;}a_i] = 0$, because the observation $o$ cannot occur in $P'$. 
So, even though intuitively this system should be anonymous, it is not so according to the definition in [@bp_2005_probabilistic]. In each case the problem is that the scheduler has access to information it should not have. When one specifies a protocol by giving a probabilistic automaton, an implementation of this protocol has to implement a scheduler as well. This is especially obvious if the non-determinism originates from communication. When we identify schedulers with adversaries, as is common, it becomes clear that the scheduler should not have access to too much information. In the next section we define a class of schedulers, called *admissible* schedulers, that base their scheduling behaviour on the information an adversary actually has access to: the observable history of the system. Admissible Schedulers ===================== As explained in the previous section, defining anonymity as a condition that should hold true for all possible schedulers is problematic. It is usual to quantify over all schedulers when showing theoretical properties of systems with both probabilities and non-determinism; for example, we may say “no matter how the non-determinism is resolved, the probability of an event $X$ is at least $p$”. However, in the analysis of security protocols, for example with respect to anonymity, we would like to quantify over all possible “realistic” adversaries. These do not comprise all possible schedulers as in our theoretical considerations, since a realistic adversary is not able to see all details of the probabilistic automaton under consideration. Hence, taking the adversary to be an arbitrary scheduler enables the adversary to leak information where it normally could not. We call such schedulers interfering schedulers. In this way, protocols that are well known to be anonymous turn out not to be anonymous. One such example is the dining cryptographers protocol explained above. 
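Indeed, the probabilistic core of the protocol is anonymous: with the coins $K$ uniform over ${{\mathbb F}_2}^3$, the conditional distribution of the announcements does not depend on which cryptographer pays. Since there are only eight coin outcomes, this can be checked exhaustively (an illustrative Python computation, using 0-based indices; the function names are ours):

```python
from itertools import product
from fractions import Fraction

def announcements(i, k):
    """A = (I1 + K2 + K3, K1 + I2 + K3, K1 + K2 + I3) over F_2."""
    return ((i[0] + k[1] + k[2]) % 2,
            (k[0] + i[1] + k[2]) % 2,
            (k[0] + k[1] + i[2]) % 2)

def conditional_distribution(payer):
    """Distribution of A given I = payer, with K uniform over F_2^3."""
    counts = {}
    for k in product((0, 1), repeat=3):   # the 8 equiprobable coin outcomes
        a = announcements(payer, k)
        counts[a] = counts.get(a, 0) + 1
    return {a: Fraction(c, 8) for a, c in counts.items()}
```

The three conditional distributions coincide (each is uniform on the four announcement vectors of odd parity), so the announcements alone reveal nothing about the payer; any leak must come from the scheduler.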
We show that one gets a better definition of anonymity if one restricts the power of the schedulers in a realistic way. In this section we define the type of schedulers with restricted power that we consider good enough for showing anonymity of certain protocols. We call these schedulers admissible. Schedulers with restricted power have been treated in the literature. In general, as explained by Segala in [@Seg95:thesis], a scheduler with restricted power is given by defining two equivalences, one equivalence $\equiv_1$ on the set of finite paths and another one $\equiv_2$ on the set of possible transitions, in this case ${{\operatorname{{\mathcal{D}}}}}(A\times S)$. Then a scheduler $\xi$ is oblivious relative to $\langle\equiv_1, \equiv_2\rangle$ if and only if for any two paths $\pi_1, \pi_2$ we have $$\pi_1 \equiv_1 \pi_2 \Longrightarrow \xi(\pi_1) \equiv_2 \xi(\pi_2).$$ Admissible schedulers based on bisimulation ------------------------------------------- In this section we specify $\equiv_1$ and $\equiv_2$ and obtain a class of oblivious adversaries that suits the anonymity definition. Define $\equiv_1$ on the set of finite paths of an automaton $M$ by $$\pi_1 \equiv_1 \pi_2 \iff \big(\operatorname{trace}(\pi_1) = \operatorname{trace}(\pi_2) \land \operatorname{last}(\pi_1) \sim \operatorname{last}(\pi_2)\big).$$ Recall that we defined $\equiv_R$ as the lifting of an equivalence relation $R$ on a set $S$ to an equivalence relation on ${{\operatorname{{\mathcal{D}}}}}(A\times S)$. For $\equiv_2$ we take the equivalence $\equiv_{\sim}$ on ${{\operatorname{{\mathcal{D}}}}}(A\times S)$. This is well defined since bisimilarity is an equivalence. Hence, we obtain a class of schedulers that are oblivious relative to $\langle\equiv_1, \equiv_{\sim}\rangle$. These schedulers we call admissible.
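The lifting $\equiv_R$ used above can be made concrete: two distributions over $A\times S$ are related iff they assign the same mass to every pair of an action and an $R$-equivalence class. A minimal sketch with distributions encoded as dictionaries (our own hypothetical encoding, not the paper's formalism):

```python
def lifted_equiv(mu1, mu2, classes):
    """mu1, mu2: dicts mapping (action, state) -> probability.
    classes: dict mapping each state to its R-equivalence class id.
    Returns True iff mu1 and mu2 assign equal mass to every
    (action, equivalence class) pair -- the lifting of R to D(A x S)."""
    def mass(mu):
        agg = {}
        for (a, s), p in mu.items():
            key = (a, classes[s])
            agg[key] = agg.get(key, 0.0) + p
        return agg
    m1, m2 = mass(mu1), mass(mu2)
    keys = set(m1) | set(m2)
    return all(abs(m1.get(k, 0.0) - m2.get(k, 0.0)) < 1e-12 for k in keys)
```

For example, with bisimilar states `s1`, `s2` in the same class, a point mass on `('a', 's1')` is lifted-equivalent to a point mass on `('a', 's2')`, and also to any split of that mass within the class.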
A scheduler is admissible if for any two finite paths $\pi_1$ and $\pi_2$ we have $$\big(\operatorname{trace}(\pi_1) = \operatorname{trace}(\pi_2) \land \operatorname{last}(\pi_1) \sim \operatorname{last}(\pi_2)\big)\Longrightarrow \xi(\pi_1) \equiv_{\sim} \xi(\pi_2).$$ Intuitively, the definition of an admissible scheduler enforces that whenever the scheduler has observed the same history (given by the traces of the paths) and is in bisimilar states, it must schedule “the same” transitions up to bisimilarity. Existence --------- We now show that admissible schedulers do exist. In fact, we even show that admissible history-independent schedulers exist. A scheduler $\xi$ is history-independent if it is completely determined by its image of paths of length 0, i.e., if for any path $\pi$ it holds that $\xi(\pi) = \xi(\operatorname{last}(\pi))$. There exists an admissible scheduler for every probabilistic automaton. Take a probabilistic automaton $M$. We first show that there exists a map $\xi: S \to {{\operatorname{{\mathcal{D}}}}}(A\times S)\cup \{\bot\}$ with the property that $\xi(s) = \bot$ if and only if $s$ terminates and for all $s, t \in S$, if $s \sim t$, then $\xi(s) \equiv_\sim \xi(t)$. Consider the set of partial maps $$\Xi = \left\{\xi: S \hookrightarrow {{\operatorname{{\mathcal{D}}}}}(A\times S)\cup \{\bot\} \left| \begin{array}{ll} &\xi(s) = \bot \iff s \text{~terminates~},\\ & s \sim t \Longrightarrow \xi(s) \equiv_\sim \xi(t)\\ & \text{~for~} s, t \in dom(\xi) \end{array} \right\}\right..$$ This set is not empty since the unique partial map with empty domain belongs to it. We define an order $\leq$ on $\Xi$ in a standard way by $$\xi_1 \leq \xi_2 \iff \big( dom(\xi_1) \subseteq dom(\xi_2) \land \xi_2|_{dom(\xi_1)} = \xi_1 \big).$$ Consider a chain $(\xi_i)_{i \in I}$ in $\Xi$. Let $\xi = \cup_{i \in I} \xi_i$.
This means that $dom(\xi) = \cup_{i\in I} dom(\xi_i)$ and if $x \in dom(\xi)$, then $\xi(x) = \xi_i(x)$ for $i \in I$ such that $x \in dom(\xi_i)$. Note that $\xi$ is well-defined since $(\xi_i)_{i \in I}$ is a chain. Moreover, it is obvious that $\xi_i \leq \xi$ for all $i \in I$. We next check that $\xi \in \Xi$. Let $s, t \in dom(\xi)$ such that $s \sim t$. Then $s \in dom(\xi_i)$ and $t \in dom(\xi_j)$ for some $i,j \in I$ and either $\xi_i \leq \xi_j$ or $\xi_j \leq \xi_i$. Assume $\xi_i \leq \xi_j$. Then $s, t \in dom(\xi_j)$ and $\xi_j \in \Xi$, so we have that $\xi_j(s) \equiv_\sim \xi_j(t)$, showing that $\xi(s) \equiv_\sim \xi(t)$, and we have established that $\xi \in \Xi$. Hence, every ascending chain in $\Xi$ has an upper bound. By Zorn’s Lemma we conclude that $\Xi$ has a maximal element. Let $\sigma$ be such a maximal element in $\Xi$. We claim that $\sigma$ is a total map. Assume the opposite, i.e., that there exists $s \in S \setminus dom(\sigma)$. We define a new partial scheduler $\sigma'$ as follows. If there exists a $t \in dom(\sigma)$ such that $s \sim t$, we distinguish two cases: if $\sigma(t) = \bot$ we put $\sigma'(s) = \bot$; if $\sigma(t) = \mu_t$, then, since $t \to \mu_t$ and $s \sim t$, there exists $\mu_s$ such that $s \to \mu_s$ and $\mu_t \equiv_\sim \mu_s$, and in this case we put $\sigma'(s) = \mu_s$. If no such $t$ exists, we put $\sigma'(s) = \bot$ in case $s$ terminates and $\sigma'(s) = \mu$ for an arbitrary $\mu$ with $s \to \mu$ otherwise. Moreover, put $\sigma'(x) = \sigma(x)$ for $x \in dom(\sigma)$. Then we have $\sigma'> \sigma$ and $\sigma' \in \Xi$, contradicting the maximality of $\sigma$. Hence $\sigma$ is a total map. Finally, the scheduler defined by $\xi(\pi) = \sigma(\operatorname{last}(\pi))$ is history-independent by construction and admissible, since $\operatorname{last}(\pi_1) \sim \operatorname{last}(\pi_2)$ already implies $\xi(\pi_1) \equiv_\sim \xi(\pi_2)$. We are now ready to define anonymity for probabilistic systems; the formal definition follows.
A system $\langle M, I, \{ A_i \}_{i \in I}, {{\text{Act}_O}}\rangle$ is said to be anonymous if for all admissible schedulers $\xi$, for all $i \in I$ and for all $o \in O$ $$\begin{aligned} {{\mathbb P}}_\xi[\pi \in A] > 0 \implies & {{\mathbb P}}_\xi[{Otrace}(\pi) = o \;\land\; \pi \in A_i {\;|\;}\pi \in A] =\\ & {{\mathbb P}}_\xi[{Otrace}(\pi) = o {\;|\;}\pi \in A]\, {{\mathbb P}}_\xi[\pi \in A_i{\;|\;}\pi \in A].\end{aligned}$$ Anonymity Examples {#sec:examples} ================== In the purely non-deterministic setting, anonymity of a system is often proved (or defined) as follows: take two users $A$ and $B$ and a trace in which user $A$ is “the culprit”. Now find a trace that looks the same to the adversary, but in which user $B$ is “the culprit” [@ho_2003_anonymity; @ghrp_2005_anonymity; @mvv_2004_anonymity; @HasuoK07a]. In fact, this new trace is often most easily obtained by switching the behavior of $A$ and $B$. In this section, we make this technique explicit for anonymity in our setting, with mixed probability and non-determinism. Let $M$ be a probabilistic automaton. A map $\alpha \colon S \to S$ is called an [*${{\text{Act}_O}}$-automorphism*]{} if $\alpha$ induces an automorphism of the automaton $M_\tau$, which is a copy of $M$ with all actions not in ${{\text{Act}_O}}$ renamed to $\tau$. The following result generalizes the above-mentioned proof technique that is commonly used in a purely non-deterministic setting. Consider an anonymity system $(M,I,{{\text{Act}_O}})$. Suppose that for every $i, j \in I$ there exists an ${{\text{Act}_O}}$-automorphism $\alpha \colon S \to S$ such that $\alpha(A_i) = A_j$. Then the system is anonymous. Anonymity of the Dining Cryptographers {#anonymity-of-the-dining-cryptographers .unnumbered} -------------------------------------- We can now apply the techniques from the previous section to the Dining Cryptographers.
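The defining condition of an ${{\text{Act}_O}}$-automorphism can be sketched operationally for the non-probabilistic fragment: $\alpha$ must be a bijection on states, and the transition relation must be invariant under $\alpha$ once all non-observable actions are hidden as $\tau$. The following is our own toy encoding (a full check would also compare transition probabilities):

```python
TAU = 'tau'

def induces_acto_automorphism(states, trans, alpha, obs_actions):
    """Check that the state map alpha induces an automorphism of
    M_tau, the copy of M with all actions outside obs_actions
    renamed to tau.  trans: set of triples (source, action, target).
    Sketch for the non-probabilistic fragment only."""
    # alpha must be a bijection on the state set
    if len(alpha) != len(states) or set(alpha.values()) != set(states):
        return False
    hide = lambda a: a if a in obs_actions else TAU
    hidden = {(s, hide(a), t) for (s, a, t) in trans}
    mapped = {(alpha[s], a, alpha[t]) for (s, a, t) in hidden}
    return mapped == hidden
```

For the anonymity proof technique one additionally checks $\alpha(A_i) = A_j$; e.g., a map swapping the two "payer" branches of a toy system passes the check precisely when the payer identity is hidden.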
Concretely, we show that there exists an ${{\text{Act}_O}}$-automorphism exchanging the behaviour of Crypt${}_1$ and Crypt${}_2$; by symmetry, the same holds for the other two combinations. Consider the endomorphisms of Master and Coin${}_2$ indicated in the following figures; the states in the left copy that are not explicitly mapped (by a dotted arrow) to a state in the right copy are mapped to themselves. [figure: the endomorphism of Master] [figure: the endomorphism of Coin${}_2$] Also consider the identity endomorphism on Crypt${}_i$ (for $i = 0, 1, 2$) and on Coin${}_i$ (for $i = 0, 1$). Taking the product of these seven endomorphisms, we obtain an endomorphism $\alpha$ of DC. [GHvRP05]{} R. Bellman. A Markovian decision process. , 6, 1957. Mohit Bhargava and Catuscia Palamidessi. Probabilistic anonymity. In Mart[í]{}n Abadi and Luca de Alfaro, editors, [*Concurrency Theory, 16th International Conference (CONCUR ’05)*]{}, volume 3653 of [ *Lecture Notes in Computer Science*]{}, pages 171–185. Springer, 2005. David Chaum. The dining cryptographers problem: Unconditional sender and recipient untraceability. [*Journal of Cryptology*]{}, 1(1):65–75, 1988. Ling Cheung. . PhD thesis, RU Nijmegen, 2006. Flavio D. Garcia, Ichiro Hasuo, Peter van Rossum, and Wolter Pieters. Provable anonymity. In Ralf K[ü]{}sters and John Mitchell, editors, [*Proceedings of the 2005 [ACM]{} Workshop on Formal Methods in Security Engineering (FMSE ’05)*]{}, pages 63–72. ACM, 2005. Ichiro Hasuo and Yoshinobu Kawabe. Probabilistic anonymity via coalgebraic simulations. In [*European Symposium on Programming (ESOP ’07)*]{}, volume 4421 of [*Lecture Notes in Computer Science*]{}, pages 379–394. Springer, Berlin, 2007. Joseph Halpern and Kevin O’Neill. Anonymity and information hiding in multiagent systems. In [*16th [IEEE]{} Computer Security Foundations Workshop (CSFW ’03)*]{}, pages 75–88, 2003. S. Mauw, J. Verschuren, and E.P. de Vink.
A formalization of anonymity and onion routing. In P. Samarati, P. Ryan, D. Gollmann, and R. Molva, editors, [ *Proceedings of Esorics 2004*]{}, volume 3193 of [*Lecture Notes in Computer Science*]{}, pages 109–124, 2004. R. Segala. . PhD thesis, MIT, 1995. R. Segala and N.A. Lynch. Probabilistic simulations for probabilistic processes. In [*Proc. Concur’94*]{}, pages 481–496. LNCS 836, 1994. A. Sokolova. . PhD thesis, TU Eindhoven, 2005. A. Sokolova and E.P. de Vink. Probabilistic automata: system types, parallel composition and comparison. In C. Baier, B.R. Haverkort, H. Hermanns, J.-P. Katoen, and M. Siegle, editors, [*Validation of Stochastic Systems: A Guide to Current Research*]{}, pages 1–43. LNCS 2925, 2004.
--- abstract: 'We prove low regularity a priori estimates for the derivative nonlinear Schrödinger equation in Besov spaces with positive regularity index conditional upon small $L^2$-norm. This covers the full subcritical range. We use the power series expansion of the perturbation determinant introduced by Killip–Vişan–Zhang for completely integrable PDE. This makes it possible to derive low regularity conservation laws from the perturbation determinant.' address: 'Fakultät für Mathematik, Karlsruher Institut für Technologie, Englerstrasse 2, 76131 Karlsruhe, Germany' author: - Friedrich Klaus - Robert Schippa title: A priori estimates for the derivative nonlinear Schrödinger equation --- Introduction ============ In this note the following derivative nonlinear Schrödinger equation (*dNLS*) is considered $$\label{eq:dNLS} \left\{ \begin{array}{cl} i \partial_t q + \partial_{xx} q + i \partial_x(|q|^2 q) &= 0 \quad (t,x) \in {\mathbb{R}}\times \mathbb{K}, \\ q(0) &= q_0 \in H^s(\mathbb{K}), \end{array} \right.$$ where $\mathbb{K} \in \{ {\mathbb{R}}, {\mathbb{T}}= ({\mathbb{R}}/ (2 \pi {\mathbb{Z}})) \}$. In the seventies, dNLS was proposed as a model in plasma physics in [@Rogister1971; @MioOginoMinamiTakeda1976; @Mjolhus1976]. In the following let $\mathcal{S}(\mathbb{R})$ denote the Schwartz functions on the line and $\mathcal{S}({\mathbb{T}})$ smooth functions on the circle. Here we prove a priori estimates $$\sup_{t \in {\mathbb{R}}} \Vert q(t) \Vert_{H^s} \lesssim_{s} \Vert q_0 \Vert_{H^s}, \quad 0 < s < \frac{1}{2},$$ where $q \in C^\infty({\mathbb{R}}; \mathcal{S}(\mathbb{K}))$ is a smooth global solution to dNLS, which is also rapidly decaying in the line case, *conditional upon small* $L^2$*-norm*. These estimates are the key to extending local solutions globally in time. Local well-posedness, i.e., existence, uniqueness and continuous dependence locally in time, in $H^{1/2}$ was proved by Takaoka [@Takaoka1999] on the real line and Herr [@Herr2006] on the circle.
They proved local well-posedness via the contraction mapping principle, that is, *perturbatively*. Furthermore, they showed that the data-to-solution mapping fails to be $C^3$ below $H^{1/2}$ in either geometry, respectively. Thus, their results mark the limit of proving local well-posedness via fixed point arguments. However, on the real line dNLS admits the scaling symmetry $$\label{eq:Scaling} q(t,x) \rightarrow \lambda^{-1/2} q(\lambda^{-2} t, \lambda^{-1} x),$$ which distinguishes $L^2$ as the scaling critical space. Hence, we still expect a milder form of local well-posedness in $H^s$ for $0 \leq s < 1/2$. By short-time Fourier restriction, Guo [@Guo2011] proved a priori estimates for $s>1/4$ on the real line, which the second author [@Schippa2017] extended to periodic boundary conditions.\ Moreover, Grünrock [@Gruenrock2005] showed local well-posedness on the real line in Fourier Lebesgue spaces, which scale like $H^s$, $s>0$. Deng *et al.* [@DengNahmodYue2019] recently extended this to periodic boundary conditions; see also the previous work [@GruenrockHerr2008] by Grünrock–Herr. Less is known about global well-posedness. Conserved quantities of the flow include the mass, i.e., the $L^2$-norm, $$M[q] = \int_{\mathbb{K}} |q|^2 dx,$$ the momentum, related with the $H^{1/2}$-norm, $$P[q] = \int_{\mathbb{K}} \mathrm{Im}(\bar{q} q_x) - \frac{1}{2} |q|^4 dx,$$ and the energy, related with the $H^1$-norm, $$E[q] = \int_{\mathbb{K}} |q_x|^2 - \frac{3}{2} |q|^2 \mathrm{Im}( \bar{q} q_x) + \frac{1}{2} |q|^6 dx.$$ A local well-posedness result in $L^2$ seems to be very difficult due to the scaling criticality. On the other hand, it is not straightforward to use the other quantities to prove a global result due to lack of definiteness. The remedy in previous works was to impose a smallness condition on the $L^2$-norm and use the sharp Gagliardo-Nirenberg inequality.
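The exponent $-1/2$ in the scaling symmetry can be verified termwise; the following computation (our own sketch, not from the original text) shows that each term of the equation picks up the common factor $\lambda^{-5/2}$, while the $L^2$-norm of the datum is left invariant:

```latex
% q_\lambda(t,x) = \lambda^{-1/2} q(\lambda^{-2} t, \lambda^{-1} x)
\begin{aligned}
i \partial_t q_\lambda &= \lambda^{-5/2} \, (i \partial_t q)(\lambda^{-2} t, \lambda^{-1} x), \qquad
\partial_{xx} q_\lambda = \lambda^{-5/2} \, (\partial_{xx} q)(\lambda^{-2} t, \lambda^{-1} x), \\
i \partial_x \big( |q_\lambda|^2 q_\lambda \big)
  &= i \partial_x \big( \lambda^{-3/2} (|q|^2 q)(\lambda^{-2} t, \lambda^{-1} x) \big)
   = \lambda^{-5/2} \, \big( i \partial_x (|q|^2 q) \big)(\lambda^{-2} t, \lambda^{-1} x), \\
\Vert q_\lambda(0) \Vert_{L^2}^2
  &= \int \lambda^{-1} |q(0, \lambda^{-1} x)|^2 \, dx = \Vert q_0 \Vert_{L^2}^2 .
\end{aligned}
```

Hence $q_\lambda$ solves the equation whenever $q$ does, and the $L^2$-norm is precisely the scale-invariant quantity.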
Wu [@Wu2015] observed in the line case that combining several conserved quantities improves the $L^2$-threshold, which can be derived from the energy (cf. [@Wu2013]). Mosincat–Oh carried out the corresponding argument on the torus [@MosincatOh2015]. Additionally making use of the $I$-method (cf. [@CollianderKeelStaffilaniTakaokaTao2002; @MiaoWuXu2011]), Guo–Wu [@GuoWu2017] proved global well-posedness in $H^{1/2}({\mathbb{R}})$ for $\Vert u_0 \Vert_{L^2}^2 < 4 \pi$, and Mosincat [@Mosincat2017] proved global well-posedness in $H^{1/2}({\mathbb{T}})$ under the same $L^2$-smallness condition. Furthermore, Nahmod *et al.* [@NahmodOhReyBelletStaffilani2012] proved a probabilistic global well-posedness result in Fourier Lebesgue spaces scaling like $H^{1/2-\varepsilon}({\mathbb{T}})$. Kaup–Newell [@KaupNewell1978] already observed shortly after the proposal of dNLS that it admits a Lax pair with operator $$\label{eq:LaxOperator} L(t;q) = \begin{pmatrix} \partial + i \kappa^2 & - \kappa q \\ - \kappa \bar{q} & \partial - i \kappa^2 \end{pmatrix} .$$ Consequently, there are infinitely many conserved quantities of the flow. However, to the best of the authors’ knowledge, there are no works using the complete integrability for solutions in *unweighted* $L^2$-based Sobolev spaces, i.e., without imposing additional spatial decay. In particular, there are no results for periodic boundary conditions making use of the complete integrability. Via inverse scattering, Lee [@Lee1983; @Lee1989] proved global existence and uniqueness for certain initial data in $\mathcal{S}({\mathbb{R}})$. Later, Liu [@Liu2017] considered dNLS with initial data in weighted Sobolev spaces $H^{2,2}({\mathbb{R}})$ and proved global well-posedness via inverse scattering. See the subsequent works [@JenkinsLiuPerrySulem2018I; @JenkinsLiuPerrySulem2018II; @JenkinsLiuPerrySulem2020] due to Jenkins *et al.* for results addressing soliton resolution in weighted Sobolev spaces.
Recently, Pelinovsky–Shimabukuro [@PelinovskyShimabukuro2018] proved global well-posedness in $H^{1,1}({\mathbb{R}}) \cap H^2({\mathbb{R}})$ without $L^2$-smallness condition, but under assumptions on the Kaup–Newell spectral problem; see also [@PelinovskySaalmannShimabukuro2017; @Saalmann2017]. There are major technical difficulties in applying inverse scattering techniques in unweighted Sobolev spaces; e.g., on the line the decay of the data is typically insufficient for classical arguments. For the nonlinear Schrödinger equation on the line, Koch–Tataru [@Koch2018] were able to use the transmission coefficient to obtain almost conserved $H^s$-energies for all $s > -\frac{1}{2}$. Killip–Vişan–Zhang [@KillipVisanZhang2018] pointed out a power series representation for the determinant $$\log {\det} \big( \begin{bmatrix} (-\partial +\tilde{\kappa})^{-1} & 0 \\ 0 & (- \partial - \tilde{\kappa})^{-1} \end{bmatrix} \begin{bmatrix} -\partial + \tilde{\kappa} & iq \\ \mp i\bar{q} & -\partial - \tilde{\kappa} \end{bmatrix} \big),$$ given by $$\sum_{l=1}^{\infty} \frac{(\mp 1)^{l-1}}{l} \operatorname{tr}\left\{\left[(\tilde{\kappa}-\partial)^{-1 / 2} q(\tilde{\kappa}+\partial)^{-1} \bar{q}(\tilde{\kappa}-\partial)^{-1 / 2}\right]^{l}\right\},$$ which works in either geometry. Killip *et al.* [@KillipVisanZhang2018] showed that it is conserved for NLS and mKdV by term-by-term differentiation. This led to low regularity conservation laws and corresponding a priori estimates in either geometry. Talbut [@Talbut2018] used the same approach to show low regularity conservation laws for the Benjamin-Ono equation.
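The power series above is the operator analogue of the scalar identity $\log \det(I+K) = \sum_{l \geq 1} \frac{(-1)^{l-1}}{l} \operatorname{tr}(K^l)$, valid when the spectral radius of $K$ is below one. A self-contained finite-dimensional sanity check with a $2\times 2$ matrix (our own toy example; $K$ chosen with spectral radius well below $1$):

```python
import math

def matmul2(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def logdet_series(K, terms=60):
    """Partial sum of log det(I + K) = sum_{l>=1} (-1)^{l-1} tr(K^l) / l,
    convergent when the spectral radius of K is < 1."""
    total, P = 0.0, K
    for l in range(1, terms + 1):
        total += (-1) ** (l - 1) * (P[0][0] + P[1][1]) / l
        P = matmul2(P, K)
    return total

K = [[0.2, 0.1], [0.05, -0.3]]
direct = math.log((1 + K[0][0]) * (1 + K[1][1]) - K[0][1] * K[1][0])
```

Roughly speaking, in the PDE setting $K$ is the Hilbert–Schmidt operator built from $q$, and smallness of $\Vert q \Vert_{L^2}$ (or largeness of the spectral parameter) plays the role of the spectral-radius condition.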
Motivated by these results, we show that the determinant $$\log {\det} \big( \begin{bmatrix} (\partial + i\kappa^2)^{-1} & 0 \\ 0 & (\partial - i\kappa^2)^{-1} \end{bmatrix} \begin{bmatrix} \partial + i\kappa^2 & - \kappa q \\ -\kappa \bar{q} & \partial - i\kappa^2 \end{bmatrix} \big),$$ given by $$\label{eq:PerturbationDeterminant} \sum_{l=1}^{\infty} \frac{(- 1)^{l}i^{l+1}\tilde{\kappa}^{l}}{l} \operatorname{tr}\left\{\left[(\partial - \tilde{\kappa})^{-1 / 2} q(\partial+ \tilde{\kappa})^{-1} \bar{q}(\partial-\tilde{\kappa})^{-1 / 2}\right]^{l}\right\},$$ where we formally set $\tilde{\kappa} = - i \kappa^2$ (we drop the tilde later on), is conserved for solutions of dNLS. This yields the following theorem: \[thm:APrioriEstimates\] Let $q \in C^\infty({\mathbb{R}};\mathcal{S}(\mathbb{K}))$ be a smooth solution to dNLS. For any $0<s<1/2$, $r \in [1,\infty]$, there is $c=c(s,r)<1$ such that $$\label{eq:APrioriBesovEstimate} \Vert q(t) \Vert_{B^s_{r,2}} \lesssim \Vert q(0) \Vert_{B^s_{r,2}}$$ provided that $\Vert q(0) \Vert_2 \leq c$. We focus on regularities for which global results were previously unknown. It appears feasible to cover higher regularities following [@KillipVisanZhang2018 Section 3]. In follow-up works to [@KillipVisanZhang2018], Killip–Vişan showed sharp global well-posedness for the KdV equation [@KillipVisan2019] and later on, with Bringmann, for the fifth order KdV equation [@BringmannKillipVisan2019]. Sharp global well-posedness for NLS and mKdV on the real line was shown by Harrop-Griffiths–Killip–Vişan [@HarropGriffithsKillipVisan2020]. It is an interesting question to determine whether dNLS is within the reach of these works. *Outline of the paper.* In Section 2 we show that the perturbation determinant is a conserved quantity. In Section 3 we derive low regularity conservation laws, yielding a priori estimates and finishing the proof of the main result.
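The $L^2$-part of the convergence argument in the next section rests on the elementary multiplier bound $\sup_\xi \log(4+\xi^2/\kappa^2)/\sqrt{4\kappa^2+\xi^2} \lesssim \kappa^{-1}$. As a numerical sanity check (our own sketch; substituting $\xi = \kappa u$ and $t = \sqrt{4+u^2} \geq 2$ shows $\kappa$ times the sup is the constant $\max_{t \geq 2} 2\log(t)/t = 2/e$, attained at $t = e$):

```python
import math

def sup_weight(kappa, u_max=50.0, n=100000):
    """Grid approximation of sup_xi log(4 + xi^2/kappa^2) / sqrt(4 kappa^2 + xi^2).
    Substituting xi = kappa*u shows this equals
    kappa^{-1} * sup_u log(4 + u^2) / sqrt(4 + u^2)."""
    best = 0.0
    for k in range(n + 1):
        u = u_max * k / n
        best = max(best, math.log(4 + u * u) / math.sqrt(4 + u * u))
    return best / kappa
```

So the sup decays exactly like $\kappa^{-1}$, which is what trades smallness of $\Vert q \Vert_{L^2}$ (or largeness of $\kappa$) for geometric convergence of the series.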
The perturbation determinant ============================ In the following we consider $$\label{eq:RealPerturbationDeterminant} \alpha(\kappa;q) = \mathrm{Re} \sum_{l \geq 1} \frac{(-1)^{l} i^{l+1} \kappa^l}{l} \text{tr} \big( ((\kappa - \partial)^{-1} q (\kappa + \partial)^{-1} \bar{q})^l \big).$$ To ensure that $\alpha$ converges geometrically and we can differentiate term by term, we recall that for $\kappa > 0$ [@KillipVisanZhang2018 Eq. (45)] $$\label{eq:HSEstimate} \Vert (\kappa - \partial)^{-1/2} q (\kappa + \partial)^{-1/2} \Vert_{\mathfrak{I}_2({\mathbb{R}})}^2 \approx \int \log\big( 4 + \frac{\xi^2}{\kappa^2} \big) \frac{|\hat{q}(\xi)|^2}{\sqrt{4 \kappa^2 + \xi^2}} d\xi.$$ Since $$\int \log \big( 4 + \frac{\xi^2}{\kappa^2} \big) \frac{|\hat{q}(\xi)|^2}{\sqrt{4 \kappa^2 + \xi^2}} d\xi \leq \sup_\xi \big[ \frac{\log \big( 4 + \frac{\xi^2}{\kappa^2} \big)}{\sqrt{4 \kappa^2 + \xi^2}} \big] \int |\hat{q}(\xi)|^2 d\xi \lesssim \frac{1}{\kappa} \Vert q \Vert_{L^2}^2,$$ we conclude the $L^2$-part of the following: Let $q \in C^\infty({\mathbb{R}}; \mathcal{S}(\mathbb{K}))$. Suppose that $\Vert q \Vert_{L^2} \leq c \ll 1$ is small enough and $\kappa>0$, or that $\kappa \gg \Vert q \Vert_{H^{s}}^{1/a}$ for $s>0$ and $a=\min(1/4,s)$. Then, $\alpha$ as defined above converges geometrically. The $H^s$-part of the above lemma follows from the fact that $\mathfrak{I}_p \hookrightarrow \mathfrak{I}_q$ for $p<q$ and the estimate $$\Vert (\kappa - \partial)^{-1/2} q (\kappa + \partial)^{-1/2} \Vert_{\mathfrak{I}_p} \lesssim \kappa^{-1/2+1/p} \Vert q \Vert_{L^p} \lesssim \kappa^{-s} \Vert q \Vert_{H^s}$$ for $s=1/2-1/p$, $2 \leq p < \infty$, which is shown in Section \[section:ConservationSobolevNorms\]. Next, we show that $\alpha$ is conserved by term-by-term differentiation. \[prop:ConservationPerturbationDeterminant\] Let $q \in C^\infty({\mathbb{R}};\mathcal{S})$ be a smooth global solution to dNLS with\ $\Vert q(0) \Vert_2 \leq c \ll 1$.
Then, $$\frac{d}{dt} \alpha(\kappa;q) = 0.$$ \[rem:HsConvergence\] In Section \[section:ConservationSobolevNorms\] we see that $\alpha(\kappa;q)$ converges *without smallness assumption* on the $L^2$-norm, provided that $\kappa$ is sufficiently large. However, we are not able to show bounds for the $B_{r,2}^s$-norm without the $L^2$-smallness assumption. In the following we omit taking the real part in the definition of $\alpha$ and will thus show that both the real and the imaginary part are conserved. Consider $$\sum_{l=1}^\infty \frac{(-i)^{l+1} \kappa^l }{l} \text{tr} \big( \big( \underbrace{( \partial - \kappa)^{-1}}_{R_-} q \underbrace{(\partial + \kappa)^{-1}}_{R_+} \bar{q} \big)^l \big) = \sum_{l \geq 1} \alpha_l.$$ Similarly to the considerations in [@KillipVisanZhang2018 Section 4], we note: $$\label{eq:q^2qOperatorIdentity} \begin{split} (|q|^2 q)_x &= (\partial- \kappa)(|q|^2 q) -(|q|^2 q) (\partial + \kappa) + 2 \kappa (|q|^2 q) \\ (|q|^2 \bar{q})_x &= (\partial + \kappa)(|q|^2 \bar{q}) - (|q|^2 \bar{q})(\partial - \kappa) - 2 \kappa (|q|^2 \bar{q}), \end{split}$$ and furthermore, $$\label{eq:qxxOperatorIdentity} \begin{split} q_{xx} &= q (\partial^2 - 2 \kappa \partial - \kappa^2) + (\partial^2 + 2\kappa \partial - \kappa^2) q + 2(\kappa - \partial) q (\kappa+\partial), \\ \bar{q}_{xx} &= (\partial^2 - 2 \kappa \partial - \kappa^2) \bar{q} + \bar{q} (\partial^2 + 2 \kappa \partial - \kappa^2) + 2(\kappa + \partial)\bar{q} (\kappa - \partial). \end{split}$$ Differentiating term-by-term, we find two terms $\frac{d}{dt} \alpha_l = A_l + B_l$, which are given by $$\begin{aligned} A_l &= - (-i)^{l+1} \kappa^l \text{tr} ((R_- q R_+ \bar{q})^{l-1} [R_- (|q|^2 q)_x R_+ \bar{q} + R_- q R_+ (|q|^2 \bar{q})_x]) \\ B_l &= (-i)^{l+1} \kappa^l \text{tr} ((R_- q R_+ \bar{q})^{l-1} [R_- i q_{xx} R_+ \bar{q} - i R_- q R_+ \bar{q}_{xx} ]).\end{aligned}$$ We show that $A_l + B_{l+1} = 0$. Since $B_1 \equiv 0$ (cf. [@KillipVisanZhang2018]), this yields the claim.
We plug the identities for $(|q|^2 q)_x$ and $(|q|^2 \bar{q})_x$ into $A_l$ to find $$\label{eq:Al} \begin{split} A_l = - (-i)^{l+1} \kappa^l \text{tr} ((R_- q R_+ \bar{q})^{l-1} &[|q|^2 q R_+ \bar{q} - R_- |q|^4 + 2 \kappa R_- |q|^2 q R_+ \bar{q} ] \\ + &[ R_- |q|^4 - R_- q R_+ |q|^2 \bar{q} R_-^{-1} - 2 \kappa R_- q R_+ |q|^2 \bar{q} ]). \end{split}$$ The first term from the first line is cancelled by the second term from the second line by cycling the trace, and the second term in the first line is cancelled by the first term in the second line. Only the third terms remain.\ Plugging the identities for $q_{xx}$ and $\bar{q}_{xx}$ into $B_{l+1}$, we note that the first and second terms from each identity cancel each other because constant coefficient differential operators commute, and it remains $$\label{eq:Bl+1} B_{l+1} = (-i)^{l+1} 2 \kappa \kappa^l \text{tr} ((R_- q R_+ \bar{q})^l [R_- (\kappa - \partial) q (\kappa + \partial) R_+ \bar{q} - R_- q R_+ (\kappa + \partial) \bar{q} (\kappa - \partial)]).$$ The first term in $B_{l+1}$ is cancelled by the third term in the second line of $A_l$, and the second term in $B_{l+1}$ is cancelled by the third term in the first line of $A_l$. This finishes the proof. Conservation of Besov norms with positive regularity index {#section:ConservationSobolevNorms} ========================================================== In the following, we want to construct Besov norms from the leading term of $\alpha(q;\kappa)$. Set $$w(\xi,\kappa) = \frac{\kappa^2}{\xi^2 + 4 \kappa^2} - \frac{(\kappa/2)^2}{\xi^2 + \kappa^2} = \frac{3 \kappa^2 \xi^2}{4(\xi^2+ \kappa^2)(\xi^2+ 4 \kappa^2)}$$ and $$\label{eq:BesovSubstitute} \Vert f \Vert_{Z_r^s} = \big( \sum_{N \in 2^\mathbb{N}} N^{rs} \langle f, w(-i \partial_x, N) f \rangle^{r/2} \big)^{1/r}.$$ The $Z_r^s$–norm consists of homogeneous components, which can be linked to the perturbation determinant.
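The algebra behind $w(\xi,\kappa)$ is elementary: over a common denominator, $\kappa^2(\xi^2+\kappa^2) - \tfrac{\kappa^2}{4}(\xi^2+4\kappa^2) = \tfrac{3}{4}\kappa^2\xi^2$. As a sketch, the identity can also be spot-checked with exact rational arithmetic; since both sides are rational functions of $(\xi,\kappa)$, exact agreement on a grid of integer points is a strong consistency check:

```python
from fractions import Fraction

def w_lhs(xi, kappa):
    """kappa^2/(xi^2 + 4 kappa^2) - (kappa/2)^2/(xi^2 + kappa^2), exactly."""
    return (Fraction(kappa ** 2, xi ** 2 + 4 * kappa ** 2)
            - Fraction(kappa ** 2, 4 * (xi ** 2 + kappa ** 2)))

def w_rhs(xi, kappa):
    """3 kappa^2 xi^2 / (4 (xi^2 + kappa^2)(xi^2 + 4 kappa^2)), exactly."""
    return Fraction(3 * kappa ** 2 * xi ** 2,
                    4 * (xi ** 2 + kappa ** 2) * (xi ** 2 + 4 * kappa ** 2))
```

Note that $w(\xi,\kappa) \geq 0$ vanishes only at $\xi = 0$ and localizes near $|\xi| \sim \kappa$, which is what makes the dyadic sum over $N$ comparable to a Besov norm.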
We will use the estimates from [@KillipVisanZhang2018]: $$\begin{aligned} \Vert f \Vert_{B^s_{r,2}} &\lesssim_s \Vert f \Vert_{H^{-1}} + \Vert f \Vert_{Z^s_{r}}, \label{eq:BesovZsEmbedding}\\ \Vert f \Vert_{Z^s_{r}} &\lesssim \Vert f \Vert_{B^s_{r,2}}. \label{eq:ZsBesovEmbedding}\end{aligned}$$ Consequently, it suffices to control the $Z_r^s$-norm to infer bounds on the Besov norms. In the $Z^s_r$-quantities introduced in [@KillipVisanZhang2018], there is an additional parameter $\kappa_0$. One might hope that this flexibility helps to obtain a result for arbitrary initial data. However, $\kappa_0$ enters with a positive exponent into the estimates. Indeed, this reflects the relation of $\kappa_0$ with rescaling and the $L^2$-criticality of dNLS. To keep things simple, we choose $\kappa_0 = 1$. The line case ------------- To analyze the growth of the $Z_r^s$-norm, we link the multiplier from above with the first term of $\alpha$. We recall the following identity on the real line: For $\kappa > 0$ and $q \in \mathcal{S}$, we find $$\mathrm{Re} \big( \kappa \, \mathrm{tr} \, ( (\kappa - \partial)^{-1} q (\kappa + \partial)^{-1} \bar{q} ) \big) = \int \frac{2 \kappa^2 |\hat{q}(\xi)|^2}{\xi^2+ 4 \kappa^2} \, d\xi.$$ This yields $$\begin{split} \langle f, w(-i \partial_x,N) f \rangle &= \int \frac{N^2}{\xi^2 + 4N^2} |\hat{f}(\xi)|^2 d\xi - \int \frac{(N/2)^2}{\xi^2 + N^2} |\hat{f}(\xi)|^2 d\xi \\ &= \frac{1}{2} [ \alpha_1(N, f) - \alpha_1(N/2,f)]. \end{split}$$ Set $A = (\kappa - \partial)^{-1/2} q (\kappa + \partial)^{-1/2} \kappa^{1/2}$. Firstly, we argue that we can gain powers of $\kappa$ by estimating $q$ in $H^s$-norms. Note that $$\begin{aligned} \Vert A \Vert_{\mathfrak{I}_2} \lesssim \Vert q \Vert_2, \qquad \quad \Vert A \Vert_{\mathfrak{I}_\infty} \lesssim \kappa^{-1/2} \Vert q \Vert_{\infty}.\end{aligned}$$ The first estimate follows from the Hilbert–Schmidt formula above, and the second from viewing $q$ as a multiplication operator on $L^2$. Interpolating the estimates (cf.
[@BenyaminiLindenstrauss2000 Proposition I.1]) and using Sobolev embedding, we find $$\Vert A \Vert_{\mathfrak{I}_p} \lesssim \kappa^{-1/2+1/p} \Vert q \Vert_p \lesssim \kappa^{-s} \Vert q \Vert_{H^s} \text{ for } s = 1/2-1/p, \quad 2 \leq p < \infty.$$ Let $0<s'<1/4$ in the following and set $s'=\frac{1}{2} - \frac{1}{p_{s'}}$. By Hölder’s inequality and embeddings for Schatten spaces, we find $$|\text{tr} (A^4) | \leq \Vert A \Vert^4_{\mathfrak{I}_4} \lesssim \Vert A \Vert^4_{\mathfrak{I}_{p_{s^\prime}}} \lesssim \kappa^{-4s^\prime} \Vert q \Vert_{H^{s^\prime}}^4.$$ Similarly for the higher order terms $l \geq 3$, we find $$|\text{tr} (A^{2l}) | \leq \Vert A \Vert^{2l}_{\mathfrak{I}_{2l}} \lesssim \Vert A \Vert_{\mathfrak{I}_{p_{s'}}}^{2l} \lesssim \kappa^{-2ls'} \Vert q \Vert_{H^{s'}}^{2l}.$$ Note that this implies that the series converges for $q \in H^s$, $s>0$ by choosing $\kappa \gg \Vert q \Vert_{H^{s}}^{1/a}$ for $a=\min(1/4,s)$ as claimed in Remark \[rem:HsConvergence\]. Furthermore, we can estimate $|\alpha - \alpha_1|$ favorably: $$\big| \sum_{l \geq 2} \alpha_l(\kappa,q(t)) \big| \lesssim \kappa^{-4s^\prime} \Vert q(t) \Vert_{H^{s^\prime}}^4.$$ This gives by the embedding $B_r^{s,2} \hookrightarrow H^{s^\prime}$ for $s > s^\prime$ and $r \in [1,\infty]$ $$\begin{aligned} \langle q(t), w(-i \partial_x, N) q(t) \rangle &\lesssim \langle q(0), w(-i\partial_x, N) q(0) \rangle + N^{-4s^\prime} [ \Vert q(t) \Vert_{H^{s^\prime}}^4 + \Vert q(0) \Vert_{H^{s^\prime}}^4 ] \\ &\lesssim \langle q(0), w(-i\partial_x, N) q(0) \rangle + N^{-4s^\prime} [ \Vert q(t) \Vert_{B_r^{s,2}}^4 + \Vert q(0) \Vert_{B_r^{s,2}}^4 ].\end{aligned}$$ Raising the estimate to the power $r/2$, multiplying with $N^{rs}$, and carrying out the dyadic sums over $N \in 2^{\mathbb{N}_0}$, we find $$\begin{aligned} \Vert q(t) \Vert^r_{Z^s_r} &\lesssim \Vert q(0) \Vert^r_{Z^s_r} + [ \Vert q(t) \Vert^{2r}_{B_r^{s,2}} + \Vert q(0) \Vert^{2r}_{B_r^{s,2}} ] \end{aligned}$$ provided that we choose 
$s'<s<2s'$. This can be satisfied for $0<s<1/2$.\ By the embedding estimates between the Besov and $Z^s_r$-norms and $L^2$-conservation, we arrive at $$\label{eq:PropagationZsNorm} \Vert q(t) \Vert_{Z^s_r} \leq C_{r,s} ( \Vert q(0) \Vert_{Z^s_r} + \Vert q(0) \Vert^2_{L^2} + [\Vert q(t) \Vert^2_{Z_r^{s}} + \Vert q(0) \Vert^2_{Z_r^{s}} ])$$ with $C_{r,s} \geq 1$. The above display can be bootstrapped. Suppose that $$\max( \Vert q(0) \Vert_{Z^s_r}, \Vert q(0) \Vert_{L^2}) \leq \varepsilon \ll 1,$$ where $\varepsilon$ is chosen below as $\varepsilon = \varepsilon(C_{r,s})$. We prove that $$\label{eq:BootstrapAssumption} \sup_{t \in {\mathbb{R}}} \Vert q(t) \Vert_{Z^s_r} \leq 2 C_{r,s} \varepsilon.$$ For this purpose, let $I$ denote the maximal interval containing the origin such that the bootstrap bound holds for any $t \in I$. $I$ is non-empty and closed due to continuity of $\Vert q(t) \Vert_{Z^s_r}$. Furthermore, $I$ is open: for $t \in I$, the propagation estimate together with the bootstrap bound yields $$\Vert q(t) \Vert_{Z^s_r} \leq C_{r,s} ( \varepsilon + 2\varepsilon^2 + 4 C_{r,s}^2 \varepsilon^2) \leq (3/2) C_{r,s} \varepsilon$$ by choosing $\varepsilon = (12 C_{r,s}^2)^{-1}$. We conclude that $I = {\mathbb{R}}$. This finishes the proof for small initial data. The assumption $\Vert q(0) \Vert_{Z^s_r} \leq \varepsilon$ can be salvaged for arbitrary initial data by rescaling, leaving us with the assumption $\Vert q(0) \Vert_2 \leq \varepsilon$. The proof of Theorem \[thm:APrioriEstimates\] is complete in the line case. The circle case --------------- We shall rescale the circle, too, to accomplish smallness of the homogeneous norms. In [@KillipVisanZhang2018], this was not necessary due to more freedom in the parameter $\kappa$.
We use the conventions from [@OhWang2020].\ Given $\lambda \geq 1$, let $\mathbb{T}_\lambda={\mathbb{R}}/(2 \pi \lambda {\mathbb{Z}})$ and set $$\hat{f}(\xi) = \frac{1}{\sqrt{2 \pi}} \int_0^{2 \pi \lambda} f(x) e^{-i x \xi} dx \text{ and } f(x) = \frac{1}{\sqrt{2 \pi}\lambda} \sum_{\xi \in {\mathbb{Z}}_\lambda} \hat{f}(\xi) e^{i x \xi}$$ for $f \in L^1({\mathbb{T}}_\lambda,{\mathbb{C}})$, where $\xi \in {\mathbb{Z}}_\lambda = \lambda^{-1} {\mathbb{Z}}$. The guideline for the conventions is that Plancherel’s theorem remains true: $$\Vert f \Vert_{L^2({\mathbb{T}}_\lambda)} = \Vert \hat{f} \Vert_{L^2({\mathbb{Z}}_\lambda,(d\xi)_\lambda)},$$ where $(d\xi)_\lambda$ denotes the normalized counting measure on ${\mathbb{Z}}_\lambda$: $$\int_{{\mathbb{Z}}_\lambda} f(\xi) (d\xi)_\lambda = \frac{1}{\lambda} \sum_{\xi \in {\mathbb{Z}}_\lambda} f(\xi).$$ For further basic Fourier analysis identities on ${\mathbb{T}}_\lambda$, we refer to [@CollianderKeelStaffilaniTakaokaTao2002 Section 2]. In [@OhWang2020] the following identities were computed: \[lem:ErrorEstimatesCircle\] Let $\lambda \geq 1$ and $\kappa \geq 1$. Then, we have $$\begin{aligned} \Vert (\kappa - \partial)^{-1/2} u (\kappa + \partial)^{-1/2} \Vert^2_{\mathfrak{I}_2({\mathbb{T}}_\lambda)} &\sim \int_{{\mathbb{Z}}_\lambda} \log\big(4 + \frac{\xi^2}{\kappa^2} \big) \frac{|\hat{u}(\xi)|^2}{\sqrt{4 \kappa^2 + \xi^2}} (d\xi)_\lambda, \\ \Vert (\kappa + \partial)^{-1/2} \bar{u} (\kappa - \partial)^{-1/2} \Vert^2_{\mathfrak{I}_2({\mathbb{T}}_\lambda)} &\sim \int_{{\mathbb{Z}}_\lambda} \log\big(4 + \frac{\xi^2}{\kappa^2} \big) \frac{|\hat{u}(\xi)|^2}{\sqrt{4 \kappa^2 + \xi^2}} (d\xi)_\lambda\end{aligned}$$ for any smooth function $u$ on ${\mathbb{T}}_\lambda$. For the leading term of the series we find: \[lem:LeadingTermCircle\] Let $\kappa \geq 1$ and $\lambda \geq 1$.
Then, we have $$\mathrm{Re} \text{tr} \big( \kappa (\kappa - \partial)^{-1} u (\kappa + \partial)^{-1} \bar{u} \big) = \frac{1 + e^{-2 \pi \lambda \kappa}}{1-e^{-2 \pi \lambda \kappa}} \int_{{\mathbb{Z}}_\lambda} \frac{2 \kappa^2 |\hat{u}(\xi)|^2}{4 \kappa^2 + \xi^2} (d\xi)_\lambda$$ for any smooth function $u$ on ${\mathbb{T}}_\lambda$. Set $C(\lambda,N) = (1+e^{-2 \pi \lambda N})/(1-e^{-2 \pi \lambda N})$. Clearly, $C(\lambda,N) \sim 1$ for $\lambda N \geq 1$. With $w$ defined as above, we find $$\langle f , w(-i \partial_x , N) f \rangle = \frac{1}{2} \big[ \frac{\alpha_1(N,f)}{C(\lambda,N)} - \frac{\alpha_1(N/2,f)}{C(\lambda,N/2)} \big].$$ Suppose that $q\in C^\infty({\mathbb{R}}\times {\mathbb{T}})$ is a solution to the equation. Let $q_\lambda$ denote the rescaled solution: $$q_\lambda: {\mathbb{R}}\times {\mathbb{T}}_\lambda \to {\mathbb{C}}, \quad q_\lambda(t,x) = \lambda^{-1/2} q(\lambda^{-2} t, \lambda^{-1} x).$$ With the conventions introduced above, the identities from Lemmas \[lem:ErrorEstimatesCircle\] and \[lem:LeadingTermCircle\] allow for the same error estimates as in the real line case, uniformly for $\lambda \geq 1$. We arrive at $$\Vert q_\lambda(t) \Vert_{Z^s_r} \lesssim_{r,s} \Vert q_\lambda(0) \Vert_{Z^s_r} + [ \Vert q_\lambda(t) \Vert^{2}_{B_r^{s,2}} + \Vert q_\lambda(0) \Vert^{2}_{B_r^{s,2}} ].$$ By $L^2$-conservation and estimating the $B_r^{s,2}$-norm in terms of the $Z^s_r$-norm, we find: $$\Vert q_\lambda(t) \Vert_{Z^s_r} \lesssim_{r,s} \Vert q_\lambda(0) \Vert_{Z^s_r} + \Vert q_\lambda(0) \Vert_{L^2_\lambda}^2 + [ \Vert q_\lambda(t) \Vert^{2}_{Z^s_r} + \Vert q_\lambda(0) \Vert^{2}_{Z^s_r} ].$$ Smallness of the $Z^s_r$-norm can be accomplished by taking $\lambda \to \infty$. The $L^2_\lambda$-norm is scaling critical: $$\Vert q_\lambda(0) \Vert_{L^2_\lambda} = \Vert q(0) \Vert_{L^2}.$$ The continuity argument given in the line case then proves global a priori estimates in the circle case for small $L^2$-norm of the initial data.
The proof of Theorem \[thm:APrioriEstimates\] is complete. Acknowledgements {#acknowledgements .unnumbered} ================ Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 258734477 - SFB 1173. [10]{} Yoav Benyamini and Joram Lindenstrauss. , volume 48 of [*American Mathematical Society Colloquium Publications*]{}. American Mathematical Society, Providence, RI, 2000. Bjoern [Bringmann]{}, Rowan [Killip]{}, and Monica [Visan]{}. . , page arXiv:1912.01536, December 2019. J. Colliander, M. Keel, G. Staffilani, H. Takaoka, and T. Tao. A refined global well-posedness result for [S]{}chrödinger equations with derivative. , 34(1):64–86, 2002. Yu [Deng]{}, Andrea R. [Nahmod]{}, and Haitian [Yue]{}. . , page arXiv:1905.04352, May 2019. Axel Grünrock. Bi- and trilinear [S]{}chrödinger estimates in one space dimension with applications to cubic [NLS]{} and [DNLS]{}. , (41):2525–2558, 2005. Axel Grünrock and Sebastian Herr. Low regularity local well-posedness of the derivative nonlinear [S]{}chrödinger equation with periodic initial data. , 39(6):1890–1920, 2008. Zihua Guo. Local well-posedness and a priori bounds for the modified [B]{}enjamin-[O]{}no equation. , 16(11-12):1087–1137, 2011. Zihua Guo and Yifei Wu. Global well-posedness for the derivative nonlinear [S]{}chrödinger equation in [$H^{\frac{1}{2}}(\Bbb{R})$]{}. , 37(1):257–264, 2017. Benjamin [Harrop-Griffiths]{}, Rowan [Killip]{}, and Monica [Visan]{}. . , page arXiv:2003.05011, March 2020. Sebastian Herr. On the [C]{}auchy problem for the derivative nonlinear [S]{}chrödinger equation with periodic boundary condition. , pages Art. ID 96763, 33, 2006. Robert Jenkins, Jiaqi Liu, Peter Perry, and Catherine Sulem. Soliton resolution for the derivative nonlinear [S]{}chrödinger equation. , 363(3):1003–1049, 2018. Robert Jenkins, Jiaqi Liu, Peter Perry, and Catherine Sulem. The derivative nonlinear [S]{}chrödinger equation: global well-posedness and soliton resolution.
, 78(1):33–73, 2020. Robert Jenkins, Jiaqi Liu, Peter A. Perry, and Catherine Sulem. Global well-posedness for the derivative non-linear [S]{}chrödinger equation. , 43(8):1151–1195, 2018. David J. Kaup and Alan C. Newell. An exact solution for a derivative nonlinear [S]{}chrödinger equation. , 19(4):798–801, 1978. Rowan Killip and Monica Vişan. Kd[V]{} is well-posed in [$H^{-1}$]{}. , 190(1):249–305, 2019. Rowan Killip, Monica Vişan, and Xiaoyi Zhang. Low regularity conservation laws for integrable [PDE]{}. , 28(4):1062–1090, 2018. Herbert Koch and Daniel Tataru. Conserved energies for the cubic nonlinear [S]{}chrödinger equation in one dimension. , 167(17):3207–3313, 2018. Jyh-Hao Lee. . ProQuest LLC, Ann Arbor, MI, 1983. Thesis (Ph.D.)–Yale University. Jyh-Hao Lee. Global solvability of the derivative nonlinear [S]{}chrödinger equation. , 314(1):107–118, 1989. Jiaqi Liu. . ProQuest LLC, Ann Arbor, MI, 2017. Thesis (Ph.D.)–University of Kentucky. Changxing Miao, Yifei Wu, and Guixiang Xu. Global well-posedness for [S]{}chrödinger equation with derivative in [$H^{\frac12}(\Bbb R)$]{}. , 251(8):2164–2195, 2011. Koji Mio, Tatsuki Ogino, Kazuo Minami, and Susumu Takeda. Modified nonlinear [S]{}chrödinger equation for [A]{}lfvén waves propagating along the magnetic field in cold plasmas. , 41(1):265–271, 1976. E. Mjolhus. On the modulational instability of hydromagnetic waves parallel to the magnetic field. , (16):321–334, 1976. Razvan Mosincat. Global well-posedness of the derivative nonlinear [S]{}chrödinger equation with periodic boundary condition in [$H^{\frac {1}{2}}$]{}. , 263(8):4658–4722, 2017. Razvan Mosincat and Tadahiro Oh. A remark on global well-posedness of the derivative nonlinear [S]{}chrödinger equation on the circle. , 353(9):837–841, 2015. Andrea R. Nahmod, Tadahiro Oh, Luc Rey-Bellet, and Gigliola Staffilani. Invariant weighted [W]{}iener measures and almost sure global well-posedness for the periodic derivative [NLS]{}. , 14(4):1275–1330, 2012. 
Tadahiro Oh and Yuzhao Wang. Global well-posedness of the one-dimensional cubic nonlinear [S]{}chrödinger equation in almost critical spaces. , 269(1):612–640, 2020. Dmitry E. Pelinovsky, Aaron Saalmann, and Yusuke Shimabukuro. The derivative [NLS]{} equation: global existence with solitons. , 14(3):271–294, 2017. Dmitry E. Pelinovsky and Yusuke Shimabukuro. Existence of global solutions to the derivative [NLS]{} equation with the inverse scattering transform method. , (18):5663–5728, 2018. A. Rogister. Parallel propagation of nonlinear low-frequency waves in high-[$\beta$]{} plasma. , (14), 1971. Aaron [Saalmann]{}. . , page arXiv:1704.00071, March 2017. Robert [Schippa]{}. . , page arXiv:1704.07174, April 2017. Hideo Takaoka. Well-posedness for the one-dimensional nonlinear [S]{}chrödinger equation with the derivative nonlinearity. , 4(4):561–580, 1999. Blaine [Talbut]{}. . , page arXiv:1812.00505, December 2018. Yifei Wu. Global well-posedness for the nonlinear [S]{}chrödinger equation with derivative in energy space. , 6(8):1989–2002, 2013. Yifei Wu. Global well-posedness on the derivative nonlinear [S]{}chrödinger equation. , 8(5):1101–1112, 2015.
--- abstract: 'A Density Matrix Functional theory is constructed semi-empirically for the two-level Lipkin model. This theory, based on natural orbitals and occupation numbers, is shown to provide a good description of the ground state energy of the system as the two-body interaction and particle number vary. The application of Density Matrix Functional theory to the Lipkin model illustrates that it could be a valuable tool for systems presenting a shape phase-transition, such as nuclei. The improvement in the description of one-body observables, as well as the interest for Energy Density Functional theory, is discussed.' author: - 'Denis Lacroix.' bibliography: - 'paper\_DMFTlipkin.bib' title: Density Matrix Functional Theory for the Lipkin model --- Introduction ============ Recently, large efforts have been devoted to the construction of an Energy Density Functional (EDF) able to describe at best the properties of nuclei over the whole nuclear chart [@Ben03; @Sto07]. The standard strategy to design an EDF for nuclei is to start with a single-reference EDF (SR-EDF) where an effective interaction (Skyrme or Gogny type) and a trial state (Slater Determinant or, more generally, quasi-particle vacuum) are chosen. This technique is able to describe short-range correlations like pairing and already provides a rather good description of observables such as masses, under the condition that some symmetries of the original Hamiltonian are broken. The SR-EDF is then extended to restore broken symmetries and/or incorporate long-range correlations through configuration mixing, leading to the so-called Multi-Reference EDF (MR-EDF) [@Rin80]. Recent applications of this technique have revealed important conceptual and practical difficulties [@Dob07] related to the absence of a constructive framework for multi-reference calculations. A solution to this problem has recently been proposed [@Lac08] and successfully tested in nuclei [@Ben08].
However, this cure does not apply to most of the functionals currently used [@Dug08], i.e. those with fractional powers of the density. This motivates the search for new techniques to extend actual SR-EDF. Density Matrix Functional Theory (DMFT) [@Gil75] appears as an alternative to configuration mixing [@Umr00]. Although this theory was proposed more than 30 years ago [@Gil75], explicit forms of functionals and applications have only been explored rather recently. There is nowadays an increasing interest in proposing accurate density matrix functionals [@Kol06]. In this work, DMFT is applied to the two-level Lipkin model [@Lip65]. In this model, the Hartree-Fock (HF) theory fails to reproduce the ground state energy [@Aga66], while configuration mixing such as the Generator Coordinate Method (GCM) provides a suitable tool [@Rin80]. Therefore, the two-level Lipkin model is perfectly suited both to illustrate that DMFT could be a valuable tool and to provide an example of a functional for systems with a “shape” phase-transition. In the following, some aspects of DMFT are first recalled. Then a semi-empirical functional is constructed for the Lipkin model and applied for various particle numbers and two-body interaction strengths. It is shown to improve significantly on the HF theory. Finally, the interest of constructing more general functionals of natural orbitals and occupation numbers in the EDF context is outlined. Discussion on DMFT ================== The concept of Density Matrix Functional Theory is a generalization of the Hohenberg-Kohn theorem [@Hoh64] due to Gilbert [@Gil75]. It relies on a theorem showing that the ground state energy can be written as a functional of the one-body density matrix (OBDM) $\gamma(\mathbf{r},\mathbf{r'})$ (instead of the local one-particle density $\rho(\mathbf{r}) \equiv \gamma(\mathbf{r},\mathbf{r})$ in the standard Hohenberg-Kohn theorem).
Then, similarly to the Kohn-Sham orbitals [@Koh65], the eigenvalues $n_i$ and eigenvectors $\varphi_i$ of the OBDM, called hereafter occupation numbers and natural orbitals respectively, are often used instead of $\gamma(\mathbf{r},\mathbf{r'})$, with the relation $\gamma = \sum_i | \varphi_i \rangle n_i \langle \varphi_i |$. The variation of the functional $$\begin{aligned} {\cal F}[\{\varphi_i \}, \{n_i \}] &=& {\cal E} [\{\varphi_i \}, \{n_i \} ] \nonumber \\ &-&\mu \{ Tr(\rho) -N \} -\sum_{ij} \lambda_{ij} (\langle \varphi_i | \varphi_j \rangle - \delta_{ij}) , \label{eq:dmft}\end{aligned}$$ with respect to the single-particle state components $\varphi_i^*(\mathbf{r})$ and occupation numbers (with the additional constraint $0 < n_i < 1$) is then performed to obtain the optimal $\varphi_i$, $n_i$ and the associated ground state energy. The set of Lagrange multipliers $\mu$ and $\{ \lambda_{ij} \}$ is introduced to ensure particle number conservation and orthogonality of the single-particle states. In Eq. (\[eq:dmft\]), ${\cal E} [\{\varphi_i \}, \{n_i \} ]$ is nothing but the functional itself, which has to be found. In electronic systems, the functional is generally separated into the Hartree part, denoted by ${\cal E}_{H}$ (eventually Hartree-Fock, ${\cal E}_{HF}$), and the exchange-correlation part, denoted here by ${\cal E}_{XC}$ (eventually correlation only, ${\cal E}_C$). While DMFT has been studied theoretically for a rather long time [@Gil75; @Val80a; @Val80b; @Zum85; @Mul84], explicit functionals of the OBDM, or directly of the natural orbitals, have only recently been proposed and applied to realistic situations [@Goe98; @Csa00; @Csa02; @Yas02; @Kol04; @Cio03; @Per04; @Gri05; @Lat05; @Cio05; @Lei05; @Kol06; @Mar08; @Lat08]. There is nowadays extensive work on testing functionals, especially in infinite systems, i.e. the so-called Homogeneous Electronic Gas (HEG) [@Cio99; @Lat07].
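In practice, extracting the natural orbitals and occupation numbers amounts to a plain Hermitian eigendecomposition of the OBDM. The following minimal Python sketch illustrates this; the $2\times 2$ matrix is made up for demonstration purposes and does not come from any particular model:

```python
import numpy as np

# A made-up one-body density matrix for one particle spread over two levels:
# Hermitian, with trace equal to the particle number (1 here).
gamma = np.array([[0.85, 0.20],
                  [0.20, 0.15]])

# Occupation numbers n_i and natural orbitals phi_i (columns of `phi`):
n, phi = np.linalg.eigh(gamma)

# The spectral decomposition gamma = sum_i |phi_i> n_i <phi_i| is recovered as:
gamma_rebuilt = phi @ np.diag(n) @ phi.T
```

For a physical OBDM the eigenvalues automatically satisfy $0 \leq n_i \leq 1$ and sum to the particle number, which is precisely the constraint enforced through the multiplier $\mu$ in Eq. (\[eq:dmft\]).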
Application of the DMFT to the Lipkin model =========================================== The “Lipkin Model” [@Lip65] is an exactly solvable model that has often been used as a benchmark for approximations to the nuclear many-body problem [@Rin80]. In this model, one considers $N$ particles distributed in two $N$-fold degenerate shells separated by an energy $\varepsilon$. The associated Hamiltonian is given by: $$\begin{aligned} H = \varepsilon J_0 - \frac{V}{2} (J_+ J_+ + J_- J_-) , \label{eq:hamillipkin}\end{aligned}$$ where $V$ denotes the interaction strength while $J_0$, $J_\pm$ are the quasi-spin operators defined as $$\begin{aligned} J_0 &=& \frac{1}{2} \sum_{p=1}^{N} \left(c^\dagger_{+,p}c_{+,p} - c^\dagger_{-,p}c_{-,p}\right) , \nonumber \\ J_+ &=& \sum_{p=1}^{N} c^\dagger_{+,p}c_{-,p},~~~ J_- = J_+^\dagger ,\nonumber\end{aligned}$$ $c^\dagger_{+,p}$ and $c^\dagger_{-,p}$ are creation operators associated with the upper and lower levels. The exact solution of this model is easily obtained by noting that $J^2$ (but not $J_0$) commutes with $H$. It is then convenient to introduce the basis of eigenstates of $J^2$ and $J_0$ and diagonalize the Hamiltonian in this particular space (for more details see for instance [@Sev06]). Hartree-Fock approximation -------------------------- In the Hartree-Fock (or Mean-Field) theory, the many-body wave function is replaced by a Slater Determinant (SD) given by $| \Phi \rangle = \Pi_{p=1}^N a^\dagger_{0,p} | - \rangle$.
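The quasi-spin diagonalization just described is small enough to sketch in a few lines. The Python snippet below (function names are ours, for illustration) builds $H$ in the block with maximal quasi-spin $j = N/2$, which contains the ground state:

```python
import numpy as np

def lipkin_hamiltonian(N, eps, V):
    """H = eps*J0 - (V/2)*(J+^2 + J-^2) restricted to the quasi-spin
    block j = N/2, which contains the ground state."""
    j = N / 2.0
    m = np.arange(-j, j + 1.0)                # J0 eigenvalues, dimension N + 1
    dim = m.size
    h = np.diag(eps * m)
    # <j, m+1| J+ |j, m> = sqrt(j(j+1) - m(m+1))
    cp = np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1))
    jp = np.zeros((dim, dim))
    jp[np.arange(1, dim), np.arange(dim - 1)] = cp
    jp2 = jp @ jp
    return h - 0.5 * V * (jp2 + jp2.T)        # J-^2 = (J+^2)^T for real matrices

def exact_ground_state_energy(N, eps, V):
    return np.linalg.eigvalsh(lipkin_hamiltonian(N, eps, V)).min()
```

With $\chi = V(N-1)/\varepsilon$ as defined later in the text, `exact_ground_state_energy(2, 1.0, chi)` reproduces the closed-form $N=2$ result $-\sqrt{1+\chi^2}$ quoted below.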
Here, a new single-particle basis, denoted by $\{\varphi_{0,p},\varphi_{1,p} \}$ and associated with the set of creation/annihilation operators $\{a^\dagger_{0,p},a^\dagger_{1,p} \}$, has been introduced through the relation $$\begin{aligned} \left( \begin{array} {c} a^\dagger_{1,p} \\ a^\dagger_{0,p} \end{array} \right) &=& \left( \begin{array} {cc} f^* & -g^* \\ g & f \end{array} \right) \left( \begin{array} {c} c^\dagger_{+,p} \\ c^\dagger_{-,p} \end{array} \right), \label{eq:matac}\end{aligned}$$ where the choice $$\begin{aligned} f = \cos(\alpha), ~~~g = \sin(\alpha) e^{i\varphi} ,\end{aligned}$$ automatically ensures the orthonormality of the new states. Due to the simple structure of the Lipkin model, the variation with respect to the SD state is identical to the variation with respect to the parameters $\alpha$ and $\varphi$, i.e. $| \Phi \rangle = | \Phi (\alpha , \varphi) \rangle$, and the HF energy becomes a functional of these parameters: $$\begin{aligned} {\cal E}_{MF}(\alpha,\varphi) \equiv \left\langle \Phi(\alpha, \varphi) | H | \Phi (\alpha, \varphi) \right\rangle \end{aligned}$$ Anticipating the forthcoming discussion, we first write ${\cal E}_{MF}$ as a functional of the OBDM $\gamma$: $$\begin{aligned} {\cal E}_{MF}[\gamma] &=& \varepsilon {\rm Tr}(J_0 \gamma) \nonumber \\ &-& \frac{V(N-1)}{2 N} \Big\{ ({\rm Tr}[\gamma J_+ ])^2 + ({\rm Tr}[\gamma J_-])^2 \Big\} . \label{eq:dmft1lipkin}\end{aligned}$$ In the Hartree-Fock limit, the OBDM contains all the information on the many-body state and simply reads $\gamma=\sum_{p=1}^N | \varphi_{0,p} \rangle \langle \varphi_{0,p} |$. Inserting the expression of $| \varphi_{0,p} \rangle$ in terms of $\alpha$ and $\varphi$, we recover the standard HF expression: $$\begin{aligned} {\cal E}_{MF}[\alpha , \varphi] &=& -\frac{\varepsilon N}{2} \left\{ \cos(2\alpha) + \frac{\chi}{2} \sin^2(2\alpha) \cos(2\varphi) \right\}. \label{eq:hflipkin}\end{aligned}$$ where $\chi = V(N-1) / \varepsilon$.
Minimizing with respect to $(\alpha,\varphi)$ leads to the HF energy, denoted by ${\cal E}^0_{HF}$ (in both cases with $\varphi=0$): $$\begin{aligned} {\cal E}^0_{HF} &=& - \frac{\varepsilon N}{2} ~~{\rm for} ~~\chi \leq 1 ~~~( {\rm at}~ \alpha =0) , \nonumber \\ {\cal E}^0_{HF} &=& - \frac{\varepsilon N} {4\chi} \left( 1+ \chi^2 \right) \nonumber ~{\rm for} ~\chi > 1 ~( {\rm at}~ \chi\cos(2\alpha) = 1) . \end{aligned}$$ The HF solution for the Lipkin model has been extensively discussed in the literature [@Aga66; @Rin80]. While it provides a rather good estimate of the exact energy in the weak-coupling or large-$N$ limit, it generally differs rather significantly from it, in particular for $\chi \sim 1$. This discrepancy essentially reflects the failure of the HF method to account for configuration mixing in a single-reference framework for systems with a shape phase-transition [@Rin80]. Expression of the Hamiltonian for a general correlated state ------------------------------------------------------------ Due to the two-body nature of the Hamiltonian (Eq. (\[eq:hamillipkin\])), the most natural way to extend the mean-field framework to correlated systems is to introduce the two-body density matrix, denoted by $\Gamma_{12}$, and the associated correlation matrix $\sigma_{12}$ (see for instance [@Cio00]). Using the OBDM of the correlated system $\gamma$, $\sigma_{12}$ is defined through the relation: $$\begin{aligned} \Gamma_{12} = \gamma_{1} \gamma_{2} (1-P_{12}) + \sigma_{12} ,\end{aligned}$$ where $P_{12}$ denotes the anti-symmetrization operator while the label “$i$” in $\gamma_i$ refers to the particle on which the density is applied, i.e. $\langle ij |\gamma_{1} \gamma_{2}| kl \rangle = \langle i|\gamma| k \rangle \langle j |\gamma| l \rangle$ [@Lac04].
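The piecewise closed-form minimum quoted above can be cross-checked against a direct scan of Eq. (\[eq:hflipkin\]); a quick numerical sketch (with $\varphi = 0$, $\varepsilon = 1$; function names are illustrative):

```python
import numpy as np

def hf_energy(alpha, N, chi, eps=1.0):
    # Eq. (hflipkin) evaluated at varphi = 0
    return -0.5 * eps * N * (np.cos(2 * alpha)
                             + 0.5 * chi * np.sin(2 * alpha) ** 2)

def hf_minimum(N, chi, eps=1.0):
    # piecewise closed-form minimum quoted above
    if chi <= 1.0:
        return -0.5 * eps * N
    return -0.25 * eps * N * (1.0 + chi ** 2) / chi

# Fine grid scan over alpha for a few coupling strengths (N = 10 here):
alphas = np.linspace(0.0, np.pi / 2, 20001)
grid_minimum = {chi: hf_energy(alphas, 10, chi).min() for chi in (0.5, 1.5, 3.0)}
```

Both branches agree to grid accuracy, with the optimal $\alpha$ moving away from zero at $\chi = 1$, i.e. at the "shape" transition discussed below.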
The expectation value of the energy then splits into a mean-field part and a correlation part as: $$\begin{aligned} {\cal E} = {\cal E}_{MF} [\gamma ] + {\cal E}_{C} [\sigma_{12}], \label{eq:emfec0}\end{aligned}$$ where ${\cal E}_{MF} [\gamma ]$ is given by expression (\[eq:dmft1lipkin\]), except that $\gamma$ now refers to the OBDM of the correlated system. ${\cal E}_{C}$ reads: $$\begin{aligned} {\cal E}_{C} [\sigma_{12}] &=& -\frac{V}{2} {\rm Tr} \left\{ [J_+J_+ + J_- J_-]_{12} \sigma_{12} \right\} \nonumber \\ &=& -V \sum_{p,p'} \Re\left( \left\langle +, p ; + ,p' ~ |\sigma_{12} | - , p ; - , p' \right\rangle \right) ,\end{aligned}$$ where the notation $[.]_{12}$ emphasizes that the trace is performed using the two-body matrix elements of the operators. Eq. (\[eq:emfec0\]) is valid for any correlated system, including the exact ground state. Its minimization with respect to any kind of OBDM and correlations will therefore lead to the ground state energy. Such a direct minimization is complex due to the number of degrees of freedom involved in the variation [@Cio00]. DMFT provides a practical solution to this problem. Indeed, according to this theory [@Gil75], the total energy can be written as a functional of $\gamma$. Since ${\cal E}_{MF}[\gamma]$ is already written as a functional of the OBDM, we are left with finding a functional for ${\cal E}_{C}$. In many cases, both ${\cal E}_{MF}$ and ${\cal E}_{C}$ are directly written as functionals of the natural orbitals $\varphi_i$ and occupation numbers $n_i$, i.e. $$\begin{aligned} {\cal E}[\{\varphi_i , n_i\}] = {\cal E}_{MF} [\{\varphi_i , n_i\}] + {\cal E}_{C} [\{\varphi_i , n_i\}] . \label{eq:emfec}\end{aligned}$$ DMFT has some additional advantages compared to standard DFT. Besides the fact that the exchange contribution can be expressed exactly with the OBDM, the optimal OBDM coincides with the ground state OBDM at the minimum.
Therefore, expectation values of any one-body observable can be computed and should correspond to ground state expectation values. Accordingly, if the mean-field prescription is used in ${\cal E}_{MF}$ for a given $H$, the value of ${\cal E}_{MF}$ at the minimum will truly correspond to the mean-field contribution to the total energy. Consequently, ${\cal E}_{C}$ will truly correspond to the contribution of correlations (this issue is further discussed in section \[sec:obobs\]). Recently, significant efforts have been made in the practical construction of such density matrix functionals [@Goe98; @Csa00; @Csa02; @Yas02; @Kol04; @Cio03; @Per04; @Gri05; @Lat05; @Cio05; @Lei05; @Kol06; @Mar08; @Lat08]. In most cases, guided by general considerations [@Mul84] and/or constraints on the two-body density [@Csa00], $\sigma_{12}$ is first written as a functional of the natural orbitals and occupation numbers. This generally serves as a starting point for more elaborate functionals. Functionals are further enriched by incorporating additional terms, either to correct for the self-interaction problem or to better reproduce specific limits, such as the infinite HEG or configuration-interaction calculations in molecules (for a recent review see [@Klo07]). Construction of a density matrix functional for correlated states {#sec:semi} ---------------------------------------------------------------- Due to the specific form of the Lipkin Hamiltonian given by Eq. (\[eq:hamillipkin\]), $\gamma$ takes a simple form in the natural basis: $$\begin{aligned} \gamma = \sum_{p=1}^N \Big\{| \varphi_{0,p} \rangle n_0 \langle \varphi_{0,p} | + | \varphi_{1,p} \rangle n_1 \langle \varphi_{1,p} | \Big\} , \label{eq:obdm}\end{aligned}$$ with $n_1 = (1-n_0)$. The single-particle states $| \varphi_{ i,p} \rangle$ (with $i=0,1$) now stand for the natural orbitals while the $n_i$ correspond to occupation numbers.
Similarly to the HF theory, the creation operators $a^\dagger_{i,p}$ associated with the natural orbitals are expressed from the $c^\dagger_{\pm,p}$ using Eq. (\[eq:matac\]). The mean-field contribution is easily deduced from the HF case; using Eq. (\[eq:dmft1lipkin\]) and the expression of $\gamma$ given above, one obtains: $$\begin{aligned} {\cal E}_{MF}(\{ \varphi_{i,p}, n_i \}) & = & {\cal E}_{MF}(\alpha,\varphi,n_0) \nonumber \\ &=& -\frac{\varepsilon}{2} N \Big\{ \cos(2\alpha) (2n_0 -1) + \frac{\chi}{2} \sin^2(2\alpha) \cos(2\varphi) (2n_0 -1)^2 \Big\} . \label{eq:emflipkin}\end{aligned}$$ Expressing ${\cal E}_C$ is less straightforward. A possible strategy to construct functionals is to identify specific limits at which explicit forms are known. For instance, the two-electron case studied in [@Low56] has largely influenced presently used density matrix functionals for molecules [@Klo07]. Following this idea, the $N=2$ case is first considered. ### The $N=2$ case To study the $N = 2$ case, a basis of Slater Determinants is constructed using the natural orbitals, i.e. $| \Phi_{ij} \rangle = a^\dagger_{i,p}a^\dagger_{j,p'}| - \rangle$ with $i$ and $j$ either $0$ or $1$. The ground state wave-function $\Psi$ then reads: $$\begin{aligned} | \Psi \rangle = \sum_{ij} C_{ij} | \Phi_{ij} \rangle .\end{aligned}$$ From the expression of the OBDM, and using the fact that the single-particle states are natural orbitals, we obtain the simple relation $|C_{ij}|^2 = \delta_{ij} n_{i}$, from which we deduce $C_{ij} = \delta_{ij} e^{i\theta_{ii}} \sqrt{n_i}$. As illustrated below, the simplest choice $e^{i\theta_{ii}}=1$ is convenient and leads to $$\begin{aligned} | \Psi \rangle = \sqrt{n_0} | \Phi_{00} \rangle + \sqrt{n_1} | \Phi_{11} \rangle , \label{eq:phin2}\end{aligned}$$ which is nothing but the exact ground state wave-function written as a functional of occupation numbers and natural orbitals.
Inserting this expression into $\left\langle \Psi | H |\Psi \right\rangle$ leads to a total ground state energy ${\cal E}^{^{{N=2}}}$ given by: $$\begin{aligned} {\cal E}^{^{{N=2}}}(\alpha, \varphi, n_0) &=& -\varepsilon \cos(2\alpha) (2n_0 -1) - V \Big\{ \frac{1}{2} \sin^2(2\alpha) \cos(2\varphi) + 2\left( \sin^4(\alpha) \cos(4 \varphi) + \cos^4(\alpha) \right) \sqrt{n_0 (1-n_0)} \Big\} . \label{eq:funcn2}\end{aligned}$$ Using the expression of the mean-field contribution (Eq. (\[eq:emflipkin\])), we deduce $$\begin{aligned} {\cal E}^{^{{N=2}}}_C(\alpha, \varphi, n_0) &=& - 2 V \Big\{\sin^2(2\alpha) \cos(2\varphi) n_0(1-n_0) +\left(\sin^4(\alpha) \cos(4 \varphi)+ \cos^4(\alpha)\right) \sqrt{n_0 (1-n_0)} \Big\}.\end{aligned}$$ Since the above functional is exact, the ground state energy should be recovered by minimization with respect to $n_0$, $\alpha$ and $\varphi$. As in the HF case, $\varphi=0$ can always be taken. The variation of $n_0$ should be made under the constraint $n_0 \in [0,1]$. A technique similar to the HF+BCS case can be employed [@Rin80]. Writing $n_0 = \cos^2(\theta)$ with $\theta \in [0,\pi/2]$ gives at the minimum $$\begin{aligned} \tan(2\theta) = \chi \left(\frac{ \sin^4(\alpha) \cos(4 \varphi)+ \cos^4(\alpha)}{\cos(2\alpha)} \right).\end{aligned}$$ Then, $\alpha$ is varied to obtain the minimum energy. The result (filled circles) is displayed in Fig. \[fig:N2\] and compared to the exact ground state energy, given by $E=-\sqrt{1+\chi^2}$ [@Lip65] (solid line), and the Hartree-Fock energy (dashed line). Not surprisingly, while the HF curve deviates significantly from the solid line, the DMFT result cannot be distinguished from the exact one. ![Comparison between the exact energy (solid line), Hartree-Fock (dashed line) and the energy obtained from the minimization of Eq.
(\[eq:funcn2\]) (filled circles) as a function of $\chi$ for the $N=2$ case.[]{data-label="fig:N2"}](fig1_nnn.eps){height="5.cm"} ### DMFT for $N \ge 3$ and large-$N$ scaling The Lipkin model with $N=2$ is an interesting pedagogical example of DMFT where the exact energy functional in terms of natural orbitals and occupation numbers is known. This limit is used here as a guide to provide a DMFT for $N \ge 3$. The simplest extension of the functional derived for $N=2$ consists in assuming that the interaction energy of the $N$ particles can be written as a sum over the $N(N-1)/2$ pairs of particles, each pair contributing to the total energy as in the $N=2$ case. This prescription naturally leads to the mean-field energy given by Eq. (\[eq:emflipkin\]) and amounts to taking, for all $N$, $$\begin{aligned} {\cal E}_C(\alpha, \varphi, n_0) = \frac{N(N-1)}{2} {\cal E}^{^{{N=2}}}_C(\alpha, \varphi, n_0). \label{eq:corsimp}\end{aligned}$$ ![Exact ground state energy (solid lines) displayed as a function of $\chi$ for $N=5$ to $20$ resp. from top to bottom. In each case, the corresponding HF (dashed line) and DMFT (filled circles) are shown. The latter are obtained by minimization of the functional using the mean-field and correlation energy resp. given by Eq. (\[eq:emflipkin\]) and Eq. (\[eq:corsimp\]).[]{data-label="fig:chiref"}](fig2_lipkinnref.eps){height="9.cm"} The minimal total energy obtained by varying both the occupation numbers and $\alpha$ (using $\varphi=0$) with this prescription is displayed in Fig. \[fig:chiref\] (filled circles) as a function of $\chi$ for various particle numbers. In each case, the exact solution (solid line) and the Hartree-Fock prescription (dashed line) are shown. The simple scheme using (\[eq:corsimp\]) clearly provides a very poor approximation for the ground state energy and always leads to an energy much below the exact one. In addition, the discrepancy increases as $N$ increases.
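In contrast, the exactness of the $N=2$ functional of Eq. (\[eq:funcn2\]) can be verified by a brute-force scan. The sketch below (our own, with $\varphi = 0$ and $\varepsilon = 1$, so $V = \chi$ for $N = 2$) recovers $-\sqrt{1+\chi^2}$ to grid accuracy:

```python
import numpy as np

def e_n2(alpha, n0, chi):
    """Eq. (funcn2) with eps = 1 and varphi = 0 (so V = chi for N = 2)."""
    mix = np.sin(alpha) ** 4 + np.cos(alpha) ** 4
    return (-np.cos(2 * alpha) * (2 * n0 - 1)
            - chi * (0.5 * np.sin(2 * alpha) ** 2
                     + 2.0 * mix * np.sqrt(n0 * (1 - n0))))

# Brute-force grid scan; by the symmetry n0 <-> 1 - n0, alpha <-> pi/2 - alpha,
# it suffices to scan n0 in [1/2, 1].
alpha = np.linspace(0.0, np.pi / 2, 601)[:, None]
n0 = np.linspace(0.5, 1.0, 601)[None, :]
dmft_minimum = {chi: e_n2(alpha, n0, chi).min() for chi in (0.5, 1.0, 2.0)}
```

A similar scan, with the pair-counting correlation energy of Eq. (\[eq:corsimp\]), can be used to reproduce the DMFT curves of Fig. \[fig:chiref\].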
This failure points out the complex many-body correlations present in the Lipkin model, coming from the mixing of 1 particle-1 hole (1p-1h), 2p-2h, ..., $n$p-$n$h excitations in the ground state. This leads to a much more complex situation than the $N=2$ case (Eq. (\[eq:phin2\])). In particular, using the contracted Schrödinger equation (CSE), we expect the two-body correlations to depend on higher-order correlation matrices [@Yas02]. The correlation energy given by (\[eq:corsimp\]) clearly neglects these higher-order effects. From Fig. \[fig:chiref\], we see that the correlation energy is largely overestimated. In the following, we use a slightly different prescription for the correlation energy, given by: $$\begin{aligned} {\cal E}^{^{{N\ge 3}}}_C(\alpha, \varphi, n_0) = \eta(N) \frac{N(N-1)}{2} {\cal E}^{^{{N=2}}}_C(\alpha, \varphi, n_0) , \label{eq:coreta}\end{aligned}$$ where $\eta(N)$ is a reduction factor introduced to mimic the effect of higher-order correlations. The optimal value of $\eta$ is determined empirically from the following procedure. For a given value of $N$ and $\eta$, the minimum energy of the corresponding DMFT, denoted by ${\cal E}^N_{min} (\eta, \chi)$, is computed for $\chi$ between 0 and $\chi_{max} = 3$. Then, the quantity $D^N(\eta)$, given by $$\begin{aligned} D^N (\eta) = \int_0^{\chi_{max}} \left \{ {\cal E}^N_{GS}(\chi) - {\cal E}^N_{min} (\eta, \chi)\right\}^2 d\chi , \end{aligned}$$ where ${\cal E}^N_{GS}(\chi)$ denotes the exact ground state energy for a given $N$ and $\chi$, is computed. Obviously, $D^N (\eta)$ gives a measure of the deviation between the DMFT minimum energy and the exact energy over the interval $\chi \in [0,\chi_{max}]$. The optimal value of $\eta(N)$ is defined as the minimum of $D^N (\eta)$ as $\eta$ varies between 0 (the HF limit) and $1$ (the prescription of Eq. (\[eq:corsimp\])). ![Values of the optimal quenching factor $\eta(N)$ as a function of the particle number N.
The solid line represents the function $\eta(N) = c~N^{-2/3}$ with $c=1.5$.[]{data-label="fig:fit"}](fig3_lipkinfit.eps){height="5.cm"} Values of the optimal reduction factors are reported as filled circles in Fig. \[fig:fit\] as a function of $N$. As anticipated from the increasing discrepancy with $N$ observed in Fig. \[fig:chiref\], the larger $N$ is, the smaller $\eta$ should be taken. The variation of $\eta(N)$ turns out to simply behave as $N^{-2/3}$. The solid line in Fig. \[fig:fit\] represents the function $$\begin{aligned} \eta(N) = c ~ N^{-2/3}, \label{eq:etan}\end{aligned}$$ with $c=1.5$ deduced by fitting the optimal values of $\eta$. In the following, we show that the semi-empirical density matrix functional theory constructed from the mean-field and correlation energy resp. given by Eq. (\[eq:emflipkin\]) and Eq. (\[eq:coreta\]) significantly improves on the HF theory. Combining expression (\[eq:etan\]) with Eq. (\[eq:coreta\]) shows that the correlation energy scales as ${\cal E}_C \propto N^{4/3}$ as $N$ increases. Since this correlation energy is proportional to $\langle J^2_x \rangle$, we observe that the $N^{-2/3}$ deduced empirically is nothing but the large-$N$ scaling behavior obtained analytically in ref. [@Dus04], i.e. $\langle J^2_x \rangle/N^2 \propto N^{-2/3}$. Results of the semi-empirical DMFT ---------------------------------- The minimal energy deduced from the semi-empirical density matrix functional proposed above (filled circles) is systematically compared to the exact ground state energy (solid line) and the HF energy (dashed line) for different particle numbers and two-body interaction strengths in Figs. \[fig:chi\] and \[fig:nn\]. In all cases, the DMFT significantly improves the HF result and turns out to be very close to the exact one. As illustrated in Fig.
\[fig:chi\], the HF energy generally deviates rather significantly from the exact energy around $\chi=1$ and does not provide the correct asymptotic behavior as $\chi$ increases. This deviation, which disappears as $N \rightarrow \infty$, is due to the failure of HF theory in the presence of a “shape” phase transition [@Aga66] between the spherical solution ($\chi < 1$) and the “deformed” solution ($\chi > 1$). The standard technique to properly account for this effect is to mix different Slater determinants, as in the GCM theory. We see in Figs. \[fig:chi\] and \[fig:nn\] that, except for a small deviation around $\chi=1$ which seems to increase slightly as $N$ increases, both the asymptotic behavior at large $\chi$ and $N$ and the energy around $\chi=1$ are rather well reproduced. It is worth mentioning that the DMFT is much less demanding in terms of computational power than the GCM and thus provides a rather interesting alternative to the latter theory. ![Exact ground state energy (solid lines) displayed as a function of $\chi$ for $N=5$ to $20$ resp. from top to bottom. In each case, the corresponding HF (dashed line) and DMFT (filled circle) minimum energy are shown. The DMFT calculation is performed using the mean-field and correlation energy resp. given by Eq. (\[eq:emflipkin\]) and Eq. (\[eq:coreta\]) with $\eta(N) =1.5 ~{N}^{-2/3}$.[]{data-label="fig:chi"}](fig4_lipkinn.eps){height="9.cm"} ![Exact ground state energy per particle (solid lines) displayed as a function of $N$ for $\chi =1$, $2$ and $3$ from top to bottom. In each case, the corresponding HF (dashed line) and DMFT (filled circle) minimum energy are shown. The DMFT calculation is performed using the mean-field and correlation energy resp. given by Eq. (\[eq:emflipkin\]) and Eq.
(\[eq:coreta\]) with $\eta(N) =1.5~N^{-2/3}$.[]{data-label="fig:nn"}](fig5_lipkinchi.eps){height="9.cm"} Discussion on Density Matrix Functional Theory with broken symmetry {#sec:obobs} ------------------------------------------------------------------- Similarly to the Hartree-Fock case, the one-body density solution of the functional developed here may violate some symmetries of the “true” ground state density. The invariance of the Hamiltonian (\[eq:hamillipkin\]) with respect to parity imposes $\left\langle J_+ \right\rangle = \left\langle J_- \right\rangle = 0$. This implies that at the minimum of the functional $\alpha=0$. This is indeed the case for the exact functional given by Eq. (\[eq:funcn2\]) for $N=2$. However, solutions with $\alpha \neq 0$ are found for larger particle numbers. This is illustrated in Fig. \[fig:ejp\], where the value of $\Delta_{+-} \equiv (\left\langle J_+ \right\rangle + \left\langle J_- \right\rangle)/2N$ is displayed as a function of $\chi$. The value of $\chi$ for which $\alpha$ becomes non-zero, denoted by $\chi_c$, is infinite for $N=2$, around 1.6 for $N=5$, and tends to the HF value ($\chi_c = 1$) as $N$ goes to infinity. Note that, in this limit, the HF functional alone already provides a very good functional for the Lipkin model. ![Expectation value of $\Delta_{+-}$ as a function of $\chi$ for $N=5$ (top), $N=10$ (middle) and $N=20$ (bottom). The DMFT result (filled circles) is systematically compared with the HF (open circles) value.[]{data-label="fig:ejp"}](fig6_ejp.eps){height="9.cm"} According to DMFT, the OBDM $\gamma$ obtained by minimizing the [*exact*]{} functional should match the exact OBDM at the minimum. Therefore, strictly speaking, the extracted one-body density and the associated natural states and occupation numbers cannot be the exact ones when some of the symmetries of the system are broken. This has to be kept in mind when discussing expectation values of one-body observables.
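The exact energies used as reference throughout these comparisons can be reproduced by a direct diagonalization in the $j=N/2$ multiplet. A minimal sketch, assuming the standard Lipkin convention $H=\varepsilon J_0 - \frac{V}{2}(J_+^2+J_-^2)$ with $\chi=(N-1)V/\varepsilon$ (the paper's Eq. (\[eq:hamillipkin\]) may differ by signs or factors):

```python
import numpy as np

def lipkin_exact_e0(N, chi, eps=1.0):
    """Exact ground-state energy of the two-level Lipkin model by
    diagonalization in the j = N/2 multiplet (basis |j, m>).

    Assumed convention: H = eps*J_z - (V/2)*(J_+^2 + J_-^2) with
    chi = (N - 1)*V/eps; signs/factors are a choice of this sketch."""
    j = N / 2.0
    V = chi * eps / (N - 1) if N > 1 else 0.0
    m = np.arange(-j, j + 1.0)          # magnetic quantum numbers
    Jz = np.diag(m)
    # <j, m+1 | J_+ | j, m> = sqrt(j(j+1) - m(m+1)) on the subdiagonal
    Jp = np.diag(np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1)), -1)
    H = eps * Jz - 0.5 * V * (Jp @ Jp + Jp.T @ Jp.T)
    return float(np.linalg.eigvalsh(H)[0])
```

For $N=2$, $\varepsilon=1$ this reproduces the analytic ground state $-\sqrt{1+\chi^2}$, and for $\chi=0$ the uncorrelated value $-\varepsilon N/2$.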
As an illustration, the expectation value of the single-particle part of the Hamiltonian, ${\cal E}_{J_0} \equiv \varepsilon \, {\rm Tr}( \gamma J_0)$, obtained using the OBDM minimizing the DMFT is displayed in Fig. \[fig:ej0\] as a function of $\chi$ for different particle numbers (filled circles). The exact (solid line) and HF (open circles) prescriptions are also shown. The value of the total energy minus ${\cal E}_{J_0}$ is also shown in each panel for the DMFT (filled squares), the exact solution (dashed line) and HF (open squares). The density obtained in DMFT using the semi-empirical functional described in section \[sec:semi\] always improves the estimate of ${\cal E}_{J_0}$ for small $\chi$. It generally gives an almost perfect result when $\alpha=0$ at the minimum, i.e. when the parity symmetry is respected by the OBDM. The behavior of ${\cal E}_{J_0}$ is also rather satisfactory at large $N$ and $\chi$. In the other cases, i.e. small particle number and large $\chi$, or large particle number and intermediate $\chi$ ($1 \le \chi \le 2$), visible deviations from the exact result remain. Fig. \[fig:ej0\] illustrates that, when the functional respects all symmetries of the original Hamiltonian, expectation values of one-body observables perfectly match the exact results. This is the case for the semi-empirical functional proposed here for all particle numbers and $\chi \leq 1$. Of course it would be desirable to provide functionals that respect the symmetries in the first place. However, as is well known already at the HF level, the introduction of theories where symmetries are explicitly broken is a way to grasp some of the correlations which would have been very hard to incorporate without breaking the symmetries. The success of the DMFT functional at large $N$ and $\chi$ can be attributed to the explicit parity symmetry breaking, as in the HF case.
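The exact reference value of ${\cal E}_{J_0}$ follows from the same diagonalization via the ground-state expectation $\langle J_0\rangle$. A short sketch, again assuming the convention $H=\varepsilon J_0-\frac{V}{2}(J_+^2+J_-^2)$, $\chi=(N-1)V/\varepsilon$ (not necessarily identical to Eq. (\[eq:hamillipkin\])):

```python
import numpy as np

def lipkin_j0_expectation(N, chi, eps=1.0):
    """Exact <J_0> in the Lipkin ground state, so E_J0 = eps * <J_0>.

    Hamiltonian convention (an assumption of this sketch):
    H = eps*J_z - (V/2)*(J_+^2 + J_-^2), chi = (N - 1)*V/eps."""
    j = N / 2.0
    V = chi * eps / (N - 1)
    m = np.arange(-j, j + 1.0)
    Jz = np.diag(m)
    Jp = np.diag(np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1)), -1)
    H = eps * Jz - 0.5 * V * (Jp @ Jp + Jp.T @ Jp.T)
    _, vec = np.linalg.eigh(H)
    g = vec[:, 0]                       # ground-state eigenvector
    return float(g @ Jz @ g)
```

For $\chi=0$ this returns $\langle J_0\rangle=-N/2$ (filled lower level); the two-body interaction promotes particles and pushes $\langle J_0\rangle$ upwards.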
![Expectation value of the one-body part of the Hamiltonian, denoted by ${\cal E}_{J_0}$, obtained at the minimum of the DMFT as a function of $\chi$ for $N=5$ (top), $N=10$ (middle) and $N=20$ (bottom). The DMFT result (filled circles) is systematically compared with the exact (solid line) and HF (open circles) values. The value of the total energy minus ${\cal E}_{J_0}$ is also shown in each panel for the DMFT (filled squares), the exact solution (dashed line) and HF (open squares).[]{data-label="fig:ej0"}](fig7_ejj.eps){height="9.cm"} Summary and discussion on EDF ============================= In this work, the Density Matrix Functional Theory is applied to the two-level Lipkin model. Guided by the $N=2$ case, a semi-empirical functional of the natural wave-functions and occupation numbers is constructed. The minimization of the DMFT is shown to give much better agreement with the exact ground state energy than the HF scheme over a wide range of particle numbers and two-body interaction strengths. The success of DMFT in the Lipkin model shows that this theory could be a valuable tool for many-body systems in the presence of a “shape” phase transition. Such transitions often occur, for instance, in nuclear physics [@Rin80; @Ben03] and are generally treated by first introducing the Energy Density Functional of a single-reference vacuum (Slater determinant or quasi-particle state) and then using the GCM [@Ben03]. DMFT could be a powerful tool to improve actual SR-EDF functionals, by writing the EDF directly in terms of natural orbitals and occupation numbers. The possibility to introduce occupation numbers has been promoted in Ref. [@Pap07] for the pairing Hamiltonian and in Ref. [@Ber08] for the three-level Lipkin model using a slightly different technique. However, the strategy based on DMFT seems quite natural for extending actual EDFs.
Indeed, following the strategy used here for the Lipkin model, we can write the nuclear EDF as $$\begin{aligned} {\cal E}_{EDF}[\{\varphi_i, n_i \} ] = {\cal E}_{MF} [\{\varphi_i, n_i \} ] + {\cal E}_{C} [\{\varphi_i, n_i \} ] . \label{eq:edfdmft}\end{aligned}$$ The most natural choice for the mean-field part is to use the actual Skyrme functional, which has been optimized for decades. Note that the nuclear problem differs from the electronic case and/or the Lipkin model presented here in that the coefficients of the functional are not directly linked to the bare interaction but adjusted on experimental data. Therefore, the mean-field contribution already contains a large fraction of the correlations. Nevertheless, the above decomposition (Eq. (\[eq:edfdmft\])) is already used in the nuclear context in SR-EDF calculations when a quasi-particle vacuum is retained for the trial state. Then, the correlation energy identifies with the pairing energy, which can, in the canonical basis, be written as a functional of occupation numbers and natural orbitals. Indeed, using the notation $(i,\bar i)$ for canonical pairs of single-particle states, the pairing energy reads $$\begin{aligned} {\cal E}_{C} [\{\varphi_i, n_i \} ] &\equiv& \frac{1}{4} \sum_{i,j} \bar v^{\kappa \kappa}_{i\bar i j \bar j} \sqrt{n_i (1-n_i)} \sqrt{n_j (1-n_j)} \end{aligned}$$ where $\bar v^{\kappa \kappa}$ is the effective interaction in the pairing channel and where we have replaced the components of the anomalous density $\kappa$ in the natural basis by $\kappa_{i \bar i} = \sqrt{n_i (1-n_i)}$. This functional is adapted to pairing-like correlations. However, the use of a reference state written as a product of quasi-particle states clearly restricts the type of density matrix functional that can be guessed. This class appears not to be general enough to account for the diversity of phenomena occurring in nuclei.
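As an illustration of this expression, for a schematic constant pairing matrix element $\bar v^{\kappa\kappa}_{i\bar i j\bar j}=-G$ (an assumption made here for simplicity, not the paper's interaction) the double sum factorizes and vanishes for integer occupation numbers:

```python
import math

def pairing_correlation_energy(occupations, G):
    """E_C = (1/4) * sum_{i,j} vbar_{i ibar, j jbar} * kappa_i * kappa_j
    with kappa_i = sqrt(n_i * (1 - n_i)), evaluated for the schematic
    constant interaction vbar = -G (an illustrative assumption)."""
    kappa = [math.sqrt(n * (1.0 - n)) for n in occupations]
    s = sum(kappa)
    return -0.25 * G * s * s
```

For a Slater determinant ($n_i\in\{0,1\}$) every $\kappa_i$ vanishes and so does ${\cal E}_C$; the energy gain is maximal for half-filled natural orbitals, where $\kappa_i=1/2$.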
The possibility to use functionals other than the BCS-like ones has already been discussed in the early days of the Skyrme EDF history [@Vau73]. The functional developed here for the Lipkin model, as well as functionals recently proposed for electronic systems [@Klo07], clearly point to the possibility of using alternative functionals which could be of interest for nuclear systems. A crucial aspect of the present work is the introduction of DMFT functionals that explicitly break some of the symmetries of the original Hamiltonian in order to incorporate complex correlations. It should be kept in mind that broken symmetries imply that symmetries should a priori be restored. The problem of the restoration of broken symmetries in functional theories is an important aspect which deserves specific studies in the near future [@Lac08; @Ben08; @Dug08]. The author thanks M. Assié, B. Avez, T. Duguet, C. Simenel, O. Sorlin and P. Van Isacker for enlightening discussions at different stages of this work and T. Papenbrock for useful remarks on the scaling behavior in the Lipkin model.
--- abstract: | Assume that $S$ is a semigroup generated by $\{x_1,\ldots,x_n\}$, and let $\Uscr$ be the multiplicative free commutative semigroup generated by $\{u_1,\ldots,u_n\}$. We say that $S$ is of *$I$-type* if there is a bijection $v:\Uscr\r S$ such that for all $a\in\Uscr$, $\{v(u_1a),\ldots,v(u_na)\}=\{x_1v(a),\ldots,x_nv(a)\}$. This condition appeared naturally in the work on Sklyanin algebras by John Tate and the second author. In this paper we show that the condition for a semigroup to be of $I$-type is related to various other mathematical notions found in the literature. In particular we show that semigroups of $I$-type appear in the study of the set-theoretic solutions of the Yang-Baxter equation, in the theory of Bieberbach groups and in the study of certain skew binomial polynomial rings which were introduced by the first author. address: - | Section of Algebra\ Institute of Mathematics\ Bulgarian Academy of Sciences\ 113 Sofia\ Bulgaria - | Limburgs Universitair Centrum\ Departement WNI\ Universitaire Campus\ 3590 Diepenbeek\ Belgium author: - 'Tatiana Gateva-Ivanova' - Michel Van den Bergh title: 'Semigroups of $I$-type' --- Introduction ============ In the sequel $k$ will be a field. Our starting point for this paper is a class of semigroups which were introduced in [@GI3]. Let $X=\{x_1,\ldots,x_n\}$ be a set of generators. In [@GI3] the first author considers semigroups $S$ of the form $\langle X;R\rangle$ where $R$ is a set of quadratic relations $$R=\{x_jx_i=u_{ij}\mid i=1,\ldots,n; j=i+1,\ldots,n\}$$ satisfying 1. $u_{ij}=x_{i'}x_{j'}$, $i'<j'$, $i'<j$. 2. As we vary $(i,j)$, every pair $(i',j')$ occurs exactly once. 3. The overlaps $x_kx_jx_i$ for $k>j>i$ do not give rise to new relations in $S$. The motivation for (\*) is developed in [@GI3]. Condition (\*1) says that the semigroup algebra $kS$ is a *binomial skew polynomial ring*, so the theory of (non-commutative) Gröbner bases applies to it.
Condition (\*3) says that as sets $$S=\{x_1^{a_1}\cdots x_n^{a_n}\mid (a_1,\ldots,a_n)\in\NN^n\}$$ Furthermore it is shown in [@GI3 Thm II] that (\*2) is equivalent to $kS$ being noetherian (assuming (\*1,3)). However conditions (\*1,2,3) are also natural for intrinsic reasons. There are exactly as many monomials $x_j x_i$ with $j>i$ as there are monomials $x_{i'}x_{j'}$ with $i'<j'$. This provides the motivation for imposing (\*2). Furthermore, it follows from [@GI3 Thm 3.16] that (\*1,2,3) imply $j,j'>i,i'$ for the relations in $R$. Thus conditions (\*1,2,3) are actually symmetric, in the sense that if they are satisfied by $S=\langle X;R\rangle$ then they are also satisfied by $S^\circ$. The purpose of this paper is to show that the semigroups defined in the previous paragraphs are intimately connected with various other mathematical notions which are currently of some interest. In particular we show that they are related to 1. Set-theoretic solutions of the Yang-Baxter equation [@Drinfeld1]. 2. Bieberbach groups [@Charlap]. 3. Rings of $I$-type [@TVdB]. We will now sketch these connections. We start by proving the following proposition. \[intrth1\] Assume that $R$ satisfies (\*1,2,3). Define $r:X^2\r X^2$ as follows: $r$ is the identity on the squares $x_ix_i$ and if $(x_jx_i=x_{i'}x_{j'})\in R$ then $r(x_jx_i)=x_{i'}x_{j'}$, $r(x_{i'}x_{j'})=x_jx_i$. Then $r$ satisfies 1. $r^2=\Id_{X^2}$. 2. $r$ satisfies the set-theoretic Yang-Baxter equation. That is, one has $$r_{1}r_{2}r_{1}=r_{2}r_{1}r_{2}$$ where as usual $r_{i}:X^m\r X^m$ is defined as $\Id_{X^{i-1}}\times r\times \Id_{X^{m-i-1}}$. 3. Given $a,b\in\{1,\ldots,n\}$ there exist unique $c,d$ such that $$r(x_cx_a)=x_dx_b$$ Furthermore if $a=b$ then $c=d$. In view of this theorem it is natural to consider semigroups of the form $\langle X;x_ix_j=r(x_ix_j)\rangle$ where $r$ is a set-theoretic solution of the Yang-Baxter equation. We will show that some of these are of “$I$-type” [@TVdB].
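Conditions 1.–3. of the proposition can be checked mechanically for any candidate $r$. A small sketch doing so for the simplest solution, the flip $r(x_ax_b)=x_bx_a$ (the free commutative case; pairs of indices stand for monomials):

```python
from itertools import product

n = 4
X2 = list(product(range(n), repeat=2))

def r(p):
    """The flip solution: r(x_a x_b) = x_b x_a (a concrete choice)."""
    a, b = p
    return (b, a)

def r1(w):                      # r on positions 1,2 of a triple
    return (w[1], w[0], w[2])

def r2(w):                      # r on positions 2,3 of a triple
    return (w[0], w[2], w[1])

# 1. r is an involution
assert all(r(r(p)) == p for p in X2)
# 2. the set-theoretic Yang-Baxter (braid) relation on X^3
assert all(r1(r2(r1(w))) == r2(r1(r2(w)))
           for w in product(range(n), repeat=3))
# 3. for all a, b there are unique c, d with r(x_c x_a) = x_d x_b,
#    and c = d whenever a = b
for a, b in X2:
    sols = [(c, d) for c in range(n) for d in range(n)
            if r((c, a)) == (d, b)]
    assert len(sols) == 1
    if a == b:
        assert sols[0][0] == sols[0][1]
```

Any $r$ built from relations satisfying (\*1,2,3) should pass the same three checks; the flip merely makes the sketch self-contained.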
Being of $I$-type is a technical condition which is very useful for computations. Let us recall the definition here. We start with a set of variables $u_1,\ldots,u_n$ and we let $\Uscr$ be the free *commutative* multiplicative semigroup generated by $u_1,\ldots,u_n$. Let $S$ be a semigroup generated by $X=\{x_1,\ldots,x_n\}$. $S$ is said to be of (left) $I$-type if there exists a bijection $v:\Uscr\r S$ (an $I$-structure) such that $v(1)=1$ and such that for all $a\in\Uscr$ $$\label{intreq1} \{v(u_1a),\ldots,v(u_na)\}=\{x_1 v(a),\ldots,x_nv(a)\}$$ It is clear that if $S$ is of $I$-type then $kS$ is of $I$-type in the sense of [@TVdB]. Assume that $S$ is of $I$-type with $I$-structure $v$. Then (\[intreq1\]) implies that for every $a\in\Uscr$, $i\in\{1,\ldots,n\}$ there exists a unique $x_{a,i}\in X$ such that $$x_{a,i}v(a)=v(au_i)$$ and $\{x_{a,i}\mid i=1,\ldots, n\}=X$. \[intrex14\] Let $S$ be the semigroup $\langle x,y; x^2=y^2\rangle$ and consider the following doubly infinite graph. Define $v(u_1^{a_1} u_2^{a_2})$ as one (or all) of the paths from $(0,0)$ to $(a_1,a_2)$, written in reverse order (for example $v(u_1^2u_2)=xy^2=x^3=y^2x$). Then it is clear that this $v$ defines an $I$-structure on $S$. We have the following result. \[intrth2\] Assume that $S$ is of $I$-type. Define $r:X^2\r X^2$ by $$r(x_{u_i,j}x_{1,i})=x_{u_j,i}x_{1,j}$$ Then $r$ satisfies the conclusions of Theorem \[intrth1\]. Conversely if $r:X^2\r X^2$ satisfies \[intrth1\].1.,2.,3. then the semigroup $S=\langle X;x_i x_j=r(x_i x_j)\rangle$ is of $I$-type. From Theorems \[intrth1\],\[intrth2\] it follows that semigroups defined by relations satisfying (\*1,2,3) are of $I$-type. The proof of the following result is similar to the proof of [@TVdB Thm 1.1,1.2]. For a cocycle $c:S^2\r k^\ast$ we use the notation $k_c S$ for the twisted semigroup algebra associated to $(S,c)$. Thus $k_cS$ is the $k$-algebra with basis $S$ and with multiplication $x\cdot y=c(x,y)xy$ for $x,y\in S$.
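The $I$-structure of Example \[intrex14\] can be checked mechanically: if $v$ is a bijection then $S$ has exactly $|\Uscr_m|=m+1$ elements in each degree $m$. A small search over words in $\langle x,y; x^2=y^2\rangle$ confirming this (a sketch, not part of the proof):

```python
from itertools import product

def classes(m):
    """Number of equivalence classes of {x,y}-words of length m modulo
    the single relation x^2 = y^2, computed by breadth-first closure
    under relation applications at every position."""
    words = {''.join(w) for w in product('xy', repeat=m)}
    seen, count = set(), 0
    for w in sorted(words):
        if w in seen:
            continue
        count += 1
        stack = [w]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            for i in range(m - 1):
                for a, b in (('xx', 'yy'), ('yy', 'xx')):
                    if u[i:i + 2] == a:
                        stack.append(u[:i] + b + u[i + 2:])
    return count

# |S_m| equals |U_m| = m + 1, as the I-structure predicts
assert all(classes(m) == m + 1 for m in range(1, 9))
```

For instance in degree $2$ the three elements are $x^2=y^2$, $xy$ and $yx$, matching $u_1^2$, $u_1u_2$ and $u_2^2$.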
\[intrth3\] Assume that $S$ is of $I$-type and let $A=k_cS$ for some cocycle $c:S^2\r k^\ast$. Then 1. $A$ has finite global dimension. 2. $A$ is Koszul. 3. $A$ is noetherian. 4. $A$ satisfies the Auslander condition. 5. $A$ is Cohen-Macaulay. 6. If $c$ is trivial then $k_cS$ is finite over its center. For the definition of “Cohen-Macaulay” and the “Auslander condition” see [@Le]. \[intrcor\] Assume that $S$ is a semigroup of $I$-type. Then $k_cS$ is a domain, and in particular $S$ is cancellative. This corollary follows from [@Le]. Let $S$ be a semigroup of $I$-type with $I$-structure $v:\Uscr\r S$. Since $S$ is a cancellative semigroup of subexponential growth, it is Ore. Denote its quotient group by $\bar{S}$. We identify $\Uscr$ in the natural way with $\NN^n$, and in this way we embed it in $\RR^n$. We will prove the following. \[intrth4\] Assume that $S$ is of $I$-type with $I$-structure $v:\Uscr\r S$. Let $S$ act on the right of $\Uscr$ by pulling back under $v$ the action of $S$ on itself by right translation. Then this action extends to a free right action of $\bar{S}$ on $\RR^n$ by Euclidean transformations and for this action $[0,1[^n$ is a fundamental domain. In particular $\bar{S}$ is a Bieberbach group. If we take for $S$ the semigroup of Example \[intrex14\] then one checks that $x$ and $y$ act on $\RR^2$ by glide reflections along parallel axes. Hence $\RR^2/\bar{S}$ is the Klein bottle! Proof of Theorem \[intrth1\] ============================ In this section we prove Theorem \[intrth1\]. The notations will be as in the introduction. So $S$ is a semigroup of the form $\langle X;R\rangle$ where $R$ is a set of relations satisfying (\*). It is clear that \[intrth1\].1. is true by definition. So we concentrate on \[intrth1\].2. and \[intrth1\].3. Below we denote the diagonal of $X^m$ by $\Delta_m$.
Clearly $$r_1(\Delta_3)=\Delta_3,\qquad r_2(\Delta_3)=\Delta_3$$ Furthermore it follows from the “cyclic condition” [@GI3 Thm 3.16] that $$\label{cyclic} r_1r_2(\Delta_2\times X)=X\times \Delta_2$$ \[somelemma\] The relation $$r(zt)=xy$$ defines bijections between $X^2$ and itself given by $$(t,y)\leftrightarrow(z,t)\leftrightarrow (x,y)\leftrightarrow (z,x)$$ That $(z,t)\leftrightarrow (x,y)$ defines a bijection is clear. Now consider the map which assigns $(t,y)$ to $(z,t)$. We claim that it is an injection. If this is so then by looking at the cardinality of the source and the target (which are both $X^2$) we see that it must be a bijection. To prove the claim we compute $r_2r_1(xy^2)=r_2(zty)=z^2\ast$ where the last equality follows from (\[cyclic\]). Thus $r(ty)=z\ast$ and hence $z$ is uniquely determined by $t,y$. This proves the claim. That $(z,t)\leftrightarrow (z,x)$ is a bijection is proved similarly. Note that lemma \[somelemma\] contains \[intrth1\].3 as a special case. Hence we are left with proving \[intrth1\].2. Let us call $w,w'\in \langle X\rangle $ equivalent if they have the same image in $S$. Notation: $w\sim w'$. Clearly $w\sim w'$ iff $$w'=r_{i_1}r_{i_2}\cdots r_{i_p} w$$ for some $p,i_1,\ldots,i_p$. Concerning the structure of the equivalence classes there is the following easy lemma. \[easy\] Every equivalence class for $\sim$ in $X^m$ contains exactly one monomial of the form $x_{a_1}\cdots x_{a_m}$, $a_1\le \cdots \le a_m$. This is a consequence of the Bergman diamond lemma. After these preliminaries we prove the Yang-Baxter relation for $r$. The proof is based upon a careful examination of the equivalence classes in $X^3$, together with a counting argument. Let $D$ be the infinite dihedral group $\langle r_1,r_2; r_1^2=r_2^2=e\rangle$. $D$ acts on $X^3$ and it is clear that the equivalence classes correspond to $D$-orbits. Let $O$ be such an orbit. There are three possibilities. - $O\cap\Delta_3\neq \emptyset$. In this case clearly $|O|=1$.
- $O\cap ((\Delta_2\times X\cup X\times \Delta_2)\setminus \Delta_3)\neq \emptyset$. In this case it follows from (\[cyclic\]) that $|O|=3$. - $O\cap (\Delta_2\times X\cup X\times\Delta_2)=\emptyset$. Now $O=\{w,r_1w,r_2r_1w,\ldots\}$. Thus a general member of $O$ is of the form $(r_2r_1)^aw$ or $r_1(r_2r_1)^a w$. We claim that $(r_2r_1)^aw\neq r_1(r_2r_1)^b w$ for $a,b\in\ZZ$. To prove this, assume the contrary and define $$w_1=\begin{cases} r_1(r_2r_1)^{\left\lfloor\frac{a+b}{2}\right\rfloor}w&\text{if $a+b$ is odd}\\ (r_2r_1)^{\left\lfloor\frac{a+b}{2}\right\rfloor}w&\text{if $a+b$ is even} \end{cases}$$ Thus $r_1w_1=w_1$ or $r_2w_1=w_1$ (depending on whether $a+b$ is even or odd), whence $w_1\in \Delta_2\times X\cup X\times \Delta_2$, contradicting the hypotheses. Let $p$ be the smallest positive integer such that $(r_2r_1)^pw=w$. Then $$\label{ybeq1} O=\{w,(r_2r_1)w,\ldots,(r_2r_1)^{p-1}w,r_1w,r_1(r_2r_1)w,\ldots,r_1 (r_2r_1)^{p-1}w\}$$ In particular $|O|=2p$ is even. We claim $|O|\ge 6$. To prove this we have to exclude $|O|=2,4$. The case $|O|=2$ is easily excluded using \[intrth1\].3. Hence we are left with $|O|=4$. This means that $O$ looks like $$\begin{CD} x_ax_bx_c @>r_2 >> x_a x_d x_e\\ @V r_1VV @VV r_1 V\\ x_f x_g x_c @>r_2 >> x_f x_h x_e \end{CD}$$ which implies that $R$ contains relations $$\begin{aligned} \label{al1}x_b x_c&=x_d x_e\\ \label{al2}x_ax_b&=x_fx_g\\ \label{al3}x_a x_d&=x_fx_h\\ \label{al4}x_g x_c&=x_hx_e\end{aligned}$$ Now in a relation $x_ux_v=x_wx_t$ the couples $(u,v)$ and $(v,t)$ determine each other (lemma \[somelemma\]). So looking at (\[al2\]) and (\[al3\]) we find $b=d$, $g=h$. This implies that (\[al1\]) is actually of the form $x_dx_c=x_dx_c$, which is a contradiction. Hence $|O|\ge 6$. An alternative classification of these orbits goes through the elements they contain of the form $x_ax_b x_c$, $a\le b\le c$. A unique such element exists in every orbit by lemma \[easy\].
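The orbit sizes derived above can also be confirmed computationally for a concrete $r$. Taking the flip $r(x_ax_b)=x_bx_a$ (a specific choice made here for illustration), the $D$-orbits on $X^3$ consist of $n$ fixed points, $n(n-1)$ orbits of size $3$ and $n(n-1)(n-2)/6$ orbits of size $6$:

```python
from itertools import product

def orbit_sizes(n):
    """Census {orbit size: number of orbits} of the <r1, r2>-orbits on
    X^3 for the flip solution r(x_a x_b) = x_b x_a."""
    r1 = lambda w: (w[1], w[0], w[2])
    r2 = lambda w: (w[0], w[2], w[1])
    seen, sizes = set(), {}
    for w in product(range(n), repeat=3):
        if w in seen:
            continue
        orbit, stack = set(), [w]
        while stack:
            u = stack.pop()
            if u not in orbit:
                orbit.add(u)
                stack += [r1(u), r2(u)]
        seen |= orbit
        sizes[len(orbit)] = sizes.get(len(orbit), 0) + 1
    return sizes

for n in (3, 4, 5):
    assert orbit_sizes(n) == {1: n, 3: n * (n - 1), 6: n * (n - 1) * (n - 2) // 6}
```

The counts sum to $n + 3n(n-1) + n(n-1)(n-2) = n^3 = |X^3|$, as they must.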
If $O$ contains an element of the form $x_ax_bx_c$, $a<b<c$ then it is of type (C) because if not, it contains an element of the form $x_dx_dx_e$ or $x_dx_ex_e$ with $d\ge e$. Using (\[cyclic\]) and (\*1) such elements are equivalent to elements of the form $x_fx_g x_g$, $x_fx_fx_g$ with $f\le g$. Contradiction. If $O$ contains an element of the form $x_ax_ax_b$ or of the form $x_ax_bx_b$ with $a<b$ then $O$ is clearly of type (B). Finally $O$ is of type (A) iff it contains an element of the form $x_ax_ax_a$. Thus we find that there are $n$ orbits of type (A), $n(n-1)$ orbits of type (B) and $n(n-1)(n-2)/6$ orbits of type (C). From the equality $$|X^3|=n^3=1\cdot n+3\cdot n(n-1)+6\cdot \frac{n(n-1)(n-2)}{6}$$ we deduce that the orbits of type (C) contain exactly $6$ elements. Now Yang-Baxter easily follows. If $w$ has orbit of type (C) then from (\[ybeq1\]) we deduce that $(r_2r_1)^3w=w$. If the orbit is of type (B) then $(r_2r_1)^3w=w$ follows directly from (\[cyclic\]). Finally if the orbit is of type (A) then $r_1w=r_2w=w$ and there is nothing to prove. This concludes the proof of Theorem \[intrth1\]. Proof of Theorem \[intrth2\] ============================ In this section we prove Theorem \[intrth2\]. One direction is trivial, so we concentrate on the other direction. That is, given $r$ satisfying \[intrth1\].1.,2.,3. we will construct $v:\Uscr\r S$ and $x_{b,i}\in X$ for $b\in\Uscr$, $i\in\{1,\ldots,n\}$ in such a way that - (a) $v$ is a bijection. - (b) $v(u_i b)=x_{b,i}v(b)$ - (c) $\{x_{b,i}\mid i=1,\ldots,n\}=\{x_1,\ldots,x_n\}$ - (d) $r(x_{bu_j,i}x_{b,j})=x_{bu_i,j}x_{b,i}$ The construction is inductive. To start we put $v(1)=1$ and $v(u_i)=x_{\sigma(i)}$ for an arbitrary element $\sigma$ of $\Sym_n$. From here on everything will be uniquely defined. Assume that we have constructed $v(b)$ for $\deg b\le m-1$ and $x_{b,i}$ for $\deg b\le m-2$ satisfying (a)–(d). We will define $x_{a,i}$ for $\deg a=m-1$ such that (c) and (d) hold. First assume $a\neq u_i^{m-1}$; so $a=bu_j$, $j\neq i$.
Computing $v(bu_iu_j)$ in two ways (as a heuristic device, since $v(bu_iu_j)$ is still undefined) we find that $x_{a,i}$ must be defined by $$\label{alpha} r(x_{a,i}x_{b,j})=\ast x_{b,i}$$ This indeed defines $x_{a,i}$ uniquely thanks to \[intrth1\].3. However one still must deal with the possibility that $x_{a,i}$ might depend on $j$. To analyze this assume $k\neq i$, $a=du_ju_k$. Put $b=du_k$, $c=du_j$, $e=du_i$. We now define $p,q,p',q'$ by $$\begin{aligned} \label{sec2eq1} r(px_{b,j})&=qx_{b,i}\\ \label{sec2eq2} r(p'x_{c,k})&=q'x_{c,i}\end{aligned}$$ We have to show $p=p'$. By induction we have the following identities. $$\begin{aligned} r(x_{b,j}x_{d,k})&=x_{c,k}x_{d,j}\\ r(x_{b,i}x_{d,k})&=x_{e,k}x_{d,i}\\ \label{comp} r(x_{c,i}x_{d,j})&= x_{e,j}x_{d,i}\end{aligned}$$ We can now construct a “Yang-Baxter diagram” $$\begin{CD} px_{b,j}x_{d,k} @>r_1>> qx_{b,i} x_{d,k}\\ @V r_2VV @VV r_2 V\\ px_{c,k} x_{d,j} @. qx_{e,k} x_{d,i}\\ @V r_1 VV @VV r_1V\\ X Y x_{d,j} @>r_2 >> X Z x_{d,i} \end{CD}$$ with $X,Y,Z$ unknown so far. Comparing $r(Yx_{d,j})=Zx_{d,i}$ with (\[comp\]) yields $Y=x_{c,i}$, $Z=x_{e,j}$. So we find that $$r(px_{c,k})=Xx_{c,i}$$ and comparing this with (\[sec2eq2\]) yields $p=p'$. Hence we can now legally define $x_{a,i}=p$. Furthermore (\[sec2eq1\]) can also be read as $$r(qx_{b,i})=px_{b,j}$$ Since obviously $bu_i\neq u^{m-1}_j$ we obtain $q=x_{{bu_i},j}$. We conclude that with our present definitions we have for $j\neq i$, $\deg b\le m-2$ $$\label{sec2eq3} r(x_{bu_j,i}x_{b,j})=x_{bu_i,j}x_{b,i}$$ We claim that this relation holds more generally under the hypotheses that $\deg b\le m-2$ and $bu_j\neq u_i^{m-1}$ (or equivalently $bu_i\neq u_j^{m-1}$). The only case that still has to be checked is: $i=j$, $\deg b=m-2$, $b\neq u_i^{m-2}$. In this case we may put $b=cu_k$, $k\neq i$. We construct again a Yang-Baxter diagram $$\label{beta} \begin{CD} x_{cu_iu_k,i} x_{cu_k,i} x_{c,k} @>r_1>> x_{cu_iu_k,i} Y x_{c,k}\\ @V r_2 VV @AA r_2 A\\ x_{cu_iu_k,i}x_{cu_i,k}x_{c,i} @.
x_{cu_iu_k,i}x_{cu_i,k} x_{c,i}\\ @V r_1 VV @AA r_1A\\ x_{cu_i^2,k} x_{cu_i,i} x_{c,i} @>r_2>> x_{cu^2_i,k} x_{cu_i,i} x_{c,i} \end{CD}$$ From the relation $$r(x_{cu_i,k}x_{c,i})=Yx_{c,k}$$ we deduce $Y=x_{cu_k,i}$. Looking at the top row of (\[beta\]) finishes the proof of (\[sec2eq3\]) under the hypothesis that $bu_j\neq u_i^{m-1}$. Now we claim that if $\deg a=m-1$, $i\neq j$ and $a\neq u_i^{m-1}, u_j^{m-1}$ then $x_{a,i}\neq x_{a,j}$. Assume the contrary and write $a=bu_l$. Then by (\[sec2eq3\]) we have $$\label{sec2eq4} \begin{split} r(x_{bu_l,i}x_{b,l})&=x_{bu_i,l} x_{b,i}\\ r(x_{bu_l,j}x_{b,l})&=x_{bu_j,l}x_{b,j}\\ \end{split}$$ Since the left-hand sides of (\[sec2eq4\]) are the same while this is not the case for the right-hand sides, we obtain a contradiction. Now assume $a=u_i^{m-1}$. In this case we take $x_{a,i}$ different from $x_{a,j}$, $j\neq i$. This defines $x_{a,i}$ uniquely, and obviously (c) is satisfied if $\deg b\le m-1$. Now we prove (\[sec2eq3\]) in the remaining case $b=u_i^{m-2}$, $i=j$. Since we already know (c) we can write $$r(x_{bu_k,l}x_{b,k})=x_{bu_i,i}x_{b,i}$$ for some $k$, $l$ and we have to show $k=l=i$. Assume on the contrary that $k\neq i$ or $l\neq i$. By what we know so far we have $$r(x_{bu_k,l}x_{b,k})=x_{bu_l,k}x_{b,l}$$ But then $k=l=i$. Contradiction. So up to this point we have defined $x_{b,i}$ and we have proved (c) and (d) for $\deg b\le m-1$. Now if $a=bu_i$ has length $m$ then we define $$\label{sec2eq5} v(a)=x_{b,i}v(b)$$ so that (b) certainly holds. That (\[sec2eq5\]) is well defined follows easily from (d). Hence to complete the induction step it suffices to show that (a) holds. That is, $v$ should define a bijection on words of length $m$. Let $U=\{u_1,\ldots,u_n\}$ and let $U^m$ be the words of length $m$ in $U$. Furthermore let $r_i:U^m\r U^m$ be given by exchanging the $i$th and $(i+1)$th letters. Define a map $ \tilde{v}:U^m\r X^m $ by $$\tilde{v}(u_{i_1}\cdots u_{i_m})=x_{u_{i_2}\cdots u_{i_m},i_1}\cdots x_{u_{i_{m-1}}u_{i_m},i_{m-2}}x_{u_{i_m},i_{m-1}}x_{1,i_m}$$ By (c), $\tilde{v}$ is clearly a bijection.
From (d) we obtain the following commutative diagram. $$\begin{CD} U^m @>\tilde{v}>> X^m\\ @V r_{i} VV @VV r_{i} V\\ U^m @>\tilde{v} >> X^m \end{CD}$$ So $\tilde{v}$ defines a bijection between the orbits $U^m/\Sym_m$ and $X^m/\Sym_m$. We have $$\Uscr_m=U^m/\Sym_m,\qquad S_m=X^m/\Sym_m$$ where $\Uscr_m$, $S_m$ are the elements of degree $m$ in $\Uscr$ and $S$ respectively. Furthermore the map $\Uscr_m\r S_m$ induced by $\tilde{v}$ is precisely $v$. This finishes the proof of Theorem \[intrth2\]. Semigroups of $I$-type ====================== Below $S$ will be a semigroup of $I$-type, with $I$-structure $v:\Uscr\r S$ (as defined in the introduction). In this section we will give some properties of $S$, and in particular we will prove Theorem \[intrth3\]. First observe that every element of $\langle X\rangle$ can be written uniquely in the form $$x_{u_{i_1}\cdots u_{i_{m-1}},i_{m}}\cdots x_{u_{i_1},i_2} x_{1,i_1}$$ Two different elements $w$, $w'$ in $X^2$ have the same image in $S$ iff there exist $i\neq j$ such that $$w=x_{u_i,j}x_{1,i},\qquad w'=x_{u_j,i}x_{1,j}$$ The following lemma summarizes some observations in [@TVdB], translated into the language of semigroups. \[ZZlemma41\] 1. The natural grading by degree on $\Uscr$ induces via $v$ a grading on $S$ such that $\deg(x_i)=1$. 2. The map $s\mapsto sv(\mu)$ for a given $\mu\in\Uscr$ induces a bijection between $S$ and $\{v(a\mu)\mid a\in\Uscr\}$. 3. $S$ is right cancellative. 4. $S$ is a quotient of $\langle X\rangle$ by $n(n-1)/2$ different relations in degree $2$ given by $$x_{u_i,j}x_{1,i}=x_{u_j,i}x_{1,j},\qquad j>i$$ If $\sigma\in \Sym_n$ then we extend $\sigma$ to $\Uscr$ via $$\sigma (u_{i_1}\cdots u_{i_p})=u_{\sigma{i_1}}\cdots u_{\sigma i_p}$$ \[ZZlemma42\] Every bijection $w:\Uscr\r S$ satisfying (\[intreq1\]) is of the form $v\circ \sigma$, $\sigma\in \Sym_n$. Clearly there exists $\sigma \in \Sym_n$ such that $w$ and $v\circ \sigma$ take the same values on $\{u_1,\ldots,u_n\}$.
Hence to prove the lemma we have to show that a map $v$ satisfying (\[intreq1\]) is uniquely determined by the values it takes on $\{u_1,\ldots,u_n\}$. This was part of the proof of Theorem \[intrth2\]. Now we want to develop some kind of calculus for semigroups of $I$-type. Consider the arrows $$\label{somearrows} \begin{CD} S @>s\mapsto sv(b)>> \{v(ab)\mid b\in\Uscr\} \\ @. @AA \begin{matrix} v(ab)\\ \uparrow\\b\end{matrix} A\\ @. \Uscr \end{CD}$$ It is clear that the vertical map is a bijection and so is the horizontal map by lemma \[ZZlemma41\]. Thus we may define a bijection $w:\Uscr\r S$ which makes (\[somearrows\]) commutative. Furthermore $w$ obviously satisfies (\[intreq1\]), so according to lemma \[ZZlemma42\] $w=v\circ \phi(b)$ where $\phi(b)\in\Sym_n$. We view $\phi$ as a map from $\Uscr$ to $\Sym_n$. Expressing the fact that $w$ completes (\[somearrows\]) to a commutative diagram yields $$\label{basiceq1} v(ab)=v(\phi(b)(a))\,v(b)$$ If we now compute $v(abc)$ in two ways we find $$v(abc)=v(\phi(\phi(c)(b))(\phi(c)(a)))\,v(\phi(c)(b))\,v(c)$$ and $$v(abc)=v(\phi(bc)(a))\,v(\phi(c)(b))\,v(c)$$ Using the fact that $S$ is right cancellative we obtain $$\phi(\phi(c)(b))(\phi(c)(a))=\phi(bc)(a)$$ or put differently $$(\phi(\phi(c)(b))\circ \phi(c))(a)=\phi(bc)(a)$$ Since this is true for all $a$ we obtain $$\label{basiceq2} \phi(bc)=\phi(\phi(c)(b))\circ \phi(c)$$ Let us define $\ker\phi$, $\im \phi$ in the usual way (even though $\phi$ is clearly not a semigroup homomorphism). $$\begin{aligned} \ker\phi &=\{a\in\Uscr\mid \phi(a)=\Id\}\\ \im\phi&=\{\phi(a)\mid a\in\Uscr\}\end{aligned}$$ To simplify the notation we put $P=\ker\phi$, $G=\im\phi$. Then (\[basiceq1\]) and (\[basiceq2\]) yield the following lemma. 1. If $b\in P$ then $$\begin{aligned} \label{eqa}\phi(ab)&=\phi(a)\\ \label{eqb} v(ab)&=v(a)v(b)\end{aligned}$$ 2. $P$ is a saturated subsemigroup of $\Uscr$ ($a\in P\Rightarrow (ab\in P\iff b\in P)$). 3. $G$ is a subgroup of $\Sym_n$ (note that a finite subsemigroup of a group is itself a group). 4. If $b\in G$ and $a\in P$ then $b(a)\in P$.
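The permutations $\phi$ are what drive the Euclidean action of Theorem \[intrth4\]. For Example \[intrex14\] they take values in $\Sym_2$, and one consistent choice makes $x$ and $y$ act on $\ZZ^2$ by the glide reflections $x:(a_1,a_2)\mapsto(a_2+1,a_1)$ and $y:(a_1,a_2)\mapsto(a_2,a_1+1)$ (these explicit formulas are an assumption of this sketch, one convention compatible with the example, not taken from the text). A quick check that $x^2=y^2$ as Euclidean maps and that degree-$m$ elements move the origin to exactly $m+1=|\Uscr_m|$ distinct lattice points:

```python
from itertools import product

def act_x(p):
    """Glide reflection for x (assumed convention for Example intrex14)."""
    a1, a2 = p
    return (a2 + 1, a1)

def act_y(p):
    """Glide reflection for y (assumed convention for Example intrex14)."""
    a1, a2 = p
    return (a2, a1 + 1)

# x^2 = y^2 as Euclidean transformations: both are translation by (1, 1)
for p in [(0, 0), (3, -2), (-1, 5)]:
    assert act_x(act_x(p)) == act_y(act_y(p)) == (p[0] + 1, p[1] + 1)

# degree-m elements of S send the origin to exactly m + 1 distinct points
for m in range(1, 8):
    pts = set()
    for word in product((act_x, act_y), repeat=m):
        p = (0, 0)
        for f in word:
            p = f(p)
        pts.add(p)
    assert len(pts) == m + 1
```

The two glide axes are parallel (both along the diagonal direction), which is exactly the configuration whose quotient is the Klein bottle mentioned in the introduction.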
\[ZZlem43\] There exist $t_1,\ldots,t_n>0$ such that $u_i^{t_i}\in P$. Since $\Sym_n$ is finite there exist $r_i<s_i$ such that $$\label{secXXeq1} \phi(u_i^{r_i})=\phi(u_i^{s_i})$$ Put $a=\prod_i u_i^{r_i}$, $t'_i=s_i-r_i$. Now if $\phi(p)=\phi(q)$ then (\[basiceq2\]) implies that $\phi(rp)=\phi(rq)$. Applying this with $p=u_i^{r_i}$, $q=u_i^{s_i}$ and $r=\prod_{j\neq i} u_j^{r_j}$ yields $\phi(a)=\phi(au_i^{t'_i})=\phi(\phi(a)(u_i^{t'_i}))\phi(a)$ and thus $$\phi(a)(u_i)^{t'_i}\in\ker\phi$$ Now $\phi(a)(u_i)=u_{\phi(a)(i)}$ so if we put $t_i=t'_{\phi(a)(i)}$ then $\phi(u_i^{t_i})=\Id$. \[eencor\] Let $P_0$ be the subsemigroup of $\Uscr$ generated by $u^{t_i}_i$. Then 1. $v(P_0)$ is a free abelian subsemigroup of $S$, generated by $v(u_i^{t_i})$. 2. $S=\bigcup_a v(a) v(P_0)$ where the union runs over those $a=u_1^{p_1}\cdots u_n^{p_n}$ with $0\le p_i\le t_i-1$. The corresponding statements for $\Uscr$ are obvious. To obtain them for $S$ one applies $v$ and uses (\[eqb\]). This is entirely similar to the proof of [@TVdB Thm 1.1, Thm 1.2] so we content ourselves with a quick sketch. Note that by [@TVdB Cor 3.6] an algebra of $I$-type is automatically Koszul and has finite global dimension, so we only have to prove 3.-6. Note that the relations of $k_cS$ are given by $x_{u_i,j}x_{1,i}=d_{ij}x_{u_j,i}x_{1,j}$ for some $d_{ij} \in k^\ast$. We first assume that the $(d_{ij})_{ij}$ are roots of unity. Then (using lemma \[ZZlem43\]) we can take $P_0$ so small that $v(P_0)$ is commutative in $k_cS$. Thus by corollary \[eencor\], $k_cS$ is finite on the left over a commutative ring, and hence is PI. This proves in particular 6. and using the same results of Stafford and Zhang [@Staf2] as in the proof of [@TVdB Thm 1.1] also yields 2.-5. in this case. The general case is now proved using reduction to a finite field as in [@TVdB]. Proof of Theorem \[intrth4\] ============================ In this section we use the same notations and assumptions as in the previous sections. Since $S$ is cancellative (Cor.
\[intrcor\]) and has subexponential growth, it is (left and right) Ore. For an Ore semigroup $T$ denote by $\bar{T}$ its quotient group. We now extend $v$, $\phi$ to maps $$\begin{aligned} \bar{v}&:\bar{\Uscr}\r \bar{S}:up^{-1}\mapsto v(u)v(p)^{-1}\\ \bar{\phi}&:\bar{\Uscr}\r \Sym_n:up^{-1}\mapsto \phi(u)\phi(p)^{-1}\end{aligned}$$ where $p\in P$. This is well defined because of and the fact that it is clear from lemma \[ZZlem43\] that every element of $\bar{\Uscr}$ can be written as $up^{-1}$, $p\in P_0\subset P$. 1. If $s\in S$ then there exists $t\in S$ such that $ts\in v(P)$, $st\in v( P)$. 2. $\bar{v}$ is a bijection. <!-- --> 1. Assume $s=v(c)$. We have to find $b\in\Uscr$ such that $$\begin{aligned} \phi(v^{-1}(v(b)v(c)))=\phi(b)\phi(c)=\Id\\ \phi(v^{-1}(v(c)v(b)))=\phi(c)\phi(b)=\Id\end{aligned}$$ It is clear that this is possible since $\im\phi$ is a group. 2. It is easy to see that $\bar{v}$ is an injection, and from 1. we deduce that it is also a surjection. One verifies that $\bar{v}$ satisfies and it is also clear $\ker\bar{\phi}$, $\im\bar{\phi}$ have the same properties as $\ker\phi$, $\im \phi$ (lemma \[ZZlem43\]). Furthermore $\ker \bar{\phi}$ is now actually a group and $\im\bar{\phi}=\im\phi$. We deduce the following slight strengthening of lemma \[ZZlem43\] (and generalization of [@GI3]) which is however not needed in the sequel. For all $i$ : $u_i^{n!}\in \ker\phi$. Let $p$ be the smallest positive integer such that $u^p_i\in \ker\phi $. Then $p$ divides $|\bar{\Uscr}/\ker \bar{\phi}|$. Now $\bar{\phi}$ defines a bijection (*not* a group homomorphism) between $\bar{\Uscr}/\ker\bar{\phi}$ and $\im\bar{\phi}$. Thus $p$ divides $|\im \bar{\phi}|$ which in turn divides $|\Sym_n|=n!$. $\bar{S}$ acts on itself by right and left multiplication.
If we transport this action to $\Uscr$ through $v$ we obtain *commuting* left and right actions of $\bar{S}$ on $\bar{\Uscr}$ given by the formulas $$\begin{aligned} \label{action1} \forall a\in\bar{S}, b\in\bar{\Uscr}&:a\cdot b=\bar{v}^{-1}(a\bar{v}(b))\\ \label{action2} \forall a\in\bar{\Uscr},b\in\bar{S}&:a\cdot b=\bar{v}^{-1}(\bar{v} (a) b) \end{aligned}$$ In the previous sections we have concentrated on the action . Now we will say something about the action . Using we deduce that for $a\in\bar{\Uscr}$, $b\in \bar{S}$ : $$a\cdot b= \bar{\phi}(\bar{v}^{-1}(b))^{-1}(a)\, \bar{v}^{-1}(b)$$ By permuting the $x_i$ we may and we will assume that $v(u_i)=x_i$. Consider the map $$\psi:\ZZ^n\r \bar{\Uscr}:(a_1,\ldots,a_n)\mapsto u_1^{a_1}\cdots u_n^{a_n}$$ For $a\in\ZZ^n$, $b\in\bar{S}$ we write $$a\cdot b=\psi^{-1}(\psi(a)\cdot b)$$ and we put $\tilde{\phi}(c)=\phi(c)\circ \psi$, $\tilde{\phi}_i=\tilde{\phi}(u_i)$. We find for $(a_1,\ldots,a_n)\in\ZZ^n$ : $$\label{euaction} (a_1,\ldots,a_n)\cdot x_i= (a_{\tilde{\phi}_i(1)},\ldots, a_{\tilde{\phi}_i(i)}+1,\ldots, a_{\tilde{\phi}_i(n)})$$ We conclude that $(x_i)_i$, and hence all of $\bar{S}$ acts on the right of $\ZZ^n$ by Euclidean transformations. Keeping the formula we can extend this action to an action on $\RR^n$ and it is then clear that $[0,1[^n$ is a fundamental domain. Furthermore if the action were not free then there would be a fixed point $(a_1,\ldots,a_n)\in\RR^n$ for some element $s$ of $\bar{S}$. But then $(\lfloor a_1\rfloor,\ldots,\lfloor a_n\rfloor )\in\ZZ^n$ is also a fixed point for $s$. This is impossible since by construction the action of $\bar{S} $ on $\Uscr$ and hence on $\ZZ^n$ is free. [1]{} S. C. Charlap, Leonard, [*Bieberbach groups and flat manifolds*]{}, Springer-Verlag, New York, 1986. V. G. Drinfeld, [*On some unsolved problems in quantum group theory*]{}, Quantum Groups (P. P. Kulish, ed.), Lecture Notes in Mathematics, vol. 1510, Springer Verlag, 1992, pp. 1–8. T. 
Gateva-Ivanova, [*Skew polynomial rings with binomial relations*]{}, to appear in J. Algebra, 1994. T. Levasseur, [*Some properties of non-commutative regular rings*]{}, Glasgow Math. J. [**34**]{} (1992), 277–300. J. T. Stafford and J. J. Zhang, [*Homological properties of (graded) [N]{}oetherian [PI]{} rings*]{}, J. Algebra [**168**]{} (1994), 988–1026. J. Tate and M. Van den Bergh, [*Homological properties of [S]{}klyanin algebras*]{}, Invent. Math. [**124**]{} (1996), 619–647.
--- abstract: 'We investigate the steady-state cooling dynamics of vibrational degrees of freedom related to a nanomechanical oscillator coupled with a laser-pumped quantum dot in an optical resonator. Correlations between phonon-cooling and quantum-dot photon emission processes occur when a laser-photon absorption together with a vibrational-phonon absorption is followed by photon emission into the optical resonator. Therefore, the detection of photons generated in the cavity mode concomitantly contributes to the detection of phonon cooling of the nanomechanical resonator.' author: - Sergiu - 'Mihai A.' title: 'Long-time correlated quantum dynamics of phonon cooling' --- Introduction ============ The nanomechanical resonator (NMR) is a relevant tool for building ultra-sensitive measurement devices [@gr_cool; @rew_nm; @rew_m]. Therefore, its properties have been and continue to be intensively investigated. Outstanding works towards NMR cooling to quantum regimes have already been reported [@cool; @cool1; @ccooll; @ccool; @cool2; @cool3; @cool4]. An important issue here is how to detect experimentally the mechanical vibrations of the NMR. One option is the superconducting quantum interferometer where the vibrations of the NMR are detected via variation of the magnetic field [@interf]. The mechanical vibrations can be detected as well via a single-electron transistor which is extremely sensitive to electrical charges [@tranz]. Additionally one can use interference effects between the light incident on the NMR and the reflected light [@tabl_int]. Furthermore, high-sensitivity optical monitoring of a micro-mechanical resonator with a quantum-limited opto-mechanical sensor was reported in Ref. [@opt_sens], while fast, sensitive displacement measurements were reported in Ref. [@fast]. Remarkably, the quantum motion of a nanomechanical resonator was experimentally observed in [@qmot].
The possibility of real-time displacement detection by the luminescence signal and of displacement fluctuations by the degree of second-order coherence function was recently demonstrated in [@g2]. Here, we look for a regime where cooling of a nanomechanical oscillator is correlated with emission processes such that the maximum photon detection corresponds to the vibrational-phonon minimum. For this, we investigate a laser-pumped two-level quantum dot which is fixed on a nanomechanical beam while suspended in an optical resonator (see Fig. \[fig-1\]a). If the quantum dot dynamics is faster than the nano-beam and cavity dynamics, one arrives at a situation where laser-photon and phonon absorption processes are accompanied by photon emission into the cavity mode (see Fig. \[fig-1\]b). Therefore, cavity-photon detection signals the cooling of the nanomechanical resonator. The article is organized as follows. In Sec. II we describe the theoretical framework used to obtain the master equation characterizing the correlated cooling dynamics of nanomechanical degrees of freedom. Section III deals with the corresponding equations of motion and discussion of the obtained results, while the Summary is given in the last section, i.e., Sec. IV. ![\[fig-1\] (color online) Schematic model: (a) A laser-pumped two-level quantum dot with transition frequency $\omega_{0}$ is fixed on a nanomechanical resonator vibrating at frequency $\omega$. The quantum dot also interacts with the quantized resonator optical mode of frequency $\omega_{C}$. (b) Correlated cooling dynamics occurs when a laser photon absorption together with a vibrational phonon absorption is accompanied by the emission of a cavity photon.](Fig1){height="3.5cm"} Theoretical framework ===================== Let us consider the setup represented in Figure ([\[fig-1\]]{}a): inside an optical resonator is placed an NMR incorporating a laser-pumped two-level quantum dot.
The laser beam wave-vector is $\vec k_{L}$ while its frequency is $\omega_{L}$. The frequency of the optical resonator is $\omega_{C}$ and the nanomechanical vibrational frequency is $\omega$. The energy separation between the excited bare-state $|e\rangle$ and the ground one, $|g\rangle$, is denoted by $\hbar\omega_{0}$. The master equation describing the whole system in the Born-Markov approximation and in a frame rotating at the laser frequency $\omega_{L}$ is: $$\begin{aligned} &&\frac{d}{dt}\rho(t) + \frac{i}{\hbar}[H,\rho]=-\gamma [S^{+},S^{-}\rho] - \gamma_{c}[S_{z},S_{z}\rho] \nonumber \\ &&- \kappa_{a}[a^{\dagger},a\rho] - \kappa_{b}(1+\bar{n})[b^{\dagger},b\rho] - \kappa_{b}\bar{n}[b,b^{\dagger}\rho] + H.c., \nonumber \\ \label{Me}\end{aligned}$$ where $S_{z}$ and $S^{\pm}$ are the qubit operators satisfying the standard commutation relations, while $\{a^{\dagger},a\}$ and $\{ b^{\dagger},b\}$ are the creation and annihilation operators for the photon and phonon subsystems, respectively, and obey the boson commutation relations [@kmek]. $\gamma$ and $\gamma_c$ are the single-qubit spontaneous decay and dephasing rates, respectively, whereas $\kappa_{a}(\kappa_{b})$ is the photon (phonon) resonator damping rate, and $\bar n$ is the mean-phonon number corresponding to temperature $T$ and vibrational frequency $\omega$. The Hamiltonian $H$ is given by the following expression: $$\begin{aligned} H&=& \hbar \Delta S_{z} - \hbar \Delta_{1}a^{\dagger}a + \hbar \omega b^{\dagger}b + \hbar \Omega(S^{+}+ S^{-}) \nonumber \\ &+&\hbar g(a^{\dagger}S^{-} + aS^{+}) + \hbar\lambda S_{z} (b^{\dagger}+b). \label{Hmn} \end{aligned}$$ In Eq. (\[Hmn\]), the first three terms describe the free energies of the artificial two-level system as well as of the optical and mechanical modes. The fourth and the fifth terms characterize the interaction of the quantum dot with the laser field and optical resonator mode, respectively.
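As a numerical illustration of the master equation (\[Me\]) with the Hamiltonian (\[Hmn\]), the sketch below builds the corresponding Liouvillian and extracts its steady state. It assumes that the non-Lindblad notation $-\gamma[S^{+},S^{-}\rho]+H.c.$ is equivalent to the standard dissipator $2\gamma\mathcal{D}[S^{-}]$ (and similarly for the dephasing, photon and phonon terms); the Fock-space cutoffs and parameter values are illustrative, not those used later in the paper.

```python
import numpy as np

# Minimal numerical sketch of the master equation (Me) with the
# Hamiltonian (Hmn), in units hbar = 1.  Assumption: the paper's
# -gamma [S+, S- rho] + H.c. equals the standard Lindblad dissipator
# with rate 2*gamma, and likewise for the other damping terms.
Np, Nb = 3, 3                           # photon / phonon Fock cutoffs
Delta, g, lam, Om, w, D1 = 5.0, 0.5, 0.5, 10.0, 10.0, -10.0
gam, gamc, ka, kb, nbar = 1.0, 0.3, 0.01, 0.001, 0.5

def destroy(n):
    return np.diag(np.sqrt(np.arange(1, n)), 1)

Iq, Ia, Ib = np.eye(2), np.eye(Np), np.eye(Nb)
sm = np.array([[0.0, 0.0], [1.0, 0.0]])    # S-, with |e> = (1, 0)
sz = np.diag([0.5, -0.5])

def op(q, a, b):                           # qubit x photon x phonon
    return np.kron(np.kron(q, a), b)

Sm, Sz = op(sm, Ia, Ib), op(sz, Ia, Ib)
A, B = op(Iq, destroy(Np), Ib), op(Iq, Ia, destroy(Nb))

H = (Delta * Sz - D1 * A.conj().T @ A + w * B.conj().T @ B
     + Om * (Sm + Sm.T) + g * (A.conj().T @ Sm + A @ Sm.T)
     + lam * Sz @ (B + B.conj().T))

def dissipator(c):
    """Vectorized Lindblad dissipator D[c], column-stacking convention."""
    n, cd = c.shape[0], c.conj().T
    return (np.kron(c.conj(), c)
            - 0.5 * np.kron(np.eye(n), cd @ c)
            - 0.5 * np.kron((cd @ c).T, np.eye(n)))

n = H.shape[0]
L = -1j * (np.kron(np.eye(n), H) - np.kron(H.T, np.eye(n)))
for c, rate in [(Sm, 2 * gam), (Sz, 2 * gamc), (A, 2 * ka),
                (B, 2 * kb * (1 + nbar)), (B.conj().T, 2 * kb * nbar)]:
    L += rate * dissipator(c)

# Steady state: eigenvector of L with eigenvalue closest to zero.
vals, vecs = np.linalg.eig(L)
rho = vecs[:, np.argmin(np.abs(vals))].reshape(n, n, order="F")
rho = rho / np.trace(rho)
print("n_phot =", np.trace(A.conj().T @ A @ rho).real)
print("n_phon =", np.trace(B.conj().T @ B @ rho).real)
```

With larger cutoffs this dense-eigenvalue approach becomes expensive; dedicated packages such as QuTiP provide sparse steady-state solvers for the same Lindblad structure.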
The last term takes into account the interaction of the vibrational degrees of freedom with the radiator [@cool]. Correspondingly, $g$ and $\lambda$ denote the interaction strengths between the two-level emitter and the involved optical and mechanical modes, while $\Omega$ is the corresponding Rabi frequency due to external laser pumping. $\Delta=\omega_{0}-\omega_{L}$ describes the detuning of the laser frequency from the two-level transition frequency, while $\Delta_{1}=\omega_{L} - \omega_C$ accordingly is the detuning of the cavity frequency from the laser one. For our purpose, it is more appropriate to use the laser-qubit dressed-state representation given by [@SCMM]: $|g\rangle=\sin{\theta}| + \rangle + \cos{\theta}|-\rangle$ and $|e\rangle=\cos{\theta}|+\rangle - \sin{\theta}|-\rangle$, with $|+\rangle$ and $|-\rangle$ being the corresponding states in the dressed-state picture. Here, $2\theta$ is the angle of a right triangle, drawn in frequency space, with adjacent cathetus $\Delta/2$ and opposite cathetus $\Omega$, so that $\cot{2\theta}=\Delta/(2\Omega)$. In the case when $\Omega \gg \{\gamma,\gamma_{c}\} \gg \kappa_{a,b}$ while $\Omega \gg \{g,\lambda\} >\{\gamma,\gamma_{c}\}$, meaning that the dynamics of the cavity photon and NMR phonon subsystems are slower than the quantum dot dynamics, one can eliminate the quantum dot variables (see also [@SCMM; @zb1; @kzek; @gxl]). Thus, the master equation describing the cavity and NMR degrees of freedom can be represented as: $$\begin{aligned} \frac{d}{dt}\rho(t) &+&\frac{i}{2}(\Delta_{1}+\omega)[b^{\dagger}b-a^{\dagger}a,\rho] = \nonumber \\ &-& A^{\ast}_{1}[a,a^{\dagger}\rho] - B^{\ast}_{1}[a^{\dagger},a\rho] - A^{\ast}_{2}[b,b^{\dagger}\rho] \nonumber \\ &-& B^{\ast}_{2}[b^{\dagger},b\rho]+ C^{\ast}_{1}[b,a^{\dagger}\rho] + D^{\ast}_{1}[b^{\dagger},a\rho] \nonumber \\ &+& C^{\ast}_{2}[a^{\dagger},b\rho] + D^{\ast}_{2}[a,b^{\dagger}\rho] + H.c..
\label{MeCN}\end{aligned}$$ Here $"\ast"$ means complex conjugation, whereas $$\begin{aligned} A^{\ast}_{1}&=&\frac{1}{4}\frac{g^2 \sin ^2{2 \theta}}{\Gamma_{\shortparallel} - i\Delta_{1} }+\frac{g^2 P_{-} \sin^4{\theta}}{\Gamma_{\perp} +i (2\Omega_{R} -\Delta_{1} )}\nonumber\\ &+&\frac{g^2 P_{+}\cos ^4{\theta}}{\Gamma_{\perp}- i (2 \Omega_{R}+\Delta_{1} )}, \end{aligned}$$ $$\begin{aligned} A^{\ast}_{2}&=&\frac{1}{4}\bigg(\frac{\lambda^{2}\cos^{2}{2\theta}}{\Gamma_{\shortparallel}+i \omega }+\frac{\lambda^{2}P_{-} \sin^{2}{2\theta}}{\Gamma_{\perp} + i(2 \Omega_{R} +\omega )}\nonumber\\ &+&\frac{\lambda^{2}P_{+}\sin^{2}{2 \theta}}{\Gamma_{\perp} - i(2 \Omega_{R}-\omega )}\bigg)+\kappa_{b}\bar{n}, \nonumber \\ C^{\ast}_{1}&=&\frac{P_{+}}{2}\frac{g\lambda \sin{2\theta}\cos^{2}{\theta}}{\Gamma_{\perp} -i (2\Omega_{R}+\Delta_{1}) } -\frac{P_{-}}{2}\frac{g\lambda \sin{2\theta}\sin^{2}{\theta}}{\Gamma_{\perp}+ i (2\Omega_{R}-\Delta_{1})} \nonumber\\ &-&\frac{1}{4}\frac{g\lambda \sin{2 \theta}\cos{2\theta}}{\Gamma_{\shortparallel} -i\Delta_{1}}, \nonumber\\ C^{\ast}_{2}&=&\frac{P_{-}}{2}\frac{g\lambda \sin{2\theta}\cos^{2}{\theta}}{\Gamma_{\perp} + i(2\Omega_{R}-\omega) } -\frac{P_{+}}{2}\frac{g\lambda \sin{2\theta}\sin^{2}{\theta}}{\Gamma_{\perp}- i (2\Omega_{R}+\omega)}\nonumber\\ &-&\frac{1}{4}\frac{g\lambda \sin{2\theta}\cos{2\theta}}{\Gamma_{\shortparallel} - i\omega }.\end{aligned}$$ Other parameters are: $\Omega_{R}=\sqrt{\Omega^{2}+(\Delta/2)^{2}}$, $\Gamma_{\shortparallel}$=$\gamma(1-\cos^{2}{2\theta})+ \gamma_{c}\sin^{2}{2\theta}$, $\Gamma_{\perp}$=$4\gamma_0+\gamma_{+}+\gamma_{-}$, $\gamma_{+}$=$\gamma \cos^4{\theta} + \frac{\gamma_{c}}{4}\sin^{2}{2\theta}$, $\gamma_{-}$=$\gamma \sin^{4}{\theta} + \frac{\gamma_{c}}{4}\sin^{2}{2\theta}$ and $\gamma_{0}$=$\frac{1}{4}(\gamma \sin^2{2\theta} +\gamma_{c} \cos^2{2\theta})$. 
The dressed-state populations are given by: $$\begin{aligned} P_{+} = \frac{\gamma_-}{\gamma_++\gamma_-}, ~~{\rm and}~~ P_{-}=\frac{\gamma_+}{\gamma_++\gamma_-}.\end{aligned}$$ In Eq. (\[MeCN\]), $B^{\ast}_{i}$ can be obtained from $A^{\ast}_{i}$ via $P_{\mp} \leftrightarrow P_{\pm}$ as well as by adding $\kappa_{a}$ to $B^{\ast}_{1}$ and $\kappa_{b}$ to $B^{\ast}_{2}$, correspondingly. Similarly, $D^{\ast}_{i}$ can be obtained from $C^{\ast}_{i}$ through $P_{\mp} \leftrightarrow P_{\pm}$, with $i \in \{1,2\}$. Notice that in obtaining Eq. (\[MeCN\]) we have ignored rapidly oscillating terms at frequencies: $\pm 2\Delta_{1}, \pm (\Delta_{1}- \omega)$ and $\pm 2\omega$, that is, we are interested in a situation where $\omega \approx -\Delta_{1}$. Results and discussion ====================== In the following, we shall describe the steady-state correlated cooling dynamics of the vibrational degrees of freedom. Actually, adjusting the laser frequency to fulfill the condition $\omega + \Delta_{1} \approx 0$, we look at situations in which simultaneous laser-photon and vibrational-phonon absorption processes are accompanied by a photon emission into the cavity mode, see Figure (\[fig-1\]b).
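The dressed-state quantities above are straightforward to evaluate numerically. The sketch below is our own transcription of $A^{\ast}_{1}$, $A^{\ast}_{2}$ and $C^{\ast}_{1}$ together with the rates $\Gamma_{\shortparallel}$, $\Gamma_{\perp}$ and the populations $P_{\pm}$, evaluated for the parameter set quoted in the figure captions (in units of $\gamma$); it should be checked against the displayed formulas before serious use.

```python
import numpy as np

# Transcription (ours) of the dressed-state rates and of the complex
# coefficients A1*, A2*, C1* appearing in the master equation (MeCN),
# for the parameter values quoted in the figure captions (gamma = 1).
gam, gamc, g, lam, Om, w = 1.0, 0.3, 2.0, 4.0, 50.0, 50.0
ka, kb, nbar = 0.01, 0.001, 10.0
Delta = 2 * Om * 0.5                 # Delta / (2 Omega) = 0.5
D1 = -w                              # correlated regime: omega + Delta_1 ~ 0

th = 0.5 * np.arctan2(2 * Om, Delta)   # cot(2 theta) = Delta / (2 Omega)
OmR = np.sqrt(Om**2 + (Delta / 2) ** 2)
s, c, s2, c2 = np.sin(th), np.cos(th), np.sin(2 * th), np.cos(2 * th)

gp = gam * c**4 + 0.25 * gamc * s2**2
gm = gam * s**4 + 0.25 * gamc * s2**2
g0 = 0.25 * (gam * s2**2 + gamc * c2**2)
Gpar = gam * (1 - c2**2) + gamc * s2**2
Gperp = 4 * g0 + gp + gm
Pp, Pm = gm / (gp + gm), gp / (gp + gm)

A1c = (0.25 * g**2 * s2**2 / (Gpar - 1j * D1)
       + g**2 * Pm * s**4 / (Gperp + 1j * (2 * OmR - D1))
       + g**2 * Pp * c**4 / (Gperp - 1j * (2 * OmR + D1)))
A2c = (0.25 * (lam**2 * c2**2 / (Gpar + 1j * w)
               + lam**2 * Pm * s2**2 / (Gperp + 1j * (2 * OmR + w))
               + lam**2 * Pp * s2**2 / (Gperp - 1j * (2 * OmR - w)))
       + kb * nbar)
C1c = (0.5 * Pp * g * lam * s2 * c**2 / (Gperp - 1j * (2 * OmR + D1))
       - 0.5 * Pm * g * lam * s2 * s**2 / (Gperp + 1j * (2 * OmR - D1))
       - 0.25 * g * lam * s2 * c2 / (Gpar - 1j * D1))

print("P+ =", Pp, " P- =", Pm)
print("A1* =", A1c, "\nA2* =", A2c, "\nC1* =", C1c)
```

For a positive detuning $\Delta$ one finds $P_{+}<P_{-}$, consistent with the qubit population residing mostly in the lower dressed state $|-\rangle$, as discussed below.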
Using the master equation (\[MeCN\]), the following equations of motion can be obtained for the mean photon and phonon numbers, respectively: $$\begin{aligned} \frac{d}{dt}\langle a^{\dagger}a\rangle& = &\langle a^{\dagger}a\rangle(A_{1} - B_{1} + A_{1}^{*} - B_{1}^{*}) + \langle a^{\dagger}b\rangle(C_{2}^{*} - D_{2}) \nonumber\\ &+&\langle b^{\dagger}a\rangle (C_{2} - D^{\ast}_{2}) + A_{1} + A_{1}^{*}, \nonumber\\ \frac{d}{dt}\langle b^{\dagger} b\rangle& = &\langle b^{\dagger} b\rangle(A_{2} - B_{2} + A_{2}^{*} - B_{2}^{*}) - \langle a^{\dagger}b\rangle(C_{1}^{*} - D_{1}) \nonumber\\ &-&\langle b^{\dagger}a\rangle (C_{1} - D^{\ast}_{1}) + A_{2} + A_{2}^{*},\nonumber\\ \frac{d}{dt}\langle a^{\dagger}b\rangle &=& \langle a^{\dagger}b\rangle\bigl(A^{\ast}_{1} - B_{1} + A_{2} - B^{\ast}_{2} - i(\Delta_{1}+ \omega)\bigr) \nonumber\\ &-& \langle a^{\dagger}a\rangle(C_{1} - D^{\ast}_{1}) + \langle b^{\dagger}b\rangle (C_{2} - D^{\ast}_{2}) - C_{1} - D^{\ast}_{2}, \nonumber\\ \frac{d}{dt}\langle b^{\dagger}a\rangle &=& \langle b^{\dagger}a\rangle\bigl(A_{1} - B_{1}^{*} + A_{2}^{*} - B_{2} + i(\Delta_{1} + \omega)\bigr) \nonumber\\ &-&\langle a^{\dagger} a\rangle(C_{1}^{*} - D_{1}) + \langle b^{\dagger} b\rangle (C_{2}^{*} - D_{2}) - C_{1}^{*} - D_{2}. \nonumber \\ \label{ba}\end{aligned}$$ ![\[fig-2\] (color online) The steady-state mean-value of the photon number $\langle a^{\dagger}a\rangle$ as a function of $\Delta_{1}/\gamma$. Here, $\gamma_{c}/\gamma=0.3$, $g/\gamma=2$, $\lambda/\gamma=4$, $\Omega/\gamma=50$, $\omega/\gamma=50$, $\Delta/(2\Omega)=0.5$, $\kappa_{a}/\gamma=0.01$ and $\kappa_{b}/\gamma=0.001$. The solid line corresponds to $\bar n=10$, the long-dashed one to $\bar n=4$ whereas the short-dashed line is for $\bar n=2$, respectively.](Fig2.eps){height="5.3cm"} ![\[fig-3\] (color online) The same as in Fig. (\[fig-2\]) but for the vibrational phonon mean-number $\langle b^{\dagger}b\rangle$.](Fig3.eps){height="5.3cm"} Based on Eqs. 
(\[ba\]), Figures (\[fig-2\]) and (\[fig-3\]) show the steady states of the cavity mean-photon number and the vibrational NMR mean-phonon number, respectively. As mentioned before, the maximum photon detection corresponds to the NMR phonon minimum around $\Delta_{1}+\omega \approx 0$. Furthermore, the quantum cooling remains efficient as the temperature, i.e. $\bar n$, increases. These behaviors can also be understood by taking into account that for certain positive laser-qubit frequency detunings the qubit population is mostly in the lower dressed-state $|-\rangle$. This means that the phonon generation processes are minimized, while phonon absorption together with laser-photon absorption is accompanied by photon emission into the cavity mode. Therefore, the maximum cavity-photon detection also signifies a minimum of the vibrational quanta, that is, it detects the cooling of the NMR vibrational degrees of freedom. Notice a small shift between the photon maximum and the phonon minimum for larger qubit-cavity and qubit-NMR coupling strengths (see also Eq. [\[meqm\]]{}). Finally, efficient cooling occurs as well for uncorrelated regimes of phonon-photon detection processes, i.e. when $\Delta_{1}+ \omega \gg \gamma$ (while correlated regimes occur for $\Delta_{1} + \omega \ll \gamma$). However, in this uncorrelated regime it is difficult to associate the maximum cavity photon emission with the vibrational NMR phonon cooling, simply because these processes are uncorrelated. In what follows, we present approximate analytical expressions for the variables of interest in the steady state. These will help to understand the behaviors shown in Figs. (\[fig-2\]) and (\[fig-3\]).
If one performs the further approximations $\{\Delta_{1},\omega\} \gg \Gamma_{\shortparallel}$, $2\Omega_{R} \pm \Delta_{1} \gg \Gamma_{\perp}$ and $2\Omega_{R} \pm \omega \gg \Gamma_{\perp}$, the master equation (\[MeCN\]) simplifies considerably, namely, $$\begin{aligned} &&\frac{d}{dt}\rho(t)=\frac{i}{4}(\Delta_{1}+\omega-\bar \delta_{a} + \bar \delta_{b})[a^{\dagger}a-b^{\dagger}b,\rho] + i\eta[ab^{\dagger},\rho] \nonumber \\ && - \kappa_{a}[a^{\dagger},a\rho] - \kappa_{b}(1+\bar{n})[b^{\dagger},b\rho] - \kappa_{b}\bar{n}[b,b^{\dagger}\rho] + H.c.. \nonumber \\ \label{meqm}\end{aligned}$$ Here, the frequency shifts $\bar \delta_{a(b)}$ observed also in Fig. (\[fig-2\]) and Fig. (\[fig-3\]) are given by the following expressions: $\bar \delta_{a}=g^{2}(P_{+}-P_{-})\{\sin^{4}{\theta}/(2\Omega_{R}+\omega) + \cos^{4}{\theta}/(2\Omega_{R}-\omega)\}$ and $\bar \delta_{b}=\Omega_{R}\lambda^{2}\sin^{2}{2\theta}(P_{+}-P_{-})/(4\Omega^{2}_{R}-\omega^{2})$, whereas $\eta=g\lambda\sin{2\theta}(P_{+}-P_{-}) (\Omega_{R}\cos{2\theta}+\omega/2)/(4\Omega^{2}_{R}-\omega^{2})$. The last term of the first line in Eq. (\[meqm\]) with its H.c. part describes the vibrational phonon emission followed by cavity-photon absorption processes, and vice versa, mediated by the laser field. Consequently, based on Eq. (\[meqm\]), Eqs. (\[ba\]) reduce to: $$\begin{aligned} \frac{d}{dt}\langle a^{\dagger}a\rangle &=& -i\eta \langle x\rangle - 2\kappa_{a}\langle a^{\dagger}a\rangle, \nonumber \\ \frac{d}{dt}\langle x\rangle &=& 2i\eta\bigl(\langle b^{\dagger}b\rangle - \langle a^{\dagger}a\rangle \bigr) - (\kappa_{a}+\kappa_{b})\langle x\rangle, \nonumber \\ \frac{d}{dt}\langle b^{\dagger}b\rangle &=& i\eta\langle x\rangle - 2\kappa_{b}\langle b^{\dagger}b\rangle + 2\kappa_{b}\bar n, \label{eqsm}\end{aligned}$$ where $x=ab^{\dagger}-a^{\dagger}b$.
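Since Eqs. (\[eqsm\]) form a closed linear system, their steady state follows from a single linear solve. A minimal sketch with illustrative rate values (our choices for $\eta$, $\kappa_{a}$, $\kappa_{b}$, $\bar n$), checking the balance relation that results from adding the first and third equations at steady state:

```python
import numpy as np

# Steady state of the closed linear system (eqsm) for
# y = (<a^dag a>, <x>, <b^dag b>), written as dy/dt = M y + v.
# Rate values below are illustrative only.
eta, ka, kb, nbar = 0.5, 0.01, 0.001, 10.0

M = np.array([[-2 * ka,      -1j * eta,      0.0],
              [-2j * eta, -(ka + kb),   2j * eta],
              [0.0,           1j * eta, -2 * kb]], dtype=complex)
v = np.array([0.0, 0.0, 2 * kb * nbar], dtype=complex)

na, x, nb = np.linalg.solve(M, -v)       # M y_ss = -v

# Balance relation from adding the first and third equations:
print("ka*na + kb*nb =", (ka * na + kb * nb).real, " kb*nbar =", kb * nbar)

# Closed-form steady-state expressions quoted in the text:
na_f = nbar * kb * eta**2 / ((ka + kb) * (ka * kb + eta**2))
nb_f = na_f * (1 + ka * (ka + kb) / eta**2)
print("na =", na.real, "vs", na_f, "  nb =", nb.real, "vs", nb_f)
```

The numeric solution reproduces both the balance relation $\kappa_{a}\langle a^{\dagger}a\rangle + \kappa_{b}\langle b^{\dagger}b\rangle=\kappa_{b}\bar n$ and the closed forms for $\langle a^{\dagger}a\rangle$ and $\langle b^{\dagger}b\rangle$ given below in the text.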
We have assumed also that $\Delta_{1}+\omega = \bar \delta_{a} - \bar \delta_{b}$, i.e., we are interested in the maximal values of mean-photon number corresponding to minimal values of vibrational mean-phonon number, respectively (see, also, Fig. [\[fig-2\]]{} and Fig. [\[fig-3\]]{}). In the steady-state, one immediately obtains from Eqs. (\[eqsm\]) that: $$\begin{aligned} \kappa_{a}\langle a^{\dagger}a\rangle + \kappa_{b}\langle b^{\dagger}b\rangle=\kappa_{b}\bar n. \label{eg}\end{aligned}$$ This expression can help us to estimate the mean-vibrational-phonon number if the mean-photon number is known (i.e., detected). The explicit expressions for the steady-state values of the photon and phonon mean numbers are, respectively, $$\begin{aligned} \langle a^{\dagger}a\rangle &=& \frac{\bar n\kappa_{b}\eta^{2}}{(\kappa_{a}+\kappa_{b})(\kappa_{a}\kappa_{b}+\eta^{2})}, \nonumber \\ \langle b^{\dagger}b\rangle &=& \frac{\bar n \kappa_{b}}{\kappa_{a}+\kappa_{b}}\biggl(1 + \frac{\kappa^{2}_{a}}{\kappa_{a}\kappa_{b}+\eta^{2}}\biggr ), \label{ap_exp1}\end{aligned}$$ or $$\begin{aligned} \langle b^{\dagger}b\rangle=\langle a^{\dagger}a\rangle\bigl(1 + \kappa_{a}(\kappa_{a}+\kappa_{b})/\eta^{2} \bigr). \label{ap_exp2}\end{aligned}$$ Expressions (\[eg\]) and (\[ap\_exp2\]) describe the efficiency of the proposed vibrational phonon cooling method. In particular, if $\kappa_{a} \gg \kappa_{b}$ while $(\kappa_{a}/\eta)^{2} \ll 1$, then $\langle b^{\dagger}b\rangle \approx \langle a^{\dagger}a\rangle \approx (\kappa_{b}/\kappa_{a})\bar n$ which can be as well below unity, i.e., $\langle b^{\dagger}b\rangle < 1$. Summary ======= In summary, we have proposed a scheme to detect the vibrational phonon cooling of a nanomechanical oscillator in the steady-state. The idea is based on correlating the vibrational degrees of freedom with those of a laser-pumped quantum dot when fixed on a nanomechanical beam while interacting with an optical resonator. 
More concretely, when the quantum dot dynamics is faster than the corresponding ones of other involved subsystems, one needs to adjust the laser frequency such that both photon laser and NMR phonon absorption processes are accompanied by photon emission in the resonator mode. Therefore, detection of the cavity photons is followed in parallel by cooling of the nanomechanical oscillator. Finally, we give approximative analytical expressions for the variables of interest which describe also the method efficiency. [33]{} A. Vinante, M. Bignotto, M. Bonaldi, M. Cerdonio, L. Conti, P. Falferi, N. Liguori, S. Longo, R. Mezzena, A. Ortolan, G. A. Prodi, F. Salemi, L. Taffarello, G. Vedovato, S. Vitale, and J.-P. Zendri, Phys. Rev. Lett. [**101**]{}, 033601 (2008). Y. Greenberg, Y. Pashkin, and E. Il’ichev, Physics-Uspekhi [**55**]{}, 382 (2012). Jin-Jin Li, and Ka-Di Zhu, Physics Reports [**525**]{}, 223 (2013). I. W. Rae, P. Zoller, and A. Imamo${\rm \tilde g}$lu, Phys. Rev. Lett. [**92**]{}, 075507 (2004). O. Arcizet, P.-F. Cohadon, T. Briant, M. Pinard, and A. Heidmann, Nature [**444**]{}, 71 (2006). F. Marquardt, J. P. Chen, A. A. Clerk, and S. M. Girvin, Phys. Rev. Lett. [**99**]{}, 093902 (2007). S. Gröblacher, J. B. Hertzberg, M. R. Vanner, G. D. Cole, S. Gigan, K. C. Schwab, and M. Aspelmeyer, Nature Phys. [**5**]{}, 485 (2009). A. D. O’Connell, M. Hofheinz, M. Ansmann, R. C. Bialczak, M. Lenander, E. Lucero, M. Neeley, D. Sank, H. Wang, M. Weides, J. Wenner, J. M. Martinis, and A. N. Cleland, Nature [**464**]{}, 697 (2010). J. D. Teufel, T. Donner, D. Li, J. W. Harlow, M. S. Allman, K. Cicak, A. J. Sirois, J. D. Whittaker, K. W. Lehnert, and R. W. Simmonds, Nature [**475**]{}, 359 (2011). K. Xia, and J. Evers, Phys. Rev. Lett. [**103**]{}, 227203 (2009); G. Morigi, J. Eschner, and C. H. Keitel, Phys. Rev. Lett. [**85**]{}, 4458 (2000). S. Etaki, M. Poot, I. Mahboob, K. Onomitsu, H. Yamaguchi, and H. S. J. Van Der Zant, Nature Phys. [**4**]{}, 785 (2008); M. D. 
LaHaye, J. Suh, P. M. Echternach, K. C. Schwab, and M. L. Roukes, Nature [**459**]{}, 960 (2009). Yu. A. Pashkin, T. F. Li, J. P. Pekola, O. Astafiev, D. A. Knyazev, F. Hoehne, H. Im, Y. Nakamura, and J. S. Tsai, Appl. Phys. Lett. [**96**]{} 263513 (2010); R. G. Knobel, and A. N. Cleland, Nature [**424**]{}, 291 (2003); A. Aassime, G. Johansson, G. Wendin, R. J. Schoelkopf, and P. Delsing, Phys. Rev. Lett. [**86**]{} 3376 (2001). D. Rugar, H. Mamin, and P. Guethner, Appl. Phys. Lett. [**55**]{} 2588 (1989); D. Rugar, O. Züger, S. Hoen, C. S. Yannoni, H.-M. Vieth, and R. D. Kendrick, Science [**264**]{} 1560 (1994). O. Arcizet, P.-F. Cohadon, T. Briant, M. Pinard, A. Heidmann, J.-M. Mackowski, C. Michel, L. Pinard, O. Francais, and L. Rousseau, Phys. Rev. Lett. [**97**]{}, 133601 (2006). N. E. Flowers-Jacobs, D. R. Schmidt, and K.W. Lehnert, Phys. Rev. Lett. [**98**]{}, 096804 (2007). A. H. Safavi-Naeini, J. Chan, J. T. Hill, T. P. Mayer Alegre, A. Krause, and O. Painter, Phys. Rev. Lett. [**108**]{}, 033602 (2012). V. Puller, B. Lounis, and F. Pistolesi, Phys. Rev. Lett. [**110**]{}, 125501 (2013). M. Kiffner, M. Macovei, J. Evers, and C. H. Keitel, Prog. Opt. [**55**]{}, 85 (2010). S. Carlig, and M. A. Macovei, arXiv:1404.1262 \[quant-ph\]. W. Ge, M. Al-Amri, H. Nha, and M. S. Zubairy, Phys. Rev. A [**88**]{}, 022338 (2013); Phys. Rev. A [**88**]{}, 052301 (2013). M. Kiffner, M. S. Zubairy, J. Evers, and C. H. Keitel, Phys. Rev. A [**75**]{}, 033816 (2007). W.-j. Gu, and G.-x. Li, Phys. Rev. A [**87**]{}, 025804 (2013); M. Macovei, and G.-x. Li, Phys. Rev. A [**76**]{}, 023818 (2007).
--- abstract: 'We discuss $\mathcal{N}=1$ Klein and Klein-Conformal superspaces in $D=(2,2)$ space-time dimensions, realizing them in terms of their functor of points over the split composition algebra $\mathbb{C}_{s}$. We exploit the observation that certain split forms of orthogonal groups can be realized in terms of matrix groups over split composition algebras; this leads to a natural interpretation of the sections of the spinor bundle in the critical split dimensions $D=4$, $6$ and $10$ as $\mathbb{C}_{s}^{2}$, $\mathbb{H}_{s}^{2}$ and $\mathbb{O}_{s}^{2}$, respectively. Within this approach, we also analyze the non-trivial spinor orbit stratification that is relevant in our construction since it affects the Klein-Conformal superspace structure.' --- DFPD/2016/TH/1 [, [**Emanuele Latini$^1$**]{}, [**Alessio Marrani$^{2,3}$**]{}]{} $^1$[*Dipartimento di Matematica, Università di Bologna\ Piazza di Porta S. Donato 5, I-40126 Bologna, Italy*]{}\ `rita.fioresi@UniBo.it`, `emanuele.latini@UniBo.it` $^2$[*Museo Storico della Fisica e Centro Studi e Ricerche “Enrico Fermi"\ Via Panisperna 89A, I-00184, Roma, Italy*]{}\ $^3$[*Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova\ and INFN, Sez. di Padova\ Via Marzolo 8, I-35131 Padova, Italy*]{}\ `Alessio.Marrani@pd.infn.it` \[Intro\]Introduction ===================== *Supersymmetry* (*Susy*) is a deep and elegant symmetry relating half-integer spin fields (*fermions*, constituents of matter) to integer-spin fields (*bosons*, giving rise to interactions). Such a symmetry was originally formulated, as a *global* symmetry of fields, back in the early 70’s in the former Soviet Union by physicists Gol’fand and Likhtman [@GL], Volkov and Akulov [@VA], and independently in Europe by Wess and Zumino [@WZ].
A major advance in the formulation of supersymmetric theories in space-time, which then allowed for the construction of manifestly invariant interactions, was due to Salam and Strathdee, who were the first to introduce the concept of *superfield* [@SS-1; @SS-2]. In fact, depending on the numbers $s$ and $t$ of *spacelike* resp. *timelike* dimensions, space-time Susy recasts bosonic and fermionic fields into multiplet structures, each providing a certain representation of such an underlying symmetry. Within the simplest formulation of Susy, in which a unique fermionic generator exists besides the bosonic ones, fields defined in a space $\mathbf{M}^{s,t}\cong \mathbb{R}^{s,t}$ (which in the case $s=3$ and $t=1$ yields the usual Minkowski space-time) are assembled into a unique object, named *superfield*, defined in the so-called $\mathcal{N}=1$, $\left( s+t\right) $-dimensional superspace $\mathbf{M}^{s,t|1}$, which is characterized by the presence of an anti-commuting Grassmannian coordinate besides the usual commuting bosonic coordinates of $\mathbf{M}^{s,t}$. Such developments eventually led to major advances in Quantum Field Theory, constituting the foundational pillars on which consistent candidates for a unified theory encompassing Quantum Gravity and the Standard Model of particle interactions were constructed. In combination with local gauge invariance, global Susy allowed for the formulation of Supersymmetric Yang-Mills Theories (SYM’s) [@FZ]. In such a framework, Susy gives rise to remarkable cancellations between bosons and fermions in their quantum corrections, thus allowing for a study of SYM’s beyond perturbation theory. This generally provides a framework for a possible solution of the *hierarchy problem*, for the search for natural candidates for *dark matter*, as well as for addressing the conceptual issue of the *dark energy*. In the presence of general diffeomorphism covariance, Susy becomes a *local* symmetry.
In 1976, Ferrara, Freedman, van Nieuwenhuizen [@FFVN] and Deser and Zumino [@DZ] succeeded in formulating Susy as a local symmetry and coupling it to General Relativity. This resulted in the first formulation of *supergravity*, providing a low-energy effective description of more fundamental theories such as *superstrings* and ${M}$*-theory*, and playing a crucial role in *supersymmetry breaking*, an essential ingredient of all realistic elaborations beyond the Standard Model. Also in its world-sheet formulation, Susy is one of the main tools for the construction of the most promising frameworks - the aforementioned superstring theory and $M$-theory - in which Quantum Theory and General Relativity may be reconciled and consistently formulated (*cfr. e.g.* [@GSW-book; @Polchinski-book]). Quite recently, local Susy also proved to be a surprisingly successful tool in the investigation of the properties and dynamics of *black holes*, the endpoints of gravitational collapse, in which a horizon surface acts as a cosmic censor for the possible formation of a space-time singularity. Susy had a major impact in Mathematics, as well (*cfr.* [@vsv] for an excellent introduction). It also gave rise to a vast, deep and flourishing arena of mathematical investigation, inspiring generations of mathematicians to change their approach to geometry, both from the differential and algebraic point of view. In such frameworks, the symmetries of superspaces are naturally described by *superalgebras* and *supergroups*, the super-generalizations of the usual concept of algebras and groups. Nowadays, superseding the more traditional sheaf theoretic approach, supergroups and *superspaces* are investigated by exploiting the elegant machinery of the *functor of points*, originally introduced by Grothendieck in algebraic geometry (see *e.g.* [@bcf1; @bcf2]).
Remarkably, such a deeply abstract point of view, formalized and developed by Shvarts [@sh] and Voronov [@vo], shares surprising similarities with the physicists’ approach in the aforementioned early times of Susy, in which points in the superspace were understood by exploiting Grassmann algebras, which are nothing but superalgebras over a superspace consisting of a point [@be]. The subsequent work of Manin [@ma1; @ma2] applied the powerful abstract machinery of the functor of points to the theory of superspaces and *superschemes*; ultimately, this led to the development of the theory of *superflags* and *super-Grassmannians*. However, sharing the same approach as in [@FL-1] and essentially relying on [@ccf] and [@flv], we would like to point out that in the present investigation we will strive to leave abstract subtleties pertaining to the formal machinery of the functor of points in the background, though employing its descriptive power while dealing with $T$-points of a supergroup or with a superspace. An intriguing aspect of Susy is its deep relation to the four normed *division* algebras [@Hurwitz] $\mathbb{A}=\mathbb{R}$ (*real numbers*), $\mathbb{C}$ (*complex numbers*), $\mathbb{H}$ (*quaternions*, or Hamilton numbers), $\mathbb{O}$ (*octonions*, or Cayley numbers), especially involving *super-twistors* [@Baez-Huerta-1; @cederwall1; @cederwall2; @cederwall3]. In fact, non-Abelian YM theories are supersymmetric (thus giving rise to SYM’s) only if the space-time dimension is $D=3$, $4$, $6$ or $10$ (and the same is true for the Green-Schwarz superstring), named the *critical* dimensions. In this context, the consistent formulation of Susy relies on the vanishing of a certain trilinear expression, which in turn hinges on the existence of $\mathbb{A}$, whose real dimension is respectively given by $D-2$ [@evans; @Baez-Huerta-1; @Huerta-2; @Huerta-3; @Huerta].
Motivated by attempts at explaining the remarkable fact that (super)gravity scattering amplitudes can be obtained from those of (S)YM theories (*cfr. e.g.* [@Bern]), in [@ICL-1309] Duff and collaborators exploited normed division algebras $\mathbb{A}$’s in order to obtain the massless spectrum and the multiplet structure of supergravity theories in various dimensions by tensoring SYM multiplets (also *cfr.* subsequent developments in [@ICL-2; @ICL-3]). The core of their main argument relies on the observation that the entries of the second row of the order-$2$ *split* magic square $\mathcal{L}_{2}\left( \mathbb{A}_{s},\mathbb{B}\right) $ [@MS; @MS2; @BS-2]$$\begin{tabular}{ccccc} \hline & & & & \\[-4mm] & $\mathbb{R}$ & $\mathbb{C}$ & $\mathbb{H}$ & $\mathbb{O}$ \\[0.1mm] \hline & & & & \\[-3mm] $\mathbb{C}_{s}$ & $\mathfrak{so}(2,1)$ & $\mathfrak{so}(3,1)$ & $\mathfrak{so}(5,1)$ & $\mathfrak{so}(9,1)$\end{tabular}$$can be naturally represented as $\mathfrak{sl}(2,\mathbb{A})$, then yielding the isomorphisms of Lie algebras (*cfr.* [@Baez], as well as [@vsv-2; @ICL-1309] and Refs. therein)$$\mathfrak{sl}(2,\mathbb{A})\cong \mathfrak{so}(q+1,1), \label{iso-Lie-1}$$where $$q:=\text{dim}_{\mathbb{R}}\mathbb{A}=1,2,4,8\text{~for~}\mathbb{A}=\mathbb{R},\mathbb{C},\mathbb{H},\mathbb{O}\text{,~respectively}, \label{q-def}$$and $\mathfrak{so}(q+1,1)$ is the Lie algebra of the Lorentz group in $D=q+2$ dimensions. 
Analogously, the third line of $\mathcal{L}_{2}\left( \mathbb{A}_{s},\mathbb{B}\right) $, *i.e.*$$\begin{tabular}{ccccc} \hline & & & & \\[-4mm] & $\mathbb{R}$ & $\mathbb{C}$ & $\mathbb{H}$ & $\mathbb{O}$ \\[0.1mm] \hline & & & & \\[-3mm] $\mathbb{H}_{s}$ & $\mathfrak{so}(3,2)$ & $\mathfrak{so}(4,2)$ & $\mathfrak{so}(6,2)$ & $\mathfrak{so}(10,2)$\end{tabular}$$can be reinterpreted by noting the following Lie algebraic isomorphism [@BS-2]$$\widetilde{\mathfrak{sp}}(4,\mathbb{A})\cong \mathfrak{so}(q+2,2), \label{iso-Lie-2}$$with $\mathfrak{so}(q+2,2)$ standing for the conformal Lie algebra in $D=q+2$, and $\widetilde{\mathfrak{sp}}(4,\mathbb{A})$ denoting the *Barton-Sudbery symplectic algebra*, in which the matrix transposition is replaced by the Hermitian conjugation, differently from the usual definition of symplectic algebras [@BS-1; @BS-2]. The Lie algebraic isomorphisms (\[iso-Lie-1\])-(\[iso-Lie-2\]) have recently been extended to the Lie group level (considering the spin covering of the Lorentz and conformal groups, namely $\mathrm{Spin}(q+1,1)$ resp. $\mathrm{Spin}(q+2,2)$), by explicit constructions worked out in a series of papers [@MS-2-Groups; @Dray-2; @Dray-3; @Dray-4] by Dray, Manogue and collaborators. In particular, in [@MS-2-Groups] a Lie group version of the aforementioned order-$2$ split magic square $\mathcal{L}_{2}\left( \mathbb{A}_{s},\mathbb{B}\right) $ was constructed and studied. *Conformal symmetry* also plays a crucial role in Physics and in Mathematics. While it is usually associated with massless particles, it also characterizes, possibly as an approximate symmetry, a number of physical systems in certain regimes of their dynamics. Conformal symmetry also provides the foundation of an important branch of geometry, named *conformal geometry*, in which equivalence classes of metrics are exploited for a manifest, locally Weyl-invariant formulation of the equations governing the evolution of physical systems. 
In fact, conformal geometry enjoys a natural and remarkably elegant formulation as curved *Cartan geometry*, and essentially relies on the so-called Weyl-covariant differential calculus, also known as *tractor calculus*. This is the conformal-covariant generalization of the ordinary differential calculus; it was originally constructed in [@tractor] (*cfr.* also [@Gover; @Curry] for more physics-minded treatments) and subsequently generalized to all parabolic geometries in [@tractorparabolic]. The $D$-dimensional Minkowski space-time $\mathbf{M}^{D-1,1}$ (or the aforementioned generalizations $\mathbf{M}^{s,D-s}$ thereof) cannot support a linear implementation of conformal symmetry, and a compactification procedure, which amounts to adding suitable points at infinity, is needed. This framework has been formalized and developed by Fefferman and Graham in [@FG], especially for curved manifolds. A simple instance of flat conformal geometry is provided by the *Dirac cone* construction, in which the $D$-dimensional *compactified* Minkowski space $\overline{\mathbf{M}}^{D-1,1}$ is obtained as a particular section of the space of lightlike rays in the so-called *conformal space* $\mathbf{M}^{D,2}$. In [@Kuzenko], the compactified $3$-dimensional Minkowski space $\overline{\mathbf{M}}^{2,1}$ was constructed, along with its $\mathcal{N}=1$ supersymmetric extension $\mathbf{M}^{2,1|1}$, in terms of a *Lagrangian manifold* over the twistor space $\mathbb{R}^{4}$, by exploiting the Lie group isomorphism $\mathrm{Spin}(3,2)$ $\cong $ $\mathrm{Sp}(4,\mathbb{R})$. Taking inspiration from the isomorphisms (\[iso-Lie-1\])-(\[iso-Lie-2\]) and also relying on [@MS-2-Groups], in [@FL-1] a symplectic characterization of the $4$-dimensional (compactified and real) Minkowski space $\overline{\mathbf{M}}^{3,1}$ and $\mathcal{N}=1$ Poincaré superspace $\mathbf{M}^{3,1|1}$ was given, exploiting the Lie group isomorphism $\mathrm{Spin}(4,2)$ $\cong $ $\widetilde{\mathrm{Sp}}(4,\mathbb{C})$. 
Therein, the possibility of extending the approach to the other critical dimensions $D=6$ and $10$ was also argued, thus providing a uniform and elegant description of $\mathcal{N}=1$ Poincaré superspaces $\mathbf{M}^{q+1,1|1} $ in critical dimensions $D=q+2$ in terms of the four normed division algebras $\mathbb{A}$’s. In the present paper, we shall be interested in space-time signatures characterized by the same number of spacelike and timelike dimensions: $s=t$. The corresponding signature is usually named *Kleinian* (or also *ultrahyperbolic*). Usually, Susy, SYM’s and supergravity theories in such a signature are investigated by focussing on suitably Wick-rotated versions of the corresponding theories in Lorentzian signature (*cfr. e.g.* [@Klemm-Nozawa], and Refs. therein). However, other, more exotic, possibilities can also be considered, such as compactifications of the so-called ${M}^{\prime }$*-theory* or ${M}^{\ast }$*-theory* (see *e.g.* [@Hull-1; @Hull-2; @Ferrara-Spinors]). Geometries in Kleinian signature currently remain a vast and largely unexplored realm, displaying a rich mathematical structure; what little is known about them is essentially based on a few studies scattered in the literature (*cfr. e.g.* [@Bryant-1; @Dun-1; @Dun-2; @Dun-3; @Hervik; @Klemm-Nozawa]). Although considering Kleinian signature might seem at first a purely mathematical *divertissement*, important motivations are actually provided by Physics. The computation and the study of symmetries of scattering amplitudes in SYM’s and in supergravity highlighted the relevance of Kleinian signature, especially in $4$ dimensions; indeed, in [@OV] Ooguri and Vafa showed that $D=4$ is the critical dimension of the $\mathcal{N}=2$ superstring, whose bosonic part is given by a self-dual metric of signature $s=t=2$. 
It is also worth pointing out here that $4$-dimensional Kleinian signature is essentially related to *twistors* [@Penrose], thus providing a powerful computational tool in the investigation of scattering amplitudes [@Witten]. The present paper is then devoted to the study of the $4$-dimensional *Klein space* $\mathbf{M}^{2,2}$, viewed inside the related *Klein-conformal* space [^1], as well as of their supersymmetric extensions, namely the *Klein* $\mathcal{N}=1$ *superspace* $\mathbf{M}^{2,2|1}$ and the corresponding *Klein-conformal* $\mathcal{N}=1$ *superspace*. By recalling the *split* counterparts of the division algebras, namely $\mathbb{A}_{s}$ $=\mathbb{C}_{s}$ (*split complex numbers*), $\mathbb{H}_{s}$ (*split quaternions*) and $\mathbb{O}_{s}$ (*split octonions*), we rely on the observation that the entries of the second row of the order-$2$ *doubly-split* magic square $\mathcal{L}_{2}\left( \mathbb{A}_{s},\mathbb{B}_{s}\right) $ [@MS; @MS2; @BS-2]$$\begin{tabular}{ccccc} \hline & & & & \\[-4mm] & $\mathbb{R}$ & $\mathbb{C}_{s}$ & $\mathbb{H}_{s}$ & $\mathbb{O}_{s}$ \\[0.1mm]\hline & & & & \\[-3mm] $\mathbb{C}_{s}$ & $\mathfrak{so}(2,1)$ & $\mathfrak{so}(2,2)$ & $\mathfrak{so}(3,3)$ & $\mathfrak{so}(5,5)$\end{tabular}$$can be naturally represented as $\mathfrak{sl}(2,\mathbb{A}_{s})$, then yielding the isomorphisms of Lie algebras (*cfr. e.g.* [@Rios], and Refs. therein)$$\mathfrak{sl}(2,\mathbb{A}_{s})\cong \mathfrak{so}(q/2+1,q/2+1), \label{iso-Lie-3}$$where $q$ is here defined as $$q:=\text{dim}_{\mathbb{R}}\mathbb{A}_{s}=2,4,8\text{~for~}\mathbb{A}_{s}=\mathbb{C}_{s},\mathbb{H}_{s},\mathbb{O}_{s}\text{,~respectively}, \label{q-def-2}$$and $\mathfrak{so}(q/2+1,q/2+1)$ is the Lie algebra of the *Klein group* in $D=q+2$. 
It is then natural to think, in analogy with the non-split case, that the third line of $\mathcal{L}_{2}\left( \mathbb{A}_{s},\mathbb{B}_{s}\right) $, *i.e.*$$\begin{tabular}{ccccc} \hline & & & & \\[-4mm] & $\mathbb{R}$ & $\mathbb{C}_{s}$ & $\mathbb{H}_{s}$ & $\mathbb{O}_{s}$ \\[0.1mm]\hline & & & & \\[-3mm] $\mathbb{H}_{s}$ & $\mathfrak{so}(3,2)$ & $\mathfrak{so}(3,3)$ & $\mathfrak{so}(4,4)$ & $\mathfrak{so}(6,6)$\end{tabular}$$can be reinterpreted by means of the following Lie algebraic isomorphism $$\widetilde{\mathfrak{sp}}(4,\mathbb{A}_{s})\cong \mathfrak{so}(q/2+2,q/2+2), \label{iso-Lie-4}$$where $\mathfrak{so}(q/2+2,q/2+2)$ is the Lie algebra of the *Klein-conformal group* in $D=q+2$. More in detail, in this paper we give an explicit proof and take advantage of the Lie group isomorphisms $\mathrm{Spin}(2,2)\cong \mathrm{SL}(2,\mathbb{C}_{s})$ and $\mathrm{Spin}(3,3)\cong \widetilde{\mathrm{Sp}}(4,\mathbb{C}_{s})$, by constructions similar to the ones made in [@MS-2-Groups] and [@FL-1]. While in our treatment the constructions and the Lie group analogues of the isomorphisms (\[iso-Lie-3\]) and (\[iso-Lie-4\]) are explicitly worked out in those cases, nothing[^2] seemingly prevents us from putting forward the conjecture that our approach works equally well in the other critical dimensions with ultrahyperbolic signature, *i.e.* in $D=(3,3)$ and in $D=(5,5)$. We will point out that the Klein-conformal space in $D=4$, $6$ or $10$ dimensions may respectively be regarded as a certain Lagrangian manifold over the three aforementioned normed split algebras $\mathbb{A}_{s}$’s. In fact, the inner motivation of the present analysis also relies on the belief that a deeper understanding of the relation between Susy and split normed algebras $\mathbb{A}_{s}$’s from a supergeometric point of view could provide interesting insights on the classical and quantum properties of SYM’s and supergravity theories in critical dimensions with ultrahyperbolic signature. 
Our approach to $\mathbf{M}^{2,2}$ and its $\mathcal{N}=1$ super-extensions will closely follow the one of [@FL-1], which in turn developed a procedure exploited in [@flmink; @flv], in which the *complex* $4$-dimensional Minkowski (super)space was realized inside a complex flag (super)manifold, on which the conformal group $\mathrm{SL}(4,\mathbb{C})$ acts naturally. It is here worth remarking that this is a more physics-oriented approach, in which superspaces come along with the supergroups describing their supersymmetries; this is to be contrasted with the approach *e.g.* of [@ma2], in which super-Grassmannians and superflags are essentially conceived as complex entities and constructed by themselves. It should also be recalled that in [@flmink; @flv] *real* forms of four-dimensional Minkowski and conformal (super)spaces were introduced through suitable involutions, compatible with the natural (supersymmetric) action of the Poincaré and conformal $\mathcal{N}=1$, $D=(3,1)$ supergroups. In the present study, by essentially adapting the treatment of [@FL-1] to Kleinian signature and thus leaving the complex structure and superflags in the background, we will find a much richer mathematical structure with respect to the Minkowski case studied in [@FL-1] itself. Such a deep difference can ultimately be traced back to the fact that the action of the Klein and Klein-conformal groups on their irreducible spinor representations, which can be identified with $\mathbb{C}_{s}^{2}$ and $\mathbb{H}_{s}^{2}$ respectively, is *not* transitive, and the corresponding spinor spaces then get *stratified* into *orbits*, defined by suitable invariant constraints. Remarkably, this has deep consequences in the construction of the Klein (super)space, since one must from the beginning choose a particular pair of orbit representatives; in this paper, we focus only on one particular choice of pair of spinors, called *generic*. 
We point out that such a phenomenon of *spinor stratification* is absent in Lorentzian signature, in which case the whole spinor representation space - apart from its origin - consists of a *unique* orbit of the (spin covering of the) Lorentz group ${\mathrm{Spin}}(q+1,1)$. This uniquely determines the construction of Minkowski and conformal superspaces [@FL-1]. We will determine the isotropy groups (also named *stabilizers*) of the spinor orbits, as well as the constraints which define them. Relying on the theory of Clifford algebras, spinor algebras and their representations, we will highlight the relevance of the interplay between split algebras and the dimensions and reality properties of spinors of space-time symmetries in Kleinian signature, which in turn are ultimately based on the representability of the relevant spinor representation spaces as $2$-dimensional vector spaces over $\mathbb{A}_{s}$ [@Sudbery] (also *cfr.* [@Kugo-Townsend], and Refs. therein). It is also worth anticipating here that the symmetry of the order-$2$ doubly-split magic square $\mathcal{L}_{2}\left( \mathbb{A}_{s},\mathbb{B}_{s}\right) $ (as opposed to the order-$2$ split magic square $\mathcal{L}_{2}\left( \mathbb{A}_{s},\mathbb{B}\right) $, which is *not* symmetric) - promoted to the Lie group level by relying on the work of Dray, Manogue and collaborators [@MS-2-Groups; @Dray-2; @Dray-3; @Dray-4] - will play an important role in our treatment. Indeed, the Klein-conformal group $\mathrm{Spin}(3,3)$ in $4$ dimensions, besides occurring in the entry $\mathcal{L}_{2}\left( \mathbb{H}_{s},\mathbb{C}_{s}\right) $ and thus being characterized as $\mathrm{Spin}(3,3)\cong \widetilde{\mathrm{Sp}}(4,\mathbb{C}_{s})$, also appears in the entry $\mathcal{L}_{2}\left( \mathbb{C}_{s},\mathbb{H}_{s}\right) $, and as such it enjoys the isomorphism $\mathrm{Spin}(3,3)\cong \mathrm{SL}(2,\mathbb{H}_{s})$, as well. 
In other words, $\mathrm{Spin}(3,3)$ can be regarded as the Klein-conformal group in $D=(2,2)$, namely as $\mathrm{Spin}(q/2+2,q/2+2)$ with $q=2$, or as the Klein group in $D=(3,3)$, namely as $\mathrm{Spin}(q/2+1,q/2+1)$ with $q=4$. Since the spinor stratification of $\mathrm{Spin}(q/2+1,q/2+1)$ over $\mathbb{A}_{s}^{2}$ is known, this latter observation immediately yields the spinor stratification of the twistor space $\mathbb{C}_{s}^{4}\cong \mathbb{H}_{s}^{2}$ relevant for the explicit construction of the Klein space $\mathbf{M}^{2,2}$ as a suitable section of the $D=(3,3)$ Klein-conformal space. In our treatment, we will present an explicit derivation of the aforementioned Lie group isomorphisms, as well as of the above geometric construction. We conclude by briefly mentioning the possible implications of our analysis for the fascinating task of *space-time quantization*, on which many approaches have been pursued and many research avenues have been explored in the literature. *E.g.*, in [@cfln; @cfl; @cfl2] the quantum deformation of the complex (chiral) Minkowski and conformal superspaces was investigated by exploiting the formal machinery of flag varieties developed in [@fquant; @fquant2]. The more direct approach which stems from the present study is essentially the one developed in [@FL-1]; it exhibits an intrinsic elegance based on the split algebras $\mathbb{A}_{s}$’s, and it may pave the way to the intriguing task of constructing a quantum deformation of both the real Klein and Klein-conformal $\mathcal{N}=1$ superspaces. The plan of the paper is as follows.\ In Section 2 we introduce the split composition algebras $\mathbb{A}_{s}$, setting the notation used in the present work, while in Section 3 we discuss the construction of quadratic Jordan algebras over $\mathbb{A}_{s}$. Section 4 reports on the classification of the spinor bundles in critical dimensions, highlighting the differences between Lorentzian and Kleinian signature. 
In Section 5, we focus our attention on the $D=(2,2)$ case, which is related to the split complex algebra $\mathbb{C}_{s}$, by realizing explicitly the action of the *Klein group* on vectors, $2\times 2$ Hermitian matrices over $\mathbb{C}_{s}$, and spinors, identified with vectors in $\mathbb{C}_{s}^{2}$; in particular, we compute the orbit stratification of spinors, and derive the corresponding representatives. In Section 6, we then extend our analysis to the conformal case, and discuss the symplectic realization of $\mathrm{Spin}(3,3)$, whose proof can be found in Appendix A. Finally, Section 7 deals with the $D=(2,2)$ construction of the $\mathcal{N}=1$ Klein superspace viewed inside the Klein-conformal $\mathcal{N}=1$ superspace. In Appendix B, we also give a short introduction to the basic Supergeometry ingredients needed for a better understanding of this last Section. Split Algebras ============== Referring the reader to extended treatments given *e.g.* in [@hypercomplex] and [@split-H] (also *cfr.* App. A of [@Gun-2], and Refs. therein), we present here some basic definitions on the *split algebras* $\mathbb{C}_{s}$ and $\mathbb{H}_{s}$, useful for the subsequent treatment. For each of the composition, normed *division* algebras $\mathbb{C}$ (complex numbers), $\mathbb{H}$ (Hamilton numbers, or quaternions) and $\mathbb{O}$ (Cayley numbers, or octonions), one can respectively construct, by suitably adapting the Cayley-Dickson procedure, the corresponding *split* (composition) algebras $\mathbb{C}_{s}$ (split complex numbers), $\mathbb{H}_{s}$ (split quaternions) and $\mathbb{O}_{s}$ (split octonions); these are characterized by the fact that some of the imaginary units square to $+1$ instead of $-1$. 
More in detail, one starts constructing the *split complex numbers* $\mathbb{C}_{s}$, also named *hyperbolic numbers*, as$$\mathbb{C}_{s}:=\{\alpha+j\beta\,|\,j^{2}=1,\,\,\alpha,\beta\in \mathbb{R}\}\,;$$this algebra is equipped with a natural conjugation $$a=\alpha+j\beta\longrightarrow \alpha-j\beta=:\overline{a}, \label{conjug-Cs}$$which is used in order to define the norm$$|a|^{2}:=a\overline{a}=\alpha^{2}-\beta^{2}. \label{norm-Cs}$$ Not all elements in $\mathbb{C}_{s}$ are *invertible*; in fact, it holds that $$\frac{1}{a}=\frac{\overline{a}}{|a|^{2}};$$therefore, an element of $\mathbb{C}_{s}$ with vanishing norm, *i.e.* $a=\alpha(1\pm j)$, is *non-invertible*. Then, we denote by $\mathbb{C}_{s}^{\times }$ the invertible elements of $\mathbb{C}_{s}$: $$\mathbb{C}_{s}^{\times }:=\{\alpha+j\beta\,|\,\alpha\neq \pm \beta\}\,. \label{Cs-inv}$$ Every (non-zero) non-invertible element must be of the form $\alpha \mathcal{E}$ or $\alpha \overline{\mathcal{E}}$, with $\mathcal{E}:=1+j$ and $\alpha \in \mathbb{R}$. Moreover, it is here worth noting the following useful relations:$$\begin{aligned} \mathcal{E}^{2} &=&2\mathcal{E},~~\overline{\mathcal{E}}^{2}=2\overline{\mathcal{E}}; \\ \mathcal{E}\overline{\mathcal{E}} &=&0; \label{EE} \\ a\mathcal{E} &=&(\alpha +\beta )\mathcal{E}\,,\,\,\,\forall \,a=\alpha +j\beta \in \mathbb{C}_{s}. \label{res}\end{aligned}$$Moreover, we observe that every element $a=\alpha +j\beta $ can be uniquely decomposed according to the following $$a=\alpha _{+}\mathcal{E}+\alpha _{-}\overline{\mathcal{E}}\,,\,\,\,\,\,\,\,\alpha _{\pm }:=\frac{1}{2}(\alpha \pm \beta ). \label{decomposition}$$It should also be remarked that a non-invertible element is always a *zero divisor*, due to (\[EE\]). By iterating the Cayley-Dickson procedure, we then proceed to construct the *split quaternions* $$\mathbb{H}_{s}:=\{a+kc\,|\,k^{2}=-1,\,\,a,c\in \mathbb{C}_{s}\}\,,$$which, as their divisional counterparts $\mathbb{H}$, are *non-commutative*. 
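The relations above lend themselves to a quick numerical sanity check; the sketch below (the `SplitComplex` class and its minimal API are our own illustration, not part of the text) verifies $\mathcal{E}^{2}=2\mathcal{E}$, $\mathcal{E}\overline{\mathcal{E}}=0$ and the decomposition (\[decomposition\]):

```python
# Illustrative model of the split complex numbers C_s; the class name and
# API are ours, chosen only to mirror the relations in the text.
from dataclasses import dataclass

@dataclass(frozen=True)
class SplitComplex:
    alpha: float  # real part
    beta: float   # coefficient of j, with j**2 = +1

    def __add__(self, other):
        return SplitComplex(self.alpha + other.alpha, self.beta + other.beta)

    def __mul__(self, other):
        # (a1 + j b1)(a2 + j b2) = (a1 a2 + b1 b2) + j (a1 b2 + b1 a2)
        return SplitComplex(self.alpha * other.alpha + self.beta * other.beta,
                            self.alpha * other.beta + self.beta * other.alpha)

    def conj(self):
        return SplitComplex(self.alpha, -self.beta)

    def norm2(self):
        # a * conj(a) = alpha^2 - beta^2 (can vanish or be negative)
        return self.alpha ** 2 - self.beta ** 2

E, Ebar = SplitComplex(1, 1), SplitComplex(1, -1)
assert E * E == SplitComplex(2, 2)        # E^2 = 2E
assert E * Ebar == SplitComplex(0, 0)     # E Ebar = 0: zero divisors

a = SplitComplex(3, 1)                    # a = 3 + j
ap, am = (a.alpha + a.beta) / 2, (a.alpha - a.beta) / 2
assert SplitComplex(ap, ap) + SplitComplex(am, -am) == a   # a = a_+ E + a_- Ebar
```

Note how the vanishing of `E * Ebar` realizes, in this toy model, the statement that non-invertible elements are zero divisors.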
Explicitly, any element $h\in \mathbb{H}_{s}$ can be written as $$h =(\underbrace{\alpha+j\beta}_{h_{R}})+k(\underbrace{\gamma+j\delta}_{h_{I}})=\alpha+j\,\beta+k\,\gamma+(kj)\,\delta\,, $$ where $h_{R}$ and $h_{I}$ respectively denote the real and imaginary part of the split quaternion $h$. Moreover, $j$, $k$ and $kj$ are three “imaginary” units, whose multiplication rules are summarized in the following table: $$\begin{array}{c|ccc} & k & kj & j \\ \hline k & -1 & -j & kj \\ kj & j & 1 & k \\ j & -kj & -k & 1\end{array}$$ In $\mathbb{H}_{s}$, the conjugation is defined as$$h=h_{R}+kh_{I}\longrightarrow \overline{h}_{R}-kh_{I}=:h^{\ast }, \label{conjug-Hs}$$or explicitly: $$h=\alpha+j\beta+k\,\gamma+(kj)\,\delta\longrightarrow \alpha-j\,\beta-k\,\gamma-(kj)\,\delta=:h^{\ast }.$$The norm of a split quaternion then reads $$|h|^{2}:=hh^{\ast }=\alpha^{2}+\gamma^{2}-\beta^{2}-\delta^{2}\,. \label{norm-Hs}$$ It is straightforward to check that the *invertible* split quaternions $\mathbb{H}_{s}^{\times }$ are given by $$\mathbb{H}_{s}^{\times }:=\{\alpha+j\,\beta+k\,(\gamma+j\delta)\,|\,\alpha^{2}+\gamma^{2}\neq \beta^{2}+\delta^{2}\}.$$Due to the aforementioned non-commutativity, one should properly distinguish between left and right invertibility; nevertheless, it can be proved that the left and right inverses coincide. It is also worth pointing out that one can construct the following isomorphism between $\mathbb{H}_{s}$ and the space of $2\times 2$ matrices with $\mathbb{C}_{s}$-valued entries $$\begin{aligned} N &:&=\{M\in \mathbb{M}_{2}(\mathbb{C}_{s})\,|\,\overline{M}\epsilon =\epsilon M\}\,, \label{def-N} \\ \epsilon &:&=\begin{pmatrix} 0 & 1 \\ -1 & 0\end{pmatrix}, \label{epsilon}\end{aligned}$$by means of the map $$\begin{array}{rrrcl} Z: & \mathbb{H}_{s} & \rightarrow & N, & \\[3mm] & h & \mapsto & \begin{pmatrix} h_{R} & h_{I} \\ -\overline{h}_{I} & \overline{h}_{R}\end{pmatrix}. 
& \end{array} \label{transfo}$$When considering matrices with $\mathbb{H}_{s}$-valued entries, one can apply the map $Z$ (\[transfo\]) entry-wise. Finally, *split octonions* $\mathbb{O}_{s}$ are obtained from $\mathbb{H}_{s}$ by further iterating the Cayley-Dickson procedure: $$\mathbb{O}_{s}:=\{h+lf\,|\,l^{2}=-1,\,\,h,f\in \mathbb{H}_{s}\}\,.$$We will not further deal with the algebra $\mathbb{O}_{s}$, since it is not relevant for the present investigation (for a very recent excellent account, we refer the reader to the monograph [@Tray-Manogue-Book]). For convenience in the subsequent treatment, it is here worth recalling the definition of two symmetries which can be associated to split algebras: the *norm-preserving* symmetry and the *triality* symmetry. As can be seen from (\[norm-Cs\]) and (\[norm-Hs\]), the squared norm of a split algebra element is given by the symmetric bilinear form $\eta _{ab}=\eta ^{ab}$ with signature $\left( \frac{q}{2},\frac{q}{2}\right) $, and $a,b=1,...,q$, with $q$ defined in (\[q-def-2\]) being the real dimension of the split algebra. This is in fact the canonical inner product on the Klein space $\mathbf{M}^{q/2,q/2}\cong \mathbb{R}^{q/2,q/2}$, which is preserved by $\mathrm{SO}(q/2,q/2)=:\mathrm{SO}(\mathbb{A}_{s})$ (whose Lie algebra we denote by $\mathfrak{so}\left( q/2,q/2\right) =:\mathfrak{so}(\mathbb{A}_{s})$). Thus, $\mathrm{SO}(\mathbb{A}_{s})$ is named the *norm-preserving group* of $\mathbb{A}_{s}$ itself. Then, let us consider the following Lie algebra [@Springer-book]:$$\mathfrak{tri}(\mathbb{A}_{s}):=\left\{ \left( A,B,C\right) |A\left( x,y\right) =B(x)y+xC(y),~A,B,C\in \mathfrak{so}\left( q/2,q/2\right) ,~x,y\in \mathbb{A}_{s}\right\} . 
\label{triality-def}$$This algebra, appearing explicitly in the magic square formula of Barton and Sudbery [@BS-1; @BS-2] (see also *e.g.* [@Evans]), is named the *triality symmetry algebra* of $\mathbb{A}_{s}$, and the corresponding Lie group $Tri\left( \mathbb{A}_{s}\right) $ is referred to as the *triality group* of $\mathbb{A}_{s}$ itself. In general, it holds that $\mathrm{SO}(\mathbb{A}_{s})$ is a (not necessarily proper) subgroup of $Tri\left( \mathbb{A}_{s}\right) $, and thus one can define the following (symmetric) cosets[^3] (for further elucidation, see *e.g.* [@Gun-2; @CFMZ1-D=5; @Magic-Coset-Decomp; @ADFMT-1], and Refs. therein):$$\widetilde{\mathcal{A}}_{q}:=\frac{Tri\left( \mathbb{A}_{s}\right) }{\mathrm{SO}(\mathbb{A}_{s})}\cong \left\{ \begin{array}{l} q=2:\mathrm{SO}(1,1), \\ q=4:\mathrm{Sp}(2,\mathbb{R}), \\ q=8:Id,\end{array}\right. \label{A-tilde}$$whose relevance will be exploited further below. For completeness, and later convenience, we also report the analogous result for the four normed *division* algebras [@Hurwitz] $\mathbb{A}=\mathbb{R},\mathbb{C},\mathbb{H},\mathbb{O}$ (for which $q=1,2,4,8$, respectively):$$\mathcal{A}_{q}:=\frac{Tri\left( \mathbb{A}\right) }{\mathrm{SO}(\mathbb{A})}\cong \left\{ \begin{array}{l} q=1:Id, \\ q=2:\mathrm{U}(1), \\ q=4:\mathrm{USp}(2), \\ q=8:Id.\end{array}\right. \label{A}$$ Quadratic Jordan Algebras over Split Algebras ============================================= Referring to thorough treatments given *e.g.* in [@McCrimmon; @Iordanescu] for references and details, we shall here give a brief account of quadratic Jordan algebras. A *Jordan algebra* over a field $\mathbb{F}$ (which we shall henceforth assume to be $\mathbb{R}$, unless otherwise specified) is an algebra $J$ with a symmetric product $\circ $$$X\circ Y=Y\circ X\in J,~\forall X,Y\in J$$which satisfies the *Jordan identity*$$X\circ (Y\circ X^{2})=(X\circ Y)\circ X^{2},$$where $X^{2}:=X\circ X$. 
Therefore, a Jordan algebra is commutative and generally non-associative. Given a Jordan algebra $J$, one can define a *norm* $\mathbf{N}$ $:J\rightarrow \mathbb{R}$ over it, satisfying the composition property [@Jacobson]$$\mathbf{N}[2X\circ (Y\circ X)-(X\circ X)\circ Y]=\mathbf{N}^{2}(X)\mathbf{N}(Y).$$The *degree* $p$ of the norm form, as well as of $J$, is defined by $\mathbf{N}(\lambda X)=\lambda ^{p}\mathbf{N}(X)$, where $\lambda \in \mathbb{R}$. A *Euclidean* Jordan algebra is a Jordan algebra for which the condition $X\circ X+Y\circ Y=0$ implies that $X=Y=0$ for all $X,Y\in J$; such algebras are sometimes called *compact* Jordan algebras, since their automorphism groups are compact. In the present investigation, we are interested in a particular class of simple, *quadratic* Euclidean Jordan algebras (degree $p=2$); the algebras of such a class [@JWVN] are denoted by $J_{2}^{\mathbb{C}_{s}}$, $J_{2}^{\mathbb{H}_{s}}$ and $J_{2}^{\mathbb{O}_{s}}$, and they are generated by Hermitian $(2\times 2)$-matrices over the split composition algebras $\mathbb{A}_{s}=\mathbb{C}_{s}$, $\mathbb{H}_{s}$, $\mathbb{O}_{s}$, respectively:$$\mathcal{J}=\left( \begin{array}{cc} \alpha & Z \\ \overline{Z} & \beta\end{array}\right) \in J_{2}^{\mathbb{A}_{s}}, \label{matr}$$where $\alpha ,\beta \in \mathbb{R}$ and $Z\in \mathbb{A}_{s}$, and the bar stands for the conjugation pertaining to the algebra under consideration; moreover, the Jordan product $\circ $ is realized as (one half of) the matrix anticommutator. 
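Since the Jordan product is here realized as half the matrix anticommutator, commutativity and the Jordan identity can be verified numerically; as a minimal illustration (ours, not from the text), the sketch below uses real symmetric matrices, the simplest matrix instance of a Euclidean Jordan algebra, rather than split-algebra-valued ones:

```python
import numpy as np

def jordan(X, Y):
    # Jordan product: (one half of) the matrix anticommutator
    return 0.5 * (X @ Y + Y @ X)

# two fixed real symmetric matrices
X = np.array([[1., 2., 0.], [2., 0., 1.], [0., 1., 3.]])
Y = np.array([[0., 1., 1.], [1., 2., 0.], [1., 0., -1.]])

# commutativity: X o Y = Y o X
assert np.allclose(jordan(X, Y), jordan(Y, X))

# Jordan identity: X o (Y o X^2) = (X o Y) o X^2
X2 = jordan(X, X)
assert np.allclose(jordan(X, jordan(Y, X2)), jordan(jordan(X, Y), X2))
```

The identity holds automatically here because any associative matrix algebra, symmetrized in this way, yields a (special) Jordan algebra.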
The set of linear invertible transformations leaving the quadratic norm of $J_{2}^{\mathbb{A}_{s}}$$$\mathbf{N}(\mathcal{J}):=\text{det}(\mathcal{J}),~\mathcal{J}\in J_{2}^{\mathbb{A}_{s}}, \label{norm-J2}$$invariant is the so-called *reduced structure group* $Str_{0}\left( J_{2}^{\mathbb{A}_{s}}\right) $ of $J_{2}^{\mathbb{A}_{s}}$ itself, and it holds that (recall (\[q-def-2\]))$$Str_{0}\left( J_{2}^{\mathbb{A}_{s}}\right) =\mathrm{Spin}(q/2+1,q/2+1).$$In other words, the reduced structure group of $J_{2}^{\mathbb{A}_{s}}$ is the *Klein group* $\mathrm{Spin}(q/2+1,q/2+1)$ in[^4] $D=q+2$. \[Sec-Spinors\]Spinors ====================== In this Section, we provide some basic definitions and results on spinors, useful for the subsequent treatment; for further details and elucidation, we refer the reader *e.g.* to [@Budinich-1; @Budinich-2; @Charlton-Th], and Refs. therein. We will henceforth assume $D=s+t$ *even* (in view of the specific case we will be interested in below, namely $D=4$ and $s=t=2$). Let us start by considering the properties of (irreducible) spinor representations of the spin covering group $\mathrm{Spin}(s,t)$ of the pseudo-orthogonal groups $\mathrm{SO}(s,t)$. For more details, *cfr.* *e.g.* [@Spinor-Algebras; @Ferrara-Spinors], and Refs. therein. Let $V$ be a real vector space of dimension $D=s+t$, with basis $\left\{ \mathbf{e}_{a}\right\} $ ($a=1,...,D$) and signature $\left( s,t\right)$: $V\cong \mathbb{R}^{s,t}$. Then, $V$ admits a non-degenerate symmetric bilinear form $\eta $ with signature $\left( s,t\right) $, which in the basis $\left\{ \mathbf{e}_{a}\right\} $ is given by the metric $$\eta _{ab}=\eta ^{ab}=\left( \underset{s}{\underbrace{+,...,+}},\underset{t}{\underbrace{-,...,-}}\right) . \label{metric}$$ The group $\mathrm{Spin}(V)$ is defined as the unique double-covering of the identity-connected component of $\mathrm{SO}(s,t)$. 
A spinor representation of $\mathrm{Spin}(V)^{\mathbb{C}}$ is an irreducible complex representation whose highest weights are the fundamental weights corresponding - within the usual convention - to the right extreme nodes in the Dynkin diagram. A spinor representation of $\mathrm{Spin}(V)$ over the reals $\mathbb{R}$ (which we will be interested in) is an irreducible representation over $\mathbb{R}$, whose complexification is a direct sum of spin representations. Two parameters, namely the signature $\rho :=s-t$ mod$(8)$ and the dimension $D=s+t$ mod$(8)$, classify the properties of the spinor representation (*cfr. e.g.* [@Spinor-Algebras], and Refs. therein). When $s=t$ (and thus $\rho =0$), the real space $V\cong \mathbb{R}^{s,s}$ is named *Klein space*, its signature $\left( s,t\right) =(s,s)$ *Kleinian* (or *ultrahyperbolic*), and the corresponding spin group $\mathrm{Spin}(s,s)$ is named *Klein group*. Pure Spinors ------------ The *Clifford algebra*[^5] $\mathcal{C}(s,t)$ associated to $V$ is generated by the $s+t$ Dirac gamma matrices $\Gamma ^{a}$’s obeying$$\left\{ \Gamma ^{a},\Gamma ^{b}\right\} =2\eta ^{ab}\mathbb{I},$$where $\mathbb{I}$ denotes the identity matrix. By $\psi $ we denote a $2^{(s+t)/2}$-dimensional spinor, namely a vector of the $2^{(s+t)/2}$-dimensional representation space $S$ of $\mathcal{C}(s,t)$; for $z\in V$, $\psi $ is defined by the Cartan equation [@Cartan]$$z_{a}\Gamma ^{a}\psi =0, \label{C-Eq}$$yielding the existence of a *totally null* plane of dimension $d\leqslant (s+t)/2$, denoted by $T_{d}(\psi )$. In $D=s+t$ *even* dimensions (as we are assuming throughout; *cfr.* the start of the present Section), $\psi $ does *not* provide an *irreducible* representation for $\mathrm{Spin}(s,t)$. 
A *“volume element”* in the Clifford algebra $\mathcal{C}(s,t)$ can be defined by introducing the gamma matrix $\Gamma _{s+t+1}:=\Gamma _{1}\Gamma _{2}...\Gamma _{s+t}$, which anticommutes with all $\Gamma _{a}$’s; it can be used to construct an invariant projector $\mathbb{P}_{\pm}$, and we denote by $\psi ^{\pm }$ the *chiral* (or *Weyl*) spinors, namely the $2^{(s+t)/2-1}$-dimensional spinors defined by $$\psi ^{\pm }:=\mathbb{P}_{\pm} \psi ,$$implying the corresponding chiral Cartan–Weyl equations to read$$z_{a}\Gamma ^{a}\mathbb{P}_{\pm} \psi =0. \label{CW-Eqs}$$Eqs. (\[CW-Eqs\]) define a $d$-dimensional totally null plane $T_{d}(\psi ^{\pm })$, and each of the chiral spinors $\psi ^{\pm }$ provides an *irreducible* representation for $\mathrm{Spin}(s,t)$. The existence of chiral spinors determines the splitting of the $\mathcal{C}(s,t)$-representation space $S$ (with generic element $\psi $) into the direct sum of two $\mathrm{Spin}(s,t)$-representation spaces $S^{\pm }$ (with generic elements $\psi ^{\pm }$): $$S=S^{+}\oplus S^{-}. \label{chiral-split}$$ For $d=(s+t)/2$, *i.e.* for the *maximal* dimension of $T_{d}(\psi ^{\pm })$, the corresponding Weyl spinor $\psi ^{\pm }$ is named *pure*, and $T_{\left( s+t\right) /2}(\psi ^{\pm })\cong \pm \psi ^{\pm }$ [@Cartan]. Cartan himself stressed the importance of this equivalence, which indeed establishes the crucial link between spinor geometry and *projective* Euclidean geometry. Actually, Cartan named such spinors *simple*, and the nowadays customary naming *pure* is due to Chevalley [@Chevalley]. 
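For the case of interest, $D=s+t=4$ with $s=t=2$, the Clifford relations and the chirality projectors can be verified in an explicit real representation (the particular tensor-product choice of gamma matrices below is our own; any other representation of $\mathcal{C}(2,2)$ would serve equally well):

```python
import numpy as np

s1 = np.array([[0., 1.], [1., 0.]])
s3 = np.array([[1., 0.], [0., -1.]])
eps = np.array([[0., 1.], [-1., 0.]])   # eps @ eps = -I

# a real 4x4 representation of the Clifford algebra C(2,2)
G = [np.kron(s1, np.eye(2)), np.kron(s3, np.eye(2)),
     np.kron(eps, s1), np.kron(eps, s3)]
eta = np.diag([1., 1., -1., -1.])

# {Gamma^a, Gamma^b} = 2 eta^{ab} I
for a in range(4):
    for b in range(4):
        assert np.allclose(G[a] @ G[b] + G[b] @ G[a],
                           2 * eta[a, b] * np.eye(4))

# volume element Gamma_5 = Gamma_1 Gamma_2 Gamma_3 Gamma_4: it squares to I
# and anticommutes with every Gamma^a, so P_pm = (I +- Gamma_5)/2 project
# onto the two 2-dimensional chiral (Weyl) subspaces
G5 = G[0] @ G[1] @ G[2] @ G[3]
assert np.allclose(G5 @ G5, np.eye(4))
Pp, Pm = (np.eye(4) + G5) / 2, (np.eye(4) - G5) / 2
assert np.allclose(Pp @ Pp, Pp) and np.allclose(Pp @ Pm, np.zeros((4, 4)))
```

That the representation can be chosen entirely real reflects the existence of Majorana-Weyl spinors in signature $(2,2)$.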
It should be remarked that the dimension of $T_{(s+t)/2}(\psi ^{\pm })$ increases linearly with $(s+t)/2$, while that of the pure $\psi ^{\pm }$’s increases as $2^{(s+t)/2-1}$; consequently, for high $(s+t)/2$’s, pure spinors will be given by the solutions of suitable (quadratic) constraining relations, named *pure spinor constraints*, which allow one to separate (in a $\mathrm{Spin}(V)$-invariant way) the space of pure spinors from the space of *impure* ones. In fact, *all* spinors are pure for $\left( s+t\right) /2=1,2,3$ (*i.e.* in $D=2,4,6$ dimensions), while for $(s+t)/2=4,5,6,7,...$ (*i.e.* in $D=8,10,12,14,...$ dimensions) pure spinors are subject to $1$, $10$, $66$, $364$, $...$ constraints, respectively; in general, in $D=s+t$ dimensions there are $\binom{s+t}{(s+t)/2-4}$ pure spinor constraints. For instance, in $D=s+t=10$ dimensions, there are $10$ pure spinor constraints, given by$$\psi \Gamma ^{a}\psi =0,~\forall a=1,...,10, \label{pure-D=10}$$which are especially relevant for the formulation of the *pure spinor formalism* of superstrings [@Berkovits] (see *e.g.* [@PSF] for an introduction). Classification -------------- The problem of classifying spinors is usually formulated in successive steps as : **(i)** determining the structure of the spinor orbits $\mathcal{O} $’s under the action of the $\mathrm{Spin}$ group; **(ii)** computing the isotropy (*stabilizer*) group $\mathcal{H}\subset \mathrm{Spin}$ of each orbit $\mathcal{O}$; and **(iii)** determining the algebra of invariants of the spinor representation space $S$.
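As a quick numerical cross-check of the constraint counting quoted above (the function name is ours, introduced only for this sketch):

```python
from math import comb

# Number of pure spinor constraints in D = s + t (even) dimensions,
# as quoted above: binom(D, D/2 - 4), with no constraints for D <= 6
# (where all spinors are pure).
def pure_spinor_constraints(D):
    k = D // 2 - 4
    return comb(D, k) if k >= 0 else 0

assert [pure_spinor_constraints(D) for D in (2, 4, 6)] == [0, 0, 0]
assert [pure_spinor_constraints(D) for D in (8, 10, 12, 14)] == [1, 10, 66, 364]
```

The exponential growth $2^{D/2-1}$ of the Weyl spinor dimension against the linear growth $D/2$ of the null-plane dimension is what makes these constraints unavoidable for $D\geq 8$.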
The *orbit* $\mathcal{O}_{\psi }$ of a well-defined spinor representative $\psi $ under the $\mathrm{Spin}$ group is a coset manifold, whose structure is determined by the isotropy group $\mathcal{H}_{\psi }$ of $\psi $ :$$\mathcal{O}_{\psi }\cong \frac{\mathrm{Spin}}{\mathcal{H}_{\psi }};$$in general, the embedding of $\mathcal{H}_{\psi }$ into $\mathrm{Spin}$ is neither maximal nor symmetric; thus, the coset $\mathcal{O}_{\psi }$ is usually non-symmetric. Classification of spinors was first studied by Chevalley [@Chevalley], who considered the orbit of pure spinors. He found that, in general, the orbit of pure spinors is the orbit of *least* dimension (or, equivalently, the stabilizer of pure spinors is the *largest* one among all spinor stabilizers). Chevalley’s analysis classifies spinors in all dimensions up to $D=s+t=6$; as mentioned above, in these cases *all spinors are pure*. Igusa has then classified spinors in dimensions up to $D=s+t=12$ [@Igusa]. For each spinor orbit, he provided a well-defined representative, as well as the stabilizer of the orbit itself. Using similar techniques, full classifications of spinors have been worked out in more than $12$ dimensions by Kac and Vinberg [@KV78], Popov [@Pop80], Zhu [@Zhu92], Antonyan and Elashvili [@AE82], but very little is known beyond $16$ dimensions. A nice summary of the spinor classification programme has recently been given in [@Charlton-Th] (for what concerns pure spinors, see also *e.g.* [@Pure-Spinors-Polish; @Furlan]). Spinors in *critical* dimensions $D=s+t=q+2=3,4,6,10$ have also been studied by Bryant [@Bryant-1; @Bryant-2], whose approach exploited the connection between spinors and the four normed division algebras $\mathbb{A}=\mathbb{R},\mathbb{C},\mathbb{H},\mathbb{O}$. As a physical application, such results have recently been applied to the gauging of $\mathcal{N}=(1,0)$ *magic* [@GST] chiral supergravities in $D=6$ (Lorentzian : $s=5,t=1$) space-time dimensions in [@Gunaydin-D=6].
\[Lorentz-vs-Klein\]Spinors and Space-Time Signature : Lorentz *versus* Klein {#spinors} ----------------------------------------------------------------------------- Before treating in some detail the irreducible spinor representations of the *Klein group* $\mathrm{Spin}(2,2)$ in Sec. \[Spin(2,2)\] (which will then be instrumental for the introduction of the Klein and conformal $D=(2,2)$ $\mathcal{N}=1$ superspaces in Sec. \[supermink\]), we now briefly recall the crucial differences between Lorentzian and Klein spinors in critical dimensions $D=q+2$ (for $q=2,4,8$), especially for what concerns the *representability* in terms of division and split algebras, respectively. In the specific case of $D=4$, this reasoning will also highlight the important differences between the approach exploited in the present investigation and the one considered in [@FL-1] (note that we will anticipate some results, which will then be obtained and discussed in subsequent Sections). As far as notation is concerned, by $M_{p}(\mathbb{R})$ ($M_{p}(\mathbb{C})$) we will denote the algebra of $p\times p$ matrices with entries in $\mathbb{R}$ (resp. $\mathbb{C}$) (consistently with (\[def-N\])). Instead, $M_{p}(\mathbb{H})$ will denote the algebra of $p\times p$ matrices with quaternionic entries; equivalently, its elements can be realized as $2p\times 2p$ complex matrices $M$ satisfying the *quaternionic condition* $$\overline{M}=-\Omega M\Omega , \label{finite-sympl}$$where the bar denotes conjugation in $\mathbb{C}$, and $\Omega $ is the $2p\times 2p$ symplectic metric (for $p=1$, $\Omega =\epsilon $ (\[epsilon\])). It should also be stressed that we will be considering the Clifford algebras as real algebras throughout (*cfr. e.g.* Tables 1 and 2 of [@Spinor-Algebras]). - $D=10$ ($\leftrightarrow q=8$, thus corresponding to $\mathbb{O}_{s}$ or $\mathbb{O}$).
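For the smallest case ($2\times 2$ complex matrices, $\Omega =\epsilon $), the quaternionic condition (\[finite-sympl\]) carves out a copy of $\mathbb{H}$ inside $M_{2}(\mathbb{C})$. A minimal numerical sketch (the parametrization by $\alpha ,\beta $ is ours, not taken from the text):

```python
import numpy as np

# The quaternionic condition for 2x2 complex matrices with Omega = eps:
# conj(M) = -eps M eps reproduces the quaternions H inside M_2(C).
eps = np.array([[0., 1.], [-1., 0.]])

def quat(alpha, beta):
    # classic embedding of a quaternion alpha + beta*j into M_2(C)
    return np.array([[alpha, beta], [-np.conj(beta), np.conj(alpha)]])

rng = np.random.default_rng(0)
a1, b1, a2, b2 = rng.normal(size=4) + 1j * rng.normal(size=4)
M, N = quat(a1, b1), quat(a2, b2)

# both satisfy the quaternionic condition ...
for Q in (M, N):
    assert np.allclose(np.conj(Q), -eps @ Q @ eps)
# ... the condition is preserved under multiplication (a real subalgebra) ...
assert np.allclose(np.conj(M @ N), -eps @ (M @ N) @ eps)
# ... and det realizes the quaternionic norm |alpha|^2 + |beta|^2
assert np.isclose(np.linalg.det(M), abs(a1)**2 + abs(b1)**2)
```

The closure under multiplication is what makes the condition define an algebra rather than just a linear subspace.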
Let us first consider the **Klein case** : $D=(5,5)$, namely $s=t=5$, and thus $\rho =0$. The Clifford algebra $\mathcal{C}(5,5)$, as a real algebra, is isomorphic to *real* $32\times 32$ matrices :$$\mathcal{C}(5,5)\cong M_{32}(\mathbb{R}),$$with dim$_{\mathbb{R}}\mathcal{C}(5,5)=32^{2}=2^{10}$. The spinor representation space $S$ of $\mathcal{C}(5,5)$ is *real*, with real dimension $2^{5}=32$, and it splits into chiral spinor representation spaces $S^{\pm }$ as given by (\[chiral-split\]). Each of $S^{\pm }$ is *real*, with real dimension $2^{4}=16$ : namely, it is a *Majorana-Weyl* spinor representation space. After[^6] [@Sudbery] (also *cfr.* [@Kugo-Townsend], and Refs. therein), a Majorana-Weyl spinor $\psi ^{\pm }$ of $\mathrm{Spin}(5,5)\cong \mathrm{SL}(2,\mathbb{O}_{s})$ can be represented by a vector in[^7] $\mathbb{O}_{s}^{2}$ (from (\[A-tilde\]), recall that $\widetilde{\mathcal{A}}_{8}\cong Id$) :$$\left. \begin{array}{c} \psi ^{+} \\ \psi ^{-}\end{array}\right\} \cong \mathbb{O}_{s}^{2}\cong \left\{ \begin{array}{c} \mathbf{16} \\ \mathbf{16}^{\prime }\end{array}\right. ~\text{of~}\mathrm{Spin}(5,5)\mathbf{.}$$Let us then consider the **Lorentz case** : $D=(9,1)$, namely $s=9$, $t=1$, and thus $\rho =8\equiv 0$ mod$\left( 8\right) $. Since $\rho $ and $D$ are the same as in the Klein case previously considered, the spinor properties coincide. Indeed, the Clifford algebra $\mathcal{C}(9,1)$, as a real algebra, is isomorphic to *real* $32\times 32$ matrices :$$\mathcal{C}(9,1)\cong M_{32}(\mathbb{R}),$$with dim$_{\mathbb{R}}\mathcal{C}(9,1)=32^{2}=2^{10}$, and the spinor representation space $S$ of $\mathcal{C}(9,1)$ is *real*, with real dimension $2^{5}=32$. Each of the chiral spinor representation spaces $S^{\pm }$ is *Majorana-Weyl*, with real dimension $2^{4}=16$. Once again, after [@Sudbery] (also *cfr.* [@Kugo-Townsend], and Refs.
therein), a Majorana-Weyl spinor $\psi ^{\pm }$ of $\mathrm{Spin}(9,1)\cong \mathrm{SL}(2,\mathbb{O})$ can be represented by a vector in $\mathbb{O}^{2}$ (from (\[A\]), recall that $\mathcal{A}_{8}\cong Id$) :$$\left. \begin{array}{c} \psi ^{+} \\ \psi ^{-}\end{array}\right\} \cong \mathbb{O}^{2}\cong \left\{ \begin{array}{c} \mathbf{16} \\ \mathbf{16}^{\prime }\end{array}\right. ~\text{of~}\mathrm{Spin}(9,1)\mathbf{.}$$ - $D=6$ ($\leftrightarrow q=4$, thus corresponding to $\mathbb{H}_{s}$ or $\mathbb{H}$). Let us first consider the **Klein case** : $D=(3,3)$, namely $s=t=3$, and thus $\rho =0$. The Clifford algebra $\mathcal{C}(3,3)$, as a real algebra, is isomorphic to *real* $8\times 8$ matrices :$$\mathcal{C}(3,3)\cong M_{8}(\mathbb{R}),$$with dim$_{\mathbb{R}}\mathcal{C}(3,3)=8^{2}=2^{6}$. The spinor representation space $S$ of $\mathcal{C}(3,3)$ is *real*, with real dimension $2^{3}=8$, and it splits into chiral spinor representation spaces $S^{\pm }$, which are also *real*, with real dimension $2^{2}=4$ : namely, they are *Majorana-Weyl* $4$-dimensional spinor representation spaces. Therefore, a generic element $\psi =\psi ^{+}\oplus \psi ^{-}\in S$, namely a *non-chiral* spinor of $\mathrm{Spin}(3,3)\cong \mathrm{SL}(4,\mathbb{R})\cong \mathrm{SL}(2,\mathbb{H}_{s})\cong \widetilde{\mathrm{Sp}}(4,\mathbb{C}_{s})$ (*cfr.* (\[iso-3\]) below), can be represented by a vector in $\mathbb{H}_{s}^{2}$ :$$\psi =\psi ^{+}\oplus \psi ^{-}\cong \mathbb{H}_{s}^{2}\cong \left( \mathbf{4,2}\right) ~\text{of~}\mathrm{Spin}(3,3)\times \widetilde{\mathcal{A}}_{4}\mathbf{,} \label{(3,3)}$$where $\widetilde{\mathcal{A}}_{4}\cong \mathrm{SL}(2,\mathbb{R})\cong \mathrm{Sp}(2,\mathbb{R})$ has been recalled from (\[A-tilde\]). Note that the presence of a non-trivial $\widetilde{\mathcal{A}}_{q}\neq Id$ (\[A-tilde\]) is crucial for the consistency of the spinor properties with the representability in terms of split algebras.
Let us then consider the **Lorentz case** : $D=(5,1)$, namely $s=5$, $t=1$, and thus $\rho =4$. The Clifford algebra $\mathcal{C}(5,1)$, as a real algebra, is isomorphic to *quaternionic* $4\times 4$ matrices (in the sense specified above) :$$\mathcal{C}(5,1)\cong M_{4}(\mathbb{H}),$$with dim$_{\mathbb{H}}\mathcal{C}(5,1)=4^{2}=2^{4}$, *i.e.* dim$_{\mathbb{R}}\mathcal{C}(5,1)=2^{6}$. Thus, the spinor representation space $S$ of $\mathcal{C}(5,1)$ is *quaternionic*, with *complex* dimension $2^{3}=8$. Each of the chiral spinor representation spaces $S^{\pm }$ is *quaternionic*, with *complex* dimension $2^{2}=4$. After [@Sudbery] (also *cfr.* [@Kugo-Townsend], and Refs. therein), a *quaternionic* (also named *symplectic-Majorana-Weyl*) spinor $\psi ^{\pm }$ of $\mathrm{Spin}(5,1)\cong \mathrm{SU}^{\ast }(4)\cong \mathrm{SL}(2,\mathbb{H})$ can be represented by a vector in $\mathbb{H}^{2}$ :$$\left. \begin{array}{c} \psi ^{+} \\ \psi ^{-}\end{array}\right\} \cong \mathbb{H}^{2}\cong \left\{ \begin{array}{c} \left( \mathbf{4,2}\right) \\ \left( \overline{\mathbf{4}}\mathbf{,2}\right)\end{array}\right. ~\text{of~}\mathrm{Spin}(5,1)\times \mathcal{A}_{4}\mathbf{,} \label{(5,1)}$$where $\mathcal{A}_{4}\cong \mathrm{SU}(2)\cong \mathrm{USp}(2)$ has been recalled from (\[A\]). Again, let us point out that the presence of a non-trivial $\mathcal{A}_{q}\neq Id$ (\[A\]) is crucial for the consistency of the spinor properties with the representability in terms of division algebras[^8]. Note that in (\[(5,1)\]) the bar denotes the conjugation in $\mathbb{C}$. - $D=4$ ($\leftrightarrow q=2$, thus corresponding to $\mathbb{C}_{s}$ or $\mathbb{C}$). Let us first consider the **Klein case** : $D=(2,2)$, namely $s=t=2$, and thus $\rho =0$; this will be the case considered in detail in the next Sections. The Clifford algebra $\mathcal{C}(2,2)$, as a real algebra, is isomorphic to *real* $4\times 4$ matrices :$$\mathcal{C}(2,2)\cong M_{4}(\mathbb{R}), \label{C(2,2)}$$with dim$_{\mathbb{R}}\mathcal{C}(2,2)=4^{2}=2^{4}$.
The spinor representation space $S$ of $\mathcal{C}(2,2)$ is *real*, with real dimension $2^{2}=4$, and it splits into chiral spinor representation spaces $S^{\pm }$, which are also *real*, with real dimension $2$ : namely, they are *Majorana-Weyl* $2$-dimensional spinor representation spaces. Thus, a generic element $\psi =\psi ^{+}\oplus \psi ^{-}\in S$, namely a *non-chiral* spinor of $\mathrm{Spin}(2,2)\cong \mathrm{SL}(2,\mathbb{R})\times \mathrm{SL}(2,\mathbb{R})\cong \mathrm{SL}(2,\mathbb{C}_{s})$ (*cfr.* (\[isso\]) below), can be represented by a vector in $\mathbb{C}_{s}^{2}$ :$$\psi =\psi ^{+}\oplus \psi ^{-}\cong \mathbb{C}_{s}^{2}\cong \left( \mathbf{2,1}\right) _{+}+\left( \mathbf{1},\mathbf{2}\right) _{-}~\text{of~}\mathrm{Spin}(2,2)\times \widetilde{\mathcal{A}}_{2}\mathbf{,} \label{(2,2)}$$where the “$+$” and “$-$” subscripts denote weights with respect to $\widetilde{\mathcal{A}}_{2}\cong \mathrm{SO}(1,1)$ (*cfr.* (\[A-tilde\])). Again, we observe that the presence of a non-trivial $\widetilde{\mathcal{A}}_{q}\neq Id$ (\[A-tilde\]) is crucial for the consistency of the spinor properties with the representability in terms of split algebras. Also, note that the *non-simple* nature of $\mathrm{Spin}(2,2)\cong \mathrm{SL}(2,\mathbb{R})\times \mathrm{SL}(2,\mathbb{R})$ yields the spinor split $\psi =\left( \mathbf{2,1}\right) _{+}+\left( \mathbf{1},\mathbf{2}\right) _{-}$, as well as the chirality interpretation of $\widetilde{\mathcal{A}}_{2}$ itself (see below). Let us then consider the **Lorentz case** : $D=(3,1)$, namely $s=3$, $t=1$, and thus $\rho =2$. The Clifford algebra $\mathcal{C}(3,1)$, as a real algebra, is isomorphic to *real* $4\times 4$ matrices :$$\mathcal{C}(3,1)\cong M_{4}(\mathbb{R}),$$with dim$_{\mathbb{R}}\mathcal{C}(3,1)=4^{2}=2^{4}$. The spinor representation space $S$ of $\mathcal{C}(3,1)$ is *real*, with real dimension $2^{2}=4$. Each of the chiral spinor representation spaces $S^{\pm }$ is *complex*, with *complex* dimension $2$.
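The contrast between the two $D=4$ cases can be made concrete: both $\mathcal{C}(2,2)$ and $\mathcal{C}(3,1)$ are isomorphic to $M_{4}(\mathbb{R})$, but the volume element squares to $+\mathbb{I}$ in Kleinian signature (yielding real, rank-$2$ chiral projectors, hence Majorana-Weyl spinors) and to $-\mathbb{I}$ in Lorentzian signature (so that chiral projectors require complexification). The following sketch verifies this, using a block realization of the gamma matrices chosen by us for illustration (it need not coincide with the paper's conventions):

```python
import numpy as np

# Real 4x4 gamma matrices for signatures (2,2) and (3,1), from 2x2 blocks.
I2 = np.eye(2)
s1 = np.array([[0., 1.], [1., 0.]])
s3 = np.array([[1., 0.], [0., -1.]])
eps = np.array([[0., 1.], [-1., 0.]])   # eps^2 = -I

def check(Gammas, signs):
    # verify {Gamma_a, Gamma_b} = 2 eta_ab I with eta = diag(signs),
    # then return the volume element Gamma_1 Gamma_2 Gamma_3 Gamma_4
    I4 = np.eye(4)
    for a, Ga in enumerate(Gammas):
        for b, Gb in enumerate(Gammas):
            target = 2 * (signs[a] if a == b else 0) * I4
            assert np.allclose(Ga @ Gb + Gb @ Ga, target)
    return Gammas[0] @ Gammas[1] @ Gammas[2] @ Gammas[3]

# Klein signature (2,2): eta = diag(+,+,-,-)
G5_klein = check([np.kron(s1, I2), np.kron(s3, I2),
                  np.kron(eps, s1), np.kron(eps, s3)], [1, 1, -1, -1])
# volume element squares to +I -> real rank-2 projectors: Majorana-Weyl
assert np.allclose(G5_klein @ G5_klein, np.eye(4))
assert np.linalg.matrix_rank((np.eye(4) + G5_klein) / 2) == 2

# Lorentz signature (3,1): eta = diag(+,+,+,-)
G5_lor = check([np.kron(s1, I2), np.kron(s3, I2),
                np.kron(eps, eps), np.kron(eps, s1)], [1, 1, 1, -1])
# volume element squares to -I -> no real chiral projector: Weyl spinors complex
assert np.allclose(G5_lor @ G5_lor, -np.eye(4))
```

This is precisely the mechanism behind the different representability ($\mathbb{C}_{s}^{2}$ versus $\mathbb{C}^{2}$) discussed in the text.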
Therefore, a *chiral complex* spinor $\psi ^{+}$ (or $\psi ^{-}$) of $\mathrm{Spin}(3,1)\cong \mathrm{SL}(2,\mathbb{C})$ can be represented by a vector in $\mathbb{C}^{2}$ :$$\begin{aligned} \psi ^{+} &\cong &\mathbb{C}^{2}\cong \mathbf{2}_{\mathbb{C}}\text{ of }\mathrm{SL}(2,\mathbb{C})\equiv \left( \mathbf{2,1}\right) _{+}+\left( \mathbf{1},\mathbf{2}\right) _{-}~\text{of~}\mathrm{Spin}(3,1)\times \mathcal{A}_{2}\mathbf{;} \label{(3,1)} \\ \psi ^{-} &\cong &\overline{\psi ^{+}}\cong \mathbb{C}^{2}\cong \overline{\mathbf{2}}_{\mathbb{C}}\text{ of }\mathrm{SL}(2,\mathbb{C})\equiv \left( \mathbf{2,1}\right) _{-}+\left( \mathbf{1},\mathbf{2}\right) _{+}~\text{of~}\mathrm{Spin}(3,1)\times \mathcal{A}_{2}\mathbf{;} \label{(3,1)-2}\end{aligned}$$where the “$+$” and “$-$” subscripts here denote charges with respect to $\mathcal{A}_{2}\cong U(1)$ (*cfr.* (\[A\])). Again, we stress that the presence of a non-trivial $\mathcal{A}_{q}\neq Id$ (\[A\]) is crucial for the consistency of the spinor properties with the representability in terms of division algebras. The comparison between (\[(2,2)\]) and (\[(3,1)\])-(\[(3,1)-2\]) explains the necessary differences between the approach exploited in the present investigation and the one considered in [@FL-1]. Note that in (\[(3,1)-2\]) the bar in $\overline{\mathbf{2}}_{\mathbb{C}}$ denotes the conjugation in $\mathbb{C}$, whereas the bar in $\overline{\psi ^{+}}$ denotes the spinor conjugation, which in turn - because of the representability $\psi ^{+}\cong \mathbb{C}^{2}$ - is *induced* by the conjugation in $\mathbb{C}$ itself. \[Spin(2,2)\]Vectors and Spinors of the Klein group $\mathrm{Spin}(2,2)$ ======================================================================== We are now going to consider in some detail the irreducible spinor representations of the *Klein group* $\mathrm{Spin}(2,2)$, namely of $\mathrm{Spin}(V)$, where $V$ is the *Klein space* $\mathbf{M}^{2,2}\cong \mathbb{R}^{2,2}$.
As mentioned above, this latter is a $4$-dimensional real vector space with Kleinian signature, *i.e.* with $s=2$ spacelike dimensions and $t=2$ timelike dimensions (thus, having $\rho =0$). As reported in Sec. \[Lorentz-vs-Klein\], the theory of spinor algebras (see *e.g.* [@Spinor-Algebras]) yields that the *non-chiral* spinor representation $\psi $ is *real*, of dimension $2^{D/2}=4$. This provides an irreducible representation of the Clifford algebra $\mathcal{C}(2,2)$ (\[C(2,2)\]); however, since $D=4$ is *even*, such a representation $\psi $ is *not* irreducible under $\mathrm{Spin}(2,2)$, and the corresponding representation space $S$ splits into two $\mathrm{Spin}(2,2)$-irreducible *Majorana-Weyl* spinor subspaces[^9], as given by (\[chiral-split\]), each of real dimension $2$. Thus, one can reconsider (\[(2,2)\]), writing$$\psi_{(2,2)} =\underset{\psi ^{+}}{(\mathbf{2},\mathbf{1})_{+}}\oplus \underset{\psi ^{-}}{(\mathbf{1},\mathbf{2})_{-}}~\cong \left( \begin{array}{c} a \\ c\end{array}\right) ,~a,c\in \mathbb{C}_{s}. \label{Weyl}$$As noted below (\[(2,2)\]), the “$+$” and “$-$” subscripts in (\[Weyl\]) denote weights with respect to $\widetilde{\mathcal{A}}_{2}\cong \mathrm{SO}(1,1)$ (\[A-tilde\]); on the other hand, they also represent the chirality, since $\psi ^{+}$ and $\psi ^{-}$ are *Majorana-Weyl* spinors of real dimension $2$ with *opposite* chirality. Thus, in $D=4$ Kleinian dimensions $\widetilde{\mathcal{A}}_{2}\cong \mathrm{SO}(1,1)$, commuting with the Klein group $\mathrm{Spin}\left( 2,2\right) $, can actually be identified with the chirality operator in $\mathbb{C}_{s}^{2}$. Summarizing, $\mathrm{Spin}\left( 2,2\right) \times \mathrm{SO}(1,1)$ has the following three representations of (real) dimension $4$ : 1. The (*non-chiral*) *spinor* representation $\psi $ (\[Weyl\]). 2.
Its *conjugate* *spinor* representation$$\overline{\psi }_{(2,2)} =\underset{\overline{\psi ^{+}}}{(\mathbf{2},\mathbf{1})_{-}}\oplus \underset{\overline{\psi ^{-}}}{(\mathbf{1},\mathbf{2})_{+}}~\cong \left( \begin{array}{c} \overline{a} \\ \overline{c}\end{array}\right) , \label{Weyl-conjug}$$where it is immediate to realize that, by virtue of the representability of $\psi $ as $\mathbb{C}_{s}^{2}$, the conjugation in $S$ is *induced* by the conjugation[^10] (\[conjug-Cs\]) in $\mathbb{C}_{s}$. 3. The *vector* $x:=\left( \mathbf{2},\mathbf{2}\right) _{0}$, which (differently from the spinor representations at points 1 and 2 above) *descends* to an irreducible representation of $\mathrm{SO}(2,2)\left( \times \mathrm{SO}(1,1)\right) $. Consistently, it is given by the tensor product of the Majorana-Weyl spinors $\psi ^{+}$ and $\psi ^{-}$ (or of their conjugates; *cfr. e.g.* Table 3 of [@Spinor-Algebras], with $D=4$ and $k=1$):$$\underset{\left( \mathbf{2},\mathbf{2}\right) _{0}}{x}~=~\underset{(\mathbf{2},\mathbf{1})_{+}}{\psi ^{+}}\otimes \underset{(\mathbf{1},\mathbf{2})_{-}}{\psi ^{-}}~=~\underset{(\mathbf{2},\mathbf{1})_{-}}{\overline{\psi ^{+}}}\otimes \underset{(\mathbf{1},\mathbf{2})_{+}}{\overline{\psi ^{-}}}. \label{vector}$$$x$ (\[vector\]) can be consistently represented as an element of $J_{2}^{\mathbb{C}_{s}}$, as follows. In the standard basis of $\mathbf{M}^{2,2}$, $x^{a}=(x^{1},\cdots ,x^{4})$ ($a=1,...,4=s+t$); then, its components can be rearranged as entries of the following $2\times 2$ Hermitian matrix (recall (\[matr\]) and (\[iso-2\])):$$\mathcal{X}:=\begin{pmatrix} x_{+} & \overline{a} \\ a & x_{-}\end{pmatrix}\in J_{2}^{\mathbb{C}_{s}}, \label{X-call}$$where $a:=x^{3}+jx^{2}\in \mathbb{C}_{s}$, $\mathbb{R}\ni x_{\pm }:=x^{1}\pm x^{4}$, and the bar denotes the conjugation in $\mathbb{C}_{s}$ (see (\[conjug-Cs\])).
The so-called *trace reversal* $\widetilde{\mathcal{X}}$ of $\mathcal{X}$ is defined as follows :$$\widetilde{\mathcal{X}}:=-\begin{pmatrix} x_{-} & -\overline{a} \\ -a & x_{+}\end{pmatrix}\in J_{2}^{\mathbb{C}_{s}}\,. \label{X-call-tilde}$$Then, by recalling (\[norm-Cs\]), we observe that $$\text{det}\mathcal{X}=\left( x^{1}\right) ^{2}-\left( x^{4}\right) ^{2}-|a|^{2}=\left( x^{1}\right) ^{2}+\left( x^{2}\right) ^{2}-\left( x^{3}\right) ^{2}-\left( x^{4}\right) ^{2}=\eta _{ab}x^{a}x^{b},$$ or equivalently $$\mathcal{X}\widetilde{\mathcal{X}}=\widetilde{\mathcal{X}}\mathcal{X}=-\eta _{ab}x^{a}x^{b}\,\mathbb{I}\,,$$ where the metric $\eta _{ab}=\eta ^{ab}$ is given by (\[metric\]) with $s=t=2$, and $\mathbb{I}$ denotes the $2\times 2$ identity matrix. In other words, recalling (\[norm-J2\]), one can conclude that the squared norm $\left\vert x\right\vert ^{2}$ of $x$ (as a vector in $\mathbf{M}^{2,2}$) is given by the quadratic norm of $x$ as an element (\[X-call\]) of $J_{2}^{\mathbb{C}_{s}}$ itself :$$\left\vert x\right\vert ^{2}=x^{a}x^{b}\eta _{ab}=\text{det}\mathcal{X}=\mathbf{N}\left( x\right) . \label{vector-norm}$$ Let us now consider the following transformations : $$\begin{array}{rcl} J_{2}^{\mathbb{C}_{s}} & \rightarrow & J_{2}^{\mathbb{C}_{s}}, \\[3mm] \mathcal{X} & \mapsto & \lambda ^{\dag }\mathcal{X}\lambda =:\mathcal{X}^{\prime }\,,\,\,\,\,\,\lambda \in M_{2}(\mathbb{\mathbb{C}}_{s}),\end{array} \label{transfo-2}$$ where $\dag $ stands for transposition times conjugation (\[conjug-Cs\]) in the underlying split algebra $\mathbb{\mathbb{C}}_{s}$. *Klein transformations* are defined as those transformations (\[transfo-2\]) in which $\lambda \in \mathrm{SL}(2,\mathbb{C}_{s})$; it is then immediate to realize that such transformations induce orthogonal transformations in $\mathbf{M}^{2,2}$, since they do preserve the determinant of $\mathcal{X}$, and thus $\left\vert x\right\vert ^{2}$.
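The identity (\[vector-norm\]) can be verified directly in split-complex arithmetic; the encoding of $\mathbb{C}_{s}$ as pairs below is our own, introduced only for this check:

```python
import numpy as np

# Split-complex numbers p = p0 + j p1 with j^2 = +1, modeled as pairs.
def smul(p, q):
    return (p[0]*q[0] + p[1]*q[1], p[0]*q[1] + p[1]*q[0])

def sconj(p):
    return (p[0], -p[1])

rng = np.random.default_rng(0)
x1, x2, x3, x4 = rng.normal(size=4)
a = (x3, x2)                   # a := x^3 + j x^2
xp, xm = x1 + x4, x1 - x4      # x_pm := x^1 +- x^4

# det X = x_+ x_- - a*conj(a); the product a*conj(a) = (x3)^2 - (x2)^2 is real
aa = smul(a, sconj(a))
assert abs(aa[1]) < 1e-12
det_X = xp * xm - aa[0]

# det X reproduces the (2,2) quadratic form eta_ab x^a x^b
assert np.isclose(det_X, x1**2 + x2**2 - x3**2 - x4**2)
```

The reality of $a\overline{a}$ is what makes $\det \mathcal{X}$ a well-defined real quadratic form on $\mathbf{M}^{2,2}$.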
In particular, $\mathrm{SL}(2,\mathbb{C}_{s})\cong \mathrm{SL}(2,\mathbb{R})\times \mathrm{SL}(2,\mathbb{R})$ doubly covers $\mathrm{SO}(2,2)$, and it is then possible to identify it (or, more precisely, its identity-connected component) with the *Spin* group $\mathrm{Spin}(2,2)$, which we anticipated above to be named *Klein group* in $4$ dimensions. In other words, $\mathrm{SL}(2,\mathbb{C}_{s})$ acts naturally on $J_{2}^{\mathbb{C}_{s}}$ as the spin covering of $\mathrm{SO}(2,2)$. Thus, the following group isomorphisms hold:$$\mathrm{Spin}(2,2)\cong \mathrm{SL}(2,\mathbb{C}_{s})\cong \mathrm{SL}(2,\mathbb{R})\times \mathrm{SL}(2,\mathbb{R}). \label{isso}$$ As we have mentioned above, in signature $\left( 2,2\right) $ spinors are Majorana, and they are identified with vectors in $\mathbb{C}_{s}^{2}$, namely with the defining representation of $\mathrm{SL}(2,\mathbb{C}_{s})$. It is here instructive to observe that, as an $\mathrm{SL}(2,\mathbb{C}_{s})$-module, $\mathbb{C}_{s}^{2}$ is *not* irreducible.
This can be realized by decomposing every vector in $\mathbb{C}_{s}^{2}$ according to (\[decomposition\]) as $$\underbrace{\left( \begin{matrix} a \\ c\end{matrix}\right) }_{\psi }=\underbrace{\left( \begin{matrix} \alpha _{+} \\ \gamma _{+}\end{matrix}\right) }_{\psi _{\mathcal{E}}}\mathcal{E}+\underbrace{\left( \begin{matrix} \alpha _{-} \\ \gamma _{-}\end{matrix}\right) }_{\psi _{\overline{\mathcal{E}}}}\overline{\mathcal{E}}\,,\,\,\,\,\,\,\,\,\,\alpha _{\pm },\gamma _{\pm }\in \mathbb{R};$$analogously, any element of $\mathrm{M}_{2}(\mathbb{C}_{s})$ can be split as follows : $$\underbrace{\left( \begin{matrix} a & b \\ c & d\end{matrix}\right) }_{M}=\underbrace{\left( \begin{matrix} \alpha _{+} & \beta _{+} \\ \gamma _{+} & \delta _{+}\end{matrix}\right) }_{M_{\mathcal{E}}}\mathcal{E}+\underbrace{\left( \begin{matrix} \alpha _{-} & \beta _{-} \\ \gamma _{-} & \delta _{-}\end{matrix}\right) }_{M_{\overline{\mathcal{E}}}}\overline{\mathcal{E}}.$$ Consider now a matrix $M=M_{\mathcal{E}}\mathcal{E}+M_{\overline{\mathcal{E}}}\overline{\mathcal{E}}\in \mathrm{SL}(2,\mathbb{C}_{s})$; then, $2M_{\mathcal{E}}\in \mathrm{SL}(2,\mathbb{R})$ and $2M_{\overline{\mathcal{E}}}\in \mathrm{SL}(2,\mathbb{R})$, and the $\mathrm{SL}(2,\mathbb{C}_{s})$-module $\mathbb{C}_{s}^{2}$ splits into two irreducible submodules, on each of which $M$ acts by an $\mathrm{SL}(2,\mathbb{R})$ matrix. To see this, we observe that $\det M=2\det M_{\mathcal{E}}\,\mathcal{E}+2\det M_{\overline{\mathcal{E}}}\,\overline{\mathcal{E}}$, from which one obtains that the unimodularity of $M$ (*i.e.* $\det M=1$) implies $\det M_{\mathcal{E}}=\det M_{\overline{\mathcal{E}}}=\frac{1}{4}$, and thus $2M_{\mathcal{E}}$ and $2M_{\overline{\mathcal{E}}}$ are $\mathrm{SL}(2,\mathbb{R})$-matrices.
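These statements can be checked numerically: building $M\in \mathrm{SL}(2,\mathbb{C}_{s})$ from two $\mathrm{SL}(2,\mathbb{R})$ matrices via the idempotent decomposition, one recovers $\det M=1$ and $\det M_{\mathcal{E}}=\frac{1}{4}$. The encoding below (a pair $(R,J)$ standing for $R+jJ$) is our own:

```python
import numpy as np

# Split-complex 2x2 matrices as pairs (R, J) representing R + jJ, j^2 = +1.
def mat_det(M):
    # determinant over C_s, returned as a split-complex pair (re, im)
    (R, J) = M
    re = R[0,0]*R[1,1] + J[0,0]*J[1,1] - R[0,1]*R[1,0] - J[0,1]*J[1,0]
    im = R[0,0]*J[1,1] + J[0,0]*R[1,1] - R[0,1]*J[1,0] - J[0,1]*R[1,0]
    return (re, im)

A = np.array([[1., 2.], [0., 1.]])    # in SL(2,R)
B = np.array([[2., 0.], [0.5, 0.5]])  # in SL(2,R): det = 1
assert np.isclose(np.linalg.det(A), 1) and np.isclose(np.linalg.det(B), 1)

# M = (A/2) E + (B/2) Ebar, with E = 1+j, Ebar = 1-j:
# real part (A+B)/2, j-part (A-B)/2
M = ((A + B) / 2, (A - B) / 2)
assert np.allclose(mat_det(M), (1., 0.))     # M lies in SL(2,C_s)

M_E = A / 2                                  # the E-component of M
assert np.isclose(np.linalg.det(M_E), 0.25)  # so 2 M_E is in SL(2,R)
```

Note the factors of $2$, which originate from $\mathcal{E}^{2}=2\mathcal{E}$ (with $\mathcal{E}=1+j$) rather than $\mathcal{E}^{2}=\mathcal{E}$.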
Then, the action on any spinor splits as $$M\psi =(2M_{\mathcal{E}})\,\psi _{\mathcal{E}}+(2M_{\overline{\mathcal{E}}})\,\psi _{\overline{\mathcal{E}}}\,.$$Consistently with (\[Weyl\]), we thus identify $\psi _{\mathcal{E}}$ and $\psi _{\overline{\mathcal{E}}}$ with the Majorana-Weyl spinors $\psi ^{+}$ and $\psi ^{-}$ of opposite chirality, respectively. \[reps\]Spinor Orbits and Representatives ----------------------------------------- Let us now discuss how the linear action of $\mathrm{Spin}(2,2)\left( \times \mathrm{SO}(1,1)\right) $ on the spinor $\psi =(\mathbf{2},\mathbf{1})_{+}\oplus (\mathbf{1},\mathbf{2})_{-}$ (or, equivalently, on its conjugate $\overline{\psi }=\left( \mathbf{2,1}\right) _{-}+\left( \mathbf{1},\mathbf{2}\right) _{+}$) determines the *stratification* of the corresponding spinor representation space $S$ into orbits. The crucial outcome of our analysis (in agreement with literature; *cfr. e.g.* [@Bryant-1; @Bryant-2], and Refs. therein) is that $\mathrm{Spin}(2,2)\cong \mathrm{SL}(2,\mathbb{C}_{s})$ (*cfr.* (\[isso\])) does *not* act transitively on $\mathbb{C}_{s}^{2}$. We start by noting that the orbit of[^11] $e_{1}:=(1,0)^{t}\in \mathbb{C}_{s}^{2}$ contains all elements of the form $(a,c)^{t}$ with $a$ *and/or* $c$ *invertible*; in fact : $$\begin{aligned} \begin{pmatrix} a & 0 \\ c & a^{-1}\end{pmatrix}\begin{pmatrix} 1 \\ 0\end{pmatrix} &=&\begin{pmatrix} a \\ c\end{pmatrix}, \\ \begin{pmatrix} a & -c^{-1} \\ c & 0\end{pmatrix}\begin{pmatrix} 1 \\ 0\end{pmatrix} &=&\begin{pmatrix} a \\ c\end{pmatrix}.\end{aligned}$$Thus, we are henceforth going to deal only with elements $(a,c)^{t}\in \mathbb{C}_{s}^{2}$ with *both* $a$ and $c$ *non-invertible*. By recalling the remark below (\[Cs-inv\]) and Eq. (\[res\]), this amounts to considering *both* $a$ and $c$ either zero or $u\mathcal{E}$ or $u^{\prime }\overline{\mathcal{E}}$, with $u,u^{\prime }\in \mathbb{C}_{s}^{\times }$ and $\mathcal{E}:=1+j$.
We notice that $$\begin{pmatrix} u^{-1} & 0 \\ 0 & u\end{pmatrix}\begin{pmatrix} u\mathcal{E} \\ u^{\prime }\mathcal{E}\end{pmatrix}=\begin{pmatrix} \mathcal{E} \\ uu^{\prime }\mathcal{E}\end{pmatrix};$$furthermore, $(\mathcal{E},v\mathcal{E})^{t}$ lies in the orbit of $(\mathcal{E},\mathcal{E})^{t}$, because $$\begin{pmatrix} 1 & 0 \\ 1-v & 1\end{pmatrix}\begin{pmatrix} \mathcal{E} \\ v\mathcal{E}\end{pmatrix}=\begin{pmatrix} \mathcal{E} \\ \mathcal{E}\end{pmatrix}.$$ Therefore, up to conjugation in $S\cong \mathbb{C}_{s}^{2}$ (induced by the conjugation in $\mathbb{C}_{s}$, and mapping the spinor $\psi $ into its conjugate $\overline{\psi }$), in $S$ one needs to consider (besides $(0,0)^{t}$ and $(1,0)^{t}$) only the following elements: $$\mathbf{1}:\begin{pmatrix} 0 \\ \mathcal{E}\end{pmatrix},\quad \mathbf{2}:\begin{pmatrix} \mathcal{E} \\ 0\end{pmatrix},\quad \mathbf{3}:\begin{pmatrix} \mathcal{E} \\ \overline{\mathcal{E}}\end{pmatrix},\quad \mathbf{4}:\begin{pmatrix} \mathcal{E} \\ \mathcal{E}\end{pmatrix};\quad \label{I-IV}$$in other words, one can disregard the multiplication by invertible split complex numbers (as well as the conjugation in $\mathbb{C}_{s}^{2}$) when dealing with the stratification of the spinor representation space $S$. By definition of group orbit, in order to establish the *stratification structure* of $\mathbb{C}_{s}^{2}$ under the action of the Klein group in four dimensions, we have to determine which elements in $\mathbb{C}_{s}^{2}$ are connected through the action of an element $g\in \mathrm{SL}(2,\mathbb{C}_{s})$. Let us then analyze the elements listed in (\[I-IV\]) : 1. This element belongs to the orbit of $(\mathcal{E},\mathcal{E})^{t}$, because : $$\begin{pmatrix} (1+\overline{\mathcal{E}})/2 & (1-\mathcal{E})/2 \\ (1-\overline{\mathcal{E}})/2 & (1+\mathcal{E})/2\end{pmatrix}\begin{pmatrix} \mathcal{E} \\ \mathcal{E}\end{pmatrix}=\begin{pmatrix} 0 \\ 2\mathcal{E}\end{pmatrix}. \label{ep}$$ 2. 
A similar argument also shows that $(\mathcal{E},0)^{t}$ is in the orbit of $(\mathcal{E},\mathcal{E})^{t}$. 3. Quite surprisingly, the element $(\mathcal{E},\overline{\mathcal{E}})^{t}$ can be proved to lie in the orbit of $(1,0)^{t}$, because : $$\begin{pmatrix} \mathcal{E} & -1/2 \\ \overline{\mathcal{E}} & 1/2\end{pmatrix}\begin{pmatrix} 1 \\ 0\end{pmatrix}=\begin{pmatrix} \mathcal{E} \\ \overline{\mathcal{E}}\end{pmatrix}.$$ 4. There exists *no* transformation of $\mathrm{SL}(2,\mathbb{C}_{s})$ connecting $(\mathcal{E},\mathcal{E})^{t}$ to $(1,0)^{t}$. In fact, if this were the case, one would have $$\begin{pmatrix} a & b \\ c & d\end{pmatrix}\begin{pmatrix} 1 \\ 0\end{pmatrix}=\begin{pmatrix} \mathcal{E} \\ \mathcal{E}\end{pmatrix},$$hence $a=c=\mathcal{E}$. This *cannot* be, since the determinant $ad-bc=\mathcal{E}(d-b)$ would then be either zero or a zero divisor, and thus could never equal $1$ $\blacksquare $ From this analysis, it follows that the orbits of $\mathbb{C}_{s}^{2}$ under the action of the Klein group $\mathrm{SL}(2,\mathbb{C}_{s})$ (up to conjugation in $\mathbb{C}_{s}^{2}$, equivalent to conjugation in $S$) are characterized by one of the following three well-defined *representatives* : $$\begin{pmatrix} 0 \\ 0\end{pmatrix},\quad \begin{pmatrix} 1 \\ 0\end{pmatrix},\quad \begin{pmatrix} \mathcal{E} \\ \mathcal{E}\end{pmatrix}.$$ Thus, besides the trivial orbit (given by the origin $\left( 0,0\right) ^{t}$ of $\mathbb{C}_{s}^{2}$), $(1,0)^{t}$ and $\left( \mathcal{E},\mathcal{E}\right) ^{t}$ (or equivalently $\left( \mathcal{E},0\right) ^{t}$) are well-defined representatives of the orbit stratification.
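The orbit computations above can be cross-checked in split-complex arithmetic; the sketch below (with an encoding of $\mathbb{C}_{s}$ of our own choosing) verifies Eq. (\[ep\]), the unit determinant of the corresponding matrix, and the zero-divisor obstruction used in item **4**:

```python
import numpy as np

# Split-complex scalars as pairs [re, im] with j^2 = +1.
def mul(p, q):
    return np.array([p[0]*q[0] + p[1]*q[1], p[0]*q[1] + p[1]*q[0]])

one  = np.array([1., 0.])
E    = np.array([1., 1.])    # E = 1 + j
Ebar = np.array([1., -1.])

assert np.allclose(mul(E, Ebar), [0., 0.])   # E Ebar = 0: zero divisors
assert np.allclose(mul(E, E), 2 * E)         # E^2 = 2E

# Matrix of Eq. (ep), with entries over C_s
half = lambda p: 0.5 * np.asarray(p)
m11, m12 = half(one + Ebar), half(one - E)
m21, m22 = half(one - Ebar), half(one + E)

# its action sends (E, E)^t to (0, 2E)^t
top = mul(m11, E) + mul(m12, E)
bot = mul(m21, E) + mul(m22, E)
assert np.allclose(top, [0., 0.]) and np.allclose(bot, 2 * E)

# its determinant m11 m22 - m12 m21 equals 1, so it lies in SL(2,C_s)
det = mul(m11, m22) - mul(m12, m21)
assert np.allclose(det, one)

# obstruction for item 4: any product w*E has equal re and im parts,
# so a determinant of the form E(d-b) can never equal 1
w = np.array([0.3, -1.2])
assert np.isclose(mul(w, E)[0], mul(w, E)[1])
```

The last check makes explicit why the generic orbit of $(1,0)^{t}$ and the orbit of $(\mathcal{E},\mathcal{E})^{t}$ are genuinely distinct.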
In particular, the representative $(1,0)^{t}$ is stabilized by any matrix of $\mathrm{SL}(2,\mathbb{C}_{s})$ of the form $$\left( \begin{matrix} 1 & b \\ 0 & 1\end{matrix}\right) \cong \mathbb{C}_{s}\cong \mathbb{R}^{2},$$while the stabilizer of $(\mathcal{E},0)^{t}$ reads $$M_{\overline{\mathcal{E}}}\overline{\mathcal{E}}+\left( \begin{matrix} \frac{1}{2} & \beta _{+} \\ 0 & \frac{1}{2}\end{matrix}\right) \mathcal{E}\cong \mathrm{SL}(2,\mathbb{R})\ltimes \mathbb{R}.$$ Summarizing, we obtained $$\begin{aligned} \mathcal{O}_{(1,0)^{t}} &\cong &\frac{\mathrm{Spin}(2,2)}{\mathbb{R}^{2}},~\text{dim}_{\mathbb{R}}=4; \label{ps-2-2} \\ \mathcal{O}_{(\mathcal{E},0)^{t}} &\cong &\frac{\mathrm{Spin}(2,2)}{\mathrm{SL}(2,\mathbb{R})\ltimes \mathbb{R}},~\text{dim}_{\mathbb{R}}=2. \label{ps-3-2}\end{aligned}$$ Since dim$_{\mathbb{R}}\mathcal{O}_{(1,0)^{t}}=$dim$_{\mathbb{R}}S=4$, such an orbit can be regarded as the *generic* one; consequently, $\mathcal{O}_{(\mathcal{E},0)^{t}}$ is the *non-generic* spinor orbit. Klein-Conformal group and $\mathrm{Spin}(3,3)$ ============================================== We are now going to consider the *Klein-ambient space* $\mathbf{M}^{3,3}\cong \mathbb{R}^{3,3}$ and the corresponding *Klein group* in $6$ dimensions, namely $\mathrm{Spin}(3,3)$. In this case, $s=t=3$, and thus again $\rho =0$. In turn, $\mathrm{Spin}(3,3)$ can also be regarded as the *conformal*[^12] group of $\mathbf{M}^{2,2}\cong \mathbb{R}^{2,2}$ itself. 
In complete analogy with the treatment of $\mathrm{Spin}(2,2)$ given above, one can identify a vector $x^{A}=(x^{1},\cdots ,x^{6})$ ($A=1,...,6$) in $\mathbf{M}^{3,3}$ with an element of the quadratic simple Jordan algebra $J_{2}^{\mathbb{H}_{s}}$ over split quaternions $\mathbb{H}_{s}$, by rearranging the vector components as entries of the $2\times 2$ Hermitian matrix $$\mathcal{V}=\begin{pmatrix} \hat{x}_{+} & z^{\ast } \\ z & \hat{x}_{-}\end{pmatrix}\in J_{2}^{\mathbb{H}_{s}}, \label{V-J2-Hs}$$where $z:=x^{5}+jx^{1}+kx^{4}+(kj)x^{2}\in \mathbb{H}_{s}$, $\mathbb{R}\ni \hat{x}_{\pm }:=x^{3}\pm x^{6}$, and the star denotes the conjugation in $\mathbb{H}_{s}$ (*cfr.* (\[conjug-Hs\])). By recalling the definition (\[norm-Hs\]), the quadratic form associated to the metric $\eta _{AB}=\eta ^{AB}$ of signature $(3,3)$ (given by (\[metric\]) with $s=t=3$) of $\mathbf{M}^{3,3}$ is then obtained by computing[^13] $$\text{det}\mathcal{V}=\left( x^{3}\right) ^{2}-\left( x^{6}\right) ^{2}-|z|^{2}=\left( x^{1}\right) ^{2}+\left( x^{2}\right) ^{2}+\left( x^{3}\right) ^{2}-\left( x^{4}\right) ^{2}-\left( x^{5}\right) ^{2}-\left( x^{6}\right) ^{2}=\eta _{AB}x^{A}x^{B},$$or equivalently $$\mathcal{V}\widetilde{\mathcal{V}}=\widetilde{\mathcal{V}}\mathcal{V}=-\eta _{AB}x^{A}x^{B}\mathbb{I}\,,$$where $\widetilde{\mathcal{V}}$ is the *trace reversal* of $\mathcal{V}$.
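The determinant identity above can be checked by realizing the split quaternions as real $2\times 2$ matrices. The realization below (with $j^{2}=+1$, $k^{2}=-1$, $(kj)^{2}=+1$) is one possible choice, made by us so as to reproduce the signature $(3,3)$; it need not coincide with the conventions of (\[norm-Hs\]):

```python
import numpy as np

# One possible realization of H_s inside M_2(R):
# 1 -> I, j -> diag(1,-1), k -> eps; then kj = eps @ diag(1,-1).
I2 = np.eye(2)
jm = np.array([[1., 0.], [0., -1.]])
km = np.array([[0., 1.], [-1., 0.]])
kj = km @ jm

# sanity checks on the chosen realization
assert np.allclose(jm @ jm, I2)    # j^2 = +1
assert np.allclose(km @ km, -I2)   # k^2 = -1
assert np.allclose(kj @ kj, I2)    # (kj)^2 = +1

rng = np.random.default_rng(1)
x = rng.normal(size=7)   # x[1..6] used, to keep the 1-based indices of the text

Z = x[5]*I2 + x[1]*jm + x[4]*km + x[2]*kj  # z = x^5 + j x^1 + k x^4 + (kj) x^2
xp, xm = x[3] + x[6], x[3] - x[6]          # xhat_pm := x^3 +- x^6

# in this realization N(z) = z z^* is det(Z), so det V = xhat_+ xhat_- - det(Z)
det_V = xp * xm - np.linalg.det(Z)
eta_xx = x[1]**2 + x[2]**2 + x[3]**2 - x[4]**2 - x[5]**2 - x[6]**2
assert np.isclose(det_V, eta_xx)
```

This mirrors, one dimension up, the split-complex computation performed for $\mathcal{X}\in J_{2}^{\mathbb{C}_{s}}$.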
In other words, recalling (\[norm-J2\]), one can conclude that the squared norm $\left\vert x\right\vert ^{2}$ of $x$ (as a vector in $\mathbf{M}^{3,3}$) is given by the quadratic norm of $x$ as an element (\[V-J2-Hs\]) of $J_{2}^{\mathbb{H}_{s}}$ itself:$$\left\vert x\right\vert ^{2}=\eta _{AB}x^{A}x^{B}=\text{det}\mathcal{V}=\mathbf{N}\left( x\right) .$$ Let us now consider the following transformations: $$\begin{array}{rcl} J_{2}^{\mathbb{H}_{s}} & \rightarrow & J_{2}^{\mathbb{H}_{s}}, \\[3mm] \mathcal{V} & \mapsto & \lambda ^{\dag }\mathcal{V}\lambda =:\mathcal{V}^{\prime }\,,\,\,\,\,\,\lambda \in M_{2}(\mathbb{H}_{s}),\end{array} \label{transfo-3}$$where $\dag $ stands for transposition times conjugation (\[conjug-Hs\]) in the underlying split algebra $\mathbb{\mathbb{H}}_{s}$. *Klein-conformal transformations* are defined as those transformations (\[transfo-3\]) in which $\lambda \in \mathrm{SL}(2,\mathbb{H}_{s})$, where the special linear group is defined as (recall (\[transfo\])) $$\mathrm{SL}(2,\mathbb{H}_{s}):=\{M\in M_{2}({\mathbb{H}_{s}})\,|\,\text{det}(Z(M))=1\},$$or equivalently (recall (\[epsilon\])) $$\mathrm{SL}(2,\mathbb{H}_{s}):=\left\{ M\in \mathrm{SL}(4,{\mathbb{C}_{s}})\,\left\vert\, \overline{M}\begin{pmatrix} \epsilon & \mathbf{0} \\ \mathbf{0} & \epsilon\end{pmatrix}=\begin{pmatrix} \epsilon & \mathbf{0} \\ \mathbf{0} & \epsilon\end{pmatrix}M\right. \right\} .$$It is then immediate to realize that such transformations induce orthogonal transformations in $\mathbf{M}^{3,3}$ (and correspondingly *conformal* transformations in $\mathbf{M}^{2,2}$), since they preserve det$\mathcal{V}$ and thus $\left\vert x\right\vert ^{2}$. In particular, $\mathrm{SL}(2,\mathbb{H}_{s})$ doubly covers $\mathrm{SO}(3,3)$, and it is then possible to identify it (or, more precisely, its identity-connected component) with the *Spin* group $\mathrm{Spin}(3,3)$.
In other words, $\mathrm{SL}(2,\mathbb{H}_{s})$ acts naturally on $J_{2}^{\mathbb{H}_{s}}$ as the spin covering of $\mathrm{SO}(3,3)$. This establishes the group isomorphism$$\mathrm{Spin}(3,3)\cong \mathrm{SL}(2,\mathbb{H}_{s}). \label{iso-2}$$ A Further Group Isomorphism --------------------------- For the subsequent treatment, we find it convenient to present also another isomorphism involving $\mathrm{Spin}(3,3)$, namely[^14]$$\mathrm{Spin}(3,3)\cong \widetilde{\mathrm{Sp}}(4,\mathbb{C}_{s}). \label{iso-1}$$In order to prove it, we start from the $4\times 4$ matrix given by (\[app-1\]) in the App. \[App-Sympl\], which we report below for convenience’s sake: $$\mathbb{X}:=\begin{pmatrix} \hat{x}_{+}\epsilon & \mathcal{X}\epsilon \\[3mm] -\widetilde{\mathcal{X}}\epsilon & -\hat{x}_{-}\epsilon \end{pmatrix}.$$Then, one can compute that $$\mathbb{X}^{\dag }\Omega \mathbb{X}=\eta _{AB}x^{A}x^{B}\Omega ,$$where $\dag $ stands for transposition times conjugation (\[conjug-Cs\]) in $\mathbb{\mathbb{C}}_{s}$, and $\Omega $ here denotes the $4\times 4$ symplectic metric (recall (\[epsilon\]))$$\Omega =\begin{pmatrix} 0 & \mathbb{I} \\ -\mathbb{I} & 0\end{pmatrix}=\mathbb{I}\otimes \epsilon .
\label{Omega-4x4}$$Therefore, one can define the symplectic group *à la Barton and Sudbery* [@BS-1; @BS-2]: $$\widetilde{\mathrm{Sp}}(4,\mathbb{C}_{s}):=\{M\in \mathrm{SL}(4,\mathbb{C}_{s})\,|\,M^{\dag }\Omega M=\Omega \}, \label{def-Sp4}$$and any transformation of the form $$\mathbb{X}\rightarrow \mathbb{X}^{\prime }=\lambda \mathbb{X}\lambda ^{t},\,\,\,\text{with}\,\,\,\,\lambda \in \widetilde{\mathrm{Sp}}(4,\mathbb{C}_{s}),$$preserves the (squared) norm $\left\vert x\right\vert ^{2}$ in $\mathbf{M}^{3,3}$ and induces a *Klein-conformal* transformation in $\mathbf{M}^{2,2}$, thus providing an alternative realization of $\mathrm{Spin}(3,3)$ and inducing the isomorphism (\[iso-1\]) $\blacksquare $ We thus obtain the following chain of group isomorphisms:$$\mathrm{Spin}\left( 3,3\right)\cong \mathrm{SL}(2,\mathbb{H}_{s})\cong \widetilde{\mathrm{Sp}}(4,\mathbb{C}_{s}). \label{iso-3}$$ We have already discussed in Section \[spinors\] that spinors of $\mathrm{Spin}\left( 3,3\right)$ can be interpreted as vectors of $\mathbb{H}_{s}^2\cong \mathbb{C}_{s}^4$ on which, in analogy with the $\mathrm{Spin}\left( 2,2\right)$ case, $\mathrm{SL}(2,\mathbb{H}_{s})$ does not act transitively; this determines the stratification into orbits. Even though the stratification is more evident when using the special linear group over split quaternions, in the next section we will instead use the symplectic group over split complexes to realize the Klein superspace as the space of $2|0$ totally isotropic subspaces in $\mathbb{C}_{s}^{4|1}$, *i.e.* the Lagrangian superspace. We will focus on the case in which the representative super plane is given by a pair of generic vectors of $\mathbb{C}_{s}^{4|1}$ whose even part is a generic spinor. One could of course choose other isotropic subspaces as representatives, given by other combinations of spinors (namely, non-generic/generic or non-generic/non-generic).
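Incidentally, the identity $\mathbb{X}^{\dag }\Omega \mathbb{X}\propto \eta _{AB}x^{A}x^{B}\,\Omega $ behind the proof of (\[iso-1\]) lends itself to a direct numerical check, using the explicit matrix (\[app-1\]) of App. \[App-Sympl\]. The sketch below represents the split-complex unit by a real $2\times 2$ matrix; the conventions assumed here ($j^{2}=+1$, conjugation $j\mapsto -j$, and $\epsilon =\left(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\right)$) are ours, and with them the proportionality factor comes out as $-\eta _{AB}x^{A}x^{B}$ (the overall sign depends on such choices):

```python
import numpy as np

# Split-complex numbers C_s: a + j*b with j^2 = +1 (assumed convention),
# represented as real 2x2 matrices a*I + b*J; the conjugation
# a + j*b -> a - j*b becomes M -> S @ M @ S with S = diag(1, -1).
I2 = np.eye(2)
J = np.array([[0.0, 1.0], [1.0, 0.0]])
S = np.diag([1.0, -1.0])

def sc(a, b=0.0):
    """Real 2x2 representation of the split-complex number a + j*b."""
    return a * I2 + b * J

rng = np.random.default_rng(0)
x1, x2, x3, x4, x5, x6 = rng.standard_normal(6)

# The antisymmetric matrix X of (app-1), entry by entry
# (second argument of sc = coefficient of j).
X = np.block([
    [sc(0),         sc(x3 + x6),  sc(-x5, x2),  sc(x1 + x4)],
    [sc(-x3 - x6),  sc(0),        sc(x4 - x1),  sc(x5, x2)],
    [sc(x5, -x2),   sc(x1 - x4),  sc(0),        sc(x6 - x3)],
    [sc(-x1 - x4),  sc(-x5, -x2), sc(x3 - x6),  sc(0)],
])

# Dagger = transposition times split-complex conjugation.
Sfull = np.kron(np.eye(4), S)
Xdag = Sfull @ X.T @ Sfull

# Omega = [[0, I], [-I, 0]] over C_s, in the same 2x2-block representation.
Omega8 = np.kron(np.kron(np.array([[0.0, 1.0], [-1.0, 0.0]]), np.eye(2)), I2)

eta = x1**2 + x2**2 + x3**2 - x4**2 - x5**2 - x6**2

# With these conventions X^dag Omega X = -eta * Omega
# (the sign of the factor is convention-dependent).
assert np.allclose(Xdag @ Omega8 @ X, -eta * Omega8)
print("X^dag Omega X is proportional to Omega, factor", -eta)
```

The check also confirms antisymmetry of $\mathbb{X}$ in this representation, since each block is a symmetric $2\times 2$ matrix.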
Since the action of the Klein group stratifies the spinor space, we expect to obtain different and intriguing constructions. We leave a detailed analysis for a future project, while in this paper we focus on the generic-generic case for spinor representatives. Klein and Klein-Conformal $\mathcal{N}=1$ Superspaces\[supermink\] ================================================================== We can now proceed to construct the $\mathcal{N}=1$ Klein-conformal and Klein superspaces in $D=(2,2)$. Supermanifolds, and in particular the $\mathcal{N}=1$ Minkowski and conformal superspaces in $D=(3,1)$, have been studied intensively in recent years. A thorough account of such a broad field of investigation lies well beyond the scope of this paper; we here confine ourselves to addressing the interested reader to [@flmink], and Refs. therein, for an exhaustive bibliography. In order to construct the $\mathcal{N}=1$ Klein-conformal and Klein superspaces in $D=(2,2)$, we will exploit a procedure which is very similar to the one of [@FL-1]; however, some extra attention should be paid to the definition of the *functor of points* of a $\mathbb{C}_{s}$-group. We could give our definitions in full generality, but for clarity’s sake we prefer to adapt them to our specific framework. We have collected the basic facts of supergeometry and supergroups in App. \[sgeo-app\]; for more details on the technicalities involved, we address the reader *e.g.* to Ch. 10 of [@ccf]. The $A$-points of the general linear supergroup over ${\mathbb C}_s$ are given by (see App. \[sgeo-app\] (\[glfun\])): $$\mathrm{GL}(m|n)(A) =\left\{ \begin{pmatrix} a & \alpha \\ \beta & b\end{pmatrix} \right\}\,=\, \mathrm{Hom}_{\mathrm{(salg)}_{k}}( {\mathbb C}_s[\mathrm{GL}(m|n)],A),$$ where $a$, $b$, $\alpha $, $\beta $ are matrices with entries in $A$ (roman and greek lowercase letters denote even resp. odd entries throughout), and $a $ and $b$ are invertible.
If we regard ${\mathrm{GL}}(m|n)$ as a real supergroup, we can define its $A$-points (where here $A$ is a real superalgebra): $$({\mathrm{GL}}(m|n)(A))_{\mathbb{R}}={\mathrm{Hom}}_{{\mathrm{(salg)}}}({\mathbb C}_{s}[x_{ij},\xi _{kl}][\det (x_{ij})_{1\leq i,j\leq m}^{-1},\det (x_{ij})_{m+1\leq i,j\leq m+n}^{-1}],A\otimes {\mathbb C}_{s})$$ (see App. \[sgeo-app\] (\[cs-real\])). We now define, in complete analogy to [@FL-1], the *symplectic orthogonal supergroup* $\widetilde{\mathrm{SpO}}(4|1)$ as the (real) subsupergroup of $\mathrm{GL}(4|1)_{\mathbb{R}}$ given as: $$\widetilde{\mathrm{SpO}}{(4|1)}(A)\,=\,\{\Lambda \in (\mathrm{GL}{(4|1)} )_{\mathbb{R}}(A)\,|\,\Lambda ^{\dag }\mathbb{J}\Lambda =\mathbb{J}\},\qquad \text{with}\qquad \mathbb{J}=\begin{pmatrix} 0 & \mathbb{I} & 0 \\ -\mathbb{I} & 0 & 0 \\ 0 & 0 & 1\end{pmatrix}=\begin{pmatrix} \Omega & 0 \\ 0 & 1\end{pmatrix}\,,$$ where $\Lambda ^{\dagger }:=\overline{\Lambda }^{t}$ (with $t$ here denoting the supertranspose) and the conjugation is consistently understood in $\mathbb{C}_{s}$, as detailed in the treatment above. If $$\Lambda =\begin{pmatrix} B & \alpha \\ \beta & u\end{pmatrix},\quad B=\begin{pmatrix} a & b \\ c & d\end{pmatrix},\quad \beta =(\beta _{1},\beta _{2}),\quad \alpha =(\alpha _{1},\alpha _{2})^{t},$$ with $\beta _{i}$, $\alpha _{i}\in A^{2}$ ($i=1,2$), from the condition$$\Lambda ^{\dag }\mathbb{J}\Lambda =\mathbb{J}$$one obtains the following set of equations: $$\left\{ \begin{array}{c} B^{\dagger }\Omega B+\beta ^{\dagger }\beta =\Omega ; \\ \\ B^{\dagger }\Omega \alpha +\beta ^{\dagger }u=0; \\ \\ -\alpha ^{\dagger }\Omega B+u^{\dagger }\beta =0; \\ \\ -\alpha ^{\dagger }\Omega \alpha +u^{\dagger }u=1.\end{array}\right.
\,\iff \,\left\{ \begin{array}{c} a^{\dagger }c-c^{\dag }a+\beta _{1}^{\dagger }\beta _{1}=0; \\ a^{\dagger }d-c^{\dagger }b+\beta _{1}^{\dagger }\beta _{2}=\mathbb{I}; \\ b^{\dagger }c-d^{\dagger }a+\beta _{2}^{\dagger }\beta _{1}=-\mathbb{I}; \\ b^{\dagger }d-d^{\dagger }b+\beta _{2}^{\dagger }\beta _{2}=0; \\ -c^{\dagger }\alpha _{1}+a^{\dagger }\alpha _{2}+\beta _{1}^{\dagger }u=0; \\ -d^{\dagger }\alpha _{1}+b^{\dagger }\alpha _{2}+\beta _{2}^{\dagger }u=0; \\ \alpha _{2}^{\dagger }\alpha _{1}-\alpha _{1}^{\dagger }\alpha _{2}+u^{\dagger }u=1.\end{array}\right. \label{spo-eq}$$ We now consider the (real) supermanifold $\mathcal{L}$ of $2|0$ totally isotropic subspaces in $\mathbb{C}_{s}^{4|1}$. Let $\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3},\mathbf{e}_{4},\epsilon\}$ be the canonical basis for $\mathbb{C}_{s}^{4|1}$. We define $\mathcal{L}$ as the orbit of the super subspace $\mathrm{span}_{{\mathbb C}_s} \{{\mathbf{e}}_{1},{\mathbf{e}}_{2}\}$ under the natural action of the real supergroup $\widetilde{\mathrm{SpO}}{(4|1)}$. This is a supermanifold, and if $A$ is a local $\mathbb{C}_{s}$-superalgebra, one obtains $$\mathcal{L}(A) =\left\{ \begin{pmatrix} a \\ c \\ \beta _{1}\end{pmatrix}\,\Big|\,a^{\dagger }c-c^{\dagger }a+\beta _{1}^{\dagger }\beta _{1}=0\right\} /\mathrm{GL}_{2}(A)\,. \label{slag-eq}$$ It should be stressed here that $A$ needs to be taken *local* in order to express the action of $\widetilde{\mathrm{SpO}}{(4|1)}$ on $\mathcal{L}$ more easily; we address the reader to Chs. 2 and 4 of [@flmink] for a detailed treatment of this technical point. **Remark**. The real supergroup $\widetilde{\mathrm{SpO}}{(4|1)}$ does *not* act transitively on the superspace $\mathbb{C}_{s}^{4|1}$; we have already mentioned such a feature in the standard (*i.e.*, non-super) case in the previous Section and in Sec. \[reps\] for the Klein case.
However, this fact will not influence our treatment, since we realize the Klein $\mathcal{N}=1$ superspace as an *open* subsupermanifold inside the $\widetilde{\mathrm{SpO}}{(4|1)}$-orbit $\mathcal{L}$ of $\mathrm{span}_{{\mathbb C}_s} \{{\mathbf{e}}_{1},{\mathbf{e}}_{2}\}$, *i.e.* of the generic-generic spinor case. We consider the open subset of $\mathcal{L}$ consisting of those subspaces corresponding to $a$ invertible. We call it $\mathbf{M}^{2,2|1}$: it will be our model for the $D=(2,2)$ $\mathcal{N}=1$ *Klein superspace*, while $\mathcal{L}$ is topologically the compactification of $\mathbf{M}^{2,2|1}$, and it is the $D=(2,2)$ $\mathcal{N}=1$ *Klein-conformal superspace*. By multiplying by a suitable element of $\mathrm{GL}_{2}(A)$ we have: $$\mathbf{M}^{2,2|1}(A)=\left\{ \begin{pmatrix} \mathbb{I} \\ \mathcal{Y} \\ \zeta \end{pmatrix}\,\left\vert \mathcal{Y}^{\dagger }=\mathcal{Y}+\zeta ^{\dagger }\zeta \right. \right\} \,.\label{M2,2 1}$$ Here $A$ is a commutative superalgebra, not necessarily local as before. Notice that $\mathcal{Y}=ca^{-1}$, $\zeta =\beta _{1}a^{-1}$ with respect to the expression in (\[slag-eq\]). Hence, the equation is obtained immediately from (\[slag-eq\]) by setting $a=\mathbb{I}$. This is precisely the condition found in [@flv] and in [@FL-1]. Furthermore, we remark that the relation $$\mathcal{Y}^{\dagger }=\mathcal{Y}+\zeta ^{\dagger }\zeta$$for $\zeta =0$ reduces to the condition that $\mathcal{Y}$ be Hermitian (in the context of $\mathbb{C}_{s}$). A comparison with (\[X-call\]) shows that this is precisely the condition for an element in $M_{2}(\mathbb{C}_{s})$ to belong to $\mathbf{M}^{2,2}$. Thus, the $\mathbb{C}_{s}$ points of the supermanifold $\mathbf{M}^{2,2|1}$ coincide with the Klein space $\mathbf{M}^{2,2}$ discussed above, and this justifies the use of our super-terminology. We now proceed to examine the *Klein-Poincaré supergroup*, acting on $\mathbf{M}^{2,2|1}$.
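As a small numerical illustration of the last point (a sketch under our assumed split-complex conventions, $j^{2}=+1$ with conjugation $j\mapsto -j$), one can check that the determinant of a Hermitian $2\times 2$ matrix over $\mathbb{C}_{s}$ is a real quadratic form of signature $(2,2)$ in its four real parameters, as expected for $\mathbf{M}^{2,2}\cong \mathbb{R}^{2,2}$:

```python
import numpy as np

# A Hermitian 2x2 matrix over the split-complex numbers C_s
# (unit j with j^2 = +1, conjugation a + jb -> a - jb; assumed conventions):
#   Y = [[p, a - jb], [a + jb, q]],   p, q, a, b real.
# Its determinant p*q - (a + jb)(a - jb) = p*q - a^2 + b^2 is a real
# quadratic form in the four real parameters (p, q, a, b).
def det_hermitian(p, q, a, b):
    return p * q - a**2 + b**2

# Gram matrix of this quadratic form in the variables (p, q, a, b).
G = np.array([[0.0, 0.5, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

# Consistency of G with det_hermitian on a sample point.
v = np.array([1.3, -0.7, 0.4, 2.0])
assert np.isclose(v @ G @ v, det_hermitian(*v))

# Signature: two positive and two negative eigenvalues, i.e. (2,2).
eigs = np.linalg.eigvalsh(G)
signature = (int(np.sum(eigs > 0)), int(np.sum(eigs < 0)))
assert signature == (2, 2)
print("signature of det on split-complex Hermitian 2x2 matrices:", signature)
```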
We start by noticing that the supergroup functor $$\widehat{sKP}(A):=\left\{ \begin{pmatrix} L & 0 & 0 \\ M & R & R\phi \\ d\chi & 0 & d\end{pmatrix}\right\} \subset \widetilde{\mathrm{SpO}}{(4|1)}(A) \label{sKP}$$ leaves $\mathbf{M}^{2,2|1}$ invariant ($A$ as usual is a commutative superalgebra). This supergroup functor is representable (see App. \[sgeo-app\] for the definition of representable supergroup functor). In fact, the real superalgebra representing it is obtained as a quotient of ${\mathbb R}[ \widetilde{\mathrm{SpO}}{(4|1)}]$, namely by setting to zero those generators corresponding to the positions where we have zeros for the $A$-points in (\[sKP\]). Notice that its reduced group (see App. \[sgeo-app\], (\[red-grp\])) is the Klein-Poincaré group itself. We then define $\widehat{sKP}$ as *the Klein-Poincaré supergroup*. Its $A$-points are given by (\[sKP\]). Applying the equations in (\[spo-eq\]) to $\widehat{sKP}(A)$, one obtains $$R=(L^{\dagger })^{-1},\quad \phi =\chi ^{\dagger },\quad ML^{-1}=(ML^{-1})^{\dagger }+(L^{\dagger })^{-1}\chi ^{\dagger }\chi L^{-1}\,,$$ yielding $$\widehat{sKP}(A)=\left\{ \begin{pmatrix} L & 0 & 0 \\ M & (L^{\dagger })^{-1} & (L^{\dagger })^{-1}\chi ^{\dagger } \\ d\chi & 0 & d\end{pmatrix}\right\} .$$ Then, the action on $\mathbf{M}^{2,2|1}$ (\[M2,2 1\]) can be readily computed to yield: $$\begin{array}{rclcl} \widehat{sKP} & \times & \mathbf{M}^{2,2|1} & \longrightarrow & \mathbf{M}^{2,2|1} \\ & & & & \\ \scalebox{.8}{$\begin{pmatrix} L & 0 & 0 \\ M & (L^\dagger )^{-1} & (\chi L^{-1})^\dag \\ d\chi & 0 & d \end{pmatrix}$} & , & \scalebox{.8}{$\begin{pmatrix} \mathbb{I}\\[2mm] \mathcal{Y} \\[2mm] \zeta \end{pmatrix} $} & \mapsto & \scalebox{.8}{$\begin{pmatrix} \mathbb{I}\\[2mm] ML^{-1}+(L^\dag)^{-1}\mathcal{Y}L^{-1} +(\chi L^{-1})^\dag\zeta L^{-1}\\[2mm] d\chi L^{-1}+d\zeta L^{-1} \end{pmatrix} $}.\end{array}$$ We end this Section with an important observation that relates our construction of the Klein-Poincaré supergroup with our
previous treatment of spinors of $\mathrm{Spin}(2,2)$. The Klein-Poincaré supergroup contains as its closed subgroup the *Klein supergroup*, whose functor of points is given by: $$\widehat{sK}(A)=\begin{pmatrix} (R^{\dagger })^{-1} & 0 & 0 \\ 0 & R & R\phi \\ d\phi ^{\dagger } & 0 & d\end{pmatrix}.$$As for its counterpart in Lorentz signature, $\widehat{sK}$ is obtained from $\widehat{sKP}$ by removing the inhomogeneous translational part given by $M$ (note that here we use the variables $R=(L^{-1})^{\dagger }$, $\phi =\chi ^{\dagger }$). The corresponding Lie superalgebra reads $$\mathrm{Lie}(\widehat{sK})=\begin{pmatrix} -r^{\dagger } & 0 & 0 \\ 0 & r & r\varphi \\ \mathcal{D}\varphi ^{\dagger } & 0 & \mathcal{D}\end{pmatrix}.$$If $\mathfrak{g}_{0}$ and $\mathfrak{g}_{1}$ respectively denote the even and odd part of $\mathfrak{g}:=\mathrm{Lie}(\widehat{sK})$, there is a natural action of $\mathfrak{g}_{0}$ on $\mathfrak{g}_{1}$. Indeed, in this framework, it holds that $\mathfrak{g}_{0}=\mathfrak{g}_{0}^{\prime }\oplus \mathbb{C}_{s}$, where $\mathfrak{g}_{0}^{\prime }$ is the Lie algebra of the spin group $\mathrm{SL}_{2}(\mathbb{C}_{s})\cong \mathrm{Spin}(2,2)$ (*cfr.* (\[isso\])), and $\mathbb{C}_{s}$ corresponds to dilatations. As one can readily check, the action of $\mathfrak{g}_{0}$ on the odd part $r\varphi $ (that is $\mathfrak{g}_{1}$) is precisely the spinor representation $\mathbb{C}_{s}^{2}$ studied in previous Sections. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank John C. Baez, Leron Borsten, Andrew Waldron, Francesco Toppan and V. S. Varadarajan for useful discussions and suggestions, and especially Tevian Dray for his kind help in understanding the magic squares of Lie groups. A.M. wishes to thank the Department of Mathematics at the University of Bologna, for the kind hospitality during the realization of this work.
\[App-Sympl\]Symplectic Realization of $\mathrm{Spin}(3,3)$ =========================================================== Consider a basis $\{\mathbf{e}_{\mu }\}$ of $\mathbb{C}_{s}^{4}$, and $\{\mathbf{e}^{\mu }\}$ its dual basis ($\mu =1,...,4$). A natural inner product $<\bullet ,\bullet >$ in $\Lambda ^{2}\mathbb{C}_{s}^{4}$ can be defined as follows: $$\begin{array}{rrrcl} <\bullet ,\bullet > & : & \Lambda ^{2}\mathbb{C}_{s}^{4}\otimes \Lambda ^{2}\mathbb{C}_{s}^{4} & \rightarrow & \mathbb{C}_{s}, \\ & & x\wedge y\,,\,z\wedge w\,\,\, & \mapsto & (x\wedge y\wedge z\wedge w)(\mathbf{e}^{1}\wedge \mathbf{e}^{2}\wedge \mathbf{e}^{3}\wedge \mathbf{e}^{4}).\end{array}$$Note that $\widetilde{\mathrm{Sp}}(4,\mathbb{C}_{s})$ (\[def-Sp4\]) acts in an obvious way on $\Lambda ^{2}\mathbb{C}_{s}^{4}$ preserving the inner product $<\bullet ,\bullet >$. We are now going to determine a *real* $6$-dimensional subspace of $\Lambda ^{2}\mathbb{C}_{s}^{4}$, which is stable under $\widetilde{\mathrm{Sp}}(4,\mathbb{C}_{s})$ and on which $<\bullet ,\bullet >$ takes real values. To this aim, let us define the *symplectic* inner product $$\begin{array}{rrrcl} <\bullet ,\bullet >_{\Omega } & : & \mathbb{C}_{s}^{4}\otimes \mathbb{C}_{s}^{4} & \rightarrow & \mathbb{C}_{s}, \\ & & x,y\,\,\, & \mapsto & y^{\dag }\Omega x,\end{array}$$where $\Omega $ is given by (\[Omega-4x4\]). Then, one can use $<\bullet ,\bullet >$ and $<\bullet ,\bullet >_{\Omega }$ in order to construct the isomorphisms $\phi :\Lambda ^{2}\mathbb{C}_{s}^{4}\xrightarrow{\sim}(\Lambda ^{2}\mathbb{C}_{s}^{4})^{\ast }$ and $\varphi :\mathbb{C}_{s}^{4}\xrightarrow{\sim}(\mathbb{C}_{s}^{4})^{\ast }$. It is then possible to naturally identify $(\Lambda ^{2}\mathbb{C}_{s}^{4})^{\ast }\cong \Lambda ^{2}(\mathbb{C}_{s}^{\ast })^{4}$, and use it to construct the $\widetilde{\mathrm{Sp}}(4,\mathbb{C}_{s})$-invariant isomorphism of $\Lambda ^{2}\mathbb{C}_{s}^{4}$ into itself as $\Phi :=\varphi ^{-1}\otimes \varphi ^{-1}\cdot \phi $.
This identifies a subspace of $\Lambda ^{2}\mathbb{C}_{s}^{4}$ on which $\Phi $ acts as the identity operator. A convenient basis of such a subspace reads as follows: $$\begin{array}{rclcrcl} E_{1} & = & \frac{1}{\sqrt{2}}e_{1}\wedge e_{4}-\frac{1}{\sqrt{2}}e_{2}\wedge e_{3}; & \,\,\,\, & E_{4} & = & \frac{1}{\sqrt{2}}e_{1}\wedge e_{4}+\frac{1}{\sqrt{2}}e_{2}\wedge e_{3}; \\ E_{2} & = & j\frac{1}{\sqrt{2}}e_{1}\wedge e_{3}+j\frac{1}{\sqrt{2}}e_{2}\wedge e_{4}; & \,\,\,\, & E_{5} & = & \frac{1}{\sqrt{2}}e_{2}\wedge e_{4}-\frac{1}{\sqrt{2}}e_{1}\wedge e_{3}; \\ E_{3} & = & \frac{1}{\sqrt{2}}e_{1}\wedge e_{2}-\frac{1}{\sqrt{2}}e_{3}\wedge e_{4}; & \,\,\,\, & E_{6} & = & \frac{1}{\sqrt{2}}e_{1}\wedge e_{2}+\frac{1}{\sqrt{2}}e_{3}\wedge e_{4}, \\ & & & & & & \end{array}$$and it can be checked that within such a subspace the inner product $<\bullet ,\bullet >$ takes real values, and has signature $\left( 3,3\right) $. Therefore, any vector in this subspace can be represented as an antisymmetric $4\times 4$ matrix of the form $$\begin{aligned} \mathbb{X} &:&=\begin{pmatrix} 0 & x_{3}+x_{6} & -x_{5}+jx_{2} & x_{1}+x_{4} \\ -x_{3}-x_{6} & 0 & x_{4}-x_{1} & x_{5}+jx_{2} \\ x_{5}-jx_{2} & -x_{4}+x_{1} & 0 & x_{6}-x_{3} \\ -x_{1}-x_{4} & -x_{5}-jx_{2} & x_{3}-x_{6} & 0\end{pmatrix} \notag \\ &=&\begin{pmatrix} \hat{x}_{+}\epsilon & \mathcal{X}\epsilon \\[3mm] -\widetilde{\mathcal{X}}\epsilon & -\hat{x}_{-}\epsilon\end{pmatrix}=\epsilon \otimes \begin{pmatrix} \hat{x}_{+} & \mathcal{X} \\[3mm] -\widetilde{\mathcal{X}} & -\hat{x}_{-}\end{pmatrix}, \label{app-1}\end{aligned}$$where in the last step definition (\[epsilon\]) has been recalled, $\hat{x}_{\pm }:=x^{3}\pm x^{6}\in \mathbb{R}$, and $\mathcal{X}$, $\widetilde{\mathcal{X}}\in J_{2}^{\mathbb{C}_{s}}$ (*cfr.* definitions (\[X-call\])-(\[X-call-tilde\])). Supergeometry {#sgeo-app} ============= In this appendix we recall a few well-known facts about superalgebras and, more generally, supergeometry.
We refer the reader to [@ccf] and the references therein for more details. Let $k$ be a commutative algebra. For our purposes, it is enough to consider the cases of $k={\mathbb R}, {\mathbb C},{\mathbb C}_s$. A *super vector space* is a ${\mathbb Z}/2{\mathbb Z}$-graded vector space $V = V_0 \oplus V_1$; the elements of $V_0$ are called *even* and elements of $V_1$ are called *odd*. Notice that the parity of a vector $v$, denoted by $p(v)$, is not defined in general; however, since any element may be expressed as the sum of homogeneous ones, it suffices to consider only homogeneous vectors in all of the statements relying on linearity. The *super dimension* of a super vector space $V$ is the pair $(p,q)$, where dim($V_{0}$)=$p$ and dim($V_{1}$)=$q$ as ordinary vector spaces. When the super dimension of $V$ is $p|q$, we can find a basis $\{e_{1},\ldots ,e_{p}\}$ of $V_{0}$ and a basis $\{\epsilon _{1},\ldots ,\epsilon _{q}\}$ of $V_{1}$ so that $$V={\mathrm{span}}\{e_{1},\ldots ,e_{p},\epsilon _{1},\ldots ,\epsilon _{q}\}.$$For us, the most relevant example is ${\mathbb C}_{s}^{4|1}={\mathrm{span}}\{e_{1},\ldots ,e_{4},\epsilon _{1}=:\epsilon\}$ (when there is just one odd basis element we omit the numbering). A *superalgebra* over $k$ is a super vector space $A$ together with a multiplication preserving parity. $A$ is commutative if $$xy=(-1)^{p(x)p(y)}yx$$for all homogeneous $x$, $y$. The prototype of a commutative superalgebra is the *polynomial superalgebra*, generated by the even indeterminates $t_{1},\dots ,t_{m}$, which commute, and the odd ones $\theta _{1},\dots ,\theta _{n}$, which anticommute: $\theta _{i}\theta _{j}=-\theta _{j}\theta _{i}$, hence $\theta _{i}^{2}=0$. We denote such a superalgebra by $k[t_{1},\dots ,t_{m},\theta _{1},\dots ,\theta _{n}]$. The reader may safely think of such a superalgebra when we make our statements regarding commutative superalgebras (any such superalgebra is indeed a quotient of it).
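The sign rules just described can be made concrete in a few lines of code. The sketch below is a minimal model of the odd (Grassmann) part of the polynomial superalgebra: an element is stored as a dictionary from strictly increasing index tuples to real coefficients, with the even polynomial coefficients suppressed for brevity:

```python
# Minimal model of the odd part of the polynomial superalgebra
# k[theta_1, ..., theta_n]: an element is a dict mapping a strictly
# increasing tuple of odd indices to its (real) coefficient.
def theta(i):
    return {(i,): 1.0}

def mul(x, y):
    out = {}
    for kx, vx in x.items():
        for ky, vy in y.items():
            if set(kx) & set(ky):      # theta_i * theta_i = 0
                continue
            merged = list(kx + ky)
            # sign of the permutation that sorts the odd factors
            sign = 1
            for a in range(len(merged)):
                for b in range(a + 1, len(merged)):
                    if merged[a] > merged[b]:
                        sign = -sign
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0.0) + sign * vx * vy
    return {k: v for k, v in out.items() if v != 0.0}

t1, t2, t3 = theta(1), theta(2), theta(3)
assert mul(t1, t2) == {(1, 2): 1.0}     # theta_1 theta_2
assert mul(t2, t1) == {(1, 2): -1.0}    # = -theta_1 theta_2
assert mul(t1, t1) == {}                # theta_i^2 = 0
# even elements (here a product of two thetas) commute with everything
e = mul(t1, t2)
assert mul(e, t3) == mul(t3, e)
print("polynomial-superalgebra sign rules verified")
```

In particular, products of an even number of $\theta $'s commute with everything, exactly as supercommutativity requires.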
If $A$ is the polynomial superalgebra, we have: $$A_{0}=\left\{ f_{0}+\sum_{r\text{ even}}f_{I}\theta _{I}\,|\,I=\{i_{1}<\ldots <i_{r}\}\right\} ,\qquad A_{1}=\left\{ \sum_{s\text{ odd}}f_{J}\theta _{J}\,|\,J=\{j_{1}<\ldots <j_{s}\}\right\} ,$$where we are using the multi-index notation and $f_{I},f_{J}\in k[t_{1},\dots ,t_{m}]$, the ordinary polynomial algebra in the commuting variables $t_{1},\dots ,t_{m}$. Let $V$ be a super vector space and $A$ a commutative superalgebra. We define: $$V(A)=(A_{0}\otimes V_{0})\oplus (A_{1}\otimes V_{1}).$$If $V=k^{p|q}$, we immediately have $$V(A)=\{(a_{1},\dots ,a_{p},{\alpha}_{1},\dots ,{\alpha}_{q})\,|\,a_{i}\in A_{0},{\alpha}_{j}\in A_{1}\}$$ We define the *$A$-points* of the *general linear supergroup* ${\mathrm{GL}}(p|q)(A)$ as the parity-preserving linear maps from $V(A)$ to itself. An easy calculation shows that: $$\label{glfun} {\mathrm{GL}}(p|q)(A)=\left\{ \begin{pmatrix} (a_{ij}) & ({\alpha}_{il}) \\ ({\alpha}_{kj}) & (a_{kl})\end{pmatrix}\,\Big|\,a_{ij},a_{kl}\in A_{0},\quad \alpha _{il},\alpha _{kj}\in A_{1}\right\} $$where $1\leq i,j\leq p$, $p+1\leq k,l\leq p+q$ and $\det (a_{ij})$, $\det (a_{kl})$ are invertible. This is an ordinary group under matrix multiplication. The super nature of this geometric object lies in the anticommuting entries of its odd part, namely the ${\alpha}_{rs}$’s. We can identify ${\mathrm{GL}}(p|q)(A)$ with the group of superalgebra morphisms from the superalgebra $$k[\mathrm{GL}(p|q)]:=k[x_{ij},\xi _{kl}][\det (x_{ij})_{1\leq i,j\leq p}^{-1},\det (x_{ij})_{p+1\leq i,j\leq p+q}^{-1}]$$to the superalgebra $A$. Let us see this identification through an example (the general case is a straightforward modification of it). Consider $${\mathrm{GL}}(1|1)(A)=\left\{ \begin{pmatrix} a & {\alpha}\\ {\beta}& b\end{pmatrix}\,\Big|\,a,b\in A_{0},\quad {\alpha},{\beta}\in A_{1},\quad a,b\,\hbox{invertible}\right\} \label{matrixgl}$$and $k[{\mathrm{GL}}(1|1)]=k[x,y,\xi ,\eta ][x^{-1},y^{-1}]$.
A morphism $\phi :k[x,y,\xi ,\eta ][x^{-1},y^{-1}]{\longrightarrow}A$ is determined by the images of the generators, namely $\phi (x)=a$, $\phi (y)=b$, $\phi (\xi )={\alpha}$, $\phi (\eta )={\beta}$, where $a$, $b$ are invertible in $A_{0}$ and ${\alpha},{\beta}\in A_{1}$. The identification of $\phi $ with a matrix in the form (\[matrixgl\]) is then immediate. The identification between ${\mathrm{GL}}(p|q)(A)$ and the set of morphisms of superalgebras as above, denoted by ${\mathrm{Hom}}_{\mathrm{(salg)}}(k[{\mathrm{GL}}(p|q)],A)$, allows us to say that the general linear supergroup is *represented* by the superalgebra $k[{\mathrm{GL}}(p|q)]$. The information contained in ${\mathrm{GL}}(p|q)(A)$ *for all* $A$ is effectively contained in the superalgebra $k[{\mathrm{GL}}(p|q)]$. More appropriately, we call the functor that associates to a given commutative superalgebra $A$ the group ${\mathrm{GL}}(p|q)(A)$ the *general linear supergroup over $k$*, and we denote it by ${\mathrm{GL}}(p|q)$. The reader does not need to be familiar with the theory of categories, but should be aware that a supergroup functor $G$ is a way of giving, for any commutative superalgebra $A$, a group, denoted by $G(A)$, that behaves nicely when we change $A$ (namely, if we have a morphism $A{\longrightarrow}B$, this morphism should naturally induce another morphism $G(A) {\longrightarrow}G(B)$). Furthermore, to fully deserve the name of supergroup, the functor $G$ must be *representable*, that is, there is a superalgebra $k[G]$, playing the role of $k[{\mathrm{GL}}(p|q)]$, so that we can identify $G(A)$, the $A$-points of the supergroup functor, with the morphisms $k[G] {\longrightarrow}A$. However, for the present work, we shall not be interested in these subtleties: all of the supergroup functors we consider in this paper are indeed representable. The *reduced group* associated to a supergroup is the ordinary group that we obtain by taking $A=k$.
For example, for ${\mathrm{GL}}(p|q)$: $$\label{red-grp} {\mathrm{GL}}(p|q)(k)=\left\{ \begin{pmatrix} (a_{ij}) & 0 \\ 0 & (a_{kl})\end{pmatrix} \right\} \,= \, {\mathrm{GL}}(p)\times {\mathrm{GL}}(q)$$ because the only value in a field $k$ that the nilpotent variables ${\alpha}_{rs}$ can take is zero. At this point we need to take a step forward in this theory and look at the differences arising from the choice of $k$. To mark the difference among the various $k$’s, we speak of $k$-supergroups, or we say that a supergroup is defined over $k$. For the purpose of the present paper, we need to consider ${\mathbb C}_s$-supergroups, which we want to view as supergroups over ${\mathbb R}$. Let us look at an example and consider the supergroup ${\mathrm{GL}}(1|1)$ over ${\mathbb C}_{s}$; again, the general case is not conceptually different. The superalgebra representing the supergroup is ${\mathbb C}_{s}[z,w,\zeta ,\eta ][z^{-1},w^{-1}]$ (see (\[matrixgl\])). This superalgebra will give us the $A$-points of ${\mathrm{GL}}(1|1)$, when $A$ is a ${\mathbb C}_s$-superalgebra, while now we want to determine the $A$-points of ${\mathrm{GL}}(1|1)$ as a *real* supergroup, that is when $A$ is a *real* superalgebra. We then define the $A$-points of the ${\mathbb C}_s$-supergroup ${\mathrm{GL}}(1|1)$, viewed as an ${\mathbb R}$-supergroup, to be the $A \otimes {\mathbb C}_s$-points of ${\mathrm{GL}}(1|1)$: $${\mathrm{GL}}(1|1)_{\mathbb R}(A)={\mathrm{GL}}(1|1)(A \otimes {\mathbb C}_s)={\mathrm{Hom}}_{\mathrm{(salg)}}({\mathbb C}_{s}[z,w,\zeta ,\eta ][z^{-1},w^{-1}],A\otimes {\mathbb C}_s)$$ where the tensor product is over ${\mathbb R}$. In fact, a morphism $\psi :{\mathbb C}_{s}[z,w,\zeta ,\eta ][z^{-1},w^{-1}]{\longrightarrow}A\otimes {\mathbb C}_{s}$ is specified once we know $\psi (z)$, $\psi (w)$, $\psi (\zeta )$, $\psi (\eta )$. Let us look at $\psi (z)=a\otimes 1+b\otimes j$. The image of $z$ is effectively recovered by the pair $(a,b)$ with $a,b\in A_{0}$.
So we see that a split-complex indeterminate $z$ is associated with two real indeterminates. The images of the $4$ ${\mathbb C}_s$-generators $z$, $w$, $\zeta $, $\eta $ give $8$ elements of the real algebra $A$, as one expects (in analogy to what we expect for ordinary vector spaces or algebras: the complex coordinates double their number, when viewed as real). For the ${\mathbb C}_s$ general linear supergroup: $$\label{cs-real} ({\mathrm{GL}}(p|q)(A))_{\mathbb{R}}={\mathrm{Hom}}_{{\mathrm{(salg)}}}({\mathbb C}_{s}[x_{ij},\xi _{kl}][\det (x_{ij})_{1\leq i,j\leq p}^{-1},\det (x_{ij})_{p+1\leq i,j\leq p+q}^{-1}],A\otimes {\mathbb C}_{s})$$ The definition for a generic ${\mathbb C}_s$-supergroup is: $$G_{\mathbb{R}}(A)={\mathrm{Hom}}_{{\mathrm{(salg)}}}({\mathbb C}_{s}[G],A\otimes {\mathbb C}_{s})$$ Y. A. Gol’fand, E. P. Likhtman, *Extension of the Algebra of Poincaré Group and Violation of $P$ Invariance*, JETP Lett. **13**, 323 (1971). D. V. Volkov, V. P. Akulov, *Is the Neutrino a Goldstone Particle?*, Phys. Lett. **B46**, 109 (1973). J. Wess, B. Zumino, *Supergauge Transformations in Four Dimensions*, Nucl. Phys. **B70**, 39 (1974). A. Salam, J. Strathdee, *Super-Gauge Transformations*, Nucl. Phys. **B76**, 477 (1974). A. Salam, J. Strathdee, *Unitary Representations of Supergauge Symmetries*, Nucl. Phys. **B80**, 499 (1974). S. Ferrara, B. Zumino, *Supergauge Invariant Yang-Mills Theories*, Nucl. Phys. **B79** (1974) 413. D. Z. Freedman, P. van Nieuwenhuizen, S. Ferrara, *Progress Toward a Theory of Supergravity*, Phys. Rev. **D13** (1976) 3214-3218. S. Deser, B. Zumino, *Consistent Supergravity*, Phys. Lett. **B62** (1976) 335. M. B. Green, J. H. Schwarz, E. Witten, *Superstring Theory*, 2 Vols., Cambridge Monographs On Mathematical Physics, Cambridge University Press (Cambridge), 1987. J. Polchinski, *String theory*, 2 Vols., Cambridge University Press (Cambridge), 1998. V. S.
Varadarajan, *Supersymmetry for Mathematicians: An Introduction*, Courant Lecture Notes **1**, AMS, 2004. L. Balduzzi, C. Carmeli, R. Fioresi, *The local functor of points of supermanifolds*, Expositiones Mathematicæ **28** (2010), 201-217, `arXiv:0908.1872 [math.RA]`. L. Balduzzi, C. Carmeli, R. Fioresi, *A Comparison of the functors of points of Supermanifolds*, J. Algebra Appl. **12**, (2013), 1407-1415, `arXiv:0902.1824 [math.RA]`. A. S. Shvarts, *On the definition of superspace*, Teoret. Mat. Fiz. **60**(1):37-42 (1984). A. Voronov, *Maps of supermanifolds*, Teoret. Mat. Fiz. **60**(1):43-48 (1984). F. A. Berezin, *Introduction to superanalysis*, D. Reidel Publishing Company, Holland, 1987. Y. I. Manin, *Topics in non commutative geometry*, Princeton University Press, 1991. Y. I. Manin, *Gauge field theory and complex geometry*, translated by N. Koblitz, J.R. King, Springer-Verlag, Berlin-New York, 1988. R. Fioresi, E. Latini, *The symplectic origin of conformal and Minkowski superspaces*, `arXiv:1506.09086 [hep-th]`. C. Carmeli, L. Caston, R. Fioresi, *Mathematical Foundation of Supersymmetry*, with an appendix with I. Dimitrov, EMS Ser. Lect. Math., European Math. Soc., Zurich, 2011. R. Fioresi, M. A. Lledó, V. S. Varadarajan, *The Minkowski and conformal superspaces*, J. Math. Phys. **48**, 113505, (2007), `math/0609813 [math.RA]`. A. Hurwitz, *Über die Composition der quadratischen Formen von beliebig vielen Variabeln*, Nachr. Ges. Wiss. Göttingen (1898), 309-316. J. C. Baez, J. Huerta, *Division Algebras and Supersymmetry I*, in: *Superstrings, Geometry, Topology, and $C^{\ast }$-algebras*, eds. R. Doran, G. Friedman and J. Rosenberg, Proc. Symp. Pure Math. **81**, AMS, Providence, 2010, 65-80, `arXiv:0909.0551 [hep-th]`. M. Cederwall, *Jordan algebra dynamics*, Nucl. Phys. **B302** (1988) 81. M. Cederwall, *Octonionic particles and the $S(7)$ symmetry*, J. Math. Phys. **33** (1992) 388. M.
Cederwall, *Introduction to Division Algebras, Sphere Algebras and Twistors*, `arXiv:hep-th/9310115`. J. M. Evans, *Supersymmetric Yang-Mills theories and division algebras*, Nucl. Phys. **B298** (1988), 92-108. J. C. Baez, J. Huerta, *Division Algebras and Supersymmetry II*, Adv. Theor. Math. Phys. **15** (2011) 5, 1373-1410, `arXiv:1003.3436 [hep-th]`. J. Huerta, *Division Algebras and Supersymmetry III*, Adv. Theor. Math. Phys. **16** (2012) 5, 1485-1589, `arXiv:1109.3574 [hep-th]`. J. Huerta, *Division Algebras and Supersymmetry IV*, `arXiv:1409.4361 [hep-th]`. Z. Bern, J. J. M. Carrasco, H. Johansson, *Perturbative Quantum Gravity as a Double Copy of Gauge Theory*, Phys. Rev. Lett. **105**, 061602 (2010), `arXiv:1004.0476 [hep-th]`. A. Anastasiou, L. Borsten, M.J. Duff, L.J. Hughes, S. Nagy, *Super Yang-Mills, division algebras and triality*, JHEP **1408** (2014) 080, `arXiv:1309.0546 [hep-th]`. A. Anastasiou, L. Borsten, M.J. Duff, L.J. Hughes, S. Nagy, *Yang-Mills origin of gravitational symmetries*, Phys. Rev. Lett. **113** (2014) no.23, 231606, `arXiv:1408.4434 [hep-th]`. A. Anastasiou, L. Borsten, M.J. Hughes, S. Nagy, *Global symmetries of Yang-Mills squared in various dimensions*, JHEP **1601** (2016) 148, `arXiv:1502.05359 [hep-th]`. H. Freudenthal, *Lie groups in the foundations of geometry*, Adv. Math. **1**, 145-190 (1964). J. Tits, *Algèbres Alternatives, Algèbres de Jordan et Algèbres de Lie Exceptionnelles*, Indag. Math. **28**, 223-237 (1966). C.H. Barton, A. Sudbery, *Magic squares and matrix models of Lie algebras*, Adv. in Math. **180** (2003), 596-647, `arXiv:math/0203010 [math.RA]`. J. C. Baez, *The Octonions*, Bull. Amer. Math. Soc. **39**, 145-205 (2002), `arXiv:math/0105155 [math.RA]`. V. S. Varadarajan : *Lie groups, Lie algebras, and their representations*, Graduate Texts in Mathematics, Springer-Verlag, New York, 1984. C.H. Barton, A. Sudbery, *Magic Squares of Lie Algebras*, `arXiv:math/0001083 [math.RA]`. T. Dray, J. Huerta, J.
Kincaid, *The Magic Square of Lie Groups: The $2\times 2$ Case*, Lett. Math. Phys. **104** (2014) 1445-1468. T. Dray, C.A. Manogue, R.A. Wilson, *A symplectic representation of $E_{7}$*, Comment. Math. Univ. Carolin. **55**, 387-399 (2014), `arXiv:1311.0341 [math.RA]`. J. Kincaid, T. Dray, *Division algebra representations of $SO(4,2)$*, Mod. Phys. Lett. **A29**, no. 25, 1450128 (2014), `arXiv:1312.7391 [math.RA]`. T. Dray, C. A. Manogue, *Octonionic Cayley Spinors and $E_{6}$*, Comment. Math. Univ. Carolin. **51**, 193-207 (2010), `arXiv:0911.2255 [math.RA]`. T.N. Bailey, M.G. Eastwood, A.R. Gover, *Thomas’s structure bundle for conformal, projective and related structures*, Rocky Mountain J. Math. **24** (1994), 1191-1217. A. R. Gover, A. Shaukat, A. Waldron, *Tractors, Mass and Weyl Invariance*, Nucl. Phys. **B812**, 424 (2009), `arXiv:0810.2867 [hep-th]`. S. Curry, A. R. Gover, *An introduction to conformal geometry and tractor calculus, with a view to applications in general relativity*, `arXiv:1412.7559 [math.DG]`. A. Čap, A.R. Gover, *Tractor bundles for irreducible parabolic geometries*, Global analysis and harmonic analysis, Sémin. Congr. **4**, 129, Soc. Math. France 2000. C. Fefferman, C.R. Graham, *Conformal invariants*, in : *The mathematical heritage of Cartan* (Lyon, 1984), Astérisque 1985, Numéro Hors Série, 95-116. S. M. Kuzenko, *Conformally compactified Minkowski superspaces revisited*, JHEP **1210**, 135 (2012), `arXiv:1206.3940 [hep-th]`. D. Klemm, M. Nozawa, *Geometry of Killing spinors in neutral signature*, Class. Quant. Grav. **32** (2015) no.18, 185012, `arXiv:1504.02710 [hep-th]`. C. M. Hull, *Duality and the signature of space-time*, JHEP **9811** (1998) 017, `hep-th/9807127`. C. M. Hull, *Timelike T-Duality, de Sitter Space, Large $N$ Gauge Theories and Topological Field Theory*, JHEP **9807** (1998) 021, `hep-th/9806146`. S.
Ferrara, *Spinors, superalgebras and the signature of space-time*, `hep-th/0101123`. R. L. Bryant, *Pseudo-Riemannian metrics with parallel spinor fields and vanishing Ricci tensor*, in : *Global analysis and harmonic analysis* (Marseille-Luminy, 1999), vol. **4** of Sémin. Congr., pp. 53–94, Soc. Math. France, Paris, 2000, `math/0004073 [math.DG]`. M. Dunajski, *Anti-self-dual four manifolds with a parallel real spinor*, Proc. Roy. Soc. Lond. **A458** (2002) 1205, `math/0102225 [math.DG]`. M. Dunajski, *Einstein-Maxwell-dilaton metrics from three-dimensional Einstein-Weyl structures*, Class. Quant. Grav. **23** (2006) 2833, `gr-qc/0601014`. M. Dunajski, S. West, *Anti-self-dual conformal structures in neutral signature*, `math/0610280 [math.DG]`. S. Hervik, *Pseudo-Riemannian VSI spaces II*, Class. Quant. Grav. **29**, 095011 (2012), `arXiv:1504.01616 [math-ph]`. H. Ooguri, C. Vafa, *Selfduality and $\mathcal{N}=2$ string magic*, Mod. Phys. Lett. **A5** (1990) 1389. R. Penrose, *Twistor algebra*, J. Math. Phys. **8** (1967) 345. E. Witten, *Perturbative gauge theory as a string theory in twistor space*, Commun. Math. Phys. **252** (2004) 189, `hep-th/0312171`. M. Rios, *Extremal Black Holes as Qudits*, `arXiv:1102.1193 [hep-th]`. W. Nahm, *Supersymmetries and their representations*, Nucl. Phys. **B135** (1978) 149. R. Fioresi, M. A. Lledó : *The Minkowski and Conformal Superspaces: The Classical and Quantum Descriptions*, World Scientific Publishing, 2015. A. Sudbery, *Division Algebras, (Pseudo)Orthogonal Groups and Spinors*, J. Phys. **A17**, 939 (1984). T. Kugo, P. K. Townsend, *Supersymmetry and the Division Algebras*, Nucl. Phys. **B221** (1983) 357. D. Cervantes, R. Fioresi, M.A. Lledó, F. Nadal, *Quadratic deformation of Minkowski space*, Fortschr. Phys. **60** (2012), no. 9-10, 970-976. D. Cervantes, R. Fioresi, M.A. Lledó, *The quantum chiral Minkowski and conformal superspaces*, Adv. Theor. Math. Phys. **15** (2011), no.
2, 565-620, `arXiv:1007.4469 [math.QA]`. D. Cervantes, R. Fioresi, M.A. Lledó : *On chiral quantum superspaces*, in : *Supersymmetry in Mathematics and Physics*, 69-99, Lecture Notes in Math. **2027**, Springer, Heidelberg, 2011. R. Fioresi, *Quantizations of flag manifolds and conformal space time*, Rev. Math. Phys. **9** (1997), no. 4, 453-465. R. Fioresi, *Quantum deformation of the flag variety*, Comm. Algebra **27** (1999), no. 11, 5669-5685. I.L. Kantor, A.S. Solodovnikov : *Hypercomplex Numbers: An Elementary Introduction to Algebras*, Springer, New York, 1983. K. Carmody, *Circular and hyperbolic quaternions, octonions, sedenions*, Applied Mathematics and Computation **84**(1) (1997), 27–47. M. Günaydin, O. Pavlyk, *Spectrum Generating Conformal and Quasiconformal $U$-Duality Groups, Supergravity and Spherical Vectors*, JHEP **1004** (2010) 070, `arXiv:0901.1646 [hep-th]`. T. Dray, C. A. Manogue : *The Geometry of the Octonions*, World Scientific, 2015. T. Springer, F. D. Veldkamp : *Octonions, Jordan Algebras and Exceptional Groups*, Springer, 2013. J. M. Evans, *Trialities and Exceptional Lie Algebras: Deconstructing the Magic Square*, `arXiv:0910.1828 [hep-th]`. B.L. Cerchiai, S. Ferrara, A. Marrani, B. Zumino, *Charge Orbits of Extremal Black Holes in Five Dimensional Supergravity*, Phys. Rev. **D82** (2010) 085010, `arXiv:1006.3101 [hep-th]`. S.L. Cacciatori, B.L. Cerchiai, A. Marrani, *Magic Coset Decompositions*, Adv. Theor. Math. Phys. **17** (2013) 1077-1128, `arXiv:1201.6314 [hep-th]`. L. Andrianopoli, R. D’Auria, S. Ferrara, A. Marrani, M. Trigiante, *Two-Centered Magical Charge Orbits*, JHEP **1104** (2011) 041, `arXiv:1101.3496 [hep-th]`. K. McCrimmon : *A Taste of Jordan Algebras*, Springer, 2004. R. Iordanescu : *Jordan structures in Analysis, Geometry and Physics*, Editura Academiei Române, 2009. N. Jacobson, *Structure and representations of Jordan algebras*, American Mathematical Society Colloquium Publications, Vol.
XXXIX. American Mathematical Society, Providence, R.I., 1968. P. Jordan, J. von Neumann, E.P. Wigner, *On an algebraic generalization of the quantum mechanical formalism*, Ann. Math. **35** (1934) no. 1, 29–64. P. Budinich, *From the Geometry of Pure Spinors with their Division Algebras to Fermion’s Physics*, Found. Phys. **32** (2002) 1347-1398, `arXiv:hep-th/0107158`. P. Budinich, *Internal Symmetry From Division Algebras in Pure Spinor Geometry*, Proceedings of Institute of Mathematics of NAS of Ukraine 2004, Vol. **50**, Part 2, 654–665, `arXiv:hep-th/0311045`. P. Charlton : *The geometry of pure spinors, with applications*, PhD thesis, University of Newcastle, Dept. of Mathematics, 1997. R. D’Auria, S. Ferrara, M.A. Lledó, V.S. Varadarajan, *Spinor algebras*, J. Geom. Phys. **40** (2001) 101-128, `hep-th/0010124`. É. Cartan : *Leçons sur la Théorie des Spineurs*, Paris, Hermann, 1937. C. Chevalley : *The algebraic theory of spinors*, New York, Columbia University Press, 1954. N. Berkovits, *Super-Poincaré Covariant Quantization of the Superstring*, JHEP **0004**, 018 (2000), `arXiv:hep-th/0001035`. N. Berkovits, *ICTP Lectures on Covariant Quantization of the Superstring*, `arXiv:hep-th/0209059`. J. Igusa, *A classification of spinors up to dimension twelve*, Am. J. of Math. **92** (1970), 997–1028. V.G. Kac, E.B. Vinberg, *Spinors of 13-dimensional space*, Adv. in Math. **30** (1978), 137–155. V.L. Popov, *Classification of spinors of dimension fourteen*, Trans. Moscow Math. Soc. **1** (1980), 181–232. X.-W. Zhu, *The classification of spinors under $\mathit{GSpin}_{14}$ over finite fields*, Trans. Am. Math. Soc. **333** (1992), no. 1, 95–114. L.V. Antonyan, A.G. Èlashvili, *Classification of spinors in dimension sixteen*, Trudy Tbiliss. Mat. Inst. Razmadze Akad. Nauk Gruzin. SSR **70** (1982), 4–23. S. Giler, P. Kosinski, J. Rembielinski, *On $SO(p,q)$ Pure Spinors*, Acta Phys. Pol. Vol. **B18**, no. 8, 713 (1987). P. Furlan, R.
Raczka, *Nonlinear spinor representations*, J. Math. Phys. **26**, 3021-3032 (1985). R. L. Bryant, *Remarks on spinors in low dimension*, unpublished notes, 1999. M. Günaydin, G. Sierra and P. K. Townsend, *Exceptional Supergravity Theories and the Magic Square*, Phys. Lett. **B133**, 72 (1983). M. Günaydin, G. Sierra and P. K. Townsend, *The Geometry of $\mathcal{N}=2$ Maxwell-Einstein Supergravity and Jordan Algebras*, Nucl. Phys. **B242**, 244 (1984). M. Günaydin, H. Samtleben, E. Sezgin, *On the Magical Supergravities in Six Dimensions*, Nucl. Phys. **B848** (2011) 62-89, `arXiv:1012.1818 [hep-th]`. C. Carmeli, R. Fioresi, S. D. Kwok, *SUSY structures, representations and Peter-Weyl theorem for $S^{1|1}$*, J. Geom. Phys. **95**, 144-158 (2015), `arXiv:1407.2706 [math.RT]`. C. Carmeli, R. Fioresi, S. D. Kwok, *The Peter-Weyl theorem for $SU(1|1)$*, P-Adic Numbers, Ultrametric Analysis, and Applications, Vol. **7**, no. 4, 266-275 (2015), `arXiv:1509.07656 [math.RT]`. [^1]: Technically this is called the big cell inside the Klein-conformal (super)space. [^2]: While the generalization to the $D=(3,3)$ case is straightforward, the case $D=(5,5)$ may be plagued by further issues, which actually arise also in the Lorentzian case $D=(9,1)$, due to the known problem of constructing the superconformal algebra in $D>6$ [@Nahm]. We aim at tackling this problem in a future project. [^3]: $Id$ denotes the group identity element throughout. [^4]: Note that $D=q+2$ corresponds to the *critical* space-time dimensions of superstring theory. In fact, there is a deep relationship between supersymmetry and division algebras; *cfr. e.g.* [@Baez-Huerta-1; @Huerta-2; @Huerta-3; @Huerta], and Refs. therein. [^5]: Note that in general $\mathcal{C}(s,t)$ is *not* isomorphic to $\mathcal{C}(t,s)$, even if $\mathrm{Spin}(s,t)\cong \mathrm{Spin}(t,s)$ (and thus $\mathrm{SO}(s,t)\cong \mathrm{SO}(t,s)$); *cfr. e.g.* [@Spinor-Algebras; @flmink].
[^6]: [@Sudbery] only deals with the division case. However, the treatment for the split case goes through almost without modification. Indeed, it is known that the $\mathbf{27}$ of $E_{6(6)}$ is $J_{3}\left( \mathbb{O}_{s}\right)$, so it is essentially ensured that the $\mathbf{16}$ of $\mathrm{Spin}(5,5)$ is representable by $\mathbb{O}_{s}^{2}$. The same holds, by suitable algebraic truncations, for $\mathbb{H}_{s}^{2}$ and $\mathbb{C}_{s}^{2}$. We thank Leron Borsten for correspondence on this. [^7]: In the dimension-labelled physicists’ notation of the group irreps., the dimensions are real, unless otherwise noted by suitable subscripts. [^8]: Concerning physical applications, the relevance of $\mathcal{A}_{q}$ (\[A\]) as a part of the $U$-duality symmetry of $\mathcal{N}=(1,0)$ chiral *magic* supergravity theories in $D=(5,1)$ dimensions has been recently exploited in [@Gunaydin-D=6] (*cfr.* Table 2 and Sec. 3.2 therein). [^9]: In this case, the chiral projectors on $S^{\pm }$ are real, as well. [^10]: After the remarks below (\[(3,1)\])-(\[(3,1)-2\]), the same holds in $D=(3,1)$, as a consequence of the representability in terms of $\mathbb{C}^{2}$. [^11]: The superscript $t$ denotes transposition. [^12]: This observation will also give rise to the chain of isomorphisms (\[iso-3\]) (holding both at the Lie algebra and at the Lie group level). It can be traced back to the symmetry of the *doubly-split* Magic Square of order $2$ [@BS-1; @BS-2; @MS-2-Groups]. [^13]: It should be remarked here that the determinant of $2\times 2$ Hermitian matrices with $\mathbb{H}$- or $\mathbb{H}_{s}$-valued entries is well defined [@McCrimmon]. [^14]: The tilde in $\widetilde{\mathrm{Sp}}(4,\mathbb{C}_{s})$ denotes the peculiar definition (\[def-Sp4\]) - after [@BS-1; @BS-2] - of the symplectic group by the matrix Hermitian-conjugate (and not by the matrix transpose, as usually done).
--- abstract: 'The rapid growth in scale and complexity of both computational and observational astrophysics over the past decade necessitates efficient and intuitive methods for examining and visualizing large datasets. Here we discuss some newly developed tools for importing astrophysical data into the three dimensional visual effects software [*Houdini*]{} and manipulating it there. This software is widely used by visual effects artists, but a recently implemented Python API now allows astronomers to more easily use [*Houdini*]{} as a visualization tool. This paper includes a description of features, workflow, and various example visualizations. The project website, [www.ytini.com]{}, contains [*Houdini*]{} tutorials and links to the Python script Bitbucket repository, aimed at a scientific audience, to simplify the process of importing and rendering astrophysical data.' author: - 'J.P. Naiman, Kalina Borkiewicz, A.J. Christensen' bibliography: - 'bib\_pasp\_2015.bib' title: Houdini for Astrophysical Visualization --- Introduction ============ Astronomers have long used visualizations of their observed and simulated data to stimulate the public’s interest in science. Recent inroads utilizing three dimensional modeling and game development software make outreach materials generated by a myriad of individual scientists, rather than a few graphics studios, an exciting new avenue to be explored [@kent2015; @taylor2015; @naiman2016]. As technological advancements in graphics and gaming progress, the scientist is presented with innovative methods to further develop their own public outreach [@vogt2013; @steffen2014; @vogt2014; @brown2014; @madura2015]. In this paper, we introduce several astrophysical data processing Python tools for use in [*Houdini*]{}, the three dimensional visual effects software used by a wide variety of professional studios.
Here, we make use of [*yt*]{} [@turk2011] as a data reader within [*Houdini*]{} to serve as a bridge between the astronomical and graphics communities. A variety of resources, including tutorials, example scientific [*Houdini*]{} files and workflows, Python scripts, both raw and preprocessed volumetric scientific data sets, and links to external resources, have been compiled into the newly launched “Houdini For Astronomy” website, [www.ytini.com]{}, as a resource for the astronomy community. Integration of astronomical data within [*Houdini*]{} provides scientists with the ability to create production quality visualizations that encompass not only images in papers and on websites, but large scale movies for dome shows and various virtual reality devices. The organization of this paper is as follows: after a brief overview of the [*Houdini*]{} software and its use in professional graphics studios in section \[section:houdini\], an introduction to the [*Houdini*]{} GUI and an example workflow including the import of astrophysical data and image rendering is presented in section \[section:workflow\]. The different methods for importing datasets are discussed in section \[section:inputFormats\]. Example renders and interactive three dimensional models, along with a brief discussion of their possible extensions with visual effects methods, are presented in section \[section:examples\]. We conclude with a summary and discussion of future plans in section \[section:discussion\]. Houdini Background and Usage {#section:houdini} ============================ [*Houdini*]{} is a commercially available visual effects software package that is widely used by visual effects studios in Hollywood, video game development studios, and other graphics industries.
Many of these studios and companies, such as Disney/Pixar, Dreamworks, Double Negative, Method, Digital Domain, Framestore, Axis, Electronic Arts, and Ubisoft, contribute to a lively user and developer community, and in concert with exhaustive documentation by the developer, have made the software accessible to smaller research communities such as the research teams at the University of Technology Sydney, the California Academy of Sciences, the American Museum of Natural History, and the National Center for Supercomputing Applications.[^1] These research teams use the extensive visual effects methods available in [*Houdini*]{} to craft educational documentaries and planetarium shows from observed and simulated datasets from a wide variety of scientific disciplines. While several other three dimensional graphics packages exist to render astronomical data ([*Maya*]{}, [*Blender*]{}), [*Houdini*]{} provides some advantages for the rendering of multidimensional scientific datasets. In addition to [*Houdini*]{} being an industry standard package, which increases the ease of collaborating with professional filmmakers, its native renderer, Mantra, is considered to be one of the highest-quality renderers for volumetric data, a data format often generated by both computational and observational astronomers. Packages such as [*Maya*]{} and [*Blender*]{} provide excellent general purpose tools for three dimensional modeling, compositing, and animating. However, it is [*Houdini*]{}’s focus on rendering specifically for visual effects that allows for the production of visually compelling volume renders: the multidimensional nature of astrophysical data makes it more closely akin to the material handled by industry standard visual effects frameworks than to that of either modeling or animation frameworks.
In addition, [*Houdini*]{} offers many methods for arbitrary data types that most animation software does not, including pre-built nodes for importing data from tables and easy conversions between imported data types to compare visualizations from differently formatted datasets. [*Houdini*]{} is unique among commercial visual effects tools as its scene development paradigm is intended to be modular, procedural, and easily altered at any stage of development. This proceduralism is represented as a network of operator nodes that affect geometry, shaders, rendering, etc. The node network should feel familiar to data scientists, as it is a graphical representation of the flow of information through blocks of code. In addition to a programmer-friendly operator network, there are nodes that allow scripting with a native optimized C-like language called VEX, nodes that allow Python scripting, and many other ways to extend the native capabilities of the software, including external so-called “Digital assets” and the C++ Houdini Development Kit. Developers familiar with software such as [*Maya*]{} or [*Blender*]{} will be familiar with the way this concept is implemented by those programs in their shader network interfaces; however, for the novice we review several vital user interfaces in section \[section:workflow\]. All the widely used operating systems are supported by [*Houdini*]{}, including Windows, Linux, and Mac. Although there is a high-priced license available to studios, significantly discounted licenses are available to educational institutions, and a free learning edition with few limitations is available for direct download from the developer, SideFX[^2]. Scene layout and geometry data are stored within [*Houdini*]{} as a series of nested directories, and represented this way within the interface.
At the scene level, one can enter a directory that contains all of the surface data, a directory for the render data, a directory for the shader data, etc. A user then builds operator networks within these directories. Surface directory networks are built from surface operators or SOPs, render networks are built from ROPs, shader networks are built from SHOPs, and so on. New nodes can be placed by pulling up menus or tool shelves. The Tab key menu is also an efficient way to peruse the available operators. While scenes can become increasingly complicated for production-level renders, the rendering of an image of astronomical data can be accomplished in relatively few steps. Example Houdini Workflow {#section:workflow} ======================== The pathway from an astrophysical dataset to a rendered image and/or three dimensional model revolves around first loading the data in a format that is understood by [*Houdini*]{}, placing the objects in the three dimensional space, determining how light will travel through the dataset based on user defined shaders, and then exporting rendered image(s) or a three dimensional model. While the data handling portion of the workflow is done through the custom Python scripts discussed in section \[section:inputFormats\], the interaction with the loaded data and [*Houdini*]{} cameras is accomplished through the GUI, shown in Figure \[fig:gui\]. The GUI can be modified to include further informational and analysis panels, but the default version shown in Figure \[fig:gui\] highlights the key interfaces for the novice user: the Scene View provides direct three dimensional interaction with the loaded data, the Network View depicts the connected network of data and data modifiers and shaders which determine the final rendered image, and the Parameters panel provides information about each node in the network.
The smaller highlighted sections include the Menu bar, which is used to import code and data, and the Selector and Handle Controls, which give the user different methods to select and move data within the three dimensional space of the Scene View. Using interactions with this basic GUI we outline the general workflow of a [*Houdini*]{} session, but caution that this workflow is highly variable between users and projects. - [**Load Volume Data**]{}: [*Houdini*]{} accepts several graphics data formats natively, including .vdb, .geo, .bgeo, .json, .pdb, and .obj[^3]. One can load in this type of data by importing through the Menu section highlighted in Figure \[fig:gui\] or through a File node in the Network View panel, the details of which are presented in an online tutorial[^4]. Direct access to data through the [*yt*]{} data reading capabilities is presented in some detail in section \[section:pythonsop\]. - [**Position a Camera (Optional)**]{}: One can set up test renders directly through the viewport, as will be discussed in the [**Set up a Test Render**]{} bullet point. However, for the sake of completeness we discuss here how one can set up a rendering camera. The simplest way to create a static camera is by framing the scene in the Scene View window, and then creating a “New Camera” from the “no cam” dropdown menu at the top right corner of the Scene View. This creates a camera looking at exactly what you see in the Scene View. More complicated methods for positioning and pointing cameras are left to the tutorials and resources on the [*ytini*]{} website[^5]. - [**Set up a Test Render**]{}: Once the volume and camera are created, an image can be rendered. For a quick render, first select the “Render View” tab above the Scene Viewer panel highlighted in Figure \[fig:gui\]. Then select your camera (“/obj/cam1” by default) and press the [Render]{} button. A test render of your scene will then appear in the Scene Viewer panel in the “Render View” tab.
- [**Set up Shaders**]{}: Shaders determine how the parameters of your dataset are translated into the opacity, color, and light reflection model passed to the image renderer. This process can be achieved through [*Houdini*]{}’s CVEX[^6] code, or through the manipulation of prebuilt SHOP (shader) nodes. Here we outline the basic SHOP nodes to create a custom shader and leave the more complex details to a [ytini.com]{} tutorial. In Figure \[fig:shaderNode\] we show an expanded view of the internal node structure of the prebuilt “billowysmoke” shader for volume data, which is accessed through the “Material Palette” tab in the Network View highlighted in Figure \[fig:gui\]. The data input parameters are shown in dark purple with the “density” and “temperature” labels. These are encoded in your geometry file and imported in the [**Load Volume Data**]{} step of this workflow. The user is able to modify this complex node structure with several color ramps, as will be outlined in the example of section \[section:ajviz\]. The various other nodes control how these geometry parameters are translated into surface colors, surface opacity, and outputs of the Bidirectional Scattering Distribution Function[^7], denoted in the second-rightmost node in Figure \[fig:shaderNode\] as Cf, Of, and F, respectively. As shaders can grow nearly arbitrarily complex, we leave the step-by-step instructions for how to build this example shader as a tutorial[^8] and provide the reader with a previously built HIP file containing this shader for direct import into [*Houdini*]{}[^9]. - [**Render an Image to Disk**]{}: When the user has selected their final shader parameters, they may render an image to a file by right-clicking in the “Render View” tab in the Scene View panel highlighted in Figure \[fig:gui\] and following the instructions to specify a file path and file type in which to save their image.
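At its core, the shader step above is a transfer function: a mapping from a data field such as density to color and opacity. As a Houdini-independent illustration of that idea, the following NumPy sketch applies a simple 1D color/opacity ramp (this is not Houdini's actual VEX shader code; the function name and ramp control points are invented for this example):

```python
import numpy as np

def apply_ramp(density, d_min, d_max, ramp_colors, max_opacity=1.0):
    """Map a density field to RGB color and opacity via a simple ramp.

    Mimics, in spirit, what a volume shader's color/opacity ramps do:
    normalize the field, then linearly interpolate through control colors.
    Illustrative sketch only.
    """
    # Normalize density into [0, 1], clipping out-of-range values.
    t = np.clip((density - d_min) / (d_max - d_min), 0.0, 1.0)

    # Interpolate each RGB channel through evenly spaced control points.
    ramp_colors = np.asarray(ramp_colors, dtype=float)  # shape (n, 3)
    positions = np.linspace(0.0, 1.0, len(ramp_colors))
    rgb = np.stack([np.interp(t, positions, ramp_colors[:, c])
                    for c in range(3)], axis=-1)

    # A simple choice: opacity proportional to the normalized field.
    opacity = max_opacity * t
    return rgb, opacity

# Example: a black -> red -> yellow "fire" ramp on a toy density field.
density = np.array([1e-28, 1e-26, 5e-26, 1e-25])
rgb, alpha = apply_ramp(density, d_min=1e-27, d_max=1e-25,
                        ramp_colors=[(0, 0, 0), (1, 0, 0), (1, 1, 0)])
```

In Houdini itself, the same role is played by the ramp parameters of the SHOP network described above, evaluated per voxel at render time.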
New Methods for Importing Scientific Data To Houdini {#section:inputFormats} ==================================================== As with the majority of production-level graphics software, [*Houdini*]{} is not specifically designed to load observed or simulated astrophysical datasets. Here we present two methods under development to facilitate the formatting of data from astronomy datasets to [*Houdini*]{} geometry nodes. All methods described here are more fully documented on [ytini.com]{} through tutorials and downloadable example datasets and scripts. Both methods described in this section use [*yt*]{} as a means to query and format the data for use within [*Houdini*]{}. The [*yt*]{} codebase is an open-source, parallel analysis and visualization Python package. This widely used program includes a myriad of tools to view and analyze data, including volume rendering, projections, slices, halo finding, isocontour generation, and variable integration along paths. Here, we emphasize [*yt*]{}’s capabilities as a data reader and volume formatter, as it provides uniform interaction with data generated from the majority of the popular simulation codes and observational databases. Each frontend for a specific data format in [*yt*]{} converts data into physically meaningful units, describes how it is stored on disk for quick and efficient reading, and prescribes a method of reading the data from disk. In this way, the user of [*yt*]{} in [*Houdini*]{} can remain nearly agnostic toward the specific simulation code or observational source used to generate the data. Future plans include more direct integration of [*yt*]{}’s data manipulation capabilities within [*Houdini*]{}.
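Both import methods below rely on sampling data onto a uniform grid at a chosen refinement level, in the spirit of [*yt*]{}'s covering grids. The essential idea can be mimicked in a few lines of NumPy; this is an illustrative sketch only, not [*yt*]{}'s actual implementation, and the helper name is invented:

```python
import numpy as np

def covering_grid(base_grid, level):
    """Resample a coarse uniform grid onto a finer one, doubling the
    resolution per refinement level (nearest-neighbor upsampling).

    Mimics, in spirit, what a covering grid provides: a single uniform
    array at dimensions base_dims * 2**level, regardless of the
    underlying (possibly adaptive) data layout. Illustrative only.
    """
    factor = 2 ** level
    fine = base_grid
    # Repeat each voxel `factor` times along every axis.
    for axis in range(base_grid.ndim):
        fine = np.repeat(fine, factor, axis=axis)
    return fine

coarse = np.arange(8.0).reshape(2, 2, 2)   # a toy 2x2x2 "simulation"
fine = covering_grid(coarse, level=2)      # an 8x8x8 uniform grid
```

In the real workflow, [*yt*]{} performs this resampling (with proper interpolation across adaptive-mesh levels) and the resulting uniform array is handed to [*Houdini*]{} as a volume.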
Direct access within Houdini through yt: A Python SOP {#section:pythonsop} ----------------------------------------------------- We begin our discussion of data access by describing an example [*Houdini*]{} “SOP” which makes use of [*yt*]{} to read in volumetric data from a simulation, where “SOP” is shorthand for the Surface OPerators used to construct and manipulate geometry in [*Houdini*]{}. Because this Python SOP makes use of the [*yt*]{} libraries, one must open [*Houdini*]{} with their terminal’s [PATH]{} variable pointing to both [*Houdini*]{}’s Python libraries as well as those in [*yt*]{}.[^10] The code for this Python SOP is shown in Figure \[fig:pythonSOPcode\]. This code uses several GUI-modifiable parameters to control the data file location, resolution level, and type of field to be uploaded with this SOP. Here, [*yt*]{} is used as a data reader to query the simulation snapshot file and generate a uniform grid for a specific resolution, which is then formatted for [*Houdini*]{}’s internal Volume data format. To add this Python SOP to one’s [*Houdini*]{} file, one simply accesses [File $\rightarrow$ New Operator Type]{} from the Menu section depicted in Figure \[fig:gui\], selects a name and label for the new Python SOP (default is “newop” and “New Operator”, respectively), sets the [Operator Style]{} to “Python Type”, and the [Network Type]{} to “Geometry Operator”. Once one clicks [Accept]{}, a new window pops up in which one can enter the various code and parameters needed for the new Python SOP. The important panels necessary for this SOP import are outlined in Figure \[fig:pythonSOPprocess\]. First, the code from Figure \[fig:pythonSOPcode\] must be copied into the “Code” panel, which appears after clicking the Code panel tab highlighted by the red circle shown in the upper panel of Figure \[fig:pythonSOPprocess\].
In the “Basic" panel shown in the top of Figure \[fig:pythonSOPprocess\] one must set the default “Minimum Inputs" to zero as no separate [*Houdini*]{} nodes are required to run this SOP. In the “Parameters" panel shown in the bottom of Figure \[fig:pythonSOPprocess\], one must drag-and-drop a value from the type list on the left list to the “Existing Parameters" middle list for each parameter in the Python code outlined in Figure \[fig:pythonSOPcode\]. In addition, one must set the name and label of each parameter in the “Parameter Description" section of this panel, highlighted by the green circle in Figure \[fig:pythonSOPprocess\]. Once the [Accept]{} button has been selected in this New Operator Type panel, the new Python SOP is added to the current [*Houdini*]{} file. To access the SOP one simply adds a new geometry node as described in section \[section:workflow\], deletes the default File node within the geometry node and replaces it with one’s Python SOP which is accessed within [Tab $\rightarrow$ Digital Assets]{}. An example HIP file with this Python SOP pre-installed is located on the file download section of [ytini.com]{}[^11]. A more detailed tutorial with a full explanation of adding Python SOPs can be found on the “Houdini for Astronomy" website[^12]. Preprocessing Data for Houdini: The VDB Format {#section:vdbformat} ---------------------------------------------- While the Python SOP is an excellent method for viewing smaller datasets, larger simulation files require more nuanced data handling methods. The OpenVDB file format is an efficient method for storage of sparse datasets [@museth2013]. Its hierarchical data structure allows OpenVDB to store high resolution, volumetric data efficiently on disk and in memory for fast I/O access. OpenVDB for Python includes a wrapper to access all of the VDB C++ functions within a Python interface. 
Installing OpenVDB for Python and its dependencies can be accomplished through a variety of package installers (apt-get, homebrew, conda, etc.). Further details of the installation process are left to a tutorial on the “Houdini for Astronomy" website[^13], where a list of installation options for a variety of operating systems will be consistently updated. Figure \[fig:vdbconvertercode\] shows an example use of the VDB converter within Python. Here, the function [vdbyt.convert\_vdb\_with\_yt]{} makes use of both the [*yt*]{} and [*pyopenvdb*]{} libraries to transform any data readable with [*yt*]{} into the efficient VDB file format, which [*Houdini*]{} is optimized to read and render. The necessary variables include the data file, the location of the saved VDB, the refinement level of data to process, and the variable the user wishes to convert. Several other variables are included to modify the output data. One can choose to output the log of the variable ([log\_the\_variable]{}, default is [False]{}), clip the output so that data with values below the clipping tolerance are not stored in the converted VDB ([variable\_tol]{} sets the clipping level, default is [None]{}), renormalize the minimum and maximum of the stored data to $0 \rightarrow 1$ ([renorm]{}, default is [True]{}), and rescale the domain size ([renorm\_box]{}, default is [True]{}) so that it spans a specified number of [*Houdini*]{} domain units on import ([renorm\_box\_size]{}, default is 10). Figure \[fig:vdbconvertercode\] depicts an example that takes the log of the output variable, discards data with densities below $10^{-27} \, {\rm g \, cm^{-3}}$, and renormalizes the box so that it spans 100x100x100 [*Houdini*]{} units upon import of the converted VDB. In this process, [*yt*]{} is utilized to read in the data and convert it to a uniform grid, sampled at the specified resolution level.
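The effect of these options can be summarized in a short pure-Python sketch. This is a hypothetical re-implementation for illustration only; the actual work is performed inside [vdbyt.convert\_vdb\_with\_yt]{} on the [*yt*]{}-generated grid, and the helper below is not part of that library.

```python
import math

def preprocess(values, domain_width,
               log_the_variable=False, variable_tol=None,
               renorm=True, renorm_box=True, renorm_box_size=10.0):
    """Illustrative (hypothetical) re-implementation of the converter's
    scaling/thresholding options described in the text."""
    out = list(values)
    if log_the_variable:
        out = [math.log10(v) for v in out]
        if variable_tol is not None:
            variable_tol = math.log10(variable_tol)
    if variable_tol is not None:
        # voxels below the clipping tolerance are simply not stored
        out = [v for v in out if v >= variable_tol]
    if renorm and out:
        lo, hi = min(out), max(out)
        span = (hi - lo) or 1.0
        out = [(v - lo) / span for v in out]
    # rescale the physical domain to span renorm_box_size Houdini units
    box_scale = (renorm_box_size / domain_width) if renorm_box else 1.0
    return out, box_scale

# Mirrors the Figure example: log scaling, clipping below 1e-27 g/cm^3,
# and a box renormalized to span 100 Houdini units.
stored, scale = preprocess([1e-28, 1e-26, 1e-24], domain_width=1.0,
                           log_the_variable=True, variable_tol=1e-27,
                           renorm_box_size=100.0)
```

Note that clipping happens after the log is taken in this sketch, so the tolerance is transformed consistently; whether the real converter applies the steps in this exact order is an assumption.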
After various scaling and thresholding methods are applied, a VDB grid is created with [pyopenvdb]{}, and the uniform grid generated by [*yt*]{} is placed within the VDB grid structure. This sparse data structure is then stored on disk at the location specified by the user. Output of combined datasets, multi-level thresholding, and further processing using [*yt*]{} data manipulation methods are all possible, but are left for future tutorials on the “Houdini for Astronomy" website[^14].

Example Visualizations {#section:examples}
======================

As astronomical datasets grow in size and complexity, complementary methods of both visualization and analysis are required to inspect the data [@goodman2012]. In a similar vein to the methods used to bring the visualization techniques of [*Blender*]{} to the astronomical community [@taylor2015; @kent2015; @naiman2016], this section focuses on several methods of using [*Houdini*]{} to visualize astronomical data for a specific dataset from [@oshea2004]. Here, we present an outline of how one makes a production-quality render from one’s data, checks the accuracy of the renders with one of [*Houdini*]{}’s data analysis nodes, and how one might combine multiple observational and computational datasets to convey a scientific concept.

Production Quality Volume Renders {#section:ajviz}
---------------------------------

As alluded to in section \[section:workflow\], [*Houdini*]{}’s volume processing and vast shader manipulation capabilities allow the production of beautifully rendered scientific images. To produce a volume render, we begin with the VDB generated from the code described in section \[section:vdbformat\] and illustrated in Figure \[fig:vdbconvertercode\]. The IsolatedGalaxy dataset [@oshea2004] used in this example can be downloaded from the [*yt*]{} Data Repository website[^15].
The density threshold value chosen in Figure \[fig:vdbconvertercode\] allows one to probe the in-falling gas within the galaxy’s halo, which surrounds an embedded disk. Once the VDB is imported into [*Houdini*]{}, we can generate the volume render of this in-falling material depicted in Figure \[fig:volumerender\] by manipulating the shader properties of the “billowysmoke" shader described in section \[section:workflow\]. In particular, one can change the total emissivity and opacity of the entire volume, the variety of colors for each emissivity level, and a variety of “smoke" effects to modify the render. This allows the user to probe and highlight different features in their dataset. While the image in Figure \[fig:volumerender\] uses some defaults of the “billowysmoke" shader to color the volume render, one can use other variables in their dataset to further modify the color scheme. Beyond the effects described here, this image can be further modified through [*Houdini*]{}’s extensive toolset for visual effects. Complex camera paths, fading, advanced lighting, nesting of datasets, procedural noise or other added detail, and other derived features like advected field lines are all features commonly used by digital artists when applying [*Houdini*]{} to scientific data. Further discussion of these methods is left for future tutorials on [ytini.com]{}.

Accuracy Checks of Volume Renders: Analysis Plots in [*Houdini*]{} {#section:kalinaviz}
------------------------------------------------------------------

While predominantly visual effects and visualization software, [*Houdini*]{} can also be utilized to interactively inspect datasets through interaction with the data in the Scene View and with a variety of analysis plots and tables.
We utilize one of these features - a “Volume Slice" node - to visualize the IsolatedGalaxy dataset both using the slice plot technique many astronomers employ to analyze their simulations (Scene View in the left panel of Figure \[fig:slice\]), and the less familiar method of producing a volume render of the dataset discussed in section \[section:ajviz\] (Render View in the right panel of Figure \[fig:slice\]). Here, we probe gas further out from the center of the galaxy than that shown in Figure \[fig:volumerender\] by using a lower density cut ($10^{-29} \, {\rm g \, cm^{-3}}$) in the generation of the VDB with the code depicted in Figure \[fig:vdbconvertercode\]. To use this feature, the user first presses [TAB]{} to pull up the TAB Menu in the Network View window, types [volume slice]{}, and presses [ENTER]{} to select. By connecting the output of the volume node to the input of the “Volume Slice" node, a slice across the specified volume is created as shown in the Network View of Figure \[fig:slice\]. To view the slice plot instead of a render of the volume, one clicks on the center of the volume slice node to select it, and then clicks on the right-most end of the volume slice node to toggle the display flag. This will turn the right side blue, indicating the volume slice will be displayed in the Scene View window. The parameters defining the plane axis, plane offset, data attribute name, and data range can be changed in the Parameter window. To inspect the individual data values along the slice, with the volume slice node selected, one switches to the Geometry Spreadsheet tab, which lives at the top of the Scene View as highlighted in Figure \[fig:sliceInspection\]. The last column(s) of the Geometry Spreadsheet will display the data values within the slice based on their x/y/z locations (denoted by the P\[x\], P\[y\] and P\[z\] columns).
In this way, the user can interact with their data in familiar (slice plot) and less familiar (volume render) ways, and probe individual values of their data cube as shown in Figure \[fig:sliceInspection\]. Further explanation of the analysis capabilities of [*Houdini*]{} will be collected in a tutorial on [ytini.com]{}[^16]. In particular, we describe using the slice plotting and volume rendering capabilities of [*Houdini*]{} in tandem with the IsolatedGalaxy dataset in some detail.

Extensions: Combine and Annotate Multiple Datasets {#section:multiples}
--------------------------------------------------

Finally, [*Houdini*]{} can be utilized to combine multiple scientific datasets into dome shows and documentaries. Figure \[fig:suns3\] shows two frames from the full dome planetarium documentary “Solar Superstorms"[^17]. The “Fan" image in the top panel of Figure \[fig:suns3\] was produced from the combination of multiple datasets, rendered as both volumes and geometrical annotations [@fan2016; @rempel2014]. This rendering of a coronal mass ejection (CME) integrates 2D Solar Dynamics Observatory[^18] imagery around the rim of the sun; a 2D solar surface simulation, mapped onto a sphere geometry object; volumetric magnetic field lines as VDBs; and the coronal mass ejection itself as a VDB. The magnetic field line data were imported into [*Houdini*]{} using a Python script, and the CME was imported via a custom C++ plugin built with the [*Houdini*]{} Development Kit. In the bottom panel of Figure \[fig:suns3\] is an image from the same documentary - a rendering of solar plasma interacting with Earth’s magnetic field [@kar2014]. The magnetic field lines were traced through the 3D simulation data using [*yt*]{} on the Blue Waters[^19] supercomputer with a script that produced [*Houdini*]{}-readable .bgeo objects, which were then converted into VDB volumes. The solar plasma data were externally subsampled to reduce the file size, then read into [*Houdini*]{} as a .bgeo VDB object.
Full credits for these CADENS images are stored on [ytini.com]{}[^20]. [*Houdini*]{}’s ability to render multiple datasets at once allows the user to compare and contrast outputs from both observational and simulated data and to incorporate outputs from simulations produced with different computational methods.

Summary and Future Plans {#section:discussion}
========================

As the richness of observed and simulated astrophysical datasets increases with the advent of ever more powerful instruments and computers, so does the astronomer’s potential to translate their scientific discoveries into visually stunning images and movies. In this work we introduce several methods of interaction between astronomical data and [*Houdini*]{}, the three-dimensional visual effects software used by many professional graphics studios. The techniques described here make use of [*yt*]{} as a data reader and manipulator to parse large datasets into the volume formats used by [*Houdini*]{}. We give several simple examples of processing simulation data into aesthetically pleasing images, and encourage the reader to visit the “Houdini For Astronomy" website, [www.ytini.com]{}, for further exploration of these methods. Our ongoing work focuses on the volume rendering of data with non-uniform voxel sizes (e.g., data generated by adaptive mesh refinement codes), which is currently beyond the capabilities of the majority of graphics and visual effects software. Preliminary integration of AMR volume rendering is mentioned on the [ytini.com]{} blog[^21]. Progress on this front will be regularly updated on the website.\
\
The authors would like to thank Stuart Levy, Robert Patterson, Jeffrey Carpenter, Donna Cox, and Matthew Turk for illuminating conversations, Morgan Macleod and Melinda Soares-Furtado for their keen insights in regards to this paper, and the anonymous referee for their extremely helpful comments.
This work is supported by NSF grant AST-1402480 and the NSF award for CADENS, ACI-1445176.

![image](temp_GUI.png){width="100.00000%"}

![image](shaderNodes.png){width="130.00000%"}

![image](pythonSOP_codeRefined_mod.png){width="80.00000%"}

![image](pythonSOP_dualpanels.png){width="95.00000%"}

![image](vdbexporter_code4_mod.png){width="80.00000%"}

![image](full_volume_render_houdini_gui.png){width="80.00000%"}

![image](slice_and_volume_render.png){width="130.00000%"}

![image](ytini_slice_geoview_labels.png){width="80.00000%"}

![image](solarImageWidth2sm.png){width="80.00000%"}

[^1]: University of Technology Sydney: http://www.uts.edu.au/, California Academy of Sciences: http://www.calacademy.org/, American Museum of Natural History: http://www.amnh.org/, National Center for Supercomputing Applications, Advanced Visualization Laboratory: http://avl.ncsa.illinois.edu/

[^2]: www.sidefx.com

[^3]: VDB: http://www.openvdb.org, GEO/BGEO: https://www.sidefx.com/docs/houdini11.0/io/formats/geo, JSON: http://json.org/example.html, PDB: https://github.com/Microsoft/microsoft-pdb, OBJ: https://en.wikipedia.org/wiki/Wavefront\_.obj\_file

[^4]: www.ytini.com/tutorials/tutorial\_isolatedGalaxy.html

[^5]: www.ytini.com/tutorials/tutorial\_moreAboutCameras.html

[^6]: https://www.sidefx.com/docs/houdini11.1/vex/contexts/cvex

[^7]: https://www.sidefx.com/docs/houdini13.0/render/bsdf

[^8]: www.ytini.com/tutorials/tutorial\_moreAboutShaders.html

[^9]: isolatedvolume\_shader.hiplc entry in table on www.ytini.com/listofHoudiniFiles.html

[^10]: Note: the Python install version of [*Houdini*]{} must match that of [*yt*]{}. If you install with a conda or miniconda installation this can be achieved with [PATH\_TO\_YT\_CONDA/bin/conda install python$=$2.7.\#]{} where “\#" is the sub-version of Python installed within [*Houdini*]{}. This is discussed further in the Getting Started section of the website - www.ytini.com/getstarted.html.
[^11]: withPythonSOP.hipnc entry in table on www.ytini.com/listofHoudiniFiles.html [^12]: www.ytini.com/tutorials/tutorial\_pythonSOP.html [^13]: www.ytini.com/tutorials/tutorial\_vdbInstall.html [^14]: www.ytini.com/tutorials/tutorial\_pythonVDBconverter.html [^15]: http://yt-project.org/data/ [^16]: www.ytini.com/tutorials/tutorial\_analysisPlots.html [^17]: http://www.ncsa.illinois.edu/enabling/vis/cadens/documentary/solar\_superstorms [^18]: http://sdo.gsfc.nasa.gov/ [^19]: http://www.ncsa.illinois.edu/enabling/bluewaters [^20]: www.ytini.com/misc/SolarSuperstorms\_Magnetosphere\_Credits.pdf and www.ytini.com/misc/SolarSuperstorms\_DoubleCME\_Credits.pdf [^21]: http://www.ytini.com/blogs/blog\_amr\_2016-11-02.html
--- abstract: | Within the left-right symmetric model (LRM) the decays $$S_1\to\mu^++\tau^-,\qquad S_1\to\mu^-+\tau^+$$ where $S_1$ is an analog of the standard model Higgs boson, are considered. The widths of these decays are found in the third order of perturbation theory. Since the main contribution to the decay widths comes from the diagram with the light and heavy neutrinos in the virtual state, investigation of these decays could shed light on the structure of the neutrino sector. The obtained decay widths critically depend on the charged gauge boson mixing angle $\xi$ and the heavy-light neutrino mixing angle $\varphi$. The LRM predicts the values of these angles as functions of the vacuum expectation values $v_L$ and $v_R$. Using the results of the existing experiments on searching for the additional charged gauge boson $W_2$ and on measuring the electroweak $\rho$ parameter gives $$\sin\xi\leq5\times10^{-4},\qquad\sin\varphi\leq2.3\times10^{-2}.$$ However, even using the upper bounds on $\sin\xi$ and $\sin\varphi$, one does not reach the experimental upper bound on the branching ratio, $\mbox{BR}(S_1\to\tau\mu)_{exp}$, equal to $0.25\times10^{-2}$. The theoretical expression proves to be two orders of magnitude smaller than $\mbox{BR}(S_1\to\tau\mu)_{exp}$. author: - | O.M.Boyarkin[^1], G.G.Boyarkina, D.S.Vasileuskaya\ \ --- Keywords: Higgs boson, lepton flavor violation, left-right symmetric model, heavy and light neutrinos, mixing in the neutrino sector, Large Hadron Collider.\ PACS numbers: 12.15.Ji, 12.15.Lk, 13.40.Ks, 12.60.Cn.

Introduction
============

Upon discovering the Higgs boson, the obvious next step is to elucidate whether it is an elementary or a composite particle and whether there is physics beyond the Standard Model (SM) hidden in the Higgs sector. Expectations of departures from SM behavior are based on the following facts.
The SM has not found a satisfactory explanation of the baryon asymmetry of the Universe, the smallness of the neutrino masses, the value of the muon anomalous magnetic moment, the hierarchy problem, and so on. Moreover, among the SM particles there are no candidates for the role of the weakly interacting massive particles which make up the non-baryonic cold dark matter. It is clear that the future ambitious experimental program, both at the upgraded Large Hadron Collider (LHC) and at future linear colliders, which will determine all the Higgs couplings with higher precision than at present, will play a central role. A particularly interesting possible departure from the standard Higgs properties would be Higgs decays proceeding with lepton flavor violation (LFV). These decays do not take place even in the minimally extended SM (SM with massive neutrinos), since lepton flavor symmetry is an exact symmetry of the SM and therefore it predicts vanishing rates for all these LFV processes to all orders in perturbation theory. It should be noted that any experimental signal of LFV will indicate that some new physics, either new particles or new interactions, must be responsible for it. The ATLAS and CMS collaborations are actively searching for these LFV Higgs decays. For example, the CMS collaboration saw an excess in the $H\to\tau\mu$ channel after run-I (this process includes both $H\to \mu^+\tau^-$ and $H\to \mu^-\tau^+$), with a significance of $2.4\sigma$ and a value [@VK15; @Gaa] $$\mbox{BR}(H\to\tau\mu)=(0.84^{+0.39}_{-0.37})\%.\eqno(1)$$ However, neither this excess, nor any other positive LFV Higgs decay signal, has been detected in the present run-II. As of now, ATLAS has released its results after analyzing 20.3 $\mbox{fb}^{-1}$ of data at a center of mass energy of $\sqrt{s}=8$ TeV, achieving sensitivities of the order of $10^{-2}$ for the $H\to\tau\mu$ and $H\to\tau e$ channels [@GA17].
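For definiteness, the branching ratio quoted in Eq. (1), and in the bounds below, is the standard combination of the two charge channels normalized to the total Higgs width (this definition is implicit in the experimental analyses):

```latex
\mbox{BR}(H\to\tau\mu)=
\frac{\Gamma(H\to\mu^{+}\tau^{-})+\Gamma(H\to\mu^{-}\tau^{+})}
     {\Gamma_{H}^{\rm tot}}.
```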
CMS has also searched for the $H\to\mu e$ channel after run-I [@VKH16] and has further enhanced the sensitivities of the $H\to\tau\mu$ and $H\to\tau e$ channels with new run-II data [@CMS17] at $\sqrt{s}=13$ TeV, setting the most stringent upper bounds on the LFV Higgs decays, which at the 95% CL are as follows $$\mbox{BR}(H\to\mu e)<3.5\times10^{-4}\eqno(2)$$ $$\mbox{BR}(H\to\tau e)<0.61\times10^{-2}\eqno(3)$$ $$\mbox{BR}(H\to\tau\mu)<0.25\times10^{-2}\eqno(4)$$ There is no question that observation of a Higgs boson decay with LFV would be a smoking-gun signal of physics beyond the SM. These decays have been studied for a long time in the literature within various SM extensions (for recent works see [@KC16; @SB16; @Eah16; @AA16]). The models predicting Higgs boson decays with LFV can be classified into two categories. Among the first are the SM extensions in which the existence of these decays is provided by introducing the Higgs boson LFV couplings by hand. This can be achieved by an extension of the scalar sector with some additional discrete symmetries (see, for example, Refs. [@MDCa; @ACri]). It is clear that all these SM extensions necessarily introduce a number of new arbitrary parameters. Notice that in models of this kind the Higgs decays (2)-(4) prove to be allowed even in the tree approximation. However, a more elegant explanation of the Higgs decays with LFV is given by models falling into the second category, in which the flavor mixing among particles of different generations is embedded by construction. An example is provided by the supersymmetric models, in which flavor mixing among the three generations of the charged sleptons and/or sneutrinos takes place. This mixing produces, via loop contributions, the Higgs decay channel $H\to l_i\overline{l}_j$ at the one-loop level [@JLD00; @AAD14].
Another example is the left-right symmetric model (LRM) [@ICP74; @RNM75; @GSRN], where the LFV processes are caused by the mixing in the neutrino sector. Within the LRM the LFV was investigated by the example of the processes [@Bom97] $$e^-+\mu^+\to W_k^-+W_n^+,\qquad e^-+\mu^-\to W_k^-+W_n^-,$$ which may be observed at muon colliders, and the decays [@Bom04] $$\mu^-\to e^++e^-+e^-,\qquad \mu^-\to e^-+\gamma.$$ In so doing, it was shown that within the LRM it is possible to reach the experimental upper bounds on BR($\mu^-\to e^+e^-e^-)$ and BR($\mu^-\to e^-\gamma)$. In this work we also investigate the LFV processes from the point of view of the LRM. Our goal is to consider the Higgs decay $H\to\mu\tau$ and establish whether this decay is possible in the context of the LRM. The organization of the paper goes as follows: section 2 contains a summary of the LRM. In section 3 we carry out our calculations and analyze the results obtained. Section 4 contains our conclusion.

The left-right-symmetric model
==============================

In the LRM quarks and leptons enter into the left- and right-handed doublets $$\left.\begin{array}{ll} \displaystyle{Q_L^a({1\over2}, 0, {1\over3})=\left(\matrix{u_L^a\cr d_L^a}\right)},\hspace{12mm} \displaystyle{Q_R^a(0,{1\over2}, {1\over3})=\left(\matrix{u_R^a\cr d_R^a}\right)},\\[4mm] \displaystyle{\Psi_L^a({1\over2}, 0, -1)=\left(\matrix{\nu_{aL}\cr l_{aL}}\right)},\qquad \displaystyle{\Psi_R^a(0,{1\over2}, -1)=\left(\matrix{N_{aR}\cr l_{aR}}\right)},\end{array}\right\}\eqno(5)$$ where $a=1,2,3$, in brackets the values of $S^W_L, S^W_R$ and $B-L$ are given, $S^W_L$ ($S^W_R$) is the weak left (right) isospin while $B$ and $L$ are the baryon and lepton numbers. Note that introducing the heavy neutrinos $N_{aR}$ leads to the existence of the see-saw relation which, in its turn, explains the smallness of the $\nu_l$-neutrino mass. The Higgs sector structure of the LRM determines the neutrino nature.
The mandatory element of the Higgs sector is the bi-doublet $\Phi(1/2,1/2,0)$ $$\Phi=\left(\matrix{\Phi^0_1 & \Phi^+_2\cr \Phi^-_1 &\Phi^0_2\cr}\right).\eqno(6)$$ The unequal vacuum expectation values (VEV’s) of its electrically neutral components generate the masses of quarks and leptons. For the neutrino to be a Majorana particle, the Higgs sector must include two triplets $\Delta_L(1,0,2)$, $\Delta_R(0,1,2)$ [@RN81] $$({\mbox{\boldmath $\tau$}}\cdot{\mbox{\boldmath $\Delta$}}_L)= \left(\matrix{\delta_L^+/\sqrt{2} & \delta_L^{++}\cr \delta_L^0 & -\delta_L^+/\sqrt{2}\cr}\right),\qquad ({\mbox{\boldmath $\tau$}}\cdot{\mbox{\boldmath $\Delta$}}_R)=\left(\matrix{\delta_R^+/\sqrt{2} & \delta_R^{++}\cr \delta_R^0 & -\delta_R^+/\sqrt{2}\cr}\right).\eqno(7)$$ If the Higgs sector consists of two doublets $\chi_L(1/2,0,1)$, $\chi_R(0,1/2,1)$ and one bidoublet $\Phi(1/2,1/2,0)$ [@RMS77], then the neutrino represents a Dirac particle. In what follows we shall consider the LRM version with Majorana neutrinos. The masses of fermions and their interactions with the Higgs bosons are controlled by the Yukawa Lagrangian. Its expression for the lepton sector is as follows $${\cal L}_Y=-\sum_{a,b}\{h_{ab}\overline{\Psi}_{aL}\Phi\Psi_{bR}+ h^{\prime}_{ab}\overline{\Psi}_{aL}\tilde{\Phi}\Psi_{bR}+$$ $$+if_{ab}[\Psi^T_{aL}C\tau_2({\mbox{\boldmath $\tau$}}\cdot {\mbox{\boldmath $\Delta$}}_L)\Psi_{bL}+ (L\rightarrow R)]+\mbox{h.c.}\},\eqno(8)$$ where $C$ is a charge conjugation matrix, $\tilde{\Phi}= \tau_2\Phi^*\tau_2$, $a,b=e,\mu,\tau,$ $h_{ab}, h^{\prime}_{ab}$ and $f_{ab}=f_{ba}$ are bidoublet and triplet Yukawa couplings (YC’s), respectively.
The spontaneous symmetry breaking (SSB) according to the chain $$SU(2)_L\times SU(2)_R\times U(1)_{B-L}\rightarrow SU(2)_L\times U(1)_Y \rightarrow U(1)_Q$$ is realized for the following choice of the vacuum expectation values (VEV’s): $$<\delta^0_{L,R}>={v_{L,R}\over\sqrt{2}},\qquad <\Phi^0_1>=k_1,\qquad <\Phi^0_2>=k_2.\eqno(9)$$ To achieve agreement with experimental data, it is necessary to ensure fulfillment of the conditions $$v_L\ll\mbox{max}(k_1,k_2)\ll v_R.\eqno(10)$$ The Higgs potential $V_H$ is the essential element of the theory because it defines the physical-state basis of the Higgs bosons, the Higgs masses, and the interactions between the Higgses. We shall use the most general form of $V_H$, which was proposed in Ref. [@NGD91]. After the SSB we have 14 physical Higgs bosons. They are: four doubly-charged scalars $\Delta^{(\pm)}_{1,2}$, four singly-charged scalars $\tilde{\delta}^{(\pm)}$ and $h^{(\pm)}$, four neutral scalars $S_{1,2,3,4}$ (the $S_1$ boson is an analog of the SM Higgs boson), and two neutral pseudoscalars $P_{1,2}$. We now direct our attention to the sector of the neutral scalar Higgses. If one does not impose any conditions on the constants entering the Higgs potential $V_H$, then we have four scalars $$\left.\begin{array}{ll} S_1=(\Phi_-^{0r}\cos\theta_0+\Phi_+^{0r}\sin\theta_0)\cos\alpha -\delta_R^{0r}\sin\alpha, \ S_2=-\Phi_-^{0r}\sin\theta_0+\Phi_+^{0r}\cos\theta_0,\\[2mm] \hspace{15mm}S_3=(\Phi_-^{0r}\cos\theta_0+\Phi_+^{0r}\sin\theta_0)\sin\alpha +\delta_R^{0r}\cos\alpha, \qquad S_4=\delta^{0r}_L,\end{array}\right\}\eqno(11)$$ where $$\Phi_-^{0r}={k_1\Phi_1^{0r}+k_2\Phi_2^{0r}\over k_+},\qquad \Phi_+^{0r}={k_1\Phi_2^{0r}-k_2\Phi_1^{0r}\over k_+},$$ $k_{\pm}=\sqrt{k_1^2\pm k_2^2}$ and the superscript $r$ means the real part of the corresponding quantity.
The mixing angle $\theta_0$ is defined by the expression [@Bom200] $$\tan2\theta_0={{4k_1k_2k_-^2[2(2\lambda_2+\lambda_3)k_1k_2+\lambda_4k_+^2]} \over{k_1k_2[(4\lambda_2+2\lambda_3)(k_-^4-4k_1^2k_2^2)-k_+^2 (2\lambda_1k_+^2+8\lambda_4k_1k_2)]-\alpha_2v_R^2k_+^4}}\eqno(12)$$ and, as a result, appears to be very small. In what follows we shall set it equal to zero. As far as the mixing angle $\alpha$ is concerned, it could be very sizeable. The theory predicts that at $v_L=k_2=0$ the expression for the mixing angle $\alpha$ is as follows [@JG96] $$\tan2\alpha={\alpha_Hk_1v_R\over\rho_Hv_R^2-\lambda_Hk_1^2},\eqno(13)$$ where $\lambda_H, \rho_H$ and $\alpha_H$ are linear combinations of the constants entering the Higgs potential. Recent investigations [@FA15; @SIG16] allow for $\sin\alpha<0.44$ at $2\sigma$ CL, practically independently of the $S_3$ mass. Then the Lagrangian of interaction between the $S_1$ boson and the leptons takes the form $${\cal{L}}_l=-{1\over\sqrt{2}k_+}\Big\{\sum_am_a\overline{l}_{aR}l_{aL} S_1\cos\alpha+\sum_{a,b}\overline{N}_{aR}\nu_{bL}[h_{ab} k_1+h^{\prime}_{ab}k_2]S_1\cos\alpha\Big\}+\mbox{h.c.}.\eqno(14)$$ It is convenient to express the coupling constants of the $S_1$ boson with the neutrinos in terms of neutrino oscillation parameters [@Bom200; @Bom97].
In the two flavor approximation the neutrino mass matrix in the basis $\Psi^T=\left(\nu_{aL}^T,N_{aR}^T,\nu_{bL}^T,N_{bR}^T\right)$ will look like $${\cal M}=\left(\matrix{f_{aa}v_L & m^a_D & f_{ab}v_L & M_D\cr m_D^a & f_{aa}v_R & M^{\prime}_D & f_{ab}v_R\cr f_{ab}v_L & M^{\prime}_D & f_{bb}v_L & m^b_D \cr M_D & f_{ab}v_R & m_D^b & f_{bb}v_R\cr}\right).\eqno(15)$$ where $$m_D^a=h_{aa}k_1+h^{\prime}_{aa}k_2,\eqno(16)$$ $$M_D=h_{ab}k_1+h^{\prime}_{ab}k_2,\qquad M^{\prime}_D=h_{ba}k_1+h^{\prime}_{ba}k_2.\eqno(17)$$ The transition to the eigenstate neutrino mass basis $m_i$ ($i=1,2,3,4$) is carried out by the matrix $$U=\left(\matrix{c_{\varphi_a}c_{\theta_{\nu}} & s_{\varphi_a}c_{\theta_N} & c_{\varphi_a}s_{\theta_{\nu}} & s_{\varphi_a}s_{\theta_N} \cr -s_{\varphi_a}c_{\theta_{\nu}} & c_{\varphi_a}c_{\theta_N} & -s_{\varphi_a}s_{\theta_{\nu}} & c_{\varphi_a}s_{\theta_N} \cr -c_{\varphi_b}s_{\theta_{\nu}} &-s_{\varphi_b}s_{\theta_N} & c_{\varphi_b}c_{\theta_{\nu}} & s_{\varphi_b}c_{\theta_N}\cr s_{\varphi_b}s_{\theta_{\nu}} &-c_{\varphi_b}s_{\theta_N} & -s_{\varphi_b}c_{\theta_{\nu}} & c_{\varphi_b}c_{\theta_N}\cr}\right),\eqno(18)$$ where $\varphi_a$ and $\varphi_b$ are the mixing angles inside $a$ and $b$ generations respectively, $\theta_{\nu} (\theta_N)$ is the mixing angle between the light (heavy) neutrinos belonging to the $a$- and $b$-generations, $c_{\varphi_a}=\cos\varphi_a, \ s_{\varphi_a}=\sin\varphi_a$ and so on. 
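The see-saw structure mentioned in section 2 can be read off from (15). Schematically, in the one-generation limit and neglecting the off-diagonal Dirac terms $M_D$, $M^{\prime}_D$ (a rough estimate, not a result derived in the text), the diagonalization of the $2\times2$ block gives

```latex
m_{\nu}\simeq f_{aa}v_L-\frac{(m_D^a)^2}{f_{aa}v_R},\qquad
m_{N}\simeq f_{aa}v_R,
```

so that the smallness of the light-neutrino mass follows from $v_L\ll v_R$ together with $m_D^a\ll f_{aa}v_R$.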
Using the eigenvalue equation for the mass matrix, we obtain the relations which connect the YC’s with the masses and mixing angles of the neutrinos $$m_D^a=c_{\varphi_a}s_{\varphi_a}(-m_1c^2_{\theta_{\nu}}-m_3s^2_ {\theta_{\nu}}+ m_2c^2_{\theta_N}+m_4s^2_{\theta_N}),\eqno(19)$$ $$M_D=c_{\varphi_a}s_{\varphi_b}c_{\theta_{\nu}}s_{\theta_{\nu}} (m_1-m_3)+s_{\varphi_a}c_{\varphi_b}c_{\theta_N}s_{\theta_N}(m_4-m_2), \eqno(20)$$ $$f_{ab}v_R=s_{\varphi_a}s_{\varphi_b}c_{\theta_{\nu}}s_{\theta_{\nu}} (m_3-m_1)+c_{\varphi_a}c_{\varphi_b}c_{\theta_N}s_{\theta_N}(m_4-m_2), \eqno(21)$$ $$f_{aa}v_R=(s_{\varphi_a}c_{\theta_{\nu}})^2m_1+(c_{\varphi_a}c_{\theta_N}) ^2m_2+(s_{\varphi_a}s_{\theta_{\nu}})^2m_3+(c_{\varphi_a}s_{\theta_N})^2m_4, \eqno(22)$$ $$f_{bb}v_R=(s_{\varphi_b}s_{\theta_{\nu}})^2m_1+(c_{\varphi_b} s_{\theta_N})^2m_2+(s_{\varphi_b}c_{\theta_{\nu}})^2m_3+ (c_{\varphi_b}c_{\theta_N})^2m_4,\eqno(23)$$ $$m_D^b=m_D^a(\varphi_a\rightarrow \varphi_b,\theta_{\nu,N} \rightarrow\theta_{\nu,N}+{\pi\over2}),\qquad M_D^{\prime}=M_D(\varphi_a\leftrightarrow\varphi_b).\eqno(24)$$ The change $L\rightarrow R$ in the left-hand sides of Eqs. (21)-(23) results in the replacement $\varphi_{a,b}\rightarrow\varphi_{a,b}+{\pi\over2}$ in their right-hand sides.
From the definitions of $f_{aa}v_R$ and $f_{aa}v_L$ follows the exact formula for the heavy-light neutrino mixing angles $\varphi_{a,b}$ [@Bom04] $$\sin2\varphi_a=2{\sqrt{f^2_{aa}v_Rv_L-[f_{aa}(v_R+v_L)- m_{\nu_1}c_{\theta_{\nu}}^2-m_{\nu_2}s_{\theta_{\nu}}^2] (m_{\nu_1}c_{\theta_{\nu}}^2+m_{\nu_2}s_{\theta_{\nu}}^2)} \over f_{aa}(v_R+v_L)-2(m_{\nu_1}c_{\theta_{\nu}}^2+ m_{\nu_2}s_{\theta_{\nu}}^2)},\eqno(25)$$ $$\sin2\varphi_b=\sin2\varphi_a\left(f_{aa}\rightarrow f_{bb}, \theta_{\nu}\rightarrow\theta_{\nu}+{\pi\over2}\right).\eqno(26)$$ It should be remarked that, according to the LRM, the heavy-light mixing angles belonging to different generations are practically equal in value $$\sin2\varphi_a\simeq\sin2\varphi_b\simeq2{\sqrt{v_Rv_L}\over v_R+v_L}\equiv\sin2\varphi.\eqno(27)$$ In the following calculations we also need the Lagrangians which describe the interaction of the charged gauge bosons both with the $S_1$ Higgs boson $$\sqrt{2}{\cal L}^n_W=g_L^2\Big\{k_+[W^*_{1\mu}(x)W_1^{\mu}(x)+ W^*_{2\mu}(x)W_2^{\mu}(x)]-{2k_1k_2\over k_+}[c_{2\xi} (W^*_{2\mu}(x)W_1^{\mu}(x)+W^*_{1\mu}(x)W_2^{\mu}(x))+$$ $$+s_{2\xi} (W^*_{2\mu}(x)W_{2\mu}(x)- W^*_{1\mu}(x)W_{1\mu}(x))]\Big\}S_1(x),\eqno(28)$$ and with leptons $${\cal{L}}_l^{CC}={g_L\over2\sqrt{2}}\sum_l\Big[\overline{l}(x)\gamma^{\mu} (1-\gamma_5) \nu_{lL}(x)W_{L\mu}(x)+\overline{l}(x)\gamma^{\mu} (1+\gamma_5)N_{lR}(x)W_{R\mu}(x)\Big],\eqno(29)$$ where $$W_1=W_L\cos\xi+W_R\sin\xi,\qquad W_2=-W_L\sin\xi+W_R\cos\xi.$$ The theory predicts the following connection between the heavy charged gauge boson mass $m_{W_2}$ ($m_{W_2}\simeq g_Lv_R$) and the mixing angle $\xi$ [@RN81] $$\tan2\xi\simeq{4g_Lg_Rk_1k_2\over g_R^2(2v_R^2+k_+^2)-g_L^2(2v_L^2+k_+^2)}.\eqno(30)$$ In Ref. [@OMB14] an investigation of the Mikheyev-Smirnov-Wolfenstein resonance with solar and reactor neutrinos has been carried out. The heavy neutrino sector was considered in the two flavor approximation.
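In the limit dictated by Eq. (10), i.e. $v_L\ll k_{1,2}\ll v_R$, Eqs. (27) and (30) simplify (taking $g_L=g_R$; this is a rough estimate rather than an exact result):

```latex
\sin\varphi\simeq\sqrt{\frac{v_L}{v_R}},\qquad
\tan2\xi\simeq\frac{2k_1k_2}{v_R^{2}},
```

so the bounds $\sin\xi\leq5\times10^{-4}$ and $\sin\varphi\leq2.3\times10^{-2}$ quoted in the abstract translate into $k_1k_2/v_R^2\lesssim5\times10^{-4}$ and $v_L/v_R\lesssim5\times10^{-4}$.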
It was demonstrated that only three versions of the heavy neutrino sector structure are possible: (i) the light-heavy neutrino mixing angles $\varphi_a$ and $\varphi_b$ are arbitrary but equal to each other whereas the heavy neutrino masses are quasi-degenerate (quasi-degenerate mass case — QDM case); (ii) the heavy neutrino masses are hierarchical ($m_{N_1}< m_{N_2}$) while the angles $\varphi_a$ and $\varphi_b$ are equal to zero (no mass degeneration case — NMD case); (iii) $\varphi_a=\varphi_b$ and the heavy-heavy neutrino mixing is maximal, $\theta_N=\pi/4$, and as a result the heavy neutrino masses are hierarchical (maximal heavy-heavy mixing case — MHHM case). It is logical to assume that the same pattern takes place in the three flavor approximation as well.

Decay of the Higgs boson into $\mu\tau$ pair
============================================

In this section we shall investigate the Higgs decay channel $$S_1 \to \mu^++\tau^-\eqno(31)$$ within the LRM. Thanks to the mixing in the neutrino sector, this decay can proceed in the third order of perturbation theory. The corresponding diagrams are pictured in Fig. 1. For the sake of simplicity we shall consider the individual contributions of each diagram to the total width of the decay (31). Let us start with the kind of diagrams one of which is shown in Fig. 1a. There are eight such diagrams, depending on which neutrinos are produced in the virtual state.
For example, when the $\nu_{\tau}\overline{N}_{\tau}$ pair appears in the virtual state, the corresponding matrix element takes the form $$M_1^{(a)}={g_L^2m_D^{\tau}\cos\alpha\sin2\theta_N\sin\xi\over 32k_+\sqrt{2}}\sqrt{{m_{\tau}m_{\mu}\over2m_{S_1}E_{\tau}E_{\mu}}}\ \overline{u}(p_1)\gamma_{\lambda}(1-\gamma_5)\Big\{\int_{\Omega} {\hat{p}-\hat{k}+m_{\nu_i}\over(p-k)^2-m_{\nu_i}^2} \times$$ $$\times(1+\gamma_5)\Bigg[{\hat{k}+m_{N_2}\over k^2-m_{N_2}^2} -{\hat{k}+m_{N_1}\over k^2-m_{N_1}^2}\Bigg]\gamma_{\sigma}(1+\gamma_5) {g^{\lambda\sigma}-(k-p_2)^{\lambda}(k-p_2)^{\sigma}/m_{W_1}^2\over (k-p_2)^2-m_{W_1}^2} d^4k\Big\}v(p_2),\eqno(32)$$ where $m_{N_j}$ $(j=1,2)$ is the mass of the heavy neutrino, and $p_1$ and $p_2$ are the momenta of the $\tau$ lepton and the muon, respectively. Taking into account Eqs. (19), (20) and (24), we find that the matrix element corresponding to all eight diagrams is given by the expression $$M^{(a)}=\sum_{i=1}^8M_i^{(a)}={g_L^2\cos\alpha\sin2\varphi\sin2 \theta_N\sin\xi\over 16k_+\sqrt{2}}\sqrt{{m_{\tau}m_{\mu}\over2m_{S_1}E_{\tau}E_{\mu}}}\ \overline{u}(p_1)\gamma_{\lambda}(1-\gamma_5)\Big\{\int_{\Omega} {\hat{p}-\hat{k}+m_{\nu_i}\over(p-k)^2-m_{\nu_i}^2} \times$$ $$\times(1+\gamma_5)\Bigg[{m_{N_2}(\hat{k}+m_{N_2})\over k^2-m_{N_2}^2} -{m_{N_1}(\hat{k}+m_{N_1})\over k^2-m_{N_1}^2}\Bigg]\gamma_{\sigma}(1+\gamma_5) {g^{\lambda\sigma}-(k-p_2)^{\lambda}(k-p_2)^{\sigma}/m_{W_1}^2\over (k-p_2)^2-m_{W_1}^2} d^4k\Big\}v(p_2).\eqno(33)$$ Substituting (33) into the partial decay width $$d\Gamma=(2\pi)^4\delta^{(4)}(p-p_1-p_2)|M^{(a)}|^2{d^3p_1d^3p_2 \over(2\pi)^8},$$ integrating the obtained expression over $p_1$, $p_2$, and using the procedure of dimensional regularization, we get $$\Gamma(S_1\to\overline{\nu}_L^*N_R^*W_1^*\to\mu^+\tau^-)= {\pi^3(g_L^2\cos\alpha\sin2\varphi\sin2\theta_N\sin\xi)^2 \over16m_{S_1}^3}\Big\{4m_{\tau}m_{\mu}(\Delta L)(\Delta R)+$$ $$+(m_{S_1}^2-m_{\tau}^2-m_{\mu}^2)[(\Delta L)^2+(\Delta
R)^2]\Big\}\sqrt{(m_{S_1}^2-m_{\mu}^2-m_{\tau}^2)^2-4m_{\mu}^2m_{\tau}^2} ,\eqno(34)$$ where $$\Delta L=L(m_{N_2})-L(m_{N_1}),\qquad L(m_{N_j})={m_{N_j}\over k_+}\Big[L_W^1(m_{N_j})+L_W^2(m_{N_j})+L_W^3(m_{N_j})\Big],$$ $$\Delta R=R(m_{N_2})-R(m_{N_1}),\qquad R(m_{N_j})={m_{N_j}\over k_+}\Big[R_g(m_{N_j})+R^1_W(m_{N_j})+R^2_W(m_{N_j})+$$ $$+R^3_W(m_{N_j})+ R^4_W(m_{N_j})\Big],$$ $$R_g(m_{N_j})=2\int_0^1xdx\int_0^1\Big[ {(pp_x)-p_x^2\over l_{xy}^j-p_x^2} -2\ln\Bigg|{l_{xy}^j\over l_{xy}^j-p_x^2}\Bigg|\Big]dy,\eqno(35)$$ $$L_W^1(m_{N_j})={2m_{\mu}m_{\tau}\over m_W^2}\int_0^1xdx\int_0^1{(m_{S_1}^2-m_{\tau}^2)(x-xy)-2(p_2p_x)+m_{\mu}^2x\over l_{xy}^j-p_x^2}dy,\eqno(36)$$ $$R_W^1(m_{N_j})=-{2m^2_{\mu}\over m_W^2}\int_0^1xdx\int_0^1{(m_{S_1}^2-m_{\tau}^2)x+m_{\tau}^2(x-xy)\over l_{xy}^j-p_x^2}dy,\eqno(37)$$ $$L_W^2(m_{N_j})=-{2m_{\mu}m_{\tau}\over m_W^2}\int_0^1xdx\int_0^1\Bigg[-3\ln\Bigg|{l_{xy}^j\over l_{xy}^j-p_x^2}\Bigg|+{(pp_x)(x-xy)-2p_x^2\over l_{xy}^j-p_x^2}\Bigg]dy,\eqno(38)$$ $$R_W^2(m_{N_j})={2\over m_W^2}\int_0^1xdx\int_0^1\Bigg[\ln\Bigg|{l_{xy}^j\over l_{xy}^j-p_x^2}\Bigg|(2m_{S_1}^2-2m_{\tau}^2+m_{\mu}^2)+{(pp_x)xm_{\mu}^2+ (m_{S_1}^2- m_{\tau}^2)p_x^2\over l_{xy}^j-p_x^2}\Bigg]dy,\eqno(39)$$ $$L_W^3(m_{N_j})=-{m_{\mu}m_{\tau}\over m_W^2}\int_0^1xdx\int_0^1\Bigg\{6xy\ln\Bigg|{l_{xy}^j\over l_{xy}^j-p_x^2}\Bigg|+{(2xy-4x)p_x^2\over l_{xy}^j-p_x^2}\Bigg\}dy,\eqno(40)$$ $$R_W^3(m_{N_j})=-{1\over m_W^2}\int_0^1xdx\int_0^1\Bigg\{\ln\Bigg|{l_{xy}^j\over l_{xy}^j-p_x^2}\Bigg|\Bigg[12(pp_x)+6m_{\mu}^2x-6m_{\tau}^2(x-xy)\Bigg]+$$ $$+{2p_x^2\over l_{xy}^j-p_x^2}\Bigg[2(pp_x)+m_{\mu}^2x-m_{\tau}^2(x-xy)\Bigg]\Bigg\}dy, \eqno(41)$$ $$R_W^4(m_{N_j})={1\over m_W^2}\int_0^1xdx\int_0^1\Bigg\{\ln\Bigg|{l_{xy}^j\over l_{xy}^j-p_x^2}\Bigg|(24p^2_x-12l_{xy}^j) +p_x^2\Bigg[12+{2p_x^2\over l_{xy}^j-p_x^2}\Bigg]\Bigg\}dy,\eqno(42)$$ $$l_{xy}^j=yx(m_{\mu}^2-m_{W_1}^2-m_{S_1}^2)+x(m_{S_1}^2+m_{N_j}^2)-m_{N_j}^2,$$ 
$$p_x^2=m_{\tau}^2x^2y^2+m_{S_1}^2x^2-(m_{S_1}^2+m_{\tau}^2-m_{\mu}^2)x^2y,\qquad (pp_x)=m_{S_1}^2x-{1\over2}(m_{S_1}^2-m_{\mu}^2+m_{\tau}^2)xy,$$ $$(p_2p_x)=m_{\mu}^2x+{1\over2}(m_{S_1}^2-m_{\mu}^2-m_{\tau}^2)(x-xy).$$ In the expression (33) we have neglected mixing in the light neutrino sector, because current experiments lead to the results [@CP16] $$\Delta(m_{21})^2=\mbox{few}\times10^{-5}\ \mbox{eV}^2,\qquad \Delta(m_{31})^2=\mbox{few}\times10^{-3}\ \mbox{eV}^2,\qquad\Delta(m_{32})^2=\mbox{few}\times10^{-3}\ \mbox{eV}^2.\eqno(43)$$ Now we proceed to the diagrams of Figs. 1b-1d. Calculations show that among them the greatest contributions come from the two diagrams pictured in Fig. 1d. The first diagram contains the $W^-_2W_2^+N_R$ particles in the virtual states. Its existence is caused by the heavy-heavy neutrino mixing (HHNM) and, as a result, the contribution from this diagram vanishes when $\theta_N=0$. The second diagram holds the $W_1^-W_1^+\nu_L$ particles in the virtual states and gives a nonzero contribution only when both the HHNM and the heavy-light neutrino mixing are present. It is convenient to consider the contributions of these diagrams to the decay width separately. In the case of the HHNM we obtain $$\Gamma(S_1\to W_2^{+*}W_2^{-*}N_R^*\to\mu^+\tau^-)= {\pi^3(g_L^4k_+\sin2\theta_N)^2\over128m_{S_1}^3}\Big\{4m_{\tau} m_{\mu}(\Delta L^{\prime})(\Delta R^{\prime})+(m_{S_1}^2-m_{\tau}^2-$$ $$-m_{\mu}^2)[(\Delta L^{\prime})^2+(\Delta R^{\prime})^2]\Big\}\sqrt{(m_{S_1}^2-m_{\mu}^2-m_{\tau}^2)^2- 4m_{\mu}^2m_{\tau}^2} ,\eqno(44)$$ where the expressions for $\Delta L^{\prime}$ and $\Delta R^{\prime}$ are given in the Appendix.
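The widths (34) and (44) share the two-body kinematic factor $\sqrt{(m_{S_1}^2-m_{\mu}^2-m_{\tau}^2)^2-4m_{\mu}^2m_{\tau}^2}$. The sketch below (our own, with assumed PDG-like mass values) shows that for $m_{S_1}\gg m_{\tau},m_{\mu}$ this factor is numerically indistinguishable from $m_{S_1}^2$:

```python
import math

def two_body_factor(m_S: float, m1: float, m2: float) -> float:
    """Kinematic factor sqrt((m_S^2 - m1^2 - m2^2)^2 - 4 m1^2 m2^2)
    appearing in the decay widths (34) and (44)."""
    a = m_S**2 - m1**2 - m2**2
    return math.sqrt(a * a - 4.0 * m1**2 * m2**2)

# Assumed (illustrative) masses in GeV:
m_S1, m_tau, m_mu = 125.09, 1.77686, 0.10566
k = two_body_factor(m_S1, m_tau, m_mu)
print(f"factor = {k:.6e} GeV^2, m_S1^2 = {m_S1**2:.6e} GeV^2")
```

The relative difference from $m_{S_1}^2$ is only a few parts in $10^{4}$, so the lepton masses matter in this factor essentially not at all.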
The expression for $\Gamma(S_1\to W_1^{+*}W_1^{-*}\nu_L^*\to\mu^+\tau^-)$ follows from (44) under the replacement $$m_{W_2}\to m_{W_1},\qquad(\sin2\theta_N)^2\to(\sin2\theta_N\sin^2\varphi)^2.\eqno(45)$$ In order to compare the obtained expressions it is necessary to have information concerning the values of the parameters $v_R$, $\xi$, $v_L$ and $\varphi$. Let us start with $v_R$ and $\xi$. The lower bound obtained by the ATLAS Collaboration on $m_{W_2}$ from dijet searches at $\sqrt{s}=13$ TeV is [@ATC17] $$m_{W_2}\geq3.7\ \mbox{TeV}\qquad\mbox{at}\ 95\% \mbox{C.L.}\qquad\mbox{with}\qquad L=37\ \mbox{fb}^{-1},\eqno(46)$$ which gives $v_R\simeq5.7$ TeV. Since current experimental limits on the mixing angle $\xi$ fall in the broad range between 0.12 and 0.0006 (see [@CP16] for a review), to determine $\xi$ one needs to use the relation (30) predicted by the LRM. Using $v_R=5.7$ TeV we get $\xi\simeq5\times10^{-4}$. In what follows we shall use this value for the mixing angle $\xi$. As far as the value of the heavy-light neutrino mixing angle $\varphi$ is concerned, many papers are devoted to the determination of experimental bounds on it (see, for example, [@PSB13] and references therein). One way to find such bounds is connected with searches for the neutrinoless double beta decay ($0\nu\beta\beta$) and disentangling the heavy neutrino effect. In Ref. [@SD16], for the case of $^{76}\mbox{Ge}$, the following expression was obtained: $$\Big|\sum_i{U_{ei}^2\over m_{N_i}}\Big|<{7.8\times10^{-8}\over m_p}\Bigg[{104\over {\cal{M}}_{0\nu}(\mbox{Ge})}\Bigg]\times\Bigg[{3\times10^{25}\ \mbox{yr}\over \tau^{0\nu}_{1/2}}\Bigg]^{1/2},\eqno(47)$$ where ${\cal{M}}_{0\nu}(\mbox{Ge})$ is the nuclear matrix element, $m_p$ is the proton mass and $\tau^{0\nu}_{1/2}$ is the half-life for $0\nu\beta\beta$. However, there is the point of view that the $0\nu\beta\beta$ does not give a reliable answer on the value of the heavy-light mixing.
Of course, the main uncertainties are connected with the determination of the nuclear matrix element. In its calculation one should assume definite values both for the axial coupling constant of the nucleon $g_A$ and for the phase space factor. For example, when $g_A=g_{\mbox{\footnotesize{nucleon}}}=1.269$ and $g_A=g_{\mbox{\footnotesize{phen.}}}=g_{\mbox {\footnotesize{nucleon}}}\times A^{-0.18}$ ($A$ is the atomic number), ${\cal{M}}_{0\nu}(\mbox{Ge})$ takes the values $104\pm29$ and $22\pm6$, respectively. Note that the $g_A=g_{\mbox{\footnotesize{phen.}}}$ parametrization as a function of $A$ comes directly from the comparison between the theoretical half-life for $2\nu\beta\beta$ and its observation in different nuclei [@JB13]. Using $\tau^{0\nu}_{1/2}(^{76}\mbox{Ge})=1.9\times10^{25}$ yr and setting $m_N=100$ GeV, with the help of Eq. (47) we may get $$(\sin\varphi)_{\mbox{\footnotesize{max}}}\simeq\left\{\begin{array} {ll}3.2\times10^{-3}\qquad\mbox{when} \qquad g_A=g_{\mbox{\footnotesize{nucleon}}} ,\\ 7\times10^{-3}\qquad\hspace{4mm}\mbox{when}\qquad g_A=g_{\mbox{\footnotesize{phen.}}} .\end{array}\right.$$ The other way is to look directly for the presence of the heavy-light neutrino mixing, which can manifest itself in several ways, for example, (i) via departures from unitarity of the neutrino mixing matrix, which could be investigated in neutrino oscillation experiments as well as in lepton flavor violation searches, and (ii) via its signatures in collider experiments. As an illustration, in Ref. [@CC13] the final states with same-sign dileptons plus two jets without missing energy ($l^{\pm}l^{\pm}jj$), arising from $pp$ collisions, were considered. This signal depends crucially on the heavy-light neutrino mixing. Analysis of the channel $$p+p\to N^*_ll^{\pm}\to l^{\pm}+l^{\pm}+2j\eqno(48)$$ led to the upper limit on $\sin\varphi$ equal to $3.32\times10^{-2}$ for $m_{W_R}=4$ TeV and $m_{N_l}=100$ GeV.
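The quoted values of $(\sin\varphi)_{\mbox{\footnotesize{max}}}$ can be reproduced from Eq. (47) under the assumption that a single heavy neutrino dominates the sum, so that the bound applies to $\sin^{2}\varphi/m_{N}$; the following sketch (our own, taking $m_p\simeq0.938$ GeV) performs the arithmetic:

```python
import math

M_PROTON = 0.938  # GeV (assumed value)

def sin_phi_max(matrix_element: float, half_life_yr: float, m_N: float) -> float:
    """Upper bound on sin(phi) from Eq. (47), assuming a single heavy
    neutrino dominates: sin^2(phi) / m_N < bound  =>  sin(phi) < sqrt(bound * m_N)."""
    bound = (7.8e-8 / M_PROTON) * (104.0 / matrix_element) \
            * math.sqrt(3.0e25 / half_life_yr)  # GeV^-1
    return math.sqrt(bound * m_N)

tau_half = 1.9e25  # yr, half-life used in the text for 76Ge
m_N = 100.0        # GeV
print(sin_phi_max(104.0, tau_half, m_N))  # g_A = g_nucleon case, ~3.2e-3
print(sin_phi_max(22.0, tau_half, m_N))   # g_A = g_phen case,    ~7.0e-3
```

Both outputs match the two cases displayed above.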
On the other hand, to evaluate $\varphi$ we can also use the relation (27). The precision measurements of the electroweak $\rho$ parameter [@TGR82] $$\rho={m_{Z_1}^2\cos^2\theta_W\over m_{W_1}^2}={1+4x\over1+2x}\eqno(49)$$ ($x=(v_L/k_+)^2$) set an upper bound on the VEV, $v_L\leq3$ GeV. Taking into account this value we obtain $$(\sin2\varphi)_{\mbox{\footnotesize{max}}}\simeq4.6\times10^{-2}.\eqno(50)$$ Setting $$\left.\begin{array}{ll} \theta_N={\pi\over4},\qquad m_{N_1}=140\ \mbox{GeV},\qquad m_{N_2}=250\ \mbox{GeV},\\ [2mm] \sin\alpha=0.44,\qquad \sin\xi=5\times10^{-4},\qquad \sin\varphi=2.3\times10^{-2}, \end{array}\right\}\eqno(51)$$ we get $${\Gamma(S_1\to \nu_L^*N_R^*W_1^*\to\mu^+\tau^-)\over \Gamma(S_1\to W_1^*W_1^*\nu_L^*\to\mu^+\tau^-)}\simeq10^5,\qquad {\Gamma(S_1\to \nu_L^*N_R^*W_1^*\to\mu^+\tau^-)\over \Gamma(S_1\to W_2^*W_2^*N_R^*\to\mu^+\tau^-)}\simeq10^4.\eqno(52)$$ So, the main contribution to the decay $S_1\to\mu^++\tau^-$ comes from the diagram of Fig. 1a. In order to obtain the width of the decay $$S_1\to\mu^-+\tau^+\eqno(53)$$ one should make in Eq. (34) the replacement $$m_{\tau}\leftrightarrow m_{\mu}.$$ Now we shall find out whether the obtained expressions for $\mbox{BR}(S_1\to\mu^+\tau^-)+\mbox{BR}(S_1\to\mu^-\tau^+)$ can reproduce the experimental bound on the branching ratio of the decay $S_1\to\mu\tau$. First and foremost we note that the width of this decay is nonzero only provided the heavy neutrino masses are hierarchical while the heavy-heavy and heavy-light neutrino mixing angles are nonzero.
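To see how mildly the bound $v_L\leq3$ GeV perturbs the $\rho$ parameter in Eq. (49), one can evaluate the deviation $\rho-1\simeq2x$ numerically; the sketch below assumes a SM-like bidoublet scale $k_+\simeq246$ GeV (our assumption, not fixed at this point in the text):

```python
def rho_param(v_L: float, k_plus: float) -> float:
    """Electroweak rho parameter of Eq. (49), with x = (v_L/k_+)^2."""
    x = (v_L / k_plus) ** 2
    return (1.0 + 4.0 * x) / (1.0 + 2.0 * x)

K_PLUS = 246.0  # GeV, assumed SM-like value of k_+
# v_L <= 3 GeV keeps rho within ~3e-4 of unity:
print(rho_param(3.0, K_PLUS) - 1.0)
```

So the triplet VEV allowed by the $\rho$-parameter data shifts $\rho$ from unity only at the $10^{-4}$ level.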
Using (51) we get $$\mbox{BR}(S_1\to\tau^-\mu^+)\simeq\Bigg\{\begin{array}{ll} 0.24\times10^{-4},\qquad\mbox{when}\qquad\sin\varphi=2.3 \times10^{-2},\\ [2mm] 0.45\times10^{-6},\qquad\mbox{when}\qquad\sin\varphi=3.2\times10^{-3} .\end{array}\eqno(54)$$ So we see that the obtained value is, at best, two orders of magnitude below the current experimental upper bound of $0.25\times10^{-2}$. Conclusion ========== Within the left-right symmetric model (LRM) the decays of the neutral Higgs boson $S_1$, $$S_1\to\mu^++\tau^-,\qquad S_1\to\mu^-+\tau^+,\eqno(55)$$ where $S_1$ is an analog of the standard model (SM) Higgs boson, have been considered. These decays involve lepton flavor violation (LFV) and, as a result, are forbidden in the SM. We have found the widths of the decays (55) in the third order of perturbation theory. The widths of these decays are nonzero only provided the heavy neutrino masses are hierarchical. It was shown that the main contribution to the decay width comes from the diagram with the light and heavy neutrinos in the virtual state. Therefore, investigation of these decays could give information about the neutrino sector structure of the model under study. The obtained decay widths critically depend on the angle $\xi$, which defines the mixing in the charged gauge boson sector, and on the heavy-light neutrino mixing angle $\varphi$. Within the LRM there exist formulae connecting the values of these angles with the VEVs $v_L$ and $v_R$. Using the results of current experiments on searches for the additional charged gauge boson $W_2$ and on measurements of the electroweak $\rho$ parameter gives $$\sin\xi\leq5\times10^{-4},\qquad\sin\varphi\leq2.3\times10^{-2}. \eqno(56)$$ However, even using the upper bounds on $\sin\xi$ and $\sin\varphi$, one cannot raise the branching ratio $\mbox{BR}(S_1\to\tau\mu)$ to the experimental upper bound $0.25\times10^{-2}$.
The theoretical expression for the branching ratio of the decay $S_1\to\tau\mu$ proves to be two orders of magnitude smaller than the upper experimental bound. On the other hand, it should be remembered that in our case $\mbox{BR}(S_1\to\tau\mu)_{exp}$ is nothing more than the experimental precision limit, rather than the measured value of the branching ratio. Therefore, experimental programs with higher precision than at present are required to get more detailed information about the decay $S_1\to\tau\mu$. At future hadron and lepton colliders much higher statistics of Higgs boson events will be achieved. For example, the future LHC runs with $\sqrt{s}=14$ TeV and a total integrated luminosity of first 300 $\mbox{fb}^{-1}$ and later 3000 $\mbox{fb}^{-1}$ expect the production of about 25 and 250 million Higgs boson events, respectively, to be compared with the 1 million Higgs boson events that the LHC produced after the first runs [@ATLAS1; @CMS1]. These large numbers provide an improvement of the sensitivity to $\mbox{BR}(S_1\to l_k\overline{l}_m)_{exp}$ of at least two orders of magnitude with respect to the present one. In much the same way, at the planned lepton colliders, such as the International Linear Collider with $\sqrt{s}=1$ TeV and $\sqrt{s}=2.5$ TeV [@BBF13] and the future electron-positron circular collider, formerly known as TLEP, with $\sqrt{s}=350$ GeV and 10 $\mbox{ab}^{-1}$ [@TLEP], the expectations are of about 1 and 2 million Higgs boson events, respectively, with much lower backgrounds owing to the cleaner environment, which will also allow for a large improvement in LFV Higgs boson decay searches with respect to the current sensitivities. Acknowledgments {#acknowledgments .unnumbered} =============== This work is partially supported by the grant of the Belorussian Ministry of Education No 20170217.
Appendix {#appendix .unnumbered} ======== The terms appearing in the width of the decay $$S_1\to W_2^{+*}W_2^{-*}N_R^*\to\mu^+\tau^-$$ are as follows: $$\Delta L^{\prime}=L^{\prime}(m_{N_2})-L^{\prime}(m_{N_1}),\qquad L^{\prime}(m_{N_j})=L_g^{\prime}(m_{N_j})+\sum_{i=1}^7L_W^{\prime i}(m_{N_j}),$$ $$\Delta R^{\prime}=R^{\prime}(m_{N_2})-R^{\prime}(m_{N_1}),\qquad R^{\prime}(m_{N_j})=R_g^{\prime}(m_{N_j})+\sum_{i=1}^7R_W^{\prime i}(m_{N_j}),$$ $$L^{\prime}_g(m_{N_j})=2m_{\mu}\int_0^1x(x-1)dx\int_0^1 {dy\over \beta_{xy}^j-q_x^2},\qquad R^{\prime}_g(m_{N_j})=-2m_{\tau}\int_0^1x^2dx\int_0^1 {ydy\over \beta_{xy}^j-q_x^2},\eqno(A.1)$$ $$R^{\prime1}_W(m_{N_j})=-{m_{\tau}\over m_W^2}\int_0^1x^2dx\int_0^1ydy\Big\{6\ln\Big| {l^j_{xy}\over l^j_{xy}-p_x^2}\Big|+2[p_x^2-2(p_xp_2)]{1\over l^j_{xy}-p_x^2}\Big\},\eqno(A.2)$$ $$L^{\prime1}_W(m_{N_j})={2m_{\mu}\over m_W^2}\int_0^1xdx\int_0^1dy\Big\{(3x+1)\ln\Big| {l^j_{xy}\over l^j_{xy}-p_x^2}\Big|+[p_x^2(x+1)-2(p_xp_2)x]{1\over l^j_{xy}-p_x^2}\Big\},\eqno(A.3)$$ $$R^{\prime2}_W(m_{N_j})={m_{\tau}\over m_W^2}\int_0^1xdx\int_0^1dy \Big\{m_{S_1}^2xy+[(m_{\mu}^2-m_{S_1}^2-m_{\tau}^2)xy+ (m_{\tau}^2-m_{S_1}^2-m_{\mu}^2)(x-1)]\Big\}{1\over l^j_{xy}-p_x^2},\eqno(A.4)$$ $$L^{\prime2}_W(m_{N_j})=-{m_{\mu}\over m_W^2}\int_0^1xdx\int_0^1dy\Big\{ m_{S_1}^2(x-1)+ [(m_{\mu}^2-m_{S_1}^2-m_{\tau}^2)xy+ (m_{\tau}^2-m_{S_1}^2-m_{\mu}^2)(x-1)]\Big\}{1\over l^j_{xy}-p_x^2},\eqno(A.5)$$ $$R^{\prime3}_W(m_{N_j})={m_{\tau}\over m_W^2}\int_0^1xdx\int_0^1dy\Big[4\ln\Big|{l_{xy}^j\over l_{xy}^j-p_x^2}\Big|+{2p_x^2-2(p_xp_2)-2(pp_2)xy\over l_{xy}^j-p_x^2}\Big],\eqno(A.6)$$ $$L^{\prime3}_W(m_{N_j})=-{m_{\mu}\over m_W^2}\int_0^1xdx\int_0^1dy\Big[4\ln\Big|{l_{xy}^j\over l_{xy}^j-p_x^2}\Big|+{2p_x^2-2(p_xp_2)-2(pp_2)x+2(p_xp)\over l_{xy}^j-p_x^2}\Big],\eqno(A.7)$$ $$R^{\prime4}_W(m_{N_j})={m_{\tau}\over4m_W^4}\int_0^1x^2dx\int_0^1ydy\big\{ [80p_x^2-48l_{xy}^j-32(p_xp_2)]\ln\Big|{l_{xy}^j\over l_{xy}^j-p_x^2}\Big|+$$ $$+{p_x^2\over
l_{xy}^j-p_x^2}[4p_x^2-8(p_xp_2)] \Big\},\eqno(A.8)$$ $$L^{\prime4}_W(m_{N_j})=-{m_{\mu}\over4m_W^4}\int_0^1xdx\int_0^1dy\Big\{ [(80p_x^2-48l_{xy}^j)x+32p_x^2-12l_{xy}^j-32(p_xp_2)x] \ln\Big|{l_{xy}^j\over l_{xy}^j-p_x^2}\Big|+$$ $$+12p_x^2+{4p_x^2\over l_{xy}^j-p_x^2}[p_x^2(x+1)-2(p_xp_2)x]\Big\},\eqno(A.9)$$ $$R^{\prime5}_W(m_{N_j})={m_{\tau}\over2m_W^4}\int_0^1xdx\int_0^1dy\big\{ [12l_{xy}^j-24p_x^2+6(m_{S_1}^2-m_{\tau}^2)xy+6m_{\mu}^2x] \ln\Big|{l_{xy}^j\over l_{xy}^j-p_x^2}\Big|+$$ $$+{2p_x^2\over l_{xy}^j-p_x^2}[(m_{S_1}^2-m_{\tau}^2)xy+m_{\mu}^2x-p_x^2] -12p_x^2\Big\},\eqno(A.10)$$ $$L^{\prime5}_W(m_{N_j})={m_{\mu}\over2m_W^4}\int_0^1xdx\int_0^1dy\Big\{ [24p_x^2-12l_{xy}^j+6m_{\tau}^2xy-6m_{\mu}^2x] \ln\Big|{l_{xy}^j\over l_{xy}^j-p_x^2}\Big|+$$ $$+12p_x^2+{2p_x^2\over l_{xy}^j-p_x^2}[p_x^2+m_{\tau}^2xy-m_{\mu}^2x]\Big\}.\eqno(A.11)$$ $$R^{\prime6}_W(m_{N_j})={m_{\tau}\over2m_W^4}\int_0^1xdx\int_0^1dy\Big\{ [3l_{xy}^j-4p_x^2-8(p_xp)xy+2(p_xp_2)+2(pp_2)xy] \ln\Big|{l_{xy}^j\over l_{xy}^j-p_x^2}\Big|+$$ $$+{2(p_xp)\over l_{xy}^j-p_x^2}[2(p_xp_2)-p_x^2]xy-3p_x^2\Big\},\eqno(A.12)$$ $$L^{\prime6}_W(m_{N_j})={m_{\mu}\over2m_W^4}\int_0^1xdx\int_0^1dy\Big\{ [4p_x^2-3l_{xy}^j+8(p_xp)x-2(p_xp_2)-2(pp_2)x+4(p_xp)] \ln\Big|{l_{xy}^j\over l_{xy}^j-p_x^2}\Big|+$$ $$+3p_x^2+{2(p_xp)\over l_{xy}^j-p_x^2}[p_x^2x+p_x^2-2(p_xp_2)x]\Big\},\eqno(A.13)$$ $$R^{\prime7}_W(m_{N_j})={m_{\tau}\over2m_W^4}\int_0^1xdx\int_0^1dy\Big\{ [6(p_xp)-m_{S_1}^2-m_{\mu}^2+m_{\tau}^2]\ln\Big|{l_{xy}^j\over l_{xy}^j-p_x^2}\Big|-$$ $$-{2(p_xp)\over l_{xy}^j-p_x^2}[(m_{S_1}^2-m_{\tau}^2)xy+m_{\mu}^2x]\Big\},\eqno(A.14)$$ $$L^{\prime7}_W(m_{N_j})=-{m_{\mu}\over2m_W^4}\int_0^1xdx\int_0^1dy\Big\{ [6(p_xp)+m_{\tau}^2-m_{\mu}^2]\ln\Big|{l_{xy}^j\over l_{xy}^j-p_x^2}\Big|+$$ $$+{2(p_xp)\over l_{xy}^j-p_x^2}[m_{\tau}^2xy-m_{\mu}^2x]\Big\},\eqno(A.15)$$ $$\beta_{xy}^j=yx(m_{S_1}^2-m_{W_2}^2-m_{\mu}^2+m_{N_j}^2)+x(m_{W_2}^2+ m_{\mu}^2-m_{N_j}^2)-m_{W_2}^2,\eqno(A.16)$$ 
$$q_x^2=x^2[m_{\tau}^2y^2+y(m_{S_1}^2-m_{\mu}^2-m_{\tau}^2)+m_{\mu}^2].\eqno(A.17)$$ [xxxx]{} V. Khachatryan [*[et al.]{}*]{} \[CMS Collaboration\], Phys. Lett. B [**[749]{}**]{}, 337 (2015) \[arXiv:1502.07400 \[hep-ex\]\]. G. Aad [*[et al.]{}*]{} \[ATLAS Collaboration\], arXiv:1508.03372 \[hep-ex\]. G. Aad [*[et al.]{}*]{} (ATLAS), Eur. Phys. J. C [**[77]{}**]{} (2017) 70, arXiv:1604.07730 \[hep-ex\]. V. Khachatryan [*[et al.]{}*]{} (CMS), Phys. Lett. B [**[763]{}**]{} (2016) 472, arXiv:1607.03561 \[hep-ex\]. CMS Collaboration, Search for lepton flavor violating decays of the Higgs boson to $\mu\tau$ and $e\tau$ in proton-proton collisions at $\sqrt{s}=13$ TeV, (2017), CMS-PAS-HIG-17-001. K. Cheung, W. Y. Keung and P. Y. Tseng, Phys. Rev. D [**[93]{}**]{} (2016) 015010. S. Baek and K. Nishiwaki, Phys. Rev. D [**[93]{}**]{} (2016) 015002. E. Arganda [*[et al.]{}*]{}, Phys. Rev. D [**[93]{}**]{} (2016) 055010. A. Abada [*[et al.]{}*]{}, JHEP [**[1602]{}**]{} (2016) 083. M. D. Campos, A. E. C. Hernandez, H. Pas, and E. Schumacher, arXiv:1408.1652. A. Crivellin, G. D’Ambrosio, and J. Heeck, arXiv:1501.00993. J. L. Diaz-Cruz and J. J. Toscano, Phys. Rev. D [**[62]{}**]{} (2000) 116005. A. Abada, M. E. Krauss, W. Porod, F. Staub, A. Vicente and C. Weiland, JHEP [**[1411]{}**]{} (2014) 048. J. C. Pati and A. Salam, Phys. Rev. D [**[10]{}**]{} (1974) 275. R. N. Mohapatra and J. C. Pati, Phys. Rev. D [**[11]{}**]{} (1975) 566. G. Senjanovic and R. N. Mohapatra, Phys. Rev. D [**[12]{}**]{} (1975) 1502. G. G. Boyarkina and O. M. Boyarkin, Physics of Atomic Nuclei [**[60]{}**]{} (1997) 601. O. M. Boyarkin, G. G. Boyarkina, and T. I. Bakanova, Phys. Rev. D [**[70]{}**]{} (2004) 113010. R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. [**[44]{}**]{} (1980) 912. R. N. Mohapatra and D. P. Sidhu, Phys. Rev. Lett. [**[38]{}**]{} (1977) 667. N. G. Deshpande, J. F. Gunion, B. Kayser, and F. Olness, Phys. Rev. D [**[44]{}**]{} (1991) 837. G. G. Boyarkina and O. M. Boyarkin, Eur. Phys. J. C [**[13]{}**]{} (2000) 99. J. F. Gunion [*[et al.]{}*]{}, Production and Detection at SSC of Higgs Bosons in Left-Right Symmetric Theories, in Proc. of the 1986 Summer Study on the Physics of the Superconducting Supercollider, June 23-July 11, 1986, http://inspirehep.net/record/20858. A. Falkowski [*[et al.]{}*]{}, High Energy Phys. [**[57]{}**]{} (2015) 2015, arXiv:1502.01361 \[hep-ph\]. S. I. Godunov [*[et al.]{}*]{}, Eur. Phys. J. C [**[76]{}**]{} (2016) 1, arXiv:1503.01618 \[hep-ph\]. O. M. Boyarkin and G. G. Boyarkina, Phys. Rev. D [**[90]{}**]{} (2014) 025001. C. Patrignani [*[et al.]{}*]{} (Particle Data Group), Chin. Phys. C [**[40]{}**]{} (2016) 100001. ATLAS Collaboration, Phys. Rev. D [**[96]{}**]{} (2017) 052004 \[arXiv:1703.09127\]. P. S. Bhupal Dev, Chang-Hun Lee, and R. N. Mohapatra, Phys. Rev. D [**[88]{}**]{} (2013) 093010. S. Dell’Oro [*[et al.]{}*]{}, Adv. High Energy Phys. [**[2016]{}**]{} (2016) 2162659, arXiv:1601.07512 \[hep-ph\]. J. Barea, J. Kotila, and F. Iachello, Phys. Rev. C [**[87]{}**]{} (2013) 014315. Chien-Yi Chen, P. S. Bhupal Dev, and R. N. Mohapatra, Phys. Rev. D [**[88]{}**]{} (2013) 033014. T. G. Rizzo, Phys. Rev. D [**[25]{}**]{} (1982) 1355. The ATLAS Collaboration, Projections for measurements of Higgs boson cross sections, branching ratios and coupling parameters with the ATLAS detector at a HL-LHC, Tech. Rep. ATL-PHYS-PUB-2013-014 (CERN, 2013). The CMS Collaboration (CMS), Proceedings, Community Summer Study 2013: Snowmass on the Mississippi (CSS2013) (2013), arXiv:1307.7135 \[hep-ex\]. H. Baer [*[et al.]{}*]{}, (2013), arXiv:1306.6352 \[hep-ph\]. M. Bicer [*[et al.]{}*]{}, JHEP [**[01]{}**]{} (2014) 164, arXiv:1308.6176 \[hep-ex\]. [^1]: E-mail: oboyarkin@tut.by
--- abstract: | A classical theorem of Kuratowski says that every Baire one function on a $G_{\delta}$ subspace of a Polish ($=$ separable completely metrizable) space $X$ can be extended to a Baire one function on $X$. Kechris and Louveau introduced a finer gradation of Baire one functions into small Baire classes. A Baire one function $f$ is assigned to a class in this hierarchy depending on its oscillation index $\beta(f)$. We prove a refinement of Kuratowski’s theorem: if $Y$ is a subspace of a metric space $X$ and $f$ is a real-valued function on $Y$ such that $\beta_{Y}(f)<\omega^{\alpha}$, $\alpha<\omega_{1}$, then $f$ has an extension $F$ onto $X$ so that $\beta_{X}(F)\leq\omega ^{\alpha}$. We also show that if $f$ is a continuous real-valued function on $Y,$ then $f$ has an extension $F$ onto $X$ so that $\beta_{X}\left( F\right) \leq3.$ An example is constructed to show that this result is optimal. address: - 'Department of Mathematics, National University of Singapore, 2 Science Drive 2, Singapore 117543.' - | Mathematics and Mathematics Education, National Institute of Education\ Nanyang Technological University, 1 Nanyang Walk, Singapore 637616. author: - 'Denny H. Leung' - 'Wee-Kee Tang' title: Extension of Functions with Small Oscillation --- Let $X$ be a topological space. A real-valued function on $X$ belongs to Baire class one if it is the pointwise limit of a sequence of continuous functions. If $X$ is a Polish ($=$ separable completely metrizable) space, then a classical theorem of Kuratowski [@K] states that every Baire one function on a $G_{\delta}$ subspace of $X$ can be extended to a Baire one function on $X$. In [@KL], Kechris and Louveau introduced a finer gradation of Baire one functions into small Baire classes using the oscillation index $\beta$, whose definition we now recall. Let $X$ be a topological space and let ${\mathcal{C}}$ denote the collection of all closed subsets of $X$.
A *derivation* on ${\mathcal{C}}$ is a map ${\mathcal{D}} : {\mathcal{C}} \to{\mathcal{C}}$ such that ${\mathcal{D}}(P) \subseteq P$ for all $P \in{\mathcal{C}}$. The oscillation index $\beta$ is associated with a family of derivations. Let $\varepsilon> 0$ and a function $f : X \to{\mathbb{R}}$ be given. For any $P \in{\mathcal{C}}$, let ${\mathcal{D}}^{1}(f, \varepsilon, P)$ be the set of all $x \in P$ such that for any neighborhood $U$ of $x$, there exist $x_{1}, x_{2} \in P \cap U$ such that $|f(x_{1}) - f(x_{2})| \geq\varepsilon$. The derivation ${\mathcal{D}}^{1}(f,\varepsilon,\cdot)$ may be iterated in the usual manner. For all $\alpha< \omega_{1}$, let $${\mathcal{D}}^{\alpha+1}(f, \varepsilon, P) = {\mathcal{D}}^{1}(f, \varepsilon, {\mathcal{D}}^{\alpha}(f, \varepsilon, P)).$$ If $\alpha$ is a countable limit ordinal, set $${\mathcal{D}}^{\alpha}(f, \varepsilon, P) = \cap_{\gamma<\alpha}{\mathcal{D}}^{\gamma}(f, \varepsilon, P).$$ If ${\mathcal{D}}^{\alpha}(f,\varepsilon,X) \neq\emptyset$ for all $\alpha< \omega_{1}$, let $\beta_{X}(f,\varepsilon) = \omega_{1}$. Otherwise, let $\beta_{X}(f,\varepsilon)$ be the smallest countable ordinal $\alpha$ such that ${\mathcal{D}}^{\alpha}(f,\varepsilon,X) = \emptyset$. The *oscillation index* of $f$ is $\beta_{X}(f) = \sup_{\varepsilon> 0}\beta_{X}(f,\varepsilon)$. The main result of §1 is that if $Y$ is a subspace of a metric space $X$ and $f:Y\rightarrow{\mathbb{R}}$ satisfies $\beta_{Y}(f)<\omega^{\alpha}$ for some $\alpha<\omega_{1}$, then $f$ can be extended to a function $F$ on $X$ with $\beta_{X}(F)\leq\omega^{\alpha}$. It follows readily from the Baire Characterization Theorem [@BBT 10.15] that a real-valued function on a Polish space is Baire one if and only if its oscillation index is countable. (See, e.g. [@KL].) Also, a theorem of Alexandroff says that a $G_{\delta}$ subspace of a Polish space is Polish [@BBT 10.18]. Hence our result refines Kuratowski’s theorem in terms of the oscillation index.
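To make the definition of the oscillation index concrete, consider a simple worked example (ours, for illustration only): the step function $f=\chi_{[0,\infty)}$ on $X=\mathbb{R}$. For $0<\varepsilon\leq1$ we have ${\mathcal{D}}^{1}(f,\varepsilon,\mathbb{R})=\{0\}$, since every neighborhood of $0$ contains points $x_{1}<0\leq x_{2}$ with $|f(x_{1})-f(x_{2})|=1\geq\varepsilon$, while $f$ is locally constant at every other point. Iterating once more, $${\mathcal{D}}^{2}(f,\varepsilon,\mathbb{R})={\mathcal{D}}^{1}(f,\varepsilon,\{0\})=\emptyset,$$ because $f$ restricted to the singleton $\{0\}$ has no oscillation. For $\varepsilon>1$ already ${\mathcal{D}}^{1}(f,\varepsilon,\mathbb{R})=\emptyset$. Hence $\beta_{\mathbb{R}}(f,\varepsilon)\leq2$ for every $\varepsilon>0$, and $\beta_{\mathbb{R}}(f)=2$.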
Let us mention that if $X$ is a metric space, then every real-valued function with countable oscillation index on a closed subspace of $X$ may be extended onto $X$ with preservation of the index [@LT Theorem 3.6]. (Note that the proof of [@LT Theorem 3.6] does not require the compactness of the ambient space.) More recent results on the extension of Baire one functions on general topological spaces are found in [@KS]. It is well known that if a function is continuous on a *closed* subspace of a metric space, then there exists a continuous extension to the whole space. §2 is devoted to the study of extensions of continuous functions from an *arbitrary* subspace of a metric space. It is shown that if $f$ is a continuous function on a subspace $Y$ of a metric space $X,$ then $f$ has an extension $F$ on $X$ with $\beta_{X}\left( F\right) \leq3.$ An example is given to show that the result is optimal. The criteria for continuous extension on dense subspaces have been studied by several authors. (See, e.g., [@B], [@H].) Functions of Small Oscillation ============================== Given a real-valued function defined on a set $S,$ let $\left\Vert f\right\Vert _{S}=\sup_{s\in S}\left\vert f\left( s\right) \right\vert .$ For any topological space $X,$ the support $\operatorname*{supp}f$ of a function $f:X\rightarrow\mathbb{R}$ is the closed set $\overline{\left\{ x\in X:f\left( x\right) \neq0\right\} }.$ A family $\left\{ \varphi_{\alpha }:\alpha\in\mathcal{A}\right\} $ of nonnegative real-valued functions on $X$ is called a *partition of unity on* $X$ if 1. The supports of the $\varphi_{\alpha}$’s form a locally finite closed covering of $X,$ 2.
$\sum_{\alpha\in\mathcal{A}}\varphi_{\alpha}\left( x\right) =1$ for all $x\in X.$ If $\left\{ U_{\beta}:\beta\in\mathcal{B}\right\} $ is an open covering of $X,$ we say that a partition of unity $\left\{ \varphi_{\beta }:\beta\in\mathcal{B}\right\} $ on $X$ is subordinated to $\left\{ U_{\beta }:\beta\in\mathcal{B}\right\} \,$if the support of each $\varphi_{\beta}$ lies in the corresponding $U_{\beta}.$ It is well known that if $X$ is paracompact (in particular, if $X$ is a metric space [@D]), then for each open covering $\left\{ U_{\beta}:\beta\in\mathcal{B}\right\} $ of $X$ there is a partition of unity on $X$ subordinated to $\left\{ U_{\beta}:\beta\in\mathcal{B}\right\} .$ (See, for example, [@D Theorem VIII 4.2].) \[P1\]Let $X$ be a metric space and $Y$ be a subspace of $X.$ Suppose that $f:Y\rightarrow\mathbb{R}$ is a function such that $\beta_{Y}\left( f,\varepsilon\right) \leq\alpha$ for some $\varepsilon>0,$ $\alpha<\omega _{1}.$ Then there exists a function $\tilde{f}:X\rightarrow\mathbb{R}$ such that $\beta_{X}\left( \tilde{f}\right) \leq\alpha+1,$ $\left\Vert \tilde {f}\right\Vert _{X}\leq\left\Vert f\right\Vert _{Y}$ and $\left\Vert f-\tilde{f}\right\Vert _{Y}\leq\varepsilon.$ In the following, denote $\mathcal{D}^{\beta}\left( f,\varepsilon,Y\right) $ by $Y^{\beta}$ for all $\beta<\omega_{1}.$ Proposition \[P1\] is proved by working on each of the pieces $Y^{\beta}\smallsetminus Y^{\beta+1}$, $\beta<\alpha$, and gluing together the results.
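As a concrete illustration of a partition of unity (our own toy example, not part of the argument), the piecewise-linear hat functions $\varphi_i(x)=\max(0,1-|x-i|)$, $i\in\mathbb{Z}$, form a partition of unity on $\mathbb{R}$ subordinated to the open covering $\left\{ (i-1,i+1):i\in\mathbb{Z}\right\} $; a quick numerical check:

```python
import math

def hat(i: int, x: float) -> float:
    """Hat function phi_i, supported in [i-1, i+1]."""
    return max(0.0, 1.0 - abs(x - i))

def unity_sum(x: float) -> float:
    """Sum of all hats at x; only the (at most two) hats whose
    supports contain x can contribute."""
    i = math.floor(x)
    return hat(i, x) + hat(i + 1, x)

for x in (0.0, 0.25, 1.5, -2.7):
    print(x, unity_sum(x))  # each sum equals 1.0
```

The local finiteness is visible in `unity_sum`: at each point only two of the $\varphi_i$ are nonzero.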
\[L3\]For all $0\leq\beta<\alpha,$ there exist an open set $Z_{\beta}$ in $X$ such that $Y^{\beta}\smallsetminus Y^{\beta+1}\subseteq Z_{\beta}\subseteq\left( Y^{\beta+1}\right) ^{c},$ and a continuous function $f_{\beta}:Z_{\beta}\rightarrow\mathbb{R}$ such that $\left\Vert f-f_{\beta }\right\Vert _{Y^{\beta}\smallsetminus Y^{\beta+1}}\leq\varepsilon$ and $\left\Vert f_{\beta}\right\Vert _{Z_{\beta}}\leq\left\Vert f\right\Vert _{Y}.$ If $0\leq\beta<\alpha$ and $y\in Y^{\beta}\smallsetminus Y^{\beta+1},$ there exists a set $U_{y}$ that is an open neighborhood of $y$ in $X$ so that $U_{y}$ is disjoint from $Y^{\beta+1}$ and that $f\left( U_{y}\cap Y^{\beta }\right) \subseteq\left( f\left( y\right) -\varepsilon,f\left( y\right) +\varepsilon\right) .$ Let $$Z_{\beta}={\displaystyle\bigcup\limits_{y\in Y^{\beta}\smallsetminus Y^{\beta+1}}} U_{y}.$$ Each $Z_{\beta}$ is open in $X$. Clearly, $Y^{\beta}\smallsetminus Y^{\beta +1}\subseteq Z_{\beta}\subseteq\left( Y^{\beta+1}\right) ^{c}$. There exists a partition of unity $\left( \varphi_{y}\right) _{y\in Y^{\beta }\smallsetminus Y^{\beta+1}}$ on $Z_{\beta}$ subordinated to the open covering $\mathcal{U}=\left\{ U_{y}:y\in Y^{\beta}\smallsetminus Y^{\beta+1}\right\} $. Define $f_{\beta}:Z_{\beta}\rightarrow\mathbb{R}$ by $$f_{\beta}\left( z\right) =\sum_{y\in Y^{\beta}\smallsetminus Y^{\beta+1}}f\left( y\right) \varphi_{y}\left( z\right) .$$ Then $f_{\beta}$ is well-defined, continuous and $\left\Vert f_{\beta }\right\Vert _{Z_{\beta}}\leq\left\Vert f\right\Vert _{Y}.$ If $x\in Y^{\beta }\smallsetminus Y^{\beta+1},$ set $V_{x}=\left\{ y\in Y^{\beta}\smallsetminus Y^{\beta+1}:\varphi_{y}\left( x\right) \neq0\right\} .$ Then $\sum_{y\in V_{x}}\varphi_{y}\left( x\right) =1$. If $y\in V_{x},$ then $x\in U_{y};$ thus $\left\vert f\left( x\right) -f\left( y\right) \right\vert <\varepsilon$. 
Hence$$\begin{aligned} \left\vert f\left( x\right) -f_{\beta}\left( x\right) \right\vert & =\left\vert \sum_{y\in V_{x}}\left( f\left( x\right) -f\left( y\right) \right) \varphi_{y}\left( x\right) \right\vert \\ & \leq\sum_{y\in V_{x}}\left\vert f\left( x\right) -f\left( y\right) \right\vert \varphi_{y}\left( x\right) \leq\varepsilon.\end{aligned}$$ Therefore, $\left\Vert f-f_{\beta}\right\Vert _{Y^{\beta}\smallsetminus Y^{\beta+1}}\leq\varepsilon$, as required. \[Proof of Proposition \[P1\]\]Define a function $\tilde{f}:X\rightarrow \mathbb{R}$ by$$\tilde{f}\left( x\right) =\left\{ \begin{array} [c]{ccc}f_{\beta}\left( x\right) & \text{if} & x\in Z_{\beta}\smallsetminus \cup_{\gamma<\beta}Z_{\gamma},\text{ }\beta<\alpha,\\ 0 & \text{if} & x\notin\cup_{\gamma<\alpha}Z_{\gamma}. \end{array} \right.$$ Clearly, $\left\Vert \tilde{f}\right\Vert _{X}=\sup_{\beta<\alpha}\left\Vert f_{\beta}\right\Vert _{Z_{\beta}}\leq\left\Vert f\right\Vert _{Y}.$ If $x\in Y,$ then $x\in Y^{\beta}\smallsetminus Y^{\beta+1}$ for some $\beta<\alpha.$ In particular, $x\in Z_{\beta}\smallsetminus\cup_{\gamma<\beta}Z_{\gamma}$. Hence $\left\vert f\left( x\right) -\tilde{f}\left( x\right) \right\vert =\left\vert f\left( x\right) -f_{\beta}\left( x\right) \right\vert \leq\left\Vert f-f_{\beta}\right\Vert _{Y^{\beta}\smallsetminus Y^{\beta+1}}\leq\varepsilon$ according to Lemma \[L3\]. 
Since this is true for all $x\in Y,$ $\left\Vert f-\tilde{f}\right\Vert _{Y}\leq\varepsilon.$ It remains to show that $\beta_{X}\left( \tilde{f}\right) \leq\alpha+1.$ To this end, we claim that $\mathcal{D}^{\beta}\left( \tilde{f},\delta,X\right) \cap Z_{\gamma}=\emptyset$ for all $\delta>0,$ $\gamma<\beta\leq\alpha.$ We prove the claim by induction$.$ Let $\delta>0.$ Since $f_{0}$ is continuous on the open set $Z_{0},$ we have $\mathcal{D}^{1}\left( \tilde{f},\delta ,X\right) \cap Z_{0}=\emptyset.$ Suppose that the claim holds for all ordinals less than $\beta.$ By the inductive hypothesis, $\mathcal{D}^{\xi }\left( \tilde{f},\delta,X\right) \cap\left( \cup_{\gamma<\xi}Z_{\gamma }\right) =\emptyset$ for all $\xi<\beta.$ Therefore, $$\mathcal{D}^{\xi}\left( \tilde{f},\delta,X\right) \cap\left[ Z_{\xi }\smallsetminus\left( \cup_{\gamma<\xi}Z_{\gamma}\right) \right] =\mathcal{D}^{\xi}\left( \tilde{f},\delta,X\right) \cap Z_{\xi}.$$ Now $\tilde{f}=f_{\xi}$ is continuous on this set, which is open in $\mathcal{D}^{\xi}\left( \tilde{f},\delta,X\right) .$ Therefore $\mathcal{D}^{\xi+1}\left( \tilde{f},\delta,X\right) \cap Z_{\xi}=\emptyset.$ Also since $\mathcal{D}^{\beta}\left( \tilde{f},\delta,X\right) \subseteq\mathcal{D}^{\gamma+1}\left( \tilde{f},\delta,X\right) $ for all $\gamma<\beta,$ $$\mathcal{D}^{\beta}\left( \tilde{f},\delta,X\right) \cap Z_{\gamma }=\emptyset$$ for all $\gamma<\beta.$ This proves the claim. It follows from the claim that$$\mathcal{D}^{\alpha}\left( \tilde{f},\delta,X\right) \subseteq\left( \cup_{\gamma<\alpha}Z_{\gamma}\right) ^{c}$$ for any $\delta>0.$ Since $\tilde{f}=0$ on the latter set, $\mathcal{D}^{\alpha+1}\left( \tilde{f},\delta,X\right) =\emptyset.$ In order to iterate Proposition \[P1\] to obtain an extension of $f,$ we need the following result. 
\[P2\]Let $Y$ be a subspace of a metric space $X.$ If $\beta_{Y}\left( f\right) <\omega^{\xi}$ and $\beta_{Y}\left( g\right) <\omega^{\xi},$ then $\beta_{Y}\left( f+g\right) <\omega^{\xi}.$ Proposition \[P2\] is proved by the method used in [@KL Lemma 5]. This requires a slight modification in the derivation $\mathcal{D}$ associated with the index $\beta.$ Given a real valued function $f:Y\rightarrow\mathbb{R}$, $\varepsilon>0,$ and a closed subset $P$ of $Y,$ define $G\left( f,\varepsilon,P\right) $ to be the set of all $y\in P$ such that for every neighborhood $U$ of $y,$ there exists $y^{\prime}\in P\cap U$ such that $\left\vert f\left( y\right) -f\left( y^{\prime}\right) \right\vert \geq\varepsilon.$ Let $$\mathcal{G}^{1}\left( f,\varepsilon,P\right) =\overline{G\left( f,\varepsilon,P\right) },$$ where the closure is taken in $Y.$ This defines a derivation $\mathcal{G}$ on the closed subsets of $Y$ which may be iterated in the usual manner. If $\alpha<\omega_{1},$ let$$\mathcal{G}^{\alpha+1}\left( f,\varepsilon,P\right) =\mathcal{G}^{1}\left( f,\varepsilon,\mathcal{G}^{\alpha}\left( f,\varepsilon,P\right) \right) .$$ If $\alpha<\omega_{1}$ is a limit ordinal, let$$\mathcal{G}^{\alpha}\left( f,\varepsilon,P\right) ={\displaystyle\bigcap\limits_{\alpha^{\prime}<\alpha}} \mathcal{G}^{\alpha^{\prime}}\left( f,\varepsilon,P\right) .$$ Clearly, the derivation $\mathcal{G}$ is closely related to $\mathcal{D}.$ The precise relationship between $\mathcal{D}$ and $\mathcal{G}$ is given in part (c) of the next lemma. 
\[L1\]If $f$ and $g$ are real-valued functions on $Y$, $\varepsilon>0,$ and $P,Q$ are closed subsets of $Y,$ then \(a) $\mathcal{G}^{1}\left( f+g,\varepsilon,P\right) \subseteq\mathcal{G}^{1}\left( f,\varepsilon/2,P\right) \cup\mathcal{G}^{1}\left( g,\varepsilon/2,P\right) ,$ \(b) $\mathcal{G}^{1}\left( f,\varepsilon,P\cup Q\right) \subseteq \mathcal{G}^{1}\left( f,\varepsilon,P\right) \cup\mathcal{G}^{1}\left( f,\varepsilon,Q\right) ,$ \(c) $\mathcal{D}^{1}\left( f,2\varepsilon,P\right) \subseteq\mathcal{G}^{1}\left( f,\varepsilon,P\right) \subseteq\mathcal{D}^{1}\left( f,\varepsilon,P\right) .$ We leave the simple proofs to the reader. Note that it follows from part (c) that for all $\alpha<\omega_{1}$, $$\mathcal{D}^{\alpha}\left( f,2\varepsilon,P\right) \subseteq\mathcal{G}^{\alpha}\left( f,\varepsilon,P\right) \subseteq\mathcal{D}^{\alpha}\left( f,\varepsilon,P\right) . \tag{d}\label{3'}$$ \[Proof of Proposition \[P2\]\]Parts (a) and (b) of Lemma \[L1\] correspond to (\*) and (\*\*) in [@KL Lemma 5] respectively. From the proof of that result, we obtain for all $n\in\mathbb{N}$ and $\zeta<\omega_{1},$$$\mathcal{G}^{\omega^{\zeta}\cdot2n}\left( f+g,\varepsilon,Y\right) \subseteq\mathcal{G}^{\omega^{\zeta}\cdot n}\left( f,\varepsilon/2,Y\right) \cup\mathcal{G}^{\omega^{\zeta}\cdot n}\left( g,\varepsilon/2,Y\right) . \label{E1}$$ Since $\beta_{Y}\left( f\right) <\omega^{\xi}$ and $\beta_{Y}\left( g\right) <\omega^{\xi},$ there exist $\zeta<\xi$ and $n\in\mathbb{N}$ such that $\beta_{Y}\left( f\right) <\omega^{\zeta}\cdot n$ and $\beta_{Y}\left( g\right) <\omega^{\zeta}\cdot n$. 
By (\[3’\]), for any $\varepsilon>0,$$$\mathcal{G}^{\omega^{\zeta}\cdot n}\left( f,\varepsilon/2,Y\right) =\mathcal{G}^{\omega^{\zeta}\cdot n}\left( g,\varepsilon/2,Y\right) =\emptyset.$$ By (\[3’\]) and (\[E1\]), $$\mathcal{D}^{\omega^{\zeta}\cdot2n}\left( f+g,2\varepsilon,Y\right) =\emptyset.$$ Since this is true for all $\varepsilon>0,$ we have $$\beta_{Y}\left( f+g\right) \leq\omega^{\zeta}\cdot2n<\omega^{\xi}.$$ Let $X$ be a metric space and let $Y$ be an arbitrary subspace of $X.$ If $f:Y\rightarrow\mathbb{R}$ satisfies $\beta_{Y}\left( f\right) <\omega^{\alpha}$ for some $\alpha<\omega_{1},$ then there exists $F:X\rightarrow\mathbb{R}$ with $\beta_{X}\left( F\right) \leq\omega ^{\alpha}$ and $F_{|Y}=f.$ Applying Proposition \[P1\] to $f:Y\rightarrow\mathbb{R}$ with $\varepsilon=\frac{1}{2},$ we obtain $g_{1}:X\rightarrow\mathbb{R}$, with $\left\| f-g_{1}\right\| _{Y}\leq\frac{1}{2},$ and $\beta_{X}\left( g_{1}\right) <\omega^{\alpha}$. By Proposition \[P2\], $\beta_{Y}\left( f-g_{1}\right) <\omega^{\alpha}.$ Now apply Proposition \[P1\] to $\left( f-g_{1}\right) _{|Y}$ with $\varepsilon=\frac{1}{2^{2}}.$ We obtain $g_{2}:X\rightarrow\mathbb{R}$, with $\left\| g_{2}\right\| _{X}\leq\left\| f-g_{1}\right\| _{Y}\leq\frac{1}{2},$ $\left\| f-g_{1}-g_{2}\right\| _{Y}\leq\frac{1}{2^{2}},$ and $\beta_{X}\left( g_{2}\right) <\omega^{\alpha }.$ Continuing in this way, we obtain a sequence $\left( g_{n}\right) $ of real-valued functions on $X$ such that for all $n\in\mathbb{N},$ \(i) $\left\Vert g_{n+1}\right\Vert _{X}\leq\left\Vert f-\sum_{i=1}^{n}g_{i}\right\Vert _{Y}\leq\frac{1}{2^{n}},$ \(ii) $\beta_{X}\left( g_{n}\right) <\omega^{\alpha}.$ Let $F=\sum_{n=1}^{\infty}g_{n}.$ Note that the series converges uniformly on $X$ and $F_{|Y}=f$ by (i).
Finally, suppose that $\varepsilon>0.$ Choose $N$ such that $\sum_{n=N+1}^{\infty}\left\Vert g_{n}\right\Vert _{X}<\varepsilon/4.$ Then$$\mathcal{D}^{\omega^{\alpha}}\left( F,\varepsilon,X\right) \subseteq \mathcal{D}^{\omega^{\alpha}}\left( \sum_{n=1}^{N}g_{n},\frac{\varepsilon}{2},X\right) =\emptyset,$$ since $\beta_{X}\left( \sum_{n=1}^{N}g_{n}\right) <\omega^{\alpha}$ by Proposition \[P2\]. Thus $\beta_{X}\left( F\right) \leq\omega^{\alpha}.$ \[[Kuratowski, [@K §31, VI]]{}\]Let $X$ be a Polish space and $Y$ be a $G_{\delta}$ subset of $X.$ Then every function of Baire class one on $Y$ can be extended to a function of Baire class one on $X.$

Extension of Continuous Functions
=================================

In this section, we study the extension of a continuous function on a subspace of a metric space to the whole space. To begin with, we consider the extension of a continuous function from a dense subspace. Consider a metric space $X$ with a dense subspace $Y$. Suppose that $f:Y\rightarrow\mathbb{R}$ is continuous on $Y$. An obvious way of extending $f$ to $X$ (if $f$ is locally bounded) is to consider the limit superior (or limit inferior) of $f,$ i.e., $$\tilde{f}\left( x\right) =\limsup_{y\rightarrow x,\text{ }y\in Y}f\left( y\right) =\inf_{\delta>0}\sup_{\substack{d\left( x,y\right) <\delta\\y\in Y}}f\left( y\right) .$$ The extended function, which is upper semi-continuous (lower semi-continuous in the case of limit inferior), is not optimal as far as the oscillation index is concerned. In fact, the $\limsup$ extension $\tilde{f}$ of the continuous function $f$ in Example \[Ex1\] below has oscillation index $\beta _{X}\left( \tilde{f}\right) =\omega.$ The following is an alternative algorithm that produces an extension with the smallest possible oscillation index.
If $A\subseteq\operatorname*{dom}f,$ $\operatorname*{osc}\left( f,A\right) $ is defined to be $\sup\left\{ \left\vert f\left( x\right) -f\left( x^{\prime}\right) \right\vert :x,x^{\prime}\in A\right\} .$ If $x$ belongs to the closure of $\operatorname*{dom}f,$ then define$$\operatorname*{osc}\left( f,x\right) =\lim_{\delta\rightarrow0}\operatorname*{osc}\left( f,B\left( x,\delta\right) \cap\operatorname*{dom}f\right) .$$ We first define layers of approximate extensions inductively. Let $S_{0}=X$ and $n_{0}(s)=0$ for all $s\in S_{0}$. Assume that $S_{k}$ has been chosen and $n_{k}(s)$ is defined for all $s\in S_{k}$. Let $\mathcal{U}_{k}=\{B(s,2^{-n_{k}(s)}):s\in S_{k}\}$ and $X_{k}=\cup\,\mathcal{U}_{k}$. Choose a partition of unity $(\varphi_{s}^{k})_{s\in S_{k}}$ on $X_{k}$ subordinated to $\mathcal{U}_{k}$. For each $s\in S_{k}$, choose $y_{s}^{k}\in Y\cap B(s,2^{-n_{k}(s)})$. Define $F_{k}:X_{k}\rightarrow\mathbb{R}$ by $F_{k}(x)=\sum_{s\in S_{k}}\varphi_{s}^{k}(x)f(y_{s}^{k})$. For each $x\in X_{k}$, let $S_{k}(x)=\{s\in S_{k}:x\in\operatorname{supp}\varphi_{s}^{k}\}$ and $l_{k}(x)=\max\{n_{k}(s):s\in S_{k}(x)\}+1$. Let $S_{k+1}$ be the set of all $x\in X_{k}$ such that $\operatorname{osc}(f,x)<2^{-l_{k}(x)}$. If $x\in S_{k+1}$, choose $n_{k+1}(x)\geq l_{k}(x)$ so that 1. $\operatorname{osc}(f,B(x,2^{1-n_{k+1}(x)})\cap Y)<2^{-l_{k}(x)}$, 2. \[(2)\]$B(x,2^{-n_{k+1}(x)})\subseteq B(s,2^{-n_{k}(s)})$ for all $s\in S_{k}(x)$, 3. \[(3)\]$B(x,2^{1-n_{k+1}(x)})\cap\operatorname{supp}\varphi_{s}^{k}=\emptyset$ if $s\in S_{k}\backslash S_{k}(x)$. The extension $F$ (defined after Lemma \[L2.2\]) is obtained by pasting the layers $\left( F_{k}\right) $ one after another. Observe that $Y\subseteq S_{k}\subseteq X_{k}$ for all $k$ and that $X_{k+1}\subseteq X_{k}$ because of condition (\[(2)\]). \[L2.1\]Suppose that $s\in S_{k}$, $t\in S_{m}$ for some $m>k$, and that $\operatorname{supp}\varphi_{s}^{k}\cap\operatorname{supp}\varphi_{t}^{m}\neq\emptyset$. 
Then $B(t,2^{-n_{m}(t)})\subseteq B(s,2^{-n_{k}(s)})$. Let $x\in\operatorname{supp}\varphi_{s}^{k}\cap\operatorname{supp}\varphi _{t}^{m}$. Then $x\in X_{j}$ for all $j\leq m$. In particular, if $m>j>k$, then there exists $s_{j}\in S_{j}$ such that $x\in\operatorname{supp}\varphi_{s_{j}}^{j}$. Thus it suffices to prove the lemma for $m=k+1$. Assume that $x\in\operatorname{supp}\varphi_{s}^{k}\cap\operatorname{supp}\varphi _{t}^{k+1}$. Note that $s\in S_{k}(t)$. For otherwise, $B(t,2^{1-n_{k+1}(t)})\cap\operatorname{supp}\varphi_{s}^{k}=\emptyset$ by (\[(3)\]). Since $x$ belongs to this set, we have reached a contradiction. It now follows from (\[(2)\]) that $B(t,2^{-n_{k+1}(t)})\subseteq B(s,2^{-n_{k}(s)})$. \[L2.2\]Suppose that $x\in X_{m}$ and $m>k\geq1$. Then there exists $s\in S_{k}(x)$ such that $|F_{k}(x)-F_{m}(x)|<2^{1-l_{k-1}(s)}$. Moreover, if $x\in Y$, then $|F_{k}(x)-f(x)|<2^{-l_{k-1}(s)}$ for some $s\in S_{k}(x)$. Denote by $S$ the set of all $t\in S_{m}$ such that $\varphi_{t}^{m}(x)>0$ and choose a point $y\in\cap_{t\in{S}}B(t,2^{-n_{m}(t)})\cap Y$. Let $s$ be an element where $l_{k-1}(s)$ attains its minimum over $S_{k}\left( x\right) $. By Lemma \[L2.1\], $B(t,2^{-n_{m}(t)})\subseteq B(s,2^{-n_{k}(s)})$ for all $t\in S$. Hence $|f(y)-f(y_{t}^{m})|<2^{-l_{k-1}(s)}$ for any $t\in S$. By Lemma \[L2.1\] again, $y\in B(t,2^{-n_{m}(t)})\subseteq B(s^{\prime },2^{-n_{k}(s^{\prime})})$ for all $t\in S$ and all $s^{\prime}\in S_{k}(x)$. Hence $$|f(y)-f(y_{s^{\prime}}^{k})|<2^{-l_{k-1}(s^{\prime})}\leq2^{-l_{k-1}(s)}$$ for all $s^{\prime}\in S_{k}(x)$. Therefore $$\begin{aligned} |F_{k}(x)-F_{m}(x)| & \leq|F_{k}(x)-f(y)|+|f(y)-F_{m}(x)|\\ & <2^{-l_{k-1}(s)}+2^{-l_{k-1}(s)}=2^{1-l_{k-1}(s)}.\end{aligned}$$ Moreover, if $x\in Y$, then the above applies for $y=x$. Hence $|F_{k}(x)-f(x)|<2^{-l_{k-1}(s)}$. Observe that $l_{k}(s)\geq k+1$ for all $s\in S_{k}$, $k\geq0$. 
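Each layer $F_{k}$ above is an ordinary partition-of-unity average of sampled values of $f$. As a toy illustration (our own hypothetical code, not from the paper, with one-dimensional hat functions on a grid standing in for the bumps $\varphi_{s}^{k}$), such an average reproduces $f$ exactly at the sample points and deviates elsewhere by at most the oscillation of $f$ over the overlapping balls:

```python
def hat(x, c, h):
    # tent function supported on (c - h, c + h); once normalized, these
    # bumps form a partition of unity subordinated to the balls B(c, h)
    return max(0.0, 1.0 - abs(x - c) / h)

def pou_average(x, centers, values, h):
    # analogue of F_k(x) = sum_s phi_s(x) * f(y_s)
    weights = [hat(x, c, h) for c in centers]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

centers = [0.0, 0.5, 1.0]
f = lambda t: t * t
values = [f(c) for c in centers]

# at a sample point the average reproduces f exactly; in between it
# deviates by at most the oscillation of f over the overlapping balls
assert pou_average(0.5, centers, values, 0.5) == f(0.5)
assert abs(pou_average(0.25, centers, values, 0.5) - f(0.25)) <= 0.25
```

This is only a one-dimensional caricature of the construction; in the proof, the grids are refined from layer to layer according to the local oscillation of $f$.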
It follows from Lemma \[L2.2\] that $(F_{k})$ converges pointwise on $\cap X_{k}$ and that the limit is $f$ on $Y$. Define $F:X\rightarrow\mathbb{R}$ by $$F(x)=\begin{cases} \lim_{k}F_{k}(x) & \text{ if $x\in\cap X_{k}$},\\ F_{k}(x) & \text{ if $x\in X_{k}\backslash X_{k+1}$, $k\geq0$.}\end{cases}$$ Then $F$ is an extension of $f$ to $X$. \[L2.3\]Suppose that $x\in X_{k}$ for some $k\geq1$. There exists an open neighborhood $U$ of $x$ and $s\in S_{k}(x)$ such that $|F(z)-F(x)|<2^{3-l_{k-1}(s)}$ for all $z\in U$. Let $s$ be an element where $l_{k-1}(s)$ attains its minimum over $S_{k}\left( x\right) $. Note that $F_{k}$ is continuous on the open set $X_{k}$. Hence there is an open neighborhood $U$ of $x$ such that 1. $\operatorname{osc}(F_{k},U)<2^{-l_{k-1}(s)}$, 2. $U\subseteq X_{k}$, 3. $U\cap\operatorname{supp}\varphi_{s}^{k}=\emptyset$ if $s\in S_{k}\backslash S_{k}(x)$. We claim that ${S}_{k}(z)\subseteq S_{k}(x)$ for all $z\in U$. Indeed, if $z\in U$ and $s\in{S}_{k}(z)\backslash S_{k}(x)$, then $z\in U\cap\operatorname{supp}\varphi_{s}^{k}=\emptyset$, a contradiction. Now if $z\in U$, then either $z\in X_{m}$ for all $m$ or $z\in X_{m}\backslash X_{m+1}$ for some $m\geq k$. In either case, $|F_{k}(z)-F(z)|\leq 2^{1-l_{k-1}(s)}$ by Lemma \[L2.2\]. Therefore, $$\begin{aligned} |F(z)-F(x)| & \leq|F(z)-F_{k}(z)|+|F_{k}(z)-F_{k}(x)|+|F_{k}(x)-F(x)|\\ & <2^{1-l_{k-1}(s)}+2^{-l_{k-1}(s)}+2^{1-l_{k-1}(s)}<2^{3-l_{k-1}(s)}.\end{aligned}$$  The next proposition is an immediate consequence of Lemma \[L2.3\]. Every $x\in\cap X_{k}$ is a point of continuity of $F$. If $x\in{\mathcal{D}}^{1}(F,2^{-m},X)\cap X_{k}$, $k\geq1$, then there exists $s\in S_{k}(x)$ such that $l_{k-1}(s)\leq m+3$. 
Since $x\in X_{k},$ by Lemma \[L2.3\], there exist an open neighborhood $U$ of $x$ and $s\in S_{k}\left( x\right) $ such that for all $z\in U,$ $\left\vert F\left( z\right) -F\left( x\right) \right\vert <2^{3-l_{k-1}\left( s\right) }.$ Hence $\left\vert F\left( z_{1}\right) -F\left( z_{2}\right) \right\vert <2^{4-l_{k-1}\left( s\right) }$ for all $z_{1},z_{2}\in U.$ As $x\in{\mathcal{D}}^{1}(F,2^{-m},X),$ $-m<4-l_{k-1}\left( s\right) .$ Thus $l_{k-1}\left( s\right) \leq m+3.$ \[P2.3\]Suppose that $x\in X_{k}\cap{\mathcal{D}}^{2}(F,2^{-m},X)$, $k\geq0$. Then $n_{k}(s)\leq m+2$ for all $s\in S_{k}$ such that $\varphi _{s}^{k}(x)>0$. Choose an open neighborhood $U_{1}$ of $x$ such that $U_{1}\subseteq \{\varphi_{s}^{k}>0\}$ for all $s\in S_{k}$ such that $\varphi_{s}^{k}(x)>0$. Note that, in particular, $U_{1}\subseteq X_{k}$. Then choose an open neighborhood $U_{2}$ of $x$ such that $\operatorname{osc}(F_{k},U_{2})<2^{-m}$. Let $U=U_{1}\cap U_{2}$. There exist $z_{1},z_{2}\in U\cap{\mathcal{D}}^{1}(F,2^{-m},X)$ such that $|F_{k}(z_{1})-F_{k}(z_{2})|\geq2^{-m}$. If $z_{1},z_{2}\notin X_{k+1}$, then $F(z_{i})=F_{k}(z_{i})$, $i=1,2$. This leads to a contradiction with the fact that $\operatorname{osc}(F_{k},U_{2})<2^{-m}$. Thus at least one of $z_{1},z_{2}$ belongs to $X_{k+1}$. Denote it by $z$. By the previous proposition, there exists $t\in S_{k+1}(z)$ such that $l_{k}(t)\leq m+3$. Let $s\in S_{k}$ be such that $\varphi_{s}^{k}(x)>0$. We claim that $s\in S_{k}(t)$. For otherwise, $B(t,2^{1-n_{k+1}(t)})\cap\operatorname{supp}\varphi_{s}^{k}=\emptyset$. This is absurd since the intersection contains the point $z$. It follows from that claim that $l_{k}(t)\geq n_{k}(s)+1$. Hence $n_{k}(s)\leq m+2$, as required. $\beta_{X}(F)\leq3$. Suppose that $x\in{\mathcal{D}}^{3}(F,2^{-m},X)$ for some $m$. Then there exists $k$ such that $x\in X_{k}\backslash X_{k+1}$. 
Choose a neighborhood $U$ of $x$ such that $U\subseteq B(x,2^{-m-2})\cap X_{k}$ and $\operatorname{osc}(F_{k},U)<2^{-m}$. There exist $z_{1},z_{2}\in U\cap{\mathcal{D}}^{2}(F,2^{-m},X)$ such that $|F(z_{1})-F(z_{2})|\geq2^{-m}$. If $z_{1},z_{2}\notin X_{k+1}$, then $F(z_{i})=F_{k}(z_{i})$, $i=1,2$. This contradicts the fact that $\operatorname{osc}(F_{k},U)<2^{-m}$. Hence there exists $z\in U\cap X_{k+1}\cap{\mathcal{D}}^{2}(F,2^{-m},X)$. By Proposition \[P2.3\], $n_{k+1}(t)\leq m+2$ for all $t\in S_{k+1}$ such that $\varphi_{t}^{k+1}(z)>0$. Fix such a $t$. Note that $$\begin{aligned} d(x,t) & \leq d(x,z)+d(z,t)\\ & <2^{-m-2}+2^{-n_{k+1}(t)}\leq2^{1-n_{k+1}(t)}.\end{aligned}$$ Thus $$\operatorname{osc}(f,x)\leq\operatorname{osc}(f,B(t,2^{1-n_{k+1}(t)})\cap Y)<2^{-l_{k}(t)}.$$ We claim that $S_{k}(x)\subseteq S_{k}(t)$. For otherwise, there exists $s\in S_{k}(x)\backslash S_{k}(t)$. Then $B(t,2^{1-n_{k+1}(t)})\cap \operatorname{supp}\varphi_{s}^{k}=\emptyset$. This is absurd since the intersection contains the point $x$. It follows from the claim that $l_{k}(t)\geq l_{k}(x)$. Hence $\operatorname{osc}(f,x)<2^{-l_{k}(x)}$. Then $x\in S_{k+1}\subseteq X_{k+1}$, a contradiction. We have shown that: \[T2\]Every continuous function $f$ on a dense subspace of a metric space $X$ can be extended to a function $F$ on $X$ with $\beta_{X}\left( F\right) \leq3.$ *([@LT Theorem 3.6])* Let $Y$ be a closed subspace of a metric space $X$ and let $f$ be a function on $Y$ with $\beta_{Y}\left( f\right) <\omega_{1}.$ Then there exists a function $F$ on $X$ such that $$F_{|Y}=f\text{ and }\beta_{X}\left( F\right) =\beta_{Y}\left( f\right) .$$ \[T3\]Let $X$ be a metric space and $Y$ be a subspace of $X.$ Every continuous function $f$ on $Y$ can be extended to a function $F$ on $X$ with $\beta_{X}\left( F\right) \leq3.$ The following example shows that Theorem \[T3\] is optimal. 
\[Ex1\]There is a subspace $Y\subseteq\left\{ 0,1\right\} ^{\omega}=X$ and a continuous real-valued function $f$ on $Y$ such that for any extension $F$ of $f$ to $X,$ $\beta_{X}\left( F\right) \geq3.$ For any integer $n$, denote $n\left( \operatorname{mod}2\right) $ by $\hat{n}.$ Let $$Y=\left\{ \left( \varepsilon_{i}\right) \in X:\varepsilon_{i}=0\text{ for infinitely many }i\text{'s}\right\} .$$ We denote elements in $X$ of the form$$\left( \underset{n_{1}}{\underbrace{1,1,...,1}},0,\underset{n_{2}}{\underbrace{1,1,...,1}},0,...,\underset{n_{k}}{\underbrace{1,1,...,1}},0,...\right)$$ by $$\left( 1^{n_{1}},0,1^{n_{2}},0,...,1^{n_{k}},0,...\right) .$$ Also write $\left( \varepsilon_{1},...,\varepsilon_{k},\varepsilon ,\varepsilon,...\right) $ as $\left( \varepsilon_{1},...,\varepsilon _{k},\varepsilon^{\omega}\right) ,$ $\varepsilon_{i},\varepsilon\in\left\{ 0,1\right\} .$ Define $g:Y\rightarrow X$ by $$g\left( 1^{n_{1}},0,1^{n_{2}},0,...,1^{n_{k}},0,...\right) =\left( \hat {n}_{1},\hat{n}_{2},...\right) ,\text{ }n_{1},n_{2},...\in\mathbb{N\cup }\left\{ 0\right\} ,$$ and let $h:X\rightarrow\mathbb{R}$ be the canonical embedding of $X$ into $\mathbb{R}$, $h\left( \varepsilon_{1},\varepsilon_{2},...\right) =\sum_{k=1}^{\infty}\frac{2\varepsilon_{k}}{3^{k}}.$ Then the function $f=h\circ g:Y\rightarrow\mathbb{R}$ is continuous. 
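The maps $g$ and $h$ are easy to evaluate on finite block patterns. The snippet below is an illustrative computation of ours (not part of the proof): `blocks` lists the run lengths $n_{1},n_{2},\ldots$ of a finite prefix, `g` takes their parities, and `h` is the canonical ternary embedding into the Cantor set.

```python
def g(blocks):
    # g(1^{n_1}, 0, 1^{n_2}, 0, ...) = (n_1 mod 2, n_2 mod 2, ...)
    return [n % 2 for n in blocks]

def h(eps):
    # canonical embedding of a 0-1 sequence (finite prefix) into the
    # Cantor set: sum over k of 2 * eps_k / 3^k
    return sum(2 * e / 3 ** (k + 1) for k, e in enumerate(eps))

# a pattern whose blocks all have even length is sent to 0, as used
# in the contradiction argument below:
assert h(g([2, 4, 6])) == 0.0
# an odd block in position k contributes 2 / 3^k:
assert abs(h(g([2, 3])) - 2 / 9) < 1e-12
```

In particular, changing the parity of the $k$-th block moves the value of $f=h\circ g$ by exactly $1/3^{k}$, which is the separation exploited in the proof that follows.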
Suppose that $F$ is an extension of $f$ to $X$ such that $\beta_{X}\left( F\right) \leq2.$ First observe that for any $n_{1},...,n_{k}\in\mathbb{N\cup}\left\{ 0\right\} $ and all $n\in\mathbb{N}$,$$\left\vert F\left( 1^{n_{1}},0,...,1^{n_{k}},0,1^{2n},0^{\omega}\right) -F\left( 1^{n_{1}},0,...,1^{n_{k}},0,1^{2n-1},0,1,0,1,...\right) \right\vert =\frac{1}{3^{k}}.$$ Hence $\left( 1^{n_{1}},0,...,1^{n_{k}},0,1^{\omega}\right) \in \mathcal{D}^{1}\left( F,\frac{1}{3^{k}},X\right) .$ Let $F\left( 1^{\omega }\right) =a.$ Either $\left\vert a\right\vert \geq\frac{1}{2}$ or $\left\vert 1-a\right\vert \geq\frac{1}{2}.$ We assume the former; the proof is similar for the latter case. Since $\left( 1^{\omega}\right) \notin\mathcal{D}^{2}\left( F,\frac{1}{3},X\right) ,$ there exists a neighborhood $U$ of $\left( 1^{\omega}\right) $ such that $\left\vert F\left( x\right) -a\right\vert <\frac{1}{3}$ if $x\in U\cap\mathcal{D}^{1}\left( F,\frac{1}{3},X\right) .$ In particular, there exists $n_{1}\in\mathbb{N}$ such that $$\left\vert F\left( 1^{2n_{1}},0,1^{\omega}\right) -a\right\vert =\frac{1}{3}-\delta\text{ for some }\delta>0.$$ Similarly, using the fact that $\left( 1^{2n_{1}},0,1^{\omega}\right) \notin\mathcal{D}^{2}\left( F,\frac{1}{3^{2}},X\right) ,$ we obtain $n_{2}\in\mathbb{N}$ such that $$\left\vert F\left( 1^{2n_{1}},0,1^{2n_{2}},0,1^{\omega}\right) -F\left( 1^{2n_{1}},0,1^{\omega}\right) \right\vert <\frac{1}{3^{2}}.$$ Continuing, we choose $n_{1},n_{2},...\in\mathbb{N}$ such that $$\left\vert F\left( 1^{2n_{1}},0,...,1^{2n_{k+1}},0,1^{\omega}\right) -F\left( 1^{2n_{1}},0,...,1^{2n_{k}},0,1^{\omega}\right) \right\vert <\frac{1}{3^{k+1}},\text{ }k\in\mathbb{N}.$$ In particular,$$\left\vert F\left( 1^{2n_{1}},0,...,1^{2n_{k}},0,1^{\omega}\right) -a\right\vert \leq\frac{1}{3}+\frac{1}{3^{2}}+...-\delta=\frac{1}{2}-\delta,\text{ }k\in\mathbb{N}.$$ Since $\left\vert a\right\vert \geq\frac{1}{2},$ we have $\left\vert F\left( 1^{2n_{1}},0,...,1^{2n_{k}},0,1^{\omega}\right) 
\right\vert \geq\delta$ for all $k\in\mathbb{N}.$ But $$F\left( 1^{2n_{1}},0,...,1^{2n_{k}},0,1^{2n},0^{\omega}\right) =f\left( 1^{2n_{1}},0,...,1^{2n_{k}},0,1^{2n},0^{\omega}\right) =0$$ for all $n\in\mathbb{N}.$ Hence $\left( 1^{2n_{1}},0,...,1^{2n_{k}},0,1^{\omega}\right) \in\mathcal{D}^{1}\left( F,\delta,X\right) $ for all $k\in\mathbb{N}.$ However, note that the sequence $\left( \left( 1^{2n_{1}},0,...,1^{2n_{k}},0,1^{\omega}\right) \right) _{k\in\mathbb{N}}$ converges to the point $\left( 1^{2n_{1}},0,...,1^{2n_{j}},0,1^{2n_{j+1}},0,...\right) $ and $$\begin{aligned} & \left\vert F\left( 1^{2n_{1}},0,...,1^{2n_{k}},0,1^{\omega}\right) -F\left( 1^{2n_{1}},0,...,1^{2n_{j}},0,1^{2n_{j+1}},0,...\right) \right\vert \\ & =\left\vert F\left( 1^{2n_{1}},0,...,1^{2n_{k}},0,1^{\omega}\right) -f\left( 1^{2n_{1}},0,...,1^{2n_{j}},0,1^{2n_{j+1}},0,...\right) \right\vert \\ & =\left\vert F\left( 1^{2n_{1}},0,...,1^{2n_{k}},0,1^{\omega}\right) \right\vert \geq\delta\end{aligned}$$ for all $k\in\mathbb{N}.$ Therefore, $\left( 1^{2n_{1}},0,...,1^{2n_{j}},0,1^{2n_{j+1}},0,...\right) \in\mathcal{D}^{2}\left( F,\delta,X\right) ,$ contrary to the assumption that $\beta_{X}\left( F\right) \leq2.$ Our final result presents a special class of spaces where the conclusion of Theorem \[T3\] may be improved upon. Recall that a topological space is $0$-*dimensional* if every open cover has a refinement that is an open cover and consists of pairwise disjoint sets. In particular, a 0-dimensional space has a basis consisting of clopen sets. Also note that a closed subspace of a 0-dimensional space is 0-dimensional.
If $A$ is a subset of a topological space $X,$ the derived set $A^{\prime}$ of $A$ is the set of all cluster points of $A.$ Let $A^{\left( 0\right) }=A.$ If $A^{\left( \alpha\right) }$ has been defined, let $A^{\left( \alpha+1\right) }=\left( A^{\left( \alpha\right) }\right) ^{\prime}.$ If $\beta$ is a limit ordinal, let $$A^{\left( \beta\right) }=\bigcap\limits_{\alpha<\beta}A^{\left( \alpha\right) }.$$ A topological space $X$ is said to be *scattered* if $X^{\left( \gamma\right) }=\emptyset$ for some ordinal $\gamma.$ Suppose that $Y$ is a subspace of a $0$-dimensional scattered metrizable space $X$. If $f:Y\rightarrow\mathbb{R}$ is a continuous function, then there is an extension $F:X\rightarrow\mathbb{R}$ of $f$ such that $\beta_{X}(F)\leq2$ and that $F$ is continuous at every point in $Y$. Since $X$ is scattered, $X^{(\gamma)}=\emptyset$ for some ordinal $\gamma.$ The proof is by induction on $\gamma.$ The case $\gamma=1$ is clear. Suppose that the theorem holds for all $\gamma<\gamma_{0}$. Let $X$ be a $0$-dimensional metrizable space with $X^{\left( \gamma_{0}\right) }=\emptyset$. For all $x\in X$, choose $\gamma_{x}<\gamma_{0}$ such that $x\in X^{(\gamma_{x})}\backslash X^{(\gamma_{x}+1)}$. Let $d$ be a compatible metric on $X$ that is bounded. Define $\delta_{x}=d(x,X^{(\gamma_{x})}\backslash \{x\})$. Then $\delta_{x}>0$. **Case 1.** $\gamma_{0}$ is a limit ordinal. Let $\mathcal{A}=\left\{ B\left( x,\delta_{x}\right) :x\in X\right\} .\,$Then $\mathcal{A}$ is an open cover of $X.$ Hence there is a refinement $\mathcal{B}$ that is an open cover of $X$ consisting of pairwise disjoint sets. 
In particular the elements of $\mathcal{B}$ are clopen subsets of $X.$ If $U\in\mathcal{B}$, then $U\subseteq B\left( x,\delta_{x}\right) $ for some $x\in X.$ Hence $U\cap X^{\left( \gamma_{x}+1\right) }=\emptyset.$ Since $\gamma_{x}+1<\gamma_{0},$ we may apply the inductive hypothesis to obtain an extension $f_{U}:U\rightarrow\mathbb{R}$ of $f_{|Y\cap U}$ such that $\beta_{U}\left( f_{U}\right) \leq2$ and that $f_{U}$ is continuous at every point in $Y\cap U.$ Take $F=\cup_{U\in\mathcal{B}}f_{U}.$ Since each $U$ is clopen in $X,$ $F$ is continuous at each point in $Y$ and $\mathcal{D}^{2}\left( F,\varepsilon,X\right) \cap U=\mathcal{D}^{2}\left( F,\varepsilon,U\right) =\emptyset$ for all $\varepsilon>0$ and $U\in \mathcal{B}.$ Therefore $\beta_{X}\left( F\right) \leq2.$ **Case 2.** $\gamma_{0}$ is a successor ordinal. For each $x\in X^{(\gamma_{0}-1)}$, choose a sequence $(W_{n,x})_{n=1}^{\infty}$ of clopen neighborhoods of $x$ such that $W_{n+1,x}\subseteq W_{n,x}\subseteq B(x,1/n)$ for all $n\in\mathbb{N}$ and $W_{1,x}\subseteq B(x,\delta_{x}/3)$. If $x$ and $x^{\prime}$ are distinct elements in $X^{(\gamma_{0}-1)}$, then $B(x,\delta_{x}/3)\cap B(x^{\prime},\delta _{x^{\prime}}/3)=\emptyset$. Hence $W_{1,x}\cap W_{1,x^{\prime}}=\emptyset$. Note that $W_{1}=\cup\{W_{1,x}:x\in X^{(\gamma_{0}-1)}\}$ is clopen in $X$. Indeed, clearly $W_{1}$ is open. If $z\in\overline{W_{1}}$, then choose $(x_{n})$ in $X^{(\gamma_{0}-1)}$ and a sequence $(z_{n})$ converging to $z$ such that $z_{n}\in W_{1,x_{n}}$ for all $n$. If $(x_{n})$ has a constant subsequence, then clearly $z\in W_{1}$. Otherwise, assume that all $x_{n}$’s are distinct. For all distinct $n,m\in\mathbb{N}$, $$\begin{aligned} \max(\delta_{x_{n}},\delta_{x_{m}}) & \leq d(x_{n},x_{m})\leq d(x_{n},z_{n})+d(z_{n},z_{m})+d(z_{m},x_{m})\\ & <\delta_{x_{n}}/3+d(z_{n},z_{m})+\delta_{x_{m}}/3.\end{aligned}$$ Hence $\max(\delta_{x_{n}},\delta_{x_{m}})/3<d(z_{n},z_{m})$. 
Since $(z_{n})$ converges, $\delta_{x_{n}}\rightarrow0$. Then $$d(x_{n},z)\leq d(x_{n},z_{n})+d(z_{n},z)<\delta_{x_{n}}/3+d(z_{n},z)\rightarrow0.$$ Since the $x_{n}$’s are distinct elements in $X^{(\gamma_{0}-1)}$, $z\in X^{(\gamma_{0})}$, contrary to the assumption. Hence $W_{1}$ is clopen in $X$. Now $(X\backslash W_{1})^{(\gamma_{0}-1)}=\emptyset$. Hence by the inductive hypothesis, there exists an extension $f_{0}:X\backslash W_{1}\rightarrow \mathbb{R}$ of $f_{|Y\cap(X\backslash W_{1})}$ such that $\beta_{X\backslash W_{1}}(f_{0})\leq2$ and that $f_{0}$ is continuous at every point in $Y\cap(X\backslash W_{1})$. For each $x\in X^{(\gamma_{0}-1)}$ and each $n\in\mathbb{N}$, set $U_{n,x}=W_{n,x}\backslash W_{n+1,x}$. Then $U_{n,x}^{(\gamma_{0}-1)}=\emptyset$. By the inductive hypothesis, there exists an extension $f_{n,x}:U_{n,x}\rightarrow\mathbb{R}$ of $f_{|Y\cap U_{n,x}}$ such that $\beta_{U_{n,x}}(f_{n,x})\leq2$ and that $f_{n,x}$ is continuous at every point in $Y\cap U_{n,x}$. Consider $y\in Y\cap U_{n,x}$. Choose a clopen neighborhood $V_{y}$ of $y$ such that $V_{y}\subseteq U_{n,x}\cap B(y,\min(\delta_{y}/3,1/n))$ and that $$|f_{n,x}(z)-f(y)|=|f_{n,x}(z)-f_{n,x}(y)|<\min(\delta_{y},1/n)$$ for all $z\in V_{y}$. Set $V=\cup\{V_{y}:y\in Y\cap(W_{1}\backslash X^{(\gamma_{0}-1)})\}$. If $x\in X^{(\gamma_{0}-1)}\backslash Y$, define $F(x)=0$. If $y\in X^{(\gamma_{0}-1)}\cap Y$, define $F(y)=f(y)$. Then define $$F(z)=\begin{cases} f_{0}(z) & \text{ if $z\notin W_{1}$}\\ f_{n,x}(z) & \text{ if $z\in V\cap U_{n,x}$ for some $x\in X^{(\gamma_{0}-1)}$ and $n\in\mathbb{N}$}\\ F(x) & \text{ if $z\in W_{1,x}\backslash V$ for some $x\in X^{(\gamma_{0}-1)}$}. \end{cases}$$ Since $X\backslash W_{1}$ and all $V_{y}$ are open in $X$, by the definition of $F$, we see that $\mathcal{D}^{2}\left( F,\varepsilon,X\right) \cap (V\cup(X\backslash W_{1}))=\emptyset$ and that $F$ is continuous at every point in $Y\cap(V\cup(X\backslash W_{1}))$. 
Suppose $y\in Y\cap X^{(\gamma _{0}-1)}$. If $y$ is not a point of continuity of $F$, then there exists a sequence $(z_{m})$ converging to $y$ such that $(F(z_{m}))$ does not converge to $f(y)$. Without loss of generality, we may assume that $z_{m}\in W_{1,y}$ for all $m$. Since $F=f(y)$ on $W_{1,y}\backslash V$, we may also assume that $z_{m}\in V$ for all $m$. Choose sequences $(n_{m})$ in $\mathbb{N}$ and $(y_{m})$ in $Y$ so that $y_{m}\in U_{n_{m},y}$ and $z_{m}\in V_{y_{m}}$ for all $m$. Since $(z_{m})$ converges to $y$, $(n_{m})$ diverges to $\infty$. Then $d(z_{m},y_{m})<\min(\delta_{y_{m}}/3,1/n_{m})\rightarrow0$ and $$|F(z_{m})-f(y_{m})|=|f_{n_{m},y}(z_{m})-f(y_{m})|<\min(\delta_{y_{m}},1/n_{m})\rightarrow0.$$ Hence $(y_{m})$ converges to $y$ and $(f(y_{m}))$ converges to $f(y)$ since $f$ is continuous on $Y$. But then $(F(z_{m}))$ converges to $f(y)$, a contradiction. Hence $F$ is continuous at every point in $Y\cap X^{(\gamma _{0}-1)}$ as well. Since $Y\subseteq V\cup(X\backslash W_{1})\cup X^{(\gamma_{0}-1)}$, $F$ is continuous at all points in $Y$. Finally, suppose that $z\in\mathcal{D}^{2}\left( F,\varepsilon,X\right) $ for some $\varepsilon>0$. By the above, $z\in W_{1}\backslash V$. Choose $x\in X^{(\gamma_{0}-1)}$ such that $z\in W_{1,x}$. Then $F(z)=F(x)$. Choose $(z_{m})$ in $W_{1,x}\cap\mathcal{D}^{1}\left( F,\varepsilon,X\right) $ such that $(z_{m})$ converges to $z$ and $|F(z_{m})-F(z)|\geq\varepsilon/2$ for all $m$. In particular, $z_{m}\in V$ for all $m$. Choose sequences $(n_{m})$ in $\mathbb{N}$ and $(y_{m})$ in $Y$ so that $y_{m}\in U_{n_{m},x}$ and $z_{m}\in V_{y_{m}}$ for all $m$. **Claim**. $\delta_{y_{m}}\rightarrow0$. If the claim fails, then by going to a subsequence if necessary, we may assume that there exists $\delta>0$ such that $\delta_{y_{m}}\geq\delta$ for all $m$, that $d(z_{m},z_{k})<\delta/6$ for all $m,k$ and that $(\gamma_{y_{m}})$ is a nondecreasing sequence of ordinals. 
If $m<k$ and $y_{m}$ and $y_{k}$ are distinct, then $y_{k}\in X^{(\gamma_{y_{m}})}\backslash\{y_{m}\}$. Thus $$\begin{aligned} \delta_{y_{m}} & \leq d(y_{m},y_{k})\\ & \leq d(y_{m},z_{m})+d(z_{m},z_{k})+d(z_{k},y_{k})\\ & <\delta_{y_{m}}/3+\delta/6+\delta_{y_{k}}/3\\ & \leq\delta_{y_{m}}/2+\delta_{y_{k}}/3.\end{aligned}$$ Hence $\delta_{y_{k}}>3\delta_{y_{m}}/2$ whenever $k>m$ and $y_{k}\neq y_{m}$. Since the metric $d$ is assumed to be bounded, the sequence $(y_{m})$ must have a constant subsequence. Without loss of generality, let $y_{m}=y_{0}$ for all $m$. Then $z_{m}\in V_{y_{0}}$ for all $m$ and hence $z\in V_{y_{0}}\subseteq V$, a contradiction. This proves the claim. Using the claim, choose $m$ large enough so that $\delta_{y_{m}}<\varepsilon/2$. Now $$|F(v)-f(y_{m})|<\delta_{y_{m}}<\varepsilon/2$$ for all $v\in V_{y_{m}}$. Since $V_{y_{m}}$ is a neighborhood of $z_{m}$, we see that $z_{m}\notin\mathcal{D}^{1}\left( F,\varepsilon,X\right) $, contrary to the choice of $z_{m}$.
--- abstract: 'Focusing on the optimization version of the random K-satisfiability problem, the MAX-K-SAT problem, we study the performance of the finite energy version of the Survey Propagation (SP) algorithm. We show that a simple (linear time) backtrack decimation strategy is sufficient to reach configurations well below the lower bound for the dynamic threshold energy and very close to the analytic prediction for the optimal ground states. A comparative numerical study on one of the most efficient local search procedures is also given.' author: - Demian Battaglia - Michal Kolář - Riccardo Zecchina title: Minimizing energy below the glass thresholds --- Introduction ============ The problem of finding variable configurations that minimize the energy of a system with competitive interactions has been and still is a central one in the study of complex systems, like spin glasses in physics, protein folding and regulatory networks in biology, and optimization problems in computer science (see *e.g.,* [@review_dynamics; @Proteins; @Clustering; @optiphysics; @newoptiphysics]). Among the tools for numerical investigations of complex systems at low temperatures the simulated annealing (SA) algorithm [@SA] and its variants have played a major role. Such stochastic processes satisfy detailed balance and their behavior can be compared with static and dynamical mean-field calculations. However, in problems in which the interest is focused on zero temperature ground states and where the proliferation of metastable states causes an exponential slowdown in the equilibration rate, the applicability of SA-like algorithms is limited to relatively small system sizes. In computer science the field of combinatorial optimization [@Papadimitriou] deals precisely with the general issue of classifying the computational difficulty (“hardness”) of minimization problems and of designing search algorithms. 
Similarly to statistical physics models, a generic combinatorial optimization problem is composed of many discrete variables—*e.g.,* Boolean variables, finite sets of colors or Ising spins—which interact through constraints, each typically involving a small number of variables, which in turn sum up to give the global cost-energy function. When the problem instances are extracted at random from nontrivial ensembles (that is, ensembles which contain many instances that are hard to solve), computer science meets physics in a very direct way: many of the models considered to be of basic interest for Computer Science are nothing but spin glasses defined over finite connectivity random graphs, the well studied diluted spin glasses [@TCS; @AI]. Their associated energy function counts the number of violated constraints in the original combinatorial problem (with ground states corresponding to optimal solutions). Understanding the onset of hardness of such systems is at the same time central to computer science and to $T=0$ statistical physics, with surprisingly concrete engineering applications. For instance, among the most effective error correcting codes and data compression methods are the Low Density Parity Check algorithms [@Spielman; @RichardsonUrbankeIntroduction; @Sourlas], which indeed implement the minimization of a spin-glass energy defined over a sparse random graph. In such problems, the choice of the graph ensemble is part of the design, a fact that makes spin glass theory directly applicable. The above example is, however, far from representing the general scenario for combinatorial problems: in many situations the probabilistic setup is not defined and, consequently, the notion of typical-case analysis does not play any obvious role. The study of the connection (if any) between worst-case and typical-case complexity is indeed an open problem, and very few general results are known [@Ajtai].
Still, a precise understanding of non-trivial random problem instances promises to be important in many respects. New algorithmic results as well as many mathematical issues have been put forward by the statistical physics studies, with examples ranging from phase transitions [@Nature; @Science] and out-of-equilibrium analysis of randomized algorithms [@Physical_Methods] to new classes of message-passing algorithms [@MZ; @BMZ]. The physical scenario for the diluted spin glass version of hard combinatorial problems predicts that local search dynamical processes satisfying detailed balance remain trapped in metastable states for exponentially long times. Depending on the models and on the details of the process—*e.g.,* cooling rate for SA—the long time dynamics is dominated by different types of metastable states at different temperatures [@Montanari:Ricci:1]. A common feature is that at zero temperature, and for simulation times which are sub-exponential in the size of the problem, there exists an extensive energy gap which separates the blocking states from true ground states. Such behavior can be tested on concrete random instances, which therefore constitute a computational benchmark for more general algorithms. Of particular interest for computer science are randomized search processes which do not properly satisfy detailed balance and which are known (numerically) to be more efficient than SA-like algorithms in the search for ground states [@Randomized_Algorithms]. Whether the physical blocking scenario applies also to these artificial processes, which are not necessarily characterized by a proper Boltzmann distribution at long times, is a difficult open problem. The available numerical results and some approximate analytical calculations [@Semerjian_Monasson; @Barthel_et_al] seem to support the existence of a thermodynamical gap, a fact which is of utmost importance for optimization.
For this reason (and independently from physics), during the last decade the problem of finding minimal energy configurations of random combinatorial problems similar to diluted spin-glasses—*e.g.,* random K-Satisfiability (K-SAT) or Graph Coloring—has become a very popular algorithmic benchmark in computer science [@AI]. In the last few years there has been great progress in the study of spin glasses over random graphs, which has shed new light on mean-field theory and has produced new algorithmic tools for the study of low energy states in large single problem instances. Quite surprisingly, problems which were considered to be algorithmically hard for local search algorithms, like for instance random K-SAT close to a phase boundary, turned out to be efficiently solved by the Survey Propagation (SP) algorithm arising from the replica symmetry broken (RSB) cavity approach to diluted spin glasses. This type of result calls for a rigorous theory of the functioning of SP (which is a non-local process) and brings new mathematical challenges of potential practical impact. The scope of this paper is to present a set of new numerical and algorithmic results which complete previously published results on the SP algorithm. We shall deal only with the random K-SAT problem, even though we expect the algorithmic outcomes to be applicable to other similar problems like, for instance, random graph coloring. The paper is organized as follows. In Sections \[KSAT\], \[SP\] we briefly review the known results on random K-SAT together with the SP equations over single instances at finite pseudo-temperature. In Sec. \[SP-Y\] we also discuss how the SP algorithm can be modified in order to study the region of parameters with finite ground state energy (UNSAT phase), where not all constraints of the underlying random K-SAT problem can be satisfied simultaneously. In Sec. \[results\] we then discuss the performance of SP as an optimization device.
At variance with the SAT phase, in which many clusters of zero energy configurations coexist and where SP works efficiently without need of correcting variable assignments, in the UNSAT phase an efficient implementation of SP requires the introduction of at least a very simple form of backtracking procedure (similar to the one proposed in [@Parisi_backtrack]). We show that a linear time backtrack is enough to reach energies compatible with those predicted by the analytic calculations in the infinite size limit in the relevant region of parameters. Moreover, we give numerical evidence for the existence of threshold states for one of the most efficient randomized local search algorithms for solving random K-SAT, namely WalkSat [@WalkSat]. We display a blocking mechanism at an energy level which is definitely above the lower bound for the dynamical threshold states predicted by the stability analysis of the 1-RSB cavity equations. Finally, for the deep UNSAT phase, we report on numerical data on convergence times for both WalkSat and SA which are in agreement with the predicted existence of full RSB (f-RSB) phases. Conclusions and perspectives are briefly discussed in Sec. \[Conclusions\].

Brief review of random K-SAT {#KSAT}
============================

K-SAT is an NP-complete problem [@Garey_Johnson] (for $K>2$) which lies at the root of combinatorial optimization. It is very easy to state: Given $N$ Boolean variables and $M$ constraints taking the form of clauses, [*K-SAT consists in asking whether there exists an assignment of the variables that satisfies all constraints*]{}. Each clause contains exactly $K$ variables, each either negated or not, and its truth value is given by the OR function. Since the same variable may appear negated in some clauses and unnegated in others, competitive interactions among clauses may set in.
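To make the clause convention concrete, here is a minimal sketch (the helper names and the signed 1-based literal encoding are our own choices, a common convention in SAT solvers, not notation from the text):

```python
def clause_satisfied(clause, assignment):
    """A clause is the OR of K literals: +i stands for x_i, -i for NOT x_i."""
    return any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)

def num_violated(formula, assignment):
    """Count unsatisfied clauses; proportional to the energy introduced below."""
    return sum(not clause_satisfied(c, assignment) for c in formula)

# (x1 OR not x2 OR x3) AND (not x1 OR x2 OR x4)
formula = [(1, -2, 3), (-1, 2, 4)]
print(num_violated(formula, [True, True, False, False]))   # 0: both clauses satisfied
print(num_violated(formula, [False, True, False, False]))  # 1: first clause violated
```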
As mentioned in the introduction, in the last decade there has been a lot of interest in the random version of K-SAT: for each clause the variables are chosen uniformly at random (with no repetitions) and negated with probability $1/2$. In the large $N$ limit, random K-SAT displays a very interesting threshold phenomenon. Taking as control parameter the ratio of the number of clauses to the number of variables, $\alpha=M/N$, there exists a phase transition at a finite value $\alpha_c(K)$ of this ratio. For $\alpha<\alpha_c(K)$ the generic problem is satisfiable (SAT); for $\alpha>\alpha_c(K)$ the generic problem is not satisfiable (UNSAT). This phase transition has been observed numerically [@kirkpatrick:selman:94] and it is of special interest since extensive experiments [@AI] have shown that the instances which are algorithmically hard to solve are exactly those where $\alpha$ is close to $\alpha_c$. Therefore, the study of the SAT/UNSAT phase transition is considered of crucial relevance for understanding the onset of computational complexity in typical instances [@TCS]. A lot of work has been focused on the study of both the decision problem (*i.e.,* determining with a YES/NO answer whether a satisfying assignment exists) and the optimization version, in which one is interested in minimizing the number of violated clauses when the problem is UNSAT (the random MAX-K-SAT problem). On the analytical side, there exists a proof that the threshold phenomenon occurs at large $N$ [@Friedgut], although it has not yet been rigorously established that the corresponding $\alpha_c$ has a limit when $N \to \infty$. Upper bounds $\alpha_{\mathrm{UB}}(K)$ on $\alpha_c$ have been found using first moment methods [@dubois_kirousis] and variational interpolation methods [@guerra_franz:leone:03], and lower bounds $\alpha_{\mathrm{LB}}(K)$ have been found using either explicit analysis of some algorithms [@lowbound1] or second moment methods [@achlioptas:moore:02].
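The threshold phenomenon can be glimpsed even at very small sizes by brute force. The sketch below (sizes, trial counts and the seed are arbitrary illustrative choices) estimates the fraction of satisfiable random 3-SAT instances at two values of $\alpha$ on either side of $\alpha_c \simeq 4.267$; at such tiny $N$ the step is strongly smeared, but the trend is already visible:

```python
import itertools
import random

def random_3sat(n, m, rng):
    """Random 3-SAT: each clause picks 3 distinct variables uniformly at
    random, each negated independently with probability 1/2."""
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n + 1), 3)) for _ in range(m)]

def is_sat(formula, n):
    """Exhaustive check over all 2^n assignments (viable only for tiny n)."""
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in formula):
            return True
    return False

def frac_sat(alpha, n=10, trials=20, seed=0):
    rng = random.Random(seed)
    return sum(is_sat(random_3sat(n, int(alpha * n), rng), n)
               for _ in range(trials)) / trials

# deep in the SAT phase almost every instance is satisfiable,
# deep in the UNSAT phase almost none is
print(frac_sat(2.0), frac_sat(6.0))
```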
For random MAX-K-SAT theoretical bounds are also known [@Achlioptas_Naor_Peres; @max-3-sat-7/8], as well as rigorous results on the running times of random walk and approximation algorithms [@Schoning; @Ben-Sasson; @Parkes]. Recently, the cavity method of statistical physics has been applied to K-SAT [@Science; @MZ; @MMZ] and the thresholds have been computed with high accuracy. A lot of work is going on in order to provide a rigorous foundation for the cavity results, and we refer to [@MMZ] for a more complete discussion of these aspects. In what follows we shall concentrate on the $K=3$ case and we will be interested in analyzing the behavior of different algorithms in the region of parameters in which the random formulas are expected to be hard to solve or to minimize. The energy function which is used in the zero temperature statistical mechanics studies is taken proportional to the number of violated clauses in a given problem, so that a zero energy ground state corresponds to a satisfying assignment. The energy of a single clause is positive (equal to 2 for technical reasons) if the clause is violated and zero if it is satisfied. The overall energy is obtained by summing over clauses and reads $$\label{hamiltonian} E=2\sum_a\prod_{i=1}^{3}\frac{1+J_{a,i} s_i^a}{2} \label{energy}$$ where $s^a_i$ is the $i$-th binary (spin) variable appearing in clause $a$ and the coupling $J_{a, i}$ takes the value 1 (resp. -1) if the corresponding variable appears not negated (resp. negated) in clause $a$. For instance the clause $(x_1 \vee \bar x_2 \vee x_3)$ has an energy $\frac{1}{4} (1+s_1) (1-s_2) (1+s_3)$, where the Boolean variables $x_i=\{0,1\}$ are connected to the spin variables by the transformation $s_i=(-1)^{x_i}$. The phase diagram of the random 3-SAT problem as arising from the statistical physics studies can be very briefly summarized as follows. For $\alpha < 3.86$, the $T=0$ phase is at zero energy (the problem is SAT).
The entropy density is finite and the phase is Replica Symmetric (RS) and unfrozen. Roughly speaking, this means that there exists one giant cluster of nearby solutions and that the effective fields vanish linearly with the temperature. For $3.86 < \alpha < 3.92$, there is a full RSB phase. The solution space breaks into clusters and the order parameter becomes a nested probability measure in the space of probability distributions describing cluster-to-cluster fluctuations. The phase is still SAT and unfrozen [@Biroli_et_al; @Parisi_SAT04]. At $\alpha \simeq 3.92$ there is a discontinuous transition toward a clustered frozen phase [@Science; @MZ]. Up to $\alpha =4.15$ the phase is f-RSB, while above this value the 1-RSB solution becomes stable [@Montanari:Parisi:Ricci]. The [*complexity*]{}, that is the normalized logarithm of the number of clusters, is finite in this region. At finite energy there exist even more metastable states which act as dynamical traps. The 1-RSB metastable states become unstable at some energy density $E_G(\alpha)$ which constitutes a lower bound to the true dynamical *threshold energy* (see Sec. \[SP\] for more details). At $\alpha=4.2667$ the ground state energy becomes positive and therefore the typical random 3-SAT problem becomes UNSAT. At the same point the complexity vanishes. The phase remains 1-RSB up to $\alpha=4.39$, where an instability toward a zero complexity full RSB phase appears. In the region $4.15 < \alpha < 4.39$, the 1-RSB ansatz for the ground state is stable against higher orders of RSB, but the 1-RSB predictions become unstable for energies larger than the *Gardner energy*. The instability line intersects the 1-RSB ground state estimate at the two extremes of the interval, inside which it provides a lower bound to the true threshold energy (see Ref. [@Montanari:Parisi:Ricci] for a comprehensive discussion).
Further (preliminary) f-RSB corrections suggest that the true threshold states have energies very close to the lower bound, and hence the interval $A=[4.15,4.39]$ should be taken as the region from which to draw really hard benchmarks for algorithm testing.

![The solid line is an estimate of the ground state energy, while the dashed curve represents the Gardner energy, providing a lower bound for the threshold states (numerical data adapted from ref. [@Montanari:Parisi:Ricci]). In the inset we show that the difference between the Gardner and the ground state energy is strictly positive in the small 1-RSB stable region around the SAT/UNSAT transition critical point (indicated by the vertical line): it is expected that it is hard for heuristics based on local search to find assignments inside the closed area delimited by the energy gap curve.[]{data-label="small_gap_fig"}](small_gap_fig.ps){width="\columnwidth"}

As displayed in Fig. \[small\_gap\_fig\], the actual value of the energy gap is very small close to the end points of $A$. In order to avoid systematic finite size errors, numerical simulations should be done close to the SAT/UNSAT point, *i.e.,* far from the end points of $A$. Consistently with the fact that finite size fluctuations are relatively big ($O(\sqrt{N})$), problem sizes of at least $N=10^5$ are necessary, even close to $\alpha_c$, in order to observe a matching with the analytic predictions.

Brief review of SP equations {#SP}
============================

The 1-RSB cavity equations which have been used to study the typical phase diagram of random K-SAT become the SP equations once reformulated to run over a single problem instance [@MZ]. This is done by avoiding the averaging process with respect to the underlying random graphs.
Thanks to the self-averaging property of the random K-SAT free energy [@Self-averaging], the SP equations can be used both to re-derive the phase diagram of the problem and, more importantly, to access detailed information of algorithmic relevance about a given problem instance. In particular, the SP equations provide information about the statistical behavior of the single variables in the stable and metastable states of given energy density. The 1-RSB cavity equations are iterative equations (averaged over the disorder) for the probability distribution functions (pdf's) of effective fields that describe their cluster-to-cluster fluctuations. The order parameter is a probability measure in the space of pdf's; it tells the probability that a randomly chosen variable has a certain associated pdf in states at a given energy density. In SP, and more generally in the cavity approach, one assumes knowledge of the pdf's of the fields of all variables in the temporary absence of one of them. Then one writes the induced pdf of the local field acting on this “cavity” variable in the absence of some other variable interacting with it (*i.e.,* the so-called Bethe lattice approximation for the problem). These relations define a closed set of equations for the pdf's that can be solved iteratively. The equations are exact if the cavity variables acting as inputs are uncorrelated, *e.g.,* over trees, and are conjectured to be an asymptotically exact approximation over locally tree-like structures [@MZ] where the typical distance between randomly chosen variables diverges in the large $N$ limit (as $\ln N$ for diluted random graphs). The full list of the cavity fields over the entire underlying graph, in the SP implementation, constitutes the order parameter. From the cavity fields one may determine the total field acting on each variable in all metastable states of given energy density, and this information can be used for algorithmic purposes.
A clear formalism for the single sample analysis is given by the factor graph representation [@factor_graph] of K-SAT: variables are represented by $N$ circular “variable nodes” labeled with letters $i,j,k,\ldots$, whereas the K-body interactions are represented by $M$ square “function nodes” (carrying the clause energies) labeled by $a,b,c,\ldots$ (see Fig. \[factorgraph\]).

![Factor graph representation. Variables are represented by circles, and are connected by function nodes, represented by squares; if a variable appears negated in a clause, the connecting line is dashed.[]{data-label="factorgraph"}](factor_graph.eps){width="0.62\columnwidth"}

For random 3-SAT, function nodes have connectivity $3$, variable nodes have a Poisson connectivity of average $3 \alpha$, and the overall graph is bipartite. The total energy is nothing but the sum of the energies of all function nodes as given by Eq. (\[energy\]). Adopting the message-passing notation and strictly following [@MZ], we call $u$-messages the contributions to the cavity fields coming from the different connected branches of the graph. In SP the messages along the links of the factor graph have a functional nature, carrying information about distributions of $u$-messages over the states at a given value of the energy, fixed by a Lagrange multiplier $y$: we call these distributions of messages $u$-surveys. The SP equations can be written at any “temperature” (the inverse of the Lagrange multiplier $y$ is actually a pseudo-temperature, see [@MZ]). However, they acquire a particularly simple form in the limit $1/y\to 0$, which is the limit of interest for optimization purposes, at least in the SAT region. In K-SAT, the $u$-surveys are parameterized by two real numbers and SP can be implemented very efficiently. Each edge $a \to i$, from a function node $a$ to a variable node $i$, carries a $u$-survey $Q_{a \to i}(u)$.
From these $u$-surveys one can compute the cavity fields $h_{i\to b}$ for every $b\in V(i)$, which in turn determine new output $u$-surveys (see Fig. \[popdynfig\]). Very schematically, the SP equations can be implemented as follows. Let $V(i)$ be the set of function nodes connected to the variable $i$ and $V(a)$ the set of variables connected to the function node $a$; let us denote by $V(i)\setminus a$ and $V(a)\setminus i$ the same sets deprived respectively of the clause $a$ and of the variable $i$. Given a random initialization of all the $u$-surveys $Q_{a \to i}(u)$, the function nodes are selected sequentially at random and the $u$-surveys are updated according to a complete set of coupled functional equations (see Fig. \[popdynfig\] for the notation):

![Cavity fields and $u$-messages. The $u$-survey for the $u$-message $u_{a\to i}$ depends on the pdf's of the cavity fields $h_{j_1\to a}$ and $h_{j_2\to a}$. These in turn depend on the $u$-surveys for the $u$-messages incoming to the variables $j_1$ and $j_2$.[]{data-label="popdynfig"}](popdyn.eps){width="0.9\columnwidth"}

$$\begin{aligned} P_{j \to a}(h_{j\to a})&=&C_{j \to a} \int \mathcal{D}Q_{j,a}\,\,\delta \Big(h_{j\to a}-\sum_{b\in V(j)\setminus a} u_{b\to j} \Big) \exp \Big(y \Big(\Big\vert\sum_{b \in V(j)\setminus a} u_{b\to j} \Big\vert - \sum_{b\in V(j)\setminus a} \vert u_{b\to j} \vert\Big) \Big), \label{P}\\ Q_{a \to i}(u)&=&\int \mathcal{D}P_{a,i}\,\, \delta\left( u-\hat{u}_{a\to i}\left(\left\{h_{j\to a}\right\}\right)\right), \label{Q2}\end{aligned}$$ where the $C_{j \to a}$'s are normalization constants, the function $\hat{u}_{a\to i}$ is: $$\hat{u}_{a\to i}\left(\left\{h_{j\to a}\right\}\right)=J_{a,i}\prod_{j\in V(a)\setminus i}\theta\left(J_{a,j}h_{j\to a}\right),$$ and the integration measures are given by: $$\mathcal{D}Q_{j,a} = \prod_{b\in V(j)\setminus a}Q_{b \to j} (u_{b\to j})\,du_{b\to j},$$ $$\mathcal{D}P_{a,i} = \prod_{j\in V(a)\setminus i}P_{j\to a}(h_{j\to a})\,dh_{j\to a}.$$ Parameterizing the $u$-surveys as $$Q_{a \to i}(u)=\eta_{a\to i}^0\delta(u)+\eta_{a \to i}^+\delta(u-1)+\eta_{a \to i}^-\delta(u+1),$$ where $\eta_{a\to i}^0=1-\eta_{a\to i}^+ - \eta_{a\to i}^-$, the above set of equations (\[P\],\[Q2\]) defines a non-linear map over the $\eta$'s. Once a fixed point is reached, from the list of the $u$-surveys one may compute the normalized pdf of the *local field* acting on each variable: $$\begin{aligned} P_{i}(H)&=&C_i \int \mathcal{D}\widehat{Q}_i\,\,\delta \Big( H -\sum_{b \in V(i)} u_{b\to i} \Big) \exp\Big(y\Big(\Big\vert \sum_{b \in V(i)} u_{b\to i} \Big\vert - \sum_{b \in V(i)} \vert u_{b\to i} \vert\Big) \Big), \label{local_field}\\ \mathcal{D}\widehat{Q}_i &=& \prod_{b\in V(i)}Q_{b \to i} (u_{b\to i})\,du_{b\to i}.\end{aligned}$$ It should be remarked that $P_i(H)$ is in general different from the family of *cavity field* pdf's $P_{i\to b}(h)$ computed by means of (\[P\]). From the knowledge of the cavity and local field pdf's, one derives the (Bethe) free energy at the level of 1-RSB: $$\Phi(y)= \frac{1}{N}\left(\sum_{a=1}^M \Phi^f_{a}(y)-\sum_{i=1}^N \Phi^v_i(y) (\Gamma_i-1)\right) \ , \label{freeonesamp1}$$ where $\Gamma_i$ is the connectivity of the variable $i$ and: $$\begin{aligned} \Phi^f_{a}(y)&=&-\frac{1}{y} \ln \left\{ \int \prod_{i \in V(a)}\mathcal{D}Q_{i,a}\,\, \exp \left[ -y \min_{\{ \sigma_i,i \in V(a)\} } \left( E_a -\sum_{i \in V(a)} \left[ \sum_{b\in V(i)\setminus a}u_{b\to i} \right] \sigma_i + \sum_{b\in V(i)\setminus a} \vert u_{b\to i} \vert \right) \right] \right\} ,\nonumber \\ \Phi^v_i(y)&=&-\frac{1}{y} \ln \left \{ \int \mathcal{D}\widehat{Q}_i\,\, \exp\left[y \Big(\Big\vert \sum_{a \in V(i)} u_{a\to i} \Big\vert- \sum_{a \in V(i)}\vert u_{a\to i} \vert \Big) \right] \right \}=-\frac{1}{y} \ln (C_i). \label{freeonesamp2}\end{aligned}$$ Here, $E_a$ is the energy contribution of the function node $a$.
The maximum value of the free-energy functional provides a lower bound estimate of the ground state energy of the Hamiltonian (\[hamiltonian\]) defined on the sample. In the SAT region the free-energy functional $\Phi(y)$ is always non-positive and is increasing in the limit $y\to\infty$; in the UNSAT region, on the contrary, it exhibits a positive maximum at $y=y^*$ (see [@MZ]). From the free-energy density of a given instance, it is straightforward to compute numerically its complexity $\Sigma(y)=\partial \Phi(y) / \partial(1/y)$ and its energy density $\epsilon(y)=\partial(y\Phi(y))/\partial y$. We recall that the complexity is linked to the number of pure states (*i.e.,* clusters of configurations) of energy $E$ by the defining relation $\mathcal{N}(E)=\exp\left(N\Sigma(E)\right)$. The energy level represented by the largest number of configurations, $e_{th}$, is given by: $$\Sigma(e_{th})=\max_{E}\left(\Sigma(E)\right).$$ Further RSB corrections may be needed to locate the precise value of $e_{th}$, which is in any case lower bounded by the largest energy of 1-RSB stable states, the so-called [*Gardner energy*]{} $E_G$. It is expected that local search strategies get trapped at energies close, but not necessarily equal, to the threshold energy (see ref. [@Montanari:Ricci:1] for a thorough discussion of the role of the iso-complexity states [@iso]). More elaborate strategies not properly satisfying detailed balance (*e.g.,* WalkSat for the K-SAT problem) could in principle overcome this type of barrier; however, the available numerical and analytical results suggest that these more sophisticated randomized searches also undergo an exponential slowdown, with different layers of states acting as dynamical traps, depending on the details of the heuristics.
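The Legendre structure of these definitions can be checked mechanically: since $\Sigma$ and $\epsilon$ are both built from derivatives of $\Phi$, at a maximum $y^*$ of $\Phi$ (where $\Phi'(y^*)=0$) one must find $\Sigma(y^*)=0$ and $\epsilon(y^*)=\Phi(y^*)$, which is how the ground state energy is read off from the maximum of the free energy. A minimal sketch using a toy, purely illustrative $\Phi(y)$ (not a real 3-SAT free energy) and finite differences:

```python
import math

def sigma(phi, y, h=1e-5):
    """Complexity as defined in the text: derivative of Phi w.r.t. 1/y."""
    x = 1.0 / y
    return (phi(1.0 / (x + h)) - phi(1.0 / (x - h))) / (2 * h)

def energy_density(phi, y, h=1e-5):
    """Energy density as defined in the text: derivative of y*Phi(y) w.r.t. y."""
    return ((y + h) * phi(y + h) - (y - h) * phi(y - h)) / (2 * h)

# toy free-energy curve with a maximum at y* = 3 (illustration only)
ystar = 3.0
phi = lambda y: 0.01 * math.exp(-(y - ystar) ** 2)

# at y = y*: Phi'(y*) = 0, hence Sigma vanishes and epsilon equals Phi
print(sigma(phi, ystar), energy_density(phi, ystar) - phi(ystar))
```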
SP in the UNSAT region {#SP-Y}
======================

In the SAT phase, where the $y \to \infty$ limit is taken, the convolutions (\[P\]) filter out completely any clause-violating truth value assignment. This feature is extremely useful for satisfiable formulas, but it becomes undesirable when our sample is presumably unsatisfiable. In the UNSAT region the SP equations require a finite value of the Lagrange multiplier $y$. The filtering action of the exponential re-weighting term in (\[P\]) is then weakened, and the messages computed by the SP equations can convey information pointing to states with a non-vanishing number of violated constraints.

The finite pseudo-temperature recursive equations
-------------------------------------------------

The SP equations simplify considerably in the $y\to \infty$ limit and lead to extremely efficient algorithmic implementations, as discussed in great detail in [@BMZ]. In the case of finite pseudo-temperature $1/y$ the same simplification cannot take place because of the presence of a nontrivial re-weighting factor. Still, a relatively fast recursive procedure can be written. Let us consider a variable $j$ having $\Gamma_j$ neighboring function nodes and let us compute the cavity field pdf $P_{j \to a}(h)$ where $a\in V(j)$. We start by randomly picking one function node in $V(j)\setminus a$, denoted as $b_1$, and we calculate the following “$h$-survey”: $$\label{pdrecinit} P_{j\to a}^{(1)}(h)= \eta_{b_1\to j}^{0}\,\delta(h) + \eta_{b_1\to j}^{+}\,\delta(h-1) + \eta_{b_1\to j}^{-}\,\delta(h+1).$$ The function $P_{j\to a}^{(1)}(h)$ would correspond to the true local field pdf of the variable $j$ if $b_1$ were its only neighboring clause (as denoted by the upper index). The following steps of the recursive procedure consist of adding the contributions of all the other function nodes in $V(j)\setminus a$, clause by clause (Fig. \[popdynrecfig\]): ![Computing recursively a cavity pdf.
(a) In order to find a single cavity pdf $P_{j\to a}(h)$, a single clause $b_1$ in $V(j)\setminus a$ is picked at random and the $u$-survey $Q_{b_1\to j}$ is used to compute equation (\[pdrecinit\]); (b) The contributions of all the other function nodes in $V(j)\setminus a$ are then added, clause by clause; (c) The pdf computed recursively after $\Gamma_j -1$ iterations coincides with $P_{j\to a}(h)$.[]{data-label="popdynrecfig"}](popdynrec.eps){width="0.9\columnwidth"}

$$\begin{aligned} \label{pdrec} \widetilde{P}_{j\to a}^{(\gamma)}(h)&=& \eta_{b_\gamma\to j}^{0}\,\widetilde{P}_{j\to a}^{(\gamma-1)}(h)\\ &+& \eta_{b_\gamma\to j}^{+}\,\widetilde{P}_{j\to a}^{(\gamma-1)}(h-1)\,\exp\left [-2y\,\hat{\theta}(-h)\right]\nonumber\\ &+& \eta_{b_\gamma\to j}^{-}\,\widetilde{P}_{j\to a}^{(\gamma-1)}(h+1)\,\exp\left [-2y\,\hat{\theta}(h)\right].\nonumber\end{aligned}$$ Here $\widetilde{P}^{(\gamma)}_{j\to a}(h)$ is an unnormalized pdf and $\hat{\theta}(h)$ is a step function equal to $1$ for $h \geq 0$ and zero otherwise. The recursion ends after $\gamma = \Gamma_j-1$ steps, when the influence of every clause in $V(j)\setminus a$ has been taken into account. The final cavity-field pdf $P_{j\to a}(h)$ can be found straightforwardly by computing the pdf $\widetilde{P}_{j\to a}^{(\Gamma_j-1)}(h)$ for all values of the field $-\Gamma_j + 1 \leq h \leq \Gamma_j -1$ and by normalizing it. As already pointed out in Section \[SP\], the knowledge of $K - 1$ input cavity-field pdf's can be used to obtain a single output $u$-survey. Let us compute for instance the $u$-survey $Q_{a\to i}(u)$ (see again Fig. \[popdynfig\] for the notation). In order to do that, we first need the cavity field pdf's $P_{j\to a}(h)$ for every $j\in V(a)\setminus i$.
The parameters $\{\eta_{a\to i}^0, \eta_{a\to i}^+, \eta_{a\to i}^-\}$ are then updated according to the formulas: $$\label{equpdateeta} \eta_{a\to i}^{J_{a, i}} = \prod_{n=1}^{K-1} W_{j_n\to a}^{J_{j_n, a}}, \quad \eta_{a\to i}^{-J_{a, i}} = 0, \quad \eta_{a\to i}^0 = 1 - \eta_{a\to i}^{J_{a, i}},$$ where we have introduced the weight factors: $$\label{Wplus} W_{j\to a}^+ = \sum_{h = 1}^{\Gamma_j - 1} P_{j\to a}(h),\quad W_{j\to a}^- = \sum_{h = -\Gamma_j + 1}^{-1} P_{j\to a}(h).$$ It should be remarked that $Q_{a\to i}(u)$ depends on one single nontrivial parameter $\eta_{a\to i}^{J_{a, i}}$ (from now on simply referred to as $\eta_{a\to i}$). We could say that a single kind of message can be produced, telling the receiving literal to assume the truth value “TRUE”; this message is transmitted along the edge $a\to i$ with a probability $\eta_{a\to i}$, corresponding to the probability that the only way of not violating the constraint $a$ is to set the truth value of $i$ appropriately. Starting from a full collection of $u$-surveys at a given time, it is possible to realize a complete update of all the parameters $\left\{\eta_{a\to i}\right\}$ by systematic application of the recursions (\[pdrecinit\]), (\[pdrec\]) and of the relation (\[equpdateeta\]); from the new set of $u$-surveys, new cavity field pdf's can be computed, and the procedure continues until self-consistency of the $\eta$'s is reached. This procedure can be implemented efficiently and allows us to determine the fixed point of the population-dynamics equations (\[P\]), (\[Q2\]) for a general value of $y$.

The SP-Y algorithm
------------------

In the usual SP-inspired decimation [@BMZ], the computation of the local field pdf's $P_{i}(H)$ is used to decide a truth value assignment for the most biased variables. Indeed, it is reasonable that a spin tends to align itself with the most probable direction of the local field.
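Collecting the finite-$y$ recursion (\[pdrecinit\])–(\[pdrec\]) and the update (\[equpdateeta\]) of the previous subsection, a minimal sketch on integer-valued fields (an unoptimized rendering; the function and variable names are ours):

```python
import math

def theta_hat(h):
    """Step function of the text: 1 for h >= 0, 0 otherwise."""
    return 1 if h >= 0 else 0

def cavity_pdf(etas, y):
    """Fold the u-surveys (eta0, eta+, eta-) of the clauses in V(j)\\a into
    the cavity-field pdf P_{j->a}(h), one clause per recursion step."""
    P = {0: 1.0}                      # before any clause is folded in: h = 0
    for e0, ep, em in etas:
        Q = {}
        for h in range(-len(etas), len(etas) + 1):
            val = (e0 * P.get(h, 0.0)
                   + ep * P.get(h - 1, 0.0) * math.exp(-2 * y * theta_hat(-h))
                   + em * P.get(h + 1, 0.0) * math.exp(-2 * y * theta_hat(h)))
            if val > 0.0:
                Q[h] = val
        P = Q
    Z = sum(P.values())               # normalization (the C_{j->a} constant)
    return {h: p / Z for h, p in P.items()}

def eta_update(cavity_pdfs, couplings):
    """Eq. (equpdateeta): eta_{a->i} is the product, over the other K-1
    variables, of W^+ or W^- according to the coupling J_{j_n,a}."""
    eta = 1.0
    for P, J in zip(cavity_pdfs, couplings):
        w_plus = sum(p for h, p in P.items() if h > 0)
        w_minus = sum(p for h, p in P.items() if h < 0)
        eta *= w_plus if J > 0 else w_minus
    return eta

# two contradictory, fully polarized incoming surveys: the opposite pushes
# cancel and, after the exp(-2y) re-weighting, all weight sits at h = 0
print(cavity_pdf([(0.0, 1.0, 0.0), (0.0, 0.0, 1.0)], y=1.0))
```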
A ranking can be realized by finding all the probability weights $$\label{Wplus_loc} W_{j}^+ = \sum_{H = 1}^{\Gamma_j} P_{j}(H),\quad W_{j}^- = \sum_{H = -\Gamma_j}^{-1} P_{j}(H),$$ and by sorting the variables according to the values of a bias function: $$\label{bias_SP} b_{\mathrm{fix}}(j) = |W_{j}^+ - W^-_j|.$$ The local field pdf’s $P_{j}(H)$ can be calculated naturally by resorting to the iterations (\[pdrecinit\]), (\[pdrec\]): the computation is done simply by sweeping over the whole set of neighboring function nodes $V(j)$, now including also the contribution of the skipped edge $a\to j$. By fixing the spin of the most biased variable in the right direction, we actually reduce the original $N$-variable problem to a new one with $N-1$ variables. New $u$-surveys are then computed. In doing so, we have to take care of fixed variables: if $i$ is fixed, its cavity field pdf’s must be of the form $$\label{fully_polarized} P_{i\to a}(h) = \delta\left(h-J_{a, i}s_{i}\right),$$ regardless of the recursions (\[pdrecinit\]), (\[pdrec\]). The complete polarization reflects the knowledge of the truth value of the literals depending on the spin $s_i$. The decimation procedure continues until a full truth assignment has been generated, or until convergence has been lost or a paramagnetic state has been reached; in the latter cases the original formula is simplified according to the partial truth assignment already generated and the simplified formula is passed to a specialized heuristic. Our preferred choice is the WalkSat algorithm [@WalkSat], which is far more efficient than SA in the hard region of the 3-SAT problem, as we have checked exhaustively.
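The ranking-and-fixing step of Eqs. (\[Wplus_loc\])–(\[bias_SP\]) above can be sketched as follows (an illustrative Python fragment of ours, not the actual implementation):

```python
import numpy as np

def most_biased_variable(local_pdfs):
    """Sketch of the decimation ranking: given the local-field pdfs P_j(H),
    each stored on the integer field axis -G_j..+G_j, return the index of
    the variable with the largest bias b_fix(j) = |W_j^+ - W_j^-| together
    with the spin direction it should be fixed to."""
    best_j, best_bias, best_spin = None, -1.0, 0
    for j, P in enumerate(local_pdfs):
        G = (len(P) - 1) // 2
        W_plus, W_minus = P[G + 1:].sum(), P[:G].sum()
        bias = abs(W_plus - W_minus)
        if bias > best_bias:
            best_j, best_bias = j, bias
            best_spin = 1 if W_plus >= W_minus else -1
    return best_j, best_spin
```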
Very briefly, the strategy of WalkSat is the following: at each time step the current assignment is changed by randomly alternating greedy moves (in which the variable that, if flipped, maximizes the number of satisfied clauses is selected) and random-walk steps (in which a variable belonging to a randomly chosen unsatisfied clause is selected and flipped). WalkSat stops if either a satisfying assignment is found or if the maximum number of allowed spin flips (the “cutoff”) is reached (see Ref. [@Orponen] for another recently analyzed and very efficient heuristic). When working at finite pseudo-temperature, we have to take into account the possibility that some non-optimal fixing is done in the presence of thermal “noise”. After several updates of the $u$-surveys some biases of fixed spins may become smaller than the value they had at the time when the corresponding spin was fixed. Certain local fields can even reverse their orientation. Small or positive values of an index function like: $$\label{bias_SP-bk} b_{\mathrm{backtrack}}(j) = -s_j\left(W_{j}^+ - W^-_j\right),$$ can track the appearance of such dangerous fixed spins, and this information can be used to implement some “error removal” procedure; for instance, a simple strategy can be devised where both unfixing and fixing moves are performed at a fixed ratio $0 \leq r < 0.5$ (see [@Parisi_backtrack] for another backtracking implementation). The actual finite-$y$ SP simplification procedure (SP-Y) will depend not only on the backtracking fraction $r$, but even more on the choice of the inverse pseudo-temperature $y$. The simplest possibility is to keep it fixed during the simplification, but one may choose to dynamically update it, in order to stay as close as possible to the maximum $y^*$ of the free energy functional $\Phi(y)$ (which corresponds to selecting the ground state in the 1-RSB framework, as we have seen in Section \[SP\]).
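The WalkSat strategy recalled above fits in a few lines of Python; this sketch (ours, for illustration) alternates the two kinds of moves at random, as in the description in the text — the production implementation of [@WalkSat] is of course far more optimized:

```python
import random

def walksat(clauses, n_vars, p_walk=0.5, cutoff=10**5, seed=None):
    """Hedged sketch of the WalkSat strategy described in the text.
    clauses: list of tuples of nonzero signed literals over variables
    1..n_vars; a positive literal l is satisfied when variable |l| is True.
    Returns a satisfying assignment (list of bools) or None at cutoff."""
    rng = random.Random(seed)
    assign = [None] + [rng.choice([True, False]) for _ in range(n_vars)]
    sat = lambda cl: any(assign[abs(l)] == (l > 0) for l in cl)
    for _ in range(cutoff):
        unsat = [cl for cl in clauses if not sat(cl)]
        if not unsat:
            return assign[1:]                  # satisfying assignment found
        cl = rng.choice(unsat)
        if rng.random() < p_walk:              # random-walk step
            v = abs(rng.choice(cl))
        else:                                  # greedy step
            def n_sat_if_flipped(v):
                assign[v] = not assign[v]
                n = sum(sat(c) for c in clauses)
                assign[v] = not assign[v]
                return n
            v = max((abs(l) for l in cl), key=n_sat_if_flipped)
        assign[v] = not assign[v]
    return None                                # cutoff reached
```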
The equations (\[freeonesamp1\]), (\[freeonesamp2\]) can be rewritten in the following form, suitable for numerical computation: $$\begin{aligned} \label{Phi_SP-Y_clause} \Phi_{a}^f(y) &=& -\frac{1}{y}\left[\ln\Big(1+\left(\mathrm{e}^{-y}-1\right) \prod_{i\in V(a)}W_{i\to a}^{J_{a, i}}\Big) - \ln\Big(\prod_{i\in V(a)}C_{i\to a}\Big)\right],\\ \label{Phi_SP-Y_var} \Phi_{i}^v(y) &=& -\frac{1}{y}\ln\left(C_i\right).\end{aligned}$$ In Fig. \[pseudofig\] we give a summary of the simplification procedure in a standard pseudo-code notation. The first release of the SP-Y code can be downloaded from [@SP-Y]. Optimizing the energy below the threshold states {#results} ================================================ As we have already discussed in Section \[SP\], it is expected that, in the thermodynamic limit, any local search algorithm gets trapped in the vicinity of the exponentially numerous threshold states with energy $e_{th}$, and that any local heuristic is in general unable to find the optimal assignment. To verify this prediction, we conducted various experiments, both in the SAT and in the UNSAT phase, focusing on the comparison of the WalkSat performance before and after different kinds of SP-Y simplification. In most situations, we decided to carefully analyze single large samples instead of a larger number of smaller problems: we verified in fact that sample-to-sample fluctuations tend to be irrelevant for sizes of order $10^4$ and larger. SAT region ---------- The aim of the first set of experiments was to check the actual existence of the threshold effect. We ran WalkSat over different formulas in the hard-SAT region, with fixed $\alpha = 4.24$ and sizes varying between $N=10^3$ and $N=10^5$, reaching a maximum cutoff of $10^{10}$ spin flips. The obtained results are plotted in Fig. \[Fig01\]; the Gardner energy is also reported for comparison with the data.
Even if for small samples the local search algorithm is able to find a SAT assignment, for larger formulas ($N \sim \mathcal{O}(10^4)$) WalkSat does not succeed in reaching the ground state: its relaxation profile suffers from a critical slowdown and saturates at some well-defined level. This is actually expected, because the Gardner energy becomes $\mathcal{O}(1)$ only for $N\sim10^4$ or larger, and for a smaller number of variables the threshold effect should be negligible when compared to finite-size effects. We recall that WalkSat cannot be considered as an equilibrium stochastic process and that it is not possible to infer that its saturation level coincides with the sample threshold energy; we can nevertheless claim that WalkSat is unable to explore the full energy landscape of the problem, and that the enormous number of non-optimal valleys is unavoidably hiding the true ground states. Plateaus in the relaxation profiles of WalkSat have indeed already been discussed in [@Semerjian_Monasson; @Barthel_et_al] and ascribed to metastable states acting as dynamical traps. For the $N=10^4$ formula a trapping effect becomes clearly visible in our experiments, but the saturation plateau is below the Gardner lower bound. The finite-size fluctuations are still of the same order as the energy gap between the ground and the threshold states, and the experimental conditions are far from the thermodynamic limit. When the size is increased up to $10^5$ variables, the saturation level finally moves between the full RSB lower bound and the 1-RSB upper bound for $e_{th}$. The efficiency of the SP-Y simplification strategy against the glass threshold is discussed in Fig. \[Fig02\]. We simplified a single randomly generated formula ($N=10^5$, $\alpha=4.24$) at several fixed values of the pseudo-temperature.
The solid line shows for comparison the WalkSat results after a standard SP decimation (*i.e.,* $y\to\infty$): the ground state, $E = 0$, is reached as expected, after a rather small number of spin flips. The same happens after SP-Y simplifications performed at a large enough inverse pseudo-temperature ($y>4$); one should indeed recall that in the SAT region the optimal value for $y$ would be infinite, and that in that limit the SP-Y recursions reduce to the SP equations. After simplification with smaller $y$’s, the WalkSat cooling curves again reach a saturation level, which is nevertheless below the Gardner energy, unless $y$ is too small: the threshold states of the original formula have not been able to trap the local search, even if the ground state becomes inaccessible. As we have already discussed, working at finite temperature increases the probability of violating a clause when doing a spin fixing, and this is particularly evident in the SAT region, where every assignment that does not satisfy some constraint should be filtered out. The procedure is intrinsically error-prone, and in general it will allow one to reach only “good states”, but not the true optimal solutions (the smaller the parameter $y$, the higher the saturation level will be). As we shall discuss in the next section, the use of backtracking partially cures the accumulation of errors at finite $y$: the saturation level can in fact be significantly lowered by keeping the same pseudo-temperature and introducing a small fraction of backtrack moves during the simplification. In Fig. \[Fig02\] the data for $y=1.5$ show the importance of backtracking. While the run of SP-Y without backtracking led to a plateau above the Gardner energy, with the introduction of backtrack moves we find energies well below the threshold. UNSAT region ------------ When entering the UNSAT region, the task of looking for the optimal state becomes harder.
The expected presence of violated constraints in the optimal assignments forces us to run the simplification at a finite pseudo-temperature. Unfortunately, after many spin fixings, the recursions (\[pdrecinit\]), (\[pdrec\]) cease to converge at some finite value of $y$ before the maximum of the free energy is reached, most likely because the sub-problem has entered a full RSB phase. At this point one should switch to a 2-RSB version of SP, which we have not yet implemented. Alternatively, one could try to run the final heuristic search directly (hoping that the full RSB sub-system is not exponentially hard to optimize), or more simply one may continue the decimation process by selecting the largest $y$ for which the computation converges. We decided to implement the latter choice until either convergence is lost independently of the value of $y$ or a paramagnetic state is reached. In our experiments we studied several 3-SAT sample problems belonging to the 1-RSB stable UNSAT phase. We employed WalkSat as an example of a standard well-performing heuristic. Although WalkSat is not optimized for unsatisfiable problems, in the 1-RSB stable UNSAT region it still performs much better than any basic implementation of SA. We observed, however, that, even after $10^{10}$ spin flips, the best WalkSat assignments were still quite distant from the Gardner energy, for various samples of different size and $\alpha$. In Fig. \[Fig03\] we show the results relative to many different SP-Y simplifications with various values of $y$ and $r$ for a single sample with $N=10^5$ and $\alpha=4.29$. The simplification always produced an improvement in the WalkSat performance, but, in the absence of backtracking, we were unable to go below the Gardner lower bound (although we touched it in some cases: in Fig. \[Fig03\] we show the data for a simplification at fixed $y=2.5$; a simplification with runtime optimization of $y$ reached the same level).
The relative inefficiency of these first simplification attempts was not due to the threshold effect alone, but also to an extreme sensitivity to the choice of $y$, as pointed out by a second set of experiments making use of backtracking. We first performed an extensive analysis of the simultaneous optimization of $y$ and $r$, using smaller samples in order to produce more experimental points. After some trials, the fraction $r=0.2$ appeared to be the optimal one, at least for our implementation and in the small region of the K-SAT phase diagram under investigation. The data in Fig. \[Fig04\] refer to a formula with $N=10^4$ variables and $\alpha=4.35$. The dashed horizontal line shows the WalkSat best energy obtained on the original formula after $10^9$ spin flips. The WalkSat performance was seriously degraded when simplifying at too small values of $y$, but the introduction of backtracking cured the problem, identifying and repairing most of the wrong assignments. The WalkSat efficiency actually became almost independent of the choice of pseudo-temperature, whereas in the absence of error correction a time-consuming parameter tuning was required for optimization. Coming back to the analysis of the sample of Fig. \[Fig03\], the backtracking simplifications allowed us to access states definitely below the Gardner lower bound. The combination of runtime $y$-optimization and of error correction was even more effective: after a rather small number of spin flips, WalkSat reached a saturation level strikingly closer to the ground-state lower bound than to the Gardner energy. A further valuable effect of the introduction of backtracking was the increased efficiency of the formula simplification itself: in the backtracking experiments, SP-Y was able to determine a truth value for more than 80% of the variables before losing convergence, while without backtracking the algorithm stopped on average after only 40% of fixings.
All the samples analyzed in the previous sections were taken from the 1-RSB stable region of the 3-SAT problem, where the equations (\[P\]), (\[Q2\]) are considered to be exact. For $\alpha > 4.39$, the phase becomes full RSB and, from the very first step of the decimation procedure, SP loses convergence before the free energy $\Phi(y)$ reaches its maximum. While a full RSB version of SP would most likely provide very good results, SP-Y can still be used in a sub-optimal way by selecting the largest value of $y$ for which convergence is reached. Numerical experiments show that the performance of SP-Y is indeed in good agreement with the analytical expectations. However, it should be noticed that in this region the use of SP is not necessary. Although the performance of WalkSat and SA can be improved by the SP simplification, SA alone is already able to find close-to-optimum assignments efficiently (as expected for a full RSB scenario) and performs definitely better than WalkSat. Conclusions {#Conclusions} =========== In this paper, we have displayed the performance of SP as an optimization device and shown that configurations well below the threshold states can be found efficiently. Similar results are expected to hold also for random satisfiable instances very close to the critical point, for which the combined use of finite pseudo-temperature and backtracking could give access to the SAT optima. It would be of some interest to analyze further improvements of the decimation strategies, as well as to consider more structured factor graphs within a variational framework, in which some correlations can be put under control. A possible application of SP-Y–like algorithms can be found in information theory: lossy data compression based on Low Density Parity Check schemes leads to optimization problems which are indeed very similar to the one discussed in this paper. Acknowledgments =============== We thank A. Braunstein, M. Mézard, G. Parisi and F.
Ricci-Tersenghi for very fruitful discussions. This work has been supported in part by the European Community’s Human Potential Programme under contract HPRN-CT-2002-00319, STIPCO. [99]{} J.-P. Bouchaud, L. F. Cugliandolo, J. Kurchan and M. Mézard, in [*Spin Glasses and Random Fields*]{}, edited by A. P. Young (World Scientific 1997) H. Frauenfelder, P. G. Wolynes, and R. H. Austin, Rev. Mod. Phys. [**71**]{}, s419–s430 (1999); V. S. Pande, A. Y. Grosberg, and T. Tanaka, Rev. Mod. Phys. [**72**]{}, 259–314 (2000) M. Blatt, S. Wiseman, and E. Domany, Physical Review Letters [**76**]{}, 3251 (1996) A. K. Hartmann, H. Rieger, [*Optimization Algorithms in Physics*]{} (Wiley-VCH, Berlin, 2001) A. K. Hartmann, H. Rieger, [*New and Advanced Optimization Algorithms in Physics and Computational Science*]{} (Wiley-VCH, Berlin, 2004) S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, Science [**220**]{}, 671–680 (1983) C. H. Papadimitriou, K. Steiglitz, [*Combinatorial Optimization: Algorithms and Complexity*]{} (Prentice-Hall, Englewood Cliffs, NJ, 1982) Special Issue on [*NP-hardness and Phase transitions*]{}, edited by O. Dubois, R. Monasson, B. Selman and R. Zecchina, Theor. Comp. Sci. [**265**]{}, Issue: 1-2 (2001) T. Hogg, B. A. Huberman, and C. Williams (eds), Artificial Intelligence [**81**]{} I & II (1996) H. Nishimori, [*Statistical Physics of Spin Glasses and Information Processing*]{} (Oxford University Press, 2001) D. A. Spielman, Lecture Notes in Computer Science [**1279**]{}, pp. 67-84 (1997) T. Richardson and R. Urbanke, An introduction to the analysis of iterative coding systems, in [*Codes, Systems, and Graphical Models*]{}, edited by B. Marcus and J. Rosenthal (Springer, New York, 2001) N. Sourlas, Nature [**339**]{}, 693 (1989) M. Ajtai, Electronic Colloquium on Computational Complexity (ECCC) [**7**]{}, 3 (1996) R. Monasson, R. Zecchina, S. Kirkpatrick, B. Selman, and L. Troyanski, Nature [**400**]{}, 133 (1999) M. Mézard, G. Parisi, R.
Zecchina, Science [**297**]{}, 812 (2002) (Sciencexpress published on-line 27-June-2002; 10.1126/science.1073287) S. Cocco, R. Monasson, A. Montanari, and G. Semerjian, Approximate analysis of search algorithms with “physical” methods, preprint cs.CC/0302003 (2003) M. Mézard, R. Zecchina, Phys. Rev. E [**66**]{}, 056126 (2002) F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, IEEE Trans. Inf. Theory [**47**]{}, 498 (2002) A. Braunstein, M. Mézard, R. Zecchina, Survey propagation: an algorithm for satisfiability, preprint 2002, to appear in [*Random Structures and Algorithms*]{}, cs.CC/0212002 A. Montanari, F. Ricci-Tersenghi, On the cooling-schedule dependence of the dynamics of mean-field glasses, preprint cond-mat/0401649 (2004) R. Motwani, P. Raghavan, [*Randomized Algorithms*]{} (Cambridge University Press, Cambridge, 2000) G. Semerjian and R. Monasson, Relaxation and Metastability in the Random WalkSAT search procedure, cond-mat/0301272, preprint (2003) W. Barthel, A. K. Hartmann, and M. Weigt, Solving satisfiability problems by fluctuations: An approximate description of the dynamics of stochastic local search algorithms, cond-mat/0301271, preprint (2003) G. Parisi, A backtracking survey propagation algorithm for K-satisfiability, preprint cond-mat/0308510 (2003) M. R. Garey and D. S. Johnson, [*Computers and intractability*]{} (Freeman, New York, 1979) S. Kirkpatrick, B. Selman, Science [**264**]{}, 1297 (1994) E. Friedgut, Journal of the A.M.S. [**12**]{}, 1017 (1999) O. Dubois, Y. Boufkhad, and J. Mandler, Typical random 3-SAT formulae and the satisfiability threshold, in [*Proc. 11th ACM-SIAM Symp. on Discrete Algorithms*]{}, 124 (San Francisco, CA, 2000); A. Kaporis, L. Kirousis, and E. Lalas, The probabilistic analysis of a greedy satisfiability algorithm, in [*Proceedings of the 4th European Symposium on Algorithms*]{} (ESA 2002), to appear in series: Lecture Notes in Computer Science, Springer F. Guerra, Comm. Math. Phys. [**233**]{}, 1 (2003); S. Franz, M.
Leone, J. Stat. Phys. [**111**]{}, 535 (2003) J. Franco, Theoretical Computer Science [**265**]{}, 147 (2001); D. Achlioptas, G. Sorkin, [*41st Annu. Symp. of Foundations of Computer Science, IEEE Computer Soc. Press*]{}, 590 (Los Alamitos, CA, 2000) D. Achlioptas, C. Moore, Random k-SAT: Two Moments Suffice to Cross a Sharp Threshold, preprint (2002) D. Achlioptas, A. Naor, Y. Peres, On the Fraction of Satisfiable Clauses in Typical Formulas, preprint, extended abstract FOCS’03, pp. 362-370 U. Schöning, Algorithmica [**32**]{}, 615-623 (2002) M. Alekhnovich, E. Ben-Sasson, Analysis of the Random Walk Algorithm on Random 3-CNFs, preprint (2002) A. J. Parkes, Lecture Notes in Computer Science [**2470**]{}, 708 (2002) H. Karloff, U. Zwick, in Proc. of 38th FOCS, 406–415 (1997) S. Mertens, M. Mézard, R. Zecchina, Threshold values of Random K-SAT from the cavity method, preprint, cs.CC/0309020 (2003) G. Biroli, R. Monasson, and M. Weigt, Eur. Phys. J. [**B 14**]{}, 551 (2000) G. Parisi, Some remarks on the survey decimation algorithm for K-satisfiability, preprint cs.CC/0301015 (2003). A. Montanari, G. Parisi, and F. Ricci-Tersenghi, J. Phys. A 37, 2073 (2004) A. V. Lopatin, L. B. Ioffe, Phys. Rev. [**B 66**]{}, 174202 (2002) A. Z. Broder, A. M. Frieze, E. Upfal, in [*Proc. 4th Annual ACM-SIAM Symp. on Discrete Algorithms*]{}, 322 (1993) B. Selman, H. Kautz and B. Cohen, Proc. AAAI-94, Seattle, WA, 337-343 (1994) S. Seitz, P. Orponen: An efficient local search method for random 3-satisfiability, in [*Proc. LICS’03 Workshop on Typical Case Complexity and Phase Transitions*]{} (Ottawa, Canada, June 2003); Electronic Notes in Discrete Mathematics Vol. 16. (Elsevier, Amsterdam, 2003) download site: [www.ictp.trieste.it/\~zecchina/SP]{}
--- abstract: 'The famous two weights problem consists in characterising all possible pairs of weights such that the Hardy projection is bounded between the corresponding weighted $L^2$ spaces. Koosis’ theorem of 1980 gives a way to construct a certain class of pairs of weights. We show that Koosis’ theorem is closely related to (in fact, is a direct consequence of) a spectral perturbation model suggested by de Branges in 1962. Further, we show that de Branges’ model provides an operator-valued version of Koosis’ theorem.' address: - 'Department of Mathematics, King’s College London, Strand, London, WC2R 2LS, U.K.' - 'Department of Mathematics, Michigan State University, East Lansing, MI 48824, U.S.A.' author: - Alexander Pushnitski - Alexander Volberg title: Spectral perturbation theory and the two weights problem --- Introduction and main result {#sec.a} ============================ Introduction {#sec.a1} ------------ Let $P_\pm$ be the Hardy projections in $L^2({{\mathbb T}})$ (${{\mathbb T}}$ is the unit circle parameterised by $(0,2\pi)$): $$(P_\pm f)(e^{i\theta}) = \pm \lim_{r\to1\mp0}\int_0^{2\pi}\frac{f(e^{it})}{1-re^{i(\theta-t)}}\frac{dt}{2\pi}. \label{a1}$$ In its simplest form, the two weights problem consists in the characterisation of all pairs of weights $v_j:{{\mathbb T}}\to [0,\infty)$, $j=0,1$, such that $$P_+: L^2({{\mathbb T}},v_0(e^{it})dt) \mapsto L^2({{\mathbb T}}, v_1(e^{it})dt) \label{a2}$$ is a bounded operator. (Of course, one could equally speak of $P_-$). If $v_1=v_0$, then the characterisation of such weights is given by the celebrated Muckenhoupt condition [@Muck]: $$\sup_\Delta \left( \frac1{{\lvert\Delta\rvert}}\int_\Delta v_0(e^{it})dt \cdot \frac1{{\lvert\Delta\rvert}}\int_\Delta v_0(e^{it})^{-1}dt \right)<\infty,$$ where the supremum is taken over the set of all intervals $\Delta\subset(0,2\pi)$ and ${\lvert\Delta\rvert}$ is the length of the interval $\Delta$. 
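On the Fourier side, the projections $P_\pm$ defined above are simply frequency cut-offs: letting $\hat f(n)$ denote the Fourier coefficients of $f$, one has $P_+f=\sum_{n\geq0}\hat f(n)e^{in\theta}$ and $P_-f=\sum_{n<0}\hat f(n)e^{in\theta}$, so that $P_++P_-=I$. A small numerical illustration (ours, not part of the argument below):

```python
import numpy as np

def hardy_projection(f_samples, sign=+1):
    """Numerical sketch of the Hardy projections on a uniform grid of the
    circle: P_+ keeps the Fourier modes n >= 0, P_- the modes n < 0
    (so that P_+ + P_- is the identity)."""
    N = len(f_samples)
    c = np.fft.fft(f_samples)            # (unnormalized) Fourier coefficients
    n = np.fft.fftfreq(N, d=1.0 / N)     # integer mode index of each coefficient
    keep = (n >= 0) if sign > 0 else (n < 0)
    return np.fft.ifft(np.where(keep, c, 0.0))
```

For instance, $f(e^{i\theta})=2\cos\theta=e^{i\theta}+e^{-i\theta}$ splits into $e^{i\theta}$ under $P_+$ and $e^{-i\theta}$ under $P_-$.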
If there is no *a priori* relation between $v_0$ and $v_1$, the two weights problem is open, despite many years of effort. Some necessary and some sufficient conditions are known, but no effective complete description of all pairs of weights $v_0$, $v_1$ was available until recently. The recent news at the time of writing is that the conjunction of three preprints [@NTV], [@LSSUT], [@L] proved a long-standing conjecture of Nazarov–Treil–Volberg (see [@Vo]), stating that for the Hilbert transform the so-called two-weight $T1$ theorem is valid. However, the conditions of the $T1$ theorem are not easily translated (if at all) into conditions on weights. Under these circumstances, any partial information on the problem is valuable. One such piece of information is Koosis’ theorem [@Koosis1]: For every weight $v_0\geq0$ such that $0<v_0(e^{it})<1$ for a.e. $t\in(0,2\pi)$ and $v_0^{-1}\in L^1({{\mathbb T}})$, one can find another weight $v_1$, $0\leq v_1\leq v_0$, such that $\log v_1\in L^1({{\mathbb T}})$ and such that the Hardy projection $P_+$ is bounded between the weighted spaces . Koosis’ proof (see also [@NVY Appendix]) is an ingenious calculation, but one can argue that it has a rather *ad hoc* flavour. The purpose of this note is to point out that Koosis’ theorem follows *naturally* from the formalism of spectral perturbation theory (more precisely, scattering theory) in the form suggested by de Branges in [@deB]. In fact, the statement we get in this way is more general than Koosis’ original theorem; we obtain an *operator-valued* analogue. That is, our $L^2$ spaces consist of functions on ${{\mathbb T}}$ with values in a Hilbert space ${{\mathcal K}}$ and our weights are functions with values in the Schatten classes of compact operators in ${{\mathcal K}}$. We hope that this note will attract the attention of experts to the connection between the two weights problem and scattering theory.
We believe that this connection is yet to be thoroughly explored. Preliminaries {#sec.a1a} ------------- First we would like to rewrite the two weights problem in an equivalent form. Let $f\in L^2({{\mathbb T}},v_0(e^{it})dt)$ and suppose that the weight $v_0$ vanishes on some open set. Then the function $f$ is not defined on this open set, and therefore it is not clear how to define the projections $P_\pm f$ by . This suggests that the integration in the definition of the projections $P_\pm$ should be performed with respect to the weighted measure $v_0(e^{it})dt$. Thus, for a weight $w_0:{{\mathbb T}}\to[0,\infty)$, we define the *weighted Hardy projections $P_\pm^{(w_0)}$* by $$(P_\pm^{(w_0)}f)(e^{i\theta}) = \pm \lim_{r\to1\mp0}\int_0^{2\pi}\frac{w_0(e^{it})f(e^{it})}{1-re^{i(\theta-t)}}\frac{dt}{2\pi}; \label{a3}$$ the existence of the limits will be discussed separately. If $v_0(e^{it})>0$ for a.e.  $t$, then a simple argument, replacing $f$ by $v_0f$, shows that $P_+$ is a bounded operator between the spaces if and only if $$P_+^{(w_0)}: L^2({{\mathbb T}},w_0(e^{it})dt)\to L^2({{\mathbb T}},w_1(e^{it})dt) \label{a4}$$ is bounded, where $w_1=v_1$ and $w_0=v_0^{-1}$. Thus, we obtain For every weight $w_0\geq0$ such that $w_0(e^{it})>0$ for a.e. $t\in(0,2\pi)$ and $w_0\in L^1({{\mathbb T}})$, one can find another weight $w_1\geq0$ with $w_1w_0\leq1$ and $\log w_1\in L^1({{\mathbb T}})$ such that the weighted Hardy projection $P_+^{(w_0)}$ is a bounded operator between the spaces . It is this second version of Koosis’ theorem that we will discuss in this paper. Operator valued functions {#sec.a1b} ------------------------- Let ${{\mathcal K}}$ be a Hilbert space; the case $\dim {{\mathcal K}}<\infty$ is not excluded, nor is it trivial. We denote by $(\cdot,\cdot)$ the inner product in ${{\mathcal K}}$ and by ${\lVert\cdot\rVert}$ the norm in ${{\mathcal K}}$.
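For the reader's convenience, the substitution used in the Preliminaries above can be spelled out in one line: writing $g=v_0f$, so that $f=w_0g$ with $w_0=v_0^{-1}$, one gets

```latex
(P_+f)(e^{i\theta})
  =\lim_{r\to1-0}\int_0^{2\pi}
     \frac{w_0(e^{it})g(e^{it})}{1-re^{i(\theta-t)}}\frac{dt}{2\pi}
  =(P_+^{(w_0)}g)(e^{i\theta}),
\qquad
\|f\|^2_{L^2(v_0\,dt)}
  =\int_0^{2\pi}|w_0(e^{it})g(e^{it})|^2\,v_0(e^{it})\,dt
  =\|g\|^2_{L^2(w_0\,dt)},
```

so $\|P_+f\|_{L^2(v_1dt)}\leq C\|f\|_{L^2(v_0dt)}$ for all $f$ is the same statement as $\|P_+^{(w_0)}g\|_{L^2(w_1dt)}\leq C\|g\|_{L^2(w_0dt)}$ for all $g$, with $w_1=v_1$.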
The notation ${\mathcal{B}}({{\mathcal K}})$ stands for the set of all bounded linear operators on ${{\mathcal K}}$, and ${\mathbf{S}}_p$, $1\leq p<\infty$, denotes the Schatten class of compact operators in ${{\mathcal K}}$; in particular, ${\mathbf{S}}_1$ is the trace class. We denote by ${\lVert\cdot\rVert}_p$ the norm in ${\mathbf{S}}_p$ and by ${\lVert\cdot\rVert}_{\mathcal{B}}$ the norm in ${\mathcal{B}}({{\mathcal K}})$. As usual, for $w\in{\mathcal{B}}({{\mathcal K}})$, the notation $w\geq0$ means that $(w\chi,\chi)\geq0$ for all elements $\chi\in{{\mathcal K}}$, and in the same way $w\leq C$, where $C$ is a constant, means $(w\chi,\chi)\leq C{\lVert\chi\rVert}^2$ for all $\chi\in{{\mathcal K}}$. For any $w\in{\mathcal{B}}({{\mathcal K}})$ such that $w\geq0$, the square root $w^{1/2}$ is defined via the functional calculus for self-adjoint operators. Below we work with “nice” ${{\mathcal K}}$-valued functions of the form $$f(\mu)=\sum_i (\mu-z_i)^{-1} \chi_i, \quad \mu\in{{\mathbb T}}, \quad \chi_i\in{{\mathcal K}}, \quad {\lvertz_i\rvert}\not=1, \label{a5}$$ where the sum has finitely many terms. We will denote by ${\mathcal{L}}$ the set of all such “nice” functions $f$. Let $w:{{\mathbb T}}\to{\mathcal{B}}({{\mathcal K}})$ be a Borel measurable function. Suppose that $w$ is non-negative, i.e. $w(e^{it})\geq0$ for a.e. $t\in(0,2\pi)$, and that $w$ satisfies $$\int_0^{2\pi}(w(e^{it})\chi,\chi)\frac{dt}{2\pi} \leq C{\lVert\chi\rVert}^2 \label{a4a}$$ for some constant $C$ and all $\chi\in{{\mathcal K}}$. Then for any $f\in{\mathcal{L}}$ we can define the quasi-norm $${\lVertf\rVert}_{L^2(w)}^2 = \int_0^{2\pi} (w(e^{it})f(e^{it}),f(e^{it}))\frac{dt}{2\pi}. \label{a4b}$$ After taking the quotient over the subspace of functions $f$ with ${\lVertf\rVert}_{L^2(w)}=0$, we obtain a norm on the quotient space; the space obtained by taking the closure is, by definition, the weighted space $L^2(w)$. Thus, by construction, ${\mathcal{L}}$ is dense in $L^2(w)$.
Main result and discussion {#sec.a2} -------------------------- Let $1\leq p<\infty$, and let $w_0:{{\mathbb T}}\to{\mathbf{S}}_p$ be a Borel measurable function. We assume that $w_0$ is non-negative and satisfies $$\int_0^{2\pi} {\lVertw_0(e^{it})\rVert}_p \frac{dt}{2\pi}<\infty;$$ this, of course, implies . For convenience, we will assume that $w_0$ is normalised so that the above integral equals one: $$\int_0^{2\pi} {\lVertw_0(e^{it})\rVert}_p \frac{dt}{2\pi}=1. \label{a6}$$ For such a weight $w_0$ and for $f\in{\mathcal{L}}$, we define the weighted Hardy projections $P_\pm^{(w_0)}$, as in the scalar case, by . It is clear that for every $r\not=1$, the integrals in converge absolutely in the norm of ${{\mathcal K}}$. \[th.a1\] Let $1\leq p<\infty$, and let $w_0:{{\mathbb T}}\to{\mathbf{S}}_p$ be a Borel measurable non-negative ($w_0\geq0$ a.e.) weight function which satisfies . Then for all $f\in{\mathcal{L}}$ (i.e. for all $f$ of the form ) and for a.e.  $\theta\in(0,2\pi)$, the limits in exist in the norm of ${{\mathcal K}}$. Further, there exists a non-trivial Borel measurable non-negative weight function $w_1:{{\mathbb T}}\to{\mathcal{B}}({{\mathcal K}})$, which satisfies $$\int_0^{2\pi}(w_1(e^{it})\chi,\chi)\frac{dt}{2\pi} \leq {\lVert\chi\rVert}^2, \quad \forall \chi\in{{\mathcal K}},$$ and there exist contractions (i.e. operators of norm $\leq1$) $X$, $Y_+$, $Y_-$, acting from $L^2(w_0)$ to $L^2(w_1)$, such that the weighted Hardy projections $P_\pm^{(w_0)}$ can be represented as $$P_\pm^{(w_0)}=\pm\frac{i}{2}(X-Y_\pm). \label{a7a}$$ In particular, $$P_\pm^{(w_0)}: L^2(w_0)\to L^2(w_1)$$ are contractions. Let us discuss this result. 1\. It is easy to see that the sum $P_+^{(w_0)}+P_-^{(w_0)}$ is simply the operator of multiplication by $w_0$: $$(P_+^{(w_0)}f)(e^{i\theta})+(P_-^{(w_0)}f)(e^{i\theta}) = w_0(e^{i\theta})f(e^{i\theta}).$$ By , it follows that this operator of multiplication has norm $\leq1$.
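In the scalar case, the identity $P_+^{(w_0)}+P_-^{(w_0)}=w_0\cdot$ just stated is easy to verify numerically; the following self-contained Python sketch (ours, for illustration only) realizes $P_\pm^{(w_0)}$ on a uniform grid as Fourier cut-offs applied to $w_0f$ and checks that their sum is multiplication by $w_0$:

```python
import numpy as np

def weighted_hardy(f, w0, sign=+1):
    """Scalar-valued sketch of the weighted Hardy projections on a uniform
    grid: multiply by the weight under the integral, then keep the Fourier
    modes n >= 0 (for P_+^{(w0)}) or n < 0 (for P_-^{(w0)})."""
    N = len(f)
    c = np.fft.fft(w0 * f)
    n = np.fft.fftfreq(N, d=1.0 / N)
    keep = (n >= 0) if sign > 0 else (n < 0)
    return np.fft.ifft(np.where(keep, c, 0.0))

# check P_+^{(w0)} f + P_-^{(w0)} f = w0 * f on a random sample
rng = np.random.default_rng(0)
theta = 2 * np.pi * np.arange(128) / 128
f = rng.standard_normal(128) + 1j * rng.standard_normal(128)
w0 = 1.0 + 0.5 * np.cos(theta)               # a nonnegative scalar weight
total = weighted_hardy(f, w0, +1) + weighted_hardy(f, w0, -1)
assert np.allclose(total, w0 * f)
```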
From this it follows that $$w_0(e^{i\theta})^{1/2}w_1(e^{i\theta})w_0(e^{i\theta})^{1/2}\leq 1 \label{a9}$$ for a.e.  $\theta\in(0,2\pi)$; see the end of Section \[sec.d\] for the details of this argument. 2\. In fact, more than is true; we note without proof that the boundedness of $P_\pm^{(w_0)}$ implies that $$(P_r*w_0)^{1/2}(P_r*w_1)(P_r*w_0)^{1/2}\leq C$$ for all $r<1$ with some constant $C$; here $P_r*w_{0,1}$ is the convolution with the Poisson kernel $$P_r(\theta)=\frac{1-r^2}{1+r^2-2r\cos\theta}. \label{a9a}$$ 3\. Of course, the boundedness of $P_\pm^{(w_0)}$ implies that the weighted Hilbert transform $$(H^{(w_0)} f)(e^{i\theta}) = \lim_{r\to1} \int_{0}^{2\pi} w_0(e^{it}) \frac{2\sin(\theta-t)}{1+r^2-2r \cos(\theta-t)} f(e^{it})\frac{dt}{2\pi}$$ is a bounded map from $L^2(w_0)$ to $L^2(w_1)$. 4\. If $p=1$, the weight $w_1$ can be chosen to satisfy $$\int_0^{2\pi} {\lVertw_1(e^{it})\rVert}_1\frac{dt}{2\pi}<\infty; \label{a10}$$ see the end of Section \[sec.d\]. 5\. The weight function $w_1$ constructed in the Koosis theorem is non-degenerate in the sense that $\log w_1\in L^1({{\mathbb T}})$. The weight function $w_1$ that we construct in Theorem \[th.a1\] is also non-degenerate in the following sense. One has $$w_0(\mu) = D_0^+(\mu)^* w_1(\mu) D_0^+(\mu), \quad \text{ a.e.\ $\mu\in{{\mathbb T}}$,}$$ where $D_0^+$ is an operator valued function to be constructed below (see ). The function $D_0^+$ satisfies ${\lVertD_0^+(\cdot)\rVert}_{\mathcal{B}}\in L^{1,\infty}({{\mathbb T}})$ and $D_0^+(\mu)$ has a bounded inverse for a.e.  $\mu\in{{\mathbb T}}$. 
In particular, $$\operatorname{rank}w_0(\mu)=\operatorname{rank}w_1(\mu), \quad \text{ a.e.\ $\mu\in{{\mathbb T}}$,} \label{a12}$$ and $${\lVertw_1(\mu)\rVert}_{\mathcal{B}}\geq \frac{{\lVertw_0(\mu)\rVert}_{\mathcal{B}}}{{\lVertD_0^+(\mu)\rVert}_{\mathcal{B}}^2}, \quad \text{ a.e.\ $\mu\in{{\mathbb T}}$.} \label{a30}$$ By , we have $$\log {\lVertw_1(\mu)\rVert}_{\mathcal{B}}\geq \log {\lVertw_0(\mu)\rVert}_{\mathcal{B}}- 2\log^+{\lVertD_0^+(\mu)\rVert}_{\mathcal{B}},$$ and ${\lVertD_0^+(\cdot)\rVert}_{\mathcal{B}}\in L^{1,\infty}({{\mathbb T}})$ implies $\log^+{\lVertD_0^+(\cdot)\rVert}_{\mathcal{B}}\in L^p({{\mathbb T}})$ for all $p<\infty$. The outline of the proof {#sec.a3} ------------------------ We consider the absolutely continuous (a.c.) operator valued measure on ${{\mathbb T}}$ given by $$d\nu_0(e^{i\theta})=w_0(e^{i\theta})\frac{d\theta}{2\pi}. \label{a13}$$ For this measure $\nu_0$, we exhibit (see Lemma \[lma.b1\]) a Hilbert space ${{\mathcal H}}$, a unitary operator $U_0$ in ${{\mathcal H}}$ and a contraction $G:{{\mathcal H}}\to{{\mathcal K}}$ such that $$\nu_0(\delta)=GE_{U_0}(\delta)G^*, \quad \delta\subset{{\mathbb T}}, \label{a14}$$ where $E_{U_0}$ is the projection-valued spectral measure of $U_0$, and $\delta\subset{{\mathbb T}}$ is any Borel set. Next, we construct (see ) a unitary operator $U_1$ in ${{\mathcal H}}$ such that the identities $$\begin{aligned} (\alpha+\psi_0(z)) (\alpha-\psi_1(z)) &=I, \label{a15} \\ (\alpha-\psi_1(z)) (\alpha+\psi_0(z)) &=I, \label{a16}\end{aligned}$$ hold true for all ${\lvertz\rvert}\not=1$; here $\alpha$ is the auxiliary bounded self-adjoint operator given by $$\alpha=\sqrt{I-(GG^*)^2}, \label{a17}$$ and $$\psi_j(z)=i G\frac{U_j+z}{U_j-z} G^*. \label{a17a}$$ Further, similarly to , we set $$\nu_1(\delta)=GE_{U_1}(\delta)G^*, \quad \delta\subset {{\mathbb T}}. \label{a18}$$ We will be able to prove (in Lemma \[lma.c2\]) that the a.c. 
part of the measure $\nu_1$ can be represented as $$d\nu^{\text{\rm (ac)}}_1(e^{i\theta})=w_1(e^{i\theta})\frac{d\theta}{2\pi}$$ with some operator valued non-negative weight function $w_1$. Note that this is not automatic: the Radon-Nikodym theorem for operator valued measures in general fails; to see this, consider the spectral measure of a self-adjoint or unitary operator with a non-trivial a.c. component. Key to our construction is the connection between the weighted Hardy projections $P_\pm^{(w_0)}$ and certain operators appearing in scattering theory for the pair $U_0$, $U_1$. We use the formalism suggested by de Branges [@deB] with some simplifications due to Kuroda [@Ku]. This formalism makes use of the weighted Hilbert spaces $L^2(\nu_j)$, $j=0,1$ of ${{\mathcal K}}$-valued functions on ${{\mathbb T}}$. They are defined, similarly to , starting from the quasi-norm $${\lVertf\rVert}_{L^2(\nu_j)}^2 = \int_0^{2\pi} d(\nu_j(e^{it})f(e^{it}),f(e^{it}))$$ on the set ${\mathcal{L}}$, by taking a quotient and then a closure. We note that $\nu_0=\nu^{\text{\rm (ac)}}_0$ and $$L^2(\nu_1)\subset L^2(\nu_1^{\text{\rm (ac)}}) \quad \text{ and } \quad {\lVertf\rVert}_{L^2(\nu_1^{\text{\rm (ac)}})} \leq {\lVertf\rVert}_{L^2(\nu_1)}. \label{a20}$$ Following de Branges, we define some auxiliary bounded operators $X$, $Y_+$ and $Y_-$ acting from $L^2(\nu_0)$ to $L^2(\nu_1^{{\text{\rm (ac)}}})$. First we denote (cf. , ) $$D_0(z) = \alpha+\psi_0(z), \quad D_1(z) = -\alpha+\psi_1(z). \label{a21}$$ By , we have $$D_0(z)D_1(z)=D_1(z)D_0(z)=-I, \quad {\lvertz\rvert}\not=1. \label{a22}$$ Let $$X: L^2(\nu_0)\to L^2(\nu_1) \label{a23}$$ be the linear operator, defined on the dense set ${\mathcal{L}}$ by $$(Xf)(\mu) = \sum_i (\mu-z_i)^{-1} D_0(z_i)\chi_i, \quad f(\mu)=\sum_i (\mu-z_i)^{-1}\chi_i. \label{a24}$$ It turns out (see Lemma \[lma.b2\]) that $X$ is a unitary operator between the spaces . 
Moreover, this is true for *any* operators $U_0$, $U_1$, $G$, $\alpha$, related by –; assumption is not relevant here. This fact is part of de Branges’ construction [@deB]. Bearing in mind the embedding , we see that $X$ is a contraction as a map from $L^2(\nu_0)$ to $L^2(\nu_1^{\text{\rm (ac)}})$. Further, by the spectral theorem for the unitary operator $U_0$, we have $$\psi_0(z) = i\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}d\nu_0(e^{it}) = i\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z} w_0(e^{it})\frac{dt}{2\pi}. \label{a24a}$$ Thus, $\psi_0$ is the Cauchy transform of $w_0$. Using assumption on the weight $w_0$ and the *UMD property* (see e.g. [@deF]) of the space ${\mathbf{S}}_p$, $1<p<\infty$, we check (in Lemma \[lma.c1\]) that the limits $$D_0^\pm(e^{i\theta}) = \lim_{r\to1\pm0} D_0(re^{i\theta}), \quad D_1^\pm(e^{i\theta}) = \lim_{r\to1\pm0} D_1(re^{i\theta}) \label{a25}$$ exist for a.e.  $\theta\in(0,2\pi)$ in the operator norm. For $p=1$, this was proven in [@deB]; for $p>1$, this fact is borrowed from our related work [@PuV]. Again following de Branges, we consider the operators $$Y_\pm: f(\mu)\mapsto D_0^{\pm}(\mu)f(\mu), \quad \mu\in{{\mathbb T}}, \label{a26}$$ defined initially on the set ${\mathcal{L}}$, and show that $Y_\pm$ extend as isometric operators $$Y_\pm: L^2(\nu_0)\to L^2(\nu_1^{\text{\rm (ac)}}).$$ Finally, a simple calculation (see Section \[sec.d\]) shows that $P_\pm^{(w_0)}$, $X$, $Y_\pm$ are related by . We note that $Y_\pm$ are unitarily equivalent to the wave operators $W_\pm(U_1,U_0)$ (see [@Ku]), although we will not need this fact.
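The two identities for $\psi_0$, $\psi_1$ are purely algebraic consequences of the relations between $U_0$, $U_1$, $G$, $\alpha$ (the explicit construction appears in Section \[sec.b\]), so they can be sanity-checked in a finite-dimensional toy model. The following numerical sketch is an illustration only (dimensions, seed and the point $z$ are arbitrary choices): we build a random unitary $U_0$ and a strict contraction $G$, set $e^{i\Theta/2}=\sqrt{I-(G^*G)^2}+iG^*G$, $U_1=e^{i\Theta/2}U_0e^{i\Theta/2}$, $\alpha=\sqrt{I-(GG^*)^2}$, and verify $(\alpha+\psi_0(z))(\alpha-\psi_1(z))=(\alpha-\psi_1(z))(\alpha+\psi_0(z))=I$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3                       # toy dimensions: dim H = 6, dim K = 3

# Random unitary U0 on H and strict contraction G : H -> K.
U0, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
G = rng.normal(size=(k, n)) + 1j * rng.normal(size=(k, n))
G *= 0.9 / np.linalg.norm(G, 2)   # operator norm 0.9 < 1

def herm_fun(A, f):
    """Apply a scalar function to a Hermitian matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(f(w)) @ V.conj().T

# G*G = sin(Theta/2), hence exp(i Theta/2) = sqrt(I - (G*G)^2) + i G*G.
E = herm_fun(G.conj().T @ G, lambda x: np.sqrt(1 - x**2) + 1j * x)
U1 = E @ U0 @ E                                      # unitary on H
alpha = herm_fun(G @ G.conj().T, lambda x: np.sqrt(1 - x**2))

def psi(U, z):
    """psi_j(z) = i G (U_j + z)(U_j - z)^{-1} G*, an operator on K."""
    I = np.eye(U.shape[0])
    return 1j * G @ (U + z * I) @ np.linalg.inv(U - z * I) @ G.conj().T

z = 0.4 + 0.3j                                       # any point with |z| != 1
Ik = np.eye(k)
err = max(np.linalg.norm((alpha + psi(U0, z)) @ (alpha - psi(U1, z)) - Ik),
          np.linalg.norm((alpha - psi(U1, z)) @ (alpha + psi(U0, z)) - Ik))
print(err)   # machine-precision small
```

The residual is at machine precision, reflecting the fact that no analytic input (boundedness of the weight, existence of boundary values) enters these identities.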
Identities , and the map $X$ {#sec.b} ============================ The construction of $G$, $U_0$, $U_1$, $\alpha$ {#sec.b1} ------------------------------------------------ Let ${{\mathcal H}}$ be the Hilbert space of all Borel measurable ${{\mathcal K}}$-valued functions on ${{\mathbb T}}$ with the norm $${\lVertf\rVert}_{{\mathcal H}}^2 = \int_0^{2\pi} {\lVertf(e^{it})\rVert}^2 \frac{dt}{2\pi}.$$ Let $U_0$ be the operator of multiplication by $e^{it}$ in ${{\mathcal H}}$. Let $G:{{\mathcal H}}\to{{\mathcal K}}$ be defined by $$Gf=\int_0^{2\pi} w_0(e^{it})^{1/2} f(e^{it})\frac{dt}{2\pi}.$$ Then our assumption implies that $G$ is a contraction: $$\begin{gathered} {\lVertGf\rVert} \leq \left(\int_0^{2\pi}{\lVertw_0(e^{it})^{1/2}\rVert}_{\mathcal{B}}^2\frac{dt}{2\pi}\right)^{1/2} \left(\int_0^{2\pi}{\lVertf(e^{it})\rVert}^2\frac{dt}{2\pi}\right)^{1/2} \\ = \left(\int_0^{2\pi}{\lVertw_0(e^{it})\rVert}_{\mathcal{B}}\frac{dt}{2\pi}\right)^{1/2} {\lVertf\rVert}_{{\mathcal H}}\leq {\lVertf\rVert}_{{\mathcal H}}.\end{gathered}$$ It is clear that setting $\nu_0(\delta)=GE_{U_0}(\delta)G^*$ (see ) yields . Next, let $$\Theta =2\sin^{-1}(G^*G);$$ thus, $\Theta$ is a bounded self-adjoint operator in ${{\mathcal H}}$ with $\sigma(\Theta)\subset [0,\pi)$ and $$G^*G=\sin(\tfrac12 \Theta). \label{b2}$$ Set $$U_1=\exp(\tfrac{i}2 \Theta)U_0\exp(\tfrac{i}2 \Theta) \label{b3}$$ and let $\alpha$ be defined by . \[lma.b1\] Let $U_0$, $U_1$, $G$, $\alpha$ be as described above. Then identities , hold true. The measure $\nu_1$, defined by , satisfies $\nu_1({{\mathbb T}})=\nu_0({{\mathbb T}})$ and $${\lVert\nu_1({{\mathbb T}})\rVert}\leq 1. \label{b3a}$$ Denote $$\beta=\sqrt{I-(G^*G)^2};$$ clearly, we have $$\alpha G=G\beta. 
\label{b5}$$ Comparing and the definition of $\beta$, we find that $$\beta=\cos(\tfrac12\Theta).$$ Using this and a little algebra, we obtain $$U_1G^*G+G^*GU_0+i(U_1\beta-\beta U_0)=0.$$ From here by straightforward manipulation we obtain the identity $$(U_1-z)G^*G(U_0-z) + i\bigl((U_1+z)\beta(U_0-z)-(U_1-z)\beta(U_0+z)\bigr) - (U_1+z)G^*G(U_0+z)=0$$ for any $z\in{{\mathbb C}}$. Taking ${\lvertz\rvert}\not=1$ and multiplying by $(U_1-z)^{-1}$ on the left and by $(U_0-z)^{-1}$ on the right, we get $$G^*G + i\left( \frac{U_1+z}{U_1-z}\beta-\beta\frac{U_0+z}{U_0-z} \right) - \frac{U_1+z}{U_1-z}G^*G\frac{U_0+z}{U_0-z} =0.$$ Multiplying this by $G$ on the left and by $G^*$ on the right and using that (by ) $$(GG^*)^2=I-\alpha^2,$$ we obtain $$-\alpha^2 + iG\frac{U_1+z}{U_1-z}\beta G^* - i G\beta\frac{U_0+z}{U_0-z} G^* - G\frac{U_1+z}{U_1-z}G^* G\frac{U_0+z}{U_0-z} G^* = -I.$$ Finally, using , this transforms into . The relation is obtained by taking adjoints in and changing $z$ to $\overline{z}^{-1}$. By , , we have $\nu_0({{\mathbb T}})=\nu_1({{\mathbb T}})=GG^*$. The estimate follows from the inequality ${\lVertG\rVert}\leq 1$. In fact, the construction of [@deB; @Ku] allows for a whole family of possible choices for operators $G$, $U_0$, $U_1$, suitable for our argument. For simplicity, we have chosen only one representative of this family. In order to clarify the ideas behind Lemma \[lma.b1\], let us sketch the analogous argument for the case of the weights $w_0$, $w_1$ on the real line. In this case the construction naturally leads to self-adjoint (rather than unitary) operators and the algebra is somewhat more transparent. 
Let a non-negative weight $w_0:{{\mathbb R}}\to{\mathcal{B}}({{\mathcal K}})$ satisfy $$\int_{{\mathbb R}}{\lVertw_0(t)\rVert}_{\mathcal{B}}dt<\infty.$$ Let ${{\mathcal H}}_{{\mathbb R}}$ be the $L^2$ space of ${{\mathcal K}}$-valued functions on ${{\mathbb R}}$ with the norm $${\lVertf\rVert}_{{{\mathcal H}}_{{\mathbb R}}}^2 = \int_{{\mathbb R}}{\lVertf(t)\rVert}^2dt.$$ Let $A_0$ be the operator of multiplication by the independent variable $t$ in ${{\mathcal H}}_{{\mathbb R}}$ and let $G_{{\mathbb R}}:{{\mathcal H}}_{{\mathbb R}}\to{{\mathcal K}}$ be given by $$G_{{\mathbb R}}f=\int_{{\mathbb R}}w_0(t)^{1/2} f(t)dt.$$ We set $A_1=A_0+G_{{\mathbb R}}^*G_{{\mathbb R}}$. Then from the standard resolvent identity we get $$\begin{gathered} (I+G_{{\mathbb R}}(A_0-z)^{-1}G_{{\mathbb R}}^*) (I-G_{{\mathbb R}}(A_1-z)^{-1}G_{{\mathbb R}}^*) \\ = (I-G_{{\mathbb R}}(A_1-z)^{-1}G_{{\mathbb R}}^*) (I+G_{{\mathbb R}}(A_0-z)^{-1}G_{{\mathbb R}}^*) =I;\end{gathered}$$ this is the analogue of , . One sets $$\nu_j^{{\mathbb R}}(\delta)=G_{{\mathbb R}}E_{A_j}(\delta)G_{{\mathbb R}}^*, \quad j=0,1, \quad \delta\subset {{\mathbb R}},$$ and the rest of the construction is very similar to the case of measures on ${{\mathbb T}}$. The map $X$ {#sec.b2} ----------- Let the map $X$ be defined by , . \[lma.b2\] The map $X$ is unitary between the spaces $L^2(\nu_0)$ and $L^2(\nu_1)$. For $j=0,1$, the functions $\psi_j$ (see ) can be expressed as $$\psi_j(z)=i\int_0^{2\pi}\frac{e^{it}+z}{e^{it}-z}d\nu_j(e^{it}). \label{b6}$$ We note two identities for $\psi_j$: $$\begin{gathered} \frac{\psi_j(z_1)-\psi_j(z_2)^*}{z_1-\overline{z_2}^{-1}} = 2i\int_0^{2\pi} \frac{e^{it}}{(e^{it}-z_1)(e^{it}-\overline{z_2}^{-1})} d\nu_j(e^{it}), \label{b7} \\ \psi_j(z)^*=\psi_j(\overline{z}^{-1}).
\label{b8}\end{gathered}$$ Next, using , , we have for ${\lvertz_{1,2}\rvert}\not=1$: $$\begin{gathered} \psi_0(z_1)-\psi_0(z_2)^* = D_0(z_1)-D_0(z_2)^* \\ = -D_0(z_2)^*D_1(z_2)^*D_0(z_1) + D_0(z_2)^*D_1(z_1)D_0(z_1) \\ = D_0(z_2)^*(-\psi_1(z_2)^*+\psi_1(z_2))D_0(z_1).\end{gathered}$$ Combining this with , , we get $$\int_0^{2\pi} \frac{d\nu_0(e^{it})}{(e^{-it}-\overline{z_2})(e^{it}-z_1)} = D_0(z_2)^* \int_0^{2\pi} \frac{d\nu_1(e^{it})}{(e^{-it}-\overline{z_2})(e^{it}-z_1)} D_0(z_1). \label{b9}$$ Now let $$f_1(\mu)=(\mu-z_1)^{-1}\chi_1, \quad f_2(\mu)=(\mu-z_2)^{-1}\chi_2, \quad \mu\in{{\mathbb T}}, \label{b10}$$ where ${\lvertz_{1,2}\rvert}\not=1$ and $\chi_{1,2}\in{{\mathcal K}}$. Then from we get $$(f_1,f_2)_{L^2(\nu_0)} = (Xf_1,Xf_2)_{L^2(\nu_1)}.$$ This extends to all $f_1,f_2\in{\mathcal{L}}$. It follows that $X$ is an isometry. By considering an operator $X_1$ defined in a similar way with $D_1$ instead of $D_0$, and using , we obtain $XX_1=-I$, hence $X$ is a surjection. Thus, $X$ is a unitary operator. The boundary values of $D_0$ and $D_1$ {#sec.c} ====================================== Existence of boundary values of $D_0$ and $D_1$ {#sec.c1} ----------------------------------------------- \[lma.c1\] The limits $D_0^\pm(e^{i\theta})$, $D_1^\pm(e^{i\theta})$ (see ) exist for a.e.  $\theta\in(0,2\pi)$ in the operator norm. For $p=1$, this was proven in [@deB]. 1\. First we consider the limits $D_0^\pm$. We have $$D_0(z)=\alpha+\psi_0(z),$$ where $\psi_0(z)$ is given by . Thus, it suffices to consider the limits of $\psi_0$. By , it suffices to consider the limits as $z$ approaches the unit circle from inside the unit disk. Without loss of generality assume $p>1$ in . In fact, we will prove the existence of the non-tangential limits $$\lim_{\genfrac{}{}{0pt}{}{z\to e^{i\theta}}{z\in S_\theta}} \psi_0(z)$$ in the norm of ${\mathbf{S}}_p$. Here $S_\theta$ is the appropriate sector of opening $\pi/2$ with the vertex at $e^{i\theta}$ (see e.g. 
[@Koosis2 Section VIII:C3]). The argument below is presented in more detail in our related work [@PuV]. The function $\psi_0$ is the Cauchy transform of the weight function $w_0$ (see ). Consider the non-tangential maximal function $$(T w_0)(e^{i\theta}) = \sup \left\{{\left\lVert\psi_0(z)\right\rVert}_p: z\in S_\theta\right\}.$$ The key fact is that for $1<p<\infty$, the Banach space ${\mathbf{S}}_p$ possesses the UMD property, see [@deF]; that is, the Hilbert transform and many other integral transforms are bounded as operators in $L^2$ spaces of ${\mathbf{S}}_p$-valued functions. Using this, one can prove that the (non-linear) operator $T$ is of the weak 1-1 type, i.e. $T w_0$ belongs to the weak $L^{1,\infty}({{\mathbb T}})$ class. Next, using this fact and repeating the classical construction of Privalov’s uniqueness theorem (see e.g. [@Koosis2 Section III:D]), for any ${\varepsilon}>0$ one constructs a simply connected domain ${{\mathcal D}}$ in the unit disk such that ${\lVert\psi_0\rVert}_p$ is bounded in ${{\mathcal D}}$ and the boundary of ${{\mathcal D}}$ contains the unit circle ${{\mathbb T}}$ up to a set of measure ${\varepsilon}$. Let $\varphi$ be a conformal map of the unit disk onto ${{\mathcal D}}$. Then $F(z)=\psi_0(\varphi(z))$ is a bounded ${\mathbf{S}}_p$-valued analytic function on the unit disk. By standard results on Banach space valued analytic functions (see e.g. [@Bu]), $F(z)$ attains non-tangential boundary values in ${\mathbf{S}}_p$ norm a.e.  on the unit circle. It follows that the function $\psi_0$ attains non-tangential boundary values in ${\mathbf{S}}_p$ norm on the unit circle minus a set of measure ${\varepsilon}$. Sending ${\varepsilon}\to0$, one obtains the desired result. 2\. Let us consider the limits of $D_1$. Since $D_1(z)=-D_0(z)^{-1}$, it suffices to prove that the limiting operators $D_0^\pm(e^{i\theta})$ have bounded inverses for a.e.  $\theta$. We do this by employing an argument from [@Yafaev]. 
We have $$D_0(z)=D_0(0)\left(I+D_0(0)^{-1}(D_0(z)-D_0(0))\right),$$ and therefore it suffices to check that the operators $$I+D_0(0)^{-1}(D_0^\pm(e^{i\theta})-D_0(0)) \label{c1}$$ have a bounded inverse for a.e.  $\theta$. By , we have $\psi_0(z)\in{\mathbf{S}}_p$ for all ${\lvertz\rvert}\not=1$. Let $q\geq p$ be any integer; consider the regularised determinant $$d(z)=\operatorname{Det}_q(I+D_0(0)^{-1}(\psi_0(z)-\psi_0(0))).$$ The functional $A\mapsto\operatorname{Det}_q(I+A)$ is continuous (in fact, analytic) on ${\mathbf{S}}_q$. Thus, $d(z)$ is analytic in $z$ and by the previous step of the proof, $d(z)$ has non-tangential boundary values a.e.  on the unit circle. Applying Privalov’s uniqueness theorem, we obtain that these boundary values are non-zero a.e.  on the unit circle. Now since $\operatorname{Det}_q(I+A)\not=0$ if and only if $I+A$ has a bounded inverse, we conclude that the operators have bounded inverses for a.e.  $\theta$. The a.c. part of $\nu_1$ {#sec.c2} ------------------------ Taking $z_1=z_2=re^{i\theta}$ in , one obtains $$\psi_j(re^{i\theta})-\psi_j(re^{i\theta})^* = 2i\int_0^{2\pi} P_r(\theta-t)d\nu_j(e^{it}), \label{c2}$$ where $P_r$ is the Poisson kernel on ${{\mathbb T}}$. From the existence of the boundary values of $\psi_j$ on ${{\mathbb T}}$ (see Lemma \[lma.c1\]) it follows that the r.h.s. of attains a limit (in the operator norm) as $r\to1$ for a.e.  $\theta\in(0,2\pi)$. Of course, by the definition of $\nu_0$ we have $$w_0(e^{i\theta}) = \lim_{r\to1} \int_0^{2\pi} P_r(\theta-t)d\nu_0(e^{it}) \label{c7}$$ for a.e.  $\theta$. Similarly, we *define* the weight function $w_1$ by $$w_1(e^{i\theta})= \lim_{r\to1} \int_0^{2\pi} P_r(\theta-t)d\nu_1(e^{it}) \label{c3}$$ for a.e.  $\theta$. In Lemmas \[lma.c2\] and \[lma.c3\], we follow de Branges’ work [@deB]. \[lma.c2\] The a.c.
part of the measure $\nu_1$ is given by $$d\nu_1^{\text{\rm (ac)}}(e^{i\theta})=w_1(e^{i\theta})\frac{d\theta}{2\pi}, \quad \text{ a.e.\ $\theta\in(0,2\pi)$.} \label{c4}$$ Of course, in the finite-dimensional case $\dim{{\mathcal K}}<\infty$, formula follows directly from ; the point here is to consider the general case. Let $\chi_1,\chi_2\in{{\mathcal K}}$; consider the scalar (complex-valued) measure $(\nu_1(\cdot)\chi_1,\chi_2)$. If $\nu_1^{\text{\rm (ac)}}$ and $\nu_1^{\text{\rm (sing)}}$ are the a.c. and the singular parts of $\nu_1$ with respect to the Lebesgue measure on ${{\mathbb T}}$, then $$(\nu_1(\cdot)\chi_1,\chi_2) = (\nu_1^{\text{\rm (ac)}}(\cdot)\chi_1,\chi_2) + (\nu_1^{\text{\rm (sing)}}(\cdot)\chi_1,\chi_2)$$ gives the unique decomposition of the scalar measure $(\nu_1(\cdot)\chi_1,\chi_2)$ into the a.c. and singular parts. By the scalar theory, we have $$\lim_{r\to1}\int_0^{2\pi} P_r(\theta-t)d(\nu_1^{\text{\rm (sing)}}(e^{it})\chi_1,\chi_2)=0$$ for a.e.  $\theta$. Thus, using , we obtain $$(w_1(e^{i\theta})\chi_1,\chi_2) = \lim_{r\to1} \int_0^{2\pi} P_r(\theta-t)d(\nu_1^{\text{\rm (ac)}}(e^{it})\chi_1,\chi_2). \label{c5}$$ Now take $f_1$, $f_2$ as in ; multiplying by $(e^{i\theta}-z_1)^{-1}(e^{-i\theta}-\overline{z_2})^{-1}$ and integrating, we get $$\int_0^{2\pi} (w_1(e^{i\theta})f_1(e^{i\theta}), f_2(e^{i\theta}))\frac{d\theta}{2\pi} = \int_0^{2\pi} d(\nu_1^{\text{\rm (ac)}}(e^{i\theta})f_1(e^{i\theta}),f_2(e^{i\theta})).$$ By linearity, this extends to all $f_1,f_2\in{\mathcal{L}}$. This yields . The operators $Y_\pm$ {#sec.c3} --------------------- Next, we consider the operators $Y_\pm$ of multiplication by $D_0^\pm$, see . \[lma.c3\] The operators $Y_\pm$ are unitary maps from $L^2(\nu_0)$ to $L^2(\nu_1^{\text{\rm (ac)}})$.
Taking $z_1=z_2=re^{i\theta}$ in , we obtain $$\int_0^{2\pi} P_r(\theta-t)d\nu_0(e^{it}) = D_0(re^{i\theta})^* \int_0^{2\pi} P_r(\theta-t)d\nu_1(e^{it}) \ D_0(re^{i\theta}).$$ Taking $r\to1\pm0$ and using Lemma \[lma.c2\], we get $$w_0(\mu) = D_0^\pm(\mu)^*w_1(\mu)D_0^\pm(\mu), \quad \text{ a.e.\ $\mu\in{{\mathbb T}}$.}$$ This shows that $Y_\pm$ are isometries. Considering the operators of multiplication by the boundary values of $D_1$ and using the identity , it is easy to prove that $Y_\pm$ are surjections, so they are unitary operators. The proof of Theorem \[th.a1\] {#sec.d} ============================== The weight function $w_1$ has been defined by . By construction, it is non-negative. It is Borel measurable as a pointwise norm limit of continuous weight functions. Let us prove that the limits in exist in ${{\mathcal K}}$ and $$(P^{(w_0)}_\pm f)(e^{i\theta}) = \pm\frac{i}{2}((Xf)(e^{i\theta})-(Y_\pm f)(e^{i\theta})) \label{d1}$$ for a.e. $\theta$. Take $f(\mu)=(\mu-z)^{-1}\chi$, $\chi\in{{\mathcal K}}$, ${\lvertz\rvert}\not=1$. We have $$D_0(z)=\alpha+i\int_0^{2\pi}d\nu_0(e^{it})\frac{e^{it}+z}{e^{it}-z} = \alpha - i\int_0^{2\pi}d\nu_0(e^{it}) + 2i \int_0^{2\pi}d\nu_0(e^{it})\frac{e^{it}}{e^{it}-z},$$ and therefore, by the definition of $X$, $$(Xf)(e^{i\theta}) = \left(\alpha - i\int_0^{2\pi}d\nu_0(e^{it})\right)f(e^{i\theta}) + 2i \int_0^{2\pi}d\nu_0(e^{it})\frac{e^{it}}{(e^{i\theta}-z)(e^{it}-z)}\chi.$$ For the second term in the above sum, we have $$\begin{gathered} \int_0^{2\pi} d\nu_0(e^{it})\frac{e^{it}}{(e^{i\theta}-z)(e^{it}-z)}\chi = \int_0^{2\pi} d\nu_0(e^{it})\frac{f(e^{i\theta})-f(e^{it})}{e^{it}-e^{i\theta}}e^{it} \\ = \lim_{r\to1} \left\{ \int_0^{2\pi} d\nu_0(e^{it})\frac{f(e^{i\theta})}{1-re^{i(\theta-t)}} - \int_0^{2\pi} d\nu_0(e^{it})\frac{f(e^{it})}{1-re^{i(\theta-t)}} \right\},\end{gathered}$$ where the limits exist in the norm of ${{\mathcal K}}$. 
Putting this together, after a little algebra we get $$(Xf)(e^{i\theta}) = \lim_{r\to1} \left\{D_0(re^{i\theta})f(e^{i\theta}) -2i \int_0^{2\pi} d\nu_0(e^{it})\frac{f(e^{it})}{1-re^{i(\theta-t)}}\right\}. \label{d3}$$ By Lemma \[lma.c1\] the limits $$\lim_{r\to1\pm0} D_0(re^{i\theta})f(e^{i\theta})$$ exist in the norm of ${{\mathcal K}}$. Thus, the limits of the integral in also exist. Recalling the definition of $P_\pm^{(w_0)}$, we obtain . By and , we have $$\begin{gathered} w_0(e^{i\theta}) = \lim_{r\to1}\int_0^{2\pi} P_r(\theta-t)d\nu_0(e^{it}) \\ = \frac1{2i}\lim_{r\to1} (\psi_0(re^{i\theta})-\psi_0(\tfrac1r e^{i\theta})) = \frac1{2i}\lim_{r\to1} (D_0(re^{i\theta})-D_0(\tfrac1r e^{i\theta})).\end{gathered}$$ Thus, if we denote by $Y_0$ the operator of multiplication by $w_0(e^{i\theta})$, acting from $L^2(\nu_0)$ to $L^2(\nu_1^{\text{\rm (ac)}})$, we obtain $$Y_0=\frac1{2i}(Y_+-Y_-),$$ and therefore ${\lVertY_0\rVert}\leq 1$. This yields $$\int_0^{2\pi} (w_1(e^{i\theta})w_0(e^{i\theta})f(e^{i\theta}), w_0(e^{i\theta})f(e^{i\theta}))\frac{d\theta}{2\pi} \leq \int_0^{2\pi} (w_0(e^{i\theta})f(e^{i\theta}),f(e^{i\theta}))\frac{d\theta}{2\pi},$$ which implies . Suppose $p=1$. Then $$\operatorname{Tr}(GG^*) = \int_0^{2\pi} \operatorname{Tr}( w_0(e^{i\theta}))\frac{d\theta}{2\pi} \leq 1,$$ hence $G$ is Hilbert-Schmidt. Then $$\int_0^{2\pi} {\lVertw_1(e^{i\theta})\rVert}_1\frac{d\theta}{2\pi} = \int_0^{2\pi} \operatorname{Tr}(w_1(e^{i\theta}))\frac{d\theta}{2\pi} = \operatorname{Tr}(\nu_1^{\text{\rm (ac)}}({{\mathbb T}})) \leq \operatorname{Tr}(\nu_1({{\mathbb T}})) = \operatorname{Tr}(GG^*)\leq 1,$$ i.e. ${\lVertw_1(\cdot)\rVert}_1\in L^1({{\mathbb T}})$. *Perturbations of self-adjoint transformations,* American Journal of Mathematics, **84**, no. 4 (1962), 543–560. *Hardy spaces of vector-valued functions.* (Russian) Investigations on linear operators and theory of functions, VII. Zap. Naučn. Sem. Leningrad. Otdel. Mat. Inst. Steklov.
(LOMI) **65** (1976), 5–16. *Moyennes quadratiques pondérées de fonctions périodiques et de leurs conjuguées harmoniques,* C. R. Acad. Sci. Paris, Ser. A, **291** (1980), 255–257. *Introduction to $H_p$ spaces.* Second edition. Cambridge University Press, Cambridge, 1998. *On a stationary approach to scattering problem,* Bull. Amer. Math. Soc. **70** (1964), 556–560. *Two Weight Inequality for the Hilbert Transform: A Real Variable Characterization, II*, preprint, arXiv:1301.4663v3. *Two weight inequality for the Hilbert transform: a real variable characterization*, preprint, arXiv:1201.4319v6. *Weighted norm inequalities for the Hardy maximal function,* Trans. AMS **165** (1972), 207–226. *Two weight estimate for the Hilbert transform and Corona decomposition for non-doubling measures*, preprint, arXiv:1003.1596. *Asymptotics of orthogonal polynomials via the Koosis’ theorem,* Math. Res. Lett. **13**, no. 5–6 (2006), 975–983. *Scattering theory and Banach space valued singular integrals,* preprint, arXiv:1211.6694. *Martingale and integral transforms of Banach space valued functions.* Probability and Banach spaces (Zaragoza, 1985), 195–222, Lecture Notes in Math., 1221, Springer, Berlin, 1986. *Mathematical scattering theory. General theory.* American Mathematical Society, Providence, RI, 1992. *Calderón–Zygmund capacities and operators on non-homogeneous spaces*, CBMS Series in Math., v. 100, Amer. Math. Soc., 2003, pp. 1–165.
--- abstract: 'We study proximity induced triplet superconductivity in a spin-orbit-coupled system, and show that the **d** vector of the induced triplet superconductivity undergoes precession that can be controlled by varying the relative strengths of Rashba and Dresselhaus spin-orbit couplings. In particular, a long-range spin-triplet helix is predicted when these two spin-orbit couplings have equal strengths. We also study the Josephson junction geometry and show that a transition between 0 and $\pi$ junctions can be induced by controlling the spin-orbit coupling with a gate voltage. An experimental setup is proposed to verify these effects. Conversely, the observation of these effects can serve as a direct confirmation of triplet superconductivity.' author: - 'Xin Liu, J. K. Jain, and Chao-Xing Liu' title: 'Long-Range Spin-Triplet Helix in Proximity Induced Superconductivity in Spin-Orbit-Coupled Systems' --- [*Introduction -*]{} Crucial to the success of spintronics [@Zutic:2004_a] are injection of spin, its long decay length and its manipulation. The study of spin transport in a superconductor has given rise to the subfield known as superconducting spintronics [@Ohnishi2010; @Quay2013; @Wakamura2014]. One may wonder if the spin-1 of Cooper pairs in a triplet superconductor can play a similar role as the electron spin in spintronics. The observation of surprisingly long-range proximity effect in a superconductor (SC)/ferromagnet (FM) junction [@Keizer2006a; @Wang2010a; @Robinson2010a; @Khaire2010a; @Klose2012a; @Robinson2012a; @Leksin2012] has been interpreted in terms of an injection into the FM of triplet Cooper pairs with a long decay length [@Bergeret2001a; @Eschrig2008a; @Bergeret2005a; @Buzdin2005a; @Takei2012a; @Bergeret2013a]. However, it is unclear how to manipulate the long-range part of the induced triplet pair. 
We propose here a geometry in which the triplet pairs are injected into a material with spin-orbit coupling (SOC) and show, theoretically, that they can be manipulated by varying the relative strengths of the Rashba and Dresselhaus SOCs. In particular, we predict a long-range spin-triplet helix, which can be verified by observing a $0-\pi$ transition in Josephson junctions as a function of the SOC strengths. We show that the effect is robust against any spin-independent scattering. The proximity effect in SOC materials has been considered previously [@Yang2009; @Yang2010], but only with Rashba SOC, which does not produce the long-range effects discussed below. Before presenting the detailed microscopic theory, we first illustrate the underlying physics, shown in Fig. \[pairing\]. In the absence of magnetization and SOC, four kinds of Cooper pairs (singlet $|\uparrow \downarrow \rangle-|\downarrow\uparrow \rangle$ and triplet pairs $|\uparrow \downarrow \rangle+|\downarrow\uparrow \rangle$, $|\uparrow\uparrow\rangle \pm |\downarrow\downarrow \rangle$) are allowed with a zero center-of-mass momentum. The magnetization breaks the degeneracy between $|k,\uparrow\rangle$ and $|-k,\downarrow\rangle$. This leads to a spatially modulated oscillation $e^{-iqx}|\uparrow\downarrow\rangle\pm e^{iqx}|\downarrow\uparrow\rangle$ [@Demler1997; @Eschrig2011] for the Cooper pairs with opposite spins but leaves the pairs $|\uparrow\uparrow\rangle \pm |\downarrow\downarrow \rangle$ unchanged, as shown in Fig. \[pairing\](b). (Here we assume that the system is uniform along the y and z directions so that the center-of-mass momentum of pairs is always zero along these directions.) On the contrary, SOC breaks the degeneracy between $|k,\uparrow(\downarrow)\rangle$ and $|-k,\uparrow(\downarrow)\rangle$, as shown in Fig. \[pairing\](c,d).
Thus, the Cooper pairs with parallel spins will oscillate spatially as $e^{-iqx}|\uparrow\uparrow\rangle+e^{iqx}|\downarrow\downarrow\rangle$, while the pairs $|\uparrow\downarrow\rangle \pm |\downarrow\uparrow\rangle$ remain unchanged. Here we emphasize that the spin quantization axis aligns along different directions for different momenta, determined by the form of SOC in Fig. \[pairing\](c). The spatially oscillatory pairs will decay after taking into account all possible wave vectors $q$ [@Buzdin2005a] in the case of Fig. \[pairing\](b). Similarly, the triplet pairs $|\uparrow\uparrow\rangle$ and $|\downarrow\downarrow\rangle$ in Fig. \[pairing\](c) will also generally decay rapidly in the SOC region. Therefore, in the presence of magnetization and generic SOC, only the pairs with zero center-of-mass momenta exhibit a long-range proximity effect. However, there is an exception for a system with equal strengths of Rashba and Dresselhaus SOCs. In this case, the Fermi surfaces of the two spin bands are shifted in opposite directions by $Q=4m\alpha$, as shown in Fig. \[pairing\](d), where $m$ is the electron effective mass and $\alpha$ is the Rashba SOC strength. Thus, all of the spatially oscillatory pairs have the same wave vector $Q$ and will not decay even in the presence of spin-independent scattering. We show below that these oscillatory triplet pairs result in a long-range helical mode, dubbed the “long-range spin-triplet helix”, in analogy to the persistent spin helix observed in two-dimensional electron gases (2DEGs) [@Bernevig:2006_a; @Stanescu:2007_a; @LiuXin:2012_a; @Weber:2007_a; @Koralek:2009_a].
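The rigid Fermi-surface shift and the resulting absence of decay can be made concrete with a one-dimensional numerical sketch (illustration only, with $\hbar=1$ and arbitrary toy values of $m$, $\alpha$, $\mu$; a spin-diagonal dispersion $E_\pm(k)=k^2/2m\pm2\alpha k$ is assumed, which reproduces the quoted shift $Q=4m\alpha$ — the prefactor of the linear term depends on the SOC convention):

```python
import numpy as np

m, alpha, mu = 0.5, 0.3, 2.0   # toy values, hbar = 1

# Spin-diagonal bands for equal Rashba/Dresselhaus strengths (1D cut):
# E_pm(k) = k^2/2m +/- 2*alpha*k = (k +/- 2m*alpha)^2/2m - 2m*alpha^2,
# i.e. two copies of the same parabola displaced by -/+ 2m*alpha.
kf_plus = np.sort(np.roots([1 / (2 * m), +2 * alpha, -mu]).real)
kf_minus = np.sort(np.roots([1 / (2 * m), -2 * alpha, -mu]).real)
Q = 4 * m * alpha
print(kf_minus - kf_plus)      # both Fermi points are shifted by exactly Q

# Every oscillatory pair then carries the same wave vector Q, so the averaged
# amplitude |<e^{iqx}>| does not decay; a spread of q's would suppress it.
x = 50.0
amp_single = abs(np.mean(np.exp(1j * np.full(2001, Q) * x)))
amp_spread = abs(np.mean(np.exp(1j * np.linspace(Q - 0.2, Q + 0.2, 2001) * x)))
print(amp_single, amp_spread)  # 1.0 versus a strongly suppressed value
```

The second pair of numbers illustrates the contrast between the equal-strength case (a single $Q$, no decay) and the generic case of Fig. \[pairing\](b,c), where the average over mismatched wave vectors washes the oscillatory pairs out.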
![Energy dispersion and Fermi surfaces are shown for (a) normal metals, (b) ferromagnets, (c) a 2DEG with Rashba SOC and (d) a 2DEG with equal strengths of Rashba and Dresselhaus SOCs. The possible forms of spin states of Cooper pairs, including singlet and triplet pairs, are also illustrated in the figures. For $k_y\neq 0,k_x=0$, the gap between two spin bands in (c) is $|2\alpha k_y|$.[]{data-label="pairing"}](pair-1 "fig:"){width="0.75\columnwidth"}

[*Hamiltonian and pairing functions -*]{} We study the SC/normal-conductor structure whose Hamiltonian takes the form $$\begin{aligned} \hat{H}&=&\left(\begin{array}{cc}H_0& \hat{\Delta} \\ \hat{\Delta}^{\dagger} & -H_0^* \end{array}\right), \hat{\Delta}=\Delta(x) i\sigma_y, \nonumber \\ H_0&=&\left(\frac{\textbf p^2}{2m}-\mu \right)\sigma_0+\left(\textbf M(x) + \textbf h(x,\textbf k)\right) \cdot \bm \sigma,\end{aligned}$$ in the basis $[c_{\uparrow},c_{\downarrow},c^{\dag}_{\uparrow},c^{\dag}_{\downarrow}]^{\rm T}$, where $c_{\uparrow,\downarrow}$ and $c^{\dag}_{\uparrow,\downarrow}$ are electron annihilation and
creation operators for different spins, $m$ is the electron mass, $\mu$ is the chemical potential, $\hat{\Delta}$ is the spin-singlet s-wave superconducting gap, $\textbf{M}$ is the magnetization, $\textbf{h}$ is the effective magnetic field of SOC and $\bm \sigma$ denotes the spin operators. The gap strength $\Delta(x)$ is zero in the proximity region and has a constant value $\Delta$ in the superconducting region. The magnetization $\textbf M(x)$ and effective magnetic field of SOC $\textbf h(x,k)$ are only present in the normal-conductor and depend on the spatial coordinate $x$, as shown in Fig. \[SFSOC\] and Fig. \[model\](a,b). Cooper pairs in spin space can be described microscopically by a pairing function $f^R(E,\bm r)=(d_0\sigma_0+\textbf{d}\cdot \bm{\sigma})i\sigma_y$ [@Bergeret2001a; @Champel2008], which is the off-diagonal block of the retarded Green’s function $$\begin{aligned} \label{Pair-2}\left. G^{\rm R}(E,\textbf r, \textbf r')\right|_{\textbf r= \textbf r'}=\left(\begin{array}{cc}g^{\rm R}(E,\textbf r)&f^{\rm R}(E,\textbf r) \\ \overline{f}^{\rm R}(E,\textbf r)&\overline{g}^{\rm R}(E,\textbf r) \end{array}\right). \end{aligned}$$ Here $d_0$ and $\textbf{d}$ are the expectation values of the singlet and triplet pairs, respectively, $E$ is the energy, $\textbf{r}$ and $\textbf{r'}$ are the spatial coordinates; we have $f_{ij}^{\rm R}(E,\textbf r)=-(\overline{f}_{ij}^{\rm R}(-E,\textbf r))^{\dagger}$; and $g^{\rm R}$($\overline{g}^{\rm R}$) is the electron (hole) Green’s function. Both $f^R$ and $g^R$ are $2 \times 2$ matrices in spin space. The superconducting gap is related to the pairing function by the equality $\hat{\Delta}=(1/2\pi)\int dE \lambda f_E {\rm Im} f^{\rm R}$ where $\lambda$ is the attractive interaction strength and $f_E$ is the Fermi distribution. In the proximity region, the superconducting gap is zero because $\lambda=0$, but the pairing function $f^{\rm R}$ can be nonzero.
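The decomposition $f^{\rm R}=(d_0\sigma_0+\textbf d\cdot\bm\sigma)i\sigma_y$ can be inverted with Pauli traces: right-multiply by $(i\sigma_y)^{-1}$ and project onto $\{\sigma_0,\bm\sigma\}$. A minimal sketch (the helper `d_components` is our own illustration, not from the paper):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def d_components(f):
    """Extract (d0, dx, dy, dz) from a pairing matrix f = (d0*s0 + d.sigma) @ (i sy)."""
    M = f @ np.linalg.inv(1j * sy)                      # recover d0*s0 + d.sigma
    return np.array([np.trace(s @ M) / 2 for s in (s0, sx, sy, sz)])

# Pure singlet f = i*sy: d0 = 1, d = 0.
print(d_components(1j * sy))
# Triplet with d along z, f = sz @ (i*sy), i.e. the S_z = 0 pair
# |up,down> + |down,up>: d0 = 0, dz = 1.
print(d_components(sz @ (1j * sy)))
```

The projection works because $\operatorname{Tr}(\sigma_i\sigma_j)=2\delta_{ij}$, so each component is read off independently.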
Below, we will calculate, in the presence of either magnetization or SOC, the spatial evolution of the pairing function $f^R(E,\textbf r)$ in the proximity region and show its consistency with the physical picture in Fig. \[pairing\].

![ A schematic plot of a SC/FM/SOC junction. Energy dispersions for different regions are shown above the junction structure. The colors in the dispersion relation represent different spin indices and the solid lines (dashed lines) denote electron (hole) bands. $k_{1(2),f}$ and $k_{3(4),f}$ are the Fermi momenta of different spin bands for SOC and FM regions, respectively. Different propagation or reflection processes are denoted by $T^{\rm in(out)}_{\rm fm(soc)}$ or $R_{ad}$.[]{data-label="SFSOC"}](SFSOC){width="0.8\columnwidth"}

[*$\textbf d$ vector in a one-dimensional (1D) SC/FM/SOC junction -*]{} In the ferromagnetic region ($x \in (-a,0)$) the SOC is zero, while in the SOC region ($x>0$) the magnetization is zero, as shown in Fig. \[SFSOC\]. In the SOC (FM) region, the Fermi wave vectors of the spin-split bands, $k_{1\rm f}, k_{2\rm f}$ ($k_{3\rm f}, k_{4\rm f}$) in Fig. \[SFSOC\], satisfy $$\begin{aligned} \label{split-1} k_{2\rm f}-k_{1\rm f}=\frac{2|\textbf{h}(k_{\rm f})|}{\hbar v_{\rm f}}, \ \ k_{4\rm f}-k_{3\rm f}=\frac{2M}{\hbar v_{\rm f}}, \end{aligned}$$ with $v_{\rm f}=\hbar k_{\rm f}/m=\sqrt{2\mu/m}$, assuming $h,M\ll \mu$. The Green’s function $G^{\rm R}$ can be related to the reflection matrix $R$ by the Fisher-Lee relation [@Fisher1981a], which has been applied to the superconducting proximity effect [@Lambert1993; @Wang2001].
For the 1D case, the Fisher-Lee relation in the basis $[c_{\uparrow},c_{\downarrow},c^{\dag}_{\uparrow},c^{\dag}_{\downarrow}]^{\rm T}$ takes the form [@Lambert1993; @Wang2001; @Datta1997] $$\begin{aligned} \label{F-L-1} R_{ij}(E,\textbf r)=-\delta_{ij}+i\hbar\sqrt{v_i v_j}G_{ij}^R(E,\textbf r), \end{aligned}$$ where $i,j=1,\dots,4$ and $v_{i(j)}$ is the velocity of the particle at energy $E$ in channel $i$ ($j$). Therefore, we will calculate the reflection matrix to extract the pairing functions in the 1D case. For simplicity, we consider the clean limit with perfect transmission at the FM/SOC boundary and ideal Andreev reflection at the FM/SC boundary. The reflection matrix $R(r)$ at a point $x=r$ in the SOC region can be decomposed into five matrices representing the five steps shown in Fig. \[SFSOC\]: an electron first propagates from $x=r$ to the interface at $x=0$ ($T^{\rm in}_{\rm soc}$); it then propagates to the interface at $x=-a$ ($T^{\rm in}_{\rm fm}$); ideal Andreev reflection occurs at the SC/FM interface at $x=-a$ ($R_{\rm ad}$), where the electron is completely reflected as a hole; the reflected hole propagates back to $x=0$ ($T^{\rm rf}_{\rm fm}$), and finally back to $x=r$ in the SOC region ($T^{\rm rf}_{\rm soc}$) [@SM_T]. Consequently, the reflection matrix $R(r)$ takes the form $$\begin{aligned} \label{S-1} R(r)=T^{\rm rf}_{\rm soc} T^{\rm rf}_{\rm fm}R_{\rm ad}T^{\rm in}_{\rm fm}T^{\rm in}_{\rm soc}.\end{aligned}$$ When there is no SOC (i.e., $r=0$), the reflection matrix at the FM/SOC boundary takes the form [@SM_T] $$\label{app:Andreev-2-0}R(r=0)=T^{\rm rf}_{\rm fm}R_{\rm ad}T^{\rm in}_{\rm fm}=-\hbar v_{\rm f} (d_0\sigma_0+\textbf{d} \cdot \bm{\sigma} )i\sigma_y\otimes \tau_y,$$ where $$\label{dR-1} (d_0,\textbf{d})=-i\frac{e^{-i\alpha}}{\hbar v_{\rm f}}\left(\cos\left(\frac{2Ma}{\hbar v_{\rm f}}\right),i\sin\left(\frac{2Ma}{\hbar v_{\rm f}}\right) {\textbf m}\right),$$ ${\textbf m}={\textbf M}/M$, $\alpha=\arccos(E/\Delta)$ and $\tau_z=+1(-1)$ for the electron (hole) in the Nambu space.
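The oscillation encoded in Eq. (\[dR-1\]) is easy to check numerically: as the FM length $a$ varies, weight moves between the singlet amplitude $d_0$ and the triplet amplitude along $\textbf m$, with the total pair weight conserved. A minimal sketch (our own, in units with $\hbar v_{\rm f}=1$ and arbitrary $M$, $E$):

```python
import numpy as np

hbar_vf = 1.0                      # units with hbar*v_f = 1
M, Delta, E = 0.8, 1.0, 0.3        # arbitrary illustrative values
alpha = np.arccos(E / Delta)       # real for |E| < Delta

a = np.linspace(0.0, 5.0, 501)     # FM length
d0 = -1j * np.exp(-1j*alpha) / hbar_vf * np.cos(2*M*a/hbar_vf)   # singlet amplitude
d  =       np.exp(-1j*alpha) / hbar_vf * np.sin(2*M*a/hbar_vf)   # triplet, along m

weight = np.abs(d0)**2 + np.abs(d)**2      # total pair weight, independent of a

a_pure = (np.pi/2) * hbar_vf / (2*M)       # length with 2*M*a/hbar_vf = pi/2
d0_pure = -1j*np.exp(-1j*alpha)*np.cos(2*M*a_pure/hbar_vf)  # singlet part vanishes
```

At $a=a_{\rm pure}$ the singlet amplitude vanishes, which is the pure-triplet injection condition used below.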
In the limit $M\ll \mu$, we take $\sqrt{v_i v_j}\approx v_{\rm f}$. Eqs. (\[app:Andreev-2-0\]) and (\[dR-1\]) show an oscillation between singlet and triplet pairs as a function of $a$, the distance from the SC/FM interface. Thus, by choosing an appropriate length $a$ of the FM region, one can use the SC/FM junction to inject singlet or triplet pairs into the SOC region. When there is no FM ($a=0$), the reflection matrix reduces to $R(r)=T^{\rm rf}_{\rm soc} R_{\rm ad}T^{\rm in}_{\rm soc}=R_{\rm ad}$ [@SM_T] in the SOC region. This is because SOC does not lift the degeneracy of time-reversed pairs, as shown in Fig. \[pairing\](c,d). For an FM of length $a$ satisfying $2Ma/\hbar v_{\rm f}=\pi/2$, only triplet pairs with **d** vector along ${\textbf M}$ are injected into the SOC region. When the effective magnetic field of SOC is parallel to the magnetization, say $\textbf h(\textbf k) \parallel \textbf{M}$, the reflection matrix in the SOC region can be written as $$\begin{aligned} \label{d-uniform} R(r)=-e^{-i\alpha} {\textbf m}\cdot {\bm \sigma}i\sigma_y \otimes \tau_y. \end{aligned}$$ When $\textbf h(\textbf k) \perp \textbf M$, the reflection matrix in the SOC region has the form $$\begin{aligned} \label{d-helix} R(r)=-\hbar v_{\rm f}(d_{1}\textbf m\cdot \bm{\sigma} +d_{2}\textbf m \times \textbf n \cdot \bm{\sigma}) i\sigma_y\otimes \tau_y, \end{aligned}$$ where $$\begin{aligned} \label{d-helix-1} \left(d_{1},d_{2}\right)=\frac{e^{-i\alpha}}{\hbar v_{\rm f}}\left(\cos((k_{2\rm f}-k_{1\rm f})r),\sin((k_{2\rm f}-k_{1\rm f})r)\right), \end{aligned}$$ and ${\textbf n}$ is the unit direction of $\textbf{h}(k_{\rm f})$. Here $d_1$ and $d_2$ give the decomposition of the $\textbf d$ vector along the directions ${\textbf m}$ and ${\textbf m\times \textbf n}$, respectively. Eq. (\[d-uniform\]) implies that the **d** vector keeps its original direction in the case of $\textbf d \parallel \textbf h(\textbf k)$. In contrast, Eq.
(\[d-helix\]) shows that in the case of $\textbf{d}\perp{\textbf h(\textbf k)}$, the **d** vector precesses in the plane perpendicular to $\textbf h(\textbf k)$ when propagating along the 1D SOC region. The above conclusions are consistent with our physical picture shown in Fig. \[pairing\](c,d). In particular, based on Eq. (\[d-helix-1\]), the precession of the **d** vector leads to a helical structure, which is dubbed the **d** helix or spin-triplet helix and is schematically shown by the red arrows in the SOC region of Fig. \[model\](b).

![The magnetization direction (the green arrows) and the effective magnetic field direction of SOC (the purple arrow) are shown (a) for a 0-junction and (b) a $\pi$-junction. The red arrows reveal the spatial distribution of the d-vector. The phases of the SCs at the two sides are taken to be $\phi/2$ and $-\phi/2$. The color in (c) and (d) shows the spectral function of the SC/FM/SOC/FM/SC junction (logarithmic plot) as a function of the relative phase $\phi$ for a 0-junction and $\pi$-junction, respectively. The black lines are the Andreev levels from analytical calculations.
(f) shows the current-phase relation for the 0- and $\pi$-junction.[]{data-label="model"}](0-pi "fig:"){width="0.8\columnwidth"}

[*0 and $\pi$ Josephson junction transition -*]{} To confirm the predicted **d** helix, we propose an experimental setup of a SC/FM/SOC/FM/SC junction (Fig. \[model\](a,b)) and show that the **d** helix can lead to a $0-\pi$ transition in Josephson junctions [@Buzdin2005a]. The magnetizations of the two ferromagnetic layers point along the $x$ and $-x$ directions (Fig. \[model\](a,b)), to ensure a trivial 0-Josephson junction in the absence of the SOC region. The lengths of the FMs are chosen to satisfy $2Ma/\hbar v_{\rm f}=\pi/2$, so only triplet pairs with $\textbf d$ vector along the $x$ direction are injected into the SOC region. We consider two cases with the SOC $\textbf{h}(k)=\alpha k_x \hat{e}_x \parallel \textbf M$ in Fig. \[model\](a) and $\textbf{h}(k)=\alpha k_x \hat{e}_y \perp \textbf M$ in Fig. \[model\](b). The length of the SOC wire satisfies $(k_{2\rm f}-k_{1\rm f}) L=\pi$. To study the current-phase relation in this setup, we first calculate the Andreev levels numerically by evaluating the spectral function, $ {\rm Tr}[\sum_n g^R(E,x_n)]/N$, in a tight-binding model. Here $g^{R}$ is the electron retarded Green’s function defined in Eq.
(\[Pair-2\]), $x_n$ represents the $n$th site and $N$ is the total number of sites in the proximity region. The spectral functions are plotted as a function of the relative phase $\phi$ between the two SCs in Fig. \[model\](c,d). The peaks, shown in red, indicate the Andreev levels. We also obtain the Andreev levels analytically using the standard scattering matrix method [@Schapers2001; @Beenakker1992; @SM_T]. The analytical results are shown by the two black lines in Fig. \[model\](c,d), which are consistent with the numerical results. It is noted that the crossings of the black curves at $\phi=\pi$ in Fig. \[model\](c) and at $\phi=0,2\pi$ in Fig. \[model\](d) turn into anti-crossings in the numerical results. This is because we impose a barrier potential at the SC/FM interfaces and include the Fermi velocity mismatch among different regions in the numerical calculations, which removes the degeneracies of the analytical results. The anti-crossing changes the period of the Josephson current at zero temperature, $I_s=\frac{2e}{\hbar}\sum_n \partial E_n/\partial \phi$ with the summation over the negative Andreev levels, from $4\pi$ (black curves) to $2\pi$ [@Tang1997; @Schapers2001]. The Josephson current for Fig. \[model\](c) takes the form $I_s\sim \sin(\phi)$ (the blue line in Fig. \[model\](f)), which corresponds to a 0-junction. In contrast, for Fig. \[model\](d) we have $I_s\sim \sin(\phi+\pi)$ (the red line in Fig. \[model\](f)), indicating a $\pi$-junction. This $0$-$\pi$ junction transition is consistent with the physical picture of the d-vector precession, shown by the red arrows in Fig. \[model\](a,b). Further calculations show that the $\pi$-junction is obtained for $L$ satisfying $\pi/2 < (k_{2\rm f}-k_{1\rm f})L <3\pi/2$ [@Notpub]. [*$\textbf d$ helix in a 2D system -*]{} Having clarified the physics in a 1D model, we next ask if the **d** helix also exists in a 2D system.
For a 2DEG, the SOC has the form (assuming $x$-axis along $[110]$ direction) $$\begin{aligned} \label{SOC-2} H_{\rm so}=(\alpha+\beta)k_x\sigma_y+(\beta-\alpha)k_y\sigma_x,\end{aligned}$$ where $\alpha$ and $\beta$ are the Rashba and Dresselhaus SOC strengths. When $\alpha=\beta$, the Fermi surface with the spin parallel (anti-parallel) to the y axis is shifted along $-x$ ($x$) direction by $Q/2$, as shown in Fig \[pairing\](d). As a result, the eigenenergies of two spin states satisfy $\epsilon_{\rm 1}({\bf k})=\epsilon_{\rm 2}({\bf k+Q})$, where ${\bf Q}=4m\beta \hat{e}_x$, and $1$ $(2)$ denotes the spin parallel (anti-parallel) to the y axis. As shown in Refs. [@Bernevig:2006_a; @Weber:2007_a; @Koralek:2009_a; @Stanescu:2007_a; @LiuXin:2012_a], one can construct [*spin*]{} helix operators, which commute with the Hamiltonian and lead to a persistent spin helix mode. In our model with superconductivity, we can define triplet pairing operators $$\begin{aligned} \label{dhelix-1} \hat{d}_{x}&=&\frac{1}{2}\left(\sum_{ \{ {\bf k},i\}} \delta(\epsilon_{ {\bf k},i}-\mu) c^{\dagger}_{ {\bf k},i} c^{\dagger}_{ {-\bf k}-(-1)^i {\bf Q},i}+\rm{h.c.}\right), \\ \label{dhelix-2} \hat{d}_{z}&=&\frac{1}{2i}\left(\sum_{ \{ {\bf k},i\}} \delta(\epsilon_{ {\bf k},i}-\mu) c^{\dagger}_{ {\bf k},i} c^{\dagger}_{ {-\bf k}-(-1)^i {\bf Q},i}-\rm{h.c.}\right) \end{aligned}$$ where the summation is performed in the interval $\{\textbf{k},i\}=\{k_x<(-1)^i Q/2,k_y\}$ at the Fermi surface to avoid double counting. These two operators represent a **d** helix of triplet pairs with center-of-mass $\bf Q$ in x-z plane. Since the operators $\hat{d}_{x,z}$ commute with the Hamiltonian $H_0+H_{so}$ [@SM_T], a persistent **d** helix also exists in the triplet superconducting proximity region. 
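The band-shift relation $\epsilon_1({\bf k})=\epsilon_2({\bf k+Q})$ underlying these operators follows directly from Eq. (\[SOC-2\]) at $\alpha=\beta$ and can be checked numerically; a short sketch with arbitrary parameters ($\hbar=1$, $m=1$, $\alpha=\beta=0.3$):

```python
import numpy as np

m, alpha = 1.0, 0.3
beta = alpha                 # equal Rashba and Dresselhaus strengths
Q = 4*m*beta                 # center-of-mass shift along x (hbar = 1)

def bands(kx, ky):
    """Eigenenergies of k^2/2m + 2*alpha*kx*sigma_y (the alpha = beta case of H_so).
    Band 1: spin parallel to y (sigma_y eigenvalue +1); band 2: anti-parallel."""
    kin = (kx**2 + ky**2) / (2*m)
    return kin + 2*alpha*kx, kin - 2*alpha*kx

rng = np.random.default_rng(0)
kx, ky = rng.normal(size=100), rng.normal(size=100)
e1, _ = bands(kx, ky)
_, e2_shifted = bands(kx + Q, ky)   # epsilon_2 evaluated at k + Q
```

Band 1 has its minimum at $k_x=-2m\alpha$, i.e., shifted toward $-x$, consistent with Fig. \[pairing\](d).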
It is also noted that in the case of $\alpha=\beta$, the Hamiltonian even with a spin-independent scattering potential, $H=H_0+H_{\rm so}+V(\textbf r)\sigma_0$, can be transformed to a Hamiltonian without SOC through the unitary matrix $U=\exp(-iQx/2)\sigma_y$. This is because $U$ is independent of momenta and commutes with $V(\textbf r)\sigma_0$. At the same time, the triplet pairs with center-of-mass momentum $Q$ as defined in Eqs. (\[dhelix-1\]) and (\[dhelix-2\]) are transformed to those with zero center-of-mass momentum, as shown in Fig. \[pairing\](a). Therefore, we expect that this spin-triplet helix is immune to any spin-independent scattering and its decay length should be as long as the Cooper-pair coherence length [@Schapers2001] in the normal region. This can be further confirmed by solving the Usadel equations [@Usadel:1970; @Rammer:2007_a] with SOCs [@SM_T].

![The spatial dependence of the ${\textbf d}$ vector of triplet pairs (the red arrows) and the corresponding effective magnetic field (the purple arrows) of the SOC are shown for (a) $\alpha=\beta$ ($\pi$-junction) and (b) $\alpha=-\beta$ (0-junction). TSC means triplet superconductor.
(c) The proposed 2D SC/FM/SOC/FM/SC structure for an electrically tunable Josephson junction.[]{data-label="shift-trans"}](2D-junction-1 "fig:"){width="0.7\columnwidth"}

In experiments, the Dresselhaus parameter $\beta$ is fixed, while the Rashba parameter $\alpha$ can be tuned by a gate voltage. Therefore, the following geometry can be used to confirm the oscillatory triplet pairs by observing an electrically tunable $0$-$\pi$ transition. The length $L$ of the SOC region is chosen to satisfy the condition $QL=\pi$. From the above discussion, when $\alpha=\beta$, the **d** vector of triplet pairs changes its sign after propagating from $x=x_2$ to $x=x_3$ (Fig. \[shift-trans\](a)), leading to a $\pi$-junction. If we tune the Rashba parameter to $\alpha=-\beta$, the effective magnetic field of SOC $\textbf{h}=2\beta k_y \hat{e}_x$ is along the x direction, parallel to the $\textbf{d}$ vector. Based on our theory, the $\textbf d$ vector keeps its direction in the SOC region (Fig. \[shift-trans\](b)) and we will have a $0$-junction. The proximity effect in the 2D Josephson junction for these two cases should be long-range according to our arguments. For realistic experiments, InAs quantum wells provide a potential candidate (Fig. \[shift-trans\](c)), because they show a strong proximity effect due to their low Schottky barrier [@Doh2005a]. If the two FM layers are Ni, a thickness of 1 nm [@Klose2012a] is enough to convert singlet pairs in the SC to triplet pairs at the FM/InAs interface.
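The required SOC-region length follows from ${\bf Q}=4m\beta\hat{e}_x$ restored to SI units, $Q=4m_{\rm eff}\beta/\hbar^2$, together with $QL=\pi$. A quick numerical consistency check (our own), using the typical InAs parameters $m_{\rm eff}=0.04\,m_{\rm e}$ and $\alpha=\beta=0.2$ eV Å:

```python
import numpy as np

hbar = 1.0545718e-34        # J*s
m_e  = 9.1093837e-31        # kg
eV   = 1.602176634e-19      # J

m_eff = 0.04 * m_e
beta  = 0.2 * eV * 1e-10    # 0.2 eV*Angstrom, in J*m

Q = 4 * m_eff * beta / hbar**2      # helix wave vector, in 1/m
L = np.pi / Q                       # SOC-region length for Q*L = pi

Q_per_um = Q * 1e-6                 # comes out near 40 per micrometer
L_nm = L * 1e9                      # comes out near 80 nm
```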
For the effective mass $m_{\rm eff}=0.04\,m_{\rm e}$ and a typical $\alpha=0.2$ eV Å in InAs quantum wells, we find $Q\approx 40\ \mu{\rm m}^{-1}$, which corresponds to a SOC-region length of $\sim 80$ nm to realize the Josephson $0$-$\pi$ junction transition. This length is much smaller than the coherence length, $\xi_{\rm N}=\hbar^2 \sqrt{2\pi n}/(2\pi m_{\rm eff} k_{\rm B}T_c) \approx 4\ \mu{\rm m}$ [@Schapers2001], where $n=10^{12}\ {\rm cm}^{-2}$ is the typical electron density in the InAs quantum well and $T_c=1.2$ K is the critical temperature of Al.

We acknowledge Yinghai Wu, Jimmy A. Hutasoit and Shou-Cheng Zhang for very helpful discussions. X.L. acknowledges partial support by the DOE under Grant No. DE-SC0005042.

doi:10.1103/RevModPhys.76.323
doi:10.1063/1.3427483
doi:10.1103/PhysRevLett.112.036602
doi:10.1038/nature04499
doi:10.1038/NPHYS1621
doi:10.1126/science.1189246
doi:10.1103/PhysRevLett.104.137002
doi:10.1103/PhysRevLett.108.127002
doi:10.1038/srep00699
doi:10.1103/PhysRevLett.109.057005
doi:10.1103/PhysRevLett.86.4096
doi:10.1103/RevModPhys.77.1321
doi:10.1103/RevModPhys.77.935
doi:10.1103/PhysRevB.86.054521
doi:10.1103/PhysRevLett.110.117003
doi:10.1103/PhysRevB.55.15174
doi:10.1063/1.3541944
doi:10.1103/PhysRevB.75.125307
doi:10.1103/PhysRevB.86.174301
doi:10.1103/PhysRevLett.98.076604
doi:10.1038/nature07871
doi:10.1103/PhysRevLett.100.077003
doi:10.1103/PhysRevB.23.6851
http://stacks.iop.org/0953-8984/5/i=25/a=009
doi:10.1063/1.1421236
doi:10.1007/s002570050220
doi:10.1103/PhysRevLett.25.507
doi:10.1126/science.1113523

Supplementary material
======================

In the Supplementary Material, we provide details for the calculation of the propagation matrix and the Andreev levels for the various geometries mentioned in the main text in the 1D clean limit. We also present the details for the spatial evolution of triplet pairs in the 2D system with general spin-orbit couplings (SOCs). Section I will derive the previously known results for the superconductor/ferromagnet (SC/FM) junction [@SBuzdin2005a; @SBergeret2005a] from the scattering matrix method, and Section II will consider the SC/FM/SOC geometry. Section III will show how to calculate the Andreev levels in SC/FM/SOC/FM/SC junctions based on the scattering matrix method. The definitions of the triplet pairing operators given in Eqs. (10, 11) of the main text and their properties are given in Section IV. Section V describes the spatial evolution of the triplet pairs based on the Usadel equation.

1D SC/FM junction
-----------------

We first consider how magnetization mixes different pairing functions in a one-dimensional (1D) ferromagnetic region of a SC/FM junction, schematically shown in Fig. \[S-M\](a).
The effective Hamiltonian for this junction is given by $$\begin{aligned} \label{Ham-5} H_{\rm SC/FM}= \left(\begin{array}{cc}\left(\frac{\hat{P}^2}{2m}-\mu\right)\sigma_0&0\\0& -\left(\frac{\hat{P}^2}{2m}-\mu\right)\sigma_0\end{array}\right)+\Theta(x)\left(\begin{array}{cc}\bm{M}\cdot \bm \sigma&0\\0& -\bm{M}\cdot \bm \sigma^*\end{array}\right)+\Theta(-x) \left(\begin{array}{cc}0&\Delta i\sigma_y\\-\Delta i\sigma_y& 0\end{array}\right),\end{aligned}$$ where $\Theta$ is the Heaviside step function, ${\bf M}$ denotes the magnetization of the FM, and $\Delta$ is the superconducting gap in the SC region. The SC/FM interface reflects incoming electrons (holes) into outgoing holes (electrons) and thereby induces a non-zero pairing function in the ferromagnetic region. To explore the spatial evolution of the pairing function in a clean ferromagnetic wire, we formulate the reflection process by a matrix $R_{\rm fm}(a)$, given by $$R_{\rm fm}(a)=T_{\rm fm}^{\rm rf}R_{\rm ad}T_{\rm fm}^{\rm{in}}$$ which is decomposed into three steps shown in Fig. \[S-M\](a). An incoming electron (hole) is transmitted from $x=a$ to the SC/FM interface at $x=0$ ($T^{\rm in}_{\rm fm}$); then an ideal Andreev reflection occurs at the SC/FM interface where the incoming electron (hole) is completely reflected to the outgoing hole (electron) ($R_{\rm ad}$); the reflected outgoing hole (electron) propagates back to $x=a$ ($T^{\rm rf}_{\rm fm}$). We now calculate each factor separately.
![Panels (a) and (b) show SC/FM and SC/FM/SOC junctions, respectively. The dispersion relations in various regions are shown; the colors represent the different spin indices and the solid lines (dashed lines) denote the electron (hole). Different propagation processes defined as $T^{\rm in}_{\rm fm}$, $R_{\rm ad}$, $T^{\rm rf}_{\rm fm}$, $T^{\rm in}_{\rm soc}$ and $T^{\rm rf}_{\rm soc}$ are shown on the figure. The quantities $k_{\rm 1 f}$, $k_{\rm 2f}$, $k_{\rm 3 f}$ and $k_{\rm 4f}$ are the Fermi momenta for different spin bands.[]{data-label="S-M"}](S-M-2 "fig:"){width="0.5\columnwidth"}
In the first step, the wave functions of the two incoming electrons and holes with opposite spins take the form $$\begin{aligned} \label{wf-1} \Psi_{+\rm e}^{\rm in}=\left(\begin{array}{c} \psi_{+}\\ \hat{0} \end{array}\right)e^{- ik_{4 \rm f} x}, \ \ \ \Psi_{+\rm h}^{\rm in}=\left(\begin{array}{c} \hat{0}\\ \psi_{+}^* \end{array}\right)e^{+ ik_{4 \rm f} x},\nonumber \\ \Psi_{- \rm e}^{\rm in}=\left(\begin{array}{c} \psi_{-}\\ \hat{0} \end{array}\right)e^{- ik_{3 \rm f} x}, \ \ \ \Psi_{- \rm h}^{\rm in}=\left(\begin{array}{c} \hat{0}\\ \psi_{-}^* \end{array}\right)e^{+ ik_{3 \rm f} x},\end{aligned}$$ where $\hat{0}=(0,0)^{\rm T}$, $k_{3\rm f}$ and $k_{4\rm f}$ (in Fig. \[S-M\](a)) are the Fermi wave vectors of the minority and majority spin bands, respectively, and $\bm{M}\cdot \bm{\sigma}\psi_{\pm}=\pm |\bm M|\psi_{\pm}$. In the clean limit, the transmission matrix $T_{\rm fm}^{\rm in}$ describes the propagation of an incoming electron or hole from $x=a>0$ to the SC/FM interface and is given by $$\begin{aligned} \label{Tran-1} T_{\rm fm}^{\rm{in}}&=&|\Psi_{+\rm e}^{\rm in}(0)\rangle \langle \Psi_{+\rm e}^{\rm in}(a)|+|\Psi_{+\rm h}^{\rm in}(0)\rangle \langle \Psi_{+\rm h}^{\rm in}(a)|+|\Psi_{-\rm e}^{\rm in}(0)\rangle \langle \Psi_{-\rm e}^{\rm in}(a)|+|\Psi_{-\rm h}^{\rm in}(0)\rangle \langle \Psi_{-\rm h}^{\rm in}(a)|\nonumber \\ &=&\frac{1}{2}\left(\begin{array}{cc} e^{ik_{4 \rm f}a}(\sigma_0+\bm m \cdot \bm\sigma)&0\\0&e^{-ik_{4 \rm f}a}(\sigma_0+\bm m \cdot\bm\sigma^*) \end{array}\right)+\frac{1}{2}\left(\begin{array}{cc} e^{ik_{3 \rm f}a}(\sigma_0-\bm m \cdot\bm\sigma)&0\\0&e^{-ik_{3 \rm f}a}(\sigma_0-\bm m \cdot\bm\sigma^*) \end{array}\right),\nonumber \\ &=&\left(\begin{array}{cc} e^{i\beta}U_{\rm fm}&0\\0& e^{-i\beta} U_{\rm fm}^* \end{array}\right).\end{aligned}$$ Here $\beta=(k_{4\rm f}+k_{3 \rm f})a/2$ and $$\begin{aligned} \label{Tran-1-1} U_{\rm fm}&=&\exp(i\frac{(k_{4 \rm f}-k_{3 \rm f})a}{2}\bm m \cdot \bm \sigma)=\cos(\frac{(k_{4 \rm f}-k_{3 \rm f})}{2}a)\sigma_0+i\sin(\frac{(k_{4 \rm f}-k_{3 \rm f})}{2}a)\bm m \cdot \bm \sigma,\nonumber \\ U_{\rm fm}^{\dagger}&=&\exp(-i\frac{(k_{4 \rm f}-k_{3 \rm f})a}{2}\bm m \cdot \bm \sigma)=\cos(\frac{(k_{4 \rm f}-k_{3 \rm f})}{2}a)\sigma_0-i\sin(\frac{(k_{4 \rm f}-k_{3 \rm f})}{2}a)\bm m \cdot \bm \sigma,\nonumber \\ U_{\rm fm}^{*}&=&\exp(-i\frac{(k_{4 \rm f}-k_{3 \rm f})a}{2}\bm m \cdot \bm \sigma^{*})=\cos(\frac{(k_{4 \rm f}-k_{3 \rm f})}{2}a)\sigma_0-i\sin(\frac{(k_{4 \rm f}-k_{3 \rm f})}{2}a) \bm m \cdot \bm\sigma^{*},\nonumber \\ U_{\rm fm}^{\rm T}&=&\exp(i\frac{(k_{4 \rm f}-k_{3 \rm f})a}{2} \bm m \cdot \bm \sigma^{*})=\cos(\frac{(k_{4 \rm f}-k_{3 \rm f})}{2}a)\sigma_0+i\sin(\frac{(k_{4 \rm f}-k_{3 \rm f})}{2}a)\bm m \cdot \bm \sigma^*.\nonumber \\\end{aligned}$$ In the second step, the ideal Andreev reflection matrix has the form [@SSchapers2001] $$\begin{aligned} \label{Andreev-1} R_{\rm ad}=e^{-i\alpha}\left(\begin{array}{cc}0&i\sigma_y\\ -i\sigma_y&0 \end{array}\right),\end{aligned}$$ where $\alpha=\arccos(E/\Delta)$ with the energy $E$ satisfying $|E|<\Delta$. In the third step, there are four outgoing particles in the ferromagnetic region, with wave functions given by $$\begin{aligned} \label{wf-5} \Psi_{+\rm e}^{\rm rf}=\left(\begin{array}{c} \psi_{+}\\ \hat{0} \end{array}\right)e^{ ik_{4 \rm f} x}, \ \ \ \Psi_{+\rm h}^{\rm rf}=\left(\begin{array}{c} \hat{0}\\ \psi_{+}^* \end{array}\right)e^{- ik_{4\rm f} x},\nonumber \\ \Psi_{- \rm e}^{\rm rf}=\left(\begin{array}{c} \psi_{-}\\ \hat{0} \end{array}\right)e^{ ik_{3\rm f} x}, \ \ \ \Psi_{- \rm h}^{\rm rf}=\left(\begin{array}{c} \hat{0}\\ \psi_{-}^* \end{array}\right)e^{- ik_{3\rm f} x}.
\end{aligned}$$ The transmission matrix $T^{\rm rf}_{\rm fm}$ describes the outgoing waves moving back from the SC/FM interface at $x=0$ to $x=a$, given by $$\begin{aligned} \label{Tran-1-2} T_{\rm fm}^{\rm rf}&=&|\Psi_{+\rm e}^{\rm rf}(a)\rangle \langle \Psi_{+\rm e}^{\rm rf}(0)|+|\Psi_{+\rm h}^{\rm rf}(a)\rangle \langle \Psi_{+\rm h}^{\rm rf}(0)|+|\Psi_{-\rm e}^{\rm rf}(a)\rangle \langle \Psi_{-\rm e}^{\rm rf}(0)|+|\Psi_{-\rm h}^{\rm rf}(a)\rangle \langle \Psi_{-\rm h}^{\rm rf}(0)|\nonumber \\&=&\frac{1}{2}\left(\begin{array}{cc} e^{ik_{4 \rm f}a}(\sigma_0+\bm m \cdot \bm \sigma)&0\\0&e^{-ik_{4 \rm f}a}(\sigma_0+\bm m \cdot \bm \sigma^*) \end{array}\right)+\frac{1}{2}\left(\begin{array}{cc} e^{ik_{3 \rm f}a}(\sigma_0-\bm m \cdot \bm\sigma)&0\\0&e^{-ik_{3 \rm f}a}(\sigma_0-\bm m \cdot \bm\sigma^*)\end{array}\right)\nonumber \\ &=&\left(\begin{array}{cc} e^{i\beta}U_{\rm fm}&0\\0& e^{-i\beta} U_{\rm fm}^* \end{array}\right).\end{aligned}$$ It is noted that $T^{\rm in}_{\rm fm}$ has the same form as $T^{\rm rf}_{\rm fm}$, which is consistent with the fact that the magnetization respects the inversion symmetry.
The total reflection matrix at $x=a$ in the ferromagnetic region is then given by $$\begin{aligned} \label{app:Andreev-2-0}R_{\rm fm}(a)&=&T_{\rm fm}^{\rm rf}R_{\rm ad}T_{\rm fm}^{\rm{in}}\nonumber \\ &=&\left(\begin{array}{cc} e^{i\beta}U_{\rm fm}&0\\0& e^{-i\beta} U_{\rm fm}^{*}\end{array}\right)e^{-i\alpha}\left(\begin{array}{cc}0&i\sigma_y\\ -i\sigma_y&0 \end{array}\right)\left(\begin{array}{cc} e^{i\beta}U_{\rm fm}&0\\0& e^{-i\beta} U_{\rm fm}^{*}\end{array}\right)\nonumber \\ &=&e^{-i\alpha}\left(\begin{array}{cc}0&U_{\rm fm}i\sigma_yU_{\rm fm}^{*}\\ -U_{\rm fm}^{*}i\sigma_y U_{\rm fm}&0 \end{array}\right)\nonumber \\ &=&e^{-i\alpha}\left(\begin{array}{cc}0&\cos((k_{4 \rm f}-k_{3 \rm f})a)i\sigma_y+i\sin((k_{4 \rm f}-k_{3 \rm f})a)\bm m\cdot \bm \sigma i\sigma_y\\ \cos((k_{4 \rm f}-k_{3 \rm f})a)(-i\sigma_y)+i\sin((k_{4 \rm f}-k_{3 \rm f})a)\bm m\cdot \bm \sigma^* i\sigma_y&0 \end{array}\right)\nonumber \\ &=&e^{-i\alpha}\left[\cos((k_{4 \rm f}-k_{3 \rm f})a) \left(\begin{array}{cc}0&i\sigma_y\\-i\sigma_y&0 \end{array}\right)+i\sin((k_{4 \rm f}-k_{3 \rm f})a)\left(\begin{array}{cc}0&\bm m \cdot \bm \sigma i\sigma_y\\(\bm m \cdot \bm \sigma i\sigma_y)^{\dagger}&0 \end{array}\right) \right] \end{aligned}$$ The pairing function can now be obtained by the Fisher-Lee relation [@SFisher1981a] shown in the main text, and is given by $$\begin{aligned} \label{dR-1} f^R(E,x)&=&(d_0\sigma_0+\bm{d}\cdot \bm{\sigma})i\sigma_y,\nonumber \\ d_0&=&-i\frac{e^{-i\alpha}}{\hbar v_{\rm f}}\cos((k_{4 \rm f}-k_{3 \rm f})a),\ \ \bm d=\frac{e^{-i\alpha}}{\hbar v_{\rm f}}\sin((k_{4 \rm f}-k_{3 \rm f})a)\bm m.\end{aligned}$$ The above equations display the spatial oscillation between singlet and triplet pairs in the FM region. We note that the $\bm{d}$-vector is along the direction of the magnetization $\bm{M}$.

1D SC/FM/SOC junction
---------------------

We next consider a 1D SC/FM/SOC junction. The calculation is conceptually similar to that given above, although the details are more complicated.
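Before proceeding, the SC/FM composition in Eq. (\[app:Andreev-2-0\]) can be verified by multiplying the three $4\times 4$ matrices explicitly; a minimal numerical sketch (our own, with arbitrary parameters and $\hbar v_{\rm f}=1$):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
isy = 1j*sy
Z = np.zeros((2, 2), dtype=complex)

# arbitrary test parameters
k4, k3, a = 1.7, 1.2, 0.9              # Fermi momenta and FM length
E, Delta = 0.3, 1.0
alph = np.arccos(E/Delta)
m = np.array([0.36, 0.48, 0.80])       # unit magnetization direction
msig = m[0]*sx + m[1]*sy + m[2]*sz

beta = (k4 + k3)*a/2
theta = (k4 - k3)*a/2
U = np.cos(theta)*s0 + 1j*np.sin(theta)*msig   # U_fm = exp(i*theta*m.sigma)

T = np.block([[np.exp(1j*beta)*U, Z], [Z, np.exp(-1j*beta)*U.conj()]])
R_ad = np.exp(-1j*alph)*np.block([[Z, isy], [-isy, Z]])

R = T @ R_ad @ T                                # T_fm^rf R_ad T_fm^in

# closed form, last line of Eq. (app:Andreev-2-0)
c, s = np.cos(2*theta), np.sin(2*theta)
R_closed = np.exp(-1j*alph)*(c*np.block([[Z, isy], [-isy, Z]])
           + 1j*s*np.block([[Z, msig@isy], [(msig@isy).conj().T, Z]]))
```

The composed matrix also comes out unitary, as it must for an ideal (lossless) reflection.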
The reflection matrix is now given by $$R_{\rm soc}(L)=T_{\rm soc}^{\rm rf}R_{\rm fm}(a)T_{\rm soc}^{\rm{in}}$$ where $R_{\rm fm}(a)$ has already been calculated above. The SC/FM junction is utilized as a source of singlet and triplet pairs, and the relative strengths of the two can be tuned by varying $a$, the length of the FM region. The Hamiltonian in the SOC wire has the form $$\begin{aligned} \label{Ham-5-1} H_{\rm SOC}&=& \left(\begin{array}{cc}\left(\frac{\hat{p}^2}{2m}-\mu\right)\sigma_0&0\\0& -\left(\frac{\hat{p}^2}{2m}-\mu\right)\sigma_0\end{array}\right)+\left(\begin{array}{cc}\bm{h(\hat{p})}\cdot \bm \sigma&0\\0& -\bm{h^*(\hat{p})}\cdot \bm \sigma^*\end{array}\right)\end{aligned}$$ where $\bm{\hat{p}}$ is the momentum operator, $m$ is the electron mass, $\bm{h}(\bm{\hat p})$ is the effective magnetic field due to SOC and $\mu$ is the chemical potential. In the SOC region, the incoming electrons and holes propagate to the FM/SOC interface with the wave functions $$\begin{aligned} \label{wf-1-1} \Psi_{+e}^{\rm in}=\left(\begin{array}{c} \psi_{+}\\ \hat{0} \end{array}\right)e^{-ik_{1 \rm f} x}, \ \ \ \Psi_{+h}^{\rm in}=\left(\begin{array}{c} \hat{0}\\ \psi_{+}^* \end{array}\right)e^{ ik_{2 \rm f} x},\nonumber \\ \Psi_{-e}^{\rm in}=\left(\begin{array}{c} \psi_{-}\\ \hat{0} \end{array}\right)e^{-ik_{2 \rm f} x}, \ \ \ \Psi_{-h}^{\rm in}=\left(\begin{array}{c} \hat{0}\\ \psi_{-}^* \end{array}\right)e^{ ik_{1 \rm f} x},\end{aligned}$$ where $k_{2\rm f}$ and $k_{1 \rm f}$ (in Fig \[S-M\](b)) are the Fermi wave vectors of the majority and minority spin bands respectively and $\bm n\cdot\bm{\sigma}\psi_{\pm}=\pm\psi_{\pm}$ with $\bm n={\bf h}/{|\bf h|}$. 
The wave functions of outgoing electrons and holes are given by $$\begin{aligned} \label{wf-6-1} \Psi_{+\rm e}^{\rm rf}=\left(\begin{array}{c} \psi_{+}\\ \hat{0} \end{array}\right)e^{ ik_{2 \rm f} x}, \ \ \ \Psi_{+h}^{\rm rf}=\left(\begin{array}{c} \hat{0}\\ \psi_{+}^* \end{array}\right)e^{- ik_{1 \rm f} x},\nonumber \\ \Psi_{- \rm e}^{\rm rf}=\left(\begin{array}{c} \psi_{-}\\ \hat{0} \end{array}\right)e^{ ik_{1 \rm f} x}, \ \ \ \Psi_{- \rm h}^{\rm rf}=\left(\begin{array}{c} \hat{0}\\ \psi_{-}^* \end{array}\right)e^{- ik_{2 \rm f} x}, \end{aligned}$$ To obtain the pairing function in the SOC wire, we consider a perfect contact at FM/SOC interface. The transmission from $x=L$ to the SC/FM interface at $x=0$ is represented by the matrix $$\begin{aligned} \label{Tran-1-5} T_{\rm soc}^{\rm{in}}&=&\frac{1}{2}\left(\begin{array}{cc} e^{ik_{2 \rm f}L}(\sigma_0+\bm n \cdot \bm \sigma)&0\\0&e^{-ik_{2 \rm f}L}(\sigma_0+\bm n \cdot \bm \sigma^*) \end{array}\right)+\frac{1}{2}\left(\begin{array}{cc} e^{ik_{1 \rm f}L}(\sigma_0-\bm n \cdot \bm \sigma)&0\\0&e^{-ik_{1 \rm f}L}(\sigma_0-\bm n \cdot \bm \sigma^*) \end{array}\right)\nonumber \\ &=&\left(\begin{array}{cc} e^{i\beta}U_i&0\\0& e^{-i\beta} U_i^* \end{array}\right)\end{aligned}$$ where $\beta=(k_{2 \rm f}+k_{1 \rm f})L/2$ and $$\begin{aligned} \label{Tran-1-3} U_{\rm soc}&=&\exp(i\frac{(k_{2 \rm f}-k_{1 \rm f})L}{2}\bm n \cdot \bm \sigma)=\cos(\frac{(k_{2 \rm f}-k_{1 \rm f})}{2}L)\sigma_0+i\sin(\frac{(k_{2 \rm f}-k_{1 \rm f})}{2}L)\bm n \cdot \bm \sigma,\nonumber \\ U^{\dagger}_{\rm soc}&=&\exp(-i\frac{(k_{2 \rm f}-k_{1 \rm f})L}{2}\bm n \cdot \bm \sigma)=\cos(\frac{(k_{2 \rm f}-k_{1 \rm f})}{2}L)\sigma_0-i\sin(\frac{(k_{2 \rm f}-k_{1 \rm f})}{2}L)\bm n \cdot \bm \sigma,\nonumber \\ U^{*}_{\rm soc}&=&\exp(-i\frac{(k_{2 \rm f}-k_{1 \rm f})L}{2}\bm n \cdot \bm \sigma^{*})=\cos(\frac{(k_{2 \rm f}-k_{1 \rm f})}{2}L)\sigma_0-i\sin(\frac{(k_{2 \rm f}-k_{1 \rm f})}{2}L)\bm n \cdot \bm \sigma^{*},\nonumber \\ U^{\rm T}_{\rm 
soc}&=&\exp(i\frac{(k_{2 \rm f}-k_{1 \rm f})L}{2}\bm n \cdot \bm \sigma^{*})=\cos(\frac{(k_{2 \rm f}-k_{1 \rm f})}{2}L)\sigma_0+i\sin(\frac{(k_{2 \rm f}-k_{1 \rm f})}{2}L)\bm n \cdot \bm \sigma^*.\nonumber \\\end{aligned}$$ The reflected hole (electron) moves back from the interface to $x=L$, which is represented by the matrix $$\begin{aligned} \label{Tran-1-4} T_{\rm soc}^{\rm rf}&=&\frac{1}{2}\left(\begin{array}{cc} e^{ik_{1 \rm f}L}(\sigma_0+\bm n \cdot \bm \sigma)&0\\0&e^{-ik_{1 \rm f}L}(\sigma_0+\bm n \cdot \bm \sigma^*) \end{array}\right)+\frac{1}{2}\left(\begin{array}{cc} e^{ik_{2 \rm f}L}(\sigma_0-\bm n \cdot \bm \sigma)&0\\0&e^{-ik_{2 \rm f}L}(\sigma_0-\bm n \cdot \bm \sigma^*)\end{array}\right)\nonumber \\ &=&\left(\begin{array}{cc} e^{i\beta}U_{\rm soc}^{\dagger}&0\\0& e^{-i\beta} U_{\rm soc}^{\rm T} \end{array}\right). \end{aligned}$$ It is noted that $T_{\rm soc}^{\rm rf}$, describing the propagation away from the FM/SOC interface, is different from $T^{\rm in}_{\rm soc}$, describing the propagation towards the FM/SOC interface. This reflects the fact that SOC breaks inversion symmetry. We now have all the information needed to evaluate $R_{\rm soc}(L)$, and hence the pairing function. We specialize below to the case $(k_{4\rm f}-k_{3\rm f})a=\pi/2$, for which, according to Eq. \[app:Andreev-2-0\], the SC/FM junction behaves as a reservoir of only triplet pairs whose $d$-vector is along the magnetization direction. When the magnetization ${\bf M}$ in the ferromagnetic region is parallel to ${\bf h}$ in the SOC region, the reflection matrix at $x=L$ in the SOC region is the same as that in the ferromagnetic region $$\begin{aligned} \label{Andreev-2} R_{\rm soc}(L)&=&T_{\rm soc}^{\rm rf}R_{\rm fm}(a)T_{\rm soc}^{\rm{in}}=R_{\rm fm}(a),\end{aligned}$$ which implies that the SOC will not affect the triplet pair whose $\bm d$-vector is parallel to the effective magnetic field of SOC.
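As a quick numerical consistency check of Eq. (\[Andreev-2\]), one can build the $4\times 4$ matrices above with explicit Pauli matrices and verify that $T_{\rm soc}^{\rm rf}R_{\rm fm}T_{\rm soc}^{\rm in}=R_{\rm fm}$ whenever $\bm m\parallel\bm n$, for arbitrary $L$. The sketch below is purely illustrative (the numerical values of $\theta=(k_{2\rm f}-k_{1\rm f})L/2$, $\beta$, and the Andreev phase $\alpha$ are arbitrary assumptions):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)
isy = 1j * sig[1]
Z = np.zeros((2, 2), complex)

def ds(v):                          # v . sigma
    return np.tensordot(v, sig, axes=1)

def T_in(n, th, beta):              # Eq. (Tran-1-5), with theta = (k2f-k1f)L/2
    u = np.cos(th)*s0 + 1j*np.sin(th)*ds(n)
    return np.block([[np.exp(1j*beta)*u, Z], [Z, np.exp(-1j*beta)*u.conj()]])

def T_rf(n, th, beta):              # Eq. (Tran-1-4)
    u = np.cos(th)*s0 + 1j*np.sin(th)*ds(n)
    return np.block([[np.exp(1j*beta)*u.conj().T, Z], [Z, np.exp(-1j*beta)*u.T]])

def R_fm(m, alpha):                 # triplet reflection block, d-vector along m
    Bm = ds(m) @ isy
    return 1j*np.exp(-1j*alpha)*np.block([[Z, Bm], [Bm.conj().T, Z]])

n = np.array([1.0, 2.0, 2.0]) / 3.0
th, beta, alpha = 0.37, 1.1, 0.5

# M parallel to h: R_soc(L) = T_rf R_fm T_in = R_fm for any L (Eq. Andreev-2)
assert np.allclose(T_rf(n, th, beta) @ R_fm(n, alpha) @ T_in(n, th, beta),
                   R_fm(n, alpha))

# M perpendicular to h: the triplet pair is rotated, so R_soc(L) != R_fm
m_perp = np.array([2.0, -1.0, 0.0]) / np.sqrt(5.0)
assert not np.allclose(T_rf(n, th, beta) @ R_fm(m_perp, alpha) @ T_in(n, th, beta),
                       R_fm(m_perp, alpha))
```

The parallel case holds identically because $U_{\rm soc}$ commutes with $\bm n\cdot\bm\sigma$ and $i\sigma_y U_{\rm soc}^*=U_{\rm soc}\,i\sigma_y$.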
When ${\bf M}$ is perpendicular to ${\bf h}$, the reflection matrix shows an oscillating behavior $$\begin{aligned} \label{Andreev-2-1} R_{\rm soc}(L)= ie^{-i\alpha}\left[\cos[(k_{2 \rm f}-k_{1 \rm f})L]\left(\begin{array}{cc}0&\bm m \cdot \bm \sigma i\sigma_y\\(\bm m \cdot \bm \sigma i\sigma_y)^{\dagger}&0 \end{array}\right)+\sin[(k_{2 \rm f}-k_{1 \rm f})L] \left(\begin{array}{cc}0&(\bm m \times \bm n) \cdot \bm \sigma i\sigma_y\\((\bm m \times \bm n) \cdot \bm \sigma i\sigma_y)^{\dagger}&0 \end{array}\right) \right] \nonumber\end{aligned}$$ which corresponds to a rotation of the triplet pair in the plane perpendicular to $\bm{h}$. Scattering matrix method in SC/FM/SOC/FM/SC junction ---------------------------------------------------- We now apply the scattering matrix method to the SC/FM/SOC/FM/SC junction. The magnetizations of the two ferromagnetic layers point along the $x$ and $-x$ directions (Fig.3(a,b) in the main text), to ensure a trivial 0-Josephson junction in the absence of the SOC region. The lengths of the FM layers are chosen to satisfy $2Ma/\hbar v_{\rm f}=\pi/2$, so only triplet pairs with $\bm d$-vector along the $x$ direction are injected into the SOC region based on Eq. (\[dR-1\]). From Eq. (\[app:Andreev-2-0\],\[dR-1\]), the associated reflection matrix at the interface of $x_2$ ($x_3$) takes the form $$\begin{aligned} \label{AR-1} R_{\rm fm}^{-(+)}=ie^{-i\alpha}\left(\begin{array}{cc}0& e^{-(+)i\phi/2}\sigma_x i\sigma_y\\(e^{-(+)i\phi/2}\sigma_x i\sigma_y)^{\dagger}&0 \end{array}\right). \end{aligned}$$ where $-\phi/2$ ($\phi/2$) is the phase of the left (right) superconductor. The discrete Andreev levels in the Josephson junction can be obtained from the condition [@SSchapers2001; @SBeenakker1992] $$\begin{aligned} \label{Andreev level-0} \rm{Det}\left(I_{4\times 4}-T_{\rm soc}^{\rm in}R_{\rm fm}^{-}T_{\rm soc}^{\rm rf}R_{\rm fm}^{+}\right)=0,\end{aligned}$$ where $I_{4\times 4}$ is the $4\times 4$ identity matrix.
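The determinant condition, Eq. (\[Andreev level-0\]), can also be evaluated numerically. In the sketch below the Andreev phases of $R_{\rm fm}^{\pm}$ are implemented with one particular sign convention (an assumption: which orientation yields which branch of levels depends on such conventions), but the convention-independent point, namely that rotating the SOC field from the $x$ axis to the $y$ axis shifts the Andreev spectrum by $\phi\rightarrow\phi+\pi$, is checked directly:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
isy = 1j * sy
Z = np.zeros((2, 2), complex)
B = sx @ isy                       # sigma_x i sigma_y: triplet block, d || x

def U(nsig, theta):                # exp(i theta n.sigma)
    return np.cos(theta)*s0 + 1j*np.sin(theta)*nsig

def T_pair(nsig, theta):           # T_soc^in, T_soc^rf for beta = 0
    u = U(nsig, theta)
    Tin = np.block([[u, Z], [Z, u.conj()]])
    Trf = np.block([[u.conj().T, Z], [Z, u.T]])
    return Tin, Trf

def R_pm(alpha, phi, sgn):         # R_fm^- (sgn=-1) and R_fm^+ (sgn=+1)
    top = np.exp(sgn*1j*phi/2) * B
    return 1j*np.exp(-1j*alpha)*np.block([[Z, top], [top.conj().T, Z]])

def M(nsig, alpha, phi):           # T_in R^- T_rf R^+ at (k2f - k1f)L = pi
    Tin, Trf = T_pair(nsig, np.pi/2)
    return Tin @ R_pm(alpha, phi, -1) @ Trf @ R_pm(alpha, phi, +1)

alpha, phi = 0.7, 1.1
Mx, My = M(sx, alpha, phi), M(sy, alpha, phi)
# SOC field along x vs y: the two cases differ exactly by phi -> phi + pi,
# i.e. the junction is switched between 0 and pi behavior
assert np.allclose(My, -Mx)

# Andreev levels from Det(I - M) = 0 with alpha = arccos(E/Delta), Delta = 1
E_y = np.cos(phi/2)                # zero of one block of I - M(sy) here
assert abs(np.linalg.det(np.eye(4) - M(sy, np.arccos(E_y), phi))) < 1e-10
E_x = np.sin(phi/2)                # = cos((pi - phi)/2): spectrum shifted by pi
assert abs(np.linalg.det(np.eye(4) - M(sx, np.arccos(E_x), phi))) < 1e-10
```

In this convention both $M$ matrices come out block diagonal and proportional to $\sigma_0$ in each block, so the determinant factorizes exactly as in the expressions below.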
When the effective magnetic field of SOC is along the $y$ direction, substituting Eq. (S15,S17) into Eq. (S20) and taking $(k_{\rm 2f}-k_{\rm 1f})L=\pi$, we have $$\begin{aligned} {\rm Det} (I_{4\times 4}-T_{\rm soc}^{\rm in}R_{\rm fm}^{-}T_{\rm soc}^{\rm rf}R_{\rm fm}^{+})={\rm Det}\left(\begin{array}{cc} (1+e^{-2i\alpha-i\phi})\sigma_0 & 0 \\ 0 &(1+e^{-2i\alpha+i\phi})\sigma_0 \end{array}\right)=0,\end{aligned}$$ which gives the two-fold degenerate Andreev levels $E=\pm \Delta \cos(\frac{\phi+\pi}{2})$. When the effective magnetic field of SOC is along the $x$ or $-x$ direction, we have $$\begin{aligned} {\rm Det} (I_{4\times 4}-T_{\rm soc}^{\rm in}R_{\rm fm}^{-}T_{\rm soc}^{\rm rf}R_{\rm fm}^{+})={\rm Det}\left(\begin{array}{cc} (1-e^{-2i\alpha-i\phi})\sigma_0 & 0 \\ 0 &(1-e^{-2i\alpha+i\phi})\sigma_0 \end{array}\right)=0,\end{aligned}$$ which gives $E=\pm \Delta \cos(\frac{\phi}{2})$. Persistent triplet helix ------------------------ In the two-dimensional case, SOC in general induces a destructive interference of different transverse modes, as shown in Fig. 1(c) of the main text. This leads to a rapid decay of the triplet pairing function. However, for some particular forms of SOC, triplet pairs can precess in a coherent way, resulting in a long range proximity effect. Below, we show how a long range proximity effect of the triplet pairing function can be achieved in a 2DEG system. 2DEGs usually possess two kinds of SOC, namely the Rashba and Dresselhaus terms, given by $$\begin{aligned} \label{SOC-2} H_{\rm R\&D}=H_{\rm Rashba}+H_{\rm Dresselhaus}=\alpha(k_x\sigma_y-k_y\sigma_x)+\beta(k_x\sigma_x-k_y\sigma_y),\end{aligned}$$ where $\alpha$ and $\beta$ are the coefficients of the Rashba and Dresselhaus SOCs, respectively. In the case of $\alpha=\beta$, the SOC Hamiltonian takes the form $$\begin{aligned} \label{SOC-2-1} H_{\rm R\&D}=\alpha(k_x-k_y)(\sigma_x+\sigma_y).
\end{aligned}$$ The Hamiltonian simplifies to $H_{\rm R\&D}=2\alpha k_x \sigma_z$ upon redefining $(k_x-k_y)/\sqrt{2}\rightarrow k_x$ and $(\sigma_x+\sigma_y)/\sqrt{2}\rightarrow \sigma_z$. The two spin bands with opposite spins are shifted in opposite directions. The Hamiltonian in the spin and Nambu space has the form $$\begin{aligned} \label{Ham-3} H=\frac{1}{2}\sum_{\bm k,i}\left(\begin{array}{c} c_{\bm k,i}^{\dagger} \\ c_{-\bm k,i} \end{array} \right)^{\rm T} \left( \begin{array}{cc} \frac{k^2-(-1)^iQ k_x}{2m}-\mu & 0 \\ 0&-\left( \frac{k^2-(-1)^iQ k_x}{2m}-\mu\right) \end{array} \right) \left(\begin{array}{c} c_{\bm k,i} \\ c_{-\bm k,i}^{\dagger} \end{array} \right) \end{aligned}$$ where $i=1\,(2)$ labels the spin parallel (antiparallel) to the $z$ direction, $Q=4m\alpha$, and $m$ is the effective mass of the 2DEG. We construct the triplet helix operators as $$\begin{aligned} \label{SOC-3} \hat{d}^{-}&=&\sum_{\{\bm k, i\}} \delta(\epsilon_{ {\bf k},i}-\mu)c_{ {-\bf k}+(-1)^i {\bf Q},i}c_{\bm{k},i}, \\ \label{SOC-4} \hat{d}^{+}&=&\sum_{\{\bm k, i\}} \delta(\epsilon_{ {\bf k},i}-\mu)c^{\dagger}_{ {\bf k},i} c^{\dagger}_{ {-\bf k}+(-1)^i {\bf Q},i},\end{aligned}$$ where $$\epsilon_{\bm k,i}=\frac{k^2-(-1)^i Q k_x}{2m} \label{eq:relation1}$$ and the summation is restricted to the Fermi surface; we choose the half-space $\{\bm k,i\}=\{k_x<(-1)^i Q/2,k_y\}$ to avoid double counting. Due to the dispersion relation in Eq. \[eq:relation1\], which satisfies $\epsilon_{-\bm k+(-1)^i\bm Q,i}=\epsilon_{\bm k,i}$, the operators $\hat{d}^{\pm}$ defined in Eq. (\[SOC-3\],\[SOC-4\]) commute with the Hamiltonian in Eq. \[Ham-3\] $$\begin{aligned} \label{SOC-5} [H,\hat{d}^{-}]=\sum_{\{\bm k, i\}} \delta(\epsilon_{\bm{k},i}-\mu)\left(\epsilon_{-\bm{k}+(-1)^i\bm{Q},i}-\epsilon_{\bm{k},i}\right)c_{-\bm{k}+(-1)^i\bm{Q},i}c_{\bm{k},i}=0,\end{aligned}$$ $$\begin{aligned} \label{SOC-6} [H,\hat{d}^{+}]=\sum_{\{\bm k, i\}} \delta(\epsilon_{\bm{k},i}-\mu)\left(\epsilon_{-\bm{k}+(-1)^i\bm{Q},i}-\epsilon_{\bm{k},i}\right)c_{\bm{k},i}^{\dagger}c^{\dagger}_{-\bm{k}+(-1)^i\bm{Q},i}=0.
\end{aligned}$$ which is the reason why SOC with $\alpha=\beta$ does not cause a decay of the triplet ${\mbox{\boldmath$d$}}$ helix for the center-of-mass momentum $Q$. To further confirm the long range triplet order in the dirty limit, we derive the Usadel equation in the proximity region with isotropic spin-independent scattering time $\gamma$. First, we derive the dynamic equation of the annihilation operator $$\label{anih-1} {\psi}_{\mu}(t,\bm x)=e^{i\hat{H}t}\hat{\psi}_{\mu}(\bm x)e^{-i\hat{H}t}=e^{i\hat{H} t}\left(\sum_{\bm k}\hat{\psi}_{\mu}(\bm k)e^{i\bm k \cdot \bm x}\right)e^{-i\hat{H}t}$$ where $\hat{H}=\sum_{\mu,\nu,\bm k}\psi^{\dagger}_{\mu}(\bm k) H_{\mu \nu}(\bm k)\psi_{\nu}(\bm k)$ and $$\begin{aligned} \label{Usadel-2} H(\bm k)=\left(\begin{array}{cc} \frac{k^2}{2m}&0 \\ 0& \frac{k^2}{2m} \end{array}\right)+\bm M \cdot \bm \sigma+\bm h(\bm k)\cdot \bm \sigma, \end{aligned}$$ where $\bm M$ is the magnetization and $\bm h(\bm k)$ is the SOC field. Therefore we have $$\begin{aligned} \label{Dy-1} i\partial_t \hat{\psi}_{\mu}(t,\bm x)&=&e^{i\hat{H}t}\left[\sum_{\bm k}\hat{\psi}_{\mu}(\bm k)e^{i\bm k \cdot \bm x},\hat{H} \right] e^{-i\hat{H}t}\nonumber \\ &=& e^{i\hat{H}t}\sum_{\bm{k,k'},\lambda,\nu}\left[\hat{\psi}_{\mu}(\bm k)e^{i\bm k \cdot \bm x},\psi^{\dagger}_{\lambda}(\bm k') H_{\lambda \nu}(\bm k')\psi_{\nu}(\bm k') \right]e^{-i\hat{H}t}\nonumber \\ &=& e^{i\hat{H} t} \sum_{\bm{k,k'},\lambda,\nu}\left(\hat{\psi}_{\mu}(\bm k)\psi^{\dagger}_{\lambda}(\bm k') \psi_{\nu}(\bm k') -\psi^{\dagger}_{\lambda}(\bm k') \psi_{\nu}(\bm k')\hat{\psi}_{\mu}(\bm k)\right)H_{\lambda \nu}(\bm k')e^{i\bm k \cdot \bm x}e^{-i\hat{H}t}\nonumber \\ &=& e^{i\hat{H} t} \sum_{\bm{k,k'},\lambda,\nu}\left(\hat{\psi}_{\mu}(\bm k)\psi^{\dagger}_{\lambda}(\bm k') \psi_{\nu}(\bm k') +\psi^{\dagger}_{\lambda}(\bm k')\hat{\psi}_{\mu}(\bm k) \psi_{\nu}(\bm k')\right)H_{\lambda \nu}(\bm k')e^{i\bm k \cdot \bm x}e^{-i\hat{H}t}\nonumber \\ &=& e^{i\hat{H} t} 
\sum_{\bm{k,k'},\lambda,\nu} \delta_{\bm{kk'}}\delta_{\mu\lambda} \psi_{\nu}(\bm k')H_{\lambda \nu}(\bm k')e^{i\bm k \cdot \bm x}e^{-i\hat{H}t}\nonumber \\ &=& \sum_{\bm k,\nu} H_{\mu\nu}(\bm k)e^{i\hat{H}t} \hat{\psi}_{\nu}e^{i\bm k\cdot \bm x} e^{-i\hat{H} t}\nonumber \\ &=& \sum_{\nu} \left(H_{\mu\nu}(-i\bm \nabla) \sum_{\bm k}e^{i\hat{H}t} \hat{\psi}_{\nu}e^{i\bm k \cdot \bm x} e^{-i\hat{H} t}\right)\nonumber \\ &=& \sum_{\nu} H_{\mu\nu}(-i\bm \nabla) \hat{\psi}_{\nu}(t,\bm x). \end{aligned}$$ Similarly, for the creation operator $$\label{crea-1} {\psi}^{\dagger}_{\mu}(t,\bm x)=e^{i\hat{H}t}\hat{\psi}^{\dagger}_{\mu}(\bm x)e^{-i\hat{H}t}=e^{i\hat{H} t}\left(\sum_{\bm k}\hat{\psi}^{\dagger}_{\mu}(\bm k)e^{-i\bm k \cdot \bm x}\right)e^{-i\hat{H}t},$$ we have $$\begin{aligned} \label{Dy-2} i\partial_t \hat{\psi}_{\mu}^{\dagger}(t,\bm x)&=&e^{i\hat{H}t}\left[\sum_{\bm k}\hat{\psi}^{\dagger}_{\mu}(\bm k)e^{-i\bm k \cdot \bm x},\hat{H} \right] e^{-i\hat{H}t}\nonumber \\ &=& e^{i\hat{H}t}\sum_{\bm{k,k'},\lambda,\nu}\left[\hat{\psi}^{\dagger}_{\mu}(\bm k)e^{-i\bm k \cdot \bm x},\psi^{\dagger}_{\lambda}(\bm k') H_{\lambda \nu}(\bm k')\psi_{\nu}(\bm k') \right]e^{-i\hat{H}t}\nonumber \\ &=& e^{i\hat{H} t} \sum_{\bm{k,k'},\lambda,\nu}\left(\hat{\psi}^{\dagger}_{\mu}(\bm k)\psi^{\dagger}_{\lambda}(\bm k') \psi_{\nu}(\bm k') -\psi^{\dagger}_{\lambda}(\bm k') \psi_{\nu}(\bm k')\hat{\psi}^{\dagger}_{\mu}(\bm k)\right)H_{\lambda \nu}(\bm k')e^{-i\bm k \cdot \bm x}e^{-i\hat{H}t}\nonumber \\ &=& e^{i\hat{H} t} \sum_{\bm{k,k'},\lambda,\nu}-\left(\psi^{\dagger}_{\lambda}(\bm k') \hat{\psi}^{\dagger}_{\mu}(\bm k)\psi_{\nu}(\bm k') +\psi^{\dagger}_{\lambda}(\bm k') \psi_{\nu}(\bm k')\hat{\psi}^{\dagger}_{\mu}(\bm k)\right)H_{\lambda \nu}(\bm k')e^{-i\bm k \cdot \bm x}e^{-i\hat{H}t}\nonumber \\ &=& e^{i\hat{H} t} \sum_{\bm{k,k'},\lambda,\nu} -\delta_{\bm{kk'}}\delta_{\mu\nu} \psi_{\lambda}^{\dagger}(\bm k')H_{\lambda \nu}(\bm k')e^{-i\bm k \cdot \bm x}e^{-i\hat{H}t}\nonumber \\ &=&
-\sum_{\bm k,\lambda} H_{\lambda\mu}(\bm k)e^{i\hat{H}t} \hat{\psi}_{\lambda}^{\dagger}e^{-i\bm k\cdot \bm x} e^{-i\hat{H} t}\nonumber \\ &=&- \sum_{\lambda} \left(H_{\lambda\mu}(i\bm \nabla) \sum_{\bm k}e^{i\hat{H}t} \hat{\psi}^{\dagger}_{\lambda}e^{-i\bm k \cdot \bm x} e^{-i\hat{H} t}\right)\nonumber \\ &=& -\sum_{\lambda} H_{\lambda\mu}(i\bm \nabla) \hat{\psi}^{\dagger}_{\lambda}(t,\bm x). \end{aligned}$$ The triplet pairs can be described by the lesser Green’s function $f_{\mu\nu}^<(t,\bm x;t',\bm x')=\langle\hat{\psi}_{\nu}(t',\bm x')\hat{\psi}_{\mu}(t,\bm x)\rangle$ which satisfies $$\begin{aligned} \label{Usadel-3} i\partial_t f^<_{\mu\nu}&=& \sum_{\lambda}H_{\mu\lambda}(-i\bm \nabla_{\bm x}) f^{<}_{\lambda\nu}(t,\bm x; t',\bm x')\\ \label{Usadel-3-1} i\partial_{t'} f^<_{\mu\nu}&=&\sum_{\lambda}H_{\nu\lambda}(-i\bm \nabla_{\bm x'})f^<_{\mu\lambda}(t,\bm x; t', \bm x'). \end{aligned}$$ Because $$\begin{aligned} \label{Usadel-4} H(-i\bm\nabla)=\left(\begin{array}{cc} -\frac{\nabla^2}{2m}&0 \\ 0& -\frac{\nabla^2}{2m} \end{array}\right)+\bm M \cdot \bm \sigma+\bm h(-i\bm \nabla)\cdot \bm \sigma, \end{aligned}$$ we have $$\begin{aligned} \label{Usadel-5} H^{\rm T}(-i\bm\nabla)=\left(\begin{array}{cc} -\frac{\nabla^2}{2m}&0 \\ 0& -\frac{\nabla^2}{2m} \end{array}\right)+\bm M \cdot \bm \sigma^*+\bm h(-i\bm \nabla)\cdot \bm \sigma^*. \end{aligned}$$ Combining Eq. (\[Usadel-3\],\[Usadel-3-1\],\[Usadel-4\],\[Usadel-5\]), we have $$\begin{aligned} \label{Usadel-6} i\partial_t f^<(t,\bm x;t',\bm x')&=& H(-i\bm \nabla_{\bm x}) f^{<}(t,\bm x; t',\bm x')\\ \label{Usadel-6-1} i\partial_{t'} f^<(t,\bm x;t',\bm x')&=&f^<(t,\bm x; t', \bm x')H^{\rm T}(-i\bm \nabla_{\bm x'}).
\end{aligned}$$ We define $\bm R=(\bm x+\bm x')/2$, $\bm r=\bm x-\bm x'$, $T=(t+t')/2$, $\tau=t-t'$, $\bm \nabla_{\bm R}=\bm \nabla_{\bm x}+\bm \nabla_{\bm x'}$, $\bm \nabla_{\bm r}=(\bm \nabla_{\bm x}-\bm \nabla_{\bm x'})/2$, $\partial_{T}=\partial_t+\partial_{t'}$ and $\partial_{\tau}=(\partial_t-\partial_{t'})/2$. Therefore, Eq. (\[Usadel-6\],\[Usadel-6-1\]) can be written in the $(T,\tau,\bm R,\bm r)$ coordinates as $$\begin{aligned} \label{Usadel-7} \rm{Eq}.(\ref{Usadel-6})+\rm{Eq}.(\ref{Usadel-6-1})=i\partial_{T}f^<&=&\left(-\frac{\bm \nabla^2_{\bm r}}{m}-\frac{\bm \nabla^2_{\bm R}}{4m}\right)f^<+\bm M\cdot \bm \sigma f^<+f^< \bm M \cdot \bm \sigma^*+\left(\bm h(-i\bm{\nabla_{R}}/2)\cdot \sigma f^<+f^<\bm h(\bm {-i\nabla_{R}}/2)\cdot\bm\sigma^*\right)\nonumber \\ &+&\left(\bm{h(-i\nabla_{r})}\cdot \sigma f^<-f^<\bm h(\bm {-i\nabla_{r}})\cdot\bm\sigma^*\right)\\ \label{Usadel-7-1} \rm{Eq}.(\ref{Usadel-6})-\rm{Eq}.(\ref{Usadel-6-1})=2i\partial_{\tau}f^<&=&-\frac{\bm{\nabla_{R}\cdot\nabla_{r}}}{m}f^<+\bm M\cdot \bm \sigma f^<-f^< \bm M \cdot \bm \sigma^*+\left(\bm h(-i\bm{\nabla_{R}}/2)\cdot \sigma f^<-f^<\bm h(\bm {-i\nabla_{R}}/2)\cdot\bm\sigma^*\right)\nonumber \\ &+&\left(\bm{h(-i\nabla_{r})}\cdot \sigma f^<+f^<\bm h(\bm {-i\nabla_{r}})\cdot\bm\sigma^*\right). \end{aligned}$$ When only impurity scattering is considered, the equations of motion for the retarded and lesser functions in the center-of-mass coordinates are the same [@SRammer:2007_a]. Therefore, Eq. \[Usadel-7-1\] also applies to the anomalous retarded Green’s function $f^R(E,x)$.
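In the transformation above, the kinetic term follows from the operator identity $\bm\nabla_{\bm x}^2+\bm\nabla_{\bm x'}^2=\tfrac{1}{2}\bm\nabla_{\bm R}^2+2\bm\nabla_{\bm r}^2$, which, after division by $2m$, produces the coefficients $1/(4m)$ and $1/m$ appearing in Eq. (\[Usadel-7\]). A finite-difference check on an arbitrary separable test function (the test function and evaluation point below are purely illustrative choices):

```python
import numpy as np

# test function f(x, x') = g(R) h(r) with R = (x + x')/2, r = x - x'
g = lambda R: np.exp(-R**2)
h = lambda r: np.cos(3*r)
F = lambda x, xp: g((x + xp)/2) * h(x - xp)

x0, xp0, eps = 0.4, -0.7, 1e-4

def d2(f, x, step):
    # central second-difference approximation of f''(x)
    return (f(x + step) - 2*f(x) + f(x - step)) / step**2

# left side: (d^2/dx^2 + d^2/dx'^2) F
lhs = d2(lambda x: F(x, xp0), x0, eps) + d2(lambda xp: F(x0, xp), xp0, eps)

# right side: ((1/2) d^2/dR^2 + 2 d^2/dr^2) acting on g(R) h(r)
R0, r0 = (x0 + xp0)/2, x0 - xp0
rhs = 0.5*d2(g, R0, eps)*h(r0) + 2*g(R0)*d2(h, r0, eps)

assert abs(lhs - rhs) < 1e-5
```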
To get a more compact form of the Usadel equation for $f^R(E,x)$, we define $$\begin{aligned} \hat{v}_{\rm so}&=&\frac{\partial \hat{H}_{\rm so}}{\partial \bm k}=(\alpha\sigma_y+\beta \sigma_x)\bm{e_x}-(\alpha \sigma_x+\beta\sigma_y)\bm{e_y},\nonumber \\ \hat{H}_{\rm so}&=&(\beta \hat{p}_x-\alpha \hat{p}_y)\sigma_x+(\alpha \hat{p}_x-\beta \hat{p}_y)\sigma_y=\hat{p}_x(\beta\sigma_x+\alpha\sigma_y)+\hat{p}_y(-\alpha\sigma_x-\beta\sigma_y)=\bm{\hat{p}}\cdot \bm{\hat{v}}_{\rm so},\nonumber \\ f^R&=&\left(d^R_0\sigma_0+\bm d^R\cdot \bm\sigma\right)i\sigma_y=\mathbb{D}i\sigma_y, \ \ \ \mathbb{D}=d^R_0\sigma_0+\bm d^R\cdot \bm\sigma. \end{aligned}$$ By using the fact that $i\sigma_y \bm \sigma^*=-\bm \sigma i\sigma_y$, the Usadel equation of $f^R$ can be simplified to the equation of $\mathbb{D}$ as $$\begin{aligned} \label{Usadel-8} i\partial_T \mathbb{D}&=&-\left(\frac{\bm \nabla^2_{\bm r}}{m}+\frac{\bm \nabla^2_{\bm R}}{4m}\right)\mathbb{D}+[(\bm M+\bm h(-i\bm{\nabla_{R}}/2)) \cdot \bm \sigma,\mathbb{D}]+\{\bm h(-i\bm{\nabla_{r}})\cdot \sigma,\mathbb{D}\}, \\ \label{Usadel-8-1} 2i\partial_{\tau}\mathbb{D}&=&-i\bm{\nabla_{R}}\{\frac{\hat{\bm v}}{2},\mathbb{D}\}+\{\bm M\cdot \bm \sigma,\mathbb{D}\}+[\bm h(-i\bm{\nabla_{r}})\cdot\bm\sigma,\mathbb{D}]. \end{aligned}$$ When $f^R$ varies slowly in the proximity region, its dynamics is dominated by Eq. \[Usadel-8-1\], which is the Eilenberger equation in the presence of both magnetization and SOC. Therefore, in this case, it is easily seen that the magnetization mixes the singlet pair with the triplet pair whose $\bm d$-vector is parallel to the magnetization, while the SOC makes the $\bm d$ (spin) vector precess in the plane perpendicular to the SOC field, in analogy with the action of SOC on a single spin.
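The algebraic step from $f^R$ to $\mathbb{D}$ rests on the identity $i\sigma_y\bm\sigma^*=-\bm\sigma i\sigma_y$ and on $(i\sigma_y)^{-1}=-i\sigma_y$, so that $\mathbb{D}=-f^R i\sigma_y$, with $d_0^R=\tfrac{1}{2}{\rm Tr}\,\mathbb{D}$ and $d_k^R=\tfrac{1}{2}{\rm Tr}(\sigma_k\mathbb{D})$. Both facts are easily verified numerically (the sample $d$-vector below is arbitrary):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
isy = 1j * sy

# identity used in the text: i sigma_y sigma_k^* = -sigma_k i sigma_y
for s in (sx, sy, sz):
    assert np.allclose(isy @ s.conj(), -s @ isy)

# round trip of the parameterization f^R = (d0 + d.sigma) i sigma_y
d0 = 0.3 - 0.2j
d = np.array([0.1 + 0.4j, -0.7j, 0.25])
f = (d0*np.eye(2) + d[0]*sx + d[1]*sy + d[2]*sz) @ isy

D = -f @ isy                        # (i sigma_y)^(-1) = -i sigma_y
d0_back = np.trace(D) / 2
d_back = np.array([np.trace(s @ D)/2 for s in (sx, sy, sz)])
assert np.isclose(d0_back, d0)
assert np.allclose(d_back, d)
```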
When the pair function varies along the $x$ direction, the Usadel equation in the presence of only SOC has the form $$\begin{aligned} \label{Usadel-1} D\nabla^2 d_z&=&4DA d_z-D C\partial_x d_x-D C'\partial_y d_y+2iE d_z,\nonumber \\ D\nabla^2 d_x&=&D (A+B )d_x+D C\partial_x d_z+2iE d_x, \nonumber \\ D \nabla^2 d_y&=&D (A-B )d_y+D C'\partial_y d_z+2iE d_y. \end{aligned}$$ Here $D=v_f^2\gamma/2$ is the diffusion constant, $\gamma$ is the isotropic spin-independent scattering time, $A=2(\alpha^2+\beta^2)m^2$, $B=4\alpha\beta m^2$, $C=4(\alpha+\beta)m$ and $C'=4(\alpha-\beta)m$. In the spin-triplet-superconductor/SOC junction, we assume the $\bm d$-vector of the spin-triplet pairs is along the $x$ direction in the bulk of the superconductor. In the SOC region, we assume the $\bm d$-vector depends only on $x$, with the boundary condition ${\bf d}=(d_{0}, 0, 0)$ at the SC/SOC interface; the corresponding solution is $$\begin{aligned} \label{Usadel-sol} \bm d(x,y)=d_{0} e^{-\lambda x}(\cos(qx) \bm{e_x}+\sin(qx)\bm{e_z}). \end{aligned}$$ The solution, Eq. (\[Usadel-sol\]), clearly shows the oscillating and decaying behaviors of the $\bm d$-vector. When $\alpha=\beta$, we have $4A=A+B=Q^2$ and $C=2Q$. Taking $E=0$, we obtain $\lambda=0$ and $q=Q$, so Eq. (\[Usadel-sol\]) recovers the solution of the persistent triplet helix mode in the clean limit (the green lines in Fig. \[shift-trans\]). When $\alpha\neq\beta$, the triplet helix mode is no longer conserved. For example, we consider the case with only Rashba SOC and find a damping mode with $q+i\lambda=\sqrt{2\pm 2i\sqrt{7}}Q/4$, as shown in Fig. \[shift-trans\].
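The qualitative difference between the two regimes can be visualized directly from the solution form above: for $\lambda=0$, $q=Q$ the $\bm d$-vector rotates in the $x$-$z$ plane with constant magnitude (persistent helix), while any $\lambda>0$ produces an exponentially damped spiral. The snippet below uses illustrative numerical values of $(\lambda,q)$, not the specific roots quoted in the text:

```python
import numpy as np

Q, d0 = 1.0, 1.0
x = np.linspace(0.0, 20.0, 2001)

def d_vec(lam, q):
    # d(x) = d0 exp(-lam x) (cos(q x), 0, sin(q x))
    return d0*np.exp(-lam*x)[:, None] * np.stack(
        [np.cos(q*x), np.zeros_like(x), np.sin(q*x)], axis=1)

# alpha = beta: persistent helix, lam = 0 and q = Q, so |d| stays constant
helix = d_vec(0.0, Q)
assert np.allclose(np.linalg.norm(helix, axis=1), d0)

# alpha != beta (illustrative damping rate): |d| decays as exp(-lam x)
damped = d_vec(0.3, 0.8*Q)
assert np.linalg.norm(damped[-1]) < 1e-2 * d0
```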
![ The spatial dependence of the ${\bm d}$-vector of triplet pairs for $\alpha=\beta$ (green) and $\alpha=0$ (blue). Green lines ($\alpha=\beta$) show long range oscillations while blue lines ($\alpha=0$) decay rapidly. The solid (dashed) lines indicate $d_x$ ($d_z$) of triplet pairs. []{data-label="shift-trans"}](triplet-evolution "fig:"){width="0.6\columnwidth"}
--- abstract: 'A novel method is introduced in order to treat the dissipative dynamics of quantum systems interacting with a bath of classical degrees of freedom. The method is based upon an extension of the Nosé-Hoover chain (constant temperature) dynamics to quantum-classical systems. Both adiabatic and nonadiabatic numerical calculations on the relaxation dynamics of the spin-boson model show that the quantum-classical Nosé-Hoover chain dynamics represents the thermal noise of the bath in an accurate and simple way. Numerical comparisons, both with the constant energy calculation and with the quantum-classical Brownian motion treatment of the bath, show that the quantum-classical Nosé-Hoover chain dynamics can be used to introduce dissipation in the evolution of a quantum subsystem even with just one degree of freedom for the bath. The algorithm can be computationally advantageous in modeling, within computer simulation, the dynamics of a quantum subsystem interacting with complex molecular environments.' author: - 'Alessandro Sergi[^1]' title: 'Deterministic constant-temperature dynamics for dissipative quantum systems' --- One of the most natural ways to make a quantum system follow a dissipative dynamics is to put it into contact with a thermal bath. Since one is usually not interested in the detailed time evolution of the bath degrees of freedom, it may also be convenient to approximate the bath dynamics by means of a classical description. When one faces the problem of calculating the influence of an environment over a quantum subsystem, this approach leads to the representation of (a certain class of) open quantum systems [@petruccione] by means of mixed quantum-classical theories. Examples can be found in many phenomena connected to quantum optics [@qopt] and quantum information theory [@lebellac].
Typically, this is the case in cosmology where, due to the persistent lack of a full quantum theory of gravitation, one is forced to resort to approximate formalisms in order to treat the interaction of quantum and classical degrees of freedom [@wald]. In many situations, condensed-matter quantum systems at finite temperature can also be treated with mixed quantum-classical theories. In light of the above discussion, one can certainly conclude that mixed quantum-classical approximations can be used in open quantum systems to describe many processes which are relevant to various fields of research. It is worth noting that mixed quantum-classical theories [@qc-bracket] applied to condensed-matter systems can treat classical molecular baths which can be as complex as *state-of-the-art* molecular dynamics simulation techniques permit nowadays. In the original constant-energy (NVE) formulation, mixed quantum-classical algorithms require many environmental degrees of freedom in order to describe the dissipative dynamics of the quantum subsystem. This has been shown within a path-integral influence functional approach [@makri]. However, quantum-classical dynamics has been recently generalized [@b3] in order to be unified with the constant-temperature simulation method originally developed by Nosé and Hoover [@nose] (more generally, the author in Ref. [@b3] proposed a scheme in order to unify quantum-classical dynamics with many energy-preserving phase space flows [@bs]). Therefore, one can think of using quantum-classical Nosé-Hoover (NH) dynamics in order to describe dissipative effects and, in particular, the constant-temperature relaxation dynamics of a relevant quantum subsystem. In practice, in order to overcome possible problems with ergodicity in classical phase space, it is more convenient to generalize the Nosé-Hoover chain (NHC) method of Martyna and coworkers [@martyna] to the quantum-classical case and to adopt it in place of the NH dynamics.
However, the choice of the NHC dynamics can be viewed as a mere technical point with no deep conceptual implication as far as quantum-classical theories are concerned. In this Communication, by simulating the relaxation dynamics of the spin-boson model [@leggett], I show that the quantum-classical NHC dynamics can be adopted in order to describe dissipative effects in quantum-classical systems by means of a minimal (with respect to the number of degrees of freedom explicitly taken into account) representation of the classical bath. The quantum-classical Hamiltonian of the spin-boson model reads $$\begin{aligned} \hat{H}_{\rm sb}=-\hbar\Omega\hat{\sigma}_x+\sum_{j=1}^{N_b} \left(\frac{P_j^2}{2M_j}+\frac{1}{2}M_j\omega_j^2R_j^2 -c_jR_j\hat{\sigma}_z\right) \label{eq:Hsb}\end{aligned}$$ where $2\hbar\Omega$ is the energy gap of the isolated two-state system, $\hat{\sigma}_z$ and $\hat{\sigma}_x$ are Pauli matrices, $R_j$ and $P_j$ are the coordinates and momenta, respectively, of $N_b$ harmonic oscillators with masses $M_j$ and frequencies $\omega_j$ making up the classical bath. The other parameters of the system, *i.e.*, $(M_j,\omega_j,c_j)$, can be fixed by requiring that the harmonic bath is described by an Ohmic spectral density. In order to study the relaxation dynamics of this model [@qc-sb], one can assume that the system is initially in an uncorrelated state with the quantum subsystem in state $\vert\rm{up}\rangle$ and the classical harmonic bath in thermal equilibrium. The corresponding quantum-classical density matrix can be found starting from the full quantum expression by means of a *partial* Wigner transform [@wigner] and was explicitly written down in Ref. [@qc-sb].
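For reference, an Ohmic spectral density $J(\omega)=(\pi/2)\xi\omega e^{-\omega/\omega_c}$ is commonly discretized into $N_b$ modes by logarithmic sampling; the rule below, including the choice of cutoff $\omega_c$, is one standard convention shown purely as an illustration, not necessarily the exact parameterization used in this work:

```python
import numpy as np

def ohmic_bath(Nb, xi, omega_c, omega_max, m=1.0, hbar=1.0):
    """Discretize J(w) = (pi/2) xi w exp(-w/omega_c) into Nb modes.

    Logarithmic sampling with the highest frequency pinned at
    omega_max (illustrative convention, equal masses m).
    """
    j = np.arange(1, Nb + 1)
    w0 = (omega_c / Nb) * (1.0 - np.exp(-omega_max / omega_c))
    w = -omega_c * np.log(1.0 - j * w0 / omega_c)   # mode frequencies
    c = w * np.sqrt(xi * hbar * m * w0)             # coupling constants
    return w, c

# illustrative values matching the dimensionless parameters used below
w, c = ohmic_bath(Nb=200, xi=0.007, omega_c=1.0, omega_max=3.0)
assert np.all(np.diff(w) > 0)       # frequencies strictly increasing
assert np.isclose(w[-1], 3.0)       # top mode sits at omega_max
```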
The NVE quantum-classical dynamics of this system is formally exact [@qc-sb] (*i.e.*, the quantum-classical equations of motion have the same form that would arise within a full quantum treatment) and numerical results, which agree very well with those obtained by means of more sophisticated path-integral iterative techniques [@makri], are available in the literature [@qc-sb]. Such NVE results, which were obtained with $N_b=200$, will be compared here with those obtained from calculations performed by means of the quantum-classical NHC dynamics and with the bath made up of just one harmonic oscillator ($N_b=1$). For the spin-boson model, the quantum-classical NHC dynamics can be defined upon introducing an extended Hamiltonian with a chain of just two thermostat variables $$\hat{H}_{\rm (NHC)}=\hat{H}_{\rm sb}+ \frac{p_{\eta_1}^2}{2m_{\eta_1}} +\frac{p_{\eta_2}^2}{2m_{\eta_2}}+N_bk_BT\eta_1+k_BT\eta_2 \;,$$ where $T$ is the temperature of the bath thermalizing the quantum spin, $\eta_1$, $\eta_2$, $p_{\eta_1}$, $p_{\eta_2}$ are the Nosé variables, and $m_{\eta_1}$, $m_{\eta_2}$ are fictitious masses. Following Ref. [@b3], the quantum-classical NHC bracket can be defined as: $$\begin{aligned} \left(\hat{H}_{\rm (NHC)}\right.&,&\left.\hat{\sigma}_z\right)_{\rm (NHC)} =\frac{i}{\hbar} \left[\begin{array}{cc}\hat{H}_{\rm (NHC)} & \hat{\sigma}_z\end{array}\right] \nonumber \\ &\cdot& \left[ \begin{array}{cc} 0 & 1+\frac{\hbar\Lambda^{\rm (NHC)}}{2i}\\ -1-\frac{\hbar\Lambda^{\rm (NHC)}}{2i} & 0\end{array}\right] \cdot \left[\begin{array}{c}\hat{H}_{\rm (NHC)}\\ \hat{\sigma}_z\end{array}\right] \;,\nonumber\\ \label{eq:qcnosebracket}\end{aligned}$$ where $\Lambda^{\rm (NHC)}$ is a bracket operator whose action between two quantum-classical variables is defined as: $$\hat{\xi}(X)\Lambda^{\rm (NHC)}\hat{\chi}(X)=-\sum_{IJ} \frac{\partial\hat{\xi}}{\partial X_I} {\cal B}_{IJ}^{\rm (NHC)} \frac{\partial\hat{\chi}}{\partial X_J}\;.
\label{eq:noseop}$$ Adopting as a convention for the point of the extended Nosé phase space $X=(R,\eta_1,\eta_2,P,p_{\eta_1},p_{\eta_2})$, the antisymmetric NHC matrix reads: $$\begin{aligned} \mbox{\boldmath$\cal B$}^{\rm (NHC)} &=&\left[\begin{array}{cccccc} 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\\ -1 & 0 & 0 & 0 & -P & 0\\ 0 & -1 & 0 & P & 0 & -p_{\eta_1}\\ 0 & 0 & -1 & 0 & p_{\eta_1} & 0 \end{array}\right]\;.\end{aligned}$$ It is worth noting that, since the Nosé coordinates are intrinsically classical, a quantum-classical treatment of such a constant-temperature dynamics is conceptually correct and, moreover, allows one to address nonadiabatic effects. Other approaches [@grillitosatti], which do not use a quantum-classical bracket, do not seem to permit nonadiabatic calculations in a straightforward manner. The equations of motion can be written in the adiabatic basis as $$\frac{d}{dt}\chi^{\alpha\alpha'}(X,t) =\sum_{\beta\beta'}i{\cal L}_{\alpha\alpha',\beta\beta'}^{\rm (NHC)} \chi^{\beta\beta'}(X,t) \label{eq:qcnoseqofmadb}$$ where $$\begin{aligned} i{\cal L}_{\alpha\alpha',\beta\beta'}^{\rm (NHC)} &=&i\omega_{\alpha\alpha'}\delta_{\alpha\beta}\delta_{\alpha'\beta'} +iL_{\alpha\alpha'}^{\rm (NHC)}\delta_{\alpha\beta}\delta_{\alpha'\beta'} \nonumber\\ &-&J_{\alpha\alpha',\beta\beta'} \\ iL_{\alpha\alpha'}^{\rm (NHC)}&=&\frac{1}{2}\sum_{IJ}{\cal B}_{IJ}^{\rm (NHC)} \frac{\partial(H^{\alpha}_{\rm (NHC)} +H^{\alpha'}_{\rm (NHC)})}{\partial X_J}\frac{\partial}{\partial X_I} \;,\nonumber\\\end{aligned}$$ and $\omega_{\alpha\alpha'}=(E_{\alpha}(R)-E_{\alpha'}(R))/\hbar$. The quantum transition operator, $J$, is defined as in the constant-energy case [@b3]. One wants to calculate the time-dependent quantum-classical average: $$\langle\hat{\sigma}_z(X,t)\rangle =\sum_{\alpha\alpha'}\int dX \rho^{\alpha'\alpha}_{W} \sigma_z^{\alpha\alpha'}(X,t) \label{eq:qcnoseave}$$ where $\sigma^{\alpha\alpha'}_z(X,t)$ is given by Eq.
(\[eq:qcnoseqofmadb\]). Details of the numerical algorithm for calculating Eq. (\[eq:qcnoseave\]), in both the adiabatic and nonadiabatic limits, can be found in Ref. [@qc-sb]. It is useful to recall that the nonadiabatic quantum-classical dynamics can be pictured as a piece-wise deterministic propagation of the classical phase space point $X$ over the energy surface $(\alpha\alpha')$ interspersed by stochastic quantum transitions (realized by the action of $J$). Note also that, in the NVE case, one must have $N_b\ge 200$, as proven by a cumulant expansion analysis of the influence functional entering the path-integral iterative procedure of Ref. [@makri]. In principle, dissipation might also be described by means of the quantum-classical Brownian dynamics (BD) that was introduced in Ref. [@qcLangevin]. In such a case, the quantum-classical average of $\hat{\sigma}_z$ is still given by an equation similar to (\[eq:qcnoseave\]) where, however, the time evolution is achieved by means of a quantum-classical Langevin-Liouville operator, whose explicit expression in the adiabatic basis is known [@qcLangevin]. Such a stochastic operator is defined in terms of a friction constant, $\zeta$, and of a Gaussian white noise process, $\xi(t)$, with the properties $\langle\xi(t)\rangle=0$, and $\langle\xi(t)\xi(t')\rangle=2k_BT\zeta\delta(t-t')$. Therefore, it is interesting to check whether the Brownian dynamics of a bath with $N_b=1$ can also lead to an accurate dissipative dynamics. However, as shown in the following, especially when considering nonadiabatic effects, the numerical results prove that the NHC quantum-classical dynamics provides a scheme which is much more accurate and robust than that arising from the Brownian dynamics. The spin-boson model has been simulated by using dimensionless coordinates [@qc-sb]. One hundred thousand trajectories were produced in order to sample the initial condition in the nonadiabatic calculations [@qc-sb].
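The way a chain of just two thermostat variables can thermalize even a single classical oscillator is easily illustrated outside the quantum-classical formalism. The purely classical sketch below integrates the standard NHC equations for one harmonic degree of freedom with illustrative parameters (it is not the quantum-classical algorithm discussed above) and checks equipartition, $\langle p^2/m\rangle\approx k_BT$:

```python
# Classical Nose-Hoover chain (two thermostat variables) on ONE oscillator.
# Illustrative reduced units: kT = m = omega^2 = Q1 = Q2 = 1.
kT, m, w2, Q1, Q2, dt = 1.0, 1.0, 1.0, 1.0, 1.0, 0.01

def f(s):
    r, p, pe1, pe2 = s
    return (p/m,                         # dr/dt
            -m*w2*r - p*pe1/Q1,          # dp/dt: force plus thermostat friction
            p*p/m - kT - pe1*pe2/Q2,     # first chain momentum
            pe1*pe1/Q1 - kT)             # second chain momentum

def rk4(s):
    # classical 4th-order Runge-Kutta step
    k1 = f(s)
    k2 = f(tuple(x + 0.5*dt*k for x, k in zip(s, k1)))
    k3 = f(tuple(x + 0.5*dt*k for x, k in zip(s, k2)))
    k4 = f(tuple(x + dt*k for x, k in zip(s, k3)))
    return tuple(x + dt*(a + 2*b + 2*c + d)/6
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

s, p2, nburn, nsteps = (1.0, 0.0, 0.1, -0.1), 0.0, 30_000, 150_000
for step in range(nsteps):
    s = rk4(s)
    if step >= nburn:
        p2 += s[1]**2
avg = p2 / (nsteps - nburn)
print(avg)   # fluctuates around m*kT = 1, consistent with equipartition
```

Note that the thermostat momenta feed a deterministic friction into the oscillator, which is the mechanism exploited, in the quantum-classical setting, to mimic dissipation with $N_b=1$.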
The system parameters were $\Omega=1/3$ and $\omega_{\rm max}=3$, while the Kondo parameter and the reduced temperature took the two sets of values ($\xi=0.007$, $\beta=0.3$) and ($\xi=0.1$, $\beta=3$). The results of the NVE calculation, with a bath composed of $N_b=200$ oscillators, were compared with those obtained with just one oscillator ($N_b=1$) in the two cases where either the NHC dynamics or the Brownian dynamics is used. The outcome is that in the NHC dynamics a single oscillator provides a good numerical representation of the dissipative quantum dynamics of the spin. Moreover, it turns out that when nonadiabatic transitions are taken into account, the quantum-classical NHC dynamics provides very good results while the Brownian dynamics fails badly. Note that the figures in this Communication display the results of BD calculations performed only with $\zeta=1$ (in dimensionless units). However, various other calculations were performed, with effective values of $\zeta$ between $0$ and $10$, without obtaining any improvement in the nonadiabatic case. Figures \[fig:fig1\] and \[fig:fig2\] show the results of the adiabatic calculations for the sets of parameters ($\xi=0.007$,$\beta=0.3$) and ($\xi=0.1$,$\beta=3$), respectively. In the adiabatic case, both the NHC and Brownian dynamics describe well the dissipative evolution of the quantum subsystem interacting with a single oscillator. The inclusion of nonadiabatic transitions (up to six for each trajectory in the quantum-classical ensemble) shows that, with $N_b=1$, the NHC dynamics is still very accurate while the BD evolution becomes numerically unstable at short times. This is not completely unexpected since, when the system switches from one potential surface to another because of nonadiabatic transitions, the BD dynamics (in the case of $N_b=1$) lacks any equilibrating mechanism. Instead, the quantum-classical NHC dynamics still conserves the Hamiltonian along the trajectory.
Such a conservation provides a robust stabilization mechanism even for calculations with baths with very few degrees of freedom. Figure \[fig:fig3\] shows the nonadiabatic results for the set of parameters ($\xi=0.007$, $\beta=0.3$) while those obtained with the set ($\xi=0.1$, $\beta=3$) are displayed in Fig. \[fig:fig4\]. In general, the results obtained with the NHC nonadiabatic evolution appear to be numerically more stable and smoother than those obtained with the NVE dynamics. This is even more apparent in a slightly stronger coupling regime (see Fig. \[fig:fig4\]). It must be remarked that surface-hopping calculations within nonadiabatic quantum-classical dynamics are, for the moment, limited to relatively short times because of numerical instabilities [@qc-sb]. To address this issue, a quantum-classical non-linear formalism has been recently proposed [@bwein]. However, such long-time integration problems are not related to the NHC dynamics but challenge quantum-classical approximations of quantum dynamics on a more general level. In order to clarify this point, Fig. \[fig:fig5\] displays the results of a long-time calculation performed in the adiabatic approximation. Since there is great interest in the phenomenon of driven quantum tunneling [@grifoni], a static perturbation of the form $\hat{H}_{\rm ext}=-\hbar\gamma_s\hat{\sigma}_z$ was added to the unperturbed spin-boson Hamiltonian in Eq. (\[eq:Hsb\]), and simulations were carried out both in the NVE ($N_b=200$) and NHC ($N_b=1$) case with $\gamma_s/\hbar=(1/3)\Omega$, while the other system parameters took the same values as in the calculations whose results are illustrated in Figs. \[fig:fig1\] and \[fig:fig3\]. The results displayed in Fig. \[fig:fig5\] show clearly that, in the adiabatic approximation, the numerical agreement between the NVE dynamics ($N_b=200$) and the NHC dynamics ($N_b=1$) is very good even over long time intervals.
The results provided in this Communication suggest the possibility of representing the environmental noise, leading to dissipative quantum dynamics, by means of deterministic NHC quantum-classical dynamics. Such an idea can have deep conceptual implications since there is a subtle connection between thermal and quantum fluctuations [@qft]. Moreover, the algorithms presented here might open novel advantageous routes for the computer simulation of quantum dynamics in open molecular environments. Within condensed matter systems, an example that might be studied by means of the approach illustrated in this Communication is provided by the system recently investigated in Ref. [@miller]: A retinal chromophore molecule evolving according to short-time quantum coherent dynamics in bacteriorhodopsin. As already shown in Ref. [@pyp], a minimal model of such chromophore-protein systems can be built by explicitly considering the chromophore molecule itself and the nearest-neighbor amino-acids, belonging to the tight-binding pocket in which the chromophore is contained. On the short-time scale of the coherent quantum dynamics of the chromophore, one might think of representing the dissipation entailed by the rest of the protein by means of a deterministic NHC dynamics with a minimal bath. It is also worth noting that there is great interest in the field of quantum information in the phenomenon of driven quantum tunneling [@grifoni]. In particular, recent work focuses on time-dependent external driving [@grifoni2]. In order to deal with such situations, one should generalize the quantum-classical approach presented here and unify it with the techniques of non-equilibrium molecular dynamics simulations. Although non-trivial algorithmic issues might be expected, this is possible in principle and deserves thorough future investigation.
In conclusion, the quantum-classical NHC dynamics may find interesting applications in various fields: it may be used to simulate systems not only in chemical physics or biophysics but also in quantum optics [@qopt] and quantum computing [@lebellac]. [**Acknowledgments**]{}\ I am grateful to Professor Paolo V. Giaquinta and Dr. Giuseppe Pellicane for a critical reading of the manuscript. [99]{} H.-P. Breuer and F. Petruccione, The theory of open quantum systems (Oxford University Press, Oxford, 2003). H. Paul, Introduction to Quantum Optics: From Light Quanta to Quantum Teleportation (Cambridge University Press, Cambridge, 2004); L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University Press, Cambridge, 1995). M. Le Bellac, A Short Introduction to Quantum Information and Quantum Computation (Cambridge University Press, Cambridge, 2006). R. M. Wald, Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics (The University of Chicago, Chicago, 1994); N. D. Birrell and P. C. W. Davies, Quantum fields in curved space (Cambridge University Press, Cambridge, 1994). I. V. Aleksandrov, Z. Naturforsch., [**36a**]{}, 902 (1981); V. I. Gerasimenko, Theor. Math. Phys., [**50**]{}, 77 (1982); D. Ya. Petrina, V. I. Gerasimenko and V. Z. Enolskii, Sov. Phys. Dokl., [**35**]{}, 925 (1990); W. Boucher and J. Traschen, Phys. Rev. D, [**37**]{}, 3522 (1988); W. Y. Zhang and R. Balescu, J. Plasma Phys., [**40**]{}, 199 (1988); R. Balescu and W. Y. Zhang, J. Plasma Phys. [**40**]{}, 215 (1988); O. V. Prezhdo and V.V. Kisil, Phys. Rev. A, [**56**]{}, 162 (1997); C. C. Martens and J.-Y. Fang, J. Chem. Phys. [**106**]{}, 4918 (1996); A. Donoso and C. C. Martens, J. Phys. Chem. [**102**]{}, 4291 (1998); R. Kapral and G. Ciccotti, J. Chem. Phys., [**110**]{}, 8919 (1999). N. Makri and K. Thompson, Chem. Phys. Lett. [**291**]{}, 101 (1998); K. Thompson and N. Makri, J. Chem. Phys. [**110**]{}, 1343 (1999); N. Makri, J. Phys. Chem.
B [**103**]{}, 2823 (1999). A. Sergi, Phys. Rev. E [**72**]{} 066125 (2005). S. Nosè, Mol. Phys. [**52**]{}, 255 (1984); W. G. Hoover, Phys. Rev. A [**31**]{}, 1695 (1985); S. Nosè, Prog. Theor. Phys. [**103**]{}, 1 (1991). A. Sergi, J. Chem. Phys. [**124**]{}, 024110 (2006); Atti Accad. Pelorit. Pericol. Cl. Sci. Fis. Mat. Nat. [**33**]{} c1a0501003 (2005); Phys. Rev. E [**72**]{} 031104 (2005); Phys. Rev. E [**69**]{} 021109 (2004); Phys. Rev. E [**67**]{} 021101 (2003); A. Sergi and M. Ferrario, Phys. Rev. E [**64**]{} 056125 (2001). G. J. Martyna, M. L. Klein, and M. Tuckerman, J. Chem. Phys. [**92**]{}, 2635 (1992). A. J. Leggett, S. Chakravarty, A. T. Dorsey, M. P. A. Fisher, A. Garg, and W. Zwerger, Rev. Mod. Phys. [**59**]{}, 1 (1987). A. Sergi, D. Mac Kernan, G. Ciccotti, and R. Kapral, Theor. Chem. Acc. [**110**]{} 49 (2003); D. Mac Kernan, G. Ciccotti, and R. Kapral, J. Chem. Phys. [**116**]{} 2346 (2002). E. P. Wigner, Phys. Rev. A [**40**]{} 749 (1932); K. Imre, E. Özimir, M. Rosenbaum, and P. Z. Zwiefel, J. Math. Phys. [**5**]{} 1097 (1967); M. Hillery, R. F. O’Connell, M. O. Scully, and E. P. Wigner, Phys. Rep. [**106**]{} 121 (1984). M. Grilli and E. Tosatti, Phys. Rev. Lett. [**62**]{}, 2889 (1989). A. Sergi and R. Kapral, J. Chem. Phys. [**119**]{}, 12776 (2003). A. Sergi, J. Chem. Phys. [**126**]{}, 074109 (2007). M. Grifoni and P. Hänggi, Phys. Rep. [**304**]{}, 229 (1998). J. Zinn-Justin, Quantum Field Theory and Critical Phenomena (Oxford University Press, Oxford, 1993); M. Le Bellac, Quantum and Statistical Field Theory (Clarendon Press, Oxford, 1991). V. Prokhorenko, A. M. Nagy, S. A. Waschuk, L. S. Brown, R. R. Birge, and R. J. Dwayne, Science [**313**]{}, 1257 (2006). A. Sergi, M. Grüning, M. Ferrario and F. Buda, J. Phys. Chem. [**105**]{}, 4386 (2001). M. C. Goorden, M. Thorwart, and M. Grifoni, Phys. Rev. Lett. [**93**]{}, 267005 (2004); Eur. Phys. J. B [**45**]{}, 405 (2005). [^1]: E-mail: asergi@unime.it
--- abstract: | A method for measuring the small (metallic-scale) Ettingshausen coefficient ($P$) was developed. The influence of the dominating thermal effects, the Joule and Thomson heats, was eliminated by making use of the odd symmetry of the Ettingshausen temperature gradient with respect to reversal of the direction of the magnetic field and of the electrical current. The method was applied to the La$_{2-x}$Sr$_{x}$CuO$_{4}$ ($x=0.03\div 0.35$) high-T$_{c}$ superconductor in the normal state. We have found that in the whole composition range the Ettingshausen coefficient is of the order of 10$^{-7}$ m$^{3}$K/J, which is characteristic of typical metals. The coefficient changes sign from positive to negative near $x\approx $ 0.07. The weak variation of $P$ is in contrast to the behavior of other transport coefficients of La$_{2-x}$Sr$_{x}$CuO$_{4}$, such as the thermoelectric power or the Hall coefficient, which have been reported in the literature to change their values by more than two orders of magnitude with Sr doping. address: | Institute of Low Temperature and Structure Research,\ Polish Academy of Sciences,\ 50-950 Wrocław, P.O.Box 1410\ POLAND author: - 'T.Plackowski and M.Matusiak' date: 'Submitted to Superconductor Science and Technology, April 2, 1999' title: 'Normal State Ettingshausen Effect in La$_{2-x}$Sr$_{x}$CuO$_{4}$' --- Introduction ============ Since the discovery of high-$T_{c}$ (HTC) superconductors, their normal state properties have been a subject of extensive investigation. Many of them, such as the electrical resistivity $\rho $, the Hall coefficient $R_{H}$ or the thermoelectric power $S$, exhibit universal behavior as a function of charge-carrier doping. The universal behavior of the latter quantity is well known: for many of the HTC families the thermopower changes sign at the optimal carrier concentration [@Obertelli]. The linearity of the resistivity of optimally doped samples over a wide temperature region is also one of their best known features [@Martin].
The Hall coefficient for optimally doped samples varies as $1/T$, which results in the $1/T^{2}$ dependence of the Hall mobility, $\mu _{H}$. It later appeared that the $1/T^{2}$ dependence of $\mu _{H}$ is more universal: it applies not only to the optimally doped, but also to the under- and overdoped materials [@Kubo1]. In this work we present measurements of the Ettingshausen coefficient, one of the less-known transport coefficients. Our aim was to complement the present knowledge of the normal state properties of HTC materials and, hopefully, to search for some new universalities. The Ettingshausen effect is a thermal analog of the Hall effect (the definition and sign convention are shown in Fig. 1). The difference with respect to the Hall effect is that the temperature difference, instead of the voltage difference, is measured in the direction perpendicular to both the current and the field. The Hall and Ettingshausen effects are two of the four transversal magneto-thermal effects: $$\begin{aligned} \nabla \Phi _{\bot } &=&R_{H}~\vec{j}\times \vec{B}\text{, the Hall effect,} \\ \nabla T_{\bot } &=&P~\vec{j}\times \vec{B}\text{, the Ettingshausen effect,} \\ \nabla \Phi _{\bot } &=&Q~\nabla T\times \vec{B}\text{, the Nernst effect,} \\ \nabla T_{\bot } &=&S_{RL}~\nabla T\times \vec{B}\text{, the Righi-Leduc effect,}\end{aligned}$$ where $R_{H}$, $P$, $Q$, $S_{RL}$ are the respective coefficients, and $\nabla \Phi _{\bot }$ and $\nabla T_{\bot }$ are the transversal gradients of the electrical potential and of the temperature, respectively, caused by the presence of a magnetic field ($\vec{B}$) perpendicular to the electrical current ($\vec{j}$) or heat flux ($\vec{q}\sim \nabla T$).
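The geometry of these conventions can be checked with a minimal numerical sketch (the values below are purely illustrative, not measured; only the order of magnitude of $P$ for a metal, $\sim 10^{-7}$ m$^{3}$K/J, follows the discussion in this paper):

```python
import numpy as np

# Minimal geometric check of the transverse-effect conventions above:
# each response is a coefficient times a cross product.  All values are
# illustrative, not measured data.
j = np.array([1.0e6, 0.0, 0.0])   # current density along x, A/m^2 (illustrative)
B = np.array([0.0, 0.0, 8.0])     # magnetic field along z, T
P = -1.0e-7                       # Ettingshausen coefficient, m^3 K/J (metallic order of magnitude)

grad_T_perp = P * np.cross(j, B)  # Ettingshausen: nabla T_perp = P (j x B)

# j x B points along -y, so the transverse temperature gradient lies along y,
# perpendicular to both the current and the field, as the definition requires.
assert np.allclose(grad_T_perp, [0.0, 0.8, 0.0])
```

The same one-liner with $R_{H}$, $Q$ or $S_{RL}$ in place of $P$ (and $\nabla T$ in place of $\vec{j}$ for the last two) reproduces the other three effects.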
All these coefficients are interconnected by three fundamental relations which were considered by P.W.Bridgman [@Bridgman] in terms of the thermodynamics of reversible processes: $$\begin{aligned} Q &=&\frac{\kappa }{T}P\text{,} \\ Q &=&\frac{\mu _{T}}{\rho }R_{H}\text{,} \\ P &=&\frac{\mu _{T}T}{\kappa }S_{RL}\text{,}\end{aligned}$$ where $\kappa $ is the total thermal conductivity and $\mu _{T}$ is the Thomson coefficient. The theory of the Ettingshausen effect was considered for semiconductors in [@Paranjape] and for metals in [@Fieber]. Only a few measurements of the Ettingshausen effect in different materials have been carried out to date. Typical measured values of the Ettingshausen coefficient for semiconductors are positive and of the order of 10$^{-2}\div $10$^{-4}$ m$^{3}$K/J (Ge: [@Mette1]; Si: [@Mette2]; PbSe, PbTe: [@Putley]). Typical values of the Ettingshausen coefficient for metals are much lower than for semiconductors, of the order of 10$^{-7}\div $10$^{-8}$ m$^{3}$K/J. A negative Ettingshausen coefficient was observed for Ag, Cd, Cu, Fe, Zn and Au, whereas a positive one was observed for Al, Co and Ni. The Ettingshausen and Hall effects have opposite signs for Al, Cd, Fe, Ni and Zn, and the same signs for Ag, Co, Cu and Au [@CriticalTables; @Bridgman]. The Ettingshausen coefficient is much higher for semi-metals ($\sim $10$^{-4}$ m$^{3}$K/J for Sb and $\sim $10$^{-3}$ m$^{3}$K/J for Bi [@Bridgman]) and for rare earths ($\sim $10$^{-3}$ m$^{3}$K/J for Y, Gd, Tb and Dy [@Zecchina]). To our knowledge, no measurements of the Ettingshausen effect have been carried out on HTC superconductors in the normal state. A few works were devoted to the normal state Nernst effect: for Tl-2212 [@Clayhold], YBCO [@Gasumyants] and NdCeCuO [@Fournier]. In the present work we have chosen the La$_{2-x}$Sr$_{x}$CuO$_{4}$ (LSCO) solid solution ($x=0.03\div 0.35$).
This compound exhibits the full range of behaviors versus chemical composition which is characteristic of the layered copper-oxide superconductors. The carrier concentration may be controlled by the Sr concentration $x$. One can therefore investigate the doping dependence from the semiconducting region ($x\lesssim 0.05$), through the underdoped ($0.05\lesssim x\lesssim 0.17$) and overdoped ($0.17\lesssim x\lesssim 0.30$) superconducting regions, up to the heavily doped metallic region with no superconductivity ($x\gtrsim 0.30$) [@Devaux]. Moreover, LSCO has a simple crystal structure with single CuO$_{2}$ layers. It has neither the Cu-O chains of YBa$_{2}$Cu$_{3}$O$_{7-\delta }$ nor the complicated modulation of the separating and spacing layers found in Bi- and Tl-based materials. All our samples have been characterized by X-ray and electrical resistivity measurements. Experimental ============ Samples ------- Polycrystalline samples of La$_{2-x}$Sr$_{x}$CuO$_{4}$ were produced following the standard solid state technique from high purity La$_{2}$O$_{3}$, SrCO$_{3}$ and CuO substrates. The powders were mixed and prefired in air at 950 $^{\circ }$C for 24 h. After pulverization, the materials were mixed, pressed and sintered at 1000 $^{\circ }$C for 60 h. Then they were reground, pelletized and, except for one sample, refired at 1050 $^{\circ }$C for 72 h in oxygen under a pressure of 1 bar. Only the La$_{1.65}$Sr$_{0.35}$CuO$_{4}$ sample was sintered in oxygen under a pressure of 300 bar at 1000 $^{\circ }$C for 48 h. All products were confirmed to be single phase by powder X-ray diffraction. The values of the critical temperature determined from the electrical resistivity measurements are shown in Table 1. Table 1. Critical temperatures of the La$_{2-x}$Sr$_{x}$CuO$_{4}$ samples.
--------------- ------ ------ ------ ------ ------ ------ ------ ------
$x$ (Sr)         0.03   0.05   0.10   0.15   0.20   0.25   0.30   0.35
$T_{c}$ \[K\]    0      0      28.6   36.0   30.2   18.0   10.2   0
--------------- ------ ------ ------ ------ ------ ------ ------ ------

Choice of the experimental configuration ---------------------------------------- The Ettingshausen effect is usually very small, hence it may easily be overridden by other thermal effects, such as the Joule and Thomson effects. The Joule heat in particular may hinder the detection of the Ettingshausen effect: when measuring this effect, a strong electrical current must be passed through the sample, releasing a large amount of energy at the electrical contacts, whose electrical resistance is usually much higher than that of the sample. Therefore, some parts of the sample should be thermally anchored to a large mass of constant temperature to carry the Joule heat away. On the other hand, the sample should be located in adiabatic conditions, since a thermal gradient is the quantity to be measured. For the above reasons we started from computer modelling of different possible experimental setups, with different patterns of sample anchoring (e.g. at the corners, along the longest side, etc.). In all cases we assumed that the sample should have the shape of a flat slab with its shortest dimension parallel to the magnetic field, since the temperature difference due to the Ettingshausen effect is inversely proportional to that dimension (in analogy to the Hall effect). The configuration we finally chose is presented in Fig. 2. The ends of the sample (A) were attached to two copper bars (B) using Au:In alloy and silver paint, making both good electrical and thermal contacts. The typical contact resistance amounted to 0.5$\div $2 $\Omega $ and was much higher than the sample resistance.
The bars (B) were electrically insulated from the copper support (C), but their good thermal connection to the support was ensured by a relatively large contact area. The sample was also surrounded by a thick copper screen (D) screwed onto the support (C). The carbon-glass thermometer (E) was located within the support (C). The whole assembly was placed in a gas-flow cryostat. A differential copper-constantan-copper thermocouple was attached to the sides of the sample. This type of thermocouple has a relatively large sensitivity at room temperature ($\alpha =40.5$ $\mu $V/K). Moreover, since only the middle segment was made of constantan, the total resistance of the thermocouple amounted to only a few ohms, considerably reducing the thermal voltage noise. Another advantage was that only copper leads connected the thermocouple junctions on the sample with the input of the Keithley 182 nanovoltmeter, thus avoiding all detrimental EMFs (the only lead solderings were thermally anchored to the support C). No detectable influence of the magnetic field on the thermocouple voltage has been found. Provided that the length of the sample is much greater than its width, in our configuration the Ettingshausen coefficient $P$ may be calculated directly from the definition: $$\Delta T_{Ett}=PJ_{x}B_{z}/d$$ where $\Delta T_{Ett}$ is the temperature difference between the two sides of the sample due to the Ettingshausen effect, $J_{x}$ is the electrical current, $B_{z}$ is the magnetic induction and $d$ is the sample thickness. All our samples had a thickness of 0.25-0.35 mm, a width of 2.5-3 mm and a length of 9-10 mm. Influence of the Joule effect ----------------------------- The temperature distribution on the sample surface due to the Ettingshausen effect is presented in Fig. 3a. Our calculations have shown that in the centre of the sample this distribution is independent of the temperature distribution caused by the Joule effect, which is shown in Fig. 3b.
In other words, the difference $\Delta T_{Ett}$ remains unaffected by the Joule heat, even if the temperature gradients along the sample are much higher than the transversal gradient. The reason is that the gradients produced by the two effects are perpendicular. However, an inevitable mismatch of the thermocouple junction positions causes the Joule effect to contribute significantly to the total measured temperature difference as well: $$\Delta T=T_{2}-T_{1}=\pm \Delta T_{Ett}+\Delta T_{Joule}$$ Depending on the particular mounting, the $\Delta T_{Joule}$ difference may be of either sign. Its value, which is approximately proportional to $J^{2}$, may be much higher than $\Delta T_{Ett}$. Fortunately, the $\Delta T_{Ett}$ difference changes sign upon reversal of the direction of either the electrical current or the magnetic field, whereas the $\Delta T_{Joule}$ difference does not. This feature was used to extract the Ettingshausen effect from the background. Influence of the Thomson effect ------------------------------- There is another effect interfering with the determination of the Ettingshausen temperature difference $\Delta T_{Ett}$, namely the Thomson effect. Because of our experimental arrangement, a temperature gradient along the sample is present (see Fig. 3a), so the total heat $q$ produced per unit time in the sample (without magnetic field) consists of two components: $$\dot{q}(x)=J_{x}^{2}\rho -\mu _{T}J_{x}\frac{dT(x)}{dx}$$ The first term is the Joule heat, the second the Thomson heat; $\rho $ denotes the electrical resistivity and $\mu _{T}$ the Thomson coefficient. To our knowledge, the values of the Thomson coefficient for HTC materials are unknown. However, they should be of the order of the Seebeck coefficient, because of the Kelvin relation, $\mu _{T}=TdS/dT$. The Thomson effect causes the temperature distribution along the sample to change slightly upon reversal of the direction of the electrical current - see Fig. 4.
Thus, the total temperature difference measured across the sample should be expressed as: $$\Delta T=T_{2}-T_{1}=\pm \Delta T_{Ett}+\Delta T_{Joule}\pm \Delta T_{Thomson}$$ The $\Delta T_{Thomson}$ difference changes sign upon reversal of the direction of the electrical current but, fortunately, it is insensitive to the magnetic field. Hence, measurements of $\Delta T$ as a function of magnetic field still allow the effect of interest, $\Delta T_{Ett}$, to be extracted. In the course of our experiments we have sometimes observed that during prolonged measurements on the same sample the values of $\Delta T_{Joule}$ and $\Delta T_{Thomson}$ changed significantly (including a sign change for $\Delta T_{Thomson}$), whereas $\Delta T_{Ett}$ remained unaffected. This observation is not surprising, since $\Delta T_{Joule}$ and $\Delta T_{Thomson}$ are just a result of the mismatch of the thermocouple junction positions, which can be effectively changed by thermocycling. Influence of the electrical contact heating and of the gas cooling ------------------------------------------------------------------- The resistance of the sample contacts was usually approximately one order of magnitude higher than that of the sample itself, so most of the electrical energy was released within the contacts. The only consequence for the temperature distribution within the sample is that the curves shown in Fig. 4 are shifted upwards, with no change of either the longitudinal or the transversal temperature gradients. However, due to the relatively large electrical currents we were using (up to 0.5 A), the sample would be heated by several kelvins with respect to its surroundings (i.e. the support C with the thermometer, and the screen D, see Fig. 2).
Therefore, in order to keep the temperature of the sample as close as possible to that of the thermometer, we decided to perform all measurements in a helium gas atmosphere at ambient pressure, even though this influences all the temperature gradients in the sample and its support. We can assume that the gas has the temperature of the support; thus the gas is colder than the sample heated by the electrical contacts. One could therefore expect the longitudinal temperature distributions presented in Figures 3a and 4 to be substantially flattened, or even inverted, depending on the ratio of the Joule heating to the gas cooling efficiency. The diminishing of the transversal gradient due to the Ettingshausen effect depends only on the gas cooling efficiency. Therefore, it is possible to set the experimental conditions so that the gas cooling is weak enough not to disturb the Ettingshausen effect significantly, yet reduces the detrimental longitudinal gradients substantially. Indeed, in tests in which one of the thermocouple junctions was moved onto the bar B, we have observed that for low values of the measurement current the middle of the sample may be colder than its ends (see Fig. 5). Moreover, reducing high temperature gradients by gas cooling should also diminish the influence of non-linear energy exchange by radiation. To check the influence of the gas cooling on the results we have performed several measurements changing the pressure of the helium gas in the range of 30$\div $1030 mbar. It has been observed that the value of the measured Ettingshausen coefficient is insensitive to the pressure within the measurement error. In contrast, the reduction of the pressure from 1030 mbar to 30 mbar resulted in an approximately threefold increase of the $\Delta T_{Joule}$ difference. The $\Delta T_{Thomson}$ difference was also found to be quite sensitive to the pressure, to the extent that even sign changes were observed.
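The sign-symmetry separation discussed in the preceding subsections, together with the extraction of $P$ from the field dependence, can be sketched in a few lines. This is a synthetic illustration only, not the authors' analysis code; the function `separate` and all numerical values are hypothetical:

```python
import numpy as np

# Synthetic illustration (not the authors' analysis code).  The model is
#   Delta T(sJ, sB) = sJ*sB*dT_ett + dT_joule + sJ*dT_thomson,
# with sJ, sB = +/-1 the signs of the current and the field: the
# Ettingshausen term is odd in both J and B, the Thomson term is odd in J
# only, and the Joule term is even in both.

def separate(dT_pp, dT_mp, dT_pm, dT_mm):
    """Inputs: Delta T measured at (+J,+B), (-J,+B), (+J,-B), (-J,-B)."""
    dT_ett = (dT_pp - dT_mp - dT_pm + dT_mm) / 4.0
    dT_thomson = (dT_pp - dT_mp + dT_pm - dT_mm) / 4.0
    dT_joule = (dT_pp + dT_mp + dT_pm + dT_mm) / 4.0
    return dT_ett, dT_joule, dT_thomson

# Made-up magnitudes: a ~1 mK Ettingshausen signal under a ~20 mK Joule background.
ett, joule, thomson = 1.3e-3, 20.0e-3, 5.0e-3
meas = (joule + thomson + ett, joule - thomson - ett,
        joule + thomson - ett, joule - thomson + ett)
assert np.allclose(separate(*meas), (ett, joule, thomson))

# Once dT_ett(B) is known at fixed current J, the definition
# dT_ett = P*J*B/d gives P = A*d/J, with A the slope of dT_ett versus B.
B = np.array([-8.0, -4.0, 0.0, 4.0, 8.0])      # tesla
J, d, P_true = 0.5, 0.3e-3, -1.0e-7            # ampere, metre, m^3 K/J
dT_ett_vs_B = P_true * J * B / d
A = np.polyfit(B, dT_ett_vs_B, 1)[0]
assert np.isclose(A * d / J, P_true)
```

For the sample geometry and currents quoted above ($d\approx 0.3$ mm, $J$ up to 0.5 A, $B$ up to 8 T) a coefficient of 10$^{-7}$ m$^{3}$K/J corresponds to $\Delta T_{Ett}$ of roughly a millikelvin, which is why the symmetry-based background rejection is essential.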
Measurement strategy --------------------- A typical measurement sequence without magnetic field is presented in Fig. 6. For each electrical current value several readings of the thermocouple voltage were taken, with the current direction alternated between readings. A delay of 60-120 seconds was applied after each current reversal to allow the steady state to build up. The average value of the measured voltage for a given current value ($\Delta T_{Joule}$) increased approximately in proportion to $J^{2}$, as might be expected for the Joule effect (see the upper insert). Reversing the current direction resulted in smaller voltage variations, whose amplitude was found to depend on the electrical current value (see the lower insert). We suppose that the Thomson effect is responsible for these variations. In this paper we will call this amplitude $\Delta U_{\pm J}$ (the amplitude of the thermocouple voltage variations due to reversal of the electrical current). The Thomson temperature difference may be calculated from the equation: $$\Delta U_{\pm J}=\alpha (\Delta T_{Thomson})$$ where $\alpha $ is the thermocouple sensitivity. As expected from our computer simulations, we have not observed any universal dependence of $\Delta U_{\pm J}$ on the electrical current. In some cases $\Delta U_{\pm J}$ changed its sign at a certain value of the current (with no magnetic field). Application of the magnetic field gives rise to the Ettingshausen effect, and the amplitude $\Delta U_{\pm J}$ becomes a sum of two contributions: $$\Delta U_{\pm J}=\alpha (\Delta T_{Thomson}+\Delta T_{Ett})$$ A typical measurement sequence in a magnetic field and with constant electrical current is shown in Fig. 7. The influence of the Ettingshausen effect is clearly visible. The measured amplitude $\Delta U_{\pm J}$ strongly depends on the field strength.
The insert shows the values of $(\Delta T_{Thomson}+\Delta T_{Ett})=$ $\Delta U_{\pm J}/\alpha $ for different values of the magnetic field, fitted by a linear function. The Ettingshausen coefficient is calculated from the slope $A$ of this line using the formula: $$P=A\frac{d}{J}$$ The linearity of the dependence of the Ettingshausen effect on the electrical current may be checked by plotting the differences $\Delta U_{\pm J}(B)-$ $\Delta U_{\pm J}(B=0)$ versus the current (see Fig. 8). Taking into account all the above considerations, we have developed a measurement strategy which is illustrated in Fig. 9. For a particular sample we started by measuring the current dependence of the $\Delta U_{\pm J}$ amplitude at $B$=+8, 0 and -8 T. Hence, we were able to check the linearity of $\Delta U_{\pm J}$ with the electrical current. Then, we chose a current value for measuring the field dependence of the $\Delta U_{\pm J}$ amplitude. If possible, we chose a value of $J$ for which $\Delta U_{\pm J}(B=0)$ was close to zero, in order to avoid the influence of another interfering effect, connected with the Righi-Leduc phenomenon, which is defined as: $$\nabla T_{RL}=S_{RL}B_{z}\nabla T_{x}$$ where $S_{RL}$ is the Righi-Leduc coefficient. This effect may result in an additional contribution to the transverse thermal gradient if a longitudinal thermal gradient is present. Fortunately, the absence of the Thomson effect is an indication of small longitudinal gradients, in which case the Righi-Leduc effect may be neglected. Moreover, to further reduce the influence of the Joule and Thomson effects, all measurements were performed at $\sim $1 bar of helium gas. Results and Discussion ====================== The method described in this paper allows measurements of the Ettingshausen effect at the small values typical of metals.
The effect consists in a transversal thermal gradient appearing due to the flow of electrical current in the presence of a perpendicular magnetic field. Since this weak thermal effect may easily be overridden by the Joule and Thomson effects, special measures have been taken to extract it from the background. The sample ends were thermally anchored to a large mass to carry away the Joule heat. To eliminate the influence of the longitudinal thermal gradients due to the Joule and Thomson effects, the odd symmetry of the Ettingshausen temperature difference with respect to the direction of the magnetic field and of the electrical current has been exploited. The values of the measured Ettingshausen coefficients for La$_{2-x}$Sr$_{x}$CuO$_{4}$ are shown in Fig. 10. For each composition 2 or 3 samples were investigated. For each sample several measurement runs were carried out and the results were averaged. All the coefficients are of the order of 10$^{-7}$ m$^{3}$K/J, which is a value typical of good metals, in contrast to semiconductors, where values higher by several orders of magnitude are expected (see the Introduction). The Ettingshausen coefficient for La$_{2-x}$Sr$_{x}$CuO$_{4}$ changes sign: it is positive only for low Sr concentrations ($x$ = 0.03 and 0.05); for all other compositions negative values were observed. This is also in contrast to the situation in semiconductors, where only positive values are predicted by theory [@Paranjape] and found experimentally (see the Introduction). The overall weak compositional dependence is somewhat surprising, since the other transport coefficients of La$_{2-x}$Sr$_{x}$CuO$_{4}$ were found to depend strongly on Sr/La substitution. It was shown [@Suzuki; @Takagi] that over the compositional range $x=0.05\div 0.35$ the Hall coefficient decreases by more than two orders of magnitude. A similarly strong variation with composition was found for the thermoelectric power [@Cooper; @Goodenough].
Acknowledgments =============== The work was supported by the Polish State Committee for Scientific Research under contract No. 2PO3B 11613. [**Figure 1**]{} The sign convention for a positive Ettingshausen effect. [**Figure 2**]{} The experimental setup used for measurements of the Ettingshausen effect. See text for details. [**Figure 3**]{} The temperature distribution over the sample area due to the Ettingshausen effect (a) and the temperature distribution along the sample due to the Joule effect (b). The mismatch of the thermocouple junction positions is exaggerated to indicate how the Joule effect may contribute to the measured temperature difference $\Delta T=T_{2}-T_{1}$. Calculated by the finite-element method. [**Figure 4**]{} The temperature distribution along the sample due to the Joule and Thomson effects for the two directions of the electrical current. The difference between the two curves is exaggerated for the sake of clarity. Calculated by the finite-element method. [**Figure 5**]{} A realistic temperature distribution along the sample for different values of the measurement current (calculated by the finite-element method). The Joule and Thomson effects within the sample, the Joule effect in the electrical contacts and the cooling by the surrounding gas have been taken into account. The sample ends are warmer with respect to the support due to the Joule heating of the contacts. It is assumed that the gas has the same temperature as the support. The insert shows the results of the tests in which one of the thermocouple junctions was moved onto the bar B (see Fig. 2) and the longitudinal gradient $\Delta T_{l/2}$ was measured. [**Figure 6**]{} An exemplary measurement sequence performed on a La$_{1.75}$Sr$_{0.25}$CuO$_{4}$ sample at room temperature without magnetic field. The thermocouple voltages are plotted versus the successive reading numbers. The direction of the electrical current was alternated from reading to reading.
Our convention is that $J$ is positive for odd reading numbers, which are denoted by filled circles. Even numbers are denoted by empty circles. The upper inset shows the average voltage for each value of the current versus the current, i.e. the result of the Joule effect (the thermocouple sensitivity is denoted by $\alpha $). The variations of the voltage with the current direction are due to the Thomson effect. The amplitude of these variations, $\Delta T_{Thomson}=\Delta U_{\pm J}/2\alpha $, versus the current value is shown in the lower inset. [**Figure 7**]{}  An exemplary measurement sequence performed on a La$_{1.75}$Sr$_{0.25}$CuO$_{4}$ sample at room temperature for a constant electrical current and in a varied magnetic field (see also the caption of Fig. 6). A plot of the amplitude $(\Delta T_{Thomson}+\Delta T_{Ett})=\Delta U_{\pm J}/2\alpha $ versus $B$ is shown in the inset. [**Figure 8**]{}  A plot of the amplitude $(\Delta T_{Thomson}+\Delta T_{Ett})=\Delta U_{\pm J}/2\alpha $ versus the electrical current for different values of the magnetic field. The inset shows the differences $(\Delta U_{\pm J}(B)-\Delta U_{\pm J}(B=0))/2\alpha =\pm \Delta T_{Ett}$ ($\alpha $ is the thermocouple sensitivity). [**Figure 9**]{}  A sketch of the experimental procedure for measurements of the Ettingshausen effect. At each point on the ($B$,$J$) plane several readings of the transversal temperature gradient have been taken for both current directions (see Figures 6 and 7). [**Figure 10**]{} The room-temperature Ettingshausen coefficients for La$_{2-x}$Sr$_{x}$CuO$_{4}$. The error bars denote the scatter of the experimental values obtained for a particular composition. S.D.Obertelli, J.R.Cooper and J.L.Tallon, Phys.Rev.B [**46**]{} 14928 (1992). see e.g. S.Martin, A.T.Fiory, R.M.Fleming, L.F.Schneemeyer and J.V.Waszczak, Phys.Rev.B [**41**]{} 846 (1990). Y.Kubo and T.Manako, Physica C [**197**]{} 178 (1992). P.W.Bridgman, Phys.Rev. [**24**]{}, 644 (1924). 
B.V.Paranjape and J.S.Levinger, Phys.Rev. [**120**]{}, 437 (1960). H.Fieber, A.Nedoluha and K.M.Koch, Z.Physik [**131**]{}, 143 (1952). H.Mette, W.W.Gärtner and C.Loscoe, Phys.Rev. [**115**]{}, 537 (1959). H.Mette, W.W.Gärtner and C.Loscoe, Phys.Rev. [**117**]{}, 1491 (1960). E.H.Putley, The Proceedings of the Physical Society, Sect. B, part 1, [**68**]{}, 35 (1955). (Mc-Graw-Hill Book Company, Inc., New York, 1929), vol. 6, p.419. L.Zecchina, phys.stat.sol. [**42**]{}, K153 (1970). J.A.Clayhold, A.W.Linnen, Jr., F.Chen and C.W.Chu, Phys.Rev.B [**50**]{}, 4252 (1994). V.E.Gasumyants, N.V.Ageev, I.E.Goldberg and V.I.Kaydanov, Physica C [**282-287**]{}, 1279 (1997). P.Fournier, X.Jiang, W.Jiang, S.N.Mao, T.Venkatesan, C.J.Lobb and R.L.Greene, Phys.Rev.B [**56**]{}, 14149 (1997). F.Devaux, A.Manthiram and J.B.Goodenough, Phys.Rev.B [**41**]{}, 8723 (1990). M.Suzuki, Phys.Rev.B [**39**]{}, 2312 (1989). H.Takagi, T.Ido, S.Ishibashi, M.Uota, S.Uchida and Y.Tokura, Phys.Rev.B [**40**]{}, 2254 (1989). J.R.Cooper, B.Alavi, L.W.Zhou, W.P.Beyerman and G.Grüner, Phys.Rev.B [**35**]{}, 8794 (1987). J.B.Goodenough and A.Manthiram, in [*Studies of High Temperature Superconductors*]{}, vol. 5, Ed. by A.V.Narlikar, Plenum Press, New York, 1990.
\ \ Department of Mathematics, Faculty of Science,\ Kobe University, Hyogo 657-8501, Japan [**Abstract.**]{} An explicit form of the Lax pair for the $q$-difference Painlevé equation with affine Weyl group symmetry of type $E^{(1)}_8$ is obtained. Its degenerations to the $E^{(1)}_7$, $E^{(1)}_6$ and $D^{(1)}_5$ cases are also given. Key Words and Phrases: Painlevé equation, Lax formalism, $q$-difference equation. 2010 MSC Numbers: 34M55, 39A13, 34M56. Introduction ============ A Lax formalism for the elliptic difference Painlevé equation [@Sakai] was obtained in [@e8ell]. The construction is concrete, including the specification of the unknown variables $(f,g)$ of the Painlevé equation. The explicit formula of the elliptic Painlevé equation and its Lax form, however, seem too complicated to write down in a tidy form. On the other hand, for the multiplicative (i.e. $q$-difference) and additive cases, rather concise expressions for the Painlevé equations have been known [@ORG]. The aim of this paper is to adapt the construction of [@e8ell] to the $q$-Painlevé equations in [@ORG] and write down the corresponding Lax equations explicitly. We note that the Lax formulations of difference Painlevé equations were obtained in [@AB] for the additive $E^{(1)}_6$ case, in [@B] for the additive $E^{(1)}_7$, $E^{(1)}_8$ cases, and in [@SakaiA2] for the $q$-$E^{(1)}_6$ case (see also [@Sakai-rims]). In general, the scalar Lax pair consists of a linear (difference) equation $L_1=0$ and its deformation equation $L_2=0$. Though the Lax equations $L_1, L_2$ in [@e8ell] were both of degree $(3,2)$ in the variables $(f,g)$, other choices of the equation $L_2$ are possible depending on the direction of deformation. In this paper, we will give explicit expressions of the Lax pair $L_1, L_2$ for the $q$-Painlevé equation (\[eq:T-bar\]) corresponding to a distinguished direction. The first equation $L_1$ (\[eq:L1\]) is of degree $(3,2)$ and is the explicit realization of that in [@e8ell]. 
The second equation $L_2$ (\[eq:L2\]) is of degree $(1,1)$ and hence much more economical. The contents of this paper are as follows. In section \[sect:basic\], the explicit expression of the $q$-Painlevé equation of type $E^{(1)}_8$ is recapitulated. (Its Weyl group symmetry is explained in Appendix A.) In section \[sect:lax\], the corresponding Lax equations are given in explicit form. The compatibility is proved in section \[sect:proof\]. Finally, in section \[sect:dege\], the degenerations to the $E^{(1)}_7$, $E^{(1)}_6$ and $D^{(1)}_5$ cases are considered. The Lax form for the $q$-$E^{(1)}_7$ case also seems to be new. Fundamental equations {#sect:basic} ===================== Let $h_1, h_2, u_1, \ldots, u_8$ and $q=h_1^2h_2^2/(u_1\cdots u_8)$ be complex parameters. We consider a configuration of the following eight points in $\P^1 \times \P^1$: $$P_i=P(u_i), \quad P(u)=(f(u),g(u)), \quad f(u)=u+\dfrac{h_1}{u}, \ g(u)=u+\dfrac{h_2}{u}.$$ The functions $f=f(u), g=g(u)$ give a parametrization of a rational curve $C_0 : \varphi(f, g)=0$ of degree $(2,2)$, where $$\varphi(f,g)=(f-g)(\dfrac{f}{h_1}-\dfrac{g}{h_2})-(h_1-h_2)(\dfrac{1}{h_1}-\dfrac{1}{h_2}).$$ We define a polynomial $U(z)$ as $$U(z)=\prod_{i=1}^8(z-u_i)=\sum_{i=0}^8 (-1)^im_{8-i}z^i.$$ Hence $m_0=1$, $m_8=h_1^2h_2^2/q$. We also define $P_n(h,x), P_d(h,x)$ by $$\label{eq:Prel1} \begin{array}l (z-\dfrac{h}{z})P_n(h,z+\dfrac{h}{z})=\dfrac{1}{z^3}U(z)-(\dfrac{z}{h})^3U(\dfrac{h}{z}),\\[4mm] (z-\dfrac{h}{z})P_d(h,z+\dfrac{h}{z})={z^5}U(\dfrac{h}{z})-(\dfrac{h}{z})^5U(z). \end{array}$$ Then they satisfy $$\label{eq:Prel2} \begin{array}l P_d(h,z+\dfrac{h}{z})+h^3 z^2 P_n(h,z+\dfrac{h}{z})-(\dfrac{h}{z})^3(z+\dfrac{h}{z})U(z)=0, \\[4mm] P_d(h,g)=h^4P_n(\dfrac{1}{h},\dfrac{g}{h})|_{m_i \mapsto m_{8-i}}. 
\end{array}$$ Explicitly, we have $$\label{eq:Prel3} \begin{array}l P_n(h,g)= m_0g^4-m_1g^3+(m_2-3hm_0-h^{-3}m_8)g^2\\ \qquad +(2hm_1-m_3+h^{-2}m_7)g+(h^2m_0-hm_2+m_4-h^{-1}m_6+h^{-2}m_8),\\[4mm] P_d(h,g)= m_8g^4-hm_7g^3+(h^2m_6-3hm_8-h^{5}m_0)g^2\\ \qquad +(2h^2m_7-h^3m_5+h^{5}m_1)g+(h^6m_0-h^5m_2+h^4m_4-h^{3}m_6+h^{2}m_8). \end{array}$$ The $q$-Painlevé equation of type $E^{(1)}_8$ can be described by the following bi-rational transformation [@ORG]: $$\label{eq:T-bar} T: (h_1,h_2, u_1,\ldots, u_8;f,g)\mapsto (\dfrac{h_1}{q}, h_2 q, u_1,\ldots, u_8;\o{f},\o{g}),$$ where $$\label{eq:fueq} \dfrac{(\o{f}-g)(f-g)-(\frac{h_1}{q}-h_2)(h_1-h_2)\frac{1}{h_2}} {(\frac{\o{f}q}{h_1}-\frac{g}{h_2})(\frac{f}{h_1}-\frac{g}{h_2}) -(\frac{q}{h_1}-\frac{1}{h_2})(\frac{1}{h_1}-\frac{1}{h_2})h_2} =\dfrac{h_1^2h_2^4}{q}\dfrac{P_n(h_2,g)}{P_d(h_2,g)},$$ and $$\label{eq:gueq} \dfrac{(\o{f}-\o{g})(\o{f}-g)-(\frac{h_1}{q}-h_2 q)(\frac{h_1}{q}-h_2)\frac{q}{h_1}} {(\frac{\o{f}q}{h_1}-\frac{\o{g}}{h_2 q}) (\frac{\o{f}q}{h_1}-\frac{g}{h_2})-(\frac{q}{h_1} -\frac{1}{h_2 q})(\frac{q}{h_1}-\frac{1}{h_2})\frac{h_1}{q}} =\dfrac{h_1^4h_2^2}{q^3}\dfrac{P_n(\frac{h_1}{q},\o{f})}{P_d(\frac{h_1}{q},\o{f})}.$$ Under the transformation $$f\leftrightarrow \o{g}, \quad g\leftrightarrow \o{f}, \quad h_1 \rightarrow h_2 q, \quad h_2 \rightarrow \frac{h_1}{q},$$ the equations (\[eq:fueq\]) and (\[eq:gueq\]) are exchanged with each other. Define the polynomial $V=V(f_0, f)$ as $$\label{eq:Vpoldef} \begin{array}l V(f_0,f)=q\Big[(f_0-g)(f-g)-(\dfrac{h_1}{q}-h_2)(h_1-h_2)\dfrac{1}{h_2}\Big]P_d(h_2,g)\\ -h_1^2h_2^4\Big[(\dfrac{f_0 q}{h_1}-\dfrac{g}{h_2})(\dfrac{f}{h_1}-\dfrac{g}{h_2}) -(\dfrac{q}{h_1}-\dfrac{1}{h_2})(\dfrac{1}{h_1}-\dfrac{1}{h_2}){h_2}\Big]P_n(h_2,g). 
\end{array}$$ Then eq.(\[eq:fueq\]) is written as $$\label{eq:of-f} V(\o{f},f)=0.$$ In the following, we also use the notation $$\o{f}(u)=u+\dfrac{h_1}{qu}, \quad \o{g}(u)=u+\dfrac{h_2 q}{u}.$$ Lax equations {#sect:lax} ============= Let us define the Lax pair for the $q$-Painlevé equations (\[eq:fueq\]), (\[eq:gueq\]). The first equation $L_1=0$ is a three-term $q$-difference equation for $Y(\frac{z}{q}), Y(z), Y(qz)$ defined by $$\label{eq:L1} \begin{array}l L_1= \dfrac{q^5U(\frac{z}{q})}{(z^2-h_1 q^2)\{f-f(\frac{z}{q})\}}\Big[Y(\frac{z}{q})-\dfrac{g-g(\frac{h_1 q}{z})}{g-g(\frac{z}{q})}Y(z)\Big]\\[6mm] \qquad +\dfrac{z^8U(\frac{h_1}{z})}{(z^2-h_1)h_1^4\{f-f(z)\}}\Big[Y(qz)-\dfrac{g-g(z)}{g-g(\frac{h_1}{z})}Y(z)\Big]\\[6mm] \qquad +\dfrac{(h_1-h_2)z^2(z^2-h_1 q)V(\o{f}(\frac{z}{q}),f)}{h_1^3 h_2^3 q g \varphi \{g-g(\frac{h_1}{z})\}\{g-g(\frac{z}{q})\}}Y(z), \end{array}$$ where $V$ is in eq.(\[eq:Vpoldef\]). The following proposition shows that the equation $L_1=0$ has the geometric properties described in [@e8ell]. \[prop1\] Put $F(f,g)=\varphi\{f-f(\frac{z}{q})\}\{f-f(z)\} L_1$. Then the algebraic curve $F=0$ in $\P^1\times \P^1$ satisfies the following properties:\ (i) It is of degree $(3,2)$.\ (ii) It passes through the 12 points $P_1, \ldots, P_8$, $P(z)$, $P(\frac{h_1 q}{z})$, $Q(z)$ and $Q(\frac{z}{q})$, where $Q(u)=(f,g)$ is defined by $f=f(u)$ and $\dfrac{g-g(u)}{g-g(\frac{h_1}{u})}=\dfrac{Y(qu)}{Y(u)}$, for $u=z, \frac{z}{q}$.\ Moreover, these conditions determine the curve $F=0$ uniquely. [*Proof.*]{} (i) It is easy to see that the coefficients of $Y(qz)$ and $Y(\frac{z}{q})$ in $F$ are polynomials in $(f, g)$ of degree $(3,2)$. The coefficient of $Y(z)$ looks like a rational function with numerator of degree $(3,6)$ and denominator of degree $(0,3)$. 
However, its residues at $g=0$, $g=g(\frac{z}{q})$, $g=g(\frac{h_1}{z})$ and leading term at $g \rightarrow \infty$ are proportional to $$\begin{array}{lll} P_d(h_2,0)-h_2^4 P_n(h_2,0), &{\rm at}&\ g=0\\[2mm] P_d(h_2,g(\frac{z}{q}))+h_2^3 (\frac{z}{q})^2 P_n(h_2,g(\frac{z}{q}))-(\frac{h_2 q}{z})^3g(\frac{z}{q})U(\frac{z}{q}), &{\rm at}&\ g=g(\frac{z}{q})\\[2mm] P_d(h_2,g(\frac{h_1}{z}))+h_2^3 (\frac{h_1}{z})^2 P_n(h_2,g(\frac{h_1}{z}))-(\frac{h_2 z}{h_1})^3g(\frac{h_1}{z})U(\frac{h_1}{z}), &{\rm at}&\ g=g(\frac{h_1}{z})\\[2mm] (h_1^2h_2^2 m_0-q m_8)g^3, &{\rm at}&\ g\rightarrow \infty \end{array}$$ and all of them are zero due to eqs (\[eq:Prel2\]), (\[eq:Prel3\]). Hence the property (i) is proved. \(ii) The coefficients of $Y(qz)$ and $Y(\frac{z}{q})$ of the polynomial $F$ trivially vanish at $P_1, \ldots, P_8$, $P(z)$ and $P(\frac{h_1 q}{z})$. The coefficient of $Y(z)$ is proportional to $$\label{eq:Ycoef} (u-z)(h_1 q-u z)\{P_d(h_2,g(u))+h_2^3 u^2 P_n(h_2,g(u))\}=(u-z)(h_1 q-u z)(\frac{h_2}{u})^3 g(u)U(u)$$ under the specialization $f=f(u)$ and $g=g(u)$. Hence, it also vanishes at $P_1, \ldots, P_8$, $P(z)$ and $P(\frac{h_1 q}{z})$. The vanishing of $F$ at $Q(z)$ and $Q(\frac{z}{q})$ can be directly seen from the structure of $L_1$. The property (ii) is proved. The uniqueness follows from a simple dimensional argument. In [@e8ell], the second Lax equation $L_2=0$ is also defined as a curve of degree $(3,2)$ with similar vanishing conditions. However, the $L_2$ equation (the deformation equation) depends on the direction of the $E^{(1)}_8$-translation, and the one in [@e8ell] is not what we want here. In fact, we can take a simpler Lax equation $L_2=0$ given by $$\label{eq:L2} \begin{array}l L_2=\{g-g(\frac{z}{q})\}Y(\frac{z}{q})-\{g-g(\frac{h_1 q}{z})\}Y(z)+\{f-f(z)\}(\dfrac{h_1}{z}-\dfrac{z}{q^2})\o{Y}(\frac{z}{q}). \end{array}$$ The following is the main result of this paper. 
\[thm:main\] The compatibility of the equations $L_1=0$ (\[eq:L1\]) and $L_2=0$ (\[eq:L2\]) gives the Painlevé equation (\[eq:fueq\]), (\[eq:gueq\]). Proof of the main theorem {#sect:proof} ========================= Here, we prove Theorem \[thm:main\]. Using $L_2=0$ (\[eq:L2\]) and its shift $L_2|_{z \rightarrow q z}=0$ to eliminate $Y(qz)$ and $Y(\frac{z}{q})$ from $L_1=0$ (\[eq:L1\]), we get $$\label{eq:elim1} \begin{array}l L_1= \dfrac{q^3 U(\frac{z}{q})}{z\{g-g(\frac{z}{q})\}}\o{Y}(\frac{z}{q}) -\dfrac{z^7 U(\frac{h_1}{z})}{h_1^4 q \{g-g(\frac{h_1}{z})\}}\o{Y}(z)\\[6mm] \qquad +\dfrac{(h_1-h_2)z^2(z^2-h_1 q)V(\o{f}(\frac{z}{q}),f)}{h_1^3 h_2^3 q g \varphi \{g-g(\frac{h_1}{z})\}\{g-g(\frac{z}{q})\}}Y(z)=0. \end{array}$$ We introduce an auxiliary variable $W(z)$ by $$\label{eq:w2y} W(\frac{z}{q})=\o{Y}(\frac{z}{q})-\dfrac{z^8}{h_1^4 q^4}\dfrac{\{g-g(\frac{z}{q})\}}{\{g-g(\frac{h_1}{z})\}}\dfrac{U(\frac{h_1}{z})}{U(\frac{z}{q})}\o{Y}(z),$$ then, from eq. (\[eq:elim1\]) we have $$\label{eq:wy0} \dfrac{q^3 U(\frac{z}{q})}{z\{g-g(\frac{z}{q})\}}W(\frac{z}{q}) +\dfrac{(h_1-h_2)z^2(z^2-h_1 q)V(\o{f}(\frac{z}{q}),f)}{h_1^3 h_2^3 q g \varphi \{g-g(\frac{h_1}{z})\}\{g-g(\frac{z}{q})\}}Y(z)=0.$$ From eq.(\[eq:wy0\]), (\[eq:wy0\])$|_{z \rightarrow q z}$ and $L_2|_{z \rightarrow q z}$, we eliminate $Y(z)$ and $Y(qz)$ and obtain $$\label{eq:wwy1} \begin{array}l W(\frac{z}{q}) +\dfrac{({h_1}-{h_2}) ({h_1}-z^2) ({h_1} q-z^2) (f-f(z)) V(\o{f}(\frac{z}{q}),f) z^2} {g {h_1}^3 {h_2}^3 {\varphi} q^5 (g-g(\frac{{h_1}}{z})) (g-g(z))U(\frac{z}{q})}\o{Y}(z)\\[6mm] +\dfrac{({h_1} q-z^2) (g-g(\frac{{h_1}}{q z})) U(z)V(\o{f}(\frac{z}{q}),f)}{q^4 (q z^2-{h_1}) (g-g(z)) U(\frac{z}{q}) V(\o{f}(z),f)} W(z)=0. \end{array}$$ We have $$\label{eq:x1x2} \dfrac{(f-x_1)V(x_2,f)}{(\o{f}-x_2)V(\o{f},x_1)}=\dfrac{(h_1-h_2 q)\varphi}{(h_1-h_2)\varphi_u},$$ where $\varphi_u=\varphi|_{\{f\rightarrow \o{f}, h_1\rightarrow \frac{h_1}{q}\}}$. 
[*Proof.*]{} The LHS of eq.(\[eq:x1x2\]) is a fractional linear function in both $x_1$ and $x_2$, whose zeros and poles cancel due to eq.(\[eq:of-f\]); hence it is constant. Taking the limit $x_1, x_2 \rightarrow \infty$, the constant is given by $$\dfrac{(f-g)P_d(h_2, g) -h_2^3(fh_2-gh_1) P_n(h_2, g)} {(\o{f}-g)P_d(h_2, g) - h_2^3 (\o{f}h_2-gh_1/q) P_n(h_2, g)}.$$ From eq.(\[eq:fueq\]), this coincides with the RHS of eq.(\[eq:x1x2\]). By using eq.(\[eq:x1x2\]), eq.(\[eq:wwy1\]) can be written as $$\label{eq:wwy2} \begin{array}l W(\frac{z}{q}) +\dfrac{({h_1}-{h_2} q) ({h_1}-z^2) ({h_1} q-z^2) (\o{f}-\o{f}(\frac{z}{q})){V}(\o{f},{f}(z)) z^2} {g {h_1}^3 {h_2}^3 {\varphi_u} q^5 (g-{g}(\frac{{h_1}}{z})) (g-{g}(z))U(\frac{z}{q})}\o{Y}(z)\\[6mm] +\dfrac{({h_1} q-z^2) (\o{f}-\o{f}(\frac{z}{q}))(g-{g}(\frac{{h_1}}{q z})) U(z)} {q^4 (q z^2-{h_1}) (\o{f}-\o{f}(z)) (g-{g}(z)) U(\frac{z}{q})}W(z)=0. \end{array}$$ Using eq.(\[eq:w2y\]) to express $W$ in terms of $Y$, we finally obtain $$\label{eq:L1up} \begin{array}l L_{1u}=\dfrac{U(\frac{z}{q})}{(z^2-h_1 q^2)\{\o{f}-\o{f}(\frac{z}{q})\}} \Big[\o{Y}(\frac{z}{q})-\dfrac{z^8}{h_1^4q^4}\dfrac{U(\frac{h_1}{z})}{U(\frac{z}{q})}\dfrac{g-g(\frac{z}{q})}{g-g(\frac{h_1}{z})}\o{Y}(z)\Big]\\[6mm] \qquad +\dfrac{z^8U(\frac{h_1}{qz})}{(qz^2-h_1)h_1^4\{\o{f}-\o{f}(z)\}} \Big[\o{Y}(qz)-\dfrac{h_1^4}{q^4 z^8}\dfrac{U(z)}{U(\frac{h_1}{qz})}\dfrac{g-g(\frac{h_1}{qz})}{g-g(z)}\o{Y}(z)\Big]\\[6mm] \qquad +\dfrac{(h_1-h_2 q)z^2(z^2-h_1)V(\o{f},f(z))}{h_1^3 h_2^3 q^5 g \varphi_u \{g-g(\frac{h_1}{z})\}\{g-g(z)\}}\o{Y}(z)=0. \end{array}$$ This is the three-term difference equation for $\o{Y}(qz), \o{Y}(z), \o{Y}(\frac{z}{q})$ which should be compared with the $T$-evolution of the equation $L_1$ for the compatibility. To do this, one should make a further change of variables from $(\o{f},g)$ to $(\o{f}, \o{g})$, where $g$ and $\o{g}$ are related by eq.(\[eq:gueq\]). 
Though the explicit computation of this change of variables is rather complicated, one can bypass it by using the geometric method as in [@e8ell]. \[lem:L1u\] The algebraic curve $\varphi\{\o{f}-\o{f}(\frac{z}{q})\}\{\o{f}-\o{f}(z)\} L_{1u}=0$ in variables $(\o{f},g) \in \P^1\times \P^1$ is uniquely characterized by the following properties:\ (i) It is of degree $(3,2)$.\ (ii) It passes through the 10 points $(\o{f},g)=(\o{f}(u),g(u))$ with $u=u_1, \ldots, u_8,\frac{z}{q}, \frac{h_1}{qz}$, and two more points defined by $\o{f}=\o{f}(u)$ and $\dfrac{q^4 u^8}{h_1^4}\dfrac{U(\frac{h_1}{q u})}{U(u)}\dfrac{g-g(u)}{g-g(\frac{h_1}{qu})}=\dfrac{\o{Y}(u)}{\o{Y}(qu)}$, for $u=z, \frac{z}{q}$. [*Proof.*]{} Since the structure of eq.(\[eq:L1up\]) is almost the same as eq.(\[eq:L1\]), the lemma can be shown in the same way as Proposition \[prop1\]. \[lem:gueq-str\] The bi-rational transformation $\o{g}=\o{g}(\o{f},g)$ \[its inverse $g=g(\o{f},\o{g})$, resp.\] is uniquely characterized by the following properties. (i) It is given by a ratio of polynomials of degree $(4,1)$ passing through the 8 points $(\o{f},g)=(\o{f}(u_i),g(u_i))$ \[$(\o{f},\o{g})=(\o{f}(u_i),\o{g}(u_i))$, resp.\]. (ii) For a generic parameter $u$, it transforms as $$(\o{f},g)=(\o{f}(\frac{h_1}{qu}),g(\frac{h_1}{qu})) \leftrightarrow (\o{f},\o{g})=(\o{f}(u),\o{g}(u)).$$ [*Proof.*]{} The transformation $\o{g}=\o{g}(\o{f},g)$ \[$g=g(\o{f}, \o{g})$\] is determined by eq.(\[eq:gueq\]), i.e. $$\label{eq:Vforg} \begin{array}l q^3 \left[(\o{f}-\o{g})(\o{f}-g)-(\frac{h_1}{q}-h_2 q)(\frac{h_1}{q}-h_2)\frac{q}{h_1}\right]P_d(\frac{h_1}{q},\o{f})\\[5mm] -h_1^4h_2^2\left[(\frac{\o{f}q}{h_1}-\frac{\o{g}}{h_2 q})(\frac{\o{f}q}{h_1}-\frac{g}{h_2}) -(\frac{q}{h_1}-\frac{1}{h_2 q})(\frac{q}{h_1}-\frac{1}{h_2})\frac{h_1}{q}\right]P_n(\frac{h_1}{q},\o{f})=0. \end{array}$$ This equation is linear in both $\o{g}$ and $g$, and of degree 4 in $\o{f}$ since the degree-5 terms cancel. Hence $\o{g}(\o{f},g)$ \[$g(\o{f}, \o{g})$\] is of degree (4,1). 
By a computation similar to eq.(\[eq:Ycoef\]), eq.(\[eq:Vforg\]) is written as $$\{g-g(\frac{h_1}{q u})\}\Big\{q^3 P_d(\frac{h_1}{q},\o{f}(u))+h_1^3 u^2 P_n(\frac{h_1}{q},\o{f}(u))\Big\}= \{g-g(\frac{h_1}{q u})\}(\frac{h_1}{u})^3\o{f}(u)U(u)=0,$$ when $\o{f}=\o{f}(u)$ and $\o{g}=\o{g}(u)$. Then it follows that (1) if $\o{f}=\o{f}(u_i)$ then $\o{g}=\o{g}(u_i)$ (regardless of $g$) and (2) if $\o{f}=\o{f}(u)$ and $\o{g}=\o{g}(u)$ then $g=g(\frac{h_1}{q u})$ for $u \neq u_i$. Hence the properties (i), (ii) are verified. Finally, the uniqueness follows from a dimensional argument. \[lem:gog\] Let $F(\o{f},g)=0$ be any curve of degree $(3,2)$ passing through the 10 points in (ii) of Lemma.\[lem:L1u\]. Then the curve in variables $(\o{f}, \o{g})$ obtained from $F(\o{f}, g(\o{f},\o{g}))=0$ is also of degree $(3,2)$ and passes through the 10 points $(\o{f},\o{g})=(\o{f}(u),\o{g}(u))$ with $u=u_1, \ldots, u_8, z, \frac{h_1}{z}$. [*Proof.*]{} From Lemma \[lem:gueq-str\], we have $g=\frac{A_{41}}{B_{41}}$ where $A_{41}, B_{41}$ are polynomials of degree $(4,1)$ in $(\o{f}, \o{g})$ vanishing at the 8 points : $(\o{f}(u_i), \o{g}(u_i))_{i=1}^8$. Substituting this into $F(\o{f}, g)$, we have $F(\o{f},\frac{A_{41}}{B_{41}})=\frac{P_{11,2}}{(B_{41})^2}$, where $P_{11,2}(\o{f}, \o{g})$ is a polynomial of degree $(11,2)$ vanishing at the 8 points with multiplicity 2. Since $F(\o{f}, g(\o{f},\o{g}))|_{\o{f}=\o{f}(u_i)}=F(\o{f}(u_i),g(u_i))=0$ (regardless of $\o{g}$), the polynomial $P_{11,2}$ is factorized as $P_{11,2}= \tilde{F}_{32}(\o{f}, \o{g}) \prod_{i=1}^8 \{\o{f}-\o{f}(u_i)\}$ and $\tilde{F}_{32}=0$ gives the desired curve of degree $(3,2)$ passing through the 8 points.[^1] The other two vanishing conditions follow from the relation $\tilde{F}_{32}(\o{f}(u), \o{g}(u))=0 \Leftrightarrow F(\o{f}(u)=\o{f}(\frac{h_1}{q u}), g(\frac{h_1}{q u}))=0$ which holds for generic $u(\neq u_i)$. 
\[lem:10pt-cond\] The equation $L_{1u}=0$ gives a curve of degree $(3,2)$ in $(\o{f},\o{g})$ passing through the 10 points $(\o{f},\o{g})=(\o{f}(u),\o{g}(u))$ with $u=u_1, \ldots, u_8, z, \frac{h_1}{z}$. [*Proof.*]{} It follows from Lemmas \[lem:L1u\] and \[lem:gog\]. Lemma \[lem:10pt-cond\] ensures the vanishing properties of $L_{1u}$ which must be satisfied by the $T$-evolution of $L_1$ at the first 10 points. In order to prove the main theorem, it is enough to prove the following. The equation $L_{1u}$ vanishes at $\o{Q}(z)$ and $\o{Q}(\frac{z}{q})$, where $\o{Q}(u)=(\o{f},\o{g})$ is defined by $\o{f}=\o{f}(u)$ and $\dfrac{\o{g}-\o{g}(u)}{\o{g}-\o{g}(\frac{h_1}{qu})}=\dfrac{\o{Y}(qu)}{\o{Y}(u)}$, for $u=z, \frac{z}{q}$. [*Proof.*]{} When $\o{f}=\o{f}(\frac{z}{q})$, from eq.(\[eq:gueq\]) and eq.(\[eq:Prel2\]), we have $$\begin{array}{rl} \dfrac{g-g(\frac{z}{q})}{g-g(\frac{h_1}{z})}&=\dfrac{\o{g}-\o{g}(\frac{h_1}{z})}{\o{g}-\o{g}(\frac{z}{q})} \dfrac{h_1 q}{z^2}\dfrac{P_d(\frac{h_1}{q},\o{f}(\frac{z}{q}))+(\frac{h_1}{q})^3(\frac{z}{q})^2 P_n(\frac{h_1}{q},\o{f}(\frac{z}{q}))} {P_d(\frac{h_1}{q},\o{f}(\frac{z}{q}))+(\frac{h_1}{q})^3(\frac{h_1}{z})^2 P_n(\frac{h_1}{q},\o{f}(\frac{z}{q}))}\\[4mm] &=\dfrac{\o{g}-\o{g}(\frac{h_1}{z})}{\o{g}-\o{g}(\frac{z}{q})}\dfrac{h_1^4 q^4}{z^8}\dfrac{U(\frac{z}{q})}{U(\frac{h_1}{z})}. \end{array}$$ Then, from the residue of eq.(\[eq:L1up\]) at $\o{f}=\o{f}(\frac{z}{q})$, we have $$\dfrac{\o{Y}(\frac{z}{q})}{\o{Y}(z)}=\dfrac{z^8}{h_1^4q^4}\dfrac{U(\frac{h_1}{z})}{U(\frac{z}{q})}\dfrac{g-g(\frac{z}{q})}{g-g(\frac{h_1}{z})} =\dfrac{\o{g}-\o{g}(\frac{h_1}{z})}{\o{g}-\o{g}(\frac{z}{q})}.$$ This is the desired relation for $u=\frac{z}{q}$. The relation for $u=z$ is similar. The proof of the main theorem is completed. Degenerations {#sect:dege} ============= In this section, we consider the degeneration limits of the $E^{(1)}_8$ system to $E^{(1)}_7$, $E^{(1)}_6$ and $D^{(1)}_5$. 
For the corresponding $q$-Painlevé equations see [@KMNOY2], [@Tsuda2] and references therein. Degeneration from $E^{(1)}_8$ to $E^{(1)}_7$ -------------------------------------------- We put $(h_1, h_2)=(t \epsilon, \frac{\epsilon}{t})$ and $(u_1, \ldots, u_8)=(b_1,\ldots,b_4,\frac{\epsilon}{b_5}, \ldots, \frac{\epsilon}{b_8})$, hence $q=\frac{b_5b_6b_7b_8}{b_1b_2b_3b_4}$. In the limit $\epsilon \rightarrow 0$ we have $$\begin{array}{ll} P_n(h_2, g) \rightarrow g^4 B_1(\frac{1}{g}), & P_d(h_2, g) \rightarrow \epsilon^4 \frac{g^4}{q} B_2(\frac{1}{tg}), \\[4mm] P_n(\frac{h_1}{q}, \o{f}) \rightarrow \o{f}^4 B_1(\frac{1}{\o{f}}), & P_d(\frac{h_1}{q}, \o{f}) \rightarrow \epsilon^4 \frac{\o{f}^4}{q} B_2(\frac{t}{q \o{f}}), \\[4mm] U(\frac{z}{q}) \rightarrow (\frac{z}{q})^8 B_1(\frac{q}{z}), & U(\frac{h_1}{z}) \rightarrow \epsilon^4 \frac{1}{q} B_2(\frac{t}{z}), \end{array}$$ where $$B_1(z)=\prod_{i=1}^4(1-b_i z),\quad B_2(z)=\prod_{i=5}^8(1-b_i z).$$ In addition to this limiting procedure, we also make a change of coordinates: $g \rightarrow \frac{1}{g}$. Then the configuration of the 8 points is $(f,g)= (b_i,\frac{1}{b_i})_{i=1,\ldots,4}$, and $(b_i t,\frac{t}{b_i})_{i=5,\ldots,8}$. They lie on the curve $(fg-1)(fg-t^2)=0$. The $q$-Painlevé equation of type $E^{(1)}_7$ is then given by $(b_i,t,f,g) \mapsto (b_i,\frac{t}{q},\o{f},\o{g})$, where $$\begin{array}l \dfrac{(fg-1)(\o{f}g-1)}{(fg-t^2)(\o{f}gq-t^2)}=\dfrac{B_1(g)}{t^4 B_2(\frac{g}{t})}, \\[5mm] \dfrac{(\o{f}g-1)(\o{f}\o{g}-1)}{(\o{f}g-t^2)(\o{f}\o{g}q^2-t^2)}=\dfrac{B_1(\frac{1}{\o{f}})}{q^3 B_2(\frac{t}{\o{f}q})}. 
\end{array}$$ The Lax pair is $$\begin{array}{rl} L_1=&\dfrac{(1-t^2)}{g z^2}\left[\dfrac{q {B_1}(g)}{(f g-1) (g z-q)} -\dfrac{t^4 {B_2}(\frac{g}{t})}{(f g-t^2) (g z-t^2)}\right] Y(z)\\[6mm] &+\dfrac{{B_2}(\frac{t}{z})}{t^2(f-z)}\left[Y(qz)-\dfrac{t^2(1-gz)}{t^2-gz}Y(z)\right]\\[6mm] &+\dfrac{t^2{B_1}(\frac{q}{z})}{q(fq-z)}\left[Y(\frac{z}{q})-\dfrac{q t^2-gz}{t^2(q-gz)}Y(z)\right]=0, \end{array}$$ and $$L_2=\frac{qt^2 - gz}{t^2}Y(z)+ (gz-q)Y(\frac{z}{q})+ \frac{gz(fq - z)}{q^2}\o{Y}(\frac{z}{q})=0.$$ Degeneration from $E^{(1)}_7$ to $E^{(1)}_6$ -------------------------------------------- Degeneration from $E^{(1)}_7$ to $E^{(1)}_6$ is obtained by putting $b_5 \rightarrow b_5/\epsilon$, $b_6 \rightarrow b_6/\epsilon$, $b_7 \rightarrow b_7 \epsilon$, $b_8 \rightarrow b_8 \epsilon$ and $t \rightarrow t \epsilon$ and taking the limit $\epsilon \rightarrow 0$. The configuration of the 8 points: $(f,g)= (b_i,\frac{1}{b_i})_{i=1,\ldots,4}$, $(b_i t,0)_{i=5,6}$ and $(0,\frac{t}{b_i})_{i=7,8}$. The $q$-Painlevé equation: $$\begin{array}l \dfrac{(f g-1) (\o{f} g-1) }{f \o{f} }=\dfrac{q(b_{1} g-1) (b_{2} g-1) (b_{3} g-1) (b_{4}g-1)}{b_5 b_6(b_{7} g-t) (b_{8}g-t)},\\[6mm] \dfrac{(\o{f} g-1) (\o{f} \o{g}-1)}{ g \o{g}}=\dfrac{q^2 (b_{1}-\o{f}) (b_{2}-\o{f}) (b_{3}-\o{f}) (b_{4}-\o{f})} {(\o{f} q-b_{5} t) (\o{f} q-b_{6} t)}. \end{array}$$ The Lax pair: $$\begin{array}{rl} L_1=&\dfrac{(b_{1} q-z) (b_{2} q-z) (b_{3} q-z) (b_{4} q-z) t^2}{q (f q-z) z^4}\Big[Y(\frac{z}{q})-\dfrac{g z}{t^2(g z-q)}Y(z)\Big]\\[5mm] &+\Big[\dfrac{(b_{1} g-1)(b_{2} g-1) (b_{3} g-1) (b_{4} g-1) q }{g (f g-1) z^2 (g z-q)} -\dfrac{b_5b_6(b_{7} g-t) (b_{8}g-t)}{f g z^3}\Big]Y(z)\\[5mm] &+\dfrac{(b_{5} t-z) (b_{6} t-z)}{(f-z) z^2 t^2} \Big[Y(q z)-\dfrac{(g z-1) t^2 }{g z}Y(z)\Big]=0,\\[6mm] L_2=&\dfrac{g z}{t^2}Y(z)+(q-g z) Y(\frac{z}{q})-\dfrac{g z(f q-z)}{q^2}\o{Y}(\frac{z}{q})=0. 
\end{array}$$ A $2\times 2$ matrix Lax formalism for the $q$-Painlevé equation with $E^{(1)}_6$-symmetry has been obtained by Sakai [@SakaiA2]. The scalar Lax equation given here may be equivalent to Sakai’s, though we have not confirmed this so far. Degeneration from $E^{(1)}_6$ to $D^{(1)}_5$ -------------------------------------------- Degeneration from $E^{(1)}_6$ to $D^{(1)}_5$ is obtained by putting $b_1 \rightarrow b_1/\epsilon$, $ b_2 \rightarrow b_2/\epsilon$, $ b_3 \rightarrow b_3 \epsilon$, $ b_4 \rightarrow b_4 \epsilon$, $ f \rightarrow f \epsilon$, $ g \rightarrow g \epsilon$, $ t \rightarrow t \epsilon$, $ z \rightarrow z \epsilon$, $\o{Y} \rightarrow \epsilon^{-3} \o{Y}$, and taking the limit $\epsilon \rightarrow 0$. The configuration of the 8 points: $(f,g)= (\infty,\frac{1}{b_i})_{i=1,2}$, $(b_i,\infty)_{i=3,4}$, $(b_i t,0)_{i=5,6}$ and $(0,\frac{t}{b_i})_{i=7,8}$. The $q$-Painlevé equation: $$\begin{array}l f \o{f}=\dfrac{b_5b_6(b_{7} g-t) (b_{8}g-t)}{q(b_{1} g-1) (b_{2} g-1)},\\[6mm] g \o{g}=\dfrac{(\o{f} q-b_{5} t) (\o{f}q-b_{6} t)}{q^2 b_{1} b_{2} (b_{3}-\o{f}) (b_{4}-\o{f})}. \end{array}$$ The Lax pair: $$\begin{array}{rl} L_1=&\dfrac{b_{1} b_{2} q (b_{3} q-z) (b_{4} q-z) t^2}{(f q-z)z^2}\Big[Y(\frac{z}{q})+\dfrac{q z}{q t^2}Y(z)\Big]\\[5mm] &+\Big[\dfrac{(b_{1} g-1) (b_{2} g-1)}{g} -\dfrac{b_{5} b_{6}(b_{7} g-t) (b_{8} g-t)}{f g z}\Big]Y(z)\\[5mm] &+\dfrac{(b_{5} t-z) (b_{6} t-z)}{(f-z) t^2}\Big[Y(q z)+\dfrac{t^2 }{g z}Y(z)\Big]=0,\\[6mm] L_2=&\dfrac{g z}{t^2} Y(z)+qY(\frac{z}{q})-\dfrac{gz(fq-z)}{q^2}\o{Y}(\frac{z}{q})=0. \end{array}$$ This result is essentially equivalent to the original $2 \times 2$ construction by Jimbo-Sakai [@JS]. Further degenerations of this have been comprehensively studied by Murata [@Murata2]. The Lax formalism of Jimbo-Sakai is also derived from the $q$-KP/UC hierarchy, and this method is also effective for higher order generalizations [@Tsuda]. 
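Before turning to the symmetry background, we note that the explicit expansion (\[eq:Prel3\]) of $P_n$ can be cross-checked against its defining relation (\[eq:Prel1\]) by exact rational arithmetic. The sketch below (the sample values and helper code are our own, purely for verification) evaluates both sides of $(z-\frac{h}{z})P_n(h,z+\frac{h}{z})=\frac{1}{z^3}U(z)-(\frac{z}{h})^3U(\frac{h}{z})$ at random rational points:

```python
from fractions import Fraction as F
import random

random.seed(1)

# Random rational sample values (our choice, purely for the check)
u = [F(random.randint(1, 9), random.randint(1, 9)) for _ in range(8)]
h = F(3, 2)
z = F(5, 3)

# Elementary symmetric polynomials m_0, ..., m_8 of u_1, ..., u_8,
# so that U(z) = prod_i (z - u_i) = sum_k (-1)^k m_k z^(8-k).
m = [F(1)] + [F(0)] * 8
for v in u:
    for k in range(8, 0, -1):
        m[k] += v * m[k - 1]

def U(x):
    p = F(1)
    for ui in u:
        p *= x - ui
    return p

def Pn(h, g):
    # Explicit formula (eq:Prel3) for P_n(h, g)
    return (m[0] * g**4 - m[1] * g**3
            + (m[2] - 3 * h * m[0] - m[8] / h**3) * g**2
            + (2 * h * m[1] - m[3] + m[7] / h**2) * g
            + (h**2 * m[0] - h * m[2] + m[4] - m[6] / h + m[8] / h**2))

# Defining relation (eq:Prel1), first line, holds exactly
lhs = (z - h / z) * Pn(h, z + h / z)
rhs = U(z) / z**3 - (z / h)**3 * U(h / z)
assert lhs == rhs
```

The same pattern extends to $P_d$ and the relations (\[eq:Prel2\]) if one wishes to verify the remaining identities.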
Weyl group actions ================== Here, we discuss the affine Weyl group symmetry of the $q$-Painlevé equation (\[eq:fueq\]), (\[eq:gueq\]), based on the constructions in [@KMNOY1],[@Murata] and [@Tsuda2]. Define multiplicative transformations $s_{ij}$, $c$, $\mu_{ij}$, $\nu_{ij}$ ($1\leq i\neq j \leq 8$) acting on variables $h_1, h_2, u_1, \ldots, u_8$ as $$\label{eq:pic-action} \begin{array}l s_{ij} = \{u_i \leftrightarrow u_j\},\qquad c= \{h_1 \leftrightarrow h_2\},\\ \mu_{ij} = \{h_1 \mapsto \dfrac{h_1h_2}{u_iu_j}, \quad u_i \mapsto \dfrac{h_2}{u_j}, \quad u_j \mapsto \dfrac{h_2}{u_i}\},\\ \nu_{ij} = \{h_2 \mapsto \dfrac{h_1h_2}{u_iu_j}, \quad u_i \mapsto \dfrac{h_1}{u_j}, \quad u_j \mapsto \dfrac{h_1}{u_i}\}. \end{array}$$ These actions generate the affine Weyl group of type $E^{(1)}_8$. A choice of simple reflections is $$\begin{array}{cccccccccccccccccc} &&&&s_{12}\\ &&&&\vert\\ c&-&\mu_{12}&-&s_{23}&-&s_{34}&-&\cdots&-&s_{78} &&. \end{array}$$ The transformations (\[eq:pic-action\]) naturally act on the 8 points configuration $(f_i,g_i)=(u_i+\frac{h_1}{u_i}, u_i+\frac{h_2}{u_i})$ as $\mu_{12}(f_1)=\frac{h_1}{u_1}+\frac{h_2}{u_2}$, $\mu_{12}(g_1)=u_1+\frac{h_2}{u_1}$, for example. Then one can extend the Weyl group actions bi-rationally to the generic variables $(f,g) \in \P^1\times \P^1$. The nontrivial actions are as follows: $$\begin{array}l c(f)=g, \quad c(g)=f, \quad \mu_{ij}(f)=\tilde{f},\quad \nu_{ij}(g)=\tilde{g}, \end{array}$$ where $\tilde{f}$ and $\tilde{g}$ are rational functions in $(f,g)$ defined by $$\begin{array}l \dfrac{\tilde{f}-\mu_{ij}(f_i)}{\tilde{f}-\mu_{ij}(f_j)}=\dfrac{(f-f_i)(g-g_j)}{(f-f_j)(g-g_i)},\\[6mm] \dfrac{\tilde{g}-\nu_{ij}(g_i)}{\tilde{g}-\nu_{ij}(g_j)}=\dfrac{(g-g_i)(f-f_j)}{(g-g_j)(f-f_i)}. \end{array}$$ Let $r$ and $T_1$ be the following compositions $$\begin{array}l r=s_{12}\mu_{12}s_{34}\mu_{34}s_{56}\mu_{56}s_{78}\mu_{78}, \quad T_1=crcr. 
\end{array}$$ Their actions on the variables $(h_i, u_i, f, g)$ are given by $$\begin{array}{lllll} r(h_1)=v h_2, & r(h_2)=h_2, & r(u_i)=\dfrac{h_2}{u_i},& r(f)=\o{f} v, & r(g)=g, \\ T_1(h_1)=\frac{h_1}{q} v^2, & T_1(h_2)=q h_2 v^2, & T_1(u_i)=u_i v, & T_1(f)=\o{f} v, & T_1(g)=\o{g} v, \end{array}$$ where $v=qh_2/h_1$. Hence, the evolution $T$ of the $q$-Painlevé equation (\[eq:T-bar\]) is the affine Weyl group translation $T_1$ up to a re-scaling of the parameters and variables. [**Acknowledgment.**]{} The author would like to thank N.Joshi, K.Kajiwara and T.Tsuda for their interests and discussions. This work is partly supported by JSPS KAKENHI No.21340036 and S-19104002. [A]{} D.Arinkin and A.Borodin, [*Moduli spaces of $d$-connections and difference Painlevé equations*]{}, Duke Math. J. [**134**]{} (2006) 515-556. P.Boalch, [*Quivers and difference Painlevé equations*]{}, Groups and symmetries, 25–51, CRM Proc. Lecture Notes, 47, Amer. Math. Soc., Providence, RI, 2009. M.Jimbo and H.Sakai, [*A $q$-analog of the sixth Painlevé equation*]{}, Lett. Math. Phys. [**38**]{} (1996) 145-154. K.Kajiwara, T.Masuda, M.Noumi, Y.Ohta and Y.Yamada, [*${}_{10}E_9$ solution to the elliptic Painlevé equation*]{}, J. Phys. [**A36**]{} (2003) L263-L272. K.Kajiwara, T.Masuda, M.Noumi, Y.Ohta and Y.Yamada, [*Hypergeometric solutions to the $q$-Painlevé equations*]{}, IMRN 2004 [**47**]{} (2004) 2497-2521. M.Murata, [*New expressions for discrete Painlevé equations*]{}, Funkcial. Ekvac. [**47**]{} (2004) 291-305. M.Murata, [*Lax forms of the $q$-Painlevé equations*]{}, J. Phys. A: Math. Theor. [**42**]{} (2009) 115201. Y. Ohta, A. Ramani and B. Grammaticos, [*An affine Weyl group approach to the eight-parameter discrete Painlevé equation*]{}, J. Phys. A: Math. Gen. [**34**]{} (2001) 10523. H.Sakai, [*Rational surfaces with affine root systems and geometry of the Painlevé equations*]{}, Commun. Math. Phys. [**220**]{} (2001) 165-221. 
H.Sakai, [*Lax form of the $q$-Painlevé equation associated with the $A^{(1)}_2$ surface*]{}, J. Phys. A: Math. Gen. [**39**]{} (2006) 12203. H.Sakai, [*Problem: discrete Painlevé equations and their Lax forms*]{}, RIMS Kokyuroku Bessatsu [**B2**]{}, (2007) 195–208. T.Tsuda, [*A geometric approach to tau-functions of difference Painlevé equations*]{}, Lett. Math. Phys. [**85**]{} (2008), 65–78. T.Tsuda, [*On an integrable system of q-difference equations satisfied by the universal characters: Its Lax formalism and an application to q-Painlevé equations*]{}, Commun. Math. Phys. [**293**]{} (2010) 347–359. Y.Yamada, [*A Lax formalism for the elliptic difference Painlevé equation*]{}, SIGMA [**5**]{} (2009), 042. [^1]: In terms of the Picard lattice, this part corresponds to the fact that $3H_1+2H_2-\sum_{i=1}^8 E_i$ is invariant under the substitution $H_2 \mapsto 4H_1+H_2-\sum_{i=1}^8 E_i$, $E_i \mapsto H_1-E_i$ ($i=1,\ldots,8$).
--- abstract: 'Bell’s theorem states that quantum mechanics is not a locally causal theory. This statement is often interpreted as nonlocality in quantum mechanics. Toner and Bacon \[Phys. Rev. Lett. **91**, 187904 (2003)\] have shown that a shared random-variables theory augmented by one bit of classical communication exactly simulates the Bell correlation in a singlet state. In this paper, we show that in the Toner and Bacon protocol, one of the parties (Bob) can deduce the other party’s (Alice’s) measurement outputs, if she only informs Bob of one of her own outputs. Afterwards, we suggest a nonlocal version of the Toner and Bacon protocol wherein classical communication is replaced by nonlocal effects, so that Alice’s measurements cause instantaneous effects on Bob’s outputs. In the nonlocal version of the Toner and Bacon protocol, we get the same result again. We also demonstrate that the same approach is applicable to Svozil’s protocol. PACS numbers: 03.65.Ud, 03.67.-a, 03.67.Hk, 03.65.Ta' author: - Akbar Fahmi title: Investigation of quantum entanglement simulation by random variables theories augmented by either classical communication or nonlocal effects --- Introduction ============ The development of quantum mechanics (QM) in the early twentieth century obliged physicists to radically change some of the concepts they employed to describe the world. Entanglement was first viewed as a source of some paradoxes, most notably the Einstein-Podolsky-Rosen (EPR) paradox [@EPR], which explicitly states that any physical theory must satisfy both local and realistic conditions. These conditions then manifest themselves in the so-called Bell inequality [@Bell; @Bell1]. However, this inequality is violated by quantum predictions. This violation is often referred to as quantum nonlocality and has been recognized as the most intriguing quantum feature. 
The Bell inequality has been derived in different ways [@CHSH; @CH] and, over the past 30 years, various types of Bell’s inequalities have undergone a wide variety of experimental tests. All of them give strong evidence against local hidden variable theories [@As]. These results are often interpreted as nonlocality in quantum mechanics. Now we face an interesting question: *How much nonlocality or how many classical resources are required to simulate quantum systems?* An insightful approach to such a simulation is to characterize information processing tasks in which two parties share random classical resources and communicate various types of classical bits. In this direction, the simulation of Bell’s correlation by shared random-variable (SRV) models augmented by classical communication or nonlocal effects has recently attracted a lot of attention [@Mau; @Brass; @St; @Bacon; @De; @Non; @svozil; @Gisin1; @Cerf1; @Cav1; @Cav2]. The question of whether a simulation can be done with a finite amount of communication has been considered independently by Maudlin [@Mau], Brassard, Cleve, and Tapp [@Brass], and Steiner [@St]. Brassard, Cleve, and Tapp showed that $8$ bits of communication suffice for a perfect (analytic) simulation of the quantum predictions. Steiner, followed by Gisin and Gisin [@Gisin1], showed that if one allows the number of bits to vary from one instance to another, then $2$ bits are sufficient on average. It has also been shown that if many singlets have to be simulated in parallel, then block coding can be used to reduce the number of communicated bits to $1.19$ bits on average [@Cerf1]. A few years later, Toner and Bacon [@Bacon] improved these results and showed that a simulation of the Bell correlation (singlet state) is possible by implementing only one bit of classical communication. Toner and Bacon concluded that their results prove that a minimal amount, one bit, is sufficient to simulate projective measurements on Bell states. 
In the same way, Svozil suggested another model [@svozil] which is based on the Toner and Bacon protocol (TB protocol) and is more nonlocal. Afterward, Tessier *et al.* showed that it is possible to reproduce the quantum-mechanical measurement predictions for the set of all $n$-fold products of Pauli operators on an $n$-qubit GHZ state using only Mermin-type random variables and $n-2$ bits of classical communication [@Cav1]. With a similar approach, Barrett *et al.* proposed a communication-assisted random-variables model that yields the correct outcome for the measurement of any product of Pauli operators on an arbitrary graph state [@Cav2]. Independently of the above developments, Popescu and Rohrlich [@PR1] have dealt with the question: Can there be correlations stronger than the quantum mechanical ones that remain causal (i.e., do not allow signaling)? Their answer draws upon exhibiting an abstract nonlocal box for which instantaneous communication remains impossible. This nonlocal box is such that the Clauser-Horne-Shimony-Holt (CHSH) inequality is violated by the algebraic maximum value of $4$, while quantum correlations achieve at most $2\sqrt{2}$ [@PR1; @JM]. There is a question of interest: If perfect nonlocal boxes would not violate causality, why do the laws of quantum mechanics only allow us to implement nonlocal boxes better than anything classically possible, yet not perfectly [@Bra2]? Recently, van Dam and Cleve considered communication complexity as a physical principle to distinguish physical theories from nonphysical ones. They proved that the availability of perfect nonlocal boxes makes the communication complexity of all Boolean functions trivial [@Dam]. Afterwards, Brassard *et al.* [@Bra2] showed that in any world in which communication complexity is nontrivial, there must be a bound on how much nature can be nonlocal. 
Besides, Pawlowski *et al.* [@IC] defined information causality as a candidate for one of the fundamental assumptions of quantum theory which distinguish physical theories from nonphysical ones. In fact, Svozil’s model simulates the nonlocal box suggested in [@PR1]. In this paper, we review the TB model which simulates Bell correlations [@Bacon]. We show that if Alice informs Bob of one of her outputs, he can deduce Alice’s measurement results with no need for more classical communication or other resources. Afterwards, we propose a nonlocal version of the TB protocol (NTB) and of Svozil’s protocol (NS), in order to construct a structure similar to the nonlocal box model [@PR1]. The NTB (NS) model is an imaginary device (it includes two input-output ports, one at Alice’s location and another at Bob’s location) in which classical communication is replaced with instantaneous nonlocal effects. In the NTB (NS) model, we get the same result as before. Moreover, it can be proved that the availability of a perfect NTB protocol makes the communication complexity of all Boolean functions trivial. This article is organized as follows: In Sec. II we briefly review the original TB protocol [@Bacon] and show that if Alice only informs Bob of one of her outputs, he can infer Alice’s outputs without any need for more classical communication. Moreover, we apply our approach to Svozil’s protocol [@svozil]. In Sec. III we extend the TB protocol to a nonlocal case by replacing classical communication bits (cbits) with nonlocal effects. In this new protocol, Alice’s measurements cause a nonlocal effect on Bob’s outputs. We also show that in this situation, if Alice only informs Bob of one of her outputs, he can deduce Alice’s measurement outputs without any need for more classical communication. In Sec. IV we summarize our results. 
Bob infers Alice’s Measurement outputs in the TB and the Svozil protocols ========================================================================= In this section, we briefly review the TB and Svozil protocols and show how in these protocols Bob can infer Alice’s measurement outputs. Toner and Bacon protocol ------------------------ Consider Bell’s experiment setup in which a source emits two spin-$\frac{1}{2}$ particles (or qubits) to two spatially separated parties (conventionally named Alice and Bob). The state of the shared qubits is the entangled Bell singlet state (also known as an EPR pair) $|\psi^{-}\rangle={1 \over \sqrt{2}} \left(|+ \rangle_A |- \rangle_B - |- \rangle_A |+ \rangle_B\right)$. The spin states $|+\rangle$, $|-\rangle$ are defined with respect to a local set of coordinate axes: $|+\rangle$ ($|- \rangle$) corresponds to spin-up (spin-down) along the local $\hat{z}$ direction. Alice and Bob each measure their qubit’s spin along a direction parametrized by the three-dimensional unit vectors $\hat{a}$ and $\hat{b}$, respectively. Alice and Bob obtain results $\alpha\in\{+1,-1\}$ and $\beta \in\{+1,-1\}$, respectively, which indicate whether the spin was pointing along ($+1$) or opposite ($-1$) the direction each party chose to measure. Alice’s and Bob’s marginal outputs appear random, with expectation values $\langle \alpha\rangle = \langle \beta \rangle =0$; the joint expectation values are correlated such that $\langle \alpha \beta \rangle = - \hat{a}\cdot \hat{b}$. Bell’s correlations have three simple properties: (i) if $\hat a= \hat b$, then Alice’s and Bob’s outputs are perfectly anticorrelated, i.e., $\alpha = - \beta$; (ii) if Alice (Bob) reverses her (his) measurement axis $\hat a\rightarrow -\hat a$ ($\hat b\rightarrow -\hat b$), the outputs are flipped $\alpha\rightarrow -\alpha$ ($\beta\rightarrow -\beta$); and (iii) the joint expectation values depend on $\hat a$ and $\hat b$ only via the combination $-\hat a \cdot \hat b$. 
Now, in order to answer the aforementioned question about what classical resources are required to simulate Bell-state correlations, Toner and Bacon revised the original Bell model [@Bell] and obtained the minimal number of classical resources needed to simulate the Bell states. They gave a local hidden variables model augmented by just one bit of classical communication to reproduce these three properties for all possible axes \[Bell’s model fails to reproduce property (iii)\]. In the TB protocol, Alice and Bob share two independent random variables $\hat{\lambda}_{1}$ and $\hat{\lambda}_{2}$ which are real three-dimensional unit vectors. They are independently chosen and uniformly distributed over the unit sphere (infinite communication at this stage). Alice measures along $\hat{a}$; Bob measures along $\hat{b}$. They obtain $\alpha\in\{+1,-1\}$ and $\beta\in\{+1,-1\}$, respectively. The TB protocol proceeds as follows: (1) Alice’s output is $\alpha=-\rm sgn(\hat{a}\cdot\hat{\lambda}_{1})$. (2) Alice sends a single bit $ c \in \{-1, +1\}$ to Bob, where $c=\rm sgn(\hat{a}\cdot\hat{\lambda}_{1})\rm sgn(\hat{a}\cdot\hat{\lambda}_{2})$. (3) Bob’s output is $\beta=\rm sgn[\hat{b}\cdot(\hat{\lambda}_{1}+c\hat{\lambda}_{2})]$, where the $\rm sgn$ function is defined by $\rm sgn(x)=+1$ if $x \geq 0$ and $\rm sgn(x)=-1$ if $x < 0$. The joint expectation value $\langle\alpha\beta\rangle$ is given by $$\begin{aligned} \langle \alpha \beta \rangle = E\Bigg\{-\rm sgn (\hat{a} \cdot \hat{\lambda}_{1}) \times \sum_{d=\pm 1} \frac{(1+cd)}{2}\rm sgn \left[\hat b \cdot(\hat{\lambda}_{1} + d \hat{\lambda}_{2})\right] \Bigg\}\end{aligned}$$ where $E\left\{ x \right\} ={1 \over (4\pi)^2} \int d \hat{\lambda}_{1} \int d \hat{\lambda}_{2}\, x$ and $c = \rm sgn (\hat{a} \cdot \hat{\lambda}_{1})\, \rm sgn (\hat{a} \cdot\hat{\lambda}_{2} )$. This integral can be evaluated in closed form, giving $\langle \alpha \beta \rangle = - \hat a \cdot \hat b$, as required. 
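The three protocol steps above are straightforward to simulate numerically. The following sketch (our own illustration in Python/NumPy; the function names, sample size, and seed are arbitrary choices, not part of the protocol) estimates $\langle\alpha\beta\rangle$ by Monte Carlo and reproduces $-\hat{a}\cdot\hat{b}$ to within sampling error.

```python
import numpy as np

def sgn(x):
    # sgn(x) = +1 for x >= 0, -1 otherwise, as defined in the protocol
    return np.where(x >= 0, 1.0, -1.0)

def tb_correlation(theta, n=200_000, seed=0):
    """Monte Carlo estimate of <alpha*beta> in the Toner-Bacon protocol
    for measurement axes separated by an angle theta."""
    rng = np.random.default_rng(seed)
    a = np.array([0.0, 0.0, 1.0])
    b = np.array([np.sin(theta), 0.0, np.cos(theta)])
    # lambda1, lambda2: independent, uniform on the unit sphere
    l1 = rng.normal(size=(n, 3))
    l1 /= np.linalg.norm(l1, axis=1, keepdims=True)
    l2 = rng.normal(size=(n, 3))
    l2 /= np.linalg.norm(l2, axis=1, keepdims=True)
    alpha = -sgn(l1 @ a)                      # step (1): Alice's output
    c = sgn(l1 @ a) * sgn(l2 @ a)             # step (2): the communicated bit
    beta = sgn((l1 + c[:, None] * l2) @ b)    # step (3): Bob's output
    return float(np.mean(alpha * beta))
```

For $\hat a=\hat b$ the anticorrelation $\alpha=-\beta$ holds exactly in every round, while for generic angles the estimate agrees with $-\cos\theta$ up to the usual $O(1/\sqrt{n})$ statistical error.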
*Remark 1.*– In this article, we use the terminology in which the parties have complete control over the shared random variables without referring to each other [@De; @Cerf1; @svozil; @Cerf; @BGS]. Therefore, we use SRV and hidden random-variables (HRV) interchangeably. *Remark 2.*– TB claimed that Bob obtains “no information” about Alice’s outputs from the cbit communication. In the next subsection, we show that this is not correct. ![The random unit vectors $\hat{\lambda}_{1}$ and $\hat{\lambda}_{2}$ divide the Poincaré sphere into four parts. If Alice’s measurement setting $\hat{a}$ lies in the shaded region, she sends $c=-1$, and if her measurement axis lies in the unshaded region, she sends $c=+1$ to Bob. Toner and Bacon deduce that Bob obtains no information about Alice’s output from the communication. []{data-label="TB"}](NNTB.eps){height="6cm" width="7.5cm"} Bob Finds Alice’s measurement output in the Toner and Bacon protocol -------------------------------------------------------------------- In this subsection, we shall show that in the TB protocol Bob can deduce Alice’s measurement output, if she just notifies Bob of one of her outputs, without using other classical communication. At the first stage, let us define some useful quantities. We define unit vectors $\hat{\lambda}_{1}$ and $\hat{\lambda}_{2}$ in spherical coordinates ($\theta,\phi$), in the ranges $\theta\in(0,\pi)$ and $\phi\in(0,2\pi)$, dividing $\theta$ and $\phi$ into $N$ equal parts, so that $\pi/N=\delta\ll1$ as $N\rightarrow\infty$. Now, we consider a subset of SRV $\hat{\lambda}_{1}(\theta_{1},\phi_{1}), \hat{\lambda}_{2}(\theta_{2},\phi_{2})$ in the $xy$ plane which are represented by $\left\{\hat{\lambda}^{xy}_{1}(\theta_{1}=\pi/2,\phi_{1}=l\delta),\hat{\lambda}^{xy}_{2}(\theta_{2}=\pi/2,\phi_{2}=k\delta)\right\}$, where $l,k=0,1,...,N$. 
For simplicity, we do not refer to $\theta_{1,2}=\pi/2$ and denote them as $\hat{\lambda}^{xy}_{i}(\theta_{i}=\pi/2,\phi_{i}=t\delta)\equiv\hat{\lambda}^{xy}_{i,t}$ ($i=1,2$, and $t=l,k$). Here, $\hat{\lambda}^{xy}_{i,t}$ ($i=1,2$) means that the SRV $\hat{\lambda}^{xy}_{i,t}$ makes the azimuthal angle $t\delta$ with the $\hat{x}$ axis. We select a specific subset of SRV and consider the collection $$\begin{aligned} \label{xy} \left\{\left(\hat{\lambda}^{xy}_{1,l},\hat{\lambda}^{xy}_{2,l+1}\right)\right\},\end{aligned}$$ where $\hat{\lambda}_{1,l}^{xy}\cdot\hat{\lambda}_{2,l+1}^{xy}=\cos\delta$, $\hspace{.1cm}\delta=\frac{\pi}{N}\ll1$, and the random vectors $\hat{\lambda}_{i,l+1}^{xy}$ ($i=1,2,\hspace{.2cm}l=0,...,N$) are given by applying rotation operators $R(\hat{z},\delta)\in$ SO(3) (around the $\hat{z}$ axis) to $\hat{\lambda}_{i,l}^{xy}$ $(\hat{\lambda}_{i,l+1}^{xy}=R(\hat{z},\delta)\hat{\lambda}_{i,l}^{xy},\hspace{.1cm}\forall l)$. The sequences of communicated classical bits corresponding to the above set of random variables are represented by $c^{xy}_{k}(\hat{a},\hat{\lambda}_{1,k}^{xy},\hat{\lambda}_{2,k+1}^{xy})$. For this SRV subset, the sign of exactly one of the communicated classical bits switches to a negative value, as shown by the following sequence: $$\begin{aligned} \label{C} ...,c^{xy}_{l-2}(\hat{a},\hat{\lambda}_{1,l-2}^{xy},\hat{\lambda}_{2,l-1}^{xy})&=&+1,\nonumber\\ c^{xy}_{l-1}(\hat{a},\hat{\lambda}_{1,l-1}^{xy},\hat{\lambda}_{2,l}^{xy})&=&+1,\nonumber\\ c^{xy}_{l}(\hat{a},\hat{\lambda}_{1,l}^{xy},\hat{\lambda}_{2,l+1}^{xy})&=&-1,\nonumber\\ c^{xy}_{l+1}(\hat{a},\hat{\lambda}_{1,l+1}^{xy},\hat{\lambda}_{2,l+2}^{xy})&=&+1,....\end{aligned}$$ In the above relations, we assumed that in the $l$-th round of the protocol the sign of the communicated bit changes. From this sequence, Bob deduces that Alice’s measurement setting lies in the plane with normal vector $\hat{\lambda}_{1,l}^{xy}$ \[$l$-th strip, Figs. 
\[TB\] and \[1\](a)\]. Therefore, in spherical coordinates, the azimuthal angle of $\hat{a}$ ($\phi_{\hat{a}}$) is equal to $l\delta\pm\pi/2$, with the uncertainty factor $\delta$. One should notice that in this situation, Bob cannot yet fix the polar angle ($\theta_{\hat{a}}$). ![(Color online). (a) Subsets of shared random variables lie in the $xy$ plane. The blue zone corresponds to random variables $(\hat{\lambda}_{1,l}^{xy},\hat{\lambda}_{2,l+1}^{xy})$ with $c_{l}^{xy}=-1$ (Fig. \[TB\]). According to the $c$ definition, Alice’s measurement setting $\hat{a}$ must lie in a plane with unit vector $\hat{\lambda}_{1,l}^{xy}$ (blue zone in the $l$-th round of the experiment). The azimuthal angle of $\hat{a}$ is equal to $l\delta\pm\pi/2$. (b) Subsets of shared random variables lie in the $xz$ plane. The red zone corresponds to one set of random variables $(\hat{\lambda}_{1,p}^{xz},\hat{\lambda}_{2,p+1}^{xz})$ with $c_{p}^{xz}=-1$ (Fig. \[TB\]). Alice’s measurement setting $\hat{a}$ must lie in a plane with unit vector $\hat{\lambda}_{1,p}^{xz}$ (red zone in the $p$-th round of the experiment). In fact, the thin strips sweep the surface of the unit sphere so that $c=-1$ for the $l$-th and $p$-th strips and $c=+1$ elsewhere. (c) These two strips cross each other at two small areas and, consequently, Alice’s measurement setting $\hat{a}$ is in the same (or in the opposite) direction of the unit vector which connects the origin of the Poincaré sphere to the crossing points.[]{data-label="1"}](xy1.eps "fig:"){height="6.2cm" width="6.3cm"} ![](xz1.eps "fig:"){height="6.6cm" width="5.6cm"} ![](xyz1.eps "fig:"){height="6.8cm" width="6cm"} In the second stage, in order to find $\theta_{\hat{a}}$, we can select the random variables $\hat{\lambda}_{1}$ and $\hat{\lambda}_{2}$ in a plane with unit vector $\hat{\lambda}_{1,l}^{x'y'}$, which is obtained by rotating the $\hat{x}$ axis by the amount $\phi=l\delta+\pi/2$ or $\phi=l\delta-\pi/2$ around the $\hat{z}$ axis. These hidden variables are given by $\left\{\hat{\lambda}^{x'z}_{1}(\theta,\phi=l\delta\pm\pi/2),\hat{\lambda}^{x'z}_{2}(\theta,\phi=l\delta\pm\pi/2)\right\}$ in the Poincaré sphere coordinates. However, similar to the first stage, we consider $\hat{\lambda}_{1}$ and $\hat{\lambda}_{2}$ in the $xz$ plane, represented by $\left\{\hat{\lambda}^{xz}_{1}(\theta_{1}=p\delta,\phi_{1}=0),\hat{\lambda}^{xz}_{2}(\theta_{2}=q\delta,\phi_{2}=0)\right\} \equiv\left\{\hat{\lambda}^{xz}_{1,p},\hat{\lambda}^{xz}_{2,q}\right\}$, where $p,q=0,1,...,N$. Here, $\hat{\lambda}^{xz}_{i,t}$ ($i=1,2$) means that the SRV $\hat{\lambda}^{xz}_{i,t}$ makes the polar angle $t\delta$ with the $\hat{z}$ axis. We select a specific subset of SRV and consider a collection similar to Eq. 
(\[xy\]): $$\begin{aligned} \label{xz} \left\{\left(\hat{\lambda}^{xz}_{1,p},\hat{\lambda}^{xz}_{2,p+1}\right)\right\},\end{aligned}$$ where $\hat{\lambda}_{1,p}^{xz}\cdot\hat{\lambda}_{2,p+1}^{xz}=\cos\delta$, $\delta=\frac{\pi}{N}\ll1$, and the other random vectors such as $\left(\hat{\lambda}^{xz}_{1,p+1},\hat{\lambda}^{xz}_{2,p+2}\right)$ ($i=1,2,\hspace{.2cm}p=0,...,N$) are given by applying rotation operators $R(\hat{y},\delta)\in$ SO(3) to $\left(\hat{\lambda}^{xz}_{1,p},\hat{\lambda}^{xz}_{2,p+1}\right)$, $\hspace{.1cm}\hat{\lambda}_{i,p+1}^{xz}= R(\hat{y},\delta)\hat{\lambda}_{i,p}^{xz},\hspace{.1cm}\forall p$. The corresponding sequences of communicated cbits are given by relations similar to (\[C\]) with $c^{xy}_{l}(\hat{a},\hat{\lambda}_{1,l}^{xy},\hat{\lambda}_{2,l+1}^{xy}) \rightarrow c^{xz}_{p}(\hat{a},\hat{\lambda}_{1,p}^{xz},\hat{\lambda}_{2,p+1}^{xz})$. According to this sequence, Bob deduces that $\hat{a}$ lies in a plane with unit vector $\hat{\lambda}_{1,p}^{xz}$ \[in the $p$-th strip, Figs. \[TB\] and \[1\](b)\], with the uncertainty factor $\delta$. Here, we have restricted the selected shared random variables to the two special subsets (\[xy\]) and (\[xz\]) because they are sufficient for Bob to deduce Alice’s measurement outputs. These two strips intersect each other at two points. Alice’s measurement setting $\hat{a}$ is in the same (or in the opposite) direction of the unit vector that connects the origin of the Poincaré sphere to the crossover points \[Fig. \[1\](c)\]. Now, if the parties collaborate and select a specific random variable, for example, $\hat{\lambda}_{r}$, and Alice informs Bob of only one of her outputs, Bob can deduce Alice’s measurement setting without any need for further information. For example, if Alice sends $\alpha_{r}=-{\rm sgn}(\hat{a}\cdot\hat{\lambda}_{r})=+1$ to Bob, he can deduce that the $\hat{a}$ direction lies in the up (down) semicircle. 
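The first-stage sweep can be checked numerically. In the sketch below (our own illustration; the secret direction $(\phi_{\hat a},\theta_{\hat a})$ and the resolution $N$ are arbitrary choices), we sweep the $xy$-plane pairs of collection (\[xy\]), record the bits $c^{xy}_{l}$, and recover $\phi_{\hat a}$ from the first round where the bit flips to $-1$, up to the $\pm\pi/2$ ambiguity and the resolution $\delta$; the $xz$ sweep that fixes $\theta_{\hat a}$ is entirely analogous.

```python
import numpy as np

def sgn(x):
    # sgn as in the protocol: +1 for x >= 0, -1 otherwise
    return 1.0 if x >= 0 else -1.0

N = 1000
delta = np.pi / N

# Alice's secret measurement direction (unknown to Bob)
phi_a, theta_a = 1.3, 0.9
a = np.array([np.sin(theta_a) * np.cos(phi_a),
              np.sin(theta_a) * np.sin(phi_a),
              np.cos(theta_a)])

def lam_xy(t):
    # SRV in the xy plane at azimuth t*delta
    return np.array([np.cos(t * delta), np.sin(t * delta), 0.0])

# Communicated bits c_l for the pairs (lambda1 at l*delta, lambda2 at (l+1)*delta),
# sweeping the full azimuth range
c = [sgn(a @ lam_xy(l)) * sgn(a @ lam_xy(l + 1)) for l in range(2 * N)]
flips = [l for l in range(2 * N) if c[l] < 0]

# Each flip round l pins phi_a to l*delta +/- pi/2 (mod 2*pi), up to delta
l0 = flips[0]
candidates = [(l0 * delta + np.pi / 2) % (2 * np.pi),
              (l0 * delta - np.pi / 2) % (2 * np.pi)]
err = min(abs(cand - phi_a) for cand in candidates)
```

Over a full azimuthal sweep there are exactly two flip rounds (at $\phi_{\hat a}-\pi/2$ and $\phi_{\hat a}+\pi/2$), and the recovered azimuth lands within $O(\delta)$ of the true one.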
Our approach is not restricted to the above selected subsets of hidden variables; the parties can use other sets of SRV, such as $\hat{\lambda}^{yz}_{1}$ and $\hat{\lambda}^{yz}_{2}$ which belong to the $yz$ plane, to get the same results. Svozil’s protocol ----------------- Svozil has suggested a new type of shared random-variable theory augmented by one bit of classical communication which is stronger than quantum correlations [@svozil]. It violates the CHSH inequality up to the algebraic maximum of $4$, as compared to the quantum Tsirelson bound $2\sqrt{2}$. Svozil’s protocol is similar to the Toner and Bacon protocol [@Bacon], but requires only a single random variable ${\lambda }$. The other random variable ${\Delta} (\omega )$ is obtained by rotating ${\hat \lambda }$ clockwise around the origin by an angle $\omega$, a constant shift for all experiments: ${\Delta} (\omega )=\lambda +\omega$. Alice’s outputs are given by $\alpha =- {\rm sgn}({\hat a} \cdot {\hat \lambda }) =-{\rm sgn}\left[\cos ({a} - { \lambda } )\right]$ and she sends the classical bit $c(\omega) = {\rm sgn}({\hat a} \cdot {\hat \lambda } ){\rm sgn}\left[{\hat a} \cdot {\hat \Delta} (\omega)\right]= {\rm sgn}\left[\cos ({ a} - { \lambda } )\right] {\rm sgn}\cos \left[{ a} - { \Delta} (\omega)\right]$ to Bob. Bob’s outputs are given by $\beta (\omega )= {\rm sgn}[{\hat b} \cdot ({\hat \lambda } +c(\omega){\hat \Delta} (\omega))]$. If we let $\omega$ change randomly on the Poincaré sphere, then Svozil’s protocol becomes the TB protocol (with uniform distribution). 
In the general case $0\le \omega \le \pi /2$, the correlation function is given by $$\begin{aligned} E(\theta ,\omega )= \left\{ \begin{array}{ll} -1 & \text{ for } \;\; 0\le \theta \le {\omega \over 2} , \\ -1 +{2\over \pi}(\theta -{\omega \over 2} )&\text{ for } \;\; {\omega \over 2} < \theta \le {1 \over 2}(\pi - \omega) , \\ -2(1-{2 \over \pi } \theta ) &\text{ for } \; \; {1 \over 2}(\pi - \omega) < \theta \le {1 \over 2}(\pi + \omega ) , \\ 1+ {2\over \pi }(\theta-\pi +{\omega \over 2} ) &\text{ for } \;\; {1 \over 2}(\pi + \omega ) < \theta \le \pi - {\omega \over 2} , \\ 1 & \text{ for } \;\; \pi - {\omega \over 2} < \theta \le \pi . \end{array} \right. \label{e-2004-brainteaser-2}\end{aligned}$$ The correlation function $E(\theta ,\omega )$ is stronger than the quantum correlations for all nonzero values of $\omega$. The strongest correlation function is obtained for $\omega =\pi /2$, where the two random-variable directions ${ \lambda }$ and ${ \Delta} ={ \lambda }+ \pi /2$ are orthogonal and the classical bit $c (\pi /2)$ carries information about the location of ${ a}$ within two opposite quadrants. In the case of $\omega =\pi /2$, the CHSH inequality $\vert E({ a} ,{ b} )- E({ a} ,{ b} ' )\vert+ \vert E({ a}' ,{ b} )+ E({a} ',{ b} ') \vert \le 2$ is violated by the maximal algebraic value of $4$, for ${a} =0 $, ${a}'=\pi /2$, ${b} =\pi /4$, ${b}'= 3\pi /4$ (which is the largest possible value, the same as that attained by Popescu and Rohrlich’s nonlocal box model [@PR1]). For $\omega =0$, the classical linear correlation function $E(\theta )= 2\theta /\pi -1$ is recovered, as expected. 
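As a concrete check of the claimed violation, the piecewise correlation function above can be coded directly. In the sketch below (our own illustration in Python; variable names are ours), we evaluate the standard CHSH combination $\vert E(a,b)-E(a,b')\vert+\vert E(a',b)+E(a',b')\vert$ at $\omega=\pi/2$ for $a=0$, $a'=\pi/2$, $b=\pi/4$, $b'=3\pi/4$ and recover the algebraic maximum $4$.

```python
import numpy as np

def E(theta, omega):
    """Piecewise correlation function E(theta, omega) of Svozil's protocol,
    with theta the angle between the measurement directions."""
    if theta <= omega / 2:
        return -1.0
    if theta <= (np.pi - omega) / 2:
        return -1.0 + (2 / np.pi) * (theta - omega / 2)
    if theta <= (np.pi + omega) / 2:
        return -2.0 * (1.0 - 2.0 * theta / np.pi)
    if theta <= np.pi - omega / 2:
        return 1.0 + (2 / np.pi) * (theta - np.pi + omega / 2)
    return 1.0

# CHSH combination at omega = pi/2 for a=0, a'=pi/2, b=pi/4, b'=3pi/4
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
w = np.pi / 2
S = (abs(E(abs(a - b), w) - E(abs(a - bp), w))
     + abs(E(abs(ap - b), w) + E(abs(ap - bp), w)))
```

Here $S$ evaluates to $4$ (up to floating-point rounding), and at $\omega=0$ the function collapses to the classical linear correlation $E(\theta)=2\theta/\pi-1$ on every branch.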
Bob finds Alice’s measurement outputs in Svozil’s protocol ---------------------------------------------------------- In this subsection, we consider Svozil’s arguments [@svozil] and show that Bob can deduce Alice’s measurement setting and outputs by using the cbits and only one of Alice’s outputs, without asking for any further information at the end of the protocol. Here, we select the $\omega=\frac{\pi}{2}$ case and consider the subset of SRV $$\begin{aligned} \label{} \left\{\left(\hat{\lambda}(k\delta),\hat{\Delta}(k\delta)\right)\right\},\end{aligned}$$ where $\delta=\pi/N\ll1$, $N\rightarrow\infty$, and $k=0,...,N$, and the other random vectors are related to each other by rotating $\hat{\lambda}_{k}(\equiv\hat{\lambda}(k\delta))$ and $\hat{\Delta}_{k}(\equiv\hat{\Delta}(k\delta))$ clockwise around the origin by an angle $\delta$, which acts as a constant shift for all experiments, i.e., $\lambda_{k+1}=\lambda_{k}+\delta$. The sequences of communicated classical bits corresponding to the above set of random variables are represented by $c_{k}(\pi/2, \hat{\lambda}_{k})$. For this SRV subset, at some point in the sequence of communicated classical bits the sign of the $c_{k}$’s switches to the opposite value, as shown below: $$\begin{aligned} ...,c_{l-1}(\pi/2, \hat{\lambda}_{l-1})=+1, c_{l}(\pi/2, \hat{\lambda}_{l})=+1,\\ c_{l+1}(\pi/2, \hat{\lambda}_{l+1})=-1, c_{l+2}(\pi/2, \hat{\lambda}_{l+2})=-1,...,\end{aligned}$$ where in the $(l+1)$-th round of the protocol the sign of the communicated bits changes (Fig. \[3\]). 
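The location of this sign switch is easy to verify in the planar setting. The sketch below (our own illustration; the secret angle $a$ and resolution $N$ are arbitrary choices) sweeps $\lambda_{k}=k\delta$ with $\Delta_{k}=\lambda_{k}+\pi/2$, computes $c_{k}={\rm sgn}[\cos(a-\lambda_{k})]\,{\rm sgn}[\cos(a-\lambda_{k}-\pi/2)]$, and locates the rounds where $c_{k}$ switches from $+1$ to $-1$; these pin $\lambda_{l}$ to Alice's angle $a$ modulo $\pi$, as claimed in the text.

```python
import numpy as np

def sgn(x):
    # sgn as in the protocol: +1 for x >= 0, -1 otherwise
    return 1.0 if x >= 0 else -1.0

N = 2000
delta = np.pi / N
a = 2.1          # Alice's secret planar measurement angle

# c_k for lambda_k = k*delta and Delta_k = lambda_k + pi/2 (omega = pi/2)
c = [sgn(np.cos(a - k * delta)) * sgn(np.cos(a - k * delta - np.pi / 2))
     for k in range(2 * N)]

# Rounds where c switches from +1 to -1: there lambda_k crosses a (mod pi)
switches = [k + 1 for k in range(2 * N - 1) if c[k] > 0 and c[k + 1] < 0]
estimates = [s * delta for s in switches]
err = min(min(abs(e - a), abs(e - a - np.pi), abs(e - a + np.pi))
          for e in estimates)
```

Over a full sweep there are exactly two such $+1\to-1$ switches (at $\lambda\approx a$ and $\lambda\approx a+\pi$), each localizing $a$ modulo $\pi$ to within $O(\delta)$.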
*Remark 3.*– In view of the $c_{k}$ definition and the selected random variables, if $\hat{a}$ lies in the $(+\hat{\lambda}_{k},+\hat{\Delta}_{k})$ or $(-\hat{\lambda}_{k},-\hat{\Delta}_{k})$ intervals, then $c_{k}=+1$, and for the other ranges $c_{k}=-1$: $$\begin{aligned} &&\hat{a}\in(+\hat{\lambda}_{k},+\hat{\Delta}_{k})\vee(-\hat{\lambda}_{k},-\hat{\Delta}_{k})\rightarrow c_{k}=+1,\\ &&\hat{a}\in(+\hat{\lambda}_{k},-\hat{\Delta}_{k})\vee(-\hat{\lambda}_{k},+\hat{\Delta}_{k})\rightarrow c_{k}=-1,\hspace{.1cm}\forall k.\end{aligned}$$ According to the above sequence, Bob deduces that $\hat{a}$ is in the same (or in the opposite) direction as the unit vector $\hat{\lambda}_{l}$, with the uncertainty factor $\delta$. At this stage, if Alice only informs Bob of one of her outputs, for example, $\alpha_{r}(\hat{a},\lambda_{r})=- {\rm sgn}(\hat{a}\cdot\hat{\lambda}_{r})=+1$ (or $-1$), Bob deduces that the $\hat{a}$ direction lies in the down (up) semicircle. Thus, he deduces Alice’s outputs and measurement setting without any need for further information [@Note1]. ![ Svozil’s protocol for the case of $\omega=\frac{\pi}{2}$. Here $\hat{\lambda}_{k}$ is given by rotating $\hat{\lambda}_{k-1}$ $(\forall k)$ clockwise around the origin by $\delta$, and $\hat{\lambda}_{k,\pm}=\hat{\lambda}_{k}\pm\hat{\Delta}_{k}$. According to the definition $c_{k}={\rm sgn}(\hat{a}\cdot\hat{\lambda}_{k}){\rm sgn}(\hat{a}\cdot\hat{\Delta}_{k})$, in the $(l+1)$-th round of the protocol the sign of the communicated bits changes and consequently $\hat{a}$ is parallel (or antiparallel) to $\hat{\lambda}_{l}$.[]{data-label="3"}](NSvozil1.eps){height="8.5cm" width="8cm"} Bob infers Alice’s Measurement outputs in SRV theories augmented by NonLocal Effects ==================================================================================== In the above approaches, Bob only uses a few of the classical communication bits, without referring to his own measurement outputs. 
This leads us to ask: *Can Bob find Alice’s outputs without classical communication?* In what follows, we show that the answer to this question is positive. In this section, we consider the Svozil and TB protocols with fewer assumptions, obtained by replacing classical communications with instantaneous nonlocal effects. We then get the same results as before. Nonlocal description of Svozil’s protocol ----------------------------------------- Before investigating the nonlocal TB protocol, here we modify Svozil’s argument [@svozil] by replacing classical communications with instantaneous nonlocal effects. For simplicity, we use the same notation as in Sec. II-C. The nonlocal description of Svozil’s protocol proceeds as follows: The parties share independent random variables $\hat{\lambda}$ and $\hat{\Delta}(\pi/2)$. Alice measures along $\hat{a}$ and her outputs are $\alpha_{k}=-{\rm sgn}(\hat{a}\cdot\hat{\lambda}_{k})$. Alice’s measurement causes an instantaneous nonlocal effect on Bob’s measurement outputs, so that if Bob measures in the $\hat{b}$ direction, his outputs will be $\beta_{k}={\rm sgn}[\hat{b}\cdot(\hat{\lambda}_{k}+c_{k}\hat{\Delta}_{k})]$, where $c_{k}={\rm sgn}(\hat{a}\cdot\hat{\lambda}_{k}){\rm sgn}(\hat{a}\cdot\hat{\Delta}_{k})$ (Fig. \[3\]). Let us select a subset of hidden variables and consider the collection: $$\begin{aligned} \label{S} &&\left\{\left(\hat{\lambda}_{k},\hat{\Delta}_{k}\right)\right\},\end{aligned}$$ where $\hat{\lambda}_{k}\cdot\hat{\Delta}_{k}=0$, $\hspace{.1cm}\hat{\lambda}_{k+1}=R_{clockwise}(\delta)\hat{\lambda}_{k},\hspace{.2cm} \hat{\Delta}_{k+1}= R_{clockwise}(\delta)\hat{\Delta}_{k},\hspace{.2cm}\hat{\lambda}_{k,\pm}= \hat{\lambda}_{k}\pm\hat{\Delta}_{k}, \hspace{.2cm}\forall k,\hspace{.2cm} \hat{\lambda}_{k,+}\cdot\hat{\lambda}_{k,-}=0,$ and $\delta=\frac{\pi}{N}\ll1,\hspace{.1cm}k=0,...,N$. The random variables $\hat{\lambda}_{k,\pm}$ divide the Poincaré sphere into four equal quadrants. 
*Remark 4.*– Taking into account the definition of $c_{k}$ and the selected random variables, we know that if $\hat{b}$ lies in the $(\hat{\lambda}_{k,+},\hat{\lambda}_{k,-})$ or $(-\hat{\lambda}_{k,+},-\hat{\lambda}_{k,-})$ intervals, Bob cannot deduce the nonlocal effect of $c_{k}$. Yet, for the other ranges, he can determine the value of $c_{k}$ exactly: $$\begin{aligned} \label{Non3} &&\hat{b}\in(\hat{\lambda}_{k,+},\hat{\lambda}_{k,-}),\hspace{.2cm}\beta_{k}=\pm1\rightarrow c_{k}=?,\\ &&\hat{b}\in(-\hat{\lambda}_{k,+},-\hat{\lambda}_{k,-}),\hspace{.2cm}\beta_{k}=\pm1\rightarrow c_{k}=?,\\ &&\hat{b}\in(\hat{\lambda}_{k,+},-\hat{\lambda}_{k,-}),\hspace{.2cm}\beta_{k}=\pm1\rightarrow c_{k}=\pm1,\\ &&\hat{b}\in(-\hat{\lambda}_{k,+},\hat{\lambda}_{k,-}),\hspace{.2cm}\beta_{k}=\pm1\rightarrow c_{k}=\mp1,\hspace{.1cm}\forall k.\end{aligned}$$ The collection (\[S\]) assures Bob that if $\hat{b}$ lies in one of the intervals (8) or (9), after some rounds of the experiment the sign of Bob’s outputs will switch to negative values, as given by the following sequence: $$\begin{aligned} ..., \beta_{l-1}(\pi/2, \hat{\lambda}_{l-1})=+1, \beta_{l}(\pi/2, \hat{\lambda}_{l})=+1,\\ \beta_{l+1}(\pi/2, \hat{\lambda}_{l+1})=-1, \beta_{l+2}(\pi/2, \hat{\lambda}_{l+2})=-1,....\end{aligned}$$ Here, we assumed that in the $(l+1)$-th round of the protocol the sign of Bob’s outputs changes, and so, according to *Remark 4*, Bob deduces the sequence of nonlocal effects $c_{k}$ as follows: $$\begin{aligned} \label{Sec1} ...,c_{(l-1)}=+1,c_{l}=+1,c_{(l+1)}=-1,c_{(l+2)}=-1,...,\end{aligned}$$ Therefore, Bob concludes that $\hat{a}$ is in the same (or in the opposite) direction of the unit vector $\hat{\lambda}_{l}$, with the uncertainty factor $\delta$. At this stage, if Alice informs Bob of only one of her outputs, for example, $\alpha(\hat{a},\lambda_{r})=-{\rm sgn}(\hat{a}\cdot\hat{\lambda}_{r})=+1$ (or $-1$), Bob will infer that $\hat{a}$ lies in the down (up) semicircle [@Note1]. 
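The case analysis of Remark 4 can be verified mechanically. In the planar sketch below (our own check, taking $\hat{\lambda}_{k}=\hat{x}$ and $\hat{\Delta}_{k}=\hat{y}$), Bob’s output $\beta={\rm sgn}[\hat{b}\cdot(\hat{\lambda}+c\hat{\Delta})]$ is computed for both values of $c$: in the wedge between $\hat{\lambda}_{k,+}$ and $-\hat{\lambda}_{k,-}$ the two outputs differ (so $\beta$ reveals $c$), while in the wedge between $\hat{\lambda}_{k,+}$ and $\hat{\lambda}_{k,-}$ they coincide (so $c$ stays hidden).

```python
import numpy as np

def sgn(x):
    # sgn as in the protocol: +1 for x >= 0, -1 otherwise
    return 1.0 if x >= 0 else -1.0

lam = np.array([1.0, 0.0])     # lambda_k
Delta = np.array([0.0, 1.0])   # Delta_k, orthogonal to lambda_k

def beta(b, c):
    # Bob's nonlocally affected output for a given value of c
    return sgn(b @ (lam + c * Delta))

def b_hat(phi):
    return np.array([np.cos(phi), np.sin(phi)])

# Wedge between lambda_{k,+} (azimuth pi/4) and -lambda_{k,-} (azimuth 3pi/4):
# beta depends on c, so Bob can read c off his own output.
revealing = all(beta(b_hat(phi), +1.0) != beta(b_hat(phi), -1.0)
                for phi in np.linspace(np.pi / 4 + 0.01, 3 * np.pi / 4 - 0.01, 100))

# Wedge between lambda_{k,+} (pi/4) and lambda_{k,-} (-pi/4):
# beta is the same for both values of c, so c stays hidden.
hidden = all(beta(b_hat(phi), +1.0) == beta(b_hat(phi), -1.0)
             for phi in np.linspace(-np.pi / 4 + 0.01, np.pi / 4 - 0.01, 100))
```

In the revealing wedge one even has $\beta=c$ directly, since there $\vert\hat b\cdot\hat\Delta\vert>\vert\hat b\cdot\hat\lambda\vert$ and $\hat b\cdot\hat\Delta>0$.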
Based on what we have shown, Bob can deduce Alice’s measurement setting by considering any two subsets of the SRV. Two special cases of Alice’s measurement settings are worth examining. *Remark 5.*– If the angle between the measurement settings of the parties is equal to $\mid\varphi_{\hat{a}}-\varphi_{\hat{b}}\mid=\pi/4$ or $3\pi/4$, then Bob will get one of the following outputs: $$\begin{aligned} \label{Non4} \text{If}\left\{ \begin{array}{ll} \beta_{k}=+1\hspace{.1cm}\text{for}\hspace{.1cm} \hat{b}\in(-\hat{\lambda}_{k,+},+\hat{\lambda}_{k,-}),\text{and} \\ \beta_{k}=-1 \hspace{.1cm}\text{for}\hspace{.1cm} \hat{b}\in(+\hat{\lambda}_{k,+},-\hat{\lambda}_{k,-}),\hspace{.5cm}\forall k, \end{array} \right\} \text{then, Bob will deduce}\hspace{.1cm}\mid\varphi_{\hat{a}}-\varphi_{\hat{b}}\mid=\pi/4\hspace{.2cm}\text{or}\hspace{.2cm} 3\pi/4, \hspace{.1cm}\Rightarrow\hat{a}=\pm R_{clockwise}(\pi/4)\hat{b},\nonumber\end{aligned}$$ where Bob obtains Alice’s measurement direction ($\pm\hat{a}$) by rotating $\hat{b}$ clockwise around the center by the value of $\pi/4$. $$\begin{aligned} \label{Non5} \text{If} \left\{ \begin{array}{ll} \beta_{k}=-1\hspace{.1cm}\text{for}\hspace{.1cm} \hat{b}\in(-\hat{\lambda}_{k,+},+\hat{\lambda}_{k,-}),\text{and} \\ \beta_{k}=+1 \hspace{.1cm}\text{for}\hspace{.1cm} \hat{b}\in(+\hat{\lambda}_{k,+},-\hat{\lambda}_{k,-}),\hspace{.5cm}\forall k, \end{array} \right\} \text{then, Bob will deduce}\hspace{.1cm}\mid\varphi_{\hat{a}}-\varphi_{\hat{b}}\mid=\pi/4 \hspace{.1cm}\text{or}\hspace{.2cm} 3\pi/4, \hspace{.1cm} \Rightarrow\hat{a}=\pm R_{c-clockwise}(\pi/4)\hat{b},\nonumber\end{aligned}$$ where Bob obtains $\pm\hat{a}$ by rotating $\hat{b}$ counterclockwise around the center by the value of $\pi/4$. Therefore, if Alice informs Bob of only one of her outputs, he will exactly deduce the $\hat{a}$ direction without any need for further information.
Nonlocal description of TB model -------------------------------- In this subsection, we suggest a nonlocal version of the TB protocol (NTB), an imaginary device with two input-output ports, one at Alice’s location and another at Bob’s, while Alice and Bob are spacelike separated. The NTB protocol proceeds as follows: The parties share two independent random variables $\hat{\lambda}_{1}$ and $\hat{\lambda}_{2}$. Alice measures along $\hat{a}$ and her output is $\alpha=-{\rm sgn}(\hat{a}\cdot\hat{\lambda}_{1})$. Alice’s measurement causes a nonlocal effect on Bob’s measurement outputs, so that if Bob’s measurement setting is in the $\hat{b}$ direction, his output is $\beta={\rm sgn}[\hat{b}\cdot(\hat{\lambda}_{1}+c\hat{\lambda}_{2})]$, where $c={\rm sgn}(\hat{a}\cdot\hat{\lambda}_{1}){\rm sgn}(\hat{a}\cdot\hat{\lambda}_{2})$ (Fig. \[2\]). *Remark 6.*– The TB and NTB frameworks are equivalent at the level of what they aim to calculate; we can replace the one bit of classical communication in the Toner and Bacon model [@Bacon] with a nonlocal effect so that the marginal and joint probabilities calculated in either scenario coincide with those of the other. In the NTB model, Alice and Bob know the directions of the random variables $\hat{\lambda}_{1}$ and $\hat{\lambda}_{2}$ for each round of the protocol, but the values of $c$ are not accessible to Bob. Similar to Sec. (II-A), we consider the unit vectors $\hat{\lambda}_{1}$ and $\hat{\lambda}_{2}$ in spherical coordinates ($\theta,\phi$) in the ranges $\theta\in(0,\pi)$ and $\phi\in(0,2\pi)$, and divide them into $N$ equal parts ($\delta=\pi/N\ll1$, with $N\rightarrow\infty$). To show that the selected SRV subsets (Sec. II-A) are not restricted to the collection (\[xy\]), we select a subset of SRV in which the elements of each pair are orthogonal. In the first case, we select a subset of SRV which lies in the $xy$ plane.
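The equivalence claimed in *Remark 6* can be checked with a quick Monte Carlo: with $\hat{\lambda}_{1},\hat{\lambda}_{2}$ drawn independently and uniformly on the sphere, the NTB outputs defined above should reproduce the singlet correlation $\langle\alpha\beta\rangle=-\hat{a}\cdot\hat{b}$ of the original TB model. The sketch below is illustrative only; the settings $\hat{a},\hat{b}$ and the sample size are arbitrary choices.

```python
import numpy as np

# Monte Carlo check that the NTB outputs reproduce <alpha*beta> = -a.b.
rng = np.random.default_rng(0)

def random_units(n):
    # n unit vectors uniform on the sphere (normalized Gaussians)
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n = 200_000
l1, l2 = random_units(n), random_units(n)          # shared lambda_1, lambda_2

a = np.array([0.0, 0.0, 1.0])                      # example setting a-hat (arbitrary)
b = np.array([np.sin(0.7), 0.0, np.cos(0.7)])      # example setting b-hat (arbitrary)

alpha = -np.sign(l1 @ a)                           # Alice's output
c = np.sign(l1 @ a) * np.sign(l2 @ a)              # nonlocal effect
beta = np.sign((l1 + c[:, None] * l2) @ b)         # Bob's output

E = np.mean(alpha * beta)
print(E, -a @ b)   # agree up to Monte Carlo error ~ n**-0.5
```

Varying the angle between `a` and `b` traces out the full singlet correlation curve, with no classical message used in any round.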
In the Poincaré sphere coordinates, the selected SRV is represented by $\left\{\hat{\lambda}^{xy}_{1}(\theta=\pi/2,l\delta),\hat{\lambda}^{xy}_{2}(\theta=\pi/2,k\delta)\right\} \equiv\left\{\hat{\lambda}^{xy}_{1,l},\hat{\lambda}^{xy}_{2,k}\right\}$, where $l,k=0,1,...,N$. Let us select a subset of SRV and consider the collection: $$\begin{aligned} \label{S2} \left\{\left(\hat{\lambda}^{xy}_{1,l},\hat{\lambda}_{2,l}^{xy}\right)\right\},\end{aligned}$$ where $\hat{\lambda}_{2,l}^{xy}=R(\hat{z},\pi/2)\hat{\lambda}_{1,l}^{xy}$, $\hspace{.2cm}$ $\hat{\lambda}_{1,l}^{xy}\cdot\hat{\lambda}_{2,l}^{xy}=0$, and $\hat{\lambda}_{i,l+1}^{xy}=R(\hat{z},\delta)\hat{\lambda}_{i,l}^{xy}$, $\hspace{.2cm}i=1,2,\hspace{.2cm}l=0,...,N,\hspace{.2cm}\forall l$. Moreover, we define random variables $\hat{\lambda}_{l,\pm}^{xy}=\hat{\lambda}_{1,l}^{xy}\pm\hat{\lambda}_{2,l}^{xy},$ $\hspace{.2cm}\hat{\lambda}_{l,+}^{xy}\cdot\hat{\lambda}_{l,-}^{xy}=0,\hspace{.2cm}\forall l$. The random variables $\hat{\lambda}_{l,\pm}$ divide the Poincaré sphere into four equal parts. The other elements of set (\[S2\]) are given by rotating $\hat{\lambda}_{l,\pm}$ around the $\hat{z}$ axis by the value of $\delta$, $\hat{\lambda}_{l+1,\pm}^{xy}=R(\hat{z},\delta)\hat{\lambda}_{l,\pm}^{xy}$. ![The NTB protocol as an elementary resource for simulating physical systems, where the parties share two sets of independent random variables $\hat{\lambda}_{1}$ and $\hat{\lambda}_{2}$; Alice’s and Bob’s inputs are $\hat{a}$ and $\hat{b}$, respectively. The NTB protocol outputs are $\alpha=-{\rm sgn}(\hat{a}\cdot\hat{\lambda}_{1})$ and $\beta={\rm sgn}[\hat{b}\cdot(\hat{\lambda}_{1}+c\hat{\lambda}_{2})]$, where $c={\rm sgn}(\hat{a}\cdot\hat{\lambda}_{1}){\rm sgn}(\hat{a}\cdot\hat{\lambda}_{2})$.[]{data-label="2"}](NTB1.eps){height="4cm" width="8cm"} ![(Color online). (a) Subsets of shared random variables lie in the $xy$ plane. The blue zone defines a plane with unit vectors $\hat{\lambda}_{2,l}^{xy}$.
According to the definition of $c$, Alice’s measurement setting $\hat{a}$ must lie in the blue zone. (b) Subsets of shared random variables lie in the $xz$ plane. The red zone defines a plane with unit vectors $\hat{\lambda}_{2,p}^{xz}$, so that Alice’s measurement setting $\hat{a}$ must lie in the red zone. In fact, these thin strips sweep the surface of the unit sphere.[]{data-label="4"}](Nxy1.eps "fig:"){height="7cm" width="7.5cm"}\ ![(Color online). (a) Subsets of shared random variables lie in the $xy$ plane. The blue zone defines a plane with unit vectors $\hat{\lambda}_{2,l}^{xy}$. According to the definition of $c$, Alice’s measurement setting $\hat{a}$ must lie in the blue zone. (b) Subsets of shared random variables lie in the $xz$ plane. The red zone defines a plane with unit vectors $\hat{\lambda}_{2,p}^{xz}$, so that Alice’s measurement setting $\hat{a}$ must lie in the red zone. In fact, these thin strips sweep the surface of the unit sphere.[]{data-label="4"}](Nxz1.eps "fig:"){height="7.2cm" width="7.5cm"} *Remark 7.*– Concerning the definition of $c_{l}$ and the selected random variables in (\[Non3\])-(9), we know that if $\hat{b}$ lies in the $(\hat{\lambda}^{xy}_{l,+},\hat{\lambda}^{xy}_{l,-})$ or $(-\hat{\lambda}^{xy}_{l,+},-\hat{\lambda}^{xy}_{l,-})$ intervals, Bob cannot deduce the nonlocal effect $c_{l}$, but for the other ranges, he can exactly attain the value of $c_{l}$.
The collection (\[S2\]) assures Bob that if $\hat{b}$ lies in one of the intervals (8) or (9), his corresponding outputs will satisfy the following sequences: $$\begin{aligned} \label{beta} &&...,\beta_{l-1}(c_{l-1},\hat{\lambda}_{1,l-1}^{xy},\hat{\lambda}_{2,l-1}^{xy})=+1, \beta_{l}(c_{l},\hat{\lambda}_{1,l}^{xy},\hat{\lambda}_{2,l}^{xy})=+1,\nonumber\\ &&\beta_{l+1}(c_{l+1},\hat{\lambda}_{1,l+1}^{xy},\hat{\lambda}_{2,l+1}^{xy})=-1, \beta_{l+2}(c_{l+2},\hat{\lambda}_{1,l+2}^{xy},\hat{\lambda}_{2,l+2}^{xy}) =-1,....\hspace{.4cm}\end{aligned}$$ Similar to the previous case, we assume here that in the $(l+1)$-th round of the protocol the sign of Bob’s outputs has changed, and similar to *Remark 3*, Bob deduces the sequence of nonlocal effects $c_{k}$ as follows: $$\begin{aligned} \label{Sec1} ..., c^{xy}_{(l-1)}=+1, c^{xy}_{l}=+1, c^{xy}_{(l+1)}=-1, c^{xy}_{(l+2)}=-1,....\nonumber\\\end{aligned}$$ Therefore, Bob infers that $\hat{a}$ lies in the plane with unit vector $\hat{\lambda}^{xy}_{2,l}$, with the uncertainty factor $\delta$ \[Fig. \[4\](a)\] [@Note2]. In fact, $\hat{a}$ is located in the non-overlapping parts of the two hemispheres defined by $\hat{\lambda}^{xy}_{2,l}$ and $\hat{\lambda}^{xy}_{2,l+1}$ \[Fig. \[4\](a)\]. In the next step of the protocol, the parties select another subset of SRV in the $xz$ plane as: $$\begin{aligned} \label{S11} \left\{\left(\hat{\lambda}^{xz}_{1,p},\hat{\lambda}_{2,p}^{xz}\right)\right\},\end{aligned}$$ where $\hat{\lambda}_{1,p}^{xz}\cdot\hat{\lambda}_{2,p}^{xz}=0$, $\hspace{.2cm}\hat{\lambda}_{2,p}^{xz}=R(\hat{y},\pi/2)\hat{\lambda}_{1,p}^{xz}$, $\hspace{.2cm}\hat{\lambda}_{j,p+1}^{xz}=R(\hat{y},\delta)\hat{\lambda}_{j,p}^{xz}$ $\hspace{.2cm}\forall p$, and $\hspace{.2cm}\delta=\frac{\pi}{N}\ll1,\hspace{.2cm}j=1,2,\hspace{.2cm}p=0,...,N$.
Moreover, we define random variables $\hat{\lambda}_{p,\pm}^{xz}=\hat{\lambda}_{1,p}^{xz}\pm\hat{\lambda}_{2,p}^{xz},$ $\hspace{.2cm}\hat{\lambda}_{p,+}^{xz}\cdot\hat{\lambda}_{p,-}^{xz}=0,\hspace{.2cm}\forall p$. The random variables $\hat{\lambda}_{p,\pm}^{xz}$ divide the Poincaré sphere into four equal parts. The other elements of set (\[S11\]) are given by rotating $\hat{\lambda}_{p,\pm}$ around the $\hat{y}$ axis by the value of $\delta$, $\hat{\lambda}_{p+1,\pm}^{xz}=R(\hat{y},\delta)\hat{\lambda}_{p,\pm}^{xz}$. Bob’s outputs are similar to (\[beta\]), with $\beta_{l}(c_{l},\hat{\lambda}_{1,l}^{xy},\hat{\lambda}_{2,l}^{xy}) \rightarrow\beta_{p}(c_{p},\hat{\lambda}_{1,p}^{xz},\hat{\lambda}_{2,p}^{xz})$ and $c^{xy}_{l}\rightarrow c^{xz}_{p}$. With this sequence, Bob infers that $\hat{a}$ lies in the plane with unit vector $\hat{\lambda}^{xz}_{2,p}$, with the uncertainty factor $\delta$ \[Fig. \[4\](b)\]. The subsets (\[S2\]) and (\[S11\]) define two strips as Figs. \[4\](a) and \[4\](b) show. These two strips cross each other at two points \[similar to Fig. \[1\](c)\]. Alice’s measurement setting $\hat{a}$ is in the same (or in the opposite) direction as the unit vector that connects the origin of the Poincaré sphere to the cross points \[Fig. \[1\](c)\]. Similar to what happens in the previous case, if Alice informs Bob of only one of her outputs, he exactly deduces the $\hat{a}$ direction without any need for further information. Here, we discuss two interesting cases in Alice’s measurement settings.
*Remark 8.*– If the angle between the measurement settings of the parties in the $xy$ plane is equal to $\mid \phi_{\hat{a}}-\phi_{\hat{b}} \mid=\pi/4$ or $3\pi/4$, Bob’s measurement outputs are given as follows:\ $$\begin{aligned} \label{Non14} &&\text{If} \left\{ \begin{array}{ll} \beta_{k}=+1\hspace{.1cm}\text{for}\hspace{.1cm} \hat{b}\in(-\hat{\lambda}^{xy}_{k,+},+\hat{\lambda}^{xy}_{k,-}),\text{and} \\ \beta_{k}=-1 \hspace{.1cm}\text{for}\hspace{.1cm} \hat{b}\in(+\hat{\lambda}^{xy}_{k,+},-\hat{\lambda}^{xy}_{k,-}),\hspace{.5cm}\forall k \end{array} \right\} \hspace{.2cm}\text{then, Bob will deduce} \hspace{.1cm}\mid\phi_{\hat{a}}-\phi_{\hat{b}}\mid=\pi/4\hspace{.1cm} \text{or}\hspace{.1cm} 3\pi/4,\nonumber\\ &&\hspace{9cm}\Rightarrow \hat{a} \hspace{.1cm}\text{lies in the plane with unit vector}\hspace{.2cm}\hat{n}=+ R_{\hat{\phi}}(\hat{z},\pi/4)\hat{b},\nonumber\end{aligned}$$ where Alice’s measurement setting ($\hat{a}$) lies in the plane with unit vector $\hat{n}$, obtained by rotating $\hat{b}$ around the $\hat{z}$ axis (in the $+\hat{\phi}$ direction) by $+\pi/4$. $$\begin{aligned} \label{Non14} &&\text{If} \left\{ \begin{array}{ll} \beta_{k}=-1\hspace{.1cm}\text{for}\hspace{.1cm} \hat{b}\in(-\hat{\lambda}^{xy}_{k,+},+\hat{\lambda}^{xy}_{k,-}), \text{and}\\ \beta_{k}=+1 \hspace{.1cm}\text{for}\hspace{.1cm} \hat{b}\in(+\hat{\lambda}^{xy}_{k,+},-\hat{\lambda}^{xy}_{k,-}),\hspace{.5cm}\forall k \end{array} \right\}\hspace{.2cm}\text{then, Bob will deduce}\hspace{.1cm}\mid\phi_{\hat{a}}-\phi_{\hat{b}}\mid=\pi/4\hspace{.1cm} \text{or}\hspace{.1cm} 3\pi/4,\\ &&\hspace{9cm}\Rightarrow \hat{a} \hspace{.1cm}\text{lies in the plane with unit vector}\hspace{.2cm}\hat{n}=+ R_{\hat{\phi}}(\hat{z},-\pi/4)\hat{b},\hspace{.3cm}\nonumber\end{aligned}$$ where Alice’s measurement setting ($\hat{a}$) lies in the plane with unit vector $\hat{n}$, obtained by rotating $\hat{b}$ around the $\hat{z}$ axis (in the $+\hat{\phi}$ direction) by $-\pi/4$.
Similar to the above approach, if the angle between the measurement settings of the parties in the $xz$ plane is equal to $\pi/4$ or $3\pi/4$, $\hat{a}$ lies in a plane with the unit vector $\hat{n'}$, and Bob obtains the direction $\hat{n'}$ by rotating $\hat{b}$ around the $\hat{y}$ axis by $\pm\pi/4$. Summary and outlook ==================== In this paper, we reviewed the TB and Svozil protocols and showed that if the parties select two subsets of SRV, Bob can deduce all of Alice’s measurement outputs. $$\begin{aligned} %\left\{ \begin{array}{ll} \text{\textbf{The original TB protocol} (SRV + 1 cbit for each round)} \\ $\hspace{1cm}$\text{ + \textbf{Alice informs Bob of one of her outputs} } \end{array} %\right\} \longrightarrow \begin{array}{ll} \text{\textbf{Bob deduces Alice's measurement} } \\ $\hspace{1cm}$\text{\textbf{outputs with uncertainty}}\hspace{.1cm} \delta\nonumber \end{array}\end{aligned}$$ Afterwards, we suggested a nonlocal version of the TB and Svozil protocols by replacing classical communication with nonlocal effects and obtained the same results as in the previous part. $$\begin{aligned} %\left\{ \begin{array}{ll} \text{\textbf{The nonlocal version of TB protocol} (SRV + Alice's measurement } \\ $\hspace{3cm}$\text{causes a nonlocal effect on Bob's output)} \\ $\hspace{1.3cm}$\text{ + \textbf{Alice informs Bob of one of her outputs} } \end{array} %\right\} \longrightarrow \begin{array}{ll} \text{\textbf{Bob deduces Alice's measurement} } \\ $\hspace{1cm}$\text{\textbf{outputs with uncertainty}}\hspace{.1cm} \delta\nonumber \end{array}\end{aligned}$$ Here, a question arises: *Are the TB and NTB protocols causal?* In the NLB-box [@PR1], if Alice’s (Bob’s) input is $x=0$ ($y=0$), she (he) can distinguish the other one’s output exactly. Otherwise, she (he) does not have any information about his (her) outputs. It is usually argued that the NLB-box is causal. We know that the NTB protocol cannot be used for superluminal signaling in any single round of the protocol.
Only after $l+p$ rounds of the protocol does Bob conclude that $\hat{a}$ is orthogonal to the $\hat{\lambda}^{xy}_{2,l}$ and $\hat{\lambda}^{xz}_{2,p}$ directions. He must wait for Alice’s message; only at this stage can he know Alice’s measurement setting exactly. We know that the NLB represents undirected resources, but the NTB represents directed ones that can be shared between two parties [@BP]. Hence, in the NLB approach, Bob does not have complete information about Alice’s inputs. Yet, in our description, Bob gets complete information about each of Alice’s results. As we know, Cerf *et al.* [@Cerf] suggested a kind of NLB-box based on the TB protocol which perfectly simulates a maximally entangled (singlet) state by using one instance of the NLB-box machine and no communication at all. The NTB protocol can be used for discussing the communication complexity problem. In this approach, Alice and Bob share an NTB machine as well as shared random variables in the form of pairs of normalized vectors $\hat{\lambda}_{1}$ and $\hat{\lambda}_{2}$, randomly and independently distributed over the Poincaré sphere. The $n$-tuples of inputs are denoted $\hat{a}\equiv(x_{1},x_{2},...,x_{n})$ and $\hat{b}\equiv(y_{1},y_{2},...,y_{n})$, the vectors that determine Alice’s and Bob’s measurements, respectively (where $x_{i},y_{j}\in\{0,1\}$ and $i,j=0,1,...,n$). Following our approach, we proved that the availability of the perfect NTB protocol makes the communication complexity of all Boolean functions trivial. Therefore, TB’s claim that Bob obtains “no information" about Alice’s outputs from the classical communication is not correct. It seems that the TB protocol used some unacceptable concepts in its approach, and consequently the question of “what minimum classical resources are required to simulate quantum correlations?" is still open [@Comm]. Moreover, in the TB and NTB protocols, the parties have unrestricted control of the SRV.
Therefore, another interesting model would be one in which the parties have only partial information (or no information at all) about the SRV; such a model would shed light on the notion of quantum entanglement. $\vspace{.1cm}$ **Acknowledgments:** We thank E. Azadeghan for discussions and reading our manuscript. A. Einstein, B. Podolsky, and N. Rosen, Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. **47**, 777-780 (1935). J. S. Bell, On the Einstein, Podolsky, Rosen paradox, Physics (Long Island City, N.Y.) **1**, 195 (1964). J. S. Bell, *Speakable and Unspeakable in Quantum Mechanics* (Cambridge Univ. Press, Cambridge, U.K., 1993). J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Proposed experiment to test local hidden-variable theories, Phys. Rev. Lett. **23**, 880 (1969). J. F. Clauser and M. A. Horne, Experimental consequences of objective local theories, Phys. Rev. D **10**, 526 (1974). A. Aspect, P. Grangier, and G. Roger, Experimental realization of Einstein-Podolsky-Rosen-Bohm gedankenexperiment: A new violation of Bell’s inequalities, Phys. Rev. Lett. **49**, 91-94 (1982); W. Tittel, J. Brendel, H. Zbinden, and N. Gisin, Violation of Bell inequalities by photons more than 10 km apart, Phys. Rev. Lett. **81**, 3563-3566 (1998); J. Pan, *et al.*, Experimental test of quantum nonlocality in three-photon Greenberger-Horne-Zeilinger entanglement, Nature **403**, 515 (2000); M. A. Rowe, *et al.*, Experimental violation of a Bell’s inequality with efficient detection, Nature **409**, 791 (2001); C. A. Sackett, *et al.*, Experimental entanglement of four particles, Nature **404**, 256 (2000). T. Maudlin, in *PSA 1992, Volume 1*, edited by D. Hull, M. Forbes, and K. Okruhlik (Philosophy of Science Association, East Lansing, 1992), pp. 404–417. G. Brassard, R. Cleve, and A. Tapp, Cost of exactly simulating quantum entanglement with classical communication, Phys. Rev. Lett. **83**, 1874 (1999); G.
Brassard, Quantum communication complexity, Found. Phys. **33**, 1593 (2003) (quant-ph/0101005). M. Steiner, Towards quantifying nonlocal information transfer: finite-bit nonlocality, Phys. Lett. A **270**, 239 (2000). B. Gisin and N. Gisin, A local hidden variable model of quantum correlation exploiting the detection loophole, Phys. Lett. A **260**, 323 (1999). N. J. Cerf, N. Gisin, and S. Massar, Classical teleportation of a quantum bit, Phys. Rev. Lett. **84**, 2521 (2000). B. F. Toner and D. Bacon, Communication cost of simulating Bell correlations, Phys. Rev. Lett. **91**, 187904 (2003). K. Svozil, Communication cost of breaking the Bell barrier, Phys. Rev. A **72**, 050302(R) (2005); Erratum: Communication cost of breaking the Bell barrier, *ibid.* **75**, 069902(E) (2007). T. E. Tessier, C. M. Caves, I. H. Deutsch, B. Eastin, and D. Bacon, Optimal classical-communication-assisted local model of $n$-qubit Greenberger-Horne-Zeilinger correlations, Phys. Rev. A **72**, 032305 (2005). J. Barrett, C. M. Caves, B. Eastin, M. B. Elliott, and S. Pironio, Modeling Pauli measurements on graph states with nearest-neighbor classical communication, Phys. Rev. A **75**, 012103 (2007). J. Degorre, S. Laplante, and J. Roland, Classical simulation of traceless binary observables on any bipartite quantum state, Phys. Rev. A **75**, 012309 (2007); Simulating quantum correlations as a distributed sampling problem, *ibid.* **72**, 062314 (2005). S. Gröblacher *et al.*, An experimental test of nonlocal realism (Supplementary information part I), Nature **446**, 871 (2007). S. Popescu and D. Rohrlich, Quantum nonlocality as an axiom, Found. Phys. **24**, 379 (1994). N. S. Jones and Ll. Masanes, Interconversion of nonlocal correlations, Phys. Rev. A **72**, 052312 (2005). G. Brassard, H. Buhrman, N. Linden, A. A. Méthot, A. Tapp, and F. Unger, Limit on nonlocality in any world in which communication complexity is not trivial, Phys. Rev. Lett. **96**, 250401 (2006). W.
van Dam, PhD thesis, Univ. Oxford (2000); quant-ph/0501159. M. Pawlowski *et al.*, Information causality as a physical principle, Nature **461**, 1101 (2009). N. Brunner, N. Gisin, and V. Scarani, Entanglement and nonlocality are different resources, New J. Phys. **7**, 88 (2005). N. J. Cerf, N. Gisin, S. Massar, and S. Popescu, Simulating maximal quantum entanglement without communication, Phys. Rev. Lett. **94**, 220403 (2005). If the classical communicated bits are given with the reverse sign, $...,c_{(l-1)}=-1,c_{l}=-1,c_{(l+1)}=+1,c_{(l+2)}=+1,...$, then either Alice’s measurement setting $\hat{a}$ will be orthogonal to $\hat{\lambda}_{l+1}$ or $\hat{a}$ will be in the same (or in the opposite) direction of the unit vector $\hat{\Delta}_{l+1}$ (with uncertainty factor $\delta$). If the nonlocal effects $c_{k}$ are given with the reverse sign, $..., c^{xy}_{(l-1)}=-1, c^{xy}_{l}=-1, c^{xy}_{(l+1)}=+1, c^{xy}_{(l+2)}=+1,...$, Bob will deduce that $\hat{a}$ is orthogonal to the $\hat{\lambda}^{xy}_{1,l+1}$ direction (with uncertainty factor $\delta$). J. Barrett and S. Pironio, Popescu-Rohrlich correlations as a unit of nonlocality, Phys. Rev. Lett. **95**, 140401 (2005). In another work, we try to clarify this point.
**Wilson Fermions at finite temperature** Michael Creutz Physics Department Brookhaven National Laboratory, Upton, NY 11973 creutz@bnl.gov Abstract I conjecture on the phase structure expected for lattice gauge theory with two flavors of Wilson fermions, concentrating on large values of the hopping parameter. Numerous phases are expected, including the conventional confinement and deconfinement phases, as well as an Aoki phase with spontaneous breaking of flavor and parity and a large hopping phase corresponding to negative quark masses. In this talk I conjecture on the rather rich phase structure expected for lattice gauge theory with Wilson fermions, paying particular attention to what happens for large hopping parameter. I consider both zero and non-zero temperature. I restrict myself to the standard hadronic gauge theory of quarks interacting with non-Abelian gluons. I leave aside issues related to electromagnetism and weak interactions, both of which also raise fascinating issues for lattice field theory. The parameters of the strong interactions are the quark masses. I implicitly include here the strong CP violating parameter $\theta$, as this can generally be rotated into the mass matrix \[1\]. The quark masses are in fact the only parameters of hadronic physics, the strong coupling being absorbed into the units of measurement via the phenomenon of dimensional transmutation \[2\]. For the purposes of this talk, I take degenerate quarks at $\theta=0$; so, I can consider only a single mass parameter $m$. I discuss only the two flavor case, as this will make some of the chiral symmetry issues simpler. I also will treat the theory at finite temperature, $T$, introducing another variable. Finally, as this is a lattice talk, I introduce the lattice spacing $a$ as a third parameter.
On the lattice with Wilson fermions the three parameters $(m,T,a)$ are usually replaced with $\beta$, representing the inverse bare lattice coupling squared, the fermion hopping parameter $K$, and the number of time slices $N_t$. The mapping between $(m,T,a)$ and $(\beta,K,N_t)$ is non-linear, well known, and not the subject of this talk. Note that in considering the structure of the theory in either of these sets of variables, I am inherently talking about finite lattice spacing $a$. Thus this entire talk is about lattice artifacts. I start with the $(\beta,K)$ plane at zero temperature, and defer how this is modified at finite temperature. The $\beta$ axis with $K=0$ represents the pure gauge theory of glueballs. This is expected to be confining without any singularities at finite $\beta$. The line of varying $K$ with $\beta=\infty$ represents free Wilson fermions \[3\]. Here, with conventional normalizations, the point $K={1\over 8}$ is where the mass gap vanishes and a massless fermion should appear in the continuum limit. The full interacting continuum limit should be obtained by approaching this point from the interior of the $(\beta,K)$ plane. While receiving the most attention, this point $K={1\over 8}$ is not the only place where free Wilson fermions lose their mass gap. At $K={1\over 4}$ four doubler species become massless. Also formally at $K=\infty$ six more doublers lose their mass. (Actually, a more natural variable is ${1\over K}$.) The remaining doublers occur at negative $K$. The $K$ axis at vanishing $\beta$ also has a critical point where the confining spectrum appears to develop massless states. Strong coupling arguments as well as numerical experiments place this point somewhere near $K={1\over 4}$, but this is probably not exact. The conventional picture connects this point to $(\beta=\infty, K={1\over 8})$ by a phase transition line representing the lattice version of the chiral limit.
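The pattern of massless points quoted above follows from the free Wilson dispersion: with the conventional normalizations assumed here (lattice units, Wilson parameter $r=1$), the bare mass is $m_0 = 1/(2K) - 4$, and a doubler with $n$ momentum components at $\pi$ has gap $m_0 + 2n$, which vanishes at $K = 1/(2(4-2n))$. A few lines of code tabulate where each class becomes massless:

```python
from math import comb, inf

# Free Wilson fermions (lattice units, Wilson parameter r = 1 assumed):
# bare mass m0 = 1/(2K) - 4; a doubler with n momentum components at pi
# has gap m0 + 2n, vanishing at K = 1/(2(4 - 2n)).  Multiplicity C(4, n).
for n in range(5):
    K = inf if n == 2 else 1 / (2 * (4 - 2 * n))
    print(f"{comb(4, n)} species massless at K = {K}")
```

The output reproduces the counting in the text: one species at $K=1/8$, four at $K=1/4$, six formally at $K=\infty$, and the remaining $4+1$ at negative $K=-1/4$ and $-1/8$.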
Now I move ever so slightly inside the $(\beta,K)$ plane from the point $(\infty, {1\over 8})$. This should take us from free quarks to a confining theory, with mesons, baryons, and glueballs being the physical states. Furthermore, when the quark is massless, we should have chiral symmetry. Considering here the two flavor case, this symmetry is nicely exemplified in a so-called “sigma” model, with three pion fields and one sigma field rotating amongst themselves. Defining the fields $$\eqalign{ &\sigma=\overline\psi\psi\cr &\vec\pi=i\overline\psi\gamma_5\vec\tau\psi\cr } \eqno (1)$$ I consider constructing an effective potential. For massless quarks this is expected to have the canonical sombrero shape stereotyped by $$V\sim\lambda(\sigma^2+\vec\pi^2-v^2)^2 \eqno (2)$$ and illustrated schematically in Fig. (1). The normal æther is taken with an expectation value for the sigma field $\langle\sigma\rangle\sim v$. The physical pions are massless Goldstone bosons associated with slow fluctuations of the æther along the degenerate minima of this potential. As I move up and down in $K$ from the massless case near ${1\over 8}$, this effective potential will tilt in the standard way, with the sign of $\langle\sigma\rangle$ being appropriately determined. The role of the quark mass is played by the distance from the critical hopping, $m_q\sim K_c-K$ with $K_c\sim {1\over 8}$. At the chiral point there occurs a phase transition, of first order because the sign of $\langle\sigma\rangle$ jumps discontinuously. At the transition point there are massless Goldstone pions representing the spontaneous symmetry breaking. With an even number of flavors the basic physics on each side of the transition is the same, since the sign of the mass term is a convention reversible via a chiral rotation. For an odd number of flavors the sign of the mass is significant because the required rotation involves the $U(1)$ anomaly and is not a good symmetry.
This is discussed in some detail in my recent paper, Ref. \[1\]. For the present discussion I stick with two flavors. A similar picture should also occur near $K={1\over 4}$, representing the point where a subset of the fermion doublers become massless. Thus another phase transition should enter the diagram at $K={1\over 4}$. Similar lines will enter at negative $K$ and further complexity occurs at $K=\infty$. For simplicity, let me concentrate only on the lines from $K={1\over 8}$ and ${1\over 4}$. Now I delve a bit deeper into the $(\beta,K)$ plane. The next observation is that the Wilson term separating the doublers is explicitly not chiral invariant. This should damage the beautiful symmetry of our sombrero. The first effect expected is a general tilting of the potential. This represents an additive renormalization of the fermion mass, and appears as a $\beta$-dependent motion of the critical hopping away from ${1\over 8}$. Define $K_c(\beta)$ as the first singular place in the phase diagram for increasing $K$ at given $\beta$. This gives a curve which presumably starts near $K={1\over 4}$ at $\beta=0$ and ends up at ${1\over 8}$ for infinite $\beta$. Up to this point I have only reviewed standard lore. Now I continue to delve yet further away from the continuum chiral point at $(\beta,K)=(\infty,{1\over 8})$. Then I expect the chiral symmetry breaking of the Wilson term to increase and become more than a simple tilting of the Mexican hat. I’m not sure to what extent a multipole analysis of this breaking makes sense, but let me presume that the next effect is a quadratic warping of our sombrero, i.e. a term something like $\alpha \sigma^2$ appearing in the effective sigma model potential. This warping cannot be removed by a simple mass renormalization. There are two possibilities. This warping could be upward or downward in the $\sigma$ direction. Indeed, which possibility occurs can depend on the value of $\beta$.
Consider first the case where the warping is downward, stabilizing the sigma direction for the æther. At the first order chiral transition, this distortion gives the pions a small mass. The transition then occurs without a diverging correlation length. As before, the condensate $\langle \sigma \rangle$ jumps discontinuously, changing its sign. However, the conventional approach of extrapolating the pion mass to zero from measurements at smaller hopping parameter will no longer yield the correct critical line. The effect of this warping on the potential is illustrated in Fig. (2). A second possibility is for the warping to be in the opposite direction, destabilizing the $\sigma$ direction. In this case we expect two distinct phase transitions to occur as $K$ passes through the critical region. For small hopping we have our tilted potential with $\sigma$ having a positive expectation. As $K$ increases, this tilting will eventually be insufficient to overcome the destabilizing influence of the warping. At a critical point, most likely second order, it will become energetically favorable for the pion field to acquire an expectation value, such a case being stabilized by the upward warping in the sigma direction. As $K$ continues to increase, a second transition should appear where the tilting of the potential is again sufficiently strong to give only sigma an expectation, but now in the negative direction. The effect of this upward warping on the effective potential is illustrated in Fig. (3). Thus we expect our critical line to split into two, with a rather interesting phase between them. This phase has a non-vanishing expectation value for the pion field. As the latter carries flavor and odd parity, both are spontaneously broken. Furthermore, since flavor is still an exact continuous global symmetry, when it is broken, Goldstone bosons will appear. In this two flavor case, there are precisely two such massless excitations.
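The destabilizing scenario can be made concrete with a schematic potential $V = \lambda(\sigma^2+\pi^2-v^2)^2 + \alpha\sigma^2 - m\sigma$, combining the sombrero of Eq. (2), the quadratic warping $\alpha\sigma^2$, and a linear tilt $-m\sigma$ standing in for the quark mass. The parameter values below are purely illustrative, not derived from any lattice calculation; a crude grid minimization shows the flavor-parity broken vacuum at small tilt and the ordinary $\sigma$ vacuum at large tilt:

```python
import numpy as np

# Schematic warped sigma-model potential:
#   V = lam*(sigma^2 + pi^2 - v^2)^2 + alpha*sigma^2 - m*sigma
# with -m*sigma the standard tilting and alpha > 0 the destabilizing warping.
# All parameter values are illustrative only.
lam, v, alpha = 1.0, 1.0, 0.2

s = np.linspace(-2, 2, 801)          # sigma direction
p = np.linspace(0, 2, 401)           # |pi| direction (pi -> -pi symmetry)
S, P = np.meshgrid(s, p, indexing="ij")

def minimum(m):
    """Grid-search minimum of the potential for tilt m."""
    V = lam * (S**2 + P**2 - v**2)**2 + alpha * S**2 - m * S
    i, j = np.unravel_index(np.argmin(V), V.shape)
    return s[i], p[j]

print(minimum(0.0))   # flavor-parity broken vacuum: sigma ~ 0, |pi| ~ v
print(minimum(1.0))   # strong tilt: sigma > 0, pi ~ 0
```

Sweeping `m` between these extremes locates the two transitions where the pion condensate turns on and off.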
If the transitions are indeed second order, a third massless particle appears just at the transition lines, and these three particles are the remnants of the three pions from the continuum theory. This picture of a parity and flavor breaking phase was proposed some time ago by Aoki \[4\], who presented evidence for its existence in the strong coupling regime. This phase should be “pinched” between the two transitions, and become of less importance as $\beta$ increases. Whether the phase might be squeezed out at a finite $\beta$, reducing to the above first order case, or whether it only disappears in the infinite $\beta$ limit is a dynamical question as yet unresolved. A similar critical line splitting to give a broken flavor phase should also enter our phase diagram from $(\beta,K)=(\infty,{1\over 4})$, representing the first set of doublers. Evidence from toy models \[5\] is that after this line splits, the lower half joins up with the upper curve from the $(\beta,K)=(\infty,{1\over 8})$ point. In these models, there appears to be only one broken parity phase at strong coupling. Now let me go to finite temperature, or more precisely, finite $N_t$, the number of sites in the temporal direction. Along the $\beta$ axis, representing the pure glue theory, a deconfinement transition is expected \[6\]. For an $SU(3)$ gauge group, this transition is expected to be first order. Turning on the fermion hopping, this transition should begin to move in $\beta$, the first effect being an effective renormalization of $\beta$ down toward stronger couplings. In the process, the transition may soften, and perhaps eventually turn into a rapid crossover rather than a true singularity. In any case, the numerical evidence is for a single transition where both the Polyakov line and the chiral symmetry order parameter undergo a rapid change.
The transition region should continue into the $(\beta,K)$ plane to eventually meet the bulk transition line near $K_c(\beta)$ coming in from strong coupling. On the weak coupling side of the deconfinement transition, physics is dramatically different. Here as the quark mass goes to zero, we expect chiral symmetry restoration in the thermal æther. In terms of the effective potential, we expect only a single simple minimum. Most importantly, we do not expect any singularity around zero quark mass, with physics depending smoothly on the parameters around the $(\beta,K)=(\infty,{1\over 8})$ point. In other words, we expect the chiral transition at small quark masses to be absorbed into the finite temperature transition. As the hopping continues to increase, the $m\leftrightarrow -m $ symmetry of the continuum theory will play a role, bouncing the deconfinement transition back towards larger $\beta$ after $K$ passes $K_c$. What is less clear is what happens to the finite temperature line as we continue further toward the chiral transitions of the doublers. Here I conjecture that another transition line enters the picture. For small $N_t$ the theory is effectively a three dimensional one, which should have its own chiral transition, possibly somewhere between $K={1\over 8}$ and $K={1\over 4}$. Speculating that the deconfinement transition bounces as well off of this line, but on the opposite side, I arrive at the qualitative finite temperature phase diagram sketched in Fig. (4). To summarize the picture, at small $\beta$ and small $K$ we have the usual low temperature confined phase. Increasing $K$, we enter the Aoki phase with spontaneous breaking of flavor and parity. As $\beta$ increases, the Aoki phase pinches down into either a narrow point or a single first order line, leading towards the free fermion point at $(\beta,K)=(\infty,{1\over 8})$. Before reaching that point, this line collides with and is absorbed in the deconfinement transition line.
The latter then bounces back towards larger $\beta$. Above the chiral line is a phase physically nearly equivalent to the usual confined phase, just differing in the sign of the light quark masses. Indeed, the only physical difference is via the lattice artifacts of the doublers. Finally, and most speculatively, there may be a three dimensional chiral line coming in from large $\beta$ which reflects the deconfinement transition back to meet the doubler chiral line heading towards $(\beta,K)=(\infty,{1\over 4})$. This diagram is wonderfully complex, probably incomplete, and may take some time to map out. Given the results presented by Ukawa at this meeting \[7\], it appears that we may as yet be at too small a value of $N_t$ for the negative mass confined phase to have appeared. As a final reminder, this entire discussion is of lattice artifacts, and other lattice actions, perhaps including various “improvements,” will look dramatically different.

[**ACKNOWLEDGMENT**]{}

This manuscript has been authored under contract number DE-AC02-76CH00016 with the U.S. Department of Energy. Accordingly, the U.S. Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes.

REFERENCES

1. For a recent discussion of this old topic, see M. Creutz, Phys. Rev. D52, 2951 (1995).

2. S. Coleman and E. Weinberg, Phys. Rev. D7, 1888 (1973).

3. K. Wilson, in [*New Phenomena in Subnuclear Physics*]{}, edited by A. Zichichi (Plenum Press, NY, 1977), p. 24.

4. S. Aoki, Nucl. Phys. B314, 79 (1989); S. Aoki and A. Gocksch, Phys. Rev. D45, 3845 (1992).

5. S. Aoki, S. Boetcher, and A. Gocksch, Phys. Lett. B331, 157 (1994); K. Bitar and P. Vranas, Phys. Rev. D50, 3406 (1994); Nucl. Phys. B, Proc. Suppl. 34, 661 (1994).

6. For a review see R. Gavai, in [*Quantum Fields on the Computer*]{}, M. Creutz, ed., p. 51 (World Scientific, 1992).

7. S. Aoki, A. Ukawa, and T. Umemura, Phys. Rev. Lett. 76, 873 (1996).
--- abstract: 'Binaries with circumbinary disks are commonly found among optically bright post-AGB stars. Although clearly linked to binary interaction processes, the formation, evolution and fate of these disks are still poorly understood. Due to their compactness, interferometric techniques are required to resolve them. Here, we discuss our high-quality multiwavelength interferometric data of two prototypical yet very different post-AGB binaries, AC and 89 Herculis, as well as the modeling thereof with radiative transfer models. A detailed account of the data and models of both objects is published in three separate papers elsewhere; here we focus on comparing the modeling results for the two objects. In particular we discuss the successes and limitations of the models which were developed for protoplanetary disks around young stars. We conclude that multiwavelength high-angular-resolution observations and radiative transfer disk models are indispensable for understanding these complex interacting objects and their place in the grand scheme of the (binary) evolution of low and intermediate mass stars.' author: - 'M. Hillen$^1$, J. Menu$^1$, B. de Vries$^2$, H. Van Winckel$^1$, M. Min$^3$, G.D. Mulders$^4$, C. Gielen$^5$, T. Wevers$^6$, S. Regibo$^1$, and T. Verhoelst$^5$' bibliography: - 'Hillen1.bib' title: 'A Tale of Two Stars: Interferometric Studies of Post-AGB Binaries' ---

Introduction
============

Based on recent surveys of the optically-bright post-AGB population in the Magellanic Clouds (MCs) [@2011AAvanAarle; @2014MNRASKamath see also D. Kamath’s contribution in these proceedings], the formation of disks around post-AGB binaries seems to be a common process. Indeed, in analogy with the Galactic post-AGB stars with confirmed disks, about 40% of the optically bright post-AGB stars in the MCs have similar observational characteristics (i.e. a comparable IR excess and photospheric depletion pattern).
Binary interaction is clearly a key ingredient in the formation of these disks, since in the Galaxy such stable structures are only found around post-AGB stars in binary systems of typically 1 AU in separation ($P_{\rm{orb}}\sim100-3000$ d). The advantage of studying post-AGB stars in the MCs is that their distances, and hence luminosities, are well constrained, which is not the case for a typical Galactic source. On the other hand, to constrain the structure and evolution of the circumstellar environment in greater detail, it is better to study the Galactic objects, which can be spatially resolved with high-angular-resolution techniques. Here we focus on the results obtained in our recent studies of this kind. The long-term goals of this research are to further binary evolution theory by

- empirically constraining uncertain binary interaction processes related to the formation of these elusive disks,

- connecting the post-AGB binaries to other objects and evolutionary channels in the “binary zoo,” in search of their progenitors and progeny,

but also to study disk evolution in itself, since these objects

- form an ideal laboratory to study dust coagulation in a semi-stable environment,

- offer a unique region of parameter space to study mechanisms that are relevant for the formation of (circumbinary) planets.

The Two Stars: the Prototypes in Hercules
=========================================

Two post-AGB systems were selected for a detailed study of the structure of their circumstellar environment, 89 Herculis [published as @2013AAHillen; @2014AAHillen] and AC Herculis (Hillen et al., in prep.). Both systems are among the brightest and closest post-AGB binaries and have long been recognised as likely disk objects. @1993AAWaters postulated 89 Her to have a disk, based on their evidence for the binary nature of the central object and the observed characteristics of the circumstellar environment (i.e.
the stability of the IR excess, the CO(1-0) line profile, etc.). Similarly, @1998AAVanWinckel concluded, based on the close resemblance of AC Her with the Red Rectangle (i.e., the dust mineralogy, CO rotational line emission, mm continuum flux, etc.), that the circumstellar dust and gas in this system must also be in the form of a circumbinary disk. Simple radiative transfer disk models have already been computed for AC Her, in combination with a mineralogical study of the mid-IR emission features, by [@2007AAGielen]. Here we compare our results for the two systems.

Our Tools
=========

Observations: Optical Interferometry Combined with the SED
----------------------------------------------------------

Extensive high-quality data sets were gathered for the two objects under study. For both systems the spectral energy distribution (SED) was constructed with a wide variety of photometric data from the literature, combined with new photometry collected with the SPIRE instrument onboard the [*Herschel*]{} satellite [@SPIRESwinyard; @2010AAPilbratt], as well as with the archival ISO spectra. For 89 Her, we collected multiwavelength interferometric data, with currently operational interferometers (the VLTI, the CHARA Array and the NPOI) and from the archives of the PTI and the IOTA, that cover the optical, near-IR and mid-IR wavelength domains. In the case of AC Her, only three visibility spectra were acquired with the MIDI instrument on the VLTI, but they are of very high quality and spatial resolution.

Radiative Transfer Disk Models
------------------------------

The main modeling tool used in our work is the MCMax radiative transfer code [@2009AAMin]. Although developed to model the effects of radiation transport through the optically thick media of protoplanetary disks on dust-related observables, MCMax can equally well be applied to the circumstellar environments of evolved stars, and in particular to the disks around post-AGB binaries.
The radiative transfer in MCMax is performed with a Monte Carlo method. The code, moreover, computes the vertical structure of the disk by solving the equation of hydrostatic equilibrium. Finally, a grain size distribution can be included in the model, in combination with size-dependent dust settling to the disk midplane (i.e. turbulence vs. gravity, included in the form of a diffusion equation). An important assumption in the modeling is the way the radial surface density distribution is parameterized, which is typically in the form of a power law (with index -1 for protoplanetary disks). Figure \[figure:diagram\] summarizes in the form of a diagram the iterative processes implemented in MCMax to arrive at a final disk structure and model predictions for observables [for more details, see @2014AAHillen; @2012AAMulders; @2009AAMin and references therein]. In particular, the inclusion of mm-sized grains that are settled to the midplane of the disk is a great improvement with respect to previous radiative transfer models applied to post-AGB disks.

Modeling Results: Comparing the Two Stars
=========================================

Extensive model grids were computed to explore the relevant parameter space. Due to the complexity of these stable, passive structures, there are many parameters involved in the structure computation, in addition to the geometric parameters like inclination and disk orientation on the sky (see Table \[table:MCMaxmodels\]). Not all parameters can be independently constrained based on the SED and interferometric data at near- to mid-IR wavelengths only. Therefore, we assumed certain values for specific parameters. These assumptions are different for the two systems, complicating a quantitative comparison between the resulting values for the fitted parameters.
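The radial surface density parameterization can be sketched schematically. The toy below uses the two-joined-power-law form adopted in the fits, with illustrative 89 Her-like numbers from Table \[table:MCMaxmodels\]; the normalization and the sign convention $\Sigma \propto R^{-p}$ are our own assumptions, with continuity imposed at $R_{\mathrm{mid}}$:

```python
def surface_density(R, R_in=3.75, R_mid=11.25, p_in=-3.0, p_out=1.5,
                    sigma_mid=1.0):
    """Two joined power laws, Sigma ~ R**(-p), continuous at R_mid.
    Numbers are 89 Her-like (R_in = 3.75 AU, R_mid/R_in = 3); the
    normalization sigma_mid and the sign convention are assumptions."""
    if R < R_in:
        return 0.0
    p = p_in if R <= R_mid else p_out
    return sigma_mid * (R / R_mid) ** (-p)
```

With these indices the density rises from the inner radius to $R_{\mathrm{mid}}$ and declines outward, producing the smoother, rounded inner rim that a single power law cannot reproduce.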
Nonetheless, several conclusions can be drawn with respect to the parameters that *are* well determined and concerning the validity of certain assumptions/parameterizations.

  Parameter                              Best 89 Her              Best AC Her
  ------------------------------------   ----------------------   ----------------------
  $M_{\mathrm{dust}}$ (M$_\odot$)        $5\,\times\,10^{-4}$     $2.5\,\times\,10^{-3}$
  gas/dust                               100 (F)                  1 or 10
  $a_{\mathrm{min}}$                     0.01 (F)                 0.01 (F)
  $a_{\mathrm{max}}$ (mm)                10.0 (F)                 1.0
  $q$                                    –3.00                    –3.25
  $R_{\mathrm{in}}$ (AU)                 3.75                     34.0
  $R_{\mathrm{out}}$                     50 (F)                   200 (F)
  $R_{\mathrm{mid}}$/$R_{\mathrm{in}}$   3.0                      2.0
  $p_{\mathrm{in}}$                      –3.0                     –3.0
  $p_{\mathrm{out}}$                     1.5                      1.0 (F)
  $\alpha$                               0.01 (F)                 0.01 (F)
  $i$ ($^\circ$)                         13 (F)                   50

  : Best-fit model parameters for the two systems; (F) marks parameters kept fixed.[]{data-label="table:MCMaxmodels"}

First, it is striking that both systems require a surface density parameterization of two joined power laws to explain the data. The interferometric data (in the near-IR for 89 Her and in the mid-IR for AC Her) require a smoother intensity distribution than can be provided with a single power-law model [see @2014AAHillen for a detailed discussion]. Second, it is apparent that in both systems the grain size distribution power-law index is larger than –3.5, the value often assumed for protoplanetary disks. For post-AGB disks the inclusion of large grains is thus clearly very important. Third, our derived dust masses are rather high. In the case of 89 Her, our dust mass is a factor of five larger than the value estimated from the measured gas mass (from CO rotational lines) by @2013AABujarrabalB combined with a standard gas/dust ratio of 100. This we judge to be within the errors of both methods, especially given the different assumed distance. In the case of AC Her, the gas/dust ratio is also a fit parameter in our models because it affects the settling of dust particles and thus the shape of the inner rim, and hence the interferometric data. Our dust mass for this system is a factor $\sim$3 larger than the total gas mass that was found by @2013AABujarrabalB.
With the indication for a gas/dust ratio smaller than 100 (the best-fit value is $\sim$1-10) from our models and our larger distance, this discrepancy is within what the respective errors allow. Nevertheless, it would be interesting to check whether the CO lines are affected by optical depth effects, to see whether our modeling might be biased by any of our assumptions or simplifications. Only by combining the various data sets can this be resolved. Finally, the foremost distinction between the two systems is their vastly different inner radii. The hottest dust in 89 Her coincides rather well with the expected dust condensation radius. In AC Her the inner rim is located much further out, almost an order of magnitude beyond the dust condensation radius, categorizing it as a “post-AGB transitional disk.” The origin of this large inner hole is yet unexplained. It is interesting that AC Her combines a rather large disk mass with a seemingly evolved inner disk. On the other hand, 89 Her has a relatively massive large-scale outflow, well-resolved in CO rotational lines, despite its inner radius coinciding with the dust condensation radius. Such an outflow is not yet detected in AC Her. Are different disk dispersal mechanisms responsible for the current state of the two systems?

Conclusions
===========

Optical interferometry is a powerful technique to trace the inner regions of dusty disks. Radiative transfer modeling techniques of optically thick media have come of age in the past decade and can now be successfully applied to the circumbinary disks around post-AGB binaries. Combining these tools allows us to constrain the elusive inner disk regions in great detail. We have shown for two systems that a large set of state-of-the-art observations can be well matched with these models, but that the resulting parameter values raise several questions concerning their evolution.
The works presented here, and indeed those published throughout the literature, have only scratched the surface of what is feasible. With the 2nd-generation instruments coming online on the VLTI in the coming years, and ALMA almost fully operational, more exciting results can be expected in the coming decade. By tracing the complex structures and matter streams in a large number of post-AGB objects, we hope to connect these peculiar systems to specific populations of stars from which they originate and into which they evolve. An exciting time lies ahead!

[**Question:**]{} You mentioned amorphous carbon and iron as continuum opacity sources in your models. How well is the abundance of amorphous carbon or iron constrained by your models? What would happen if you’d set the abundances of Fe and amorphous carbon to zero in your models?

[**Answer:**]{} The abundance of amorphous carbon or metallic iron cannot be constrained with the data and models that we have. Other parameters, like the grain size distribution power law index, can mimic effects of varying opacities/abundances.
--- abstract: 'We evaluate energy levels of the $K \pi$ system in the $K^*$ channel in finite volume using chiral unitary theory. We use these energy levels to obtain $K \pi$ phase shifts, and then obtain the $K^*$ mass and its decay width. We investigate their dependence on the pion mass and compare this with Lattice QCD calculations. We also compare our method with the standard Lüscher approach, and solve the inverse problem to obtain the $K \pi$ phase shifts from these “synthetic” lattice data.' author: - Dan Zhou - 'Er-Liang Cui' - 'Hua-Xing Chen' - 'Li-Sheng Geng' - 'Li-Hua Zhu' title: '$K\pi$ interaction in finite volume and the $K^*$ resonance' ---

Introduction
============

Lattice QCD has been developing very rapidly in recent years. One can use this method to evaluate the discrete energy levels in a finite box, and then reconstruct phase shifts of the decay products in the continuum. To do this, one usually uses the Lüscher approach [@luscher; @Luscher:1990ux], which is accurate and consistent with the decay channels of the hadrons, and is therefore widely used in lattice studies [@Bernard:2008ax; @Bernard:2010fp]. These discrete energy levels cannot be directly measured in experiments. However, in Ref. [@Doring:2011vk] the authors proposed a method to estimate them through an effective approach whose parameters are obtained by fitting the experimental data. This method has been applied in Ref. [@Doring:2011ip] to obtain finite volume results from the Jülich model for meson baryon interaction, and in Ref. [@MartinezTorres:2011pr] to study the interaction of the $DK$ and $\eta D_s$ systems where the $D_{s0}^*(2317)$ resonance is dynamically generated from the interaction of these channels [@Kolomeitsev:2003ac; @Hofmann:2003je; @Guo:2006fu; @Gamermann:2006nm]; the case of the $\kappa$ resonance in the $K \pi$ $S$-wave channel is studied in Ref.
[@Doring:2011nd]; the case of the $\Lambda_c(2595)$ resonance in the $DN$ and $\pi \Sigma_c$ channels in finite volume is studied in Ref. [@Xie:2012np]. An extension of the approach of Ref. [@Doring:2011vk] to the case of interaction of unstable particles is studied in Ref. [@Roca:2012rx]. We also use it to study the interaction of two pions in the $\rho$ channel in finite volume [@Chen:2012rp]. In the present work we shall study the $K \pi$ interaction in the $K^*$ channel in finite volume. The $K^*$ meson has been measured very well experimentally, and the chiral unitary model describes it well. Recently several Lattice groups have also studied it and evaluated the relevant discrete energy levels using the Lüscher approach [@Fu:2012tj; @Lang:2012sv; @Dudek:2014qha]. Again we note that these energy levels cannot be directly measured in experiments, so one needs to make extra efforts in order to compare these energy levels with the experimental data on the $K^*$ meson. Lattice theorists usually transform these energy levels into phase shifts, and then calculate the physical quantities of the $K^*$ meson. One can also invert this process [@Doring:2011vk], and this is what we shall study in this paper, i.e., we shall follow the approach of Ref. [@Doring:2011vk], and inversely transform the experimental data on the $K^*$ meson into “synthetic” energy levels. To do this we need to use the chiral unitary model to study the $K \pi$ interaction in the $K^*$ channel in finite volume. To make a complete analysis, we shall also use these “synthetic” data to calculate the phase shifts and then calculate the physical quantities of the $K^*$ meson. We shall refer to the results of Refs. [@Fu:2012tj; @Lang:2012sv; @Prelovsek:2013ela] for comparison throughout the paper. We shall also compare our method with the standard Lüscher approach, and solve the inverse problem and obtain the $K \pi$ phase shifts from these “synthetic” lattice data.
This paper is organized as follows. In Sec. II we study the $K \pi$ scattering in the $K^*$ region using the chiral unitary model both in infinite space and in finite volume. Then in Sec. III we use these formulae to evaluate energy levels and phase shifts. The pion mass dependence of these results is studied in Sec. IV, where we also compare them with the Lattice data. We compare our method with the standard Lüscher approach in Sec. V, and solve the inverse problem to obtain $K \pi$ phase shifts from these “synthetic” lattice data in Sec. VI. Finally we give some concluding remarks in Sec. VII.

The Chiral Unitary Approach In Infinite and Finite Box
======================================================

The $K \pi$ scattering amplitude in $P$-wave has been studied in Refs. [@Oller:1998zr; @Xiao:2013mn] by using the chiral unitary model. In this paper we shall follow the same approach and use the following Bethe-Salpeter equation in its on-shell factorized form [@Oller:1998zr; @Xiao:2013mn; @Oller:1997ti; @Oller:2001fj] (for a quantitative study of off-shell effects in this context, see, e.g., Ref. [@Altenbuchinger:2013gaa]): $$\begin{aligned} T(s) &=& {V(s) \over 1 - V(s) G(s)} \, . \label{bethesal}\end{aligned}$$ Here we only consider the $K \pi$ channel, but the $K \eta$ and $K \eta^\prime$ channels may also be important. In Ref. [@Guo:2011pa], the $K^*(892)$ is studied with the coupled channels $K \pi$, $K \eta$ and $K \eta'$. The coupling of the $K^*(892)$ to $K \pi$ is dominant, but smaller, though not negligible, couplings to $K \eta$ and $K \eta'$ are also found. The couplings by themselves do not give a measure of the relevance of a channel, because if the mass of the channel is far away from the pole, the relevance would be much smaller for the same coupling.
Furthermore, in such a case, the effect of these channels and other missing channels can be absorbed in the one-channel study by changing the subtraction constants, and the energy dependence of the potential a bit, which is explicitly done in our model. Indeed, the fit to the data with just the $K \pi$ channel is very good, as found in Ref. [@Oller:1998zr] and shown below. Moreover, the elimination of one channel in terms of an effective potential for another channel in the context of lattice QCD analyses has been shown to be a valid and useful tool in Ref. [@sasa]. The relevant $V$-matrix for the $K \pi$ scattering has been studied in Refs. [@Oller:1998zr; @Oller:2000ug; @Xiao:2013mn]: $$V(s) = -\frac{p^2}{2f^2}(1+\frac{2G_{V}^{2}}{f^2}\frac{s}{M_{K^*}^{2}-s}) \, , \label{eq:Vmatrix}$$ where $M_{K^*}$ is the bare $K^*$ mass, $f$ is the $\pi/K$ decay constant, and $G_V$ is the coupling of a vector meson to two pseudoscalar mesons. We note that this potential $V(s)$ is a bit different from the one used in Ref. [@Xiao:2013mn], where the factor $p^2$ is absorbed into their $G$-function so that $V(s)$ does not depend on the momentum. The $G$-function for the two-meson ($\pi$-$K$) propagator with masses $m_\pi$ and $m_K$ is defined as $$\begin{aligned} \label{eq:gfunction} G (p^2) &=& i \int {d^4 q \over (2 \pi)^4} {1 \over q^2 - m_\pi^2 + i \epsilon} {1 \over (p - q)^2 - m_K^2 + i \epsilon} \, ,\end{aligned}$$ where $p$ is the four-momentum of the external meson-meson system. There are many methods to regularize this loop function. In Ref. [@Xiao:2013mn] the authors use the cut-off method, but in this paper we shall use dimensional regularization, which is more convenient when studying the $K \pi$ interaction in finite volume. We note that these two methods are equivalent within a certain energy range, as proved in Ref. [@Oller:2001fj].
The dimensional regularization result is $$\begin{split} G(s)=&\frac{1}{(4\pi)^2}\{a(\mu)+\log\frac{m_\pi^2}{\mu^2}+\frac{m_K^2-m_\pi^2+s}{2s}\log\frac{m_K^2}{m_\pi^2}\\ &+\frac{Q(\sqrt{s})}{\sqrt{s}}[\log(s-(m_K^2-m_\pi^2)+2\sqrt{s}Q(\sqrt{s}))+\log(s+(m_K^2-m_\pi^2)+2\sqrt{s}Q(\sqrt{s}))\\ &-\log(-s+(m_K^2-m_\pi^2)+2\sqrt{s}Q(\sqrt{s}))-\log(-s-(m_K^2-m_\pi^2)+2\sqrt{s}Q(\sqrt{s}))]\}\, , \end{split} \label{eq:GDR}$$ where $s=p^2$, $Q(\sqrt{s})$ is the on-shell momentum of the particles, $\mu$ is a regularization scale and $a(\mu)$ is a subtraction constant. In this paper we shall work in the center-of-mass frame, where the energy of the system is $E=\sqrt{s}$. The regularization parameters are chosen to be $$\begin{aligned} a(\mu) &=& - 1.0 \, , \\ \mu &=& M_{K^*} \, .\end{aligned}$$ The two parameters $f$ and $G_V$ are taken from Ref. [@Xiao:2013mn]: $$\begin{aligned} G_V &=& 53.81~{\rm MeV} \, , \\ f &=& 86.22~{\rm MeV} \, ,\end{aligned}$$ but the parameter $M_{K^*}$ is a bit different from the one used in Ref. [@Xiao:2013mn], because we are using dimensional regularization rather than the cut-off method used in Ref. [@Xiao:2013mn]. To fix $M_{K^*}$, we use the experimental data on the $K \pi$ $P$-wave phase shifts, which are related to $T(s)$ through: $$\begin{aligned} \label{eq:delta} T(E) &=& { - 8 \pi E \over p \cot\delta(p) - i p } \, ,\end{aligned}$$ where $p$ is the center-of-mass momentum. We use the experimental data of Refs. [@Mercer:1971kn; @Estabrooks:1977xe], and evaluate $M_{K^*}$. The fitting results are shown in Fig. \[fig:fittingdata\], where $M_{K^*}$ is fitted to be: $$\begin{aligned} M_{K^*} &=& 919.03~{\rm MeV} \, .\end{aligned}$$ ![The solid curve shows $K \pi$ scattering $P$-wave phase shifts obtained using Eq. (\[bethesal\]) and Eq. (\[eq:delta\]), and the dot-dashed curve the results from Ref. [@Xiao:2013mn]. The experimental data are taken from Ref. [@Estabrooks:1977xe] and Ref.
[@Mercer:1971kn], shown using solid circles and triangles, respectively.[]{data-label="fig:fittingdata"}](fittingdata.eps) ![The real part of Eq. (\[gtilde\]). Here we choose $L = 2.5~m_\pi^{-1}$ and $E = 800$ MeV.[]{data-label="fig:difference"}](difference.eps) All the above formulae are defined in the infinite space. To study the $K^*$ meson in the finite volume, we simply change the $G$-function of dimensional regularization (Eq. (\[eq:GDR\])) by the one which is defined in the finite box of side $L$ [@Doring:2011jh; @MartinezTorres:2011pr], i.e., we simply change the integration over momenta by a sum over the discrete values of the momenta allowed by the periodic conditions in the box. We denote the latter one by $\tilde G(s,L)$, and it can be obtained through: $$\begin{aligned} \label{eq:difference} \tilde G(s,L) - G(s) &=& \lim_{q_{\rm max} \rightarrow \infty} \Big ( {1 \over L^3} \sum_{q_i}^{q_{\rm max}} I(q_i) - \int_{q<q_{\rm max}}{d^3 q \over (2\pi)^3} I(q) \Big ) \, . \label{gtilde}\end{aligned}$$ In this equation the discrete momenta in the sum are given by $\vec q = {2 \pi \over L} \vec n ~~ ( \vec n \in \mathcal{Z}^3 )$ and the function $I(q_i)$ is $$\begin{aligned} I(q_i) = {1 \over 2 \omega_1(\vec q) \omega_2(\vec q)} {\omega_1(\vec q) + \omega_2(\vec q) \over E^2 - (\omega_1(\vec q) + \omega_2(\vec q))^2} \, , \label{ifun}\end{aligned}$$ where $\omega_{1,2}(\vec q) = \sqrt{m_{1,2}^2 + \vec q^2}$. We show the real part of $\tilde G(s,L) - G(s)$ in Fig. \[fig:difference\] as a function of $q_{\rm max}$, where $L$ is fixed to be 2.5 $m_\pi^{-1}$ and $E$ to be $800$ MeV. Its convergence is good when $q_{\rm max}$ is larger than 3000 MeV. However, we shall still make an average of this quantity for smaller values of $q_{\rm max}$ in order to save the computational time [@Doring:2011jh; @MartinezTorres:2011pr]. 
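As a numerical cross-check of the infinite-volume ingredients defined above, the following sketch (our own re-implementation, with assumed meson masses $m_\pi = 139.57$ MeV and $m_K = 495.7$ MeV) evaluates Eqs. (\[eq:Vmatrix\]), (\[eq:GDR\]) and (\[eq:delta\]) with the quoted parameters; the resulting $P$-wave phase shift rises through $90^\circ$ near 895 MeV, close to the physical $K^*$ mass:

```python
import numpy as np

m_pi, m_K = 139.57, 495.7             # MeV, assumed meson masses
G_V, f, M_Kstar, a_mu = 53.81, 86.22, 919.03, -1.0
mu = M_Kstar                          # regularization scale

def p_cm(E):
    s = E * E
    return np.sqrt((s - (m_K + m_pi) ** 2) * (s - (m_K - m_pi) ** 2)) / (2 * E)

def G_dr(E):
    # Eq. (eq:GDR); above threshold Re G takes |.| inside the logs and
    # Im G = -Q/(8 pi sqrt(s)) follows from the +i*eps prescription
    s, Q = E * E, p_cm(E)
    d = m_K ** 2 - m_pi ** 2
    logs = (np.log(abs(s - d + 2 * E * Q)) + np.log(abs(s + d + 2 * E * Q))
            - np.log(abs(-s + d + 2 * E * Q)) - np.log(abs(-s - d + 2 * E * Q)))
    re = (a_mu + np.log(m_pi ** 2 / mu ** 2)
          + (d + s) / (2 * s) * np.log(m_K ** 2 / m_pi ** 2)
          + Q / E * logs) / (16 * np.pi ** 2)
    return re - 1j * Q / (8 * np.pi * E)

def V(E):
    # Eq. (eq:Vmatrix)
    s = E * E
    return -p_cm(E) ** 2 / (2 * f ** 2) * (
        1 + (2 * G_V ** 2 / f ** 2) * s / (M_Kstar ** 2 - s))

def delta_deg(E):
    # Eq. (eq:delta): T = -8 pi E / (p cot(delta) - i p)
    T = 1.0 / (1.0 / V(E) - G_dr(E))
    cot = np.real(-8 * np.pi * E / T) / p_cm(E)
    return np.degrees(np.arctan2(1.0, cot))
```

Scanning `delta_deg` over 850-950 MeV gives a smooth resonant rise, in line with the fit shown in Fig. \[fig:fittingdata\].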
The Energy Levels in the Chiral Unitary Approach {#sec:energylevels} ================================================ To calculate the energy levels of the $K \pi$ scattering amplitude in $P$-wave, we need to find the poles of the $T(s)$ matrix, which are just solutions of the following equation $$\begin{aligned} 1 - V(s) \tilde G(s,L) = 0 \, . \label{eq:EL}\end{aligned}$$ Here $\tilde G(s,L)$ is defined in the finite volume and can be obtained through Eq. (\[gtilde\]). From this equation we can clearly see that the energy levels for $K \pi$ $P$-wave scattering are functions of the cubic box size $L$, as well as the pion mass $m_\pi$. In the following sections we shall study their dependence on these variables. In this section we study the volume dependence and in the next section we shall study the pion mass dependence. We note again that our procedures follow closely the method used in Refs. [@Doring:2011jh; @MartinezTorres:2011pr; @Chen:2012rp; @Doring:2011nd; @Xie:2012np; @Roca:2012rx]. In Fig. \[fig:Level\] we show the energy levels as functions of the cubic box size $L$, which are obtained after performing an average for different $q_{max}$ values between 1200 MeV and 2000 MeV. Actually, the results for different $q_{\rm max}$ values are almost the same. In this figure we have used the dimensional regularization, Eq. (\[eq:GDR\]), to calculate Eq. (\[eq:gfunction\]) and then calculate $\tilde G(s,L)$ of Eq. (\[gtilde\]), while we can also use the cut-off method to calculate Eq. (\[eq:gfunction\]): $$\begin{aligned} G_{\rm cut off}(s,L)=\int_{q<q^\prime_{\rm max}}{d^3 q \over (2\pi)^3} {1 \over 2 \omega_1(\vec q) \omega_2(\vec q)} {\omega_1(\vec q) + \omega_2(\vec q) \over E^2 - (\omega_1(\vec q) + \omega_2(\vec q))^2} \, , \label{eq:cutoff}\end{aligned}$$ which can be inserted into Eq. (\[gtilde\]) and then calculate $\tilde G(s,L)$. The energy levels can be similarly calculated and the results are shown in Fig. \[fig:cutoff\]. 
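The sum-minus-integral prescription of Eq. (\[gtilde\]), with the summand of Eq. (\[ifun\]), can also be sketched directly. In the illustration below the energy is deliberately taken below the $K\pi$ threshold so that the summand is regular (the resonance region requires the more careful treatment described in the text), and the meson masses are assumed values:

```python
import numpy as np

m_pi, m_K = 139.57, 495.7       # MeV, assumed meson masses
L = 2.5 / m_pi                  # box size, in MeV^-1
E = 500.0                       # below the K pi threshold: no pole in I(q)

def I_q(q):
    # the summand/integrand of Eq. (ifun)
    w1 = np.sqrt(m_pi ** 2 + q * q)
    w2 = np.sqrt(m_K ** 2 + q * q)
    return (w1 + w2) / (2.0 * w1 * w2 * (E * E - (w1 + w2) ** 2))

def delta_G(q_max):
    """(1/L^3) sum over q = (2 pi/L) n, |q| < q_max, minus the
    continuum integral over the same sphere, as in Eq. (gtilde)."""
    n_max = int(q_max * L / (2.0 * np.pi)) + 1
    n = np.arange(-n_max, n_max + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    q = (2.0 * np.pi / L) * np.sqrt(nx * nx + ny * ny + nz * nz)
    box_sum = I_q(q[q < q_max]).sum() / L ** 3
    qs, dq = np.linspace(0.0, q_max, 200001, retstep=True)
    integral = (qs * qs * I_q(qs)).sum() * dq / (2.0 * np.pi ** 2)
    return box_sum - integral
```

The difference stabilizes as `q_max` grows, which is the convergence behavior discussed around Fig. \[fig:difference\].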
We note that the cutoff used in Eq. (\[eq:cutoff\]), denoted as $q^\prime_{\rm max}$, is different from $q_{\rm max}$ used in Eq. (\[gtilde\]). We choose $q^\prime_{\rm max}$ to be 724.70 MeV following Ref. [@Xiao:2013mn]. It is significantly larger than the discrete momentum $2 \pi / L = 433$ MeV when $L$ is around 2.0 $m_\pi^{-1}$. ![Solid curves are $K \pi$ scattering energy levels evaluated using Eq. (\[eq:GDR\]) and Eq. (\[gtilde\]), and dashed curves are energy levels evaluated using Eq. (\[eq:cutoff\]) and Eq. (\[gtilde\]), when $q^\prime_{\rm max}=724.70$ MeV [@Xiao:2013mn].[]{data-label="fig:cutoff"}](levecom.eps) The phase shift can be extracted from these energy levels. To do this we follow the procedure used in Ref. [@Doring:2011vk], and use Eq. (\[eq:delta\]) to calculate the $K \pi$ $P$-wave phase shifts, where the scattering amplitudes $T(E,L)$ are obtained using the energy levels shown in Fig. (\[fig:Level\]): $$\begin{aligned} \label{eq:T11} T(E, L) &=& { V(E) \over 1 - V(E) G(E) } = { \tilde G(E,L)^{-1} \over 1 - \tilde G(E,L)^{-1} G(E) } \, .\end{aligned}$$ Here we have used Eq. (\[eq:EL\]), i.e., $V(s)^{-1} = \tilde G(s,L)$. Although these procedures can be done for all energy levels, the lowest energy level should be the best one, because we are using the chiral unitary approach which is an effective theory for low energies. Accordingly, we use the lowest energy level to evaluate phase shifts, and the result is shown in Fig. \[fig:PhaseRead\]. For comparison, we also show the phase shifts evaluated using the second and the third energy levels. Using the phase shift $\delta(E)$ we can fit the physical quantities for the $K^*$ meson, and evaluate $m_{K^*}$, $g_{K^* \pi K}$ and $\Gamma_{K^*}$. We note that $m_{K^*}$ is the $K^*$ mass we obtained, i.e., one of our outputs; while $M_{K^*}$ is the bare $K^*$ mass, i.e., one of our inputs. To do that, we use the following two equations in Refs. 
[@Chen:2012rp; @sasa] to extract the $K^*$ properties: $$\begin{aligned} \label{eq:KstarWidth} \cot \delta(s) = {m_{K^*}^2 - s \over \sqrt s~\Gamma_{K^*}(s)} \, , ~~~{\rm and }~~~ \Gamma_{K^*}(s) = {p^3 \over s} {g^2_{K^* \pi K} \over 8 \pi} \, .\end{aligned}$$ We note that the factor $8 \pi$ in the second equation is our normalization, while in Ref. [@Prelovsek:2013ela] the authors use $6 \pi$. The results from fitting the phase shifts calculated using the lowest $K \pi$ energy level are $$\begin{aligned} m_{K^*} = 894.89_{-37.77}^{+39.75} {\rm~MeV}\, , g_{K^* \pi K}=6.48_{-0.12}^{+0.13} \, , \Gamma_{K^*} = 50.68_{-8.00}^{+8.24} {\rm~MeV} \, . \label{Kstarmass1}\end{aligned}$$ In these results the theoretical uncertainties are estimated following Ref. [@Chen:2012rp], where we assume that the uncertainties of the three parameters $G_V$, $M_{K^*}$ and $f$ in Eq. (\[eq:Vmatrix\]) are all about 4%. The uncertainties of the energy levels and phase shifts are shown in Fig. \[fig:certainty\]. Particularly, the uncertainty of phase shifts is quite large around $E = 900{\rm~MeV}$. However, the fitted results shown in Eq. (\[Kstarmass1\]) have moderate and acceptable uncertainties, suggesting our method is “stable” (see also the discussions in Sec. \[sec:luscher\]). Similarly, we can fit the second and the third energy levels. 
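As an illustration of this fitting step (our own sketch, not the code used in the paper; the meson masses are assumed values), one can generate noiseless synthetic phase shifts from the quoted best-fit parameters via Eq. (\[eq:KstarWidth\]) and recover them with a least-squares fit:

```python
import numpy as np
from scipy.optimize import curve_fit

m_pi, m_K = 139.57, 495.7   # MeV, assumed meson masses

def p_cm(E):
    s = E * E
    return np.sqrt((s - (m_K + m_pi) ** 2) * (s - (m_K - m_pi) ** 2)) / (2.0 * E)

def delta_bw(E, m_Kstar, g):
    # Eq. (eq:KstarWidth): cot(delta) = (m^2 - s)/(sqrt(s) Gamma(s)),
    # Gamma(s) = p^3/s * g^2/(8 pi); arctan2 keeps delta in (0, pi)
    s = E * E
    gamma = p_cm(E) ** 3 / s * g * g / (8.0 * np.pi)
    return np.arctan2(E * gamma, m_Kstar ** 2 - s)

E = np.linspace(750.0, 1050.0, 61)
synthetic = delta_bw(E, 894.89, 6.48)    # quoted best-fit values
(m_fit, g_fit), _ = curve_fit(delta_bw, E, synthetic, p0=(870.0, 5.0))
```

Since $g$ enters only squared, the fit determines it up to a sign; with real, noisy phase shifts the recovered parameters would carry the uncertainties quoted in Eq. (\[Kstarmass1\]).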
We find that the results do not change much: the results from fitting the phase shifts calculated using both the first and the second energy levels are (overlapped points are counted just once): $$\begin{aligned} m_{K^*} = 894.78{\rm~MeV}\, , g_{K^* \pi K}=6.34 \, , \Gamma_{K^*} = 48.49 {\rm~MeV} \, ,\end{aligned}$$ and the results from fitting the phase shifts calculated using all the three energy levels are (overlapped points are counted just once): $$\begin{aligned} m_{K^*} = 894.84{\rm~MeV}\, , g_{K^* \pi K}=6.31 \, , \Gamma_{K^*} = 48.04 {\rm~MeV} \, .\end{aligned}$$ Dependence on the Pion Mass {#sec:pimass} =========================== Due to computational limitations, lattice QCD calculations usually use non-physical pion masses. Therefore, in this section we also use non-physical pion masses to study the mass and decay width of the $K^*$ meson, in order to compare with the lattice QCD results. We define $m_\pi^0$ to be the physical pion mass and treat $m_\pi$ as a free parameter, varying it from $m_\pi^0$ to $3 m_\pi^0$. Other parameters also change with $m_\pi$. We follow the same approach as Refs. [@Chen:2012rp; @Boucaud:2007uk; @Beane:2007xs; @Noaki:2008gx; @Pelaez:2010fj], where the variation of $f$ as a function of $m_\pi$ is $$\frac{f(m_{\pi})}{f(m^0_{\pi})}= 1 +0.048((\frac{m_{\pi}}{m^0_{\pi}})^2-1),$$ with $f(m^0_{\pi})=86.22$ MeV. The coupling $G_V$ is related to $f$ [@sakurai; @Bando:1987br; @Ecker:1989yg; @Zhou:2014ila] as $G_V = f / \sqrt 2$, valid to leading order; consequently, we keep the ratio $G_V/f$ unchanged. The kaon mass $m_K$ can also change with the pion mass $m_\pi$, and we use the following relation [@Ren:2012aj]: $$m_K^2 = a + bm_\pi ^2, \label{eq.mK}$$ where $a = 0.29{\rm~GeV}^2$, and $b = 0.67$. We note that the lattice calculations also use non-physical kaon masses [@Fu:2012tj; @Lang:2012sv], but these values are not far from the physical one.
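The two extrapolation formulas above are straightforward to evaluate numerically. A minimal sketch (constants and function names are our own; $m_\pi^0 = 138$ MeV assumed, all masses in MeV):

```python
import math

M_PI0 = 138.0        # physical pion mass (MeV), assumed here
F0 = 86.22           # f(m_pi^0) in MeV, as quoted in the text
A_MEV2 = 0.29e6      # a = 0.29 GeV^2 expressed in MeV^2
B = 0.67

def f_pi(m_pi):
    """Pion decay constant: f/f0 = 1 + 0.048((m_pi/m_pi0)^2 - 1)."""
    return F0 * (1.0 + 0.048 * ((m_pi / M_PI0) ** 2 - 1.0))

def g_v(m_pi):
    """G_V = f / sqrt(2) at leading order, so the ratio G_V/f stays fixed."""
    return f_pi(m_pi) / math.sqrt(2.0)

def m_kaon(m_pi):
    """Kaon mass from m_K^2 = a + b m_pi^2 (input and output in MeV)."""
    return math.sqrt(A_MEV2 + B * m_pi ** 2)
```

Note that Eq. (\[eq.mK\]) gives $m_K \simeq 550$ MeV at the physical pion mass, above the physical value of 496 MeV, and $m_K \simeq 608$ MeV at $m_\pi = 2.5\,m_\pi^0$, close to the 610 MeV quoted below.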
Accordingly, in this paper we shall first keep it unchanged, then use the kaon mass in Eq. (\[eq.mK\]), and finally use the same values of $m_K$ as the lattice ones [@Fu:2012tj; @Lang:2012sv] in order to compare our results with theirs. On the other hand, the bare $K^*$ mass, $M_{K^*}$ in Eq. (\[eq:Vmatrix\]), provides the link of the theory to a genuine component of the $K^*$ meson, not related to the $K \pi$ component, and we assume it to be $m_{\pi}$ independent. To calculate the energy levels we follow the same procedure as in the previous section. The result is shown in Fig. \[fig:mpi\], where we have used $m_\pi = 1.5~m_\pi^0$ (left), $m_\pi = 2.0~m_\pi^0$ (middle) and $m_\pi = 2.5~m_\pi^0$ (right). The solid curves are obtained using the physical kaon mass $m_K = 496 {\rm~MeV}$, and the dotted curves are obtained using the non-physical kaon mass evaluated using Eq. (\[eq.mK\]). We can see that the results obtained using these different kaon masses do not differ much. Note that the $x$-axis is expressed in units of $m_\pi^{-1}$, not $(m_\pi^0)^{-1}$. These energy levels can be used to calculate the phase shifts, again following our previous procedure. The results are shown in Fig. \[fig:mpips\], where again we have used $m_\pi = 1.5~m_\pi^0$ (left), $m_\pi = 2.0~m_\pi^0$ (middle) and $m_\pi = 2.5~m_\pi^0$ (right). The solid curves are obtained using the physical kaon mass $m_K = 496 {\rm~MeV}$, and the dashed curves are obtained using the non-physical kaon mass evaluated using Eq. (\[eq.mK\]). We note that the dashed curve on the right of Fig. \[fig:mpips\] vanishes, because the $K \pi$ threshold, the sum of $2.5~m_\pi^0$ and the non-physical kaon mass $m_K = 610{\rm~MeV}$, is already above the $K^*$ mass. Now we can compare our results with the lattice results of Refs. [@Fu:2012tj; @Lang:2012sv], where $m_\pi =240$ MeV and $m_K=548$ MeV are used in Ref. [@Fu:2012tj], and $m_\pi=266$ MeV and $m_K=552$ MeV are used in Ref. [@Lang:2012sv].
We show their comparisons in Fig. \[fig:Lattice\] and Tables \[tab:comparedata1\] and \[tab:comparedata2\], where $E_1$ and $E_2$ are the lowest and the second energy levels, and $\delta_1$ and $\delta_2$ are extracted from $E_1$ and $E_2$, respectively. We find that the energy levels and the extracted phase shifts are similar, and so our results compare favorably with the lattice results obtained in Refs. [@Fu:2012tj; @Lang:2012sv]. Again the theoretical errors are obtained by assuming that the uncertainties of the three parameters $G_V$, $M_{K^*}$ and $f$ in Eq. (\[eq:Vmatrix\]) are about 4%. We also show more points in Table \[tab:diffmpi\], which may be useful.

\[tab:comparedata1\]

                      $E_1$                               $E_2$
  ----------------- ----------------------------------- ------------------------------------
  Our Results       $912.6_{-33.5}^{+33.4} {\rm MeV}$   $1166.7_{-5.1}^{+5.2}{\rm MeV}$
  Lattice Results   $926.9_{-10.0}^{+23.5} {\rm MeV}$   $1171.7_{-22.5}^{+40.0} {\rm MeV}$

  : Comparison with Ref. [@Fu:2012tj], where $m_\pi = 240$ MeV, $m_K = 548$ MeV and $L = 3$ fm.

\[tab:comparedata2\]

                      $E_1$                              $E_2$                              $\delta_1$                                       $\delta_2$
  ----------------- ---------------------------------- ---------------------------------- ----------------------------------------------- -----------------------------------------------
  Our Results       $926.2_{-36.8}^{+36.0}{\rm MeV}$   $1511.4_{-7.5}^{+9.6} {\rm MeV}$   $158.05^\circ$ $_{-8.45^\circ}^{+8.52^\circ}$   $175.52^\circ$ $_{-3.62^\circ}^{+2.79^\circ}$
  Lattice Results   $915.6 \pm 3.0 {\rm MeV}$          $1522.3 \pm 7.0 {\rm MeV}$         $160.61^\circ \pm 0.73 ^\circ$                  $177.0^\circ \pm 2.6^\circ$

  : Comparison with Ref. [@Lang:2012sv], where $m_\pi = 266$ MeV, $m_K = 552$ MeV and $L = 1.98$ fm.

  : Some examples of energy levels and phase shifts.
\[tab:diffmpi\] Finally, we use Eq. (\[eq:KstarWidth\]) to fit the phase shifts obtained using the lowest $K \pi$ energy level, and obtain the $K^*$ mass (left), the coupling constant $g_{K^* \pi K}$ (middle) and the decay width $\Gamma_{K^*}$ (right), which are shown in Fig. \[fig:Kstarmass\] as functions of $m_\pi$. We can see that the results for $g_{K^* \pi K}$ obtained using the physical kaon mass and the non-physical kaon masses of Eq. (\[eq.mK\]) are very similar, while the results for $m_{K^*}$ and $\Gamma_{K^*}$ are not. This is probably because the available phase space differs considerably, even though the kaon masses themselves do not differ much. We also note that when using Eq. (\[eq.mK\]), the physical kaon mass $m_K = 496{\rm~MeV}$ cannot be reached at the physical pion mass $m_\pi = 138{\rm~MeV}$. Therefore, the dashed curves and the solid curves do not connect. Again we can compare our results with the lattice results of Ref. [@Prelovsek:2013ela], where $m_\pi = 266 {\rm~MeV}$, $m_K = 552 {\rm~MeV}$ and $L = 1.98 {\rm~fm}$. Their results are $m_{K^*} = 891 \pm 14$ MeV and $g_{K^* \pi K} = 5.7 \pm 1.6$, which change to $m_{K^*} = 891 \pm 14$ MeV and $g_{K^* \pi K} = 6.6 \pm 1.9$ in our normalization after taking into account the factor ${8 \pi \over 6 \pi}$. These results are in agreement, within uncertainties, with our result $m_{K^*} = 910.5_{-33.8}^{+34.3}$ MeV and $g_{K^* \pi K} = 5.61_{-0.27}^{+0.21}$. For completeness, we also show in Fig. \[fig:Kstarphy\] the results calculated by fitting the phase shifts obtained using the three lowest energy levels. The solid curves are obtained using the first (lowest) $K \pi$ energy level, the dot-dashed curves are obtained using both the first and the second energy levels, and the dashed curves are obtained using all the three lowest energy levels. We find that these results are almost the same.
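As a quick numerical consistency check of the Breit–Wigner parametrization in Eq. (\[eq:KstarWidth\]), the following sketch (function names are ours; $m_\pi = 138$ MeV and $m_K = 496$ MeV as in the text, $8\pi$ normalization) evaluates the width and phase shift at the fitted central values:

```python
import math

M_PI, M_K = 138.0, 496.0     # meson masses in MeV, as used in the text

def p_cm(s):
    """K pi center-of-mass momentum for invariant mass squared s (MeV^2)."""
    return math.sqrt((s - (M_K + M_PI) ** 2) * (s - (M_K - M_PI) ** 2)) / (2.0 * math.sqrt(s))

def gamma_kstar(s, g):
    """Energy-dependent width Gamma(s) = (p^3 / s) * g^2 / (8 pi)."""
    return p_cm(s) ** 3 * g ** 2 / (8.0 * math.pi * s)

def delta_deg(E, m_kstar, g):
    """P-wave phase shift from cot(delta) = (m_Kstar^2 - s) / (sqrt(s) Gamma(s))."""
    s = E * E
    cot_d = (m_kstar ** 2 - s) / (math.sqrt(s) * gamma_kstar(s, g))
    return math.degrees(math.atan2(1.0, cot_d))   # maps delta into (0, 180) degrees
```

Evaluating `gamma_kstar` at $s = m_{K^*}^2$ with the central values of Eq. (\[Kstarmass1\]) gives $\Gamma_{K^*} \approx 50.6$ MeV, consistent with the fit, and the phase shift passes through $90^\circ$ at $E = m_{K^*}$, as it must.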
Comparison of our results with the standard Lüscher approach {#sec:luscher} ============================================================== To make our analysis complete, we compare our results with those obtained using the standard Lüscher approach [@Doring:2011vk]. To do this, we follow the same approach as Refs. [@Chen:2012rp; @Boucaud:2007uk; @Beane:2007xs; @Noaki:2008gx; @Pelaez:2010fj]: we use as input the energy levels which we have calculated in Sec. \[sec:energylevels\] and Sec. \[sec:pimass\] using our $G$-functions (Eqs. (\[eq:GDR\]) and (\[gtilde\])), but evaluate phase shifts using both our method and Lüscher’s $G$-function. The function $I(q_i)$ of Eq. (\[ifun\]) can be written as $$\begin{split} \label{luscherap} {1 \over 2 \omega_1 \omega_2} {\omega_1 + \omega_2 \over E^2 - (\omega_1 + \omega_2)^2 + i \epsilon} =~&{1 \over 2E} { 1 \over p^2 - \vec q^2 + i \epsilon }~~~~~~~~~~~~~(a)\\ &- {1 \over 2 \omega_1 \omega_2} {1 \over \omega_1 + \omega_2 + E}~~~~~(b) \\ &- {1 \over 4 \omega_1 \omega_2} {1 \over \omega_1 - \omega_2 - E}~~~~~(c) \\ &- {1 \over 4 \omega_1 \omega_2} {1 \over \omega_2 - \omega_1 - E}~~~~~(d) \, . \end{split}$$ In the standard Lüscher approach only the first term of this equation is kept. We use the following two sets of energy levels: a) the lowest $K \pi$ energy level shown in Fig. \[fig:Level\], where $m_\pi = 138$ MeV and $m_K = 496$ MeV are used; and b) the lowest $K \pi$ energy level shown in the right panel of Fig. \[fig:Lattice\], where $m_\pi = 266$ MeV and $m_K = 552$ MeV are used [@Lang:2012sv]. The obtained phase shifts are shown in Fig. \[fig:compare Luscher\], where the dashed curves are the phase shifts evaluated using the standard Lüscher approach and the solid curves are our results.
These phase shifts can be similarly used to fit the physical quantities: $$\begin{aligned} &a):&~~~~m_{K^*} = 961.54 {\rm~MeV}\, , g_{K^* \pi K}=8.25 \, , \Gamma_{K^*} = 110.77 {\rm~MeV} \, ,\\ \nonumber &b):&~~~~m_{K^*} = 926.04 {\rm~MeV}\, , g_{K^* \pi K}=6.48 \, , \Gamma_{K^*} = 17.14 {\rm~MeV} \, . \label{lushermass1}\end{aligned}$$ Comparing these two cases, we clearly see that the Lüscher results and our results are quite similar in case (b), when non-physical pion and kaon masses are used. This confirms the validity of the standard Lüscher approach in realistic simulations. There are some differences in case (a), when the physical pion and kaon masses are used, but both results are still consistent with each other within uncertainties, considering that the uncertainties of the phase shifts are quite large, as shown in the right panel of Fig. \[fig:certainty\]. We note that this discrepancy is partly caused by hidden systematics in the different approaches. Moreover, the result at higher energy is obtained using a smaller volume, which could cause sizable finite-volume effects (see also the discussions in Refs. [@Luscher:1990ux; @Chen:2012rp]). Accordingly, we suggest that lattice practitioners pay attention to this effect when extracting physical information from lattice data calculated in a (too) small box, for example if they want to use the physical pion mass while the computational power is still limited. For completeness, we try to find where these differences come from by adding the second, the third and the fourth terms of Eq. (\[luscherap\]) to the first, standard Lüscher term. The results for physical pion and kaon masses are shown in Fig. \[fig:LuscherP\], and the results for $m_\pi=266$ MeV and $m_K=552$ MeV are shown in Fig. \[fig:LuscherNP\]. These results suggest that the relativistic corrections can be well taken into account by simply adding either the third or the fourth term of Eq. (\[luscherap\]).
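Away from its poles, the decomposition in Eq. (\[luscherap\]) is an exact algebraic identity, which is easy to verify numerically. A minimal sketch (function names are ours; masses and momenta in MeV, with $p$ the on-shell momentum):

```python
import math

def lhs(E, q, m1, m2):
    """Left-hand side of Eq. (luscherap): the relativistic two-body propagator term."""
    w1, w2 = math.hypot(m1, q), math.hypot(m2, q)
    return (w1 + w2) / (2.0 * w1 * w2 * (E * E - (w1 + w2) ** 2))

def rhs_terms(E, q, m1, m2):
    """The four terms (a)-(d); term (a) alone is the standard Luscher piece."""
    w1, w2 = math.hypot(m1, q), math.hypot(m2, q)
    s = E * E
    p2 = (s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2) / (4.0 * s)  # on-shell momentum squared
    a = 1.0 / (2.0 * E * (p2 - q * q))
    b = -1.0 / (2.0 * w1 * w2 * (w1 + w2 + E))
    c = -1.0 / (4.0 * w1 * w2 * (w1 - w2 - E))
    d = -1.0 / (4.0 * w1 * w2 * (w2 - w1 - E))
    return a, b, c, d
```

Summing the four returned terms reproduces `lhs` to machine precision at generic momenta $q$; keeping only term (a) gives the standard Lüscher approximation discussed above.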
The Inverse Problem of Getting Phase Shifts from Lattice Data ============================================================ In this section we study the inverse process of getting phase shifts from lattice data using two energy levels and a parametrized potential. This has also been done in Refs. [@Doring:2011vk; @MartinezTorres:2011pr; @Doring:2011nd; @Xie:2012np; @Chen:2012rp; @Roca:2012rx], showing that this method is rather efficient. To do this we assume that the first and second energy levels shown in Fig. [\[fig:Level\]]{} are “lattice” inputs, or “synthetic” data. We shall use them to inversely evaluate the $V$-matrix and then calculate phase shifts. At the same time we shall give these “lattice data” some error bars, which can be used to evaluate the uncertainties of the phase shifts. Our procedures follow Refs. [@Doring:2011vk; @MartinezTorres:2011pr; @Doring:2011nd; @Xie:2012np; @Chen:2012rp; @Roca:2012rx]. We take five energies from the first level and five more from the second one (note that their volumes are also different), and assign to each an error of 10 MeV. Then we use the following function, which accounts for a CDD pole [@Castillejo:1955ed], to do the one-channel fitting: $$V=-ap^2(1+\frac{bs}{c^2-s}) \, ,$$ where $a$, $b$ and $c$ are three free parameters which we fit to the “lattice” data shown in Fig. \[fig:Level\]. The results are shown in Fig. \[fig:fitting\], where the energy levels are calculated from all the possible sets of parameters having $\chi^2 < \chi^2_{\rm min} + 1$. Here $\chi^2_{\rm min} = 0.064$ is the best fit we obtained, with parameters: $$\begin{aligned} a= 6.50\times10^{-5} ~{\rm MeV}^{-2},~~~b =0.79,~~~c=918.90 \rm ~MeV \, .\end{aligned}$$ We find that the errors in the phase shift are large at small energies, but they become smaller as the energy increases. As mentioned in Ref.
[@Doring:2011vk], the result of this inverse analysis does not depend on which cutoff or subtraction constant one uses in the analysis, as long as one uses the same ones to induce $V$ from the lattice data and then later on to get phase shifts in the infinite volume from Eq. (\[bethesal\]). The method proves to be practical and efficient. Conclusion {#sec:summary} ========== In this paper we use the efficient strategy proposed in Ref. [@Doring:2011vk] to obtain $K \pi$ phase shifts, and thus the $K^*$ meson properties, from energy levels obtained in lattice calculations. To do this we studied the $K \pi$ interaction in $P$-wave in a finite box using the chiral unitary approach, which has been very successful in providing $K \pi$ phase shifts in infinite volume. We evaluated energy levels which are functions of the cubic box size $L$ and the pion mass $m_\pi$. We then used these energy levels to obtain $K \pi$ phase shifts. Finally we used these phase shifts to fit the physical quantities of the $K^*$ meson: $m_{K^*} = 894.89_{-37.77}^{+39.75} {\rm~MeV}$, $g_{K^* \pi K}=6.48_{-0.12}^{+0.13}$, $\Gamma_{K^*} = 50.68_{-8.00}^{+8.24} {\rm~MeV}$. To compare our results with the lattice QCD calculations, we also used non-physical pion masses and redid the same calculations. We note that other parameters can also change with $m_\pi$, and we have considered these effects. The comparison of our results with the lattice QCD results is shown in Table \[tab:comparedata1\] and Table \[tab:comparedata2\], where we can see that our results compare favorably with the lattice results obtained in Refs. [@Fu:2012tj; @Lang:2012sv]. We note that in these calculations we have estimated the theoretical uncertainties. To make our analysis complete, we also compared our results with those obtained using the standard Lüscher approach.
We find that the Lüscher results and our results are quite similar for the $K^*(892)$ resonance when the non-physical pion and kaon masses, $m_\pi = 266$ MeV and $m_K = 552$ MeV, are used. There are some differences when the physical pion and kaon masses are used, although both results are still consistent with each other within uncertainties. This discrepancy is partly caused by hidden systematics in the different approaches. Moreover, the results at higher energies are obtained using a smaller volume, which could cause sizable finite-volume effects. Accordingly, we suggest that lattice practitioners pay attention to this effect when extracting physical information from lattice data calculated in a (too) small box, for example if they want to use the physical pion mass while the computational power is still limited. Our analyses also suggest that the relativistic corrections can be well taken into account by simply adding either the third or the fourth term of Eq. (\[luscherap\]). We also studied the inverse process of getting phase shifts from our “synthetic” lattice data using two energy levels and a parametrized potential. Acknowledgements ============== We thank Eulogio Oset for suggesting this problem and for valuable help, and Bao-Xi Sun, Chu-Wen Xiao and Xiu-Lei Ren for useful discussions and information. We also thank Zhi-Hui Guo for useful discussions and information about Ref. [@Guo:2011pa]; we have started to study the $K^*$(892) resonance in finite volume with the coupled channels $K \pi$, $K \eta$ and $K \eta'$. This work is supported by the National Natural Science Foundation of China under Grants No. 11205011, 11475015, 11375024 and 11375023, and the Fundamental Research Funds for the Central Universities. [99]{} M. Lüscher, Commun. Math. Phys. [**105**]{} (1986) 153. M. Lüscher, Nucl. Phys. B [**354**]{} (1991) 531. V. Bernard, M. Lage, U. G. Meissner and A. Rusetsky, JHEP [**0808**]{} (2008) 024. V. Bernard, M. Lage, U.-G.
Meissner and A. Rusetsky, JHEP [**1101**]{} (2011) 019. M. Doring, U.-G. Meissner, E. Oset and A. Rusetsky, Eur. Phys. J. A [**47**]{}, 139 (2011). M. Doring, J. Haidenbauer, U.-G. Meissner and A. Rusetsky, Eur. Phys. J. A [**47**]{}, 163 (2011). A. Martinez Torres, L. R. Dai, C. Koren, D. Jido and E. Oset, Phys. Rev. D [**85**]{}, 014027 (2012). E. E. Kolomeitsev and M. F. M. Lutz, Phys. Lett. B [**582**]{}, 39-48 (2004). J. Hofmann and M. F. M. Lutz, Nucl. Phys. A [**733**]{}, 142-152 (2004). F.-K. Guo, P.-N. Shen, H.-C. Chiang, R.-G. Ping and B.-S. Zou, Phys. Lett. B [**641**]{}, 278-285 (2006). D. Gamermann, E. Oset, D. Strottman and M. J. Vicente Vacas, Phys. Rev. D [**76**]{}, 074016 (2007). M. Doring and U. G. Meissner, JHEP [**1201**]{}, 009 (2012). J.-J. Xie and E. Oset, Eur. Phys. J. A [**48**]{}, 146 (2012). L. Roca and E. Oset, Phys. Rev. D [**85**]{}, 054507 (2012). H. X. Chen and E. Oset, Phys. Rev. D [**87**]{}, 016014 (2013). Z. Fu and K. Fu, Phys. Rev. D [**86**]{}, 094507 (2012). C. B. Lang, L. Leskovec, D. Mohler and S. Prelovsek, Phys. Rev. D [**86**]{}, 054508 (2012). S. Prelovsek, L. Leskovec, C. B. Lang and D. Mohler, Phys. Rev. D [**88**]{}, no. 5, 054508 (2013). J. J. Dudek [*et al.*]{} \[Hadron Spectrum Collaboration\], Phys. Rev. Lett. [**113**]{}, no. 18, 182001 (2014). J. A. Oller and E. Oset, Phys. Rev. D [**60**]{}, 074023 (1999). C. W. Xiao, F. Aceti and M. Bayar, Eur. Phys. J. A [**49**]{}, 22 (2011). J. A. Oller and E. Oset, Nucl. Phys. A [**620**]{}, 438 (1997) \[Erratum-ibid. A [**652**]{}, 407 (1999)\]. J. A. Oller and U. G. Meissner, Phys. Lett. B [**500**]{}, 263 (2001). M. Altenbuchinger and L. S. Geng, Phys. Rev. D [**89**]{}, no. 5, 054008 (2014). Z. H. Guo and J. A. Oller, Phys. Rev. D [**84**]{}, 034005 (2011). C. B. Lang, D. Mohler, S. Prelovsek and M. Vidmar, Phys. Rev. D [**84**]{}, 054503 (2011). J. A. Oller, E. Oset and J. E. Palomar, Phys. Rev. D [**63**]{}, 114009 (2001). M. Doring, J.
Haidenbauer, U.-G. Meissner and A. Rusetsky, Eur. Phys. J. A [**47**]{}, 163 (2011). P. Estabrooks, R. K. Carnegie, A. D. Martin, W. M. Dunwoodie, T. A. Lasinski and D. W. G. S. Leith, Nucl. Phys. B [**133**]{}, 490 (1978). R. Mercer, P. Antich, A. Callahan, C. Y. Chien, B. Cox, R. Carson, D. Denegri and L. Ettlinger [*et al.*]{}, Nucl. Phys. B [**32**]{}, 381 (1971). P. Boucaud [*et al.*]{} \[ETM Collaboration\], Phys. Lett. B [**650**]{}, 304 (2007). S. R. Beane, T. C. Luu, K. Orginos, A. Parreno, M. J. Savage, A. Torok and A. Walker-Loud, Phys. Rev. D [**77**]{}, 014505 (2008). J. Noaki, S. Aoki, T. W. Chiu, H. Fukaya, S. Hashimoto, T. H. Hsieh, T. Kaneko and H. Matsufuru [*et al.*]{}, PoS LATTICE [**2008**]{}, 107 (2008). J. R. Pelaez and G. Rios, Phys. Rev. D [**82**]{}, 114002 (2010). J. J. Sakurai, Currents and Mesons (University of Chicago Press, Chicago IL, 1969). M. Bando, T. Kugo and K. Yamawaki, Phys. Rept. [**164**]{}, 217 (1988). G. Ecker, J. Gasser, H. Leutwyler, A. Pich and E. de Rafael, Phys. Lett. B [**223**]{}, 425 (1989). Y. Zhou, X. L. Ren, H. X. Chen and L. S. Geng, Phys. Rev. D [**90**]{}, no. 1, 014020 (2014). X.-L. Ren, L. S. Geng, J. Martin Camalich, J. Meng and H. Toki, JHEP [**1212**]{}, 073 (2012). L. Castillejo, R. H. Dalitz and F. J. Dyson, Phys. Rev. [**101**]{}, 453 (1956).
--- abstract: 'New low frequency $ac$ susceptibility measurements on two different spin glasses show that cooling/heating the sample at a constant rate yields an essentially reversible (but rate dependent) $\chi(T)$ curve; a downward relaxation of $\chi$ occurs during a temporary stop at constant temperature ([*ageing*]{}). Two main features of our results are: (i) when cooling is resumed after such a stop, $\chi$ goes back to the reversible curve ([*chaos*]{}); (ii) upon re-heating, $\chi$ perfectly traces the previous ageing history ([*memory*]{}). We discuss implications of our results for a [*real space*]{} (as opposed to [*phase space*]{}) picture of spin glasses.' address: - | $^1$Service de Physique de l’Etat Condensé, CEA Saclay,\ 91191 Gif sur Yvette Cedex, France - | $^2$Department of Material Science, Uppsala University, P.O. Box 534,\ 751 21 Uppsala, Sweden author: - 'K. Jonason$^2$, E. Vincent$^1$, J. Hammann$^1$, J.P. Bouchaud$^1$, P. Nordblad$^2$' title: Memory and Chaos Effects in Spin Glasses --- PACS numbers: 75.50.Lk 75.10.Nr

to appear in Phys. Rev. Lett.

The dynamic properties of the spin glass phase have been extensively studied by both experimentalists and theorists for almost two decades [@Young; @saclayrev1]. The observed properties reflect the out-of-equilibrium state of the system: the response to a field variation is logarithmically slow, and, in addition, depends on the time spent at low temperature (“ageing”). Ageing is fully reinitialized by heating the sample above the glass temperature $T_g$. It corresponds to the slow evolution of the system towards equilibrium, starting at the time of the quench below $T_g$. Many aspects of ageing are similar to the “physical ageing” phenomena that have been characterized in the mechanical properties of glassy polymers [@struik]. In the last few years, some interesting progress in the theoretical understanding of ageing in disordered systems has been achieved [@mfnonequ].
From the studies of the critical behaviour at $T_g$ [@critic], it appears that the approach of $T_g$ is associated with the divergence of a spin-spin correlation length, as is the case in the phase transitions of classical ordered systems. In the spin glass phase, the system is out of equilibrium: as in simple ferromagnets, it is tempting to associate ageing with the progressive growth of a typical domain size towards an equilibrium infinite value. However, this simple picture cannot account for all the experimental observations. In particular, the effect of small temperature cycles (within the spin-glass phase) is rather remarkable [@cycleuppsala; @cyclesaclay]: $\bullet$ on the one hand, ageing at a higher temperature barely contributes to ageing at a lower temperature. Said differently (as will be discussed again below), the thermal history at sufficiently higher temperatures is irrelevant. This is at variance with a simple scenario of thermal activation over barriers, where the time spent at higher temperature would obviously help the system to find its equilibrium state. Everything happens as if there were strong changes of the free-energy landscape with temperature. This point is suggestive of the “chaotic” aspect of the spin glass phase that has been predicted from mean field theory [@mpv] and from scaling arguments in [@BrayMoore; @FHlett]. $\bullet$ on the other hand, interesting memory effects concomitantly appear: the state reached by the system at a given temperature can be retrieved after a negative temperature cycle. In the present letter, we describe some new experiments which reveal in a rather striking way these memory and chaos effects, and we point out their implications for the construction of a real space picture of spin glasses. The results are obtained using a new experimental protocol, which has first been proposed and applied to the metallic Cu:Mn spin glass by one of us [@Nordblad].
We now develop this approach in a series of measurements on the $CdCr_{1.7}In_{0.3}S_4$ insulating spin glass [@mtrlref]. The universality of the out-of-equilibrium dynamics in spin glasses is evidenced by the similarity between the results on two very different realisations of spin glass systems. In this procedure, we record the ac-susceptibility of the sample as a function of temperature. The ac field has a low frequency of $\omega/2\pi=0.04 Hz$, to allow the relaxation of the susceptibility due to ageing to be clearly visible on the scale of several hours. The peak amplitude of the ac field is $0.3Oe$, which is low enough not to affect the properties of the system. We cool the system from above $T_g=16.7K$ down to $5K$ at a constant cooling rate of $0.1K/min$, and then heat it back continuously at the same rate. This yields two slightly different curves; the one obtained upon heating (a bit below the other one) is chosen as the [*reference curve*]{}, and shown as a solid line in Fig. 1. We then repeat the experiment, but now stop during cooling at an intermediate temperature $T_{1}=12 K$ for a certain waiting time $t_{w1}=7 h$. During $t_{w1}$, due to ageing, both $\chi'$ and $\chi''$ relax downwards, by about the same amount; for $\chi''$, however, the relative amount is much larger, which makes the effect more visible, and in the following we mainly concentrate on the out-of-phase component. After the ageing stage at $T_1$, the cooling procedure resumes, and one observes that $\chi''$ and $\chi'$ merge back with the reference curve only a few Kelvin below $T_1$. Thus, ageing at $T_1=12 K$ has not influenced the result at lower temperatures ([*“chaos” effect*]{}). The surprise is that when the sample is re-heated at a constant heating rate (i.e. no further stops on the way up), we find that the trace of the previous stop (the dip in $\chi''$) is exactly recovered (see Fig. 1).
The memory of what happened at $T_{1}=12 K$ has not been erased by the further cooling stage, [*even though $\chi''$ at lower temperatures lies on the reference curve*]{}. The system can actually retrieve information from several stops if they are sufficiently separated in temperature. In Fig. 2, we show a “double memory experiment”, in which two ageing evolutions, one at $T_1=12K$ and the other at $T_2=9K$, are retrieved [@remark]. In the inset of Fig. 2, the result of a similar experiment on a Cu:Mn sample is shown [@Nordblad]. As discussed above, the cooling rate dependence of the dynamics in spin glasses is largely governed by the “chaos” effect. For example, it has been shown that there is no difference in the ageing behaviour if the spin glass has been directly quenched from above $T_g$ or if it has been subjected to a very long waiting pause immediately below $T_g$ [@cyclesaclay]. However, the influence of the cooling rate was not quantitatively characterized in systematic measurements, and this point is of particular interest for the comparison between spin glasses and other glassy systems. We have therefore performed the following experiment. We cool the sample progressively and continuously (in fact, by steps of $0.5K$) from above $T_g$ to $12K=0.72T_g$, using three very different cooling rates. The result is shown in Fig. 3. The initial values of $\chi'$ and $\chi''$ are indeed different: slower cooling yields a smaller initial value of the susceptibility, a value that is closer to “equilibrium”. A small horizontal shift of the curves along the time scale allows the superposition of the three of them; the curves obtained after a slower cooling are somewhat “older”. However, all curves are clearly converging towards the same asymptotic value. This behaviour contrasts with that observed in systems where thermally activated domain growth is important, for example the dielectric relaxation of the dipole glass $K_{1-x}Li_{x}TaO_3$ [@levelut].
There, it is found that different cooling rates lead the system to very different apparent asymptotic values of the dielectric constant. One can furthermore show that the cooling rate effect seen in Fig. 3 is entirely due to the last temperature interval, and not at all to the time spent at higher temperatures. We again use different cooling rates from above $T_g=16.7K$ to $14K$, but then we rapidly cool from $14K$ to $12K$, where the relaxation is measured. In this procedure, despite very different average cooling rates, the last two Kelvin are always crossed at the same speed. The result, in Fig. 3 (inset), is unambiguous: the obtained relaxation is the same in all cases, for $\chi'$ as well as for $\chi''$. Thus, in a spin glass, the only influence of the cooling rate on the ageing state is due to the very last temperature interval before reaching the measurement temperature, while due to chaos effects the time spent at higher temperatures does not contribute. Again, this strongly contrasts with the case of $K_{1-x}Li_{x}TaO_3$ alluded to above, where it is the time spent at temperatures near the glass transition that mostly determines the final state after the quench [@levelut]. There have been quite a number of approaches, inspired by Parisi’s solution of mean-field models, in which ageing can be described in terms of a random walk in the space of the metastable states [@sibani; @jpbdean; @orbach]. The memory and chaos effects have been described in terms of a [*hierarchical organization of the metastable states as a function of temperature*]{} [@cyclesaclay], a picture in which the growth of the free energy barriers when the temperature decreases could be characterized quantitatively [@sacorbach].
Although these phase space pictures are very helpful (and have been used very early in the context of spin-glasses and glasses), it is obvious that they need to be linked with real space pictures, where the experimental signal can be attributed to certain clusters of spins which flip collectively. Macroscopically, ageing means that the system becomes “stiffer” with time, in the sense that the response to a field variation becomes slower and slower with increasing age. Slower response means larger free-energy barriers, and correspondingly a larger number of spins to be simultaneously flipped: one is thus naturally led to think in terms of growing “domains” (or “droplets”) of correlated spins. The simplest picture based on this idea has been proposed by Fisher and Huse [@FH] and Koper and Hilhorst [@KH] in slightly different terms. It is based on the postulate that, at any given temperature below the spin-glass transition, there is only one phase (and its spin reversed counterpart) to be considered, much as in a standard ferromagnet. The difference is of course that all spins do not point in one direction, but arrange in a random (but fixed) way imposed by the interplay between the disordered nature of the interactions and the temperature. One can however by convention call one of the phases ‘up’ and the other one ‘down’; again as in a ferromagnet, the typical domain size is expected to grow with time, albeit logarithmically slowly since domain walls are pinned by the disorder. In principle, this picture should lead to very strong cooling rate effects, which are, as stated above, not those that are observed experimentally in spin glasses (see Fig. 3). 
However, if one assumes (as suggested by mean field and scaling arguments) [*chaos with temperature*]{}, in the sense that the phase growing at temperature $T_1$ is not at all the “correct” equilibrium phase for another temperature $T_2$, the cooling rate dependence can indeed be small since the time spent at a higher temperature does not bring the system any closer to equilibrium. This must be contrasted with random field like systems, where the equilibrium state is the same in the whole low temperature phase, and where cooling rate effects are strong. The chaotic dependence of the phase with temperature allows one to argue why ageing is restarted when the temperature is lowered: the configuration reached after ageing at $T_1$ is, from the point of view of the ‘$T_2$-phase’, completely random. Correspondingly, new $T_2$-domains have to grow. The problem, however, comes from the observed memory effect: it shows that the $T_2$-domains must indeed grow somewhere, but [*without destroying the preexisting $T_1$-domains*]{}. The only possibility is that the $T_2$-domains [*do not nucleate everywhere*]{}, but only around certain favorable nucleation centers, [*coexisting*]{} with the previous backbone of $T_1$-domains. This is however in contradiction with the idea that, at any temperature, only one phase (and thus two types of domains) is enough to describe the dynamics of the system completely, since we already require the coexistence of [*two*]{} types of domains ($T_1$- and $T_2$-domains). The same argument shows that, at the first temperature $T_1$, the system must actually be in a mixture of all the different phases encountered between $T_g$ and $T_1$. Intuitively, it seems clear that if the system is so fragile to temperature changes, then by the same token it is hard to imagine that other nearby ‘phases’ will not nucleate simultaneously with the nominal phase. 
In other words, the question is whether, at long times, the ‘defects’ are only domain walls between two well-identified phases (as in the coarsening/droplet picture), or whether these defects are more complicated (branched) objects. We believe that the memory effect discussed above is incompatible with a picture of the [*out-of-equilibrium*]{} dynamics based on one type of domain only. Obviously, this does not exclude the possibility that the [*equilibrium*]{} phase is unique; from an experimental point of view, however, the question is not relevant, since the system is never in equilibrium. A similar conclusion has been suggested by ‘second noise spectrum’ experiments [@Weissman], and by recent out-of-equilibrium numerical simulations [@Marinari]. The construction of a consistent ‘hierarchical droplet’ picture appears completely open and beyond the scope of the present paper. A possibility (discussed in [@jpbdean; @mfnonequ]) is that correlations at a given temperature establish themselves progressively, but only over non-compact (i.e. fractal) clusters of spins, which are large enough to be frozen at that temperature, but leave the surrounding sea of spins relatively free. As the temperature is lowered, smaller clusters begin to freeze (and thus provide a new ageing signal), while larger clusters are completely blocked (thus leading to the memory effect). Let us note that the idea of “droplets within droplets” has previously been discussed in [@Villain; @fractclust] without (to our knowledge) getting to the stage of a more quantitative model. The present experiments, which suggest the coexistence of many different time scales (and thus, presumably, many length scales), may force one to take the idea of fractal droplets more seriously. [*Acknowledgements*]{} Financial support from the Swedish Natural Science Research Council (NFR) is acknowledged; one of us (KJ) wants to thank the Swedish Institute (SI) for a fellowship. We are grateful to F. Alberici, L.F.
Cugliandolo, P. Doussineau, A. Levelut, M. Mézard, M. Ocio, G. Parisi for enlightening discussions, and to L. Le Pape for his technical support. [99]{} “Spin Glasses and Random Fields”, [*Series on Directions in Condensed Matter Physics*]{} Vol.12, A.P. Young Editor, World Scient. 1998. E. Vincent, J. Hammann and M. Ocio in [*Recent Progress in Random Magnets*]{}, ed. D.H. Ryan (World Scientific, Singapore, 1992) [*(cond-mat/9607224)*]{}. L.C.E. Struik, “Physical Aging in Amorphous Polymers and Other Materials”, Elsevier Scient. Pub. Co., Amsterdam 1978. J.-P. Bouchaud, L.F. Cugliandolo, J. Kurchan and M. Mézard, in [*op. cit.*]{} [@Young] pp.161-223, and refs. therein. See e.g. references in E. Vincent and J. Hammann, [*J. Phys. C: Solid State Phys.*]{} [**20**]{}, 2659 (1987). P. Granberg, L. Lundgren and P. Nordblad, [*J. Magn. Magn. Mater.*]{} [**92**]{}, 228 (1990); P. Granberg, L. Sandlund, P. Nordblad, P. Svedlindh and L. Lundgren, [*Phys. Rev. B*]{} [**38**]{}, 7097 (1988). Ph. Refregier, E. Vincent, J. Hammann and M. Ocio, [*J. Phys. (France)*]{} [**48**]{}, 1533 (1987); E. Vincent, J.P. Bouchaud, J. Hammann and F. Lefloch, [*Phil. Mag. B*]{} [**71**]{}, 489 (1995). M. Mézard, G. Parisi and M.A. Virasoro, “Spin Glass Theory and Beyond”, [*World Scient. Lect. Notes in Phys.*]{} Vol. [**9**]{} (1987). A.J. Bray and M.A. Moore, [*Phys. Rev. Lett.*]{} [**58**]{}, 57 (1987). D.S. Fisher and D.A. Huse, [*Phys. Rev. Lett.*]{} [**56**]{}, 1601 (1986). P. Nordblad, unpublished; P. Nordblad and P. Svedlindh, in [*op. cit.*]{} [@Young] pp.1-27. M. Alba, J. Hammann, M. Ocio and Ph. Refregier, [*J. Appl. Phys.*]{} [**61**]{}, 3683 (1987). Note that if $T_2$ is very close to $T_1$, ageing at $T_2$ slowly wipes out the memory of the dip at $T_1$. On the other hand, both dips are erased by heating above $T_1$. F. Alberici, P. Doussineau and A. Levelut, [*Europhys. Lett.*]{} [**39**]{}, 329 (1997); F. Alberici-Kious, J.P. Bouchaud, L.F. Cugliandolo, P. Doussineau and A.
Levelut, [*Aging in K$_{1-x}$Li$_x$TaO$_3$: a domain growth interpretation*]{}, cond-mat/9805208. K.H. Hoffmann and P. Sibani, [*Z. Phys. B Cond. Matt.*]{} [**80**]{}, 429 (1990); K.H. Hoffmann, S. Schubert and P. Sibani, [*Europhys. Lett.*]{} [**38**]{}, 613 (1997). J.-P. Bouchaud and D.S. Dean, [*J. Phys. I France*]{} [**5**]{}, 265 (1995). Y.G. Joh, R. Orbach and J. Hammann, [*Phys. Rev. Lett.*]{} [**77**]{}, 4648 (1996). J. Hammann, M. Lederman, M. Ocio, R. Orbach and E. Vincent, [*Physica A*]{} [**185**]{}, 278 (1992). D.S. Fisher and D.A. Huse, [*Phys. Rev. B*]{} [**38**]{}, 373 and 386 (1988). G.J.M. Koper and H.J. Hilhorst, [*J. Phys. France*]{} [**49**]{}, 429 (1988). M.B. Weissman, N.E. Israeloff and G.B. Alers, [*J. Magn. Magn. Mater.*]{} [**114**]{}, 87 (1992). E. Marinari, G. Parisi, J.J. Ruiz-Lorenzo, F. Ritort, [*Phys. Rev. Lett.*]{} [**76**]{}, 843 (1996); E. Marinari, G. Parisi, J.J. Ruiz-Lorenzo, in [*op. cit.*]{} [@Young] pp. 59-98; E. Marinari, G. Parisi, J.J. Ruiz-Lorenzo, [*On the Phase Structure of the 3D Edwards Anderson spin glass*]{}, cond-mat/9802211. J. Villain, [*J. Physique France*]{} [**46**]{}, 1843 (1985); for related ideas, see M. Feigel’man, L. Ioffe, [*Z. Phys. B*]{} [**51**]{}, 237 (1983) and M. Gabay, T. Garel, [*J. Physique*]{} [**46**]{}, 5 (1985). M. Ocio, J. Hammann and E. Vincent, [*J. Magn. Magn. Mater.*]{} [**90-91**]{}, 329 (1990).
--- abstract: 'Let $S$ be a 2-group. The rank of $S$ is the maximal dimension of an elementary abelian subgroup of $S$ over $\Bbb Z_2$. The purpose of this article is to determine the rank of $S$, where $S$ is a Sylow 2-subgroup of the classical simple groups of odd characteristic.' author: - Mong Lung Lang title: | $\mbox { Ranks of the Sylow 2-Subgroups of the Classical Simple Groups }$ --- [ 0. Introduction]{} A 2-group is called [*realisable*]{} if it is isomorphic to a Sylow 2-subgroup of a finite simple group. It is well known that very few 2-groups are realisable (see \[HL\], \[M\]). In order to understand such realisable 2-groups, one would like to find a set of invariants of 2-groups which enables one to differentiate $\Omega_1$ (the set of realisable 2-groups) from $\Omega_2$ (the set of non-realisable 2-groups). This article studies the 2-rank (an obvious choice of invariant of a 2-group) of the classical simple groups of odd characteristic. The results are tabulated in the following table. Note that the 2-ranks of $PSL_{2n+1}(q)$ and $PSU_{2n+1}(q)$ are the same as those of $GL_{2n}(q)$ and $U_{2n}(q)$ respectively, and the 2-ranks of $GL_{2n}(q)$ and $U_{2n}(q)$ have been determined in \[L\]. Note also that the 2-ranks of the classical groups of small rank have been determined (Theorem 4.10.5 of \[GLS3\]). Projective Symplectic Groups ============================ Let $X$ be a 2-group. The wreath product of $X$ and $\Bbb Z_2$ is given as follows: $$w(X) = w_1(X) = \left < \left (\begin{array} {cc} X&0\\ 0&1\\ \end{array}\right ), \left (\begin{array} {cc} 1&0\\ 0&X\\ \end{array}\right ) , J= \left (\begin{array} {cc} 0&1\\ 1&0\\ \end{array}\right ) \right >.$$ Let $z_0\in Z(X)-\{1\}$. For our convenience, we use the following notation: [$$\overline X = X/\left <z_0\right >, Z = \left <\mbox{ diag}\,(z_0,z_0)\right >, \mbox{ diag}\,(X, X) = B, \overline B= B/Z, \overline {w(X)} = w(X)/Z.
\eqno(1.1)$$]{} Note that $$B = \left< \left (\begin{array} {cc} t &0\\ 0&1\\ \end{array}\right )\,:\, t \in X\right > \rtimes \left< \left (\begin{array} {cc} t&0\\ 0&t\\ \end{array}\right ) \,:\, t \in X \right > = B_1 \rtimes B_2. \eqno (1.2)$$ $$\overline B =\overline B_1\rtimes \overline B_2 \cong B_1\rtimes \overline B_2 \cong X\rtimes \overline X \,,\,\, \overline {w(X)} =\overline B\rtimes \overline {\left<J\right > } \cong \overline B \rtimes \left < J\right >. \eqno (1.3)$$ [**Remark 1.0.**]{} [**One should take note that $\overline X =X/\left <z_0\right >$, while the others, such as $\overline B_1, \overline B_2,\cdots $, are defined as $B_1/Z$, $B_2/Z, \cdots $. To be more precise, let $K$ be a group whose members are $r\times r$ matrices over $X$ and let $z_0 \in Z(X)$ be fixed; the bar notation $\overline K$ is then defined to be $KZ/Z$, where $Z =\left <diag\,(z_0,z_0, \cdots , z_0)\right >$ $(r$ of them$)$.** ]{} [**Lemma 1.1.**]{} [*Let $X$ be as above. Then $r_2(\overline {w(X)} )\ge r_2(\overline X) +2.$ Let $\overline E$ be elementary abelian. Suppose that $\overline E$ is not a subgroup of $\overline B$. Then $r_2(\overline E) \le r_2(\overline X) +2$.* ]{} [*Proof.*]{} Let $ V_0 = \left< \left (\begin{array} {cc} z_0&0\\ 0&1\\ \end{array}\right ), \left (\begin{array} {cc} t&0\\ 0&t\\ \end{array}\right ), \left (\begin{array} {cc} 0&1\\ 1&0\\ \end{array}\right ) \,:\, t \in X \right > \subseteq w(X).$ It is clear that $r_2(\overline {w(X)}) \ge r_2(\overline V_0) = 2+r_2(\overline X)$. This completes the proof of the first part of our lemma. Since $\overline E$ is not a subgroup of $\overline B$, one has $\overline E = (\overline E\cap \overline B )\times \overline R$, where $\overline R$ is not the identity group. Applying Lemma 2.3 of \[L\], $\overline R$ is of dimension 1. Hence $$r_2(\overline E) = r_2(\overline E \cap \overline B) +1.
\eqno(1.4)$$ Further, $ E$ (the preimage of $\overline E$ in $w(X)$) possesses an element of the form $ \sigma =\left (\begin{array} {cc} 0&x_0\\ y_0&0\\ \end{array}\right )\in E -B $ and $\overline E = (\overline E\cap \overline B)\times \left <\overline \sigma \right >$. Let $\tau=\left (\begin{array} {cc} x& 0\\ 0&y\\ \end{array}\right )\in E \cap B.$ Since $\overline E$ is abelian, $[\sigma , \tau] \in Z.$ It follows that $x= x_0yx_0^{-1} \mbox{ modulo } \left <z_0\right >.$ Hence $x \in X$ is uniquely determined by $y\in X $ modulo $\left < z_0\right > $. As a consequence, the dimension of $ \overline E \cap \overline B$ is no more than $r_2(\overline X) +1$ (the extra 1 comes from the fact that $x$ determines $y$ only modulo $ \left <z_0\right > $, a group of rank 1). Equivalently, $r_2( \overline E \cap \overline B) \le r_2(\overline {X}) +1. $ The inequality (1.4) now becomes $r_2(\overline E) \le 2 +r_2(\overline X)$. [**Lemma 1.2.**]{} [ *Let $Q$ be generalised quaternion and let $\overline {w(Q)} = w(Q)/\left <diag\,(-1,-1)\right >.$ Then $r_2(\overline {w(Q)}) = 4$. Let $\overline E$ be elementary abelian of dimension $4$. Then $\overline E$ possesses an element of the form $ \overline {\left (\begin{array} {cc} 0&x\\ y&0\\ \end{array}\right ) },$ where $x,y \in Q, xy=yx = \pm 1 .$* ]{} [*Proof.*]{} Let $X=Q$, $z_0 = -1$. Applying Lemma 1.1, equation (1.3) and Lemma 2.3 of \[L\], we have $$4 = 2+r_2(\overline X) \le r_2(\overline {w(Q)} ) =r_2(\overline B\rtimes \left<J\right >) \le r_2(\overline B) +1 \le r_2( X) +r_2(\overline X)+1=4.$$ Since $r_2(\overline B) \le r_2(X) + r_2(\overline X) = 1+ 2=3$, $\overline E $ is not a subgroup of $\overline B$. Hence $\overline E \cap (\overline {w(Q)} - \overline B) \ne \emptyset$, so $\overline E$ must possess an element of the form $ \overline {\left (\begin{array} {cc} 0&x\\ y&0\\ \end{array}\right ) },$ where $x , y\in Q, xy=yx=\pm 1 .$ This completes the proof of the lemma.
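Lemma 1.2 lends itself to a direct machine check in the smallest case $Q = Q_8$. The following Python sketch (our own illustration, not part of the paper's argument) models $w(Q_8)$ as triples $(a,b,\mathrm{swap})$, forms the quotient by $\left <\mathrm{diag}\,(-1,-1)\right >$, and brute-forces the 2-rank by searching for maximal sets of pairwise commuting involutions:

```python
# Sanity check of Lemma 1.2 for Q = Q8: compute r_2( w(Q8) / <diag(-1,-1)> ).
# Q8 element = (sign, unit) with unit 0='1', 1='i', 2='j', 3='k'.
UMUL = {(0, 0): (1, 0), (0, 1): (1, 1), (0, 2): (1, 2), (0, 3): (1, 3),
        (1, 0): (1, 1), (1, 1): (-1, 0), (1, 2): (1, 3), (1, 3): (-1, 2),
        (2, 0): (1, 2), (2, 1): (-1, 3), (2, 2): (-1, 0), (2, 3): (1, 1),
        (3, 0): (1, 3), (3, 1): (1, 2), (3, 2): (-1, 1), (3, 3): (-1, 0)}

def qmul(a, b):
    s, u = UMUL[(a[1], b[1])]
    return (a[0] * b[0] * s, u)

Q8 = [(s, u) for s in (1, -1) for u in range(4)]

# w(Q8) as triples (a, b, swap): diag(a, b) if swap == 0, antidiag(a, b) if swap == 1.
def wmul(g, h):
    a1, b1, s1 = g
    a2, b2, s2 = h
    if s1 == 0:
        return (qmul(a1, a2), qmul(b1, b2), s2)
    return (qmul(a1, b2), qmul(b1, a2), 1 - s2)

def canon(g):  # representative of the coset modulo Z = <diag(-1,-1)>
    a, b, s = g
    return min(g, ((-a[0], a[1]), (-b[0], b[1]), s))

def cmul(g, h):
    return canon(wmul(g, h))

cosets = {canon((a, b, s)) for a in Q8 for b in Q8 for s in (0, 1)}
e = canon(((1, 0), (1, 0), 0))
invs = [g for g in cosets if g != e and cmul(g, g) == e]
adj = {g: {h for h in invs if h != g and cmul(g, h) == cmul(h, g)} for g in invs}

# A maximal set of pairwise commuting involutions, together with the identity,
# is an elementary abelian subgroup, so a maximum clique of size 2^r - 1 in the
# commuting graph witnesses 2-rank r.
best = 0
def bron_kerbosch(r, p, x):
    global best
    if not p and not x:
        best = max(best, len(r))
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v])
        p.discard(v)
        x.add(v)

bron_kerbosch(set(), set(invs), set())
rank = (best + 1).bit_length() - 1
```

The quotient has 64 cosets, the largest commuting set has $15 = 2^4-1$ involutions, and `rank` comes out as 4, in agreement with the lemma.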
[**Lemma 1.3.**]{} *Let $Q$ be generalised quaternion and let $\overline {w_2(Q)} = w_2(Q)/\left <diag\,(-1,-1,-1,-1)\right >.$ Then $r_2(\overline {w_2(Q)}) = 6$, $r_2(w(Q)\rtimes \overline {w(Q)})=5.$ In particular, if $\overline E$ is elementary abelian of dimension $6$, then $\overline E$ possesses an element of the form $ \overline {\left (\begin{array} {cc} 0&x\\ y&0\\ \end{array}\right ) },$ where $x,y \in w(Q), xy=yx = \pm 1 .$* [*Proof.*]{} Let $X= w(Q)$, $z_0 =$ diag$\,(-1,-1)$. Then $w_2(Q) = w(X)$. By Lemmas 1.1 and 1.2, $r_2(\overline {w_2(Q)}) \ge 2+r_2(\overline X) = 6$. Direct calculation shows that the rank of $ \overline B \cong X\rtimes \overline X \cong w(Q)\rtimes \overline {w(Q)}$ is 5. Let $\overline E$ be an elementary abelian subgroup of $\overline {w(X)}$ of dimension $r_2(\overline{w(X)}) \ge 6$. It follows from the above that $\overline E$ is not a subgroup of $w(Q)\rtimes \overline {w(Q)}$. This implies that $\overline E \cap ( \overline {w_2(Q)} -\overline B) \ne \emptyset$. Hence $\overline E$ must possess an element of the form $ \overline {\left (\begin{array} {cc} 0&x\\ y&0\\ \end{array}\right ) },$ where $x , y\in w(Q), xy=yx=\pm 1 .$ By Lemmas 1.1 and 1.2, $$r_2(\overline E) \le 2 +r_2(\overline X) = 2+4= 6 .\eqno(1.5)$$ This completes the proof of the lemma. [**Lemma 1.4.**]{} [*Let $Q$ be generalised quaternion and let $\overline {w_n(Q)} = w_n(Q)/Z$, where $z_0 = diag\,(-1, \cdots,-1)\,\,(2^{n-1}\,\,of\,\,them)$. Then $r_2(\overline {w_n(Q)}) \ge 2^n+1$.*]{} [*Proof.*]{} Let $\left <i,j\right >$ be the quaternion subgroup of order 8 of $Q$. Let $E_m$ be the matrix obtained from $I_{2^n}$ by replacing the $(m,m)$-entry by $-1$ and let $$M = \left < E_m, \mbox{diag}\,(i,i,\cdots , i), \mbox{diag}\,(j,j,\cdots , j), m = 1,2, \cdots, 2^n \right > \subseteq w_n(Q).$$ Then $\overline M$ is elementary abelian of dimension $2^n+1$: the generators square into $Z$ and commute modulo $Z$, and the only relation modulo $Z$ is $E_1E_2\cdots E_{2^n} = -I_{2^n}$. This completes the proof of the lemma.
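The lower bound $r_2(\overline {w_2(Q)}) \ge 6$ of Lemma 1.3 can likewise be exhibited concretely for $Q = Q_8$. The sketch below (again our own illustration, with generators chosen along the lines of the $V_0$ construction in Lemma 1.1) builds six commuting involutions of $\overline {w_2(Q_8)}$ and checks that they generate a subgroup of order $2^6$:

```python
# Explicit rank-6 witness in w_2(Q8) / <diag(-1,-1,-1,-1)> (lower bound of Lemma 1.3).
UMUL = {(0, 0): (1, 0), (0, 1): (1, 1), (0, 2): (1, 2), (0, 3): (1, 3),
        (1, 0): (1, 1), (1, 1): (-1, 0), (1, 2): (1, 3), (1, 3): (-1, 2),
        (2, 0): (1, 2), (2, 1): (-1, 3), (2, 2): (-1, 0), (2, 3): (1, 1),
        (3, 0): (1, 3), (3, 1): (1, 2), (3, 2): (-1, 1), (3, 3): (-1, 0)}

def qmul(a, b):
    s, u = UMUL[(a[1], b[1])]
    return (a[0] * b[0] * s, u)

def wreath(mul):
    """Multiplication in w(X), elements encoded as triples (a, b, swap)."""
    def wm(g, h):
        a1, b1, s1 = g
        a2, b2, s2 = h
        if s1 == 0:
            return (mul(a1, a2), mul(b1, b2), s2)
        return (mul(a1, b2), mul(b1, a2), 1 - s2)
    return wm

w1mul = wreath(qmul)   # w(Q8)
w2mul = wreath(w1mul)  # w_2(Q8)

one, neg = (1, 0), (-1, 0)
qi, qj = (1, 1), (1, 2)
I2 = (one, one, 0)     # identity of w(Q8)
J2 = (one, one, 1)     # the 2x2 swap
z2 = (neg, neg, 0)     # diag(-1,-1) in w(Q8)
z4 = (z2, z2, 0)       # diag(-1,-1,-1,-1), generates the centre Z

gens = [
    (z2, I2, 0),                        # diag(-1,-1, 1, 1)
    ((neg, one, 0), (neg, one, 0), 0),  # diag(-1, 1,-1, 1)
    ((qi, qi, 0), (qi, qi, 0), 0),      # diag( i, i, i, i)
    ((qj, qj, 0), (qj, qj, 0), 0),      # diag( j, j, j, j)
    (J2, J2, 0),                        # swap inside each 2x2 block
    (I2, I2, 1),                        # swap the two 2x2 blocks
]

def canon(g):  # coset representative modulo Z = <z4>
    return min(g, w2mul(z4, g))

def cmul(g, h):
    return canon(w2mul(g, h))

e = canon((I2, I2, 0))
gens = [canon(g) for g in gens]

# closure of the generators in the quotient
group = {e}
todo = [e]
while todo:
    g = todo.pop()
    for h in gens:
        k = cmul(g, h)
        if k not in group:
            group.add(k)
            todo.append(k)
```

The six generators square into $Z$ and commute modulo $Z$, and the closure has $64 = 2^6$ elements. The generic `wreath` helper iterates the construction, so the same code extends to $w_n(Q_8)$ for small $n$.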
[**Lemma 1.5.**]{} [*Let $Q$ be generalised quaternion and let $\overline {w_3(Q)} = w_3(Q)/Z$, where $z_0 = diag\,(-1, -1,-1,-1)$. Then $r_2(\overline {w_3(Q)}) = 9$. Let $\overline E$ be elementary abelian of dimension $9$. Then $\overline E \subseteq diag\,\overline {(w_2(Q), w_2(Q))}$.* ]{} [*Proof.*]{} Let $X = w_{2}(Q)$. Let $\overline E$ be elementary abelian of dimension $r_2(\overline {w_3(Q)})$. Suppose that $\overline E$ is not a subgroup of $\overline B$ (see (1.2) for notation). By Lemmas 1.1 and 1.3, $r_2(\overline E) \le r_2(\overline X) +2 = 8$. A contradiction (see Lemma 1.4). Hence $\overline E \subseteq \overline B = \mbox{diag}\,\overline {(w_2(Q), w_2(Q)) }$. It follows that $$r_2(\overline E) \le r_2(\overline B) =r_2( X\rtimes \overline X) \le r_2( X) + r_2( \overline X) = 4+6=10.$$ Suppose that $r_2(\overline E) = 10$. Let $\overline E = \overline V \times \overline W$, where $\overline V = \overline E \cap \overline B_1$ (see (1.2) for notation). It follows that $\overline V$ is of dimension $4 = r_2(X)$ and that $\overline {W}$ is of dimension $6 = r_2(\overline X)$. Since $\overline B_1 \cong B_1\cong w_2(Q)$ and elementary abelian subgroups of dimension 4 of $w_2(Q)$ are subgroups of $w(Q)\times w(Q)$ (Lemma 3.2 of \[L\]), we have $$\overline V = \overline {\mbox{diag}\,(X_1, X_2, I_4)} \cong\mbox{diag}\,(X_1, X_2, I_4) ,$$ where $X_i$ are elementary abelian subgroups of dimension 2 of $w(Q)$. An easy observation of the structure of $ w(Q)$ shows that $X_i$ is either $$\left < \left (\begin{array} {cc} -1&0\\ 0&1\\ \end{array}\right ), \left (\begin{array} {cc} 1&0\\ 0&-1\\ \end{array}\right )\right > \mbox{ or } \left < \left (\begin{array} {cc} -1&0\\ 0&-1\\ \end{array}\right ), \left (\begin{array} {cc} 0& g\\ g^{-1}&0\\ \end{array}\right )\right >.\eqno(1.6)$$ Further, since $r_2(\overline W) =6$, we have $r_2(\overline W|\overline B_2) = 6$ (Lemma 2.3 of \[L\]). Note that $\overline B_2 \cong\overline {w_2(Q)} $. 
Applying Lemma 1.3, $\overline W|\overline B_2$ possesses an element of the form $$\overline { \left (\begin{array} {cccc} 0& u&0&0\\ v& 0&0&0\\ 0& 0&0&u\\ 0& 0&v&0\\ \end{array}\right )}\mbox{ where } u,v\in w(Q), uv=vu = \pm I_2 .\eqno(1.7)$$ Since $r_2(\overline W|\overline B_2) = 6 = r_2 (\overline B_2) $, $\overline W|\overline B_2$ must contain all the involutions of the centre of $\overline B_2$. Hence $$\overline { \left (\begin{array} {cccc} -I_2& 0&0&0\\ 0& I_2&0&0\\ 0& 0&-I_2&0\\ 0& 0&0&I_2\\ \end{array}\right )} \in \overline W|\overline B_2.\eqno(1.8)$$ It follows that $$\overline { \left (\begin{array} {ccc} x &0&0\\ 0&0&u\\ 0&v&0\\ \end{array}\right )}\,\,, \,\,\,\overline { \left (\begin{array} {ccc} w&0&0\\ 0&-I_2&0\\ 0&0&I_2\\ \end{array}\right )} \in \overline W, \mbox{ where } x, w\in w_2(Q).\eqno(1.9)$$ Since the above elements are of order 2 in $\overline E$, we have $$x^2 = I_4, uv=vu=I_2, w^2 = I_4, \mbox{ or } x^2 = -I_4, uv=vu=-I_2, w^2 = I_4.\eqno (1.10A)$$ Since the elements in (1.9) commute with each other in $\overline E$, we have $$xwx^{-1}= -w \,\,\,(\mbox{not merely } \pm w).\eqno (1.10B)$$ Note that $[\overline V, \overline W]= \overline 1$. It follows that $x, w \in C_{\overline B_1}(\overline V) \cong C_{ B_1}(\mbox{diag}\,(X_1,X_2, I_4))$. An easy observation of the centraliser of $ \mbox{diag}\,(X_1,X_2, I_4) $ in $\overline B_1 \cong B_1 \cong w_2(Q)$ shows that $C_{\overline B_1}( \overline V ) \cong C_{ B_1}(\mbox{diag}\,(X_1,X_2, I_4))$ does not possess elements $x,w$ satisfying $$w^2 = I_4, x^2 = \pm I_4, xwx^{-1}= -w.$$ A contradiction (see remark). Hence $r_2(\overline E) \le 9.$ It follows by Lemma 1.4 that $r_2(\overline E) =9$. [**Remark.**]{} The centraliser of [$\left < \left (\begin{array} {cc} -1&0\\ 0&1\\ \end{array}\right ), \left (\begin{array} {cc} 1&0\\ 0&-1\\ \end{array}\right )\right >$ ]{} in $w(Q)$ is $Q\times Q$.
Further, $$C_{w(Q)}\left( \left < \left (\begin{array} {cc} -1&0\\ 0&-1\\ \end{array}\right ), \left (\begin{array} {cc} 0& g\\ g^{-1}&0\\ \end{array}\right )\right >\right ) = \left < \left (\begin{array} {cc} a&0\\ 0& g^{-1} a g \\ \end{array}\right ), \left (\begin{array} {cc} 0& g\\ g^{-1}&0\\ \end{array}\right )\right >.$$ Direct calculation shows that members of $w_2(Q) - w(Q) \times w(Q)$, which take the form $ \left (\begin{array} {cc} 0&a_{12}\\ a_{21}&0\\ \end{array}\right )$, do not centralise diag$\,(X_1,X_2)$. It follows that $$C_{w_2(Q)}(X_1\times X_2) \cong C_{w(Q)}(X_1)\times C_{w(Q)}(X_2) .\eqno(1.11)$$ This enables us to determine the centraliser $ C_{\overline B_1}(\overline V)\cong C_{B_1}(\mbox{diag}\,(X_1,X_2, I_4))\cong C_{w_2(Q)}(\mbox{diag}\,(X_1,X_2))$. As a consequence, if $x,w \in C_{\overline B_1}(\overline V)$ satisfy the relation $w^2 = I_4, x^2 = \pm I_4, xwx^{-1}=\pm w$, then in fact $xwx^{-1}= w$. Note that (1.11) is a special case of the following: Let $X$ be a $2$-group and let $E_i$ $(1\le i \le 2^n)$ be subgroups of $X$. Then $$C_{w_n(X)}(\mbox{diag}\, (E_1, \cdots , E_i,\cdots, E_{2^n} )) =\mbox{ diag}\,(\cdots, C_X(E_i), \cdots).\eqno(1.12)$$ [**Lemma 1.6.**]{} [ *Suppose $n\ge 3$. Then $r_2(\overline {w_n(Q)}) = 2^n+1$. Further, elementary abelian subgroups $\overline E$ of dimension $r_2(\overline {w_n(Q)})$ are subgroups of $$\mbox{diag}\,\overline {(w_{n-1}(Q), w_{n-1}(Q))}.$$*]{} [*Proof.*]{} In the case $n=3$, by Lemma 1.5, we have $r_2(\overline {w_3(Q)}) = 2^3+1$ and $\overline E \subseteq \mbox{diag}\,\overline {(w_2(Q), w_2(Q))}$. Suppose our lemma holds for $n$. In the $n+1$ case, let $X = w_n(Q)$ and let $\overline E$ be elementary abelian of dimension $r_2(\overline {w_{n+1}(Q)})$. By Lemma 1.4, $r_2(\overline E)\ge 2^{n+1} +1$. If $\overline E$ is not a subgroup of $\overline B$ (see (1.2) for notation), then $r_2(\overline E)\le r_2(\overline X) +2$ (Lemma 1.1).
It follows that $$r_2(\overline X)+2 \ge r_2(\overline E) \ge 2^{n+1}+1.$$ By inductive hypothesis, $ r_2(\overline X) = 2^n+1$. Hence $2^n+3 \ge 2^{n+1}+1.$ A contradiction. Hence $\overline E$ is a subgroup of $\overline B = \mbox{diag}\,\overline {(w_n(Q), w_n(Q))}$. Applying (1.2) and Lemma 2.3 of \[L\], $$r_2(\overline E) = r_2(\overline B) \le r_2(X) + r_2(\overline X) = 2^n+2^n+1 = 2^{n+1} +1 .$$ Hence $r_2(\overline E) \le r_2(\overline B) \le 2^{n+1} +1$. By Lemma 1.4, we have $r_2(\overline E) = 2^{n+1} +1$. Let $n = 2^{m_1} + 2^{m_2} +\cdots + 2^{m_u}$ ($m_1 <m_2 <\cdots) $ be the 2-adic representation of $n$. Then $$S =\,\mbox{diag}\,(w_{m_1}(Q), w_{m_2}(Q),\cdots , w_{m_u}(Q))$$ is isomorphic to a Sylow 2-subgroup of $Sp_{2n}(q)$ ($Q$ is generalised quaternion). Let $Z = \left <\mbox{diag}\,(-1, -1, \cdots , -1)\right >$ ($n$ of them). Then $\overline S = S/Z$ is isomorphic to a Sylow 2-subgroup of $PSp_{2n}(q)$ (Theorem 6 of Wong \[W\]). Similar to (1.1)-(1.3), we have $$\overline S\cong ( w_{m_2}(Q) \times w_{m_3}(Q) \times \cdots \times w_{m_u}(Q)) \rtimes \overline {w_{m_1}(Q) }. \eqno(1.13)$$ To be more precise, one has $$\overline S = \overline B_1 \rtimes \overline B_2, \eqno(1.14)$$ where $B_2 =\{ \mbox{diag}\,(t,t,\cdots, t)\,\,(n/2^{m_1} \mbox{ of them})\,:\, t\in w_{m_1}(Q)\} \cong w_{m_1}(Q)$ and $B_1 = $ $\mbox{diag}\,{( w_{m_2}(Q) , \cdots , w_{m_u}(Q), I_{2^{m_1}} )}.$ Note that $$\overline B_1 =\mbox{diag}\,\overline{( w_{m_2}(Q) , \cdots , w_{m_u}(Q) , I_{2^{m_1}} )}\cong \mbox{diag}\,{( w_{m_2}(Q) , \cdots , w_{m_u}(Q), I_{2^{m_1}} )}.$$ Applying the proof of Lemma 1.4, we have $$r_2(\overline S) \ge n+1.\eqno(1.15)$$ Applying Proposition 3.4 of \[L\] and Lemma 1.6, we have 1. if $m_1=0$, then $r_2(\overline S) \le \sum_{i=2}^{u} r_2(w_{m_i}(Q))+r_2(\overline Q) =n+1$, 2. if $m_1=1$, then $r_2(\overline S) \le \sum_{i=2}^{u} r_2(w_{m_i}(Q)) +r_2(\overline {w(Q)}) =n+2$, 3.
if $m_1=2$, then $r_2(\overline S) \le \sum_{i=2}^{u} r_2(w_{m_i}(Q)) +r_2(\overline {w_2(Q)}) =n+2$, 4. if $m_1 \ge 3$, then $r_2(\overline S) \le \sum_{i=1}^{u} r_2(w_{m_i}(Q)) = n+1.$ \(i) By (1.15), we have $r_2(\overline S) = n+1$. \(ii) The case $n=2$ is covered by Lemma 1.2. We shall assume that $n>2$. We have $B_2 \cong w(Q)$ (see (1.14) for notation). By Lemma 1.2, $r_2(\overline B_2) =4$. Suppose that $r_2(\overline S)=n+2 $. Let $\overline E$ be elementary abelian of dimension $n+2$. Let $\overline E = \overline V \times \overline W$, where $\overline V = \overline E\cap \overline B_1$. The equality $r_2(\overline S) = r_2(\overline E) = \sum_{i=2}^{u} r_2(w_{m_i}(Q)) +r_2(\overline {w(Q)}) =n+2$ implies that $ V\cong \overline V $ ($V$ is the preimage of $\overline V$) is of dimension $n-2$ and $\overline W$ is of dimension 4. Since $r_2(w(Q)) =2$, applying Lemma 3.2 of \[L\], we have $$V = \mbox{diag}\, (X_1, X_2, \cdots , X_{n/2-1}, I_2),\eqno (1.16)$$ where $X_i \subseteq w(Q)$, $r_2(X_i) = 2$. Applying Lemma 2.3 of \[L\], $r_2(\overline W) = r_2(\overline W|\overline B_2) = 4$. As a consequence, $ D= \left < \mbox{diag}\, (X_1, I_2, \cdots , I_2, I_2), \overline W|\overline B_2 \,\right >$ is elementary abelian of dimension 6. Note that $D$ is isomorphic to an elementary abelian subgroup of $w(Q)\rtimes \overline {w(Q)}$. This is a contradiction (see Lemma 1.3). Hence $r_2(\overline S) = n+1$. \(iii) The case $n=4$ is covered by Lemma 1.3. We shall assume that $n>4$. We have $B_2 \cong w_2(Q)$. By Lemma 1.3, $r_2(\overline B_2) =6$. Suppose that $r_2(\overline S)=n+2 $. Let $\overline E$ be elementary abelian of dimension $n+2$. Let $\overline E = \overline V \times \overline W$, where $\overline V = \overline E\cap \overline B_1$. The equality $r_2(\overline S) = r_2(\overline E) = \sum_{i=2}^{u} r_2(w_{m_i}(Q)) +r_2(\overline {w_2(Q)}) =n+2$ implies that $ V\cong \overline V $ is of dimension $n-4$ and $\overline W$ is of dimension 6.
Since $r_2(w(Q)) =2$, applying Lemma 3.2 of \[L\], we have $$V = \mbox{diag}\, (X_1, X_2, \cdots , X_{n/2-2}, I_4),\eqno (1.17)$$ where $X_i \subseteq w(Q)$, $r_2(X_i) = 2$. Applying Lemma 2.3 of \[L\], $r_2(\overline W) = r_2(\overline W|\overline B_2) = 6$. As a consequence, $ D= \left < \mbox{diag}\, (X_1, X_2, I_4, \cdots , I_4, I_4), \overline W|\overline B_2 \,\right >$ is elementary abelian of dimension 10. Note that $ D$ is isomorphic to an elementary abelian subgroup of $w_2(Q)\rtimes \overline {w_2(Q)}$. This is a contradiction (see Lemma 1.5). Hence $r_2(\overline S)=n+1$. \(iv) By (1.15), we have $r_2(\overline S) = n+1$. In summary, we have the following result: [**Proposition 1.7.**]{} *Let $S$ be a Sylow $2$-subgroup of $PSp_{2n}(q)$. Then* 1. $r_2(S) = 4$ if $n=2$, $r_2(S) = 6$ if $n=4$, 2. $r_2(S) = n+1$ if $n\ne 2,4$. Projective Special Linear and Unitary Groups ============================================= Projective Special Linear and Unitary Groups I ---------------------------------------------- The main purpose of this section is to determine the 2-rank of $PSL_{2n}(q)$, where $q\equiv 3$ (mod 4), and the 2-rank of $PSU_{2n}(q)$, where $q\equiv 1$ (mod 4). [**Lemma 2.1.**]{} [*Let $K$ be a $2$-group and let $z_0\in Z(K)^{\times}, \overline K = K/\left <z_0\right >$. Suppose that $r_2(K) = r_2(\overline K) =2$. Then $r_2(\overline {w(K)}) = 4$.*]{} [*Proof.*]{} Let $X=K$. By Lemma 1.1, $r_2(\overline {w(X)}) \ge r_2(\overline X) + 2 \ge 4$. Let $\overline E$ be elementary abelian of dimension $r_2(\overline {w(X)})$. Suppose that $\overline E$ is not a subgroup of $\overline B$ (see (1.1)-(1.3) for notation). By Lemma 1.1, $r_2(\overline E) \le 4 $. Suppose that $\overline E \subseteq \overline B$. By Lemma 2.3 of \[L\], $$r_2(\overline E) =r_2(\overline B) =r_2(X\rtimes \overline X) \le 4.$$ This completes the proof of the lemma. [**Lemma 2.2.**]{} [*Let $K$ be a $2$-group.
$z_0\in Z(K)^{\times}, \overline {w_n(K)}= w_n(K)/\left <diag\,(z_0,\cdots ,z_0)\right >$ $(2^n\,\,of\,\,them)$. Suppose that $r_2(K) =2$. Then $r_2(\overline {w_n(K)}) \ge 2^{n+1}-1$.*]{} [*Proof.*]{} Since $ r_2(K) =2$, $r_2(w_n(K)) = 2^{n+1}$ (Proposition 3.4 of \[L\]). It follows that $r_2(\overline {w_n(K)}) \ge r_2({w_n(K)})-1 = 2^{n+1} -1$. [**Lemma 2.3.**]{} [ *Let $K$ be a $2$-group, $z_0\in Z(K)^{\times},$ $\overline {w_2(K)}=w_2(K)/\left <diag\,(z_0,z_0,z_0,z_0)\right >$. Suppose that $r_2(K) = r_2(\overline K) =2$, $r_2(\overline {K\times K}) = 3.$ Suppose further that $C_{w(K)}(D)$ possesses no $x$, $w$ such that $w^2 = I_2$, $x^2 = diag\,(t,t)$, $xwx^{-1} = diag\,(e,e) w$ $(t\in \left<z_0\right >,e\in \left<z_0\right > -\{1\} )$, where $D$ is any elementary abelian subgroup of dimension $4$ of $w(K)$. Then $r_2(\overline {w_2(K)}) = 7$.*]{} [*Proof.*]{} Let $\overline E$ be an elementary abelian subgroup of $\overline {w_2(K)}$ of dimension $r_2(\overline {w_2(K)})$ and let $X =w(K)$. Suppose that $\overline E$ is not a subgroup of $\overline B$ (see (1.1)-(1.3) for notation). Then $r_2(\overline E) \le r_2(\overline {X}) + 2 = 6$ (Lemmas 1.1, 2.1). This contradicts Lemma 2.2. Hence $\overline E$ is a subgroup of $\overline B$. By Proposition 3.4 of \[L\], $r_2(w(K))=4$. Hence $$r_2(\overline E) =r_2(\overline B) \le r_2(B_1) + r_2(\overline B_2) = r_2(X) + r_2(\overline X) = 8.$$ Suppose that $r_2(\overline E)=8$. Let $\overline E= (\overline E\cap \overline B_1) \times \overline W.$ It follows that $r_2(\overline E\cap \overline B_1) = r_2(\overline W) = 4.$ By Lemma 2.3 of \[L\], $r_2(\overline W|\overline B_2) = 4 >3 = r_2(\overline {K\times K})$. Hence $\overline W|\overline B_2$ possesses an element of the form $$\overline { \left (\begin{array} {cccc} 0& u&0&0\\ v& 0&0&0\\ 0& 0&0&u\\ 0& 0&v&0\\ \end{array}\right )},$$ where $ u,v\in K, uv=vu = t$ for some $t\in\left < z_0\right >$.
Further, since $r_2(\overline W|\overline B_2) =r_2(\overline B_2) = r_2(\overline {w(K)})$ (see Lemma 2.1), $\overline W|\overline B_2$ must contain the following involution of $Z(\overline B_2)$. $$\overline { \left (\begin{array} {cccc} e& 0&0&0\\ 0& 1&0&0\\ 0& 0&e&0\\ 0& 0&0&1\\ \end{array}\right )},$$ where $e\in\left < z_0\right >$ is of order 2. As a consequence, $\overline W$ must contain the following elements. $$\overline { \left (\begin{array} {ccc} x&0&0\\ 0&0&u\\ 0&v&0\\ \end{array}\right )}\,,\,\, \overline { \left (\begin{array} {ccc} w&0&0\\ 0&e&0\\ 0&0&1\\ \end{array}\right )},$$ where $x,w\in w(K)$, $x^2 = diag\,(t,t)$, $w^2 = I_2$. Further, $xwx^{-1} = diag\,(e,e) w$ ($\overline W$ is abelian). Since $[\overline E \cap B_1, \overline W] = \overline 1$, $x, w \in C_{\overline B_1}(\overline E\cap \overline B_1)$, where $\overline B_1 \cong w(K)$ and $r_2(\overline E\cap \overline B_1)=4.$ This contradicts our assumption ($\overline E\cap \overline B_1\cong \overline E \cap B_1$ is of dimension 4). Hence $r_2(\overline E) = r_2(\overline {w_2(K)}) \le 7$. By Lemma 2.2, we conclude that $r_2(\overline {w_2(K)}) = 7$. [**Lemma 2.4.**]{} [*Let $K$ be a $2$-group and $z_0\in Z(K)^{\times}$. Suppose that $r_2(K) = r_2(\overline K) =2$, $r_2(\overline {K\times K}) = 3.$ Suppose further that $C_{w(K)}(D)$ possesses no $x$, $w$ such that $w^2 = I_2$, $x^2 = diag\,(t,t)$, $xwx^{-1} = diag\,(e,e) w$ $(t\in \left<z_0\right >, e\in \left<z_0\right > -\{1\} )$, where $D$ is any elementary abelian subgroup of dimension $4$ of $w(K)$. If $n \ge 2$, then $r_2(\overline {w_n(K)}) = 2^{n+1}-1$.*]{} [*Proof.*]{} Our assertion holds for $n = 2$ (Lemma 2.3). Suppose that $r_2(\overline {w_{m} (K)}) = 2^{m+1}-1$. Let $\overline E$ be elementary abelian of maximal rank of $\overline {w_{m+1}(K)}$. Let $X = w_m(K)$. Then $w_{m+1}(K) = w(X)$. Suppose that $\overline E$ is not a subgroup of $\overline {B}$.
Then $r_2(\overline E) \le 2+ r_2(\overline X) = 2^{m+1} +1$ (Lemma 1.1). This contradicts Lemma 2.2. Hence $\overline E$ is a subgroup of $\overline {B}$. Applying Proposition 3.4 of \[L\] and our inductive hypothesis, we have $$r_2(\overline E)\le r_2(\overline {B}) \le r_2(B_1) + r_2(\overline B_2) = r_2(X) + r_2(\overline X) = 2^{m+1}+ 2^{m+1}-1 = 2^{m+2}-1.$$ By Lemma 2.2, $r_2(\overline E) = 2^{m+2}-1$. Let $T$, $R$, $S(T,R,J)$ and $W(TR, 1,J)$ be given as in sections 4.1 and 4.2 of \[L\]. [**Lemma 2.5.**]{} [ *Let $T\rtimes R$ be a semidirect product of $2$-groups. Suppose that $r_2(TR) >r_2(T)$ and that $\sum_{i=2}^u r_2(w_{m_i}(TR, 1,J)) + r_2(\overline {w_{m_1}(TR, 1,J)}) =r_2(\overline {W(TR,1, J)}).$ Then $r_2(\overline {W(TR,1, J)}) > r_2(\overline {S(T,R, J)})$.*]{} [*Proof.*]{} Recall first that $\overline { \mbox{diag}\,(w_{m_1}(TR,1,J)\,, w_{m_2}(TR,1,J)\,, \cdots \,, w_{m_u}(TR,1,J))} =\overline {W(TR,1,J)}$. It is easy to see that the above can be written as the following semidirect product: $$\mbox{diag}\,(w_{m_2}(TR,1,J), w_{m_3}(TR,1,J), \cdots , w_{m_u}(TR,1,J), I_{2^{m_1}}) \rtimes \overline D,$$ where $D = \left < \mbox{diag}\, (t,t,\cdots)\,:\, t \in w_{m_1}(TR, 1,J)\right >.$ Suppose that $r_2(\overline {W(TR,1, J)}) = r_2(\overline {S(T,R, J)}) =d$. Let $\overline E \subseteq\overline {S(T,R, J)} $ be elementary abelian of dimension $d$. It follows that $\overline E$ is of maximal dimension in $\overline {W(TR,1, J)}$. By the assumption of the lemma, we have $ r_2( \overline {\mbox{diag}\,(I_{2^{m_2}}, \cdots , w_{m_i}(TR,1,J), \cdots , I_{2^{m_u}}, I_{2^{m_1}})\cap E}) = r_2(w_{m_i}(TR,1,J)).$ Since $r_2(TR)>r_2(T)$, we have $r_2(TR) \ge 2$.
By Lemma 3.2 of \[L\], the intersection $\overline {\mbox{diag}\,( w_{m_2}(TR,1,J), I_{2^{m_3}}, \cdots , I_{2^{m_u}}, I_{2^{m_1}})\cap E}$ must take the following form: $$\overline {\mbox{diag}\,(\prod_{i=1}^{2^{m_2}} V_i,I_{2^{m_3}}, \cdots , I_{2^{m_u}}, I_{2^{m_1}})},\mbox{ where } V_i\subseteq TR,\, r_2(V_i) =r_2(TR)\mbox{ for all } i.$$ It follows that the intersection, as well as $\overline E$, must contain an element of the form $\overline { \mbox{diag}\,(tr, 1,1,\cdots, 1)}$ $(t\in T, r\in R)$. Since $r_2(TR)>r_2(T)$ and $r_2(TR) = r_2(V_i)$, we may assume that $r \ne 1$. This implies that $E$, as well as $S(T,R,J)$, contains an element diag$\,(x_1, x_2, \cdots )$ $(x_i = t_ir_i,\, t_i \in T,\, r_i\in R)$, where the number of $i$ such that $r_i\ne 1$ is odd. This is a contradiction (see remark of section 4.2 of \[L\]). Hence $r_2(\overline {W(TR,1, J)}) > r_2(\overline {S(T,R, J)})$. Let $n = 2^{m_1} + 2^{m_2} +\cdots + 2^{m_u}$ ($m_1 <m_2 <\cdots) $ be the 2-adic representation of $n$ and let $T$ be generalised quaternion of order $2^{t+1}$, where $ 2^{t+1}||(q^2-1)$, and let $R$ be given as in section 5.1 of \[L\]. Then the group $TR$ satisfies the assumption of Lemma 2.4. Further, $$S(T,R,J) = \mbox{diag}\,( w_{m_1}(T,R,J), w_{m_2}(T,R,J), \cdots , w_{m_u}(T,R,J)) \rtimes U(R)$$ is a Sylow 2-subgroup of $SL_{2n}(q)$, $ q\equiv 3$ (mod 4) (Theorem 4 of Wong \[W\]). Let $z_0\in T$ be the element of order $2$. Then $Z= \left <\mbox{diag}\,(z_0, z_0, \cdots ) \right > \subseteq S(T,R,J)$ is the centre of $S(T,R,J)$. Further, $$S(T,R,J)/Z\eqno(2.1)$$ is a Sylow 2-subgroup of $PSL_{2n}(q)$, $ q\equiv 3$ (mod 4). One sees easily that $$S(T,R, J)/Z \subseteq W(TR,1,J)/Z = \overline { W(TR,1,J)}.
\eqno (2.2)$$ Since $W(TR,1,J)\cong \prod w_{m_i}(TR)$ is a direct product of wreath products ((4.8) of \[L\]), one may apply Proposition 3.4 of \[L\] to conclude that $$r_2(\overline {W(TR,1,J)}) \ge r_2({W(TR,1,J)})-1 = 2n-1.\eqno (2.3A)$$ Note that $\overline { W(TR,1,J)}$ can be written as a semidirect product (see the proof of Lemma 2.5): $$\overline {W(TR,1,J)} \cong \left ( \prod_{i=2}^u w_{m_i}(TR) \right ) \rtimes { w_{m_1}(TR)}.$$ Applying Lemma 2.4 and Proposition 3.4 of \[L\], we obtain $$r_2(\overline {W(TR,1,J)}) \le \sum r_2(w_{m_i}(TR)) + r_2(\overline{ w_{m_1}(TR)}) \le 2n-1.\eqno (2.3B)$$ Hence (see $(2.3A)$ and $(2.3B)$) $$r_2(\overline { W(TR,1,J)}) = 2n-1.\eqno (2.4)$$ [**Proposition 2.6.**]{} [ *Suppose that $q\equiv 3\,\,\,(mod\,\,4)$. Then $r_2(PSL_{4}(q)) = 4.$ If $n \ge 3$, then $r_2(PSL_{2n}(q)) = 2n-2.$*]{} [*Proof.*]{} It is well known that $r_2(PSL_4(q))=4.$ Suppose that $n \ge 3$. $\overline {S(T,R,J)}$ is a Sylow 2-subgroup of $PSL_{2n}(q)$. Since $r_2(SL_{2n}(q)) = 2n-1$ (see \[L\]), $r_2(PSL_{2n}(q)) $ $ \ge 2n-2$. By (2.4), we have $$2n-2\le r_2(\overline {S(T,R,J)}) \le r_2(\overline { W(TR,1,J)}) =2n-1.\eqno(2.5)$$ Since $T$ and $R$ (for $PSL_{2n}(q))$ satisfy the assumption of Lemma 2.5, we conclude that $ r_2(\overline {S(T,R,J)}) < r_2(\overline { W(TR,1,J)})$. Hence $2n-2 = r_2(\overline {S(T,R,J)})$. Suppose that $q\equiv 1$ (mod 4). Then (2.1) is a Sylow 2-subgroup of $PSU_{2n}(q)$. It follows that [**Proposition 2.7.**]{} [ *Suppose that $q\equiv 1\,\,\,(mod\,\,4)$. Then $r_2(PSU_{4}(q)) = 4.$ If $n \ge 3$, then $r_2(PSU_{2n}(q)) = 2n-2.$*]{}

Projective Special Linear and Unitary Groups II
-----------------------------------------------

The main purpose of this section is to determine the 2-rank of $PSL_{2n}(q)$ where $q\equiv 1$ (mod 4) and the 2-rank of $PSU_{2n}(q)$ where $q\equiv 3$ (mod 4). [**Lemma 2.8.**]{} [*Let $K$ be a $2$-group and let $z_0 \in Z(K)^{\times}$.
Suppose that $r_2(K) = r_2(\overline K) =2$, $r_2(\overline {K\times K}) = 4.$ Then $r_2(\overline {w_n(K)}) = 2^{n+1}$.*]{} [*Proof.*]{} By our assumption and Lemma 2.3 of \[L\], $\overline {K\times K}$ has an elementary abelian subgroup of dimension 4 generated by $\overline g_i$ ($1\le i\le 4$), where $g_1 = $ diag$\,(k_1, 1)$, $g_2 = $ diag$\,(k_2, 1)$, $g_3 = $ diag$\,(a, k_3)$, $g_4= $ diag$\,(b, k_4)$, where $k_i, a, b \in K$. Let $V$ be the subgroup of $w_n(K)$ generated by diag$\,(a,a, \cdots , a, k_3)$, diag$\,(b,b, \cdots , b, k_4)$, and diag$\,(x_1,x_2, \cdots , x_{2^n-1}, 1)$, where $x_i$ is either $k_1$ or $k_2$. It is clear that $\overline V \subseteq \overline {w_n(K)}$ is elementary abelian of dimension $2^{n+1}$. Hence $$r_2(\overline {w_n(K)}) \ge r_2(\overline V) = 2^{n+1}.\eqno (2.6)$$ We shall now apply induction on $n$. It is clear that our assertion holds for $n=0$ as $\overline {w_0(K)} =\overline K$ is of rank 2. Suppose that $r_2(\overline {w_{m} (K)}) = 2^{m+1}$. Let $\overline E$ be elementary abelian of dimension $r_2(\overline {w_{m+1}(K)})$. Let $X = w_m(K)$. Then $w_{m+1}(K) = w(X)$. Suppose that $\overline E$ is not a subgroup of $\overline {B}$ (see (1.1)-(1.3) for notation). Then $r_2(\overline E) \le 2+ r_2(\overline X) = 2^{m+1} +1$ (Lemma 1.1). This contradicts (2.6). Hence $\overline E$ is a subgroup of $\overline {B} $. Applying Proposition 3.4 of \[L\] and our inductive hypothesis, we have $$r_2(\overline E)\le r_2(\overline {B}) \le r_2(X) + r_2(\overline X) = 2^{m+1}+ 2^{m+1} = 2^{m+2}.$$ By (2.6), $r_2(\overline E) = 2^{m+2}.$ Let $n = 2^{m_1} + 2^{m_2} +\cdots + 2^{m_u}$ ($m_1 <m_2 <\cdots) $ be the 2-adic representation of $n$. Let $T = \left <v, w\right >$ be generalised quaternion of order $2^{t+1}$ $(o(v) = 2^t)$, where $ 2^{t+1}||(q^2-1)$, and let $R = \left <e\right > $, $E_0$, $R_0$ be given as in section 5.2 of \[L\]. Then $TR$ satisfies the assumption of Lemma 2.8.
Further, $$S(T,R,J) =\mbox{diag}\, ( w_{m_1}(T,R,J), w_{m_2}(T,R,J), \cdots , w_{m_u}(T,R,J)) \rtimes U(R)$$ is a Sylow 2-subgroup of $SL_{2n}(q)$, $ q\equiv 1$ (mod 4). Let $2^m = gcd\,(2^{m_1}, q- 1)$ and let $z_0= (e^2v)^{2^{t-m}}$. Then the centre of $S(T,R,J)$ is generated by $$z = \mbox{diag}\, (z_0, z_0, \cdots , z_0), \,\,o(z_0)=2^m. \eqno (2.7)$$ By Theorem 6 of Wong \[W\], $$S(T,R,J)/\left <z\right > \eqno(2.8)$$ is a Sylow 2-subgroup of $PSL_{2n}(q)$, $ q\equiv 1$ (mod 4). One sees easily that $$S(T,R, J)/\left <z\right > \subseteq W(TR,1,J)/\left <z\right > = \overline { W(TR,1,J)}. \eqno (2.9)$$ Since $W(TR,1,J)\cong \prod w_{m_i}(TR)$ is a direct product of wreath products ((4.8) of \[L\]) and $K=TR$ satisfies the assumption of Lemma 2.8, similar to Lemma 2.8, one may conclude that $$r_2(\overline {W(TR,1,J)})= 2n.\eqno (2.10)$$ [**Proposition 2.9.**]{} [ *Suppose that $q\equiv 1\,\,\,(mod\,\,4)$. Then $r_2(PSL_{4}(q)) =4 .$ If $n \ge 3$, then $r_2(PSL_{2n}(q)) = 2n-1.$*]{} [*Proof.*]{} It is well known that $r_2(PSL_{4}(q)) =4 .$ Suppose that $n\ge 3$. Let $$V = \left < S(E_0, R_0, 1) \times U(R_0), \mbox{diag}\, (w,w, \cdots , w) \right >.$$ Applying Lemma 4.3 of \[L\], we have $$r_2(PSL_{2n}(q)) \ge r_2(\overline V) = 2n-1.\eqno(2.11)$$ $\overline {S(T,R,J)}$ is a Sylow 2-subgroup of $PSL_{2n}(q)$. Applying the above ((2.10), (2.11)), we have $$2n-1\le r_2(\overline {S(T,R,J)}) \le r_2(\overline { W(TR,1,J)}) =2n.\eqno(2.12)$$ Since $T$ and $R$ (for $PSL_{2n}(q))$ satisfy the assumption of Lemma 2.5, we conclude that $ r_2(\overline {S(T,R,J)}) < r_2(\overline { W(TR,1,J)})$. Hence $2n-1 = r_2(\overline {S(T,R,J)})$. Suppose that $q\equiv 3$ (mod 4). Then (2.8) is a Sylow 2-subgroup of $PSU_{2n}(q)$. It follows that [**Proposition 2.10.**]{} [ *Suppose that $q\equiv 3\,\,\,(mod\,\,4)$.
Then $r_2(PSU_{4}(q)) = 4.$ If $n \ge 3$, then $r_2(PSU_{2n}(q)) = 2n-1.$*]{}

Orthogonal Commutator Groups $\Omega _{2n+1}( q) = P\Omega_{2n+1}(q) $
====================================

Let $2^{t+1}$ be the greatest power of 2 that divides $q^2-1$ and let $T =\left < v,w\right> $ be a dihedral group of order $2^{t}$, where $o(v)= 2^{t-1}, o(w)= 2, wvw= v^{-1}$. Further, $R = \left < e\right >$ is a group of order $2$ acting on $T$ by $eve= v^{-1}, ewe= vw.$ Let $n = 2^{m_1} + 2^{m_2} + \cdots +2^{m_u}$ be the $2$-adic representation of $n$. Then $S(T,R,J)$ is a Sylow 2-subgroup of $\Omega_{2n+1}(q) = P\Omega_{2n+1}(q)$ (see (ii) of Theorem 7 of Wong \[W\]). By the results in section 5.4 of \[L\], the rank of $S(T,R,J)$ is $2n$.

Orthogonal Commutator Groups $\Omega _{2n}(\eta, q)=P\Omega _{2n}(\eta, q) $ where $\eta = \pm 1$, $q^n \equiv -\eta$ (mod 4)
====================================================================

Applying Theorem 7 of Wong \[W\], a Sylow 2-subgroup of $\Omega_{2n}(\eta, q) = P\Omega_{2n}(\eta, q)$ is isomorphic to a Sylow 2-subgroup of $O_{2(n-1)} (\eta ', q)$, where $q^{n-1}\equiv \eta '$ (mod 4). Let $S$ be a Sylow 2-subgroup of $O_{2(n-1)} (\eta ', q)$, where $q^{n-1}\equiv \eta '$ (mod 4). Applying Theorem 3 of Carter and Fong \[CF\], $S$ is isomorphic to a Sylow 2-subgroup of $O^+_{2n-1}(q)$. We shall now describe $S$ as follows : Let $D$ be a dihedral group of order $2^{s+1}$, where $2^{s+1}$ is the greatest power of 2 that divides $q^2-1$. Then $D$ is isomorphic to a Sylow 2-subgroup of $O^+_3(q)$. Let $T_{r-1}$ be the wreath product of $r-1$ copies of $\Bbb Z_2$ and let $S_r$ be the wreath product of $D$ and $T_{r-1}$. Then $S_r$ is a Sylow 2-subgroup of $O_{2^r+1}^+(q)$. Let $2(n-1) = 2^{m_1} + 2^{m_2}+ \cdots + 2^{m_u}$ be the 2-adic representation of $2(n-1)$. Applying Theorem 2 of Carter and Fong \[CF\], $$S \cong S_{m_1} \times S_{m_2}\times \cdots \times S_{m_u}.$$ By the results of section 5.5 of \[L\], the rank of $S$ is $2n-2$.
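The two bookkeeping quantities used repeatedly in these sections — the 2-adic representation $n = 2^{m_1} + \cdots + 2^{m_u}$ ($m_1 < m_2 < \cdots$) and the exponent $t$ with $2^{t+1}\,||\,(q^2-1)$ — are straightforward to compute. The following sketch is an illustration added here, not part of the original argument.

```python
def two_adic_parts(n):
    """Return the exponents m_1 < m_2 < ... with n = sum of 2**m_i
    (the 2-adic representation used throughout the text)."""
    return [m for m in range(n.bit_length()) if (n >> m) & 1]

def t_exponent(q):
    """Return t such that 2**(t+1) exactly divides q**2 - 1 (q odd)."""
    k, t_plus_1 = q * q - 1, 0
    while k % 2 == 0:
        k //= 2
        t_plus_1 += 1
    return t_plus_1 - 1  # so that 2**(t+1) || q**2 - 1
```

For example, `two_adic_parts(6)` gives `[1, 2]` (since $6 = 2^1 + 2^2$), and for $q = 3$ one has $q^2 - 1 = 8 = 2^3$, so `t_exponent(3)` returns $t = 2$.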
Orthogonal Commutator Groups $P\Omega _{2n}(\eta, q)$, where $n$ is even, $\eta = \pm 1$, $q^n \equiv \eta$ (mod 4)
==============================================================================

Note first that since $n$ is even and $q^n \equiv \eta$, we have $\eta =1$. Let $2^{t+1}$ be the greatest power of 2 that divides $q^2-1$ and let $T$ be the central product of two dihedral groups of order $2^{t+1}$ : $$T = \left < d= \left (\begin{array} {cc} u&0\\ 0&u^{-1} \\ \end{array}\right ), g=\left (\begin{array} {cc} u&0\\ 0&u \\ \end{array}\right ), h= \left (\begin{array} {cc} 0&1\\ 1&0 \\ \end{array}\right ), k=\left (\begin{array} {cc} 0&w\\ w&0 \\ \end{array}\right )\right>,$$ where $o(u) = 2^t$, $o(w)=2$, $wuw=u^{-1}$. Let $ R = \left < e, f\right > \cong \Bbb Z_2 \times \Bbb Z_2$, where $$d^e=g^{-1}, g^e=d^{-1}, h^e=gk, k^e=dh,$$ $$d^f=g, g^f= d, h^f=k,k^f=h.$$ Let $n/2 = 2^{m_1} + 2^{m_2}+\cdots + 2^{m_u}$ $( n$ even, $m_1 <m_2 <\cdots$) be the 2-adic representation of $n/2$. By Theorem 11 of \[W\], $S(T,R,J)$ is a Sylow 2-subgroup of $\Omega_{2n}(\eta, q)$, where $n$ is even, $\eta = \pm 1$, $q^n \equiv \eta$ (mod 4). Let $$z = \mbox{diag}\, (g^{2^{t-1}},g^{2^{t-1}}, \cdots ).\eqno (5.1)$$ Then $\overline {S(T,R, J)} = S(T,R, J) /\left < z\right >$ is a Sylow 2-subgroup of $P\Omega_{2n}(\eta, q)$, where $n$ is even, $\eta = \pm 1$, $q^n \equiv \eta$ (mod 4). It is easy to see that $$T = (\left < d\right> \times \left < dg\right>) \rtimes \left < h,k\right >.$$ As a consequence, one can show that $r_2(T) =3$, $r_2(TR) = 4 = r_2(T) + 1$. It is also easy to see that $Z(T) =\left < g^{2^{t-1}}\right >$ is of order 2 and that $r_2(T/Z(T)) = 4$. [**Lemma 5.1.**]{} [*Let $z_0 = g^{2^{t-1}}$, $\overline {TR} = TR /\left < z_0\right >$, $\overline {TR\times TR} = (TR\times TR)/\left < diag\,(z_0, z_0)\right >$. Then $r_2(\overline {TR}) = 4$, $r_2(\overline {TR\times TR}) = 7$. In particular, $r_2(\overline {w(TR)}) = 7.$* ]{} [*Proof.*]{} Let $X = TR$.
Then $B \cong TR\times TR$ (see (1.1)-(1.3) for notation). It is easy to show that $ r_2(\overline X) = r_2(\overline {TR}) = 4$. By Lemma 2.3 of \[L\], $$7 = r_2(B) -1 \le r_2(\overline {B}) \le 8.$$ Suppose that $r_2(\overline {B}) =8$. Let $ \overline E \subseteq \overline {B}$ be elementary abelian of dimension 8 and let $\overline E= \overline V \times \overline W$, where $\overline V = \overline E \cap \overline B_1$. Then both $\overline V$ and $\overline W$ are of dimension 4. Let $$\overline V = \left< \overline {\left (\begin{array} {cc} x_i&0\\ 0&1\\ \end{array}\right )}\,:\, 1\le i\le 4 \right >, \overline W = \left< \overline {\left (\begin{array} {cc} u_i&0\\ 0&w_i\\ \end{array}\right )} \,:\, 1\le i\le 4\right >.$$ Since $[\overline V, \overline W] = \overline I_2$, $$[x_i, u_j]= 1. \eqno(5.2)$$ Applying Lemma 2.3, $r_2(\overline W|\overline B_2) = 4.$ Since $r_2(\overline B_2) =4$, $\overline W|\overline B_2$ must contain all the involutions of $Z(\overline B_2)$. It follows that $$\overline {\left (\begin{array} {cc} g^{2^{t-2}}&0\\ 0& g^{2^{t-2}} \\ \end{array}\right )} \in \overline W|\overline B_2. \eqno(5.3)$$ Hence $$\overline {\left (\begin{array} {cc} \sigma &0\\ 0& g^{2^{t-2}} \\ \end{array}\right )} \in \overline W, \eqno(5.4)$$ for some $\sigma$. Since the above element is of order 2 in $\overline E$, $\sigma^2 = g^{2^{t-1}}$. By our results in Appendix A, $$\sigma \in \{\, g^{2^{t-2}},\,\,\, d^{2^{t-2}} ,\,\,\, g^{2^{t-2}}d^mh,\,\,\, d^{2^{t-2}}g^nk\,\}.\eqno (5.5)$$ Let [$A= \left \{ x\,:\, \overline {\left (\begin{array} {cc} x&0\\ 0&1\\ \end{array}\right )} \in \overline V \right \} \subseteq TR.$]{} Since $A\subseteq TR $ is elementary abelian of dimension 4 and $T$ is of rank 3, $A$ must possess an element (of order 2) of the form $tr$, where $t \in T$, $r\in R-\{1\}$. This particular element $tr$ must commute with $\sigma$ (see (5.2)).
Since such elements $tr$ are completely known (see b(i) and its remark of section 5.6 of \[L\]) and they do not commute with $$g^{2^{t-2}}d^mh,\,\,\, d^{2^{t-2}}g^nk,$$ (5.5) can be refined into $$\sigma \in \{\, g^{2^{t-2}},\,\,\, d^{2^{t-2}} \,\}.\eqno (5.6)$$ Since $[A, \sigma ]=1$ (see (5.2)), $A \subseteq C = C_{TR}(\sigma )$. This is not possible as such $C$ is of rank at most 3 (see Appendix A). As a consequence, $r_2(\overline {B}) \ne 8$. It follows that $r_2(\overline {(TR\times TR)}) = r_2(\overline {B}) =7$. Let $\overline E$ be elementary abelian of dimension $r_2(\overline {w(TR)})\ge 7$. Suppose that $\overline E $ is not a subgroup of $\overline {B}$. By Lemma 1.1, $r_2(\overline E)\le 2+r_2(\overline {X}) = 6 <7$. A contradiction. Hence $\overline E $ is a subgroup of $\overline {B}$. This completes the proof of the lemma. Similar to Lemmas 1.5 and 1.6, in which we prove that $r_2(\overline {w_n(Q)}) = 2^n+1$ under the assumption that $r_2(w_2(Q)) = 4$, $r_2(\overline {w_2(Q)}) = 6$, $r_2(\overline {w_3(Q)}) = 9$, one may apply our result of Lemma 5.1 ($r_2(\overline {w(TR)}) = 7$), together with $r_2(TR)= r_2(\overline {TR})=4$, to show that $r_2(\overline {w_n(TR)}) $ $= 2^{n+2}-1$. Consequently, one has, [**Lemma 5.2.**]{} [*Let $T$ and $R$ be given as above. Then $r_2(\overline {W(TR,1,J)}) = 2n-1$.*]{} [*Proof.*]{} Since $W(TR, 1,J) \cong \prod w_{m_i}(TR)$ is a direct product of wreath products, one may apply Proposition 3.4 of \[L\] to conclude that $r_2(W(TR, 1,J)) = 2n$.
Since $r_2(W(TR,1,J))$ $ = 2n$, we have $$r_2(\overline {W(TR,1,J)}) \ge r_2( {W(TR,1,J)}) -1= 2n-1.\eqno(5.7)$$ Recall that $$\overline {W(TR,1,J)} =\overline B_1 \rtimes \overline B_2 \cong B_1 \rtimes \overline B_2 ,\eqno(5.8)$$ where $$B_1 =\mbox{ diag}\,( w_{m_2}(TR, 1,J),\cdots, w_{m_u}(TR, 1,J), I_{2^{m_1}} )\,,$$$$B_2 = \left < \mbox{diag}\, (t,t,\cdots)\,:\, t \in w_{m_1}(TR, 1,J) \right > \cong w_{m_1}(TR, 1,J).\eqno(5.9)$$ By Proposition 3.4 of \[L\] and Lemma 5.1, we have

1. if $m_1=0$, then $r_2(\overline {W(TR,1,J)}) \le \sum_{i=2}^{u} r_2(w_{m_i}(TR))+r_2(\overline {TR}) = 2n$,

2. if $m_1\ge 1$, then $r_2(\overline {W(TR,1,J)}) \le \sum_{i=2}^{u} r_2(w_{m_i}(TR)) +r_2(\overline {w(TR)}) = 2n-1$.

\(i) Note first that $B_2 \cong TR$. Suppose that $r_2(\overline {W(TR,1,J)})=2n $. Let $\overline E$ be elementary abelian of dimension $2n$. Let $\overline E = \overline V \times \overline W$, where $\overline V = \overline E\cap \overline B_1$. The equality $r_2(\overline {W(TR,1,J)}) = r_2(\overline E) = \sum_{i=2}^{u} r_2(w_{m_i}(TR)) +r_2(\overline {TR}) =2n$ implies that $ \overline V $ is of dimension $2n- 4$ and $\overline W$ is of dimension 4. Since $r_2(TR) =4$, applying Lemma 3.2 of \[L\], we have $$\overline V = \mbox{diag}\, (E_1, E_2, \cdots , E_{n/2-1}, 1),\eqno (5.10)$$ where $E_i \subseteq TR$ is elementary abelian of dimension 4. Applying Lemma 2.3 of \[L\], $r_2(\overline W) = r_2(\overline W|\overline B_2) = 4$. As a consequence, $ D= \left < \mbox{diag}\, (E_1, 1, \cdots , 1, 1), \overline W|\overline B_2 \,\right >$ is elementary abelian of dimension 8. Note that $D$ is isomorphic to an elementary abelian subgroup of $\overline {w(TR)}$. This is a contradiction (see Lemma 5.1). Hence $r_2(\overline {W(TR,1,J)}) < 2n$. By (5.7), $r_2(\overline {W(TR,1,J)}) =2n-1$. \(ii) By (5.7), $r_2(\overline {W(TR,1,J)}) =2n-1$. This completes the proof of Lemma 5.2.
Similar to Propositions 2.6 and 2.9, we may prove that [**Proposition 5.3.**]{} [*Suppose that $n\ge 4$ is even. Then $r_2(\overline {S(T,R, J)}) = 2n-2$. In particular, the rank of $P\Omega _{2n}(\eta, q)$, where $n\ge 4$ is even, $\eta = \pm 1$, $q^n \equiv \eta\,\, (mod\,\, 4)$, is $2n-2$.* ]{} [*Proof.*]{} Since $S(T,R,J)$ is a Sylow 2-subgroup of $\Omega _{2n}(\eta, q)$ (where $n$ is even, $\eta = \pm 1$, $q^n \equiv \eta$ (mod 4)), we conclude that $r_2(S(T,R ,J)) = 2n-1$ (see \[L\]). This implies that $r_2(\overline {S(T,R ,J)}) \ge r_2({S(T,R ,J)}) -1= 2n-2.$ By Lemma 5.2, $$2n-1 =r_2(\overline {W(TR,1,J)}) \ge r_2(\overline {S(T,R ,J)}) \ge 2n-2.$$ Since $T$ and $R$ (for $P\Omega_{2n}(\eta, q)$) satisfy the assumption of Lemma 2.5, we have $r_2(\overline {S(T,R ,J)}) = 2n-2.$

Orthogonal Commutator Groups $P\Omega _{2n}(\eta, q)$ where $n$ is odd, $\eta = \pm 1$, $q^n \equiv \eta$ (mod 4)
============================================================================

Let $n = 1+n_1$, where $n_1$ is even. Since $P\Omega _{6}(\eta, q)$ ($n_1=2$) is not a simple group, we shall assume that $$n_1 \ge 4.$$ Let $S = S(T,R, J)$ be a Sylow 2-subgroup of $\Omega _{2n_1}(\eta', q)$ where $n_1$ is even, $\eta' = \pm 1$, $q^{n_1} \equiv \eta'$ (mod 4) and let diag$\,(e,1,1,\cdots , 1) = d(e)$, diag$\,(f,1,1,\cdots , 1)= d(f)$ ($S$, $e$ and $f$ are given in section 5). Let $$Y = \left < S, d(e), d(f)\right > \times \left < x, y\right >, \mbox{ where } o(x)=o(y) = 2, o(xy) = 2^t,$$ and $ 2^{t+1}$ is the largest power of 2 that divides $(q^2-1).$ Let $z = \mbox{diag}\, (g^{2^{t-1}},g^{2^{t-1}}, \cdots )$ be given as in section 5 (see (5.1)). By Theorem 12 of \[W\], $ V = \left < S, d(e)x,d(f)y\right >$ is a Sylow 2-subgroup of $\Omega _{2n}(\eta, q)$ and $$\overline V = \left < S, d(e)x,d(f)y\right > \Big / \left < z (xy)^{2^{t-1}}\right >\eqno(6.1)$$ is a Sylow 2-subgroup of $P\Omega _{2n}(\eta, q)$ ($n$ is odd, $\eta = \pm 1$, $q^n \equiv \eta$ (mod 4)).
It is easy to see that $$\overline V \subseteq \overline W = \left < W(TR,1,J), x, y \right >\Big / \left < z (xy)^{2^{t-1}}\right >.\eqno(6.2)$$ Note that $[W(TR,1,J) , \left <x,y\right> ]=1$. As a consequence, $$\left < W(TR,1,J), x, y \right > \cong \left (\begin{array} {cc} W(TR,1,J)&0\\ 0&1\\ \end{array}\right ) \times \left (\begin{array} {cc} 1&0\\ 0&\left <g, k\right > \\ \end{array}\right ).\eqno(6.3)$$ Note that $\left <g, k\right >$ is dihedral of order $2^{t+1}$ (see section 5) and that $\left <x,y\right >\cong \left <g,k\right >$. It follows that one may decompose the group in (6.3) into $$(6.3)= \left (\begin{array} {cc} W(TR,1,J)&0\\ 0&1\\ \end{array}\right ) \rtimes D = \Omega \rtimes D,\eqno(6.4)$$ where $\Omega \cong W(TR,1,J)$, $D = \left < \mbox{diag}\, (v,v,\cdots)\,:\, v \in \left <g, k\right > \right >.$ Note that $D \cong \left <g, k\right >$ and that under the isomorphism mentioned in (6.3), $z(xy)^{2^{t-1}}$ is mapped to $$\sigma = \mbox{diag}\,( z,g^{2^{t-1}})= \mbox{diag}\,( g^{2^{t-1}}, g^{2^{t-1}}, \cdots).\eqno(6.5)$$ Consequently, $$\overline W = \overline {\Omega}\rtimes \overline D \cong \Omega \rtimes \overline D, \eqno(6.6)$$ where $\overline D = D/\left < \sigma\right >.$ Since $\Omega \cong W(TR,1,J)\cong \prod w_{m_i} (TR)$ is a direct product of wreath products ((4.8) of \[L\]), we may apply Proposition 3.4 of \[L\] and conclude that $$r_2(\overline W) \le r_2(W(TR,1,J)) + r_2(D/\left < \sigma\right >) = 2n_1 +2.\eqno (6.7)$$ Suppose that the above is actually an equality. Let $\overline E \subseteq \overline W$ be elementary abelian of dimension $2n_1 +2$. Let $\overline E = (\overline E \cap \overline \Omega ) \times \overline F$. Applying Lemma 2.3 of \[L\], $\overline E \cap \overline {\Omega}$ is of dimension $2n_1$ and $r_2(\overline F) =2$.
Applying Lemmas 2.3, 3.2 of \[L\], $\overline E \cap \overline {\Omega }$ is of the form $$\mbox{diag}\, (E_1, E_2, \cdots, E_{n_1/2} , 1 )\left <\sigma\right >/\left <\sigma\right >, \eqno(6.8)$$ where $E_i \subseteq TR$ is of dimension 4 ($r_2(TR)=4$) and $r_2(\overline E|\overline D) = r_2(\overline F|\overline D) = r_2(\overline D) = 2$ (Lemma 2.3). Hence $\overline E|\overline D$ contains the centre of $\overline D$; in particular, it must contain the element $\overline \tau$, $$\tau =\mbox{diag}\,(v,v,\cdots, v),$$ $ v = g^{ \pm2^{t-2}}, \,\tau ^2 = \sigma$. It follows that the element in $\overline E$ that projects to $\overline \tau$ is of the form $$\overline \nu =\overline {\mbox{diag}\,( A , g^{ \pm2^{t-2}} ) },\eqno (6.9)$$ where $A \in W(TR, 1,J)$, $A^2 = z$ ($A$ must satisfy the equation $A^2 = z$ as $\overline \nu \in \overline E$ is of order 2). Note that $[(6.8), (6.9)] = \overline 1.$ Inspection of the last entries of (6.8) and (6.9) shows that diag$\,(A, 1)\in C_{\Omega}(\mbox{diag}\, (E_1, \cdots , E_{n_1/2},1 ))$ (recall that $\overline \Omega \cong \Omega$). By the remark of Lemma 1.5, we have $A \in \mbox{diag}\,(TR, TR, \cdots , TR)$. Since $A^2 = z$, we have $$A=\mbox{diag}\,(a_1,a_2, \cdots),$$ where $a_i $ is one of the members in (A2) (see Appendix A). Since $[a_i, E_i]=1$, the $a_i$ cannot be $g^{2^{t-2}}d^mh,\,\,\, d^{2^{t-2}}g^nk$ (see the discussion of (5.5) and (5.6) of section 5). Hence $a_i\in \{ g^{2^{t-2}},\,\,\, d^{2^{t-2}}\}.$ As a consequence, $$E_i \subseteq C_{TR}( g^{ \pm2^{t-2}} ) \mbox{ or } C_{TR}( d^{ \pm2^{t-2}} ).$$ By Appendix A, such centralisers are of rank 3. This is a contradiction ($r_2(E_i) = 4 )$. Hence (6.7) can be refined into $$r_2(\overline W) \le 2n_1 +1.\eqno (6.10)$$ Applying (6.3), it is clear that $ r_2(\overline W) \ge r_2(W(TR,1,J)) + 2 - 1 = 2n_1 +1$. Hence $$r_2(\overline W) = 2n_1 +1.\eqno (6.11)$$ We are now ready to determine the rank of $\overline V$.
Note first that $V$ ($V$ is a Sylow 2-subgroup of $\Omega_{2n_1}(\eta ', q)$) is of rank $2n_1 +1$ (see section 5.7 of \[L\]). It follows that $r_2(\overline V ) \ge 2n_1 +1 -1 = 2n_1.$ By (6.11), we have $$2n_1 \le r_2(\overline V) \le 2n_1 + 1 = r_2(\overline W).\eqno (6.12)$$ Suppose that $r_2(\overline V) = 2n_1 + 1$. It follows that if $\overline E \subseteq \overline V$ is of maximal dimension, then $\overline E$ is actually of maximal dimension in $\overline W$ ($r_2(\overline V) = 2n_1 + 1 = r_2(\overline W))$. Since $r_2(\overline W) = 2n_1+1, r_2(\overline {W(TR,1,J)}) = 2n_1-1$ ((6.12) and Lemma 5.2), we have $r_2(\overline E|\overline D) =2$ or 1. Suppose that $r_2(\overline E|\overline D) = 2$.

1. Similar to (6.9), $\overline E$ possesses an element $\overline \tau,$ $$\tau = \mbox{diag}\,( A , g^{ \pm2^{t-2}} ),\,\, A \in W(TR, 1,J),\,\,A^2 = z.\eqno(6.13)$$

2. $\overline E \cap \overline \Omega $ is of rank $2n_1-1$. By the remark of Lemma 3.2 of \[L\], $\overline E \cap \overline \Omega \subseteq \mbox{diag}\,( TR, TR, \cdots, TR, 1)\left <\sigma\right >/ \left <\sigma\right > .$ As a consequence, $$\overline E \cap \overline \Omega = \mbox{diag}\,( E_1, E_2, \cdots, E_{n_1/2}, 1) \left <\sigma\right >/ \left <\sigma\right > ,\eqno(6.14)$$ where all the $E_i$ are of rank 4 except one which is of rank 3.

Note that $[(6.13), (6.14)] $ $ =\overline 1.$ It follows that $[A, \mbox{diag}\,( E_1, E_2, \cdots, E_{n_1/2})]=1$. By the remark of Lemma 1.5, we have $A \in \mbox{diag}\,(TR, TR, \cdots , TR)$. Since $A^2 = z$, we have $$A=\mbox{diag}\,(a_1,a_2, \cdots),$$ where $a_i $ is one of the members in (A2) (see Appendix A). Since $[a_i, E_i]=1$, the $a_i$ cannot be $g^{2^{t-2}}d^mh,\,\,\, d^{2^{t-2}}g^nk$ if $r_2(E_i) = 4$ (see the discussion of (5.5) and (5.6) of section 5). Hence $a_i\in \{ g^{2^{t-2}},\,\,\, d^{2^{t-2}}\}$ for all $i$ except one. Since $n_1 \ge 4$, at least one of the $E_i$’s is of dimension 4.
For such $E_i$, one has $$E_i \subseteq C_{TR}( g^{ \pm2^{t-2}} ) \mbox{ or } C_{TR}( d^{ \pm2^{t-2}} ).$$ By Appendix A, such centralisers are of rank 3. This is a contradiction ($r_2(E_i) = 4 )$. Hence $r_2(\overline E|\overline D) =1$ and $\overline E\cap \overline \Omega $ is of dimension $2n_1$. In particular, $\overline E\cap \,\overline \Omega$ and $\overline \Omega $ have the same dimension (recall that $\Omega \cong \overline \Omega$). By Lemma 3.2 of \[L\], $\overline E\cap \overline \Omega = \mbox{diag}\,( E_1, E_2, \cdots, E_{n_1/2}, 1) \left <\sigma \right >/ \left <\sigma \right > $, where $E_i \subseteq TR$ is elementary abelian of rank 4 for every $i$. In particular, $\overline E$ possesses an element of the form $$\overline {\mbox{diag}\,(tr,1, \cdots, 1)} \in \overline { S(T,R,J)}.$$ Since $r_2(TR) =4$, $r_2(T)=3$, we may choose $r\in R^{\times}$. This is a contradiction (see remark of section 4.2 of \[L\] and the last paragraph of the proof of Lemma 2.5). In summary, we have $$r_2(\overline V) = 2n_1.$$ Equivalently, we have [**Proposition 6.1.**]{} [*The $2$-rank of $P\Omega _{2n}(\eta, q)$ where $n\ge 5$ is odd, $\eta = \pm 1$, $q^n \equiv \eta$ $($mod $4)$ is $2n_1 = 2(n-1)$.*]{} [**Appendix A**]{} The main purpose of this appendix is to investigate the solutions of the following equation in $TR$ (see section 5). $$x^2 = z_0,\eqno(A1)$$ where $Z(TR) = \left < z_0= g^{2^{t-1}}\right >$. Note that $o(z_0) = 2$ and that elements in $TR$ admit the following (unique) form $g^n(dg)^mh^ok^p e^q f^r.$ Direct calculation shows that $x$ is one of the following : $$g^{2^{t-2}},\,\,\, d^{2^{t-2}} ,\,\,\, g^{2^{t-2}}d^mh,\,\,\, d^{2^{t-2}}g^nk,\,\, \mbox{ where } n, m \in \Bbb Z. \eqno(A2)$$ Let $w$ be an element in (A2) and let $C = C_{TR}(w)$ be its centraliser in $TR$. 
In the case $w= g^{2^{t-2}}$, one has $$C = \left < g, d, h, kef\right > = [(\left <g\right > \times \left < dg \right >) \rtimes \left < h \right >] \rtimes \left < kef \right >.$$ Suppose that the rank of $C$ is 4. Let $E \subseteq C$ be elementary abelian of dimension 4. Since $E$ is of dimension 4, $E \cap ((\left <g\right > \times \left < dg \right >)\rtimes \left < h \right >) $ is of dimension 3 (see Lemma 2.3 of \[L\]). Hence $E \cap (\left <g\right > \times \left < dg \right >)$ must be of dimension 2 (see Lemma 2.3 of \[L\]). Note that $\left <g\right > \times \left < dg \right >$ has a unique elementary abelian subgroup of dimension 2 : $\left < (dg)^{2^{t-2}}, (d^{-1}g)^{2^{t-2}} \right >$. It follows that $$\left < (dg)^{2^{t-2}}, (d^{-1}g)^{2^{t-2}} \right > \subseteq E$$ and that $E$ must possess an element of the form $d^mg^n h$ ($r_2(E \cap ((\left <g\right > \times \left < dg \right >)\rtimes \left < h \right >))=3>2= r_2(E \cap (\left <g\right > \times \left < dg \right >))$). This is not possible as $d^mg^n h$ does not commute with $\left < (dg)^{2^{t-2}}, (d^{-1}g)^{2^{t-2}} \right >$. Hence the rank of $C$ is at most 3. One can show similarly that the rank of $C_{TR}(d^{2^{t-2}})$ is at most 3 as well.

[99999]{} R. Carter, P. Fong, [*The Sylow $2$-subgroups of the Finite Classical Groups*]{}, J. of Algebra [**1**]{} (1964), 139-151. D. Gorenstein, R. Lyons, R. Solomon, [*The Classification of the Finite Simple Groups, Number $3$*]{}, Amer. Math. Soc., Providence, Rhode Island (1998). K. Harada, M. L. Lang, [*Indecomposable Sylow $2$-subgroups of simple groups*]{}, Acta Applicandae Mathematicae [**85**]{} (2005), 161-194. M. L. Lang, [*Ranks of the Sylow $2$-subgroups of the classical groups*]{}, preprint. S. Malyushitsky, [*On Sylow $2$-subgroups of finite simple groups of order up to $2^{10}$*]{}, Ph.D. Thesis, The Ohio State University, Columbus, Ohio, USA (2004). W. J.
Wong, [ *Twisted Wreath Products and Sylow $2$-subgroups of Classical Simple Groups*]{}, Math. Z. [**97**]{} (1967), 406-424. [e-mail: matlml@math.nus.edu.sg]{}
---
abstract: 'At the Kernfysisch Versneller Instituut (KVI) in Groningen, NL, a new facility (TRI$\mu$P) is under development. It aims at producing, slowing down, and trapping radioactive isotopes in order to perform accurate measurements on fundamental symmetries and interactions. A production target station and a dual magnetic separator have been installed and commissioned. We will slow down the isotopes of interest using an ion catcher and, in a further stage, a radio-frequency quadrupole gas cooler (RFQ). The isotopes will finally be trapped in an atomic trap for precision studies.'
address:
-
- 'Kernfysisch Versneller Instituut, Zernikelaan 25, 9747 AA Groningen, the Netherlands'
author:
- 'M. Sohani'
title: 'TRI$\mu$P - A new facility to produce and trap radioactive isotopes[^1]'
---

Introduction
============

Rare and short-lived radioactive isotopes are of interest because they can offer unique possibilities for investigating fundamental physical symmetries [@HA04]. Fundamental symmetries are at the basis of the Standard Model (SM). Using radioactive isotopes, limits for the validity of the SM can be explored in high-precision measurements. In particular, high accuracy can be achieved when suitable radioactive isotopes are stored in atom or ion traps [@WI03; @JU05]. The TRI$\mu$P (Trapped Radioactive Isotopes: $\mu$icrolaboratories for fundamental Physics) facility at the Kernfysisch Versneller Instituut (KVI) in Groningen, The Netherlands, is being developed to conduct such high-precision studies. The local group concentrates on precision measurements of nuclear $\beta$-decays [@JU05-2] and the search for permanent electric dipole moments [@JU05-2]. We will briefly describe the complete facility, consisting of a production target, a magnetic separator, cooling stages, and atom traps. We will also describe the method of the precision measurements in $\beta$-decay studies.
Magnetic Separator
==================

Heavy-ion beams from the superconducting cyclotron AGOR at KVI are used to produce a wide range of products (Fig. \[fig:layout\]). ![Layout of the TRI$\mu$P separator.[]{data-label="fig:layout"}](layout2.eps){width="\textwidth"} A hydrogen gas target cooled to liquid-nitrogen temperature [@YO05] and various solid targets have been employed in different types of reactions, from fusion evaporation to charge exchange. Using inverse kinematics, products are selected by the dual magnetic separator [@BE05]. The TRI$\mu$P separator has been commissioned. The commissioning experiments included $^{21}$Na production using $^{21}$Ne (43 MeV/n and 20 MeV/n). Typical rates were $3\times10^3$/s/pnA for $^{21}$Na. Recently, rates of $10^4$/s/pnA for $^{19}$Ne and $10^3$/s/pnA for $^{20}$Na were achieved. In a first physics experiment, $^{21}$Na and $^{22}$Mg were used to measure the branching ratio of the $^{21}$Na $\beta$ decay to the excited state of $^{21}$Ne at 350 keV [@Achouri]. The population of this state is of relevance for $\beta-\nu$ correlation measurements [@SC04]. A stack of two silicon detectors registered the incoming particles and the subsequent $\beta$ decay. A set of Ge detectors detected the $\gamma$-ray emission following part of the $\beta$ decays. The Ge Clover detectors include a BGO Compton shield to reduce background. The systematic variation in the branching ratio that has been observed in various experiments [@AL74-WI80] may have been caused by a line in the ambient background from the $^{238}$U decay chain (352 keV, $^{214}$Pb). This line is clearly seen in the spectrum (see Fig. \[fig:LPC\]).

Cooling stages
==============

In order to perform precision measurements, the radioactive ions, once produced and separated from the primary beam and unwanted products, need to be cooled and trapped in atom traps. The cooling procedure needs to be fast and efficient in view of the short lifetime of the isotopes.
For light isotopes such as $^{21}$Na, the high-energy and fully stripped secondary beam passes through several cooling stages before it can be delivered as a low-energy and singly charged ion beam with acceptable emittance. Neutralization and laser trapping is the last step to collect the radioactive atoms in a well-defined cold cloud.

Ion Catcher
-----------

The main principle of an ion catcher is to stop high-energy ions in matter, i.e. in a gas or a solid. Electronic stopping causes rapid slowing. In order to extract a low-energy ion beam at the end of the ion-catching process, the particles must remain ionized. Neutralization of the atoms is a loss process which must be avoided. The commonly used techniques include gas-filled ion catchers and thermal ionizing devices. *Gas-filled ion catcher:* Collisions and charge-exchange processes bring ions to lower energies and charge states as they move through the gas. At low energies, re-ionization and neutralization are in competition. The cross sections of these processes depend strongly on the energy of the ions and on the relative ionization potentials of the particle and the stopping gas. There are two strategies: one either relies on the survival of the particles as singly charged ions or on active re-ionization after neutralization. In the latter case extraction can be done e.g. after resonant laser ionization [@HU02]. In principle, the gas-filled ion catcher can only work when the space charge built up during stopping does not hinder the extraction optics. Therefore, the efficiency of this device depends on the input ion rate [@HU02; @MOpr]. *Thermal ionizer:* a device consisting of one or several metal foils that stop the beam. Atoms will diffuse out of the foils, particularly at high temperature. Collisions with the surface of the foils and the cavity body can ionize these atoms. An electric field allows the ions to be extracted.
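The surface-ionization probability referred to here is commonly estimated with the Saha–Langmuir relation, $n^+/n^0 = (g^+/g^0)\exp[(\phi - E_i)/kT]$, where $\phi$ is the work function of the hot surface and $E_i$ the ionization potential of the atom. The sketch below is an illustration only: the work function, temperature, and statistical-weight ratio are typical textbook values, not parameters of the TRI$\mu$P ionizer.

```python
import math

K_BOLTZ_EV = 8.617e-5  # Boltzmann constant in eV/K

def saha_langmuir_fraction(phi_ev, ion_pot_ev, temp_k, g_ratio=0.5):
    """Fraction of atoms leaving a hot surface as ions, n+/(n+ + n0),
    from the Saha-Langmuir relation.  g_ratio = g+/g0 (1/2 for alkalis)."""
    ratio = g_ratio * math.exp((phi_ev - ion_pot_ev) / (K_BOLTZ_EV * temp_k))
    return ratio / (1.0 + ratio)

# Illustrative figures: W work function ~4.5 eV, Na ionization potential ~5.14 eV
frac = saha_langmuir_fraction(4.5, 5.14, 2800.0)
```

With these illustrative numbers the per-collision ionized fraction comes out at the few-percent level, which is consistent with a design that relies on repeated wall collisions inside a hot cavity.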
Choosing a proper material, such as W, for the stopping foils and the cavity is important. The difference between the ionization potentials of the beam element and the foil element determines the ionization probability and the re-neutralization after each collision with the surface. Therefore a thermal ionizer mainly works for alkali and alkaline-earth elements that have low ionization potentials [@KI90]. Investigations of both types of devices were performed in the framework of the TRI$\mu$P program. Because of the priority of the experiments on Na and Ra isotopes, a thermal ionizer is under construction and will be tested soon. A stack of W foils at high temperature has been chosen as the stopper inside a hot W cavity. RFQ Cooler and Buncher ---------------------- A segmented radio frequency quadrupole (RFQ) cooler and buncher is the second stage of the cooling procedure in the TRI$\mu$P facility. Input ions coming from the thermal ionizer have an energy spread of order several eV. In the first part of the RFQ, they pass through a low-pressure gas-filled medium. A longitudinal drag voltage guides them to the exit while they are confined in the transverse direction by a set of radio frequency electric quadrupoles [@LU99]. The second part of the RFQ collects the ions in a Paul trap. By switching the last electrode of the trap, a bunch of ions can be released and sent to the atom traps through an electrostatic low-energy beam line. The TRI$\mu$P RFQ has been constructed and is being commissioned. Measuring the transmission of the ions and optimizing the settings are the main issues of the commissioning. Atomic Trap ----------- The $\beta$-decay experiments at the TRI$\mu$P facility will be carried out using a set of two subsequent magneto-optical traps. The first trap collects atoms and sends them in bunches toward the second trap inside a detection chamber. The first trap is made of a small glass cavity and has wide laser beams for efficient collection of atoms.
It will contain a hot yttrium foil to neutralize the incoming ion beam. The second trap is located in a precision measurement chamber built around a reaction microscope and a $\beta$-electron counter. Beta decay spectroscopy ======================= Correlations between the particles from $\beta$-decay manifest the symmetries and symmetry violations of the weak interaction [@WI05; @SE05]. In the weak interaction (a current-current interaction) several currents can contribute: scalar (S), vector (V), axial-vector (A) and tensor (T) currents. In the Standard Model (SM), the weak interaction is exclusively the result of V and A currents. The V current is observed in Fermi (F) decay and the A current in Gamow-Teller (GT) decay. Contributions of other currents in $\beta$-decay will affect the correlations between the particles and their kinematics. Deviations from the standard V-A model indicate physics beyond the SM. ![RIMS sketch[]{data-label="fig:RIMS"}](RIMS4.ps) It is not practical to measure the $\nu$ particle directly in $\beta-\nu$ correlations. Therefore, one measures the correlation between the $\beta$-electron and the recoil ion in order to cover the complete phase-space of the $\beta$-decay. Recoil-Ion-Momentum Spectroscopy (RIMS) is an advanced method to measure the recoil ion. This method is used at KVI to study charge exchange processes with recoil energies on the order of eV [@TU01; @KN05]. In RIMS the recoil ions are projected with an electric field onto a position-sensitive microchannel plate detector (MCP) (Fig. \[fig:RIMS\]). This method has been used successfully by other groups for $\beta$-decay studies [@SC04; @GO00]. The time-of-flight measurement of the recoil ion is started by a hit in the $\beta$-detector and is stopped by the hit on the MCP. Together with the position of the recoil ion on the MCP, this provides information about the initial momentum. A position-sensitive $\beta$-detector allows one to restrict the momentum of the $\beta$-electron.
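As a numerical illustration of this reconstruction, the sketch below inverts a measured time of flight and MCP hit position into an initial recoil momentum, assuming a single uniform projection field along the spectrometer axis. All parameter values below (charge state, mass, field strength, drift distance) are illustrative placeholders, not TRI$\mu$P values.

```python
import numpy as np

# Illustrative placeholder parameters (NOT actual spectrometer values):
Q = 1.602e-19        # charge of a singly charged recoil ion, C
M = 21 * 1.66e-27    # mass of a mass-21 recoil, kg
E_FIELD = 2000.0     # uniform projection field along z, V/m
D = 0.10             # drift distance from decay point to MCP, m

def recoil_momentum(tof, x, y):
    """Reconstruct the initial recoil momentum (kg m/s) from the measured
    time of flight `tof` and the (x, y) hit position on the MCP."""
    a = Q * E_FIELD / M              # acceleration along z
    vz0 = D / tof - 0.5 * a * tof    # invert D = vz0*tof + a*tof^2/2
    vx0, vy0 = x / tof, y / tof      # free drift in the transverse plane
    return M * np.array([vx0, vy0, vz0])
```

Forward-simulating a decay with a known recoil momentum and then applying this inversion recovers the input, which is how such a reconstruction is typically validated.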
The differential rate of $\beta$-decay is proportional to the 3-body phase-space and the weak interaction matrix element. The general derivation of the matrix element (eq. \[equ1\]) shows all the possible correlations between the particles [@JA57]. $$\begin{aligned} \frac{{\rm d}^2W}{{\color{green} {\rm d}\Omega_e}{\color{red}{\rm d}\Omega_\nu}} \sim \!\!& 1 &\!\!\! +\ a\,\frac{\mbox{\color{green} \boldmath $p$}\cdot{\color{red}\hat{\mbox{\boldmath $q$}}}}{\color{green} E} + b\,\Gamma\,\frac{\color{green} m_e}{\color{green} E} \nonumber \\ \!\!& + &\!\!\! \langle\mbox{\boldmath\color{blue} $J$}\rangle\cdot \left[ A\,\frac{\mbox{\color{green}\boldmath $p$}}{\color{green} E} + B\,{\color{red}\hat{\mbox{\boldmath $q$}}} + D\,\frac{\mbox{\color{green} \boldmath $p$}\times {\color{red} \hat{\mbox{\boldmath $q$}}}}{\color{green} E} \right] \nonumber \\ \!\!& + &\!\!\! \langle\mbox{\color{green} \boldmath $\sigma$}\rangle\cdot \left[G\,\frac{\mbox{\color{green}\boldmath $p$}}{\color{green} E} + Q\,\langle\mbox{\boldmath\color{blue} $J$}\rangle + R\,\langle\mbox{\boldmath\color{blue} $J$}\rangle \times\frac{\mbox{\color{green} \boldmath $p$}}{\color{green} E}\right] \ \label{equ1}\end{aligned}$$ where *p* and *q* are the momenta of the $\beta$-electron and the neutrino, $\langle \it{J}\rangle$ and $\langle \sigma \rangle$ are the polarizations of the parent nucleus and the $\beta$-electron, and *E* is the energy of the $\beta$-electron. The coefficients *a, b, A, B, D, G, Q* and *R* depend on the fundamental weak coupling constants and nuclear matrix elements. The D and R coefficients are zero if time reversal symmetry is conserved. Experimental searches for finite values of the D and R coefficients require a sample of polarized nuclei ($\langle \it{J}\rangle\neq0$). For the R coefficient the polarization of the electrons must also be measured. We aim primarily at measuring D.
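For an unpolarized sample ($\langle J\rangle=0$) with unobserved electron polarization, only the $a$ and $b$ terms of eq. \[equ1\] survive. The sketch below evaluates that reduced rate factor; treating $\Gamma$ as a plain numerical factor and working in MeV units are simplifications made here for illustration.

```python
import numpy as np

M_E = 0.511  # electron rest mass, MeV/c^2

def rate_factor(p_beta, q_hat, a=1.0, b=0.0, gamma=1.0):
    """Reduced correlation factor 1 + a*(p.q_hat)/E + b*gamma*m_e/E for an
    unpolarized sample. p_beta: electron momentum 3-vector in MeV/c;
    q_hat: unit vector along the neutrino momentum."""
    E = np.sqrt(np.dot(p_beta, p_beta) + M_E**2)  # total electron energy, MeV
    return 1.0 + a * np.dot(p_beta, q_hat) / E + b * gamma * M_E / E
```

For $a>0$ the rate is enhanced when the electron and neutrino are emitted in the same direction and suppressed when they are back to back, which is exactly the distortion that a deviation of $a$ imprints on the measured distributions.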
In the initial studies without polarization, measuring the *a* and *b* coefficients explores the effect of non-SM weak currents. In general, after imposing all the conservation laws and geometrical symmetries for non-polarized atoms, two of the 9 parameters of the 3-body phase-space remain free. Therefore one can choose any pair of the parameters to explore the $\beta$-decay events with a 2-D histogram. One choice uses the energies of the recoil ion and the $\beta$-particle as free parameters and expresses all the other parameters in terms of these two. A deviation of *a* from the SM prediction results in a change in the corresponding distribution as shown in Fig. \[fig:HIS\]. Outlook and conclusion ====================== The TRI$\mu$P facility is a user facility to deliver radioactive beams and to perform precision measurements. The TRI$\mu$P production target and the separator are functioning. A first user experiment has already been completed. Several experiments using a clean radioactive beam have been requested. [99]{} The TRI$\mu$P group contributing to this work: G.P. Berg, S. De, S. Dean, O.C. Dermois, U. Dammalapati, P. Dendooven, K. Jungmann, C.J.G. Onderwater, A. Rogachevskiy, M. Sohani, E. Traykov, A. Mol, L. Willmann and H.W. Wilschut. NuPECC Report *“NuPECC Long Range Plan 2004: Perspectives for Nuclear Physics Research in Europe in the Coming Decade and Beyond”* available from http://www.nupecc.org/pub/ K. Jungmann, Nucl. Phys. A [**751**]{} (2005) 87c. H.W. Wilschut, Hyperfine Interactions [**146/147**]{} (2003) 77. K. Jungmann, in Spin 2004, K. Aulenbacher, T. Bradamante, A. Bressan, A. Matini (eds.), World Scientific, Singapore (2005), 108. A.R. Young, M. Boswell, G.P. Berg, A. Rogachevskiy, M. Sohani and E. Traykov, KVI Annual report 2004, p. 17. G.P.A. Berg, O.C. Dermois, U. Dammalapati, P. Dendooven, M.N. Harakeh, K. Jungmann, C.J.G. Onderwater, A. Rogachevskiy, M. Sohani, E. Traykov, L. Willmann, and H.W. Wilschut, submitted to Nucl. Instr. Meth.
(nucl-ex/0509013). L. Achouri, J.-C. Angélique, G. Ban, G.P.A. Berg, B. Blank, G. Canchel, P.G. Dendooven, J. Giovinazzo, K. Jungmann, E. Liénard, I. Matea, O. Naviliat-Cuncic, N. Orr, A. Rogachevskiy, M. Sohani, E. Traykov, and H.W. Wilschut, KVI experiment P01 and Annual report 2004, p. 11. N.D. Scielzo, S.J. Freedman, B.K. Fujikawa and P.A. Vetter, Phys. Rev. Lett. [**93**]{} (2004) 102501. see e.g. D.E. Alburger, Phys. Rev. C [**9**]{} (1974) 991 and H.S. Wilson et al., Phys. Rev. C [**22**]{} (1980) 1696. see e.g. M.D. Lunney and R.B. Moore, International Journal of Mass Spectrometry [**190/191**]{} (1999) 153. M. Huyse et al., Nucl. Instr. Meth. B [**187**]{} (2002) 535. D.J. Morrissey, private communication. R. Kirchner, Nucl. Instr. Meth. A [**292**]{} (1990) 203. H.W. Wilschut, AIP Conference Proceedings [**802**]{} (2005) 223. N. Severijns, M. Beck and O. Naviliat-Cuncic, submitted to Rev. Mod. Phys. J.W. Turkstra, R. Hoekstra, S. Knoop, D. Meyer, R. Morgenstern, and R.E. Olson, Phys. Rev. Lett. [**87**]{} (2001) 123202. S. Knoop, M. Keim, H.J. Lüdde, T. Kirchner, R. Morgenstern, and R. Hoekstra, J. Phys. B: At. Mol. Opt. Phys. [**38**]{} (2005) 3163. A. Gorelov et al., Phys. Rev. Lett. [**94**]{} (2005) 142501. J.D. Jackson, S.B. Treiman and H.W. Wyld, Phys. Rev. [**106**]{} (1957) 517. [^1]: Presented at the XXIX Mazurian Lakes Conference on Physics, Piaski, Poland, August 30 - September 6, 2005
--- author: - 'Nayana Shah, David Pekker, Paul M. Goldbart' title: 'Inherent stochasticity of superconductive-resistive switching in nanowires' --- **Hysteresis in the current-voltage characteristic in a superconducting nanowire reflects an underlying bistability. As the current is ramped up repeatedly, the state switches from a superconductive to a resistive one, doing so at random current values below the equilibrium critical current. Can a single phase-slip event somewhere along the wire—during which the order-parameter fluctuates to zero—induce such switching, via the local heating it causes? We address this and related issues by constructing a stochastic model for the time-evolution of the temperature in a nanowire whose ends are maintained at a fixed temperature. The model indicates that although, in general, several phase-slip events are necessary to induce switching, there is indeed a temperature- and current-range for which a single event is sufficient. It also indicates that the statistical distribution of switching currents initially broadens, as the temperature is reduced. Only at lower temperatures does this distribution show the narrowing with cooling naively expected for resistive fluctuations consisting of phase slips that are thermally activated.** ![Model [a.]{} Schematic of an experimental configuration described by our model: a superconducting nanowire is suspended between two thermal baths. [b.]{} Sketch showing the attenuation of the order parameter in the core of a phase-slip. [c.]{} Schematic of the simplified model. All phase slips are taken to occur in the central (i.e. shaded) segment of length $l$, which is assumed to be at a uniform temperature $T$; heat is carried away through the end segments, which are assumed to have no heat capacity. The temperature at the ends of the wire is fixed to be $T_{\text{b}}$. [d.]{} Sketch of a typical temperature profile. 
[]{data-label="FIG:schematic"}](Schematics5b){width="8cm"} The essential qualitative characteristics of quasi-one-dimensional superconducting nanowires are controlled by fluctuations of the superconducting order parameter, these fluctuations being predominantly thermal or quantal, depending on the temperature regime[@SkocpolT1975]. Bulk superconductors undergo a sharp transition from an electrically resistanceless (i.e. superconducting) to a resistive (i.e. normal) state, e.g., with increasing temperature. In contrast, as explained by Little[@Little1967] and Langer and Ambegaokar[@LangerA1967], in quasi-one-dimensional superconductors the resistanceless (and truly long-range-ordered) state is destabilized by a certain class of accessible order-parameter fluctuations that connect topologically distinct sectors of current-carrying states. In so doing, these fluctuations, which are known as phase-slip events, can dissipate supercurrent, and because of them such systems undergo a broad evolution between the (nominally) superconductive and normal states, e.g., with increasing temperature. Recent advances in the fabrication of ultra-narrow superconducting wires—using carbon nanotube-[@BezryadinLT2000] or DNA-templating[@HopkinsPGB2005]—have spurred renewed interest in one-dimensional superconductivity and opened up new avenues for investigating the impact of order-parameter fluctuations. One setting in which order-parameter fluctuations in superconducting nanowires have been widely investigated, both theoretically and experimentally, is that of transport properties in the vicinity of the normal-to-superconducting quantum phase transition[@BezryadinLT2000; @LopatinSV2005; @RogachevBB2005; @ShahL2007; @MaestroRSS2007]. In this setting, the primary mechanism underlying destruction of (nominal) superconducting order is depairing associated with magnetic fields or magnetic impurities. 
As is well known, applied currents also cause depairing and, if larger than a certain value (known as the thermodynamic critical or depairing current), would render the superconducting state locally unstable (regardless of the role of phase-slip fluctuations)[@Bardeen1962; @RomijnKRM1982; @Tinkham]. However, phase-slip fluctuations, which are responsible for the broad resistive transition in quasi-one-dimensional superconductors, also allow for premature switching[@Giordano1990; @TinkhamFLM2003; @AltomareCMHT2006] to the resistive state, i.e. a nonequilibrium transition from the (nominally) superconducting, low-resistivity state to the (nominally) normal, high-resistivity one. If damping of the order-parameter dynamics were low, a single phase-slip event would induce such switching, in analogy with what happens in underdamped Josephson junctions. By contrast, nanowires are generally overdamped, and so, whilst causing resistance, phase slippage does not, by itself, induce switching. As discussed in Ref. \[\], this resistance causes Joule heating which, if not overcome sufficiently rapidly by conductive cooling, effectively reduces the depairing current, ultimately to below the applied current, thus causing switching to the highly resistive state. Naturally, this switching is not deterministic, owing to the underlying stochasticity of the phase-slip events that are responsible for the resistance. Rather, for a given subcritical applied current there is a statistical distribution of times at which switching occurs, characterized by a mean switching time (i.e. a superconducting state lifetime). Our focus here is on *stochastic* aspects of the superconducting-to-resistive switching dynamics, an area that has not received much attention, to date. 
*Inter alia*, by obtaining the current-dependent mean switching time and convolving it with the sweep rate of the applied current that describes the experimental protocol, we shall determine the statistical distribution of currents at which switching occurs. Besides its fundamental significance, the characterization of switching dynamics in nanowires seems likely to have technological implications, such as for the integration of superconducting wires into electronic circuitry as controllable (current-limiting) switching elements, the implementation of nanowire-based devices[@HopkinsPGB2005; @PekkerBHG2005; @JohanssonSSJT2005], and the exploration of the use of nanowires in quantum computers. ![[ Hysteresis in the I-V characteristic obtained by finding the steady-state solutions of Eq. (\[HCE\]) in the continuous Joule heating limit. ]{}[]{data-label="FIG:hysteresis"}](hyst2b){width="8cm"} Having in mind the configuration in recent and ongoing experiments on superconducting nanowires, we consider a free-standing wire of length $L$ and cross-sectional area $A$, the ends of which are held at a fixed temperature $T_{\text{b}}$, as shown in Fig. \[FIG:schematic\]. The fact that the wire is free-standing (i.e. lacks any substrate) is conducive to a clear interpretation of the measurements. On the other hand, the absence of an overall thermal bath means that any heat generated locally in the wire by a source term $Q\ $can be taken away only through the ends; the corresponding heat conduction equation for the temperature ${\Theta}(x,t)$ at position $x$ along the wire at time $t$ reads$$C_{\text{v}}({\Theta})\,\partial_{t}{\Theta}(x,t)=\partial_{x}\left[ K_{\text{s}}({\Theta})\,\partial_{x}{\Theta}(x,t)\right] +Q(x,t), \label{HCE}$$ and is characterized by the specific heat $C_{\text{v}}({\Theta})$ and thermal conductivity $K_{\text{s}}({\Theta})$ of the wire, together with the boundary condition ${\Theta}(\pm L/2,t)=T_{\text{b}}$ at its ends. 
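A minimal numerical sketch of the steady-state limit of Eq. (\[HCE\]) follows. Constant $K_{\text{s}}$ and a prescribed, temperature-independent heating density $Q(x)$ are simplifying assumptions made here for illustration (in the text, $Q$ depends on the local temperature through $R({\Theta},I)$).

```python
import numpy as np

def steady_state_profile(L, n, K, Q, T_b, n_iter=20000):
    """Relax the steady state of the heat conduction equation,
    K*T''(x) + Q(x) = 0 with T(+-L/2) = T_b, by Jacobi iteration
    on a uniform grid. K is a constant thermal conductivity and
    Q(x) a fixed heating density (both simplifications)."""
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    T = np.full(n, float(T_b))          # boundary points stay at T_b
    src = Q(x) * dx**2 / (2.0 * K)
    for _ in range(n_iter):
        # Jacobi update of the interior points (numpy evaluates the
        # right-hand side before assigning, so this is not Gauss-Seidel)
        T[1:-1] = 0.5 * (T[:-2] + T[2:]) + src[1:-1]
    return x, T
```

For a uniform $Q_0$ this reproduces the parabolic profile $T(x)=T_{\text{b}}+(Q_0/2K)(L^2/4-x^2)$, a useful sanity check before temperature-dependent coefficients are introduced.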
Note that although our analysis rests on the premise that there are no additional heat-removing channels, it can readily be extended to account for such possibilities. Before addressing dynamical issues, let us dwell briefly on the steady-state solutions of the heat conduction equation, obtained by setting $\partial _{t}{\Theta}(x,t)=0$ and assuming that the wire is subjected to temporally continuous Joule heating at a rate given by $ALQ(x)=I^{2}R({\Theta}(x),I)$. Here, the function $R({\Theta}^{\prime},I)$ is to be understood as the resistance of an entire wire held at a uniform temperature ${\Theta }(x)={\Theta}^{\prime}$. The system-wide I-V characteristic at a given boundary value $T_{\text{b}}$ can be traced by obtaining the temperature profile ${\Theta}(x)\ $for every value of current $I$ in both up and down (parametric) sweeps of $I$. On determining $R({\Theta},I)$ via the current-biased version of LAMH theory[@LangerA1967; @McCumber1968; @McCumberH1970] for phase-slips, the $I$-$V$ curves are indeed found to become progressively more hysteretic in $I\ $as $T_{\text{b}}$ is lowered (see Fig. \[FIG:hysteresis\]). This steady-state problem was previously studied by Tinkham et al.[@TinkhamFLM2003], who used for $R({\Theta},I)$ the experimental linear-response resistance measured at $T_{\text{b}}={\Theta}$, leading to qualitative agreement with the hysteresis observed in MoGe nanowires[@TinkhamFLM2003]. Our aim here is to study the inherent stochasticity in the switching process, and therefore it is necessary for us to explicitly take into consideration the fact that the resistive fluctuations of the superconducting nanowire consist of discrete phase-slip events (labelled by $i$) that take place at random moments of time $t_{i}$ and are centered at random spatial locations $x_{i}$. 
The work done on the wire by a phase slip may be obtained from the time integral of $IV(t)$, in which the Josephson relation $d\phi/dt=$ $2eV/\hslash$ may be used to relate the voltage pulse to the rate of change of the phase difference[@Tinkham], via fundamental constants $\hslash$ and $e$. Hence, a single phase slip (or anti-phase slip), which corresponds to a decrease (or increase) of $\phi$ by $2\pi$, will heat (or cool) the wire by a quantum of energy $hI/2e$. Thus we arrive at the central thrust of our paper: the dynamics of switching from the superconducting to the resistive state in the nanowires is controlled by a heat conduction equation that is stochastic by virtue of its source term: $$Q(x,t)=\frac{hI}{2e}\frac{1}{A}\sum_{i}\sigma_{i}F(x-x_{i})\delta(t-t_{i}),$$ where $F(x-x_{i})$ is a normalized (to unity) form factor representing the relative spatial distribution of heat produced by the $i^{\text{th}}$ phase-slip event, and $\sigma_{i}=\pm1$ for phase (anti-phase) slips. The probability per unit time $\Gamma_{\pm}$ for an anti-phase (phase) slip to take place, depends on the local temperature ${\Theta}(x,t)$ and the current $I$. The randomness in $x_{i}$ and $t_{i}$ generates the stochasticity in the switching from the superconductive to the resistive state. To capture the essential physics whilst making the problem amenable to analysis, we shall consider the simpler model, represented in Figs. \[FIG:schematic\]c and \[FIG:schematic\]d. Given that the edge effects favor phase-slip locations away from the wire ends, the source term is restricted to the region near the center of the wire. The system is thus modeled by assuming that (i) the heating takes place within a central segment of length $l$ to which a uniform temperature $T$ is assigned, and (ii) the heat is conducted away through the end segments, within which we ignore the heat capacity[@fn1]. 
To simplify the problem further, we make use of the fact that $\Gamma_{+}\ll\Gamma_{-}$ and ignore the process of cooling by anti-phase slips. To account indirectly for their presence, we use a reduced rate $\Gamma\equiv\Gamma_{-}-\Gamma_{+}$ instead of $\Gamma_{-}$ for phase-slip events. This ensures that the discrete expression for $Q$ will correctly reduce to the continuous Joule-heating expression, in view of the LAMH formula $R(T,I)=$ $h\Gamma/2eI$. By using the model defined above, the description reduces to a stochastic ordinary differential equation for the time-evolution of the temperature of the central segment: $$\frac{dT}{dt}=-\alpha(T,T_{\text{b}})(T-T_{\text{b}})+\eta(T,I)\sum_{i}\delta(t-t_{i}), \label{SDE}$$ where the second term on the RHS corresponds to heating by phase slips, and the first term to cooling as a result of conduction of heat from the central segment to the external bath via the two end-segments, each of length $(L-l)/2$. The temperature-dependent cooling rate $\alpha$ is given by $$\alpha(T,T_{\text{b}})\equiv\frac{4}{l(L-l)C_{\text{v}}(T)}\frac {1}{T-T_{\text{b}}}\int_{T_{\text{b}}}^{T}dT^{\prime}\,K_{\text{s}}(T^{\prime })\text{.}$$ If $T_{\text{i}}$ and $T_{\text{f}}$ are the temperatures before and after a phase slip then, using $$A\text{ }l\int_{T_{\text{i}}}^{T_{\text{f}}}C_{\text{v}}(T^{\prime })\,dT^{\prime}=\frac{hI}{2e},$$ we can express the temperature ‘impulse’ due to a phase slip, i.e. $T_{\text{f}}-T_{\text{i}}\equiv\eta(T_{\text{i}},I)\equiv\widetilde{\eta}(T_{\text{f}},I)$, as a function of either $T_{\text{i}}$ or $T_{\text{f}}$, depending on the context. Let us now elucidate the physical and mathematical structure of Eq. (\[SDE\]). To begin with, we shall consider the continuous-heating limit, $\eta(T,I)\Gamma(T,I)$, for the source term, and express Eq. (\[SDE\]) as $dT/dt=-\partial U/\partial T$. In Fig.
\[FIG:potential\], we illustrate the form of the ‘potential’ $U(T,T_{\text{b}},I)$ for fixed $T_{\text{b}}$: there is a range of currents $I$ for which $U$ has two local minima, corresponding to the superconducting (at low-$T$) and the resistive (at high-$T$) states, separated by a local maximum. The resulting bistability is central to the underlying physics. On the one hand, it explains the origin of the hysteretic behavior; on the other hand, it provides a basis for phrasing the question of stochastic switching dynamics in superconducting nanowires in terms of an existing general framework for stochastic bistable systems. In what follows, we focus on the stochastic variable $T(t)$; to ease the notation we do not display the dependences on $I$ and $T_{\text{b}}$ unless essential. To continue the analysis of the stochastic equation, imagine turning off the cooling term. If we now start with an initial temperature $T_{0}$ then $$T_{0},T_{0}+\eta(T_{0}),T_{0}+\eta(T_{0})+\eta(T_{0}+\eta(T_{0})),... \label{sequence}$$ defines the discrete sequence of values that $T$ jumps to, as marked on the horizontal axes in Fig. \[FIG:potential\] for $T_{0}=T_{\text{b}}$. The probability per unit time, $\Gamma(T)$, to make a jump changes at each step, and so does the size $\eta(T)$ of the jump, owing to their explicit dependence on temperature. On the other hand, if we turn off the heating term then we have a deterministic problem in which $T\ $would decay at a rate $\alpha(T)$, from its initial value $T_{0}>T_{\text{b}}$ to the bath temperature $T_{\text{b}}$, which is the lowest value $T\ $can have. It is the competition between the discrete heating and the continuous cooling that makes for a rather rich stochastic problem. We hope that our solution will also furnish insight into other physical problems that possess a similar mathematical structure. 
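This competition between discrete heating and continuous cooling can be explored by direct simulation of Eq. (\[SDE\]); the constant rate and jump functions used in the example below are illustrative stand-ins for the temperature-dependent $\Gamma(T,I)$ and $\eta(T,I)$ of the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def switching_time(T_b, T_star, alpha, eta, gamma, dt=1e-4, t_max=10.0):
    """Integrate Eq. (SDE) in small steps dt: deterministic cooling at rate
    alpha(T) toward T_b, plus heating jumps of size eta(T) that occur with
    probability gamma(T)*dt per step. Returns the first time the central-
    segment temperature exceeds T_star, or None if it never does by t_max."""
    T, t = float(T_b), 0.0
    while t < t_max:
        T -= alpha(T) * (T - T_b) * dt       # continuous conductive cooling
        if rng.random() < gamma(T) * dt:     # a phase slip fires
            T += eta(T)
        t += dt
        if T > T_star:
            return t
    return None
```

Repeating such runs many times yields the statistical distribution of switching times, whose mean is the quantity $\tau_{\text{s}}$ that the analysis in the text computes without simulation.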
![Effective potential $U(T,T_{b},I)$ (dashed line) and the mean first-passage time $\tau$ as functions of the temperature $T\ $of the central segment for various bias currents $I$ and for $T_{\text{b}}=1.2\,$K. The marks on the temperature axis indicate the temperatures that the central segment would have after $1,2,...,10$ phase slips in the absence of cooling (as given by Eq. (\[sequence\]) for $T_{0}=T_{\text{b}}$).[]{data-label="FIG:potential"}](MFPT_Table3b){width="8cm"} The master equation for $P(T,t)$, the probability for the temperature of the central segment of the nanowire to be $T$ at a time $t$ (given that it had some initial value $T_{0}$ at time $t_{0}$), reads $$\begin{aligned} \partial_{t}P(T,t) & =\partial_{T}\,[(T-T_{\text{b}})\,\alpha (T)\,P(T,t)]-\Gamma(T)\,P(T,t)\nonumber\\ & +\Gamma\big (T-\widetilde{\eta}(T)\big)\,P\big (T-\widetilde{\eta }(T),t\big)\,\big (1-\partial_{T}\,\widetilde{\eta}(T)\big)\text{,} \label{ME}\end{aligned}$$ where the first (i.e. the transport) term corresponds to the effect of cooling, and the last two terms correspond to the effects of heating. Note that the term $(1-\partial_{T}\widetilde{\eta}(T))$ appears because of the dependence of the jump size on $T$, as given by $\widetilde{\eta}(T)$. The fundamental quantity of interest is the mean switching time $\tau_{\text{s}}(T_{\text{b}},I),$ i.e. the mean time required for the wire to switch from being superconductive to resistive, assuming that the wire has temperature $T=T_{\text{b}}$ when the current $I$ is turned on at time $t=0$. The master equation, Eq. (\[ME\]), provides the starting point for generalizing the standard procedure for computing $\tau_{\text{s}}$ via the evaluation of the mean first-passage time[@Kampen].
The mean first-passage time $\tau(T\rightarrow T^{\text{*}})$, to go past a point $T=T^{\text{*}}$ for the first time having started from some $T<T^{\text{*}}$, can be shown to satisfy the equation $$-(T_{\text{b}}-T)\,\alpha(T)\,\partial_{T}\,\tau(T)+\Gamma(T)\left[ \tau\big (T\big)-\tau\big (T+\eta(T)\big)\right] =1, \label{MFPT}$$ together with the conditions $\tau(T)=0$ for $T>T^{\text{*}}$ and $d\tau(T)/dT=0$ at $T=T_{\text{b}}$, which are appropriate for our problem. Some illustrative plots for $\tau(T\rightarrow T^{\text{*}})$, obtained by numerically solving Eq. (\[MFPT\]) are shown in Fig. \[FIG:potential\], with the choice of $T^{\text{*}}$ being somewhat larger than the location of the local maximum of $U$. From these plots we see that the mean first-passage time has a plateau at low values of $T\ $and then rapidly decreases in the vicinity of the potential barrier. At very high currents, as can be seen from the last panel of Fig. \[FIG:potential\], the local stability of the superconducting state disappears, and so does the plateau in the mean first-passage time. In these plots, the tick marks on the $T$ axes correspond to the temperatures given by the sequence (\[sequence\]) for $T_{0}=T_{\text{b}}$. As long as the high-$T$ minimum is lower than the low-$T$ one, and $T^{\text{*}}$ is chosen to be appreciably past the intervening potential maximum (in order to eliminate the possibility of reversion to the superconducting state), we can make the identification $\tau_{\text{s}}(T_{\text{b}},I)\equiv\tau(T_{\text{b}}\rightarrow T^{\text{*}},T_{\text{b}},I)$. The number of tick marks (see sequence (\[sequence\])) between $T_{\text{b}}$ and $T^{\text{*}}$ is nothing but the number $N(T_{\text{b}},I)$ of phase-slip events required to raise the temperature of the central segment from $T_{\text{b}}$ to $T^{\text{*}}$ in the absence of cooling.
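In the no-cooling limit ($\alpha\rightarrow0$), Eq. (\[MFPT\]) collapses to the recursion $\tau(T)=\tau(T+\eta(T))+1/\Gamma(T)$: the temperature visits exactly the sequence (\[sequence\]) and waits on average $1/\Gamma(T)$ at each member, so the mean first-passage time is the sum of these waiting times over the $N(T_{\text{b}},I)$ steps. The sketch below implements this simplified limit only (it drops the $\alpha$ term); the toy $\eta$ and $\Gamma$ are placeholders for the LAMH expressions.

```python
def tau_no_cooling(T_b, T_star, eta, gamma):
    """Mean first-passage time past T_star in the no-cooling limit:
    tau(T) = tau(T + eta(T)) + 1/gamma(T), summed over the jump sequence."""
    tau, T = 0.0, float(T_b)
    while T <= T_star:
        tau += 1.0 / gamma(T)   # mean wait for the next phase slip at T
        T += eta(T)             # the jump itself is instantaneous
    return tau
```

This limit is accurate whenever the sequence of phase slips is fast compared with the conductive cooling, which is the regime where $N$ phase slips suffice to cross the barrier.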
Accordingly, $N(T_{\text{b}},I)$ also provides an estimate of the number of phase-slip events needed to overcome the potential barrier if the timespan of these events is insufficient to allow significant cooling to occur. ‘Thermal runaway’—heating by rare sequences of closely-spaced phase slips that overcome the potential barrier—constitutes the mechanism of superconductive-to-resistive switching within our model. As $N(T_{\text{b}},I)$ becomes large, the total number of phase-slip events taking place before switching can happen, and correspondingly the value of $\tau_{\text{s}}(T_{\text{b}},I)$, may indeed be quite large. ![Switching statistics [a. ]{}Map showing $N(T_{\text{b}},I)$ (see text) and the contour lines (solid lines) for the inverse of the mean switching time, $\tau_{\text{s}}^{-1}=1,$ $10^{3},$ $10^{6}\text{ s}^{-1}$; the contour lines (dashed lines) for the phase-slip rate $\Gamma$ are also shown. The thermodynamic (depairing) critical current (dashed-dotted line) is plotted for reference. [b.]{} Switching-current distributions $P_{\text{SW}}$ obtained at various values of $T_{\text{b}}$ and for $r=58\,\mu\text{A}/\text{s}$. [c. ]{}The logarithms of $\tau_{\text{s}}^{-1}$ (colored lines) and of $\Gamma$ (thinner black lines) as a function of $I$, obtained for the same set of $T_{\text{b}}$ values as in panel (a). The colors of the $\tau_{\text{s}}^{-1}$ plots correspond to different values of $N(T_{\text{b}},I)$ \[as indicated in the legend of panel (a)\].[]{data-label="FIG:mean-switching-time"}](PSnumber_EscapeRate_hb){width="8cm"} Our key findings are summarized in Fig. \[FIG:mean-switching-time\]. There is a region of $I\ $and $T_{\text{b}}$ for which the occurrence of just one phase slip is sufficient to cause the nanowire to switch from the superconductive to the resistive state[@fn2]; in this case $\tau _{\text{s}}^{-1}=\Gamma$. A switching measurement in this range can thus provide a way of detecting and probing a single phase-slip fluctuation.
As, outside this range, several phase-slip events are required for switching, $\tau_{\text{s}}^{-1}$ deviates from $\Gamma$ (see panel \[FIG:mean-switching-time\]c). A graphical representation of the contour lines for a few values of $\tau_{\text{s}}^{-1}$ and $\Gamma$, chosen in an experimentally accessible range, is provided in panel \[FIG:mean-switching-time\]a. Whilst the spacing between the $\Gamma$ contour lines decreases monotonically on lowering $T_{\text{b}}$, the spacing between $\tau_{\text{s}}^{-1}$ lines can be seen to behave non-monotonically. The mean switching time $\tau_{\text{s}}$ in bistable current-biased systems can be either directly measured or extracted from the switching-current statistics[@FultonD1974] generated via the repeated tracing of the $I$-$V\ $characteristic by ramping the current up and down at some sweep rate $r$. For this reason, in Fig. \[FIG:mean-switching-time\]b we have illustrated the behavior of this distribution of switching currents in superconducting nanowires based on the theory presented here. Upon raising $T_{\text{b}}$, one would naively expect the distribution to become broader for a model involving thermally activated phase slips. Such a broadening of the distribution width is indeed obtained up to a crossover temperature scale $T_{\text{b}}^{\text{cr}}(r)$ (i.e. the temperature below which, loosely speaking, switching is induced by single phase slips). However, on continuing to raise $T_{\text{b}}$, but now through temperatures above $T_{\text{b}}^{\text{cr}}(r)$, the distribution width shows a seemingly anomalous decrease. This is a manifestation of the now-decreasing spacing between the $\tau_{\text{s}}$ contour lines.
This striking behavior above $T_{\text{b}}^{\text{cr}}$ may be understood by the following reasoning: the larger the number of phase-slips in the sequence inducing the superconductive-to-resistive thermal runaway, the smaller the stochasticity in the switching process and, hence, the sharper the distribution of switching currents. This non-monotonicity in the temperature dependence of the width of the switching-current distribution and the existence of a regime in which a single phase-slip event can be probed are the two key predictions of our theory. We gratefully acknowledge invaluable discussions with A. Bezryadin, M. Sahu, and T-C. Wei, and, on the issue of numerical approaches, with B. K. Clark. This work was supported by the DOE Division of Materials Sciences under Award No. DE-FG02-07ER46453, through the Frederick Seitz Materials Research Laboratory at the University of Illinois at Urbana-Champaign, and by NSF grant DMR 0605813.
--- author: - | [Tongsuo Wu$^a$[^1], Dancheng Lu$^b$[^2], Yuanlin Li$^c$[^3]]{}\ [$^a$Department of Mathematics, Shanghai Jiaotong University]{}\ [Shanghai 200240, P. R. China]{}\ [$^b$Department of Mathematics, Suzhou University, Suzhou 215006, P.R. China]{}\ [$^c$Department of Mathematics, Brock University, St. Catharines, On., Canada L2S 3A1]{}\ title: ' **On zero divisors and prime elements of po-semirings**' --- [3mm]{}[**Abstract.**]{} [A semiring is an algebraic structure similar to a ring, but without the requirement that each element must have an additive inverse. A po-semiring is a semiring equipped with a compatible bounded partial order. In this paper, properties of zero divisors and prime elements of a po-semiring are studied. In particular, it is proved that under some mild assumption the set $Z(A)$ of nonzero zero divisors of $A$ is $A\setminus \{0,1\}$, each prime element of $A$ is a maximal element, and the zero divisor graph ${\Gamma}(A)$ of $A$ is a finite graph if and only if $A$ is finite. For a po-semiring $A$ with $Z(A)=A\setminus \{0,1\}$, it is proved that $A$ has finitely many maximal elements if ACC holds either for elements of $A$ or for principal annihilating ideals of $A$. As applications of prime elements, it is shown that the structure of a po-semiring $A$ is completely determined by the structure of integral po-semirings if either $|Z(A)|=1$ or $|Z(A)|=2$ and $Z(A)^2\not=0$. Applications to the ideal structure of commutative rings are considered. ]{} [3mm]{}[Key Words:]{} [po-semiring, zero divisor, prime element, small $Z(A)$, graph property]{} [4mm]{}[**1. Introduction and preliminaries** ]{} [3mm]{}Throughout this paper, all semigroups $S$ and all rings $R$ are assumed to be commutative with zero element $0_S$ ( i.e., $0S=0$) and with identity $1_R$ respectively. For a semigroup $S$, let $Z(S)$ be the set of nonzero zero divisors and $S^*$ the set of nonzero elements of $S$. 
For a ring $R$, let $U(R)$ be the set of invertible elements of $R$, $N(R)$ the nil radical and $J(R)$ the Jacobson radical of $R$. A [*commutative semiring*]{} is a set $A$ which contains at least two elements $0,1$ and which is equipped with two binary operations, $+$ and $\cdot$, called addition and multiplication respectively, such that the following conditions hold: 1\. $(A, +,0)$ is a commutative monoid with zero element $0$. 2\. $(A, \cdot,1)$ is a commutative monoid with identity element 1. 3\. Multiplication distributes over addition. 4\. $0$ annihilates $A$ with respect to multiplication, i.e., $0\cdot a = 0, \,\forall a \in A$. If there is no nonzero zero-divisor in a semiring $A$, then $A$ is called an [*integral semiring*]{}. Certainly, each ring is a semiring. Other important examples of semirings include the set $\mathbb I(R)$ of ideals of a commutative ring $R$, the set $\mathbb N$ of nonnegative integers, and the real segment $[0,1]$ whose addition is the $\max$ operation. Note that both $\mathbb N$ and $[0,1]$ are integral semirings. Based on a semiring structure, an excellent and rather general framework for constraint satisfaction and optimization was developed in [@BMR; @BMRSVF]. Next we introduce a new notion which will be the central topic of this paper. [3mm]{}[**Definition 1.1.**]{} A [*partially-ordered semiring*]{} is a commutative semiring $(A, +, \cdot,0,1)$, together with a compatible partial order $\le$, i.e., a partial order $\le$ on the underlying set $A$ that is compatible with the semiring operations in the sense that it satisfies the following conditions: \(1) $x\le y$ implies $x+z\le y+z$, and \(2) $0\le x$ and $y\le z$ imply that $xy\le xz$ for all $x,y,z$ in $A$. If $A$ satisfies the following additional condition, then $A$ is called a [*po-semiring*]{}: \(3) The partially ordered set $(A,\le, 0,1)$ is bounded, i.e., $1$ is the largest element and $0$ is the least element of $A$.
[3mm]{}We remark that condition (3) is so strong that it forces a po-semiring $A$ to be a dioid, where a semiring is called a [*dioid*]{} if its addition is idempotent ($a+a=a,\,\forall a\in A$). Furthermore, the above-defined partial order $\le$ for a po-semiring $A$ is identical with the new partial order $\le_1$ defined by the following $$\text{$a\le_1 b$ if and only if $a+b=b$}.$$In other words, $(A,+,0,1)$ is a bounded join-semilattice. Clearly, any bounded, distributive lattice is a commutative dioid under join and meet, where $a\le b$ iff $a\wedge b=a$. Each bounded, distributive lattice is certainly also a po-semiring under Definition 1.1. An element $p$ of a po-semiring is called [*prime*]{}, if $p\not=1$ and $xy\le p$ implies either $x\le p$ or $y\le p$. An element $x$ is called [*minimal*]{}, if $x\not=0$ and $0<y\le x$ implies $x=y$. An element $\mathfrak{m}$ is called [*maximal*]{}, if $\mathfrak{m}\not=1$ and $\mathfrak{m}\le x<1$ implies $\mathfrak{m}=x$. An [*ideal*]{} $I$ of a po-semiring $A$ is an additive sub-semigroup containing the zero element $0_A$ such that $IA{\subseteq}I$. For each element $u$ of a po-semiring $A$, set $$<u>\,=\{x\in A\,|\,x\le u\}.$$Clearly, $<u>$ is an ideal of $A$, called the [*lower principal ideal generated by $u$*]{}. The principal ideal $uA$ is certainly another ideal generated by $u$, and clearly $uA{\subseteq}\, <u>$. An ideal $I$ of $A$ is called [*hereditary*]{}, if $<u>\,{\subseteq}I$ holds for all $u$ in $I$. For any element $u$ of $A$, both $<u>$ and the annihilator $ann_A(u)=\{x\in A\,|\,xu=0\}$ are hereditary ideals of $A$. Our prototype of po-semiring is the po-semiring $\mathbb I(R)$ of a commutative ring $R$, which consists of all ideals of $R$. The multiplication is the ideal multiplication, and the addition is the addition of subsets. The partial order is the usual inclusion.
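To make the prototype concrete, $\mathbb I(\mathbb Z/n\mathbb Z)$ admits a small computational model: its ideals correspond to the divisors $d$ of $n$, with sum $\gcd(d_1,d_2)$, product $\gcd(d_1d_2,n)$, and $(a)\le (b)$ iff $b\mid a$. The following brute-force sketch (ours, not part of the paper; $n=12$ is an arbitrary choice) checks Definition 1.1 and the dioid remark above, and observes that the prime and maximal elements coincide in this example:

```python
from math import gcd
from itertools import product

n = 12
A = [d for d in range(1, n + 1) if n % d == 0]   # divisor d  <->  ideal dZ/nZ

one, zero = 1, n                       # (1) = R is largest, (n) = {0} is least
add = lambda a, b: gcd(a, b)           # ideal sum
mul = lambda a, b: gcd(a * b, n)       # ideal product
leq = lambda a, b: a % b == 0          # (a) <= (b)  iff  b divides a

for x, y, z in product(A, repeat=3):
    if leq(x, y):
        assert leq(add(x, z), add(y, z))         # Definition 1.1 (1)
        assert leq(mul(z, x), mul(z, y))         # Definition 1.1 (2)
for x in A:
    assert leq(zero, x) and leq(x, one)          # Definition 1.1 (3): bounded
    assert add(x, x) == x                        # dioid: idempotent addition

# maximal elements and prime elements of I(Z/12) both come out as (2), (3)
maximal = [m for m in A if m != one
           and not any(x not in (m, one) and leq(m, x) for x in A)]
prime = [p for p in A if p != one
         and all(leq(x, p) or leq(y, p)
                 for x, y in product(A, repeat=2) if leq(mul(x, y), p))]
assert maximal == prime == [2, 3]
```

The model also exhibits a non-prime, non-maximal element: $(6)$ satisfies $(2)(3)\le(6)$ with neither factor below $(6)$.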
The po-semiring $\mathbb I(R)$ also has the property that for each element $v\not=1$, there exists a maximal element $\mathfrak{m}$ such that $v\le \mathfrak{m}.$ In $\mathbb I(R)$, there is the notion of infinite sum of elements of $\mathbb I(R)$. Notice that $\mathbb I(R)$ is also a bounded lattice under $I\wedge J=I\cap J$ and $I\vee J=I+J$, but it is not necessarily a distributive lattice for a general ring $R$. Denote $\{0,1\}=\mathbb I(F)$ for any field $F$, and note that it is the smallest po-semiring, i.e., it can be embedded into any po-semiring. Throughout the paper, we always assume that one of the following three additional conditions on a po-semiring $A$ holds: [2mm]{}$(C_1)$: [*For each non-nilpotent element $u$ of $A$, there is a nonzero idempotent $w$ such that $w\le u$ and $w$ has an orthogonal idempotent complement in $A$, i.e., there exists an idempotent element $v$ in $A$ such that $w+v=1,wv=0$.*]{} [2mm]{} [2mm]{}$(C_2)$: [*For each nonzero idempotent element $u$ of $A$, there is a nonzero idempotent $w$ such that $w\le u$ and $w$ has an orthogonal idempotent complement in $A$.*]{} [2mm]{}$(C_3)$: [*Each idempotent minimal element of $A$ has an orthogonal idempotent complement in $A$.*]{} [2mm]{} Clearly, $(C_1)\Lra (C_2)\Lra(C_3)$ hold for any po-semiring $A$. In Sections 2 and 3, examples will be given to show that the reverse implications do not hold. Note that if $Z(A)\not={\emptyset}$, then each minimal element of $A$ is a zero divisor (see Lemma 3.4(2)). [3mm]{}[**Proposition 1.2.**]{} [*For any commutative ring $R$, let $A=\mathbb{I}(R)$. Then $A$ satisfies condition $(C_3)$.*]{} \(1) [*If further $R$ is a noetherian exchange ring, then $A$ satisfies condition $(C_2)$.*]{} \(2) [*If further $R$ is a noetherian exchange ring and $J(R)=N(R)$, then $A$ satisfies condition $(C_1)$. 
In particular, it holds for any artinian ring $R$.*]{} \(3) [*If further $A$ satisfies condition $(C_1)$, then $J(R)=N(R)$.*]{} [3mm]{}The first conclusion follows from Brauer’s Lemma, see [@Lam 10.22]. \(1) Let $I$ be a nontrivial idempotent ideal of $R$. Then $I$ is finitely generated, and thus $I\not{\subseteq}J(R)$ by Nakayama's Lemma. Then the exchange property ensures the existence of a nonzero idempotent $e$ in $I$. Thus $(Re)^2=Re$, so $Re$ is a nonzero idempotent ideal with the orthogonal idempotent complement $R(1-e)$. Hence $(C_2)$ holds for $\mathbb I(R)$. \(2) An artinian ring $R$ is a noetherian exchange ring and each non-nilpotent ideal of $R$ contains a nonzero idempotent. \(3) For any $x\not\in N(R)$, $Rx$ is not nilpotent. By condition $(C_1)$, there exists a nonzero idempotent $e$ in $Rx$. Thus $x\not\in J(R)$. This shows $J(R)=N(R)$. [3mm]{}We remark that exchange rings include von Neumann regular rings, artinian rings, and semilocal rings in which idempotents lift modulo the Jacobson radical. For further information on exchange rings, see [@LW] and the references listed there. In this paper we investigate properties of zero divisors and prime elements of a po-semiring. Section 2 deals with the following questions: (1) Under what conditions can it occur that $Z(A)=A\setminus \{0,1\}$? (2) When does a po-semiring $A$ have only finitely many maximal elements? (3) When is every prime element also maximal? It is proved that $Z(A)=A\setminus \{0,1\}$ and that each prime element of $A$ is a maximal element, if one of the following conditions is satisfied: (i) Condition $(C_2)$ holds in $A$, and DCC holds for elements of $A$. (ii) Condition $(C_1)$ holds in $A$, and there exists no infinite set of orthogonal idempotents in $A$. For a po-semiring $A$ with $Z(A)=A\setminus \{0,1\}$, it is also proved that $A$ has finitely many maximal elements if ACC holds either for elements of $A$ or for principal annihilating ideals of $A$.
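Proposition 1.2(2) can also be spot-checked computationally in the prototype: $\mathbb Z/12$ is artinian, so condition $(C_1)$ should hold in $\mathbb I(\mathbb Z/12)$. A brute-force sketch of ours (ideals encoded as divisors of $n$, with product $\gcd(d_1d_2,n)$, sum $\gcd(d_1,d_2)$, and $(a)\le(b)$ iff $b\mid a$; the encoding is an illustration, not part of the paper):

```python
from math import gcd

n = 12
A = [d for d in range(1, n + 1) if n % d == 0]   # divisor d <-> ideal dZ/nZ
mul = lambda a, b: gcd(a * b, n)                 # ideal product
add = lambda a, b: gcd(a, b)                     # ideal sum
leq = lambda a, b: a % b == 0                    # (a) <= (b) iff b | a

def nilpotent(u):
    p = u
    for _ in range(len(A)):                      # powers stabilize quickly
        p = mul(p, u)
    return p == n                                # reached (n) = {0}

idem = [w for w in A if mul(w, w) == w]          # idempotent ideals

# condition (C_1): every non-nilpotent u dominates a nonzero idempotent w
# that has an orthogonal idempotent complement v (w + v = 1, wv = 0)
for u in (x for x in A if not nilpotent(x)):
    assert any(w != n and leq(w, u)
               and any(add(w, v) == 1 and mul(w, v) == n for v in idem)
               for w in idem)
```

For instance, for $u=(2)$ the witness pair found is $w=(4)$, $v=(3)$: $(4)+(3)=(1)$ and $(4)(3)=(12)=0$.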
In Section 3, we study the zero divisor graph ${\Gamma}(A)$ of a po-semiring $A$. In particular, for a po-semiring $A$ satisfying condition $(C_3)$, it is proved that the graph ${\Gamma}(A)$ is either a star graph or a two-star graph $K_1+K_1+K_1+D_r$ if ${\Gamma}(A)$ contains no cycle. It is proved that ${\Gamma}(A)\cong K_1+K_1+K_1+D_r$ if and only if $A\cong \{0,1\}\times S$, where $S$ is a po-semiring with $|Z(S)|=1$ (in this case, $r=|S|-2$). Example 3.6 shows further that for any $r$ with $1\le r\le \aleph_0$, there exists a po-semiring $A$ such that condition $(C_3)$ holds and ${\Gamma}(A)\cong K_1+K_1+K_1+D_r$. In Section 4, we study the structure of a po-semiring $A$ with $1\le |Z(A)|\le 2$. It is shown that the structure of a po-semiring $A$ is completely determined by the structures of integral po-semirings, if either $|Z(A)|=1$ or $|Z(A)|=2$ and $Z(A)^2\not=0$ (Theorem 4.2). When $A$ is taken to be $\mathbb I(R)$ for some commutative ring $R$, applications to the ideal structure of a ring are provided. [4mm]{}[**2. Chain conditions on po-semirings** ]{} [3mm]{}In this section, we study properties of elements of $A\setminus \{0,1\}$. We begin with an easy proposition which will be used repeatedly. [3mm]{}[**Proposition 2.1.**]{} [*Let $A$ be a po-semiring. Then* ]{} \(1) [*Each maximal element of $A$ is prime.*]{} \(2) [*For a maximal element $\mathfrak{m}$ of $A$, if $\mathfrak{m}b=0$, then either $\mathfrak{m}^2=\mathfrak{m}$ or $b^2=0$.*]{} \(3) [*If $e_i+f_i=1$ where $e_if_i=0,\,e_i^2=e_i,\,f_i^2=f_i$, then $e_1>e_2$ implies $f_1<f_2$.*]{} [3mm]{}(1) Let $\mathfrak{m}$ be a maximal element of $A$. Assume that $ab\le \mathfrak{m}$ and $a\not\le \mathfrak{m}$. Then $a+\mathfrak{m}> \mathfrak{m}$, so $a+\mathfrak{m}=1$. Then $b=ab+\mathfrak{m}b\le\mathfrak{m}+\mathfrak{m}=\mathfrak{m},$ so $\mathfrak{m}$ is a prime element of $A$. \(2) For a maximal element $\mathfrak{m}$ of $A$ and any $b\in A$, either $\mathfrak{m}+b=\mathfrak{m}$ or $\mathfrak{m}+b=1$.
If $\mathfrak{m}+b=\mathfrak{m}$, then $b^2=0$. If $\mathfrak{m}+b=1$, then $\mathfrak{m}=\mathfrak{m}^2.$ \(3) If $e_1\ge e_2$, then $e_1+f_2\ge e_2+f_2=1 $, and hence $f_1\le f_2f_1\le f_2.$ If $f_1=f_2$, then $e_1=e_1(e_2+f_2)=e_1e_2\le e_2$. This completes the verification. [3mm]{} Recall that a nonzero idempotent of a semiring is called a [*primitive idempotent*]{} if it cannot be written as a sum of two orthogonal nontrivial idempotents. [3mm]{}[**Theorem 2.2.**]{} [*For a po-semiring A, $Z(A)=A\setminus \{0, 1\}$ if one of the following conditions holds:*]{} \(1) [*Condition $(C_1)$ holds and there exists no infinite set of orthogonal idempotents in $A$.*]{} \(2) [*DCC holds for elements of $A$, and condition $(C_2)$ holds.*]{} [*In both cases, for any element $c$ in $A\setminus \{0, 1\}$, either $c$ is nilpotent or there exist a positive integer $n$ and a nontrivial idempotent $e$ such that $c^n=c^ne$, where $e$ has an orthogonal idempotent complement in $A$.* ]{} [3mm]{}(1) Suppose that condition $(C_1)$ holds in $A$. Assume to the contrary that there exists an element $a$ in $A\setminus \{0,1\}$ such that $a\not\in Z(A)$. Then $a$ is not nilpotent. Hence by condition $(C_1)$, there exist nonzero idempotents $e_1,f_1$ such that $e_1\le a, e_1f_1=0,e_1+f_1=1.$ If $af_1$ is nilpotent, then $a$ is a zero divisor. If further $a$ is idempotent, then $af_1=0$ and thus $a=ae_1\le e_1$, so $a=e_1$. Note that $af_1$ is not nilpotent by assumption on $a$. Again by condition $(C_1)$, there exist nonzero orthogonal idempotents $e_2,f_2$ such that $e_2+f_2=1,e_2\le af_1$. Then $e_1e_2=0$, so $e_1<e_1+e_2$ and $(e_1+e_2)+(f_1f_2)=1$. Note that $f_1f_2$ is a nonzero idempotent and is orthogonal to the idempotent $(e_1+e_2)$. If $a(f_1f_2)$ is nilpotent, then $a$ is a zero divisor. If further $a$ is idempotent, then $af_1f_2=0$, implying $a=a(e_1+e_2)\le e_1+e_2$, so $a=e_1+e_2$. Clearly, $a(f_1f_2)$ is not nilpotent by assumption on $a$. 
Then there exist nonzero orthogonal idempotents $e_3,f_3$ such that $e_3+f_3=1,e_3\le a(f_1f_2)$. Then we have an orthogonal idempotent decomposition $(e_1+e_2+e_3)+(f_1f_2f_3)=1$, and thus $e_1<e_1+e_2<e_1+e_2+e_3$, where $e_ie_j=\delta_{ij}e_i$. Finally, if $a$ is idempotent and $a(f_1f_2f_3)=0$, then $a=e_1+e_2+e_3$. Clearly, $a(f_1f_2f_3)$ is not nilpotent under the assumption on $a$. Continuing this process, we finally obtain an infinite set $\{e_1,e_2,\cdots\}$ of orthogonal idempotents in $A$, contradicting the assumption on $A$. Note that $e_1<e_1+e_2<e_1+e_2+e_3<\cdots$ and $(e_1+\cdots+e_i)+(f_1\cdots f_i)=1$ implies $f_1>f_1f_2>\cdots$ by Proposition 2.1(3). Note also that $e_1+\cdots +e_i\le a$. Note that if $a(f_1f_2\cdots f_i)=0$ for some $i$, then $a=e_1+e_2+\cdots+e_i$. \(2) Assume that condition $(C_2)$ holds for $A$ and DCC holds for elements of $A$. Assume to the contrary that in $A\setminus \{0,1\}$ there exists an element $b$ such that $b\not\in Z(A)$. Then $b$ is not nilpotent and there exists a positive integer $m$ such that $b^m$ is idempotent. Let $a=b^m$. By repeating the discussion in the proof of part (1), we obtain an infinite descending chain of elements $f_1>f_1f_2>\cdots$, giving a contradiction. [3mm]{} By the proof of Theorem 2.2, we have the following improved result for idempotent elements. [3mm]{}[**Theorem 2.3.**]{} [*For a po-semiring $A$, if condition $(C_2)$ holds in $A$ and $A$ contains no infinite set of orthogonal idempotents, then each idempotent element of $A$ has an orthogonal idempotent complement. In particular, each nontrivial idempotent is a zero divisor of $A$. Furthermore, each nonzero idempotent of $A$ is a finite sum of orthogonal primitive idempotents.*]{} The result follows from the proof of (1) in Theorem 2.2 if we start with a nontrivial idempotent $a$.
[3mm]{}We remark that there exists no infinite set of orthogonal idempotents in a po-semiring $A$ provided that one of the following conditions holds: (1) ACC holds for idempotent elements of $A$. (2) DCC holds for idempotent elements of $A$, and condition $(C_2)$ also holds. The latter conclusion follows from Theorem 2.3 and Proposition 2.1(3). If ACC (respectively, DCC) holds for elements of $A$, then for each element $a\not=0,1$, clearly there is a maximal (respectively, minimal) element $x$ of $A$ such that $x\ge a$ (respectively, $x\le a$). Let $A=\mathbb{I}(R)$ for some commutative ring $R$. In view of Theorems 2.2 and 2.3, we have the following applications to commutative rings. [3mm]{}[**Corollary 2.4.**]{} [*Assume that a commutative ring $R$ satisfies one of the following conditions:*]{} \(1) [*Each non-nilpotent ideal of $R$ contains a nonzero idempotent, and $R$ contains no infinite set of orthogonal idempotents.*]{} \(2) [*$R$ is artinian.* ]{} [*Then for each nontrivial ideal $I$ of $R$, there exists a nontrivial idempotent element $e$ such that $I{\subseteq}Re$. Furthermore, each nonzero idempotent ideal of $R$ has the form $\sum_{i=1}^rRe_i$, where $e_1,\cdots,e_r$ are orthogonal primitive idempotents.*]{} [3mm]{}[**Corollary 2.5.**]{} [*For a commutative noetherian exchange ring $R$, each nontrivial idempotent ideal of $R$ is an annihilating ideal. Furthermore, each nonzero idempotent ideal of $R$ has the form $\sum_{i=1}^rRe_i$, where $e_1,\cdots,e_r$ are orthogonal primitive nonzero idempotents.* ]{} By applying Proposition 1.2(1) and Theorem 2.3 to $\mathbb{I}(R)$, we obtain the result. [3mm]{}Note that any artinian ring is a noetherian exchange ring. Thus Corollary 2.5 holds in particular for artinian rings. It is well-known that in a commutative artinian ring $R$, each prime ideal is a maximal ideal of $R$, i.e., each prime element of the po-semiring $\mathbb{I}(R)$ is a maximal element if DCC holds for elements of $\mathbb{I}(R)$.
Also, if DCC holds for elements of $\mathbb{I}(R)$, then ACC also holds. The following example shows that the above-mentioned results are not true for a general po-semiring. It also shows that the additional condition $(C_2)$ in Theorem 2.3 is needed, and that $Z(A)=A\setminus \{0,1\}$ does not imply condition $(C_2)$. [3mm]{}[**Example 2.6.**]{} [*There exists an infinite po-semiring $A$ such that DCC holds but ACC fails for elements of $A$, $Z(A)=A\setminus \{0,1\}$ and $A$ has infinitely many prime elements, none of which is a maximal element.*]{} Let $A=\{0,1,a,b_1,b_2,\cdots\}$ be a countable set with $4\le |A|\le \aleph_0$, and define a partial order $\le$ by $0<a<b_1<b_2<\cdots<1$ on $A$. Define an addition by $x+y=\max\{x,y\}, \,\forall x,y\in A$. Define a commutative multiplication by $$0x=0,\,1x=x\,(\forall x\in A),\,ab_i=0,\,a^2=0,\,b_ib_j=b_{\min\{i,j\}}.$$ Then it is routine to check that $A$ is a po-semiring. Note that $a$ and $b_i$ are prime elements of $A$ for all $i$, and condition $(C_2)$ does not hold for $A$ although condition $(C_3)$ does hold. The elements of $A$ satisfy DCC, but they do not satisfy ACC if $|A|$ is infinite. Note also that $Z(A)=A\setminus \{0,1\}$, and there exists no infinite set of orthogonal idempotents. [*On the other hand, for a noetherian ring $R$ which is not artinian, ACC holds for elements of the po-semiring $\mathbb{I}(R)$ but DCC fails.*]{} [3mm]{} Despite the above example, we are able to show that under suitable conditions the set of maximal elements coincides with the set of prime elements of $A$.
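Before proceeding, we note that the semiring arithmetic in Example 2.6 is easily machine-checked on a finite truncation $\{0<a<b_1<\cdots<b_k<1\}$. A truncation necessarily regains a maximal element, so only the axioms and $Z(A)=A\setminus\{0,1\}$ are verified below; the sketch and its integer encoding are ours, not the paper's:

```python
from itertools import product

k = 5
ZERO, A_, ONE = 0, 1, k + 2          # 0 < a < b_1 < ... < b_k < 1, encoded 0..k+2
elems = list(range(k + 3))           # b_i is encoded as the integer i + 1

def mul(x, y):
    if x > y:
        x, y = y, x                  # commutative; now x <= y in the chain order
    if x == ZERO:
        return ZERO                  # 0z = 0
    if y == ONE:
        return x                     # 1z = z
    if x == A_:
        return ZERO                  # a^2 = 0 and a * b_i = 0
    return x                         # b_i * b_j = b_min(i, j)

add = max                            # x + y = max{x, y}

# semiring axioms: associativity of mul, and distributivity over add
for x, y, z in product(elems, repeat=3):
    assert mul(mul(x, y), z) == mul(x, mul(y, z))
    assert mul(x, add(y, z)) == add(mul(x, y), mul(x, z))

# every element other than 0 and 1 is a zero divisor
Z = [x for x in elems if x not in (ZERO, ONE)
     and any(mul(x, y) == ZERO for y in elems if y != ZERO)]
assert Z == [x for x in elems if x not in (ZERO, ONE)]
```

The "no maximal prime" phenomenon itself is genuinely infinite and cannot be captured by such a truncation; the check only confirms that the defining tables are consistent.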
[3mm]{}[**Theorem 2.7.**]{} [*For a po-semiring $A$, each prime element of $A$ is a maximal element if one of the following conditions holds:* ]{} \(1) [*Condition $(C_2)$ holds in $A$, and DCC holds for elements of $A$.*]{} \(2) [*Condition $(C_1)$ holds in $A$, and there exists no infinite set of orthogonal idempotents in $A$.*]{} \(1) Assume to the contrary that there exists a prime element $q$ such that $q<p$ for some $p<1$. Then $p$ is not nilpotent. By the DCC assumption, there exists a positive integer $r$ such that $p^r$ is nonzero and idempotent. By Theorem 2.3, $p^r$ is annihilated by a nonzero idempotent, say $f$. By condition $(C_2)$, there exist orthogonal nonzero idempotents $f_1,g_1$ such that $f_1\le f$, $g_1+f_1=1$. Then $p^r\cdot f_1=0$. Since $q<p$ and $q$ is prime, we have $f_1\le q$ and hence $g_1\not\le q$. Clearly, $p^rg_1=p^r\not\le q$ and it is idempotent. By condition $(C_2)$, there exist nonzero orthogonal idempotents $t_2,s_2$ such that $t_2\le p^rg_1$, $t_2+s_2=1$. Then we have $f_1t_2=0$. This together with $t_2+s_2=1$ and $f_1+g_1=1$ implies $f_1+t_2+g_1s_2=1,$ where $f_1,t_2,g_1s_2$ are mutually orthogonal idempotents. If $s_2\not\le q$, then $t_2\le q$. In this case, let $f_2=t_2,\,g_2=g_1s_2$. If $s_2\le q$, then $t_2\not\le q$ and $g_1s_2\le q$. In this case, let $f_2=g_1s_2,g_2=t_2$. In either case, we have an orthogonal idempotent decomposition $f_1+f_2+g_2=1$, where $f_1\le q,\,f_2\le q$ and $g_2\not\le q$. The next step is to consider the idempotent $p^rg_1g_2$. Clearly $p^rg_1g_2\not\le q$ and in particular, $p^rg_1g_2\not=0$. By condition $(C_2)$, there exist orthogonal idempotents $t_3,s_3$ such that $t_3+s_3=1$ and $t_3\le p^rg_1g_2$. Clearly, $t_3f_i=0$ for $i=1,2$. This together with $(f_1+f_2)+g_2=1$ implies $f_1+f_2+t_3+s_3g_2=1$. Since $f_1+f_2+t_3\le p$, $s_3g_2\not=0$. If $s_3\not\le q$, then $t_3\le q$. In this case, let $f_3=t_3,\,g_3=s_3g_2$. If $s_3\le q$, then $t_3\not\le q$ and $g_2s_3\le q$.
In this case let $f_3=g_2s_3,\,g_3=t_3$. In either case, we have an orthogonal idempotent decomposition $(f_1+f_2+f_3)+g_3=1$, where $f_i\le q$ ($\forall i=1,2,3$) and $g_3\not\le q$. Continuing this process, we obtain an infinite set of mutually orthogonal idempotents $\{f_1,f_2,\cdots\}$. Since $f_1<f_1+f_2<f_1+f_2+f_3<\cdots$, by Proposition 2.1(3) we have an infinite descending chain of idempotents $g_1>g_2>\cdots, $ contradicting the DCC assumption. \(2) Assume to the contrary that there exists a prime element $q$ such that $q<p$ for some $p<1$. Then $p$ is not nilpotent. By Theorem 2.2, there exists a positive integer $r$ such that $p^r$ is annihilated by a nonzero idempotent, say, $f$. The rest of the proof is almost the same as in (1). Note that $p^r(g_1g_2\cdots g_s)\not\le q$ implies that $p^r(g_1g_2\cdots g_s)$ is not nilpotent and thus condition $(C_1)$ applies. [3mm]{}[**Corollary 2.8.**]{} [*For a commutative ring $R$, each prime ideal of $R$ is a maximal ideal if one of the following conditions is satisfied:*]{} \(1) [*There exists no infinite set of orthogonal idempotents in $R$ and each non-nilpotent ideal contains a nonzero idempotent.*]{} \(2) [*$R$ is an exchange ring, $J(R)=N(R)$ and there exists no infinite set of orthogonal idempotents in $R$.* ]{} \(3) [*$R$ is an artinian ring.*]{} [3mm]{} Call an ideal $I$ of a po-semiring $A$ a [*principal annihilating ideal*]{} if $I=ann_A(u)$ for some element $u$ of $A$. The proof of the following theorem is a typical application of the ideas used in proving the Chinese Remainder Theorem. [3mm]{}[**Theorem 2.9.**]{} [*For a po-semiring $A$, assume $Z(A)=A\setminus \{0,1\}$. If ACC holds either for elements of $A$ or for principal annihilating ideals of $A$, then $A$ has finitely many maximal elements.* ]{} [3mm]{}Suppose that $Z(A)=A\setminus \{0,1\}$. Assume to the contrary that $A$ has infinitely many maximal elements and let $\mathfrak{m}_i$ ($i\in N^{+}$) be distinct maximal elements of $A$.
Then $\mathfrak{m}_1+\mathfrak{m}_3=1$ and $\mathfrak{m}_2+\mathfrak{m}_3=1$ and hence $1=\mathfrak{m}_3(\mathfrak{m}_1+\mathfrak{m}_2+\mathfrak{m}_3)+\mathfrak{m}_1\mathfrak{m}_2= \mathfrak{m}_3+\mathfrak{m}_1\mathfrak{m}_2$. In a similar way, we obtain $\mathfrak{m}_1\mathfrak{m}_2\cdots \mathfrak{m}_m+\mathfrak{m}_{m+1}=1$. Now consider the following ascending chain of principal annihilating ideals of the semiring $A$ $$ann_A(\mathfrak{m}_1){\subseteq}\cdots {\subseteq}ann_A(\mathfrak{m}_1\cdots \mathfrak{m}_m){\subseteq}ann_A(\mathfrak{m}_1\cdots \mathfrak{m}_m\mathfrak{m}_{m+1}){\subseteq}\cdots.$$ If ACC holds for principal annihilating ideals of $A$, then there exists an integer $m$ such that $ann_A(\mathfrak{m}_1\cdots \mathfrak{m}_m)= ann_A(\mathfrak{m}_1\cdots \mathfrak{m}_m\mathfrak{m}_{m+1})$. By assumption, there is a nonzero element $x\in ann_A(\mathfrak{m}_{m+1})$. But then $x=x\mathfrak{m}_{m+1}+x(\mathfrak{m}_1\cdots \mathfrak{m}_m)=0$, a contradiction. If ACC holds for elements of $A$, we claim that there exists no strict ascending chain $ann_A(\mathfrak{m}_1)<ann_A(\mathfrak{m}_1\mathfrak{m}_2)<\cdots $, and the result thus follows. In fact, if this were not the case, then for each $m$ there would exist an element $y_{m+1}$ such that $$y_{m+1}\in ann_A(\mathfrak{m}_1\cdots \mathfrak{m}_m\mathfrak{m}_{m+1})\setminus ann_A(\mathfrak{m}_1\cdots \mathfrak{m}_m).$$ Then we would have obtained an infinite ascending chain of elements of $A$: $$y_2<y_2+y_3<y_2+y_3+y_4<\cdots.$$ This completes the proof. [3mm]{} Combining Theorems 2.2, 2.7 and 2.9, we have the following results. [3mm]{}[**Corollary 2.10.** ]{} [*Let $A$ be a po-semiring satisfying condition $(C_2)$. If both ACC and DCC hold for elements of $A$, then $A$ has only a finite number of prime elements. These primes are precisely the maximal elements of $A$.
Under the assumption, $Z(A)=A\setminus \{0,1\}$.*]{} [3mm]{}[**Corollary 2.11.** ]{} [*Let $A$ be a po-semiring satisfying condition $(C_1)$. If ACC holds for elements of $A$, then $A$ has only a finite number of maximal elements.*]{} [3mm]{} We remark that if a po-semiring $A$ has a unique maximal element $\mathfrak{m}$, then $\mathfrak{m}$ is nilpotent if condition $(C_1)$ is further assumed to hold in $A$. [3mm]{} Applying Theorem 2.9 to $\mathbb{I}(R)$ for a commutative ring $R$, we have the following. [3mm]{}[**Corollary 2.12.**]{} [*Let $R$ be a commutative noetherian ring.*]{} \(1) ([@BR Proposition 1.7]) [*If each nontrivial ideal is an annihilating ideal, then $R$ is a semilocal ring.* ]{} \(2) [*If each non-nilpotent ideal of $R$ contains an idempotent, then $R$ is a semilocal ring.*]{} \(3) [*Any noetherian exchange ring $R$ with $J(R)=N(R)$ is a semilocal ring.*]{} [3mm]{}Recall that an ideal $I$ of a po-semiring $A$ is called [*hereditary*]{}, if $<u>\,{\subseteq}I$ holds for all $u$ in $I$. Motivated by Theorem 2.9, we have the following. [3mm]{}[**Proposition 2.13.**]{} [*Let $A$ be a po-semiring. Then*]{} \(1) [*ACC holds for elements of $A$ if and only if ACC holds for hereditary ideals of $A$.*]{} \(2) [*If DCC holds for hereditary ideals of $A$, then DCC also holds for elements of $A$. The converse holds, if there is the concept of an infinite sum of elements in $A$ and any hereditary ideal $I$ is closed under taking infinite sums.* ]{} $\Lla$ of $(1)$ and $(2)$: For any $u\in A$, $<u>$ is clearly a hereditary ideal of the po-semiring $A$. For elements $u,v$ of $A$, $u\le v$ if and only if $<u>\,{\subseteq}\, <v>$, and $u< v$ if and only if $<u>\,\subset\, <v>$. This implies the sufficiency part of the proposition. $\Lra $: (1) If ACC does not hold for hereditary ideals of the po-semiring $A$, then there exists a strict ascending chain of hereditary ideals of $A$, say, $X_1{\subset}X_2{\subset}\cdots$.
Then for any $n\ge 2$, take an element $u_n$ of $A$ such that $u_n\in X_n\setminus X_{n-1}$. Clearly, there is an infinite ascending chain $u_2< u_2+u_3< u_2+u_3+u_4< \cdots,$ and hence ACC does not hold in $A$. \(2) If DCC does not hold for hereditary ideals of $A$, then there exists a strict descending chain of hereditary ideals of $A$, say, $Y_1\supset Y_2\supset \cdots$. Then for any $n\ge 1$, take an element $v_n$ such that $v_n\in Y_n\setminus Y_{n+1}$. Clearly, there is an infinite descending chain of elements of $A$, namely $\sum_{i\ge 1} v_i> \sum_{i\ge 2} v_i>\cdots,$ where $\sum_{i\ge m} v_i\in Y_m$. Then DCC does not hold for elements of $A$. This completes the proof. [3mm]{}[**Corollary 2.14.**]{} (1) [*If ACC (DCC, respectively) holds for ideals of a po-semiring $A$, then ACC (DCC, respectively) also holds for elements of $A$.*]{} \(2) [*If ACC holds for elements of $A$, then ACC holds for principal annihilating ideals of $A$.*]{} \(3) [*For a po-semiring $A$, assume that there is the concept of an infinite sum of elements in $A$ and any hereditary ideal $I$ is closed under taking infinite sums. If further DCC holds for elements of $A$, then DCC holds for principal annihilating ideals of $A$.* ]{} [3mm]{}[**Corollary 2.15.**]{} [*Let $R$ be a commutative ring and denote $A=\mathbb I(R)$. Then ACC (DCC, respectively) holds for elements of $A$ if and only if ACC (DCC, respectively) holds for hereditary ideals of $A$.* ]{} [3mm]{}The proof of the following result is routine and is omitted here. [3mm]{}[**Proposition 2.16.**]{} [*For a po-semiring $A$ and an element $p$ of $A$, $p$ is a prime element of $A$ if and only if $<p>$ is a prime ideal of $A$.*]{} [4mm]{}[**3. The graph ${\Gamma}(A)$ for a po-semiring $A$**]{} [3mm]{} For any multiplicative semigroup $S$ with zero element 0, there is a zero divisor graph ${\Gamma}(S)$ whose vertices are the nonzero zero divisors, and there is an edge $x-y$ if $x\not=y$ and $xy=0$.
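From this definition, ${\Gamma}(S)$ for a finite multiplicative semigroup can be assembled directly; a small sketch of ours (the choice $S=(\mathbb Z/8,\cdot)$ is arbitrary), whose graph turns out to be the star $2-4-6$:

```python
from itertools import combinations

n = 8
S = range(n)                                        # the semigroup (Z/8, ·)
mul = lambda x, y: (x * y) % n

# vertices: nonzero zero divisors of S
Zs = sorted({x for x in S if x != 0
             and any(mul(x, y) == 0 for y in S if y != 0)})
# edges: distinct vertices whose product is the zero element
edges = [(x, y) for x, y in combinations(Zs, 2) if mul(x, y) == 0]

assert Zs == [2, 4, 6]
assert edges == [(2, 4), (4, 6)]                    # a star graph with center 4
```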
Some fundamental properties of the semigroup graphs were given in [@DMS]. For a semiring $A$, denote by $Z(A)$ the set of all nonzero multiplicative zero-divisors. The zero divisor graph of $(A,\cdot, 1)$ is also denoted as ${\Gamma}(A)$. Note that in a semiring, there exist two semigroup structures, and all known results on the zero divisor graphs of semigroups certainly hold for ${\Gamma}(A)$ (see [@DMS]). Following [@BR], denote ${\Gamma}(\mathbb I(R))=\mathbb{AG}(R)$ and call it [*the annihilating ideal graph of $R$*]{}. The graph was studied recently by quite a few authors, see [@BR; @AA; @AA2]. Recall that a graph is called [*complete*]{} ([*or discrete*]{}) if every pair (or no pair) of vertices is adjacent. We denote a complete graph by $K$, a complete (or discrete) graph with $n$ vertices by $K_n$ (or $D_n$). An induced subgraph $K$ of a graph $G$ is called a [*clique*]{} if any two distinct vertices of $K$ are adjacent, and the [*clique number*]{} ${\omega}(G)$ of $G$ is the least upper bound of the sizes of the cliques. Similarly, we denote by $K_{m,n}$ the complete bipartite graph with two partitions of sizes $m,n$. A complete bipartite graph $K_{1,n}$ is also called a [*star graph*]{}. Recall that a [*cycle*]{} in a graph is a path $v_1-v_2-\cdots-v_n$ together with an additional edge $v_n-v_1$ ($n\ge 3$). For any vertex $x$ of $G$, let $N(x)$ be the set of vertices adjacent to $x$. If $|N(x)|=1$, then $x$ is called an [*end vertex*]{}. Recall a convenient construction from graph theory, [*the sequential sum*]{} $$G_1+G_2+\cdots+G_r$$ of a sequence of graphs $G_1,G_2,\ldots,G_r$. The resulting graph $G_1+G_2+\cdots+G_r$ is obtained by adding an edge between each vertex of $G_i$ and every vertex of $G_{i+1}$ for all $i=1,2,\cdots, r-1$. We illustrate the construction in Figure 1 for the sequence of graphs $D_2, K_1, K_1,D_3$, where $D_j$ is the discrete graph of $j$ vertices.
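The sequential sum is also straightforward to realize programmatically. In the hypothetical helper below (ours, not part of the paper), vertices are tagged with their block index and consecutive blocks are joined completely, reproducing the $D_2+K_1+K_1+D_3$ of Figure 1:

```python
from itertools import product

def sequential_sum(blocks):
    """blocks: list of (vertex_list, edge_list); returns (vertices, edges),
    with every vertex of block i joined to every vertex of block i+1."""
    verts = [(i, v) for i, (vs, _) in enumerate(blocks) for v in vs]
    edges = [((i, u), (i, v)) for i, (_, es) in enumerate(blocks) for u, v in es]
    for i in range(len(blocks) - 1):
        edges += [((i, u), (i + 1, v))
                  for u, v in product(blocks[i][0], blocks[i + 1][0])]
    return verts, edges

D = lambda m: (list(range(m)), [])            # discrete graph D_m (no edges)
K1 = ([0], [])                                # a single vertex

verts, edges = sequential_sum([D(2), K1, K1, D(3)])   # the graph of Figure 1
assert len(verts) == 7 and len(edges) == 2 + 1 + 3    # 6 edges, all cross-block
```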
Such a resulting graph is a special case of two-star graphs, which have two vertices as subcenters. Recall that a finite or an infinite graph $G$ is called a [*two-star graph*]{} if $G\cong D_r+K_1+K_1+D_s$ ($1\le r\le \infty$, $1\le s\le \infty$). By [@AL Theorem 2.2], for a commutative ring $R$, the zero divisor graph ${\Gamma}(R)$ is finite if and only if either $R$ is a finite ring or an integral domain. The analog of the above result does not hold for a semigroup. However, such a result does exist for a class of po-semirings. [3mm]{}[**Theorem 3.1.**]{} [*For a po-semiring $A$ which is not integral, assume that condition (1) or (2) of Theorem 2.2 holds. Then ${\Gamma}(A)$ is a finite graph if and only if the po-semiring $A$ is finite. In this case, $|V({\Gamma}(A))|=|A|-2$.*]{} If $A$ is finite or integral, then $0\le |V({\Gamma}(A))|<\infty$. The converse follows from Theorem 2.2. By Theorem 2.2, we also have $|V({\Gamma}(A))|=|A|-2$ in this case. [2mm]{}The following subtle example shows that condition $(C_2)$ is crucial to Theorem 3.1. The construction in this example could be easily extended to obtain a finite or an infinite po-semiring $A$ such that ${\Gamma}(A)\cong K_n$ for each natural number $n$. The example is also related to Theorem 2.2. [3mm]{}[**Example 3.2.**]{} [*There exists an infinite po-semiring $A$ such that the graph ${\Gamma}(A)$ is an isolated vertex.* ]{} Let $A=\{0,1,a,b_1,b_2,\cdots\}$ be a countable set with $|A|\ge 4$. Define the partial order and the addition as in Example 2.6, i.e., $0<a<b_1<b_2<\cdots<1$, $x+y=\max\{x,y\}, \,\forall x,y\in A$. Define a new multiplication by $$0z=0,\,1z=z\,(\forall z\in A),\,a^2=0,\,xy=\min\{x,y\}\ \text{for all other nonzero}\ x,y.$$ Then it is routine to check that $A$ is a po-semiring. Clearly, $Z(A)=\{a\}$ but $|A|$ can be any cardinal number from 4 to $\aleph_0$. Note that DCC holds for elements of $A$, and that there exists no infinite set of orthogonal idempotents in $A$.
Note also that condition $(C_2)$ does not hold in $A$, but condition $(C_3)$ does. Clearly ${\Gamma}(A)$ is a finite graph. Note that ${\Gamma}(A)$ is an infinite star graph for another po-semiring defined on the set $A$, namely the one constructed in Example 2.6. [3mm]{}[**Lemma 3.3.**]{} [*Let $A=A_1\times A_2$ be a direct product of two semirings and denote $G={\Gamma}(A)$. Then*]{} $(1)$ [*$G$ contains no triangle if and only if one $A_i$ is integral while $0\le |Z(A_j)|\le 1$ for the other $A_j$.*]{} \(2) [*$G$ contains no cycle if and only if one $A_i$ is $\{0,1\}$ while $0\le |Z(A_j)|\le 1$ for the other $A_j$.*]{} \(3) [*$G$ contains no quadrilateral if and only if one $A_i$ is $\{0,1\}$ while for the other $A_j$, either $0\le |Z(A_j)|\le 1$, or $|Z(A_j)|=2$ and there exists no nilpotent element in $A_j$.*]{} [3mm]{}(1) $\Lla$: If both $A_i$ are integral, then $G$ is a complete bipartite graph and hence contains no triangle. In this case, one part contains $|A_1|-1$ vertices and the other $|A_2|-1$ vertices. If $A_1$ is integral and $Z(A_2)=\{x\}$, then $x^2=0$ and $G$ is a complete bipartite graph together with $|A_1|-1$ end vertices adjacent to $(0,x)$. $G$ clearly contains no triangle and is a bipartite graph, one part of which contains $2(|A_1|-1)$ vertices and the other $|A_2|-1$ vertices. $\Lra$: If neither $A_1$ nor $A_2$ is integral, assume $a_ib_i=0$ with $a_i,b_i\in A_i\setminus \{0\}$ for $i=1,2$. Then $G$ contains the triangle $(0,a_2)-(a_1,0)-(b_1,b_2)-(0,a_2)$. Now assume that $A_1$ is integral. If $|Z(A_2)|\ge 2$, take adjacent $a,b$ in ${\Gamma}(A_2)$, so that $ab=0$; then $G$ contains the triangle $(1,0)-(0,a)-(0,b)-(1,0)$. \(2) $\Lla$: This follows from the sufficiency proof of (1). $\Lra$: If $|A_i|\ge 3$ for all $i$, then $G$ contains the square $(1,0)-(0,a)-(b,0)-(0,1)-(1,0)$. Assume $A_1=\{0,1\}$. If $|Z(A_2)|\ge 2$, then $G$ contains a triangle $(1,0)-(0,a)-(0,b)-(1,0)$ as above. \(3) $\Lla$: Assume $A_1=\{0,1\}$. If $0\le |Z(A_j)|\le 1$, then $G$ clearly contains no cycle.
If $|Z(A_j)|=2$ and there exists no nilpotent element in $A_j$, then $G$ contains a triangle but no square. $\Lra$: If $|A_i|\ge 3$ for all $i$, then $G$ contains a square. Now assume $A_1=\{0,1\}$. If $|Z(A_2)|\ge 3$, then there is a path $u-v-w$ in ${\Gamma}(A_2)$, and hence $G$ contains the square $(1,0)-(0,u)-(0,v)-(0,w)-(1,0)$. Now assume $|Z(A_2)|=2$. If there is a nilpotent element in $A_2$, say $u^2=0$, then there is a square $(1,0)-(0,u)-(1,u)-(0,v)-(1,0)$. This completes the verification. [3mm]{} Minimal elements of a po-semiring $A$ play a special role in the graph ${\Gamma}(A)$. More precisely, we have the following. [3mm]{}[**Lemma 3.4.**]{} [*Let $A$ be a po-semiring and denote $G={\Gamma}(A)$. Assume further that $Z(A)\not={\emptyset}$. Then*]{} \(1) [*Let $a-u-b$ be a path in $G$. If it is contained in neither a triangle nor a quadrilateral, then $u$ is a minimal element of $A$.*]{} \(2) [*Each minimal element is a zero divisor of $A$, and thus the clique number $\omega(G)$ is greater than or equal to the number of minimal elements of $A$.*]{} \(3) [*Let $u$ be a minimal element of $A$. Then*]{} (3.1) [*$d(u,x)\le 2,\,\forall x\in V(G)$.*]{} (3.2) [*For each clique $K$ of $G$, either $V(K){\subseteq}N(u)$ or $V(K)\setminus \{v\}{\subseteq}N(u)$ for a vertex $v$ in $V(K)$.*]{} \(1) Assume to the contrary that $u$ is not a minimal element of $A$, and take $v$ with $0<v<u$. If $v=a$, then there is a triangle $a-u-b-a$ (the case $v=b$ is symmetric). If $v\not=a,b$, then there is a square $a-u-b-v-a$. \(2) Let $x\in Z(A)$. First, for any minimal element $e$ of $A$, since $ex\le e$, either $ex=0$ or $ex=e$. In particular, if $ex\not=0$, then $N(x){\subseteq}N(e)$ in ${\Gamma}(A)$. Thus each minimal element of $A$ is a zero divisor of $A$: indeed, if $ex=0$ then $e\in Z(A)$, while if $ex=e$, take $y\not=0$ with $xy=0$; then $ey=e(xy)=0$. Moreover, distinct minimal elements $e,f$ satisfy $ef\le e$ and $ef\le f$, hence $ef=0$, so the minimal elements form a clique. It follows that the clique number of ${\Gamma}(A)$ is greater than or equal to the number of minimal elements of $A$. \(3) By (2), $u\in V(G)$. For any $x\in V(G)$ with $x\not=u$, if $ux\not=0$, then $N(x){\subseteq}N(u)$.
This shows $d(u,y)\le 2,\,\forall y\in V(G)$. (3.2) then also follows from this observation. [3mm]{} We remark that for the po-semiring $A=\{0,1\}^{(n)}$ (see Example 3.6 for the definition), the clique number $\omega({\Gamma}(A))$ is equal to the number of minimal elements of $A$. [3mm]{} Here is the main result of this section. [3mm]{}[**Theorem 3.5.**]{} [*Let $A$ be either a commutative ring or a po-semiring satisfying condition $(C_3)$, and let ${\Gamma}(A)=G$.*]{} \(1) [*If $G$ contains no cycle, then $G$ is either a star graph or a two-star graph $K_1+K_1+K_1+D_r$.*]{} \(2) [*${\Gamma}(A)\cong K_1+K_1+K_1+D_r$ if and only if $A\cong \{0,1\}\times S$, where $S$ is a po-semiring with $|Z(S)|=1$. In this case, $r=|S|-2$.*]{} [3mm]{}(1) By [@DMS Proposition 1.3], for any commutative semigroup $S$ with 0, if ${\Gamma}(S)$ is a tree, then ${\Gamma}(S)$ is a connected subgraph of a two-star graph. This holds in particular for po-semirings. If further the diameter of $G$ is two, then $G$ is clearly a star graph. Now assume $diam(G)=3$ and let $x-u-v-y$ be a path in $G$. Then $G$ is a two-star graph with subcenters $u,v$. By [@WL Corollary 2.13], we may assume $u^2=u$. Then $v^2=0$. If $A$ is a ring, then there is a ring isomorphism $A\cong Au\times A(1-u)$, and the result follows from Lemma 3.3(2). If $A$ is a po-semiring, then by Lemma 3.4(1), both $u$ and $v$ are minimal elements of $A$. Since condition $(C_3)$ holds, there exists an idempotent $w$ such that $A=Au\times Aw$, where $w+u=1$ and $wu=0$. By Lemma 3.3(2), we may assume $Au=\{0,u\}$ and $0\le |Z(Aw)|\le 1$. By the sufficiency proof of Lemma 3.3(1), we have $|Z(Aw)|=1$ and $G\cong K_1+K_1+K_1+D_r$, where $r=|Aw|-2$. \(2) This follows from the proof of (1) and the proof of Lemma 3.3(2). [3mm]{} The following example shows that Theorem 3.5 is the best possible result for po-semirings satisfying condition $(C_3)$.
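Before turning to the examples, the smallest ring instance of Theorem 3.5(2) can be confirmed by brute force: for $A=\mathbb Z_2\times\mathbb Z_4$ (so $S=\mathbb Z_4$, $|Z(S)|=1$ and $r=|S|-2=2$), the theorem predicts ${\Gamma}(A)\cong K_1+K_1+K_1+D_2$. The Python sketch below enumerates the ring componentwise; it is an illustration, not part of the proofs.

```python
# Brute-force computation of the zero-divisor graph of the ring Z_2 x Z_4.
from itertools import product

R = list(product(range(2), range(4)))                    # elements of Z_2 x Z_4
mul = lambda x, y: ((x[0] * y[0]) % 2, (x[1] * y[1]) % 4)
zero = (0, 0)

# vertices: nonzero zero-divisors; edges: pairs with product zero
V = [x for x in R if x != zero
     and any(y != zero and mul(x, y) == zero for y in R)]
E = {frozenset((x, y)) for x in V for y in V if x != y and mul(x, y) == zero}
deg = sorted(sum(1 for e in E if v in e) for v in V)
# 5 vertices, 4 edges, degree sequence [1, 1, 1, 2, 3]: the path
# (1,2)-(0,2)-(1,0) with the two end vertices (0,1), (0,3) attached to (1,0),
# i.e. the two-star K_1+K_1+K_1+D_2.
```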
[3mm]{}[**Example 3.6.**]{} [*For any $1\le r\le \aleph_0$, there exists a po-semiring $A$ such that condition $(C_3)$ holds and ${\Gamma}(A)\cong K_1+K_1+K_1+D_r$.*]{} In fact, for any po-semirings $A_i$ ($1\le i\le n$), define a partial order $\le$ in the semiring $A_1\times A_2\times \cdots\times A_n$ in a natural way, i.e., $(x_1,\ldots,x_n)\le (y_1,\ldots,y_n)$ if $x_i\le y_i$ for all $i$. Then $A_1\times A_2\times \cdots\times A_n$ is a po-semiring, called the [*direct product of $A_1,\ldots, A_n$*]{}. When $A_1=\cdots=A_n=A$, denote by $A^{(n)}$ the above direct product. Take $A_1=\{0,1\}$ and $$A_2=\{0,1,a,b_1,b_2,\cdots\}$$ as in Example 3.2. Then ${\Gamma}(A_1\times A_2)=K_1+K_1+K_1+D_r$, where $r=|A_2|-2$ can be any cardinal number from $1$ to $\aleph_0$. Clearly, condition $(C_3)$ holds in $A_1\times A_2$ since there are only two minimal elements (i.e., $(1,0)$ and $(0,a)$). [3mm]{}[**Example 3.7.**]{} ${\Gamma}(\mathbb Z_2\times\mathbb Z_4)={\Gamma}(\mathbb Z_2\times\mathbb Z_2[x]/(x^2))=K_1+K_1+K_1+D_2$. For any commutative ring $R$, by [@AA Theorem 2], if $\mathbb {AG}(R)$ is a tree, then it is either a star graph or the two-star graph $K_1+K_1+K_1+K_1$. [3mm]{}[**Corollary 3.8.**]{} [*If condition $(C_1)$ holds in a po-semiring $A$ and there exists no infinite set of orthogonal idempotents in $A$, then*]{} \(1) [*If the graph ${\Gamma}(A)$ contains no cycle, then ${\Gamma}(A)$ is either a star graph or the two-star graph $K_1+K_1+K_1+K_1$.*]{} \(2) [*${\Gamma}(A)\cong K_1+K_1+K_1+K_1$ if and only if $A\cong \{0,1\}\times S$, where $S$ is the po-semiring $\{0,a,1\}$ with $a^2=0$.*]{} [3mm]{}[**Remark 3.9.**]{} (1) It is easy to see that, up to isomorphism, there are exactly two semirings with underlying set $\{0,1\}$, namely the ring $\mathbb Z_2$ and the po-semiring $\mathbb I(K)$ for any field $K$. Also, there exist exactly two po-semirings $A$ with $|A|=3$, namely $A=\{0,a,1\}$ with partial order $0<a<1$, and with multiplication defined by either $a^2=0$ or $a^2=a$, respectively.
\(2) In a semiring $A$, if $|Z(A)|=1$, then either the ring $\mathbb Z_2[x]/(x^2)$ or $\mathbb Z_4$ can be embedded into $A$. On the other hand, if a semiring $S$ contained in the ring $\mathbb Z_n$ satisfies $|Z(S)|=1$, then $n=4$ and $S=\mathbb Z_4$. \(3) It is natural to ask the following question: can any two-star graph be realized as the zero divisor graph of a po-semiring when condition $(C_3)$ is dropped? The answer is yes; see [@YW] for a complete answer. [4mm]{}[**4. The structure of a po-semiring $A$ with small $|Z(A)|$**]{} [3mm]{}In this section, motivated by Lemma 3.3 and Theorem 3.5, we study the structure of a po-semiring $A$ with small $|Z(A)|$. We first have the following lemma. [3mm]{}[**Lemma 4.1.**]{} [*Let $A$ be a po-semiring.*]{} \(1) [*If $Z(A)=\{c\}$, then $c^2=0$, and $c$ is the least nonzero element of $A$ and a prime element of $A$.*]{} \(2) [*If $Z(A)=\{c,u\}$, then exactly one of the following holds:*]{} (2.1) [*Both $c$ and $u$ are minimal elements of $A$, $c<x,\,u<x,\,\forall x\in A\setminus\{0,c,u\}$, $u^2=u$ and $c^2=c$. Furthermore, both $c$ and $u$ are prime elements of $A$.*]{} (2.2) [*$c$ is the least nonzero element of $A$, and $c^2=0$. $u$ is a prime element of $A$, and $u<p$ for each prime element $p\not=c,u$.*]{} \(1) If $Z(A)=\{c\}$, then clearly $c^2=0$. Now if $xy\le c$ for some $x,y\in A$, then $(xy)^2\le c^2=0$ and hence $x(xy^2)=0$. If further $x\not\le c$, then $x$ is neither zero nor a zero divisor, so $xy^2=0$ and hence $y^2=0$. Then $y\le c$. This shows that $c$ is a prime element of $A$. Now for any $x\in A\setminus \{0,c\}$, clearly $c=xc$ and hence $c<x$. Thus $c$ is the least nonzero element of $A$. \(2) Let $Z(A)=\{c,u\}$. First, assume that $c$ and $u$ are incomparable. Then $c+u\not\in \{c,u\}$ and hence $c^2=c,u^2=u$. It follows that $xc=c,\,xu=u,\,\forall x\not=0,c,u$, and hence $c<x,u<x$ hold for each $x\not=0,c,u$. This implies that both $c$ and $u$ are minimal elements of $A$. To verify that $c$ is a prime element, assume $xy\le c$ and $y\not\le c$.
If $y=u$, then $x\le c$. If $y\not=u$, then $0=(xy)u=xu$ and so $x\le c$. Thus $c$ is a prime element of $A$. By symmetry, $u$ is also a prime element of $A$. Next we assume $c< u$. Then $c^2=0$. For any $x\in A\setminus \{0,c,u\}$, clearly $c=xc$ and hence $c<x$. Thus $c$ is the least nonzero element of $A$. For any $x\not\le u,y\not\le u$, we have $xy\not\le u$, since otherwise $0=(xy)c=x(yc)=c$, a contradiction. Thus $u$ is a prime element of $A$. Finally, for any prime element $p\not=c,u$, if $u^2\in \{0,c\}$, then $u^3=0$ and hence $u< p$. If $u^2=u$, then $pu\not=c$ and hence $u=pu< p$. [3mm]{}[**Theorem 4.2.**]{} (1) [*$A$ is a po-semiring with $|Z(A)|=1$ if and only if there exist an integral po-semiring $A_1$ and an element $c\not\in A_1$ such that $A\cong \{c\}\cup A_1$, where the partial order of $\{c\}\cup A_1$ is extended from $A_1$ by adding $0<c<a$ $(\forall a\in A_1^*)$, and the commutative addition and multiplication in $\{c\}\cup A_1$ are extended respectively from those of the po-semiring $A_1$, and the following conditions are fulfilled: $$0+c=c,\, c+c=c,\, c^2=0,\,c+y=y,\, cy=c \,(\forall y\in A_1^*).$$*]{} (2) [*$A$ is a po-semiring such that $|Z(A)|=2$ and $Z(A)^2\not=0$ if and only if one of the following holds:*]{} (2.1) [*There exist an integral po-semiring $A_1$ and elements $c,u\not\in A_1$ such that $A\cong \{c,u\}\cup A_1$, where $A_1$ has a least nonzero element $a_0$, the partial order of $\{c,u\}\cup A_1$ is extended from $A_1$ by adding $0<c<x$ and $0<u<x$ $(\forall x\in A_1^*)$, and the commutative addition and multiplication in $\{c,u\}\cup A_1$ are extended respectively from those of the po-semiring $A_1$, and the following conditions are fulfilled: $$c+u=a_0,\,0+x=x,\,x+x=x,\, x+y=y\,(\forall x\in \{c,u\},\, \forall y\in A_1^*),$$ $$u^2=u,\,c^2=c,\, cu=0=0x,\,xy=x\, (\forall x\in\{c,u\},\,\forall y\in A_1^*).$$*]{} (2.2)
[*There exist an integral po-semiring $A_1$ and elements $c,u\not\in A_1$ such that $A\cong \{c,u\}\cup A_1$, where the partial order of $\{c,u\}\cup A_1$ is extended from $A_1$ by adding $0<c<u<a$ $(\forall a\in A_1^*)$, and the commutative addition and multiplication in $\{c,u\}\cup A_1$ are extended respectively from those of the po-semiring $A_1$, and the following conditions are fulfilled: $$x+y=\max\{x,y\},\, \forall x\in \{c,u\},\,\forall y\in \{c,u\}\cup A_1,$$ $$c^2=0,\,cu=0=0x,\, u^2\in \{c,u\},\,xy=x\, (\forall x\in\{c,u\},\,\forall y\in A_1^*).$$*]{} \(1) $\Lra$: Let $A$ be a po-semiring and denote $A_1=A\setminus Z(A)$. Then $A_1$ is an integral sub-po-semiring of $A$, and the remaining assertions follow from Lemma 4.1(1). $\Lla$: It is routine to check that $\{c\}\cup A_1$ is a po-semiring under the assumptions. Clearly $|Z(A)|=1.$ (2) $\Lra$: Assume that $A$ is a po-semiring such that $|Z(A)|=2$. Assume further $Z(A)=\{c,u\}$, and set $A_1=A\setminus \{c,u\}$. (2.1) If $c$ and $u$ are incomparable, then $c+u\in A_1^*$ and $Z(A)^2\not=0$ by Lemma 4.1(2). Assume $c+u=a_0$. By the proof of Lemma 4.1(2), we have $a_0y=(c+u)y=c+u=a_0,\ \forall y\in A_1^*$. Thus $a_0$ is the least nonzero element of $A_1$. Since both $c$ and $u$ are prime elements of $A$ by Lemma 4.1(2), $A_1$ is an integral sub-po-semiring of $A$. The other statements also follow directly from Lemma 4.1(2). (2.2) In the following, assume that $c$ and $u$ are comparable, say $c<u$. Then $c^2=0$, and $c$ is the least nonzero element of $A$. Thus $ac=c$ for any element $a\in A_1^*$. If $Z(A)^2\not=0$, then $u^2\not=0$. This implies $au=u,\ \forall a\in A_1^*$. Then $u<a,\,\forall a\in A_1^*$. Since $u$ is a prime element of $A$, $u\not\in A_1^*A_1^*$. Clearly, $c\not\in A_1^*A_1^*$. Thus $A_1$ is an integral sub-po-semiring of $A$. This completes the proof of necessity. $\Lla$: It is not hard to check that $\{c,u\}\cup A_1$ is a po-semiring under either assumption.
Clearly, $|Z(A)|=2$ and $Z(A)^2\not=0$ hold in either case. Note that $(C_3)$ holds in Case (2.2). [3mm]{} We remark that Theorem 4.2(2.1) gives a complete characterization of po-semirings $A$ with $|Z(A)|=2$ in which there exist no nilpotent elements. In particular, we have the following. [3mm]{}[**Corollary 4.3.**]{} (1) [*A po-semiring $A$ satisfies condition $(C_3)$, $|Z(A)|=2$ and $Z(A)^2\not=0$ if and only if either $A\cong \{0,1\}\times \{0,1\}$, or $A$ is the po-semiring constructed in Theorem 4.2(2.2).*]{} \(2) [*A po-semiring $A$ satisfies condition $(C_3)$, $|Z(A)|=2$ and there exist no nilpotent elements in $A$ if and only if $A\cong \{0,1\}\times \{0,1\}$.*]{} [3mm]{}Note that the po-semiring $\mathbb{I}(R)$ always satisfies condition $(C_3)$. Applying Corollary 4.3 to the po-semiring $\mathbb{I}(R)$, we obtain the following corollary, which is essentially the same as [@AA2 Theorem 3] by [@McL Theorem 2.1]. [3mm]{}[**Corollary 4.4.**]{} [*For any commutative ring $R$, the following statements are equivalent:*]{} \(1) [*Either $R\cong F_1\times F_2$ for some fields $F_i$, or $R$ is local with two nontrivial ideals.*]{} \(2) [*The annihilating-ideal graph $\mathbb{AG}(R)$ is isomorphic to the complete graph $K_2$.*]{} \(3) [*Either $R\cong F_1\times F_2$ for some fields $F_i$, or $R$ is a local ring with maximal ideal $J(R)=R{\alpha}$ for some ${\alpha}$ satisfying ${\alpha}^3=0$ and ${\alpha}^2\not=0$.*]{} $(1)\Lra (2):$ Clear. $(2)\Lra (3):$ Assume $\mathbb{AG}(R)\cong K_2$. Then $R$ has exactly two nontrivial ideals by [@BR Theorem 1.4]. In particular, $R$ is artinian and thus the ideal $J(R)$ is finitely generated. It follows that $J(R)=R{\alpha}$ for some ${\alpha}\in J(R)$. If $J(R)^2=0$, then either $R\cong F_1\times F_2$ for some fields $F_i$, or $R$ is a local ring by Lemma 4.1(2) and Corollary 4.3. If further $R$ is a local ring, then $J(R)=U(R){\alpha}$ since $J(R){\alpha}=0$.
Under this assumption, $R$ has exactly one nontrivial ideal, a contradiction. The contradiction shows that $R\cong F_1\times F_2$ if $J(R)^2=0$. Now assume $J(R)^2\not=0$. By Corollary 4.3, $R$ is a local ring with exactly two nontrivial ideals. Since $J(R)^2\not=J(R)$ by the Nakayama Lemma, $J(R)$ and $J(R)^2$ are the nontrivial ideals of $R$ by Theorem 4.2(2.2). Hence $J(R)^3=0$ and $J(R)^2\not=0$. In this case, $R$ is a local ring whose unique maximal ideal is $J(R)=R{\alpha}$, where ${\alpha}^3=0$ and ${\alpha}^2\not=0$. $(3)\Lra (1):$ Now assume that $R$ is a local ring with $J(R)=R{\alpha}$, where ${\alpha}^3=0,{\alpha}^2\not=0$. Clearly, $R{\alpha}\not=R{\alpha}^2$ and $R{\alpha}\cdot R{\alpha}^2=0$. We have $R=U(R)\cup R{\alpha}$ and hence $R{\alpha}=U(R){\alpha}\cup U(R){\alpha}^2\cup\{0\}$. Thus for any $\beta\in J(R)$, $R\beta=R{\alpha}$ if $\beta\in U(R){\alpha}$, while $R\beta=R{\alpha}^2$ if $0\not=\beta\in U(R){\alpha}^2$. This shows that $R{\alpha}$ and $R{\alpha}^2$ are all the possible nontrivial ideals of $R$. The complete isomorphic classification of finite local rings with at most three nontrivial ideals will be discussed in a separate paper; see [@WL2]. [3mm]{} Note that a po-semiring $A$ with $|Z(A)|=1$ always satisfies condition $(C_3)$. By Theorem 4.2, [*the structure of a po-semiring $A$ is completely determined by structures of integral po-semirings if $A$ satisfies one of the following conditions:*]{} \(1) $|Z(A)|=1$. \(2) $|Z(A)|=2$ and $Z(A)^2\not=0$. On the other hand, the structure of $A$ with $Z(A)^2=0$ seems to be a little more complicated. Let $A$ be a po-semiring with $Z(A)=\{c,u\}$ and $Z(A)^2=0$. Set $A_1=A\setminus Z(A)$. Then $A_1$ is an integral sub-po-semiring (see Proposition 4.5 below). Clearly, $u<x+u<1$ for any $x\not\in \{0,1,c,u\}$. By Lemma 4.1(2.2), $c$ is the least nonzero element of $A$, so $cA_1^*=\{c\}$ and $c+x=x,\,\forall x\in A^*$. But it is hard to determine the partial order between $u$ and the elements of $A_1^*$.
By Examples 4.6 and 4.7 below, it seems that Lemma 4.1(2.2) is the best possible result. [3mm]{}[**Proposition 4.5.**]{} [*$A$ is a po-semiring with condition $(C_3)$, $|Z(A)|=2$ and $Z(A)^2=0$ if and only if there exist an integral po-semiring $A_1$ and elements $c,u\not\in A_1$, such that $A\cong Z(A)\cup A_1$, where $Z(A)=\{c,u\}$, the addition and multiplication in $Z(A)\cup A_1$ are extended respectively from those of the po-semiring $A_1$, and the following conditions are fulfilled:*]{} \(1) [*$0+x=x \,(\forall x\in Z(A))$, $c+y=y \,(\forall y\in A^*)$, $u+1=1$, $u+u=u$, and $u+(x+y)=(u+x)+y,\,u+(u+x)=u+x\,(\forall x,y\in A_1^*)$.*]{} \(2) [*$Z(A)^2=0$, $0Z(A)=0$, $cx=c\,(\forall x\in A_1^*)$, and $0\not\in u\cdot A_1^*$.*]{} \(3) [*$xu=u$ and $yu=u$ imply $(xy)u=u$.*]{} \(4) [*$x\ge y$ in $A_1$ and $yu=u$ imply $xu=u$.*]{} \(5) [*For any $x,y,z\in A_1$, $x(y+u)=xy+xu$; also, $uy=c=uz$ implies $u(y+z)=c$.*]{} $\Lra$: By Lemma 4.1, it suffices to show that $A_1=A\setminus Z(A)$ is a sub-po-semiring. In fact, if $xy\in \{c,u\}$ for some $x,y\in A_1^*$, then we have $xc=x(yc)=(xy)c=0$, giving a contradiction. $\Lla$: We omit the detailed verification here. [3mm]{} Next we use Proposition 4.5 to construct some po-semirings $A$ with $|Z(A)|=2$ and $Z(A)^2=0$. [3mm]{}[**Example 4.6.**]{} Let $A=\{0,1,c,u,b_1,b_2,\cdots\}$ be a poset with $4\le |A|\le \aleph_0$, where $0<c<u<b_1<b_2<\cdots$. Define the binary addition by the max operation. Define three commutative multiplications by $$0\cdot x=0,\,1\cdot x=x\,(\forall x\in A),\,c^2=0,\,cu=0,\,u^2\in\{0,c,u\}, \,y\cdot z=\min\{y,z\}\ (\text{for the other nonzero $y,z$}).$$ Then it is easy to check that $A$ is a po-semiring for each choice of multiplication. Clearly, $|Z(A)|=2$ in each case, and $Z(A)^2=0$ when $u^2=0$. [3mm]{}[**Example 4.7.**]{} Let $Z(A)=\{c,u\}$ and $A=Z(A)\cup A_1$, where $A_1=\{0,1,b_1,b_2,\cdots\}$ is a chain $0<b_1<b_2<\cdots<1$, with $\max$ as the binary addition and with multiplication $b_ib_j=b_1$ ($\forall i,j\ge 1$). Then clearly $A_1$ is an integral po-semiring.
Fix a positive integer $n>1$ and extend the partial order of $A_1$ to $A$ by $0<c<u<b_n$. Extend the commutative addition to $A$ by $$0+x=x,\,1+x=1,\,x+x=x\,(\forall x\in \{c,u\}),\quad c+y=y\,(\forall y\in A^*)$$ and $$u+b_i=b_n \,(1\le i\le n-1),\quad u+b_j=b_j\,(j\ge n).$$ Extend the commutative multiplication to $A$ by $$0Z(A)=0=Z(A)^2,\quad ub_i=c\,(\forall i).$$ Note that $c+y=y\,(\forall y\in A^*)$ implies that $c$ is the least nonzero element of $A$; thus $c^2=0$ means that condition $(C_3)$ holds. Note that $0+x=x$ and $x+1=1$ imply $0\le x\le 1$. Then it is easy to check, using Proposition 4.5, that $A$ is a po-semiring with $|Z(A)|=2$ and $Z(A)^2=0$. [*Note that $u$ and $b_i$ are incomparable for any $i$ with $1\le i\le n-1$, while by Lemma 4.1(2), $u< p$ for any prime element $p$ with $p\not=c,u$.*]{} [3mm]{}For any po-semiring $A$, recall that $A^{(r)}$ is the direct product of $r$ copies of $A$. We have the following. [3mm]{}[**Proposition 4.8.**]{} [*$A$ is a po-semiring in which both DCC and condition $(C_3)$ hold for elements of $A$ if and only if there exist an integer $n\ge 1$ and a po-semiring $A_1$ such that either $A\cong A_1$ or $A\cong \{0,1\}^{(n)}\times A_1$, where $c^2=0$ for each minimal element $c$ of $A_1$ and DCC holds for elements of $A_1$.*]{} $\Lla:$ Let $A_1$ be a po-semiring, and assume that DCC holds for elements of $A_1$ and $c^2=0$ for each minimal element $c$ of $A_1$. Then there exists at least one minimal element $c$ in $A_1$ and hence $Z(A_1)\not={\emptyset}$. If $A\cong A_1$, then clearly condition $(C_3)$ holds in $A$. If $A\cong \{0,1\}^{(n)}\times A_1$ for some finite $n\ge 1$, then both DCC and condition $(C_3)$ also hold for elements of $A$. $\Lra:$ Assume that DCC holds for elements of $A$. Then there exists at least one minimal element in $A$. If further $c^2=0$ for each minimal element $c$ of $A$, then $Z(A)\not={\emptyset}$ and condition $(C_3)$ holds in $A$, and we take $A_1=A$.
In the following, assume that there exists an idempotent minimal element $e_1$ in $A$. Then by condition $(C_3)$, there exists a nonzero idempotent $f_1\in A$ such that $A\cong \{0,1\}\times Af_1$. Clearly, both DCC and condition $(C_3)$ hold for elements of the po-semiring $Af_1$, and an induction shows that $A\cong \{0,1\}^{(n)}\times A_1$ for some $n\ge 1$, where $A_1$ is a po-semiring in which $c^2=0$ for each minimal element $c$ of $A_1$ and DCC holds for elements of $A_1$. [gg]{} D.F. Anderson and P.S. Livingston. The zero-divisor graph of a commutative ring, [**J. Algebra**]{} [**217**]{} (1999) $434-447$. G. Aalipour, S. Akbari, R. Nikandish, M.J. Nikmehr and F. Shaveisi. Minimal prime ideals and cycles in annihilating-ideal graphs, [**Rocky Mountain J. Math.**]{} (accepted). G. Aalipour, S. Akbari, M. Behboodi, R. Nikandish, M.J. Nikmehr and F. Shaveisi. The classification of the annihilating-ideal graph of a commutative ring, [**Algebra Colloquium**]{} (accepted). M.F. Atiyah and I.G. MacDonald. [*Introduction to Commutative Algebra*]{}, [**Addison-Wesley**]{}, Reading, MA, $1969$. M. Behboodi and Z. Rakeei. The annihilating-ideal graph of commutative rings I, [**Journal of Algebra and its Applications**]{} (accepted). S. Bistarelli, U. Montanari and F. Rossi. Semiring-based constraint satisfaction and optimization, [**Journal of the ACM**]{} [**44:2**]{} (1997) $201-236$. S. Bistarelli, U. Montanari, F. Rossi, T. Schiex, G. Verfaillie and H. Fargier. Semiring-based CSPs and valued CSPs: frameworks, properties, and comparison, [**Constraints**]{} [**4:3**]{} (1999) $199-240$. F.R. DeMeyer, T. McKenzie and K. Schneider. The zero-divisor graph of a commutative semigroup, [**Semigroup Forum**]{} [**65**]{} (2002) $206-214$. T.Y. Lam. A First Course in Noncommutative Rings, [**Springer-Verlag**]{}, New York, $1991$. D.C. Lu and T.S. Wu. On the normal ideals of exchange rings, [**Siberian Math. J.**]{} [**49**]{} (2008) $663-668$. K.R. McLean.
Commutative artinian principal ideal rings, [**Proc. London Math. Soc.**]{} (3) [**26**]{} (1973) $249-272$. T.S. Wu and D.C. Lu. Sub-semigroups determined by the zero-divisor graph, [**Discrete Math.**]{} [**308**]{} (2008) $5122-5135$. T.S. Wu and D.C. Lu. Finite local rings with at most three nontrivial ideals. Preprint. H.Y. Yu and T.S. Wu. On realizing zero-divisor graphs of po-semirings. Preprint. [^1]: Corresponding author, tswu@sjtu.edu.cn (T. Wu) [^2]: ludancheng@suda.edu.cn (D. Lu) [^3]: yli@brocku.ca (Y. Li)
--- abstract: 'A non-iterative auto-calibration algorithm is presented. It deals with a minimal set of six scene points in three views taken by a camera with fixed but unknown intrinsic parameters. Calibration is based on the image correspondences only. The algorithm is implemented and validated on synthetic image data.' address: 'South Ural State University, 76 Lenin Avenue, Chelyabinsk 454080, Russia' author: - 'E.V. Martyushev' date: 'July 15, 2013' title: 'A Minimal Six-Point Auto-Calibration Algorithm' --- Introduction ============ The problem of camera calibration is a necessary part of computer vision applications such as path-planning and navigation for robots, self-parking systems, camera-based industrial detection and recognition, etc. At present, many calibration algorithms and techniques have been developed. Some of them require observing a planar pattern at several different orientations [@Heikkila; @Zhang]. Other methods use 3-dimensional calibration objects consisting of two or three pairwise orthogonal planes whose geometry is known with good accuracy [@Tsai]. In contrast with the methods just mentioned, *auto-calibration* does not require any special calibration objects [@Hartley92; @Hartley94; @MF; @MC; @QT; @Triggs], so only point correspondences in several uncalibrated views are required. This provides the auto-calibration approach with great flexibility and makes it indispensable in some real-time applications. In this paper we give a new non-iterative solution to the auto-calibration problem in the minimal case of six scene points in three views, provided that the intrinsic parameters of a moving camera are fixed. Our method consists of two major steps. First, we use the efficient six-point three-view algorithm from [@SZHT] to solve for projective reconstruction.
Then, using the well-known constraints on the absolute dual quadric [@HZ; @Triggs], we produce a system of non-linear polynomial equations, and resolve it in a numerically stable way by a series of Gauss-Jordan eliminations with partial pivoting. The rest of the paper is organized as follows. In Section 2, we briefly recall how to construct a projective reconstruction from six matched scene points in three uncalibrated views. In Section 3, an algorithm for upgrading the projective reconstruction to a metric one is described. In Section 4, we test the algorithm on a set of synthetic data. Section 5 concludes. Notation -------- We use $\mathbf a, \mathbf b, \ldots$ for column vectors, and $\mathbf A, \mathbf B, \ldots$ for matrices. For a matrix $\mathbf A$, the entries are $A_{ij}$ or $(\mathbf A)_{i, j}$, the transpose is $\mathbf A^{\mathrm T}$, and the determinant is $\det(\mathbf A)$. For two vectors $\mathbf a$ and $\mathbf b$, the vector product is $\mathbf a\times \mathbf b$, and the scalar product is $\mathbf a^{\mathrm T} \mathbf b$. We use $\mathbf I_n$ for the identity matrix of size $n\times n$ and $\mathbf 0_n$ for the zero $n$-vector. Projective Reconstruction {#sec:proj} ========================= First of all, to avoid any degeneracies, we restrict ourselves to the “general position case” both for scene points and camera motions, i.e., the sequence of camera motions is assumed to be non-critical and none of the observed points lie on critical surfaces in the sense of [@Sturm]. In particular, this means that the scene is non-planar and the motion is not a pure translation or rotation around the same axis. Given three uncalibrated images of six points of a rigid scene, we first produce a projective reconstruction of the cameras by applying the minimal 3-view algorithm from [@SZHT].
Recall that the output of this algorithm is either one or three real solutions for the homogeneous coordinates of the sixth scene point $\mathbf X_6$, whereas the first five points are chosen to be the vectors of the standard basis of the projective 3-space. The twelve entries of the camera matrix $\mathbf P_i$ are then recovered by solving the twelve linearly independent equations (for each $i = 1, 2, 3$): $$\mathbf x_{ij} \times \mathbf P_i \mathbf X_j = \mathbf 0_3, \quad j = 1, \ldots, 6,$$ where $\mathbf x_{ij}$ is the image of $\mathbf X_j$ under the projection $\mathbf P_i$. Thus we find $$\mathbf P_i = \begin{bmatrix}\mathbf A_i & \mathbf a_i\end{bmatrix}, \quad i = 1, 2, 3.$$ Using the projective ambiguity [@HZ], we transform the obtained camera matrices to $$\begin{split} \label{eq:projective} \mathbf P'_1 &= \mathbf P_1 \mathbf H_0 = \begin{bmatrix}\mathbf I_3 & \mathbf 0_3\end{bmatrix},\\ \mathbf P'_2 &= \mathbf P_2 \mathbf H_0 = \begin{bmatrix}\mathbf B_2 & \mathbf b_2\end{bmatrix},\\ \mathbf P'_3 &= \mathbf P_3 \mathbf H_0 = \begin{bmatrix}\mathbf B_3 & \mathbf b_3\end{bmatrix}, \end{split}$$ where $$\mathbf H_0 = \begin{bmatrix}\mathbf A_1^{-1} & -\mathbf A_1^{-1}\mathbf a_1 \\ \mathbf 0_3^{\mathrm T} & 1\end{bmatrix}.$$ Metric Reconstruction {#sec:metric} ===================== The projective reconstruction  is the starting point for our auto-calibration algorithm. Let the metric camera matrices be represented as $$\begin{split} \label{eq:metric} \mathbf P_1^M &= \mathbf K\begin{bmatrix}\mathbf I_3 & \mathbf 0_3 \end{bmatrix},\\ \mathbf P_2^M &= \mathbf K\begin{bmatrix}\mathbf R_2 & \mathbf t_2 \end{bmatrix},\\ \mathbf P_3^M &= \mathbf K\begin{bmatrix}\mathbf R_3 & \mathbf t_3 \end{bmatrix}, \end{split}$$ where $\mathbf R_i$ and $\mathbf t_i$ are the rotation matrix and translation vector, respectively, and $\mathbf K$ is an upper triangular matrix called the *calibration matrix* of the camera. It is assumed to be identical for all three views.
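The linear recovery of the camera matrices in Section 2 is easy to prototype: the system $\mathbf x_{ij} \times \mathbf P_i \mathbf X_j = \mathbf 0_3$ is linear in the entries of $\mathbf P_i$, so stacking the constraints for all six points and taking the null vector gives $\mathbf P_i$ up to scale. The NumPy sketch below uses synthetic random data; it is an illustration of this step, not the paper's implementation.

```python
# Recover a camera matrix P from x_j x (P X_j) = 0: each correspondence gives
# three rows (two independent), and vec(P) spans the null space of the stack.
import numpy as np

rng = np.random.default_rng(0)
P_true = rng.standard_normal((3, 4))          # a random projective camera
X = rng.standard_normal((6, 4))               # six scene points (homogeneous)
x = (P_true @ X.T).T                          # noise-free images x_j = P X_j

def skew(v):
    """Cross-product matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# with p = P.reshape(12) (row-major), P @ X_j == np.kron(I_3, X_j) @ p
A = np.vstack([skew(x[j]) @ np.kron(np.eye(3), X[j]) for j in range(6)])
_, _, Vt = np.linalg.svd(A)                   # 18x12 system, rank 11
P_est = Vt[-1].reshape(3, 4)                  # null vector, up to scale

# fix the unknown scale/sign by projecting onto the ground truth
P_est *= (P_true.ravel() @ P_est.ravel()) / (P_est.ravel() @ P_est.ravel())
```

With noise-free data the recovered matrix matches the true camera to machine precision; with noisy image points the same SVD gives the least-squares estimate.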
Our goal is to estimate $\mathbf K$ and then upgrade the projective cameras to the metric ones. Auto-calibration determines a $4\times 4$ projective matrix $\mathbf H$, that transforms the projective camera $\mathbf P'_i$ from  into a metric camera $\mathbf P_i^M$ from , i.e., $$\label{eq:update} \mathbf P_i^M = \mathbf P'_i \mathbf H, \quad i = 1, 2, 3.$$ The matrix $\mathbf H$ must have the form [@HZ]: $$\mathbf H = \begin{bmatrix}\mathbf K & \mathbf 0_3 \\ -\mathbf p^{\mathrm T}\mathbf K & 1 \end{bmatrix}$$ for some 3-vector $\mathbf p$. Then the entries of $\mathbf H$ are constrained by [@Faugeras; @HZ] $$\begin{split} \label{eq:autocalib} \lambda\boldsymbol{\omega}^* &= \mathbf P'_2\mathbf{Q}^*_\infty {\mathbf P'_2}^{\mathrm T},\\ \mu\boldsymbol{\omega}^* &= \mathbf P'_3\mathbf{Q}^*_\infty {\mathbf P'_3}^{\mathrm T}, \end{split}$$ where $\boldsymbol{\omega}^* = \mathbf K \mathbf K^{\mathrm T}$ is the *dual image of the absolute conic*, $\lambda$, $\mu$ are scalars and $4\times 4$ matrix $$\mathbf{Q}^*_\infty = \begin{bmatrix}\boldsymbol{\omega}^* & \mathbf q \\ \mathbf q^{\mathrm T} & r \end{bmatrix},$$ with $\mathbf q = - \boldsymbol{\omega}^* \mathbf p$, $r = \mathbf p^{\mathrm T} \boldsymbol{\omega}^* \mathbf p$, is called the *absolute dual quadric* [@Triggs]. Thus, constraints  give $12$ equations in $11$ variables: $r$, $q_1$, $q_2$, $q_3$, five components of $\boldsymbol{\omega}^*$ (recall that $\omega^*_{33} = 1$), $\lambda$ and $\mu$. Let us rewrite these equations in form $$\label{eq:Cx} \mathbf C\,\mathbf x = \mathbf 0_{12},$$ where $$\label{eq:matrixC} \mathbf C = \mathbf C(\lambda, \mu) = \begin{bmatrix} \mathbf 0_{6\times 4} & \lambda\mathbf I_6 \\ \mathbf 0_{6\times 4} & \mu\mathbf I_6 \end{bmatrix} - \mathbf D,$$ $\mathbf D$ is a $12\times 10$ scalar matrix, and $$\mathbf x = \begin{bmatrix} r & q_1 & q_2 & q_3 & \omega^*_{11} & \omega^*_{12} & \omega^*_{13} & \omega^*_{22} & \omega^*_{23} & 1\end{bmatrix}^{\mathrm T}$$ is a monomial vector. 
It follows that the determinant of any $10\times 10$ submatrix of $\mathbf C$ must vanish. Denote by $S_i(\lambda, \mu)$ the determinant of the submatrix of $\mathbf C$ obtained by eliminating the rows with numbers $i$ and $i+6$, for $i = 1, \ldots, 6$. Hence we get the system $S_i = 0$ of polynomial equations in only two variables $\lambda$ and $\mu$. Due to the form  of matrix $\mathbf C$, we do not need to compute a $10\times 10$ functional determinant here. Each polynomial $S_i$ can be found as $$\det(\mathbf C_1 + \lambda \mathbf C_2 + \mu \mathbf C_3),$$ where the $5\times 5$ scalar matrices $\mathbf C_j$ are obtained by a partial Gauss-Jordan elimination on matrix $\mathbf C$. Let us rewrite the system $S_i = 0$, $i = 1, \ldots, 6$, in the form $$\label{eq:F0y} \mathbf F_0\,\mathbf y = \mathbf 0_6,$$ where $\mathbf F_0$ is a $6\times 18$ coefficient matrix, and $$\begin{gathered} \label{eq:monomials} \mathbf y = \left[\lambda^4\mu \quad \lambda^3\mu^2 \quad \lambda^2\mu^3 \quad \lambda\mu^4 \quad \lambda^4 \quad \mu^4 \quad \lambda^3\mu \quad \lambda^2\mu^2 \right.\\ \left. \lambda\mu^3 \quad \lambda^3 \quad \lambda^2\mu \quad \lambda\mu^2 \quad \mu^3 \quad \lambda^2 \quad \lambda\mu \quad \mu^2 \quad \lambda \quad \mu\right]^{\mathrm T}\end{gathered}$$ is a monomial vector. To solve the system  in a numerically stable way, we perform the following sequence of matrix transformations: $$\label{eq:seq} \mathbf F_0 \to \tilde{\mathbf F}_0 \to \mathbf F_1 \to \tilde{\mathbf F}_1 \to \mathbf F_2 \to \tilde{\mathbf F}_2 \to \mathbf F_3 \to \tilde{\mathbf F}_3,$$ where each $\tilde{\mathbf F}_i$ is obtained from $\mathbf F_i$ by Gauss-Jordan elimination with partial pivoting. The matrix $\mathbf F_1$ of size $8\times 18$ is obtained from $\tilde{\mathbf F}_0$ by adding two new rows: the first corresponds to the last row of $\tilde{\mathbf F}_0$ multiplied by $\lambda$, and the second to the next-to-last row of $\tilde{\mathbf F}_0$ multiplied by $\mu$.
The matrix $\mathbf F_2$ of size $12\times 18$ is obtained from $\tilde{\mathbf F}_1$ by adding four new rows corresponding to the last two rows of $\tilde{\mathbf F}_1$ multiplied by $\lambda$ and $\mu$. The matrix $\mathbf F_3$ of size $17\times 18$ is obtained from $\tilde{\mathbf F}_2$ by adding five new rows. We multiply the last two rows of $\tilde{\mathbf F}_2$ by $\lambda$ and $\mu$, and thus get four additional rows. One more row is obtained by multiplying the 10th row of $\tilde{\mathbf F}_2$ by $\mu$. Finally, we obtain $$\mu = - (\tilde{\mathbf F}_3)_{16, 18}, \quad \lambda = - \mu\,(\tilde{\mathbf F}_3)_{17, 18}.$$ From an algebraic point of view, the above sequence  *interreduces* the ideal $J = \langle S_i \mid i = 1, \ldots, 6\rangle$. The result is the Gröbner basis of $J$ with respect to the graded lexicographic order. It consists of two polynomials represented by the last two rows of matrix $\tilde{\mathbf F}_3$. Having found $\lambda$ and $\mu$, we compute the entries of $\boldsymbol{\omega}^*$ by performing the Gauss-Jordan elimination with partial pivoting on matrix $\mathbf C$ in . Finally, we compute the calibration matrix by the Cholesky decomposition of $\boldsymbol{\omega}^* = \mathbf K \mathbf K^{\mathrm T}$, and then find (up to scale) the metric camera matrices $\mathbf P_i^M$ by . Note that the matrices $\mathbf R_i$ estimated from  are not in general rotations and thus need to be corrected [@Zhang]. We used the singular value decomposition $\mathbf R_i = \mathbf U_i \mathbf D_i \mathbf V_i^{\mathrm T}$ and then replaced $\mathbf R_i$ by $\tilde{\mathbf R}_i = \mathbf U_i \mathbf V_i^{\mathrm T}$. It is well known that the rotation matrix $\tilde{\mathbf R}_i$ is the closest to $\mathbf R_i$ with respect to the Frobenius norm. Experiments on Synthetic Data {#sec:synth} ============================= ![Numerical error distribution.
Median error is $2.8\times 10^{-9}$.[]{data-label="fig:numer_err"}](numer_err.eps) ![Rotational and translational errors relative to noise level.[]{data-label="fig:transl_err"}](transl_err.eps) The algorithm has been implemented in C/C++. All computations were performed in double precision. The synthetic data setup is given in Table \[tab:setup\], where the baseline length is the distance between the first and third camera centers. The second camera center varies randomly around the baseline midpoint with amplitude $0.025$.

  Distance to the scene   1
  Scene depth             0.5
  Baseline length         0.1
  Image dimensions        $352 \times 288$
  Calibration matrix      $\begin{bmatrix}425 & 0 & 176 \\ 0 & 425 & 144 \\ 0 & 0 & 1 \end{bmatrix}$

  : Synthetic data setup.[]{data-label="tab:setup"}

We have measured the numerical error by the value $$\frac{\|\mathbf K - \hat{\mathbf K}\|}{\|\hat{\mathbf K}\|},$$ where $\hat{\mathbf K}$ is the ground-truth calibration matrix and $\|\cdot \|$ is the Frobenius norm. The distribution of the numerical error over a total of $10^6$ trials is reported in Figure \[fig:numer\_err\]. The running time information for our implementation of the algorithm is given in Table \[tab:timing\].

  Step      Projective reconstr.   Metric reconstr.
  --------- ---------------------- ------------------
  $\mu s$   7.9                    28.4/root

  : Average running times for the algorithm steps on a system with an Intel Core i5 2.3 GHz processor.[]{data-label="tab:timing"}

In Figure \[fig:transl\_err\], we demonstrate the stability of the algorithm under increasing image noise. We have added Gaussian noise with a standard deviation varying from 0 to 1 pixel in a $352 \times 288$ image. Each point is the median of $10^6$ trials.
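The error measure just defined is straightforward to compute; a small sketch (function name ours, the perturbed estimate is an illustrative input):

```python
def rel_error(K, K_hat):
    """Relative calibration error ||K - K_hat||_F / ||K_hat||_F."""
    num = sum((a - b) ** 2 for ra, rb in zip(K, K_hat) for a, b in zip(ra, rb))
    den = sum(b ** 2 for rb in K_hat for b in rb)
    return (num / den) ** 0.5

# Ground truth from the synthetic setup and a slightly perturbed estimate.
K_hat = [[425.0, 0.0, 176.0], [0.0, 425.0, 144.0], [0.0, 0.0, 1.0]]
K_est = [[425.0, 0.0, 177.0], [0.0, 425.0, 144.0], [0.0, 0.0, 1.0]]
err = rel_error(K_est, K_hat)
```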
Outliers {#ssec:ransac} -------- To test the algorithm in the presence of outliers (incorrect matches), we have modeled a sequence of 70 cameras with centers on a circle, and 400 scene points viewed by all the cameras. For each image, we have added Gaussian noise with a one-pixel standard deviation and $20\%$ of outliers (uniformly distributed points in the image plane). The auto-calibration algorithm was used as a hypothesis generator within a random sample consensus (RANSAC) framework [@FB]. For better computational efficiency we used the *preemptive* RANSAC from [@Nister]. The motion hypotheses were scored by the Sampson approximation to geometric error [@HZ]. The number of hypotheses was set to 400 for each camera position, and the preemption block size was set to 100. The results are presented in Figure \[fig:calib\] and Figure \[fig:track\]. No iterative refinements were performed in the estimation. The calibration matrix averaged from the image sequence is as follows: $$\mathbf K = \begin{bmatrix}399.52 & 2.16 & 161.54 \\ 0 & 405.37 & 142.14 \\ 0 & 0 & 1 \end{bmatrix}.$$ ![Skew parameter $K_{12}$ estimated from the sequence of 70 synthetic images. Average value of $K_{12}$ is $2.16$.[]{data-label="fig:calib"}](calib.eps) ![The camera track estimated from the sequence of 70 synthetic images. The red solid boxes are the ground truth camera positions.[]{data-label="fig:track"}](track.eps) Discussion {#sec:concl} ========== A new non-iterative auto-calibration algorithm is presented. It derives the camera calibration from the smallest possible number of views and scene points. Experiments on synthetic data confirm its accuracy and high-speed performance. The algorithm is quite flexible. It is reliable, for example, even in the case of pure rotations (baseline $= 0$) if only the calibration matrix is needed. [99]{} Faugeras, O. *Three-Dimensional Computer Vision: A Geometric Viewpoint*. MIT Press, 1993. Fischler, M., Bolles, R.
*Random Sample Consensus: a Paradigm for Model Fitting with Application to Image Analysis and Automated Cartography*. Commun. Assoc. Comp. Mach., Vol. 24, 381–395, 1981. Hartley, R. *Estimation of Relative Camera Positions for Uncalibrated Cameras*. Proceedings of the 2nd European Conference on Computer Vision, Vol. 588 of Lecture Notes in Computer Science, 579–587, 1992. Hartley, R.I. *Self-calibration from Multiple Views with a Rotating Camera*. Proceedings of the 3rd European Conference on Computer Vision, Vol. 800–801 of Lecture Notes in Computer Science, 471–478, 1994. Hartley, R., Zisserman, A. *Multiple View Geometry in Computer Vision. Second Edition*. Cambridge University Press, 2004. Heikkilä, J. *Geometric Camera Calibration Using Circular Control Points*. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 10, 1066–1077, 2000. Maybank, S.J., Faugeras, O.D. *A Theory of Self Calibration of a Moving Camera*. International Journal of Computer Vision, Vol. 8, No. 2, 123–151, 1992. Mendonca, P.R.S., Cipolla, R. *A Simple Technique for Self-Calibration*. Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, 500–505, 1999. Nistér, D. *Preemptive RANSAC for Live Structure and Motion Estimation*. Proceedings of the Ninth IEEE International Conference on Computer Vision, 199–206, 2003. Quan, L., Triggs, B. *A Unification of Autocalibration Methods*. Proceedings of the Fourth Asian Conference on Computer Vision, 917–922, 2000. Schaffalitzky, F., Zisserman, A., Hartley, R.I., Torr, P.H.S. *A Six Point Solution for Structure and Motion*. Proceedings of the European Conference on Computer Vision, Vol. 1, 632–648, 2000. Sturm, P. *Critical Motion Sequences for Monocular Self-Calibration and Uncalibrated Euclidean Reconstruction*. Proceedings of the International Conference on Computer Vision and Pattern Recognition, 1100–1105, 1997. Triggs, B. *Autocalibration and the Absolute Quadric*.
Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, 609–614, 1997. Tsai, R.Y. *A Versatile Camera Calibration Technique for High-accuracy 3D Machine Vision Metrology Using Off-the-shelf TV Cameras and Lenses*. J. Robotics and Automation, Vol. 3, No. 4, 323–344, 1987. Zhang, Z. *A Flexible New Technique for Camera Calibration*. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 11, 1330–1334, 2000.
--- abstract: | Choose a topos $\calE$. There are several different “notions of sheafness” on $\calE$. How do we visualize them? Let’s refer to the classifier object of $\calE$ as $Ω$, and to its Heyting Algebra of truth-values, $\Sub(1_\calE)$, as $H$; we will sometimes call $H$ the “logic” of the topos. There is a well-known way of representing notions of sheafness as morphisms $j:Ω→Ω$, but these ‘$j$’s yield big diagrams when we draw them explicitly; here we will see a way to represent these ‘$j$’s as maps $J:H→H$ in a way that is much more manageable. In the previous paper of this series (called [@OchsPH1] from here on) we showed how certain toy models of Heyting Algebras, called “ZHAs”, can be used to develop visual intuition for how Heyting Algebras and Intuitionistic Propositional Logic work; here we will extend that to sheaves. The full idea is this: [*notions of sheafness*]{} correspond to [*local operators*]{} and vice-versa; [*local operators*]{} correspond to [*J-operators*]{} and vice-versa; if our Heyting Algebra $H$ is a ZHA then [*J-operators*]{} correspond to [*slashings*]{} on $H$, and vice-versa; [*slashings*]{} on $H$ correspond to [*“sets of question marks”*]{} and vice-versa, and each set of question marks induces a notion of [*erasing and reconstructing*]{}, which induces a sheaf. Also, every ZHA $H$ corresponds to an [(acyclic) 2-column graph]{}, and vice-versa, and for any two-column graph $(P,A)$ the logic of the topos $\Set^{(P,A)}$ is exactly the ZHA $H$ associated to $(P,A)$. The introduction of [@OchsPH1] discusses two different senses in which a mathematical text can be “for children”. The first sense involves some precise metamathematical tools for transferring knowledge back and forth between a general case “for adults” and a toy model “for children”; the second sense is simply that the text’s presentation has few prerequisites and never becomes too abstract.
Here we will use the second sense: everything here, except for the last section, should be accessible to students who have taken a course on Discrete Mathematics and read [@OchsPH1]. This means that categories, toposes, sheaves and the maps $j:Ω→Ω$ only appear in the last section, and before that we deal only with the J-operators $J:H→H$, how they correspond to slashings and sets of question marks, and how they form an algebra. author: - Eduardo Ochs bibliography: - 'catsem-u.bib' title: 'Planar Heyting Algebras for Children 2: Local Operators, J-Operators, and Slashings' ---
--- abstract: 'Fragmentation is the dominant mechanism for hadron production with high transverse momentum. For spin-triplet S-wave heavy quarkonium production, the contribution of a gluon fragmenting to the color-singlet channel has been calculated numerically since 1993. However, no analytic expression has been available up to now because of its complexity. In this paper, we calculate both polarization-summed and polarized fragmentation functions of a gluon fragmenting to a heavy quark-antiquark pair with quantum number ${{{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}}$. Our calculations are performed in two different frameworks. One is the widely used nonrelativistic QCD factorization, and the other is the newly proposed soft gluon factorization. In either case, we calculate at both leading order and next-to-leading order in the velocity expansion. All of our final results are presented in terms of compact analytic expressions.' author: - 'Peng Zhang$^{1}$' - 'Yan-Qing Ma$^{1,2,3}$' - 'Qian Chen$^{1}$' - 'Kuang-Ta Chao$^{1,2,3}$' title: 'Analytical calculation for the gluon fragmentation into spin-triplet S-wave quarkonium' --- Introduction ============ As the heavy quark mass $m_Q$ is much larger than the QCD nonperturbative scale $\Lambda _{QCD}$, the production of a heavy quark-antiquark ($Q\bar{Q}$) pair is perturbatively calculable. Because the binding energy of the $Q\bar{Q}$ pair in a heavy quarkonium is of the order of $\Lambda _{QCD}$, the hadronization of the $Q\bar{Q}$ pair into a heavy quarkonium is nonperturbative. Therefore, the study of quarkonium production can help to understand both perturbative and nonperturbative physics in QCD. Nevertheless, more than 40 years after the discovery of the $J/\psi$, the production mechanism of heavy quarkonium, the simplest system under the strong interaction, is still not well understood. Recently, two of the present authors proposed a soft gluon factorization (SGF) theory to describe quarkonium production and decay [@Ma:2017xno; @machao].
On the one hand, SGF is as rigorous as the currently widely used nonrelativistic QCD (NRQCD) factorization [@Bodwin:1994jh], which means that either both of them are correct to all orders in perturbation theory, or both of them break down at a sufficiently large order in the $\alpha_s$ expansion. On the other hand, it was argued that the convergence of the velocity expansion in SGF should be much better than that in NRQCD [@Ma:2017xno]. Thus, SGF may resolve some difficulties encountered in NRQCD for quarkonium production. In this paper, we use SGF and NRQCD to compute the gluon fragmentation function (FF) to $J^{PC}=1^{--}$ quarkonia, which is useful for understanding the production of these quarkonia at high transverse momentum $p_T$. According to QCD collinear factorization [@Collins:1989gx], the inclusive production cross section of a specific hadron $H$ at very high $p_T$ is dominated by the fragmentation mechanism at leading power (LP) [^1], $${\mathrm{d}}\sigma _{A+B\rightarrow H(p_T)+X} = \sum_i {\mathrm{d}}\hat{\sigma} _{A+B \rightarrow i(p_T /z)+X} \otimes D_{i\to H}(z,\mu) + \mathcal{O}(1/p_T^2) \, ,$$ where $i$ runs over all quarks and gluons, and $z$ is the light-cone momentum fraction carried by $H$ with respect to the parent parton. The hard part ${\mathrm{d}}\hat{\sigma} _{A+B \rightarrow i(p_T /z)+X}$ can be calculated perturbatively, while the FF $D_{i\to H}(z,\mu)$, describing the probability distribution of the hadronization from $i$ to $H$, is nonperturbative and universal. The dependence of the FF on the factorization scale $\mu$ is controlled by the DGLAP evolution equation [@Gribov:1972ri; @Altarelli:1977zs; @Dokshitzer:1977sg], $$\mu \frac{{\mathrm{d}}}{{\mathrm{d}}\mu} D_{i\to H}(z,\mu) = \sum_j \int_z^1 \frac{{\mathrm{d}}\xi}{\xi} P_{ij} \left( \frac{z}{\xi},\alpha_s(\mu) \right) D_{j\to H}(\xi,\mu) \, ,$$ where $P_{ij}$ are splitting functions that can be calculated perturbatively.
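The right-hand side of the DGLAP equation is a Mellin-type convolution, $(P \otimes D)(z) = \int_z^1 \frac{{\mathrm{d}}\xi}{\xi}\, P(z/\xi)\, D(\xi)$. A minimal numerical sketch of this convolution (the midpoint rule, toy functions in place of realistic splitting functions and FFs):

```python
# (P (x) D)(z) = ∫_z^1 dξ/ξ P(z/ξ) D(ξ), evaluated with the midpoint rule.
# P and D below are toy functions, not the actual splitting functions.

def convolve(P, D, z, n=2000):
    h = (1.0 - z) / n
    total = 0.0
    for i in range(n):
        xi = z + (i + 0.5) * h          # midpoint of the i-th subinterval
        total += P(z / xi) * D(xi) / xi
    return total * h

# Toy check: with P(x) = x and D(ξ) = 1, the integral is z ∫_z^1 dξ/ξ² = 1 - z.
print(convolve(lambda x: x, lambda xi: 1.0, 0.3))  # ≈ 0.7
```

Evolving the FF then amounts to stepping $D$ in $\ln\mu$ using such convolutions on the right-hand side.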
Based on this evolution, the FF at an arbitrary perturbative scale $\mu$ can be determined from the FF at an initial scale $\mu_0$. In the case that $H$ is a heavy quarkonium, there is an intrinsic hard scale $m_Q$ in the FFs. Usually, we can choose the initial scale $\mu_0 \gtrsim 2 m_Q $, so that $\ln (\mu_0^2/m_Q^2)-$type logarithms are not large. As $m_Q\gg\Lambda_{\text{QCD}}$, FFs evaluated at $\mu_0$ can be further factorized as perturbatively calculable short-distance coefficients (SDCs) multiplied by a nonperturbative part at the scale $m_Q v$ and below. If one uses either SGF or NRQCD to do this factorization, one needs to sum over the states of the intermediate $Q\bar{Q}$ pair, which are usually expressed in the spectroscopic notation ${{^{{2S+1}}\hspace{-0.6mm}L_{J}^{[c]}}}$ with $c=1~\text{or}~8$ to denote color singlet or color octet. For the gluon fragmentation to $1^{--}$ quarkonium, like the ${{J/\psi}}$, $\psi(2S)$ and $\Upsilon(nS)$, the dominant contribution comes from the ${{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}$ intermediate state according to the velocity scaling rule [@Bodwin:1994jh]. In the SGF framework, a calculation of the fragmentation function from a gluon to the ${{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}$ intermediate state is still absent. In the NRQCD framework, SDCs of a gluon fragmenting into the polarization-summed ${{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}$ intermediate state have been calculated numerically in Refs. [@Braaten:1993rw; @Braaten:1995cj] for the $v^0$ contribution and in Ref. [@Bodwin:2003wh] for the $v^2$ correction. However, analytical results are still absent because of the complexity of the problem. One reason is that there are two gluons emitted in the final state, so the phase space integral is similar to a two-loop integral. The other reason is that the light-cone momentum is involved in the definition of the fragmentation function, which makes the phase space integral more complicated than usual. In Refs.
[@Braaten:1993rw; @Braaten:1995cj], there is a four-dimensional integral left for numerical computation. In Ref. [@Bodwin:2003wh], the authors perform a change of variables and analytically integrate out two more dimensions, but there is still a two-dimensional integral that has to be calculated numerically. Moreover, there has been no calculation of polarized SDCs based on the definition of the fragmentation function, whose results would be useful for understanding the polarization puzzle [@Abulencia:2007us; @Butenschoen:2012px; @Chao:2012iv; @Gong:2012ug; @Bodwin:2014gia]. In this paper, we analytically calculate SDCs for a gluon fragmenting into the ${{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}$ state, separately in the SGF and NRQCD frameworks. We include both the $v^0$ and $v^2$ contributions in the nonrelativistic expansion. In all cases, we provide transversely polarized SDCs in addition to polarization-summed SDCs, while longitudinally polarized SDCs can be obtained by their difference. The rest of the paper is organized as follows. In Sec. \[sec:def\], we first introduce the definition of the gluon FF, and then describe in detail how to apply SGF and NRQCD to calculate the FF. The resulting expressions are complicated phase space integrals. In Sec. \[sec:cal\], we use the integration-by-parts (IBP) method [@Chetyrkin:1981qh; @Smirnov:2012gma; @Smirnov:2014hma; @Lee:2013mka] to express these phase space integrals in terms of a set of basis integrals, called master integrals. We then calculate these master integrals. Almost all master integrals can be easily calculated except one, which we calculate by constructing and solving a differential equation with a trivial initial condition. Then we exhibit the analytical results and the large $z$ behaviour of the FFs. Finally, we present numerical results and a discussion in Sec. \[sec:summary\]. Some coefficients of the analytical results calculated in this paper are given in the Appendix.
Factorization of Quarkonium Fragmentation Functions {#sec:def} ================================================= Definition of fragmentation functions ------------------------------------- In this paper, we use light-cone coordinates where a four-vector $V$ can be expressed as $$\begin{aligned} \begin{split} V & = (V^+,V^-,\boldsymbol{V}_{\perp})=(V^+,V^-,V^1,V^2) \, , \\ V^+ & = (V^0+V^3)/\sqrt{2} \, , \\ V^- & = (V^0-V^3)/\sqrt{2} \, . \end{split}\end{aligned}$$ The scalar product of two four-vectors $V$ and $W$ then becomes $$V \cdot W = V^+ W^- + V^- W^+ - \boldsymbol{V}_{\perp} \cdot \boldsymbol{W}_{\perp} \, .$$ We introduce a light-like vector $n=(0,1,\boldsymbol{0}_{\perp})$, so that $n\cdot V=V^+$. The Collins-Soper definition of the FF for a gluon fragmenting into a hadron (quarkonium) is given by [@Collins:1981uw] $$\begin{aligned} \label{eq:defFF} \begin{split} D_{g \rightarrow H}(z,\mu_0)= & \frac{-g_{\mu\nu}z^{D-3}}{2 \pi P_c^{+}(N_{c}^{2}-1)(D-2)} \int_{-\infty}^{+\infty}\mathrm{d}x^{-} e^{-i z P_c^{+} x^{-}} \\ & \times \langle 0 | G_{c}^{+\mu}(0) \mathcal{E}^{\dag}(0,0,\boldsymbol{0}_{\perp})_{cb} \mathcal{P}_{H(P_H)} \mathcal{E}(0,x^{-},\boldsymbol{0}_{\perp})_{ba} G_{a}^{+\nu}(0,x^{-},\boldsymbol{0}_{\perp}) | 0 \rangle \, , \end{split}\end{aligned}$$ where $G^{\mu\nu}$ is the gluon field-strength operator, $P_H$ and $P_c$ are respectively the momenta of the fragmenting hadron and the initial virtual gluon, and $z$ is the “$+$” momentum fraction of the initial virtual gluon carried by the hadron. It is convenient to choose the frame in which the hadron has zero transverse momentum, $P_H = (z P_c^+, M_H^2/(2 z P_c^+),\boldsymbol{0}_{\perp})$, where $M_H$ is the mass of the hadron. The projection operator $\mathcal{P}_{H(P_H)}$ is given by $$\label{eq:projectH} \mathcal{P}_{H(P_H)} = \sum_X |H(P_H)+X \rangle \langle H(P_H)+X|\,,$$ where $X$ runs over all unobserved particles.
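The light-cone scalar product above is just the Minkowski product rewritten in the $(V^+,V^-,\boldsymbol{V}_\perp)$ components; a quick numerical check of this equivalence (illustrative four-vectors):

```python
import math

# Light-cone components as in the text: V± = (V0 ± V3)/√2, V⊥ = (V1, V2).
def lightcone(V):
    V0, V1, V2, V3 = V
    s = math.sqrt(2.0)
    return ((V0 + V3) / s, (V0 - V3) / s, V1, V2)

def dot_minkowski(V, W):
    # Metric signature (+,-,-,-) in Cartesian components (V0, V1, V2, V3).
    return V[0] * W[0] - V[1] * W[1] - V[2] * W[2] - V[3] * W[3]

def dot_lightcone(V, W):
    # V·W = V+ W- + V- W+ - V⊥·W⊥, components ordered (V+, V-, V1, V2).
    return V[0] * W[1] + V[1] * W[0] - V[2] * W[2] - V[3] * W[3]

V, W = (2.0, 0.3, -0.4, 1.1), (1.5, -0.2, 0.7, 0.6)
assert abs(dot_minkowski(V, W)
           - dot_lightcone(lightcone(V), lightcone(W))) < 1e-12
# n·V = V+ with n = (0, 1, 0⊥) in light-cone components.
n = (0.0, 1.0, 0.0, 0.0)
assert abs(dot_lightcone(n, lightcone(V)) - lightcone(V)[0]) < 1e-12
```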
The gauge link $\mathcal{E}(x^{-})$ is an eikonal operator that involves a path-ordered exponential of gluon field operators along a light-like path, $$\mathcal{E}(0,x^{-},\boldsymbol{0}_{\perp})_{ba}= \mathrm{P} \, \text{exp} \left[+i g_s \int_{x^{-}}^{\infty}\mathrm{d}z^{-} A^{+}(0,z^{-},\boldsymbol{0}_{\perp}) \right]_{ba} \, ,$$ where $g_s=\sqrt{4 \pi \alpha_s}$ is the QCD coupling constant and $A^{\mu}(x)$ is the matrix-valued gluon field in the adjoint representation: $[A^{\mu}(x)]_{ac} = i f^{abc} A^{\mu}_{b}(x)$. In the light-cone gauge $A^+=A\cdot n=0$, the gauge link $\mathcal{E}(0,x^{-},\boldsymbol{0}_{\perp})$ becomes 1 and thus it does not show up in the Feynman diagrams. In fact, for the problem studied in this paper, the gauge link has no contribution even if we work in Feynman gauge. Applying SGF to fragmentation functions --------------------------------------- The one-dimensional SGF for gluon fragmenting to quarkonium $H$ is given by [@Ma:2017xno] $$\label{eq:SGF} D_{g\to H}(z,\mu_0) = \sum_n\int dr\, d_{n} (z/r,M_H/r,m_Q,\mu_0)\,r F_n^H(r) \, ,$$ where $n={{^{{2S+1}}\hspace{-0.6mm}L_{J}^{[c]}}}$ is in spectroscopic notation to denote quantum numbers of the intermediate $Q\bar{Q}$ pair, $d_{n} (z/r,M_H/r,m_Q,\mu_0)$ are SDCs to produce a $Q\bar{Q}$ pair with invariant mass $M_H/r$ and quantum number $n$, $F_n^H(r)$ are one-dimensional soft gluon distributions (SGDs) defined by four-dimensional SGDs $$\begin{aligned} F_{n}^H(r)=\int \frac{d^4P}{(2\pi)^4} \delta(r-\sqrt{P_H^2/P^2})\, F^H_{n}(P,P_H),\end{aligned}$$ and four-dimensional SGDs are defined as expectation values of bilocal operators in QCD vacuum, $$\begin{aligned} \label{eq:SGDs} \begin{split} &F^H_{n}(P,P_H)=\int d^4x\, e^{iP\cdot x}\, \langle 0| \bar\psi(0)\Gamma_{n}^\prime \mathcal{E}^\dagger(0)\psi(0) \mathcal{P}_{H(P_H)} \bar\psi(x) \Gamma_{n}\mathcal{E}(x)\psi(x) |0\rangle, \end{split}\end{aligned}$$ where $\Gamma$ and $\Gamma^\prime$ are color and angular momentum projection 
operators that define $n$, and $\mathcal{E}(x)$ are gauge links that ensure gauge invariance [@machao; @Ma:2017xno]. ![One typical diagram for the gluon fragmenting into a ${{^{3}\hspace{-0.6mm}S_{1}^{[1]}}} \ Q\bar{Q}$ pair in the light-cone gauge at LO in $\alpha_s$. The other diagrams are obtained by permutation. \[fig:FeynmanDiagram\]](FeynmanDiagram.eps){width="40.00000%"} As mentioned in the introduction, for a gluon fragmenting to a $1^{--}$ quarkonium we keep only the $n={{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}$ intermediate $Q\bar{Q}$ state in this paper; we thus suppress the subscript $n$ in the rest of this paper. Then the lowest order in the $\alpha_s$ expansion of $d (z/r,M_H/r,m_Q,\mu_0)$ is described by Feynman diagrams of a virtual gluon decaying to a $Q\bar{Q}$ pair with quantum number ${{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}$ combined with two more gluons, as shown in Fig. \[fig:FeynmanDiagram\], which can be formally defined as the lowest order in the $\alpha_s$ expansion of the following matrix element, $$\begin{aligned} \label{eq:defsdc} \begin{split} & d^{\infty}(z,M,m_Q,\mu_0)= \frac{-g_{\mu\nu}z^{D-3}}{2 \pi P_c^{+}(N_{c}^{2}-1)(D-2)} \int_{-\infty}^{+\infty}\mathrm{d}x^{-} e^{-i z P_c^{+} x^{-}} \\ & \times \langle 0 | G_{c}^{+\mu}(0) \mathcal{E}^{\dag}(0,0,\boldsymbol{0}_{\perp})_{cb} |Q\bar{Q}({{^{3}\hspace{-0.6mm}S_{1}^{[1]}}})+g+g \rangle \langle Q\bar{Q}({{^{3}\hspace{-0.6mm}S_{1}^{[1]}}})+g+g| \mathcal{E}(0,x^{-},\boldsymbol{0}_{\perp})_{ba} G_{a}^{+\nu}(0,x^{-},\boldsymbol{0}_{\perp}) | 0 \rangle \, , \end{split}\end{aligned}$$ where $M$ is the invariant mass of the $Q\bar{Q}$ pair.
In the momentum space, we have the lowest order in $\alpha_s$ expansion $$\label{eq:sdcdef0} d(z,M,m_Q,\mu_0) = \frac{N_{\mathrm{CS}}}{D-1} \int \mathrm{d} \Phi \, \left| \mathcal{M}(P, k_i, m_Q) \right|^2 \, ,$$ where $N_{\mathrm{CS}}=\frac{z^{D-2}}{(N_{c}^{2}-1)(D-2)} $, $D=4$ is the space-time dimension, $P$ is the total momentum of the $Q\bar{Q}$ pair, $k_i$ ($i=1,2$) is the momentum of the $i$th final-state gluon, and final-state phase space is defined as $$\begin{aligned} \label{eq:phase} \begin{split} \mathrm{d} \Phi &= \frac {1}{2!} \delta \left( z - \frac{P^{+}}{P_{c}^{+}} \right) (2\pi)^{D} \delta^{D} \left( P_{c} - P - k_1-k_2 \right) \frac{\mathrm{d}^{D} P_{c}}{(2\pi)^{D}} \prod_{i=1}^{2} \frac{\mathrm{d} k_{i}^{+}}{4\pi k_{i}^{+}} \frac{\mathrm{d}^{D-2} k_{i\perp}}{(2\pi)^{D-2}} \theta(k_{i}^{+}) \\ &= \frac {P^{+}}{z^{2} 2!} \delta \left( \frac{1-z}{z} P^{+} - k_{1}^{+} - k_{2}^{+}\right) \prod_{i=1}^{2} \frac{\mathrm{d} k_{i}^{+}}{4\pi k_{i}^{+}} \frac{\mathrm{d}^{D-2} k_{i\perp}}{(2\pi)^{D-2}} \theta(k_{i}^{+}) \, \end{split}\end{aligned}$$ where $2!$ is the symmetry factor for identical gluons in the final state. The matrix elements $\mathcal{M}({P},{k}_{i},m_Q)$ are defined as $$\begin{aligned} \left| \mathcal{M}({P},{k}_{i},m_Q) \right|^2=\sum_{\lambda \lambda_1 \lambda_2 \lambda_3} \left| \mathcal{M}_{\lambda \lambda_1 \lambda_2 \lambda_3}(P,k_{i},m_Q) \right|^2,\end{aligned}$$ where $\lambda$ and $\lambda_i$ ($i=1, 2, 3$) are polarizations of the $Q\bar{Q}$ pair and gluons, respectively, and $\mathcal{M}_{\lambda \lambda_1 \lambda_2 \lambda_3}(P,k_{i},m_Q)$ is the amplitude to produce a $Q\bar{Q}$ pair with momentum $P$ and quantum numbers ${{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}$. 
Summation over all polarizations of the heavy quark pair with momentum $P$ gives $$\begin{aligned} \label{eq:polarsum} I^{\alpha\beta}(P)=\sum_{\lambda=0,\pm 1} \epsilon_\lambda^\alpha& \epsilon_\lambda^{*\beta} = -g^{\alpha \beta} + \frac{P^{\alpha} P^{\beta}} {P^{2}} \, , \end{aligned}$$ and summation over all polarizations of gluons or summation over transverse polarizations of the heavy quark pair with momentum $k$ gives $$\begin{aligned} \label{eq:polarsumT} \sum_{\lambda_i=\pm 1} \epsilon_{\lambda_i}^\mu& \epsilon_{\lambda_i}^{*\nu} = -g^{\mu \nu} + \frac{k^{\mu} n^{\nu} + k^{\nu} n^{\mu}} {k^{+}} - \frac{k^2 n^{\mu} n^{\nu}} {(k^{+})^{2}} \, .\end{aligned}$$ In the above, polarizations are quantized along the vector $n$. The amplitude to produce a ${{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}$ state can be obtained by $$\begin{aligned} \label{eq:amplitude} \mathcal{M}_{\lambda \lambda_1 \lambda_2 \lambda_3}(P,k_{i},m_Q)=\int {\mathrm{d}}^2 \Omega\, \text{Tr}\left[ \Gamma_\lambda \mathcal{M}_{\lambda_1 \lambda_2 \lambda_3}(P,k_{i},q,m_Q) \right]\, ,\end{aligned}$$ where $\mathcal{M}_{\lambda_1 \lambda_2 \lambda_3}(P,k_{i},q,m_Q)$ is the amplitude to produce an open $Q\bar{Q}$ pair, with momenta $p=P/2+q$ for $Q$ and $\overline{p}=P/2-q$ for $\bar Q$, and $\Gamma_\lambda $ is used to project the $Q\bar{Q}$ pair to ${{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}$ state, with definition $$\Gamma_{\lambda} = \frac{1} {\sqrt{N_{c}}} \frac{1} {\sqrt{2 E} (E + m_Q)} (\slashed{\overline{p}} - m_Q) \frac{2 E - \slashed{P}}{4 E} \slashed{\epsilon}_\lambda \frac{2 E + \slashed{P}}{4 E} (\slashed{p} - m_Q) \, ,$$ where $\epsilon_\lambda^\mu$ are polarization vectors. 
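The polarization sum for the massive $Q\bar{Q}$ pair is easiest to verify in its rest frame, where $P=(M,0,0,0)$ and the three polarization vectors reduce to the spatial unit vectors; the following sketch checks $\sum_\lambda \epsilon_\lambda^\alpha \epsilon_\lambda^{*\beta} = -g^{\alpha\beta} + P^\alpha P^\beta/P^2$ component by component (rest frame only, illustrative mass value):

```python
# Rest-frame check of the massive polarization sum:
# with P = (M, 0, 0, 0) and ε_λ the three spatial unit vectors,
# Σ_λ ε_λ^α ε_λ^β = -g^{αβ} + P^α P^β / P².  M is an illustrative value.

M = 3.1
P = (M, 0.0, 0.0, 0.0)
g = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
eps = [(0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]  # real, so ε* = ε

P2 = M * M  # P² = M²
for a in range(4):
    for b in range(4):
        lhs = sum(e[a] * e[b] for e in eps)
        rhs = -g[a][b] + P[a] * P[b] / P2
        assert abs(lhs - rhs) < 1e-12
```

In a general frame the same identity holds by Lorentz covariance; the transverse sum in Eq.  carries the extra $n$-dependent terms because the polarizations are quantized along $n$.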
As both $p$ and $\overline{p}$ are approximated to be on mass shell [@machao; @Ma:2017xno], we have $$\begin{aligned} \label{eq:relation} P\cdot q =0, \quad \quad M^2=P^2=4E^2=4(m_Q^2-q^2).\end{aligned}$$ As a result, the four-momentum $q$ has only two degrees of freedom, which are chosen to be the two-dimensional spatial angles $\Omega$ in the rest frame of $P$. After integration over the spatial angles, the obtained $\mathcal{M}_{\lambda \lambda_1 \lambda_2 \lambda_3}(P,k_{i},m_Q)$ in Eq.  no longer depends on $q$. Note that the dominant contribution of Eq.  comes from the region where $1\gg1-r^2=1-M_H^2/M^2\approx1-4m_Q^2/M^2=-4q^2/M^2$ [@machao; @Ma:2017xno], thus we can simplify the SDCs by expanding in $q^2/M^2$. In the SGF framework, this expansion is obtained by first expressing $m_Q^2={M^2/4+q^2}$, and then fixing $M$ but expanding in $q^2$ around the origin [^2]. Because neither the phase space integration nor the polarization vectors depend on $m_Q$ or $q$, the above expansion can be achieved by a similar expansion of the amplitude in Eq. , $$\label{eq:amplitudeExp} \mathcal{M}_{\lambda \lambda_1 \lambda_2 \lambda_3}(P,k_{i},m_Q) = \mathcal{M}_{\lambda \lambda_1 \lambda_2 \lambda_3}^{(0)}(P,k_{i}) -q^2 \mathcal{M}_{\lambda \lambda_1 \lambda_2 \lambda_3}^{(2)}(P,k_{i})+O(q^4)\, ,$$ with $$\begin{aligned} \label{eq:amplitudeExp2} \begin{split} \mathcal{M}_{\lambda \lambda_1 \lambda_2 \lambda_3}^{(0)}(P,k_{i}) & = \text{Tr}\left[ \Gamma_\lambda \mathcal{M}_{\lambda_1 \lambda_2 \lambda_3}(P,k_{i},0,M/2) \right] \, , \\ \mathcal{M}_{\lambda \lambda_1 \lambda_2 \lambda_3}^{(2)}(P,k_{i}) & = \frac{I^{\mu\nu}(P)}{2(D-1)} \left\{\frac{\partial^2 }{\partial q^\mu \partial q^\nu} \text{Tr}\left[ \Gamma_\lambda \mathcal{M}_{\lambda_1 \lambda_2 \lambda_3}\left(P,k_{i},q,\sqrt{\frac{M^2}{4}+q^2}\right) \right]\Bigg|_{q=0}\right\}\, , \end{split}\end{aligned}$$ where $I^{\mu\nu}(P)$ is defined in Eq. .
With this expansion, the SDCs can be expanded as $$\begin{aligned} \label{eq:sdcExp} d(z,M,m_Q,\mu_0) =& \frac{N_{\mathrm{CS}}}{D-1} \int \mathrm{d} \Phi \, \sum_{\lambda \lambda_1 \lambda_2 \lambda_3} \left| \mathcal{M}_{\lambda \lambda_1 \lambda_2 \lambda_3}^{(0)}(P,k_{i}) \right|^2 - q^2 \left[ \mathcal{M}_{\lambda \lambda_1 \lambda_2 \lambda_3}^{(0)}(P,k_{i}) \mathcal{M}_{\lambda \lambda_1 \lambda_2 \lambda_3}^{*(2)}(P,k_{i})+ c.c.\right] + O(q^4) \notag\\ \equiv& d^{(0)}(z, M,\mu_0 ) - \frac{4 q^2}{M^2} d^{(2)}(z, M ,\mu_0) + O(q^4).\end{aligned}$$ Similarly, if we sum over only transverse polarizations of the $Q\bar{Q}$ pair by using the projection operator in Eq.  instead of that in Eq. , we can obtain transversely polarized SDCs $$\begin{aligned} \label{eq:sdcTExp} d_T(z,M,m_Q,\mu_0) = d_T^{(0)}(z, M,\mu_0 ) - \frac{4 q^2}{M^2} d_T^{(2)}(z, M ,\mu_0) + O(q^4).\end{aligned}$$ Longitudinally polarized SDCs can be obtained by subtracting the transversely polarized SDCs from the corresponding polarization-summed SDCs.
Applying NRQCD to fragmentation functions ----------------------------------------- If we instead apply NRQCD factorization, we get $$D_{g\to H}(z,\mu_0) = \sum_n d_n^O (z,2m_Q,\mu_0) {{\langle{{\mathcal O}}^{H}_n\rangle}}+ d_n^{P} (z,2m_Q,\mu_0) {{\langle{{\mathcal P}}^{H}_n\rangle}}+\cdots\, ,$$ where $d_{n}^{O,P} (z,2m_Q,\mu_0)$ are SDCs to produce a $Q\bar{Q}$ pair with invariant mass $2m_Q$ and quantum numbers $n$, and ${{\langle{{\mathcal O}}^{H}_n\rangle}}$ and ${{\langle{{\mathcal P}}^{H}_n\rangle}}$ are respectively NRQCD long-distance matrix elements (LDMEs) at first and second order in the $v^2$ expansion [@Bodwin:1994jh], which can be expressed as vacuum expectation values of four-fermion operators in the NRQCD vacuum $$\begin{aligned} {{\langle{{\mathcal O}}^{H}_n\rangle}}&= \langle 0 | \chi ^{\dag} \kappa _n \psi \mathcal{P}_{H(P)} \psi ^{\dag} \kappa'_n \chi |0 \rangle \, ,\\ {{\langle{{\mathcal P}}^{H}_n\rangle}}&= \langle 0 |\frac{1}{2}\left[ \chi ^{\dag} \kappa _n \psi \mathcal{P}_{H(P)} \psi ^{\dag} \kappa'_n (-\frac{i}{2} \overleftrightarrow{\mathbf{D}})^2 \chi + h.c. \right]|0 \rangle \, ,\end{aligned}$$ where $\psi ^{\dag}$ and $\chi$ are the two-component operators that create a heavy quark and a heavy antiquark, respectively, and $\kappa _n$ and $\kappa'_n$ are combinations of Pauli and color matrices. These LDMEs are defined in the rest frame of $H$ and are expected to be universal. If the hadron $H$ is a free $Q \bar{Q}$ pair, we have ${{\langle{{\mathcal P}}^{H}_n\rangle}}=(-q^2/m_Q^2) {{\langle{{\mathcal O}}^{H}_n\rangle}}$. As mentioned above, we only consider the $n={{{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}}$ intermediate state, and thus will drop the subscript $n$ in the following. The calculations of $d^O$ and $d^P$ in NRQCD are very similar to those of $d^{(0)}$ and $d^{(2)}$ in SGF defined in Eq. . The only difference is that, in NRQCD, one expands in $q^2$ with $m_Q$ fixed rather than $M$, which implies that the phase space also needs to be expanded.
For this purpose, we first extract the dependence on $q$ explicitly by rescaling the momenta in the delta function in Eq.  by $M$ as follows, $$\label{eq:dless} \hat{P} = \frac {P}{M} \, , \quad \hat{k}_i = \frac {k_i}{M} \, .$$ Thus the phase space in Eq.  changes to $${\mathrm{d}}\Phi = M^{4} {\mathrm{d}}\hat{\Phi} \, ,$$ where ${\mathrm{d}}\hat{\Phi}$ is the same as ${\mathrm{d}}\Phi$ except that the momenta in it have been changed to dimensionless ones, and therefore it has no dependence on $q$. If we further denote $$\hat{\mathcal{M}}_{\lambda \lambda_1 \lambda_2 \lambda_3}(\hat{P},\hat{k}_{i},m_Q) = M^2 \mathcal{M}_{\lambda \lambda_1 \lambda_2 \lambda_3}(M \hat{P}, M \hat{k}_{i},m_Q) \, ,$$ we get a relation similar to that in Eq. , $$d(z,M,m_Q,\mu_0) = \frac{N_{\mathrm{CS}}}{D-1} \int \mathrm{d} \hat{\Phi} \, \left| \hat{\mathcal{M}}(\hat{P},\hat{k}_i, m_Q) \right|^2 \,.$$ Then the expansion of the amplitude $\hat{\mathcal{M}}_{\lambda \lambda_1 \lambda_2 \lambda_3}(\hat{P},\hat{k}_{i},m_Q)$ can be carried out similarly to Eq.  and Eq. , except that we express $M^2=4(m_Q^2-q^2)$ and keep $m_Q$ fixed. Eventually, we get $$\begin{aligned} \label{eq:NRsdcExp} \begin{split} d(z,M,m_Q,\mu_0) = & \frac{N_{\mathrm{CS}} }{D-1} \int \mathrm{d} \hat\Phi\, \sum_{\lambda \lambda_1 \lambda_2 \lambda_3} \left| \mathcal{\hat{M}}_{\lambda \lambda_1 \lambda_2 \lambda_3}^{(0)}(\hat{P},\hat{k}_{i},m_Q) \right|^2 \\ & \phantom{\frac{N_{\mathrm{CS}} }{D-1} \int \mathrm{d} \hat\Phi\, \sum_{\lambda \lambda_1 \lambda_2 \lambda_3} } - q^2 \left[ \mathcal{\hat{M}}_{\lambda \lambda_1 \lambda_2 \lambda_3}^{(0)}(\hat{P},\hat{k}_{i},m_Q) \mathcal{\hat{M}}_{\lambda \lambda_1 \lambda_2 \lambda_3}^{*(2)}(\hat{P},\hat{k}_{i},m_Q)+ c.c.\right] + O(q^4) \\ \equiv& d^{O}(z, 2m_Q,\mu_0 ) -\frac{q^2}{m_Q^2} d^{P}(z, 2m_Q ,\mu_0) + O(q^4) \, . 
\end{split}\end{aligned}$$ Clearly, we have the relation $$\begin{aligned} \label{eq:relationU} d^{O}(z, M,\mu_0 )=d^{(0)}(z,M,\mu_0),\end{aligned}$$ but $d^{P}(z, M,\mu_0 )$ is different from $d^{(2)}(z,M,\mu_0)$. Again, we can obtain the transversely polarized SDCs $$\begin{aligned} \label{eq:NRsdcTExp} d_T(z,M,m_Q,\mu_0) = d_T^{O}(z, 2m_Q,\mu_0 ) -\frac{q^2}{m_Q^2} d_T^{P}(z, 2m_Q ,\mu_0) + O(q^4),\end{aligned}$$ where we also have $$\begin{aligned} \label{eq:relationT} d_T^{O}(z, M,\mu_0 )=d_T^{(0)}(z,M,\mu_0).\end{aligned}$$ Calculation of the short-distance coefficient {#sec:cal} ============================================= For the process of a gluon fragmenting into spin-triplet color-singlet S-wave quarkonium at LO in $\alpha_s$, there are two soft gluons in the final state, as shown in Fig. \[fig:FeynmanDiagram\]. We denote the “$+$” component of the first gluon as $k_1^+ = (1-z) z_1 P_c^+ $; then for the second gluon we have $k_2^+ = (1-z)(1-z_1) P_c^+ $. To simplify our notation, we will use only the dimensionless momenta defined in Eq.  but omit the superscript “ $\hat{}$ ” in the rest of this paper. According to Sec. \[sec:def\], the calculation of SDCs can be decomposed into a sum of integrals of the form $$\label{eq:iniff} \int \mathrm{d} \Phi f(z,z_{1}) \frac {(k_1 \cdot k_2)^{n_{5}}}{E_1^{n_{1}} E_2^{n_{2}} E_3^{n_{3}} E_4^{n_{4}}} \, ,$$ where $n_i>0\ (i=1,2,3,4,5)$, $f(z,z_{1})$ is a rational function of $z$ and $z_1$, and $$\label{eq:defdeno1} E_1 = k_1 \cdot P \, , \ E_2 = k_2 \cdot P \, , \ E_3 = 2 k_1 \cdot k_2 + k_1 \cdot P + k_2 \cdot P \, , \ E_4 = 1 + 2 k_1 \cdot k_2 + 2 k_1 \cdot P + 2 k_2 \cdot P \, .$$ We note that $z_1$ does not appear in the denominators, because, as pointed out above, the gauge link in the definition of FFs has no contribution in our case. Reduction to Master Integrals ----------------------------- Calculating the general integrals in Eq. 
analytically is not an easy task, and only numerical results are available in the literature [@Braaten:1993rw; @Braaten:1995cj; @Bodwin:2003wh]. To perform them analytically, we employ the IBP reduction method [@Chetyrkin:1981qh; @Smirnov:2012gma; @Smirnov:2014hma; @Lee:2013mka], which is widely used in multiloop calculations. In particular, we use the program FIRE [@Smirnov:2014hma]. Feynman integrals that can be reduced by FIRE can be generally expressed as $$\label{eq:defhloop} F(a_{1}, \ldots ,a_{n}) = \idotsint \, \frac{{\mathrm{d}}^{D} l_{1} \ldots {\mathrm{d}}^{D} l_{h}} {D_{1}^{a_{1}} \ldots D_{n}^{a_{n}}} \, ,$$ where $a_i\,(i=1, \ldots, n)$ are integers that can be either positive or negative, and the denominators $D_i \,( i=1, \ldots, n)$ are linear in the scalar products of the loop momenta $l_{i}\, (i=1,\ldots,h)$ and the external momenta. The program FIRE, by employing IBP, can reduce these complex integrals to a limited number of simpler integrals, called master integrals. Nevertheless, the integrals in Eq.  cannot be handled directly by FIRE because there are delta functions in the phase space, which becomes clearer if we rewrite the phase space as $$\label{eq:phase3} \mathrm{d} \Phi = \frac{\mathrm{d}^{D} k_1}{(2\pi)^{D}} \frac{\mathrm{d}^{D} k_2}{(2\pi)^{D}} \frac{P \cdot n}{z^{2} 2!} \delta _+ (k_1^{2}) \delta _+ (k_2^{2}) \delta \left( k_1 \cdot n + k_2 \cdot n - \frac{1-z}{z} P \cdot n \right) \, ,$$ where the subscript “$+$” on a delta function means that the energy of the momentum inside the delta function is positive. To make the delta functions handleable by FIRE, we rewrite a delta function as $$\delta (x) = \frac{i}{2 \pi} \lim_{\varepsilon \rightarrow 0} \left( \frac{1}{x + i \varepsilon} - \frac{1}{x - i \varepsilon} \right) \, ,$$ which changes the delta function into a propagator denominator. 
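As a quick cross-check of this representation (our own numerical illustration, not part of the derivation), note that the combination $\frac{i}{2\pi}\left(\frac{1}{x+i\varepsilon}-\frac{1}{x-i\varepsilon}\right)$ simplifies to the real Lorentzian $\frac{\varepsilon/\pi}{x^2+\varepsilon^2}$, which smears any smooth test function to its value at the origin as $\varepsilon\to0$; the test function and tolerance below are arbitrary choices:

```python
import math

def delta_eps(x, eps):
    # (i/2pi) [1/(x+i*eps) - 1/(x-i*eps)] simplifies to the real
    # Lorentzian (eps/pi) / (x^2 + eps^2)
    return (eps / math.pi) / (x * x + eps * eps)

def smear(f, eps, n=20001):
    # integrate f(x) * delta_eps(x) over the real line using the
    # substitution x = eps * tan(theta), theta in (-pi/2, pi/2)
    h = math.pi / n
    total = 0.0
    for k in range(n):
        theta = -math.pi / 2 + (k + 0.5) * h
        x = eps * math.tan(theta)
        jac = eps / math.cos(theta) ** 2     # dx/dtheta
        total += f(x) * delta_eps(x, eps) * jac * h
    return total

f = lambda x: math.cos(x) * math.exp(-x * x)   # smooth test function, f(0) = 1
approx = smear(f, eps=1e-3)                    # tends to f(0) as eps -> 0
```

For decreasing $\varepsilon$ the smeared value approaches $f(0)$ linearly in $\varepsilon$, consistent with the limit in the formula above.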
We can further identify $z_1=z \, k_1 \cdot n / (1-z) P \cdot n$, and choose the following notation $$\label{eq:defdeno2} E_5 = k_1^2 + i \varepsilon \, , \ E_6 = k_2^2 + i \varepsilon \, , \ E_7 = k_1 \cdot n + k_2 \cdot n - \frac{1-z}{z} P \cdot n + i \varepsilon \, , \ E_8 = k_1 \cdot n \, ,$$ then the integrals in Eq.  are cast into $$\label{eq:intfire} \int \frac{\mathrm{d}^{D} k_1}{(2\pi)^{D}} \frac{\mathrm{d}^{D} k_2}{(2\pi)^{D}} f(z) \frac{P \cdot n}{z^{2} 2!} \left ( \frac{z}{(1-z) P \cdot n} \right)^{n_{6}} \left ( \frac{i}{2 \pi} \right )^{3} \frac{(k_1 \cdot k_2)^{n_{5}} E_8^{n_{6}}}{E_1^{n_{1}} E_2^{n_{2}} E_3^{n_{3}} E_4^{n_{4}} E_5 E_6 E_7 } \, ,$$ together with 7 other kinds of integrals of similar form, except that some of the small imaginary parts of $E_5,\, E_6,\, E_7$ change from “$+i\varepsilon$” to “$-i\varepsilon$”. Since IBP reduction is independent of the small imaginary part, these 8 kinds of integrals have similar reduced results. Therefore, after reduction, we can change $E_5,\, E_6,\, E_7$ back to the corresponding delta functions, and thus we obtain the master integrals of Eq. . One important point is that any master integral with a non-positive power of $E_5,\, E_6,\, E_7$ must be canceled by master integrals reduced from the other 7 kinds of integrals. Combined with the fact that the powers of $E_5,\, E_6,\, E_7$ can always be chosen to be no larger than 1, the master integrals we obtain have the same phase space integration as that in Eq. . The denominators $E_1, \cdots, E_7$ in Eq.  are linearly dependent, which can easily be changed to a linearly independent set with the same integration structure. We further add the denominator $E_8$ to some integrals to make them complete. 
After reduction by applying FIRE [@Smirnov:2014hma], an SDC, say $d^{(0)}$, becomes $$\label{eq:sdcmi} d^{(0)} (z,M,\mu_0) = \sum_{a=1}^{13} f_{a} (z, \epsilon)I_{a} \, ,$$ where the coefficients $f_{a}$ are rational functions of $z$ that can be expanded in powers of $\epsilon$, and the master integrals $I_{a}$ can be defined as $$\label{eq:defmi} I_{a} = \int {\mathrm{d}}\Phi \, F_{a} = \frac{1}{(4 \pi)^2 z (1 -z ) 2!} \int_{0}^{1} \frac{\mathrm{d} z_{1}}{z_{1} (1 - z_{1})} \iint \frac{\mathrm{d}^{D-2} k_{1\perp}}{(2\pi)^{D-2}} \frac{\mathrm{d}^{D-2} k_{2\perp}}{(2\pi)^{D-2}} F_{a} \, ,$$ with $F_{a} (a=1,\ldots,13) $ chosen from $$\label{eq:allmi} \frac{1}{E_3} \, , \ \frac{1}{E_4} \, , \ \frac{1}{E_1 E_2} \, , \ \frac{1}{E_1 E_3} \, , \ \frac{1}{E_1 E_4} \, , \ \frac{E_2}{E_1 E_3} \, , \ \frac{E_4}{E_1 E_3} \, , \ \frac{E_2}{E_1 E_4} \, , \ \frac{E_3}{E_1 E_4} \, , \ \frac{1}{E_1 E_3^{2}} \, , \ \frac{1}{E_1^{2} E_4} \, , \ \frac{1}{E_3 E_4} \, , \ \frac{1}{E_1 E_2 E_4} \, ,$$ where $E_i (i=1,\ldots,4)$ are defined in Eq. . Calculation of Master Integrals ------------------------------- The calculation of SDCs is now reduced to the calculation of the thirteen master integrals defined in Eq. . Among them, each of the first 11 master integrals involves only one denominator that contains the cross term $k_1\cdot k_2$. In this case, the cross term can be removed by shifting $k_2$, and then we can integrate over $k_{2\perp}$, $k_{1\perp}$ and $z_1$ sequentially. For the 12th master integral, as both of its denominators depend on $k_1\cdot k_2$, we can first do a Feynman parametrization, and then integrate over $k_{2\perp}$, $k_{1\perp}$, the Feynman parameter, and $z_1$ sequentially. Although they are easy to calculate, the expressions of the first 12 master integrals are quite long, so we do not list them in this paper. The most complicated master integral is the last one, which is hard to integrate directly; we therefore take a different route to obtain its analytical result. 
In this section, we first discuss some difficulties encountered in the calculation of the first 12 master integrals, and then concentrate on calculating the last master integral. For the 4th to 10th master integrals, after integrating over $k_{2\perp}$, there is still a term proportional to $$\label{eq:kuvdiv} \int \frac {{\mathrm{d}}^{D-2} k_{1\perp}}{(2 \pi)^{D-2}} \frac{1}{(k_{1\perp}^{2}+a) (k_{1\perp}^{2}+b)^{\epsilon}} \, ,$$ where $a$ and $b$ are both nonnegative functions of $z$ and $z_1$. On the one hand, this integral is cumbersome to expand in $\epsilon$ after the integration; on the other hand, it is ultraviolet divergent and thus cannot be expanded in $\epsilon$ at the integrand level. We rewrite Eq.  as $$\int \frac {{\mathrm{d}}^{D-2} k_{1\perp}}{(2 \pi)^{D-2}} \frac{b-a}{(k_{1\perp}^{2}+a) (k_{1\perp}^{2}+b)^{1+\epsilon}} + \int \frac {{\mathrm{d}}^{D-2} k_{1\perp}}{(2 \pi)^{D-2}} \frac{1}{(k_{1\perp}^{2}+b)^{1+\epsilon}} \, ,$$ where the second term can be integrated and then expanded in $\epsilon$ easily, while the first term is ultraviolet finite and thus can be expanded in $\epsilon$ at the integrand level. For the first term, we need to expand to second order in $\epsilon$, which results in the one-dimensional integrals $$\int_{0}^{\infty} {\mathrm{d}}x \frac{b-a}{(x+a)(x+b)} = \ln b - \ln a \, ,$$ and $$\int_{0}^{\infty} {\mathrm{d}}x \frac{(b-a) \left[ \ln x + \ln (x+b) \right]} {(x+a)(x+b)} = \mathrm{Li}_2 \left( 1-\frac{a}{b} \right) - \ln a \, \ln b - \frac{1}{2} \ln^{2} a + \frac{3}{2} \ln^2 b \, .$$ Then we can integrate over $z_1$ easily. 
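Both one-dimensional integrals can be checked numerically; the sketch below (our own illustration, with the arbitrarily chosen values $a=2$, $b=3$) compares a brute-force quadrature, mapped to the unit interval via $x=t/(1-t)$, against the closed forms above, with the dilogarithm evaluated by its defining series:

```python
import math

def li2(x, terms=200):
    # dilogarithm Li_2(x) by its defining series, adequate for |x| < 1
    return sum(x**k / k**2 for k in range(1, terms + 1))

def quad01(f, n=200_000):
    # midpoint rule on (0, 1); the endpoints (where ln x is singular
    # but integrable) are never evaluated
    h = 1.0 / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

a, b = 2.0, 3.0

def integrand1(t):
    x = t / (1.0 - t)                       # maps (0,1) onto (0, infinity)
    return (b - a) / ((x + a) * (x + b)) / (1.0 - t)**2

def integrand2(t):
    x = t / (1.0 - t)
    return ((b - a) * (math.log(x) + math.log(x + b))
            / ((x + a) * (x + b)) / (1.0 - t)**2)

val1 = quad01(integrand1)                   # should approach ln b - ln a
val2 = quad01(integrand2)                   # should approach the Li_2 expression
rhs2 = (li2(1 - a / b) - math.log(a) * math.log(b)
        - 0.5 * math.log(a)**2 + 1.5 * math.log(b)**2)
```

Equivalently, by the Landen identity the right-hand side of the second integral can be written as $\ln^2 b-\ln^2 a-\mathrm{Li}_2(1-b/a)$, which agrees with the form quoted above.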
For the 11th master integral, after integrating over $k_{2\perp}$, the master integral is proportional to $$\begin{aligned} & \int_0^1 {\mathrm{d}}z_1 \int \frac {{\mathrm{d}}^{D-2} k_{1\perp}}{(2 \pi)^{D-2}} \frac{z_1^{1+\epsilon} (1-z_1)^{-\epsilon} (1+a z_1)^{-1+2\epsilon} }{(k_{1\perp}^{2}+a^2 z_1^2)^2 (k_{1\perp}^{2}+a^2 z_1^2+a z_1)^{\epsilon}} \, ,\end{aligned}$$ with $a=(1-z)/z$, which is infrared divergent when integrating over $z_1$ near $z_1=0$; thus one cannot expand in $\epsilon$ at the integrand level. Yet, we can rescale $k_{1\perp}$ by a factor of $z_1$, and we get $$\label{eq:irint} \int_0^1 {\mathrm{d}}z_1 \, z_1^{-1-2\epsilon} \int \frac {{\mathrm{d}}^{D-2} k_{1\perp}}{(2 \pi)^{D-2}} \frac{(1-z_1)^{-\epsilon} (1+a z_1)^{-1+2\epsilon} }{(k_{1\perp}^{2}+a^2)^2 (z_1 k_{1\perp}^{2}+a^2 z_1+a)^{\epsilon}}\, ,$$ which is still infrared divergent, but now every factor of the integrand other than $z_1^{-1-2\epsilon}$ can be expanded as a power series in $\epsilon$. Now let us concentrate on the last master integral $$\begin{aligned} \int {\mathrm{d}}\Phi \frac{1}{E_1 E_2 E_4} ,\end{aligned}$$ which is hard to calculate using the traditional integration method with Feynman parametrization. Yet we can calculate it by constructing and solving a differential equation [@Kotikov:1990kg; @Remiddi:1997ny; @Argeri:2007up; @Henn:2014qga]. We define $$g(z) = \int {\mathrm{d}}\Phi \frac{z^2}{E_1 E_2 E_4} = \int \frac{\mathrm{d}^{D} k}{(2\pi)^{D}} \frac{\mathrm{d}^{D} l}{(2\pi)^{D}} \left ( \frac{i}{2 \pi} \right )^{3} \frac{P \cdot n}{S} \frac{1}{E_1 E_2 E_4 E_5 E_6 E_7} + \ldots \, ,$$ where we omit 7 other similar terms. Among its denominators, only $E_7$ contains $z$. 
Taking the derivative of $g(z)$ gives $$\frac{{\mathrm{d}}g(z)}{{\mathrm{d}}z} = \int \frac{\mathrm{d}^{D} k}{(2\pi)^{D}} \frac{\mathrm{d}^{D} l}{(2\pi)^{D}} \left ( \frac{i}{2 \pi} \right )^{3} \frac{- (P \cdot n)^2}{z^2 S} \frac{1}{E_1 E_2 E_4 E_5 E_6 E_7^2 } + \ldots \, .$$ Then we can reduce the integrals again by using IBP and arrive at a differential equation for $g(z)$, $$\label{eq:dfe} \frac{{\mathrm{d}}g(z)}{{\mathrm{d}}z} = \frac{2 (z-1) z }{2 z - 1} \epsilon \, g(z) + h(z) \, ,$$ where $h(z)$ is a linear combination of the first 12 master integrals, which gives $$h(z) = \frac{ (\ln z - \ln (1-z) )^2}{128 \pi ^4 (1-2 z)} \, .$$ It is easy to see that $g(z)$ has no divergence, and thus the term proportional to $\epsilon$ in Eq.  can be safely omitted. Thus the differential equation can be solved by integrating $h(z)$ over $z$, combined with an initial value. A good choice for the initial value is at $z=1$, where one gets $g(1) = 0$ because the integral over the plus direction is suppressed. With this initial value, we eventually get $$\begin{aligned} \label{eq:mi13} \begin{autobreak} I_{13} = - \frac{1}{128 \pi ^4 z^2} \Bigg( \mathrm{Li}_3 \left( \frac{2 z-1}{z} \right) + \mathrm{Li}_3 \left( \frac{z}{z-1} \right) + \mathrm{Li}_3 \left( \frac{2 z - 1}{z-1} \right) - \mathrm{Li}_2 (z) \ln \left( \frac{1-z}{z} \right) + \mathrm{Li}_2 \left( \frac{2 z-1}{z-1} \right) \ln \left( \frac{1-z}{z} \right) + \frac{\ln^3 \left( \frac{1-z}{z} \right)}{6} - \frac{\ln z \, \ln (1-z) \, \ln \left( \frac{1-z}{z} \right)}{2} - \zeta (3) \Bigg) \end{autobreak}.\end{aligned}$$ Analytical results ------------------ Substituting the analytical results for the thirteen master integrals into Eq. , we find that all divergences cancel, and the finite result reads $$\label{eq:d0} d^{(0)} (z,M,\mu_0) = \frac{128 (N_c^2 - 4) \pi^3 \alpha_s^3}{3 N_c^2 M^3} \left( C I_{13} +\sum_{i=0}^{11} C_i \, L_i \right) \, ,$$ where $I_{13}$ is given in Eq. 
, the coefficients $C$ and $C_i(i=0,\ldots,11)$ are given in Eq.  in the Appendix, and $L_i(i=0,\ldots,11)$ are defined as $$\begin{aligned} \label{eq:d0logs} \begin{split} & L_0 = 1 \, , \ L_1 = \ln z \, , \ L_2 = \ln (1-z) \, , \ L_3 = \ln (2-z) \, , \ L_4 = \ln^2 z \, , \ L_5 = \ln^2 (1-z) \, , \ L_6 = \ln^2 (2-z) \, , \\ & L_7 = \ln z \, \ln (1-z) \, , \ L_8 = \ln z \, \ln (2-z) \, , \ L_9 = \li_2 (1-z) \, , \ L_{10} = \li_2 \left(\frac{z-1}{z-2}\right) \, , \ L_{11} = \li_2 \left(\frac{2 (z-1)}{z-2}\right) \, . \end{split}\end{aligned}$$ For the transversely polarized SDC $d_T^{(0)} (z,M,\mu_0)$, we can express it similarly to $d^{(0)} (z,M,\mu_0)$ in Eq. , but with different coefficients $C^T$ and $C^T_i(i=0,\ldots,11)$ given in Eq. . The relativistic correction SDC $d^{(2)}(z,M,\mu_0)$ and the corresponding transversely polarized SDC $d_T^{(2)}(z,M,\mu_0)$ can also be expressed in the same form as Eq. , with the corresponding coefficients given in Eq.  and Eq. . As we noted in Sec. \[sec:def\], the LO SDCs $d^O(z,2m_Q,\mu_0)$ (similarly for $d_T^O(z,2m_Q,\mu_0)$) in NRQCD factorization can be obtained by replacing $M$ in Eq.  by $2m_Q$ and keeping the other coefficients unchanged. The coefficients of the relativistic correction SDCs $d^P(z,2m_Q,\mu_0)$ and $d_T^P(z,2m_Q,\mu_0)$ are given in Eq.  and Eq. . Large z behaviour ----------------- At hadron colliders, high $p_T$ quarkonium production is most sensitive to the fragmentation functions in the large-$z$ region. 
Thus we investigate the SDCs obtained above in this region by expanding them around $z\to1$, and we get $$\begin{aligned} \label{eq:largez} \begin{autobreak} d^{(0)} (z,M,\mu_0) = \frac{4 (N_c^2 - 4) \alpha_s^3}{3 \pi N_c^2 M^3} \bigg((1-z) \big(-\ln (1-z)-3 \big) +\frac{(1-z)^2}{18} \big(36 \ln ^2(1-z) +18 \ln (1-z) +4 \pi ^2 +93\big) +O\big((1-z)^3\big) \bigg) \, , \end{autobreak}\notag\\ \begin{autobreak} d_T^{(0)} (z,M,\mu_0) = \frac{4 (N_c^2 - 4) \alpha_s^3}{3 \pi N_c^2 M^3} \bigg((1-z) \big(-\ln (1-z)-3 \big) +\frac{(1-z)^2}{9} \big(18 \ln ^2(1-z) +18 \ln (1-z) +2 \pi ^2 +51\big) +O\big((1-z)^3\big) \bigg) \, , \end{autobreak}\notag\\ \begin{autobreak} d^{(2)} (z,M,\mu_0) = \frac{4 (N_c^2 - 4) \alpha_s^3}{3 \pi N_c^2 M^3} \bigg(\frac{2}{135} \big(-15+22 \pi^2\big) +\frac{1-z}{27} \big(-63 \ln ^2(1-z) -81 \ln (1-z) -239\big) +O\big((1-z)^2\big) \bigg) \, , \end{autobreak}\notag\\ \begin{autobreak} d_T^{(2)} (z,M,\mu_0) = \frac{4 (N_c^2 - 4) \alpha_s^3}{3 \pi N_c^2 M^3} \bigg(\frac{2}{135} \big(-15+22 \pi ^2\big) +\frac{1-z}{27} \big(-63 \ln ^2(1-z) -81 \ln (1-z) -257\big) +O\big((1-z)^2\big) \bigg) \, , \end{autobreak}\notag\\ \begin{autobreak} d^P (z,2m_Q,\mu_0) = \frac{4 (N_c^2 - 4) \alpha_s^3}{3 \pi N_c^2 (2m_Q)^3} \bigg(\frac{2}{135} \big(-15+22 \pi^2\big) +\frac{1-z}{54} \big(-126 \ln ^2(1-z) -81 \ln (1-z) -235\big) +O\big((1-z)^2\big) \bigg) \, , \end{autobreak}\notag\\ \begin{autobreak} d_T^P (z,2m_Q,\mu_0) = \frac{4 (N_c^2 - 4) \alpha_s^3}{3 \pi N_c^2 (2m_Q)^3} \bigg(\frac{2}{135} \big(-15+22 \pi ^2\big) +\frac{1-z}{54} \big(-126 \ln ^2(1-z) -81 \ln (1-z) -271\big) +O\big((1-z)^2\big) \bigg) \, . \end{autobreak}\end{aligned}$$ We find that, in all cases, the polarization-summed SDC equals the corresponding transversely polarized SDC at the lowest order in the $1-z$ expansion, while there are differences at higher orders. Thus longitudinally polarized SDCs are negligible in the large-$z$ region. The physical reason is very simple. 
As the two final-state gluons are very soft when $z\to1$, heavy quark spin symmetry ensures that soft gluon emission will not change the spin of the heavy quark. Therefore, the final-state heavy quark pair has almost the same polarization as that of the fragmenting gluon, which is transversely polarized. The consequence is that, for high $p_T$ quarkonium production, contributions from gluon fragmentation into the ${{{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}}$ channel are transversely polarized, for both SGF and NRQCD factorization. Another observation from Eq.  is that, in the large-$z$ region, the relativistic correction terms are much larger than the corresponding lowest-order terms. This is because the nonrelativistic expansion enhances the power of the heavy-quark propagator denominators, which vanish as $z\to1$. In fact, there are even infrared divergences if one expands to $O(v^4)$ terms [@Bodwin:2012xc], and the divergences need to be removed by the color-octet mechanism. Based on this, it makes no sense to compare the convergence of the velocity expansion between SGF and NRQCD factorization for the current problem. ![Polarization-summed SDCs as functions of $z$. The solid curve corresponds to the lowest order in the $v^2$ expansion in either SGF (with superscript “$(0)$”) or NRQCD (with superscript “$O$”), based on the relation in Eq. . The dashed curve corresponds to the order-$v^2$ expansion in SGF. The dash-dotted curve corresponds to the order-$v^2$ expansion in NRQCD. []{data-label="fig:cpall"}](compair-LSN.eps){width="95.00000%"} \[fig:cpallT\] ![Transversely polarized SDCs as functions of $z$. The meaning of each curve is similar to that in Fig. \[fig:cpall\].[]{data-label="fig:cpallT"}](compair-LSNT.eps){width="95.00000%"} ![SDCs at the lowest order in the $v^2$ expansion in either SGF (with superscript “$(0)$”) or NRQCD (with superscript “$O$”), based on the relations in Eqs.  and . 
The solid curve represents polarization-summed SDCs, the dashed curve represents transversely polarized SDCs, and the dash-dotted curve represents longitudinally polarized SDCs. \[fig:cpLOTL\]](compair-LTL.eps){width="95.00000%"} ![SDCs at the order-$v^2$ expansion in the SGF framework. The meaning of each curve is similar to that in Fig. \[fig:cpLOTL\]. \[fig:cpSGFTL\]](compair-STL.eps){width="95.00000%"} ![SDCs at the order-$v^2$ expansion in the NRQCD framework. The meaning of each curve is similar to that in Fig. \[fig:cpLOTL\]. \[fig:cpNRQCDTL\]](compair-NTL.eps){width="95.00000%"} Numerical results and discussion {#sec:summary} ================================ We plot our polarization-summed and transversely polarized SDCs in Fig. \[fig:cpall\] and Fig. \[fig:cpallT\], respectively. We find that our $d^O(z,2m_Q,\mu_0)$ is compatible with the numerical results in Refs. [@Braaten:1993rw; @Braaten:1995cj; @Bodwin:2003wh], and $d^P(z,2m_Q,\mu_0)$ is compatible with the numerical result in Ref. [@Bodwin:2003wh]. Our polarized SDC $d_T^O(z,2m_Q,\mu_0)$ does not seem to be compatible with the result extracted from the physical cross section in Ref. [@Qi:2007sf]. The other results calculated in this paper are new. In Fig. \[fig:cpLOTL\], Fig. \[fig:cpSGFTL\] and Fig. \[fig:cpNRQCDTL\], we compare polarization-summed, transversely polarized, and longitudinally polarized SDCs for each case. As expected, polarization-summed SDCs approach transversely polarized SDCs as $z\to1$. -- ----------- ----- ------- ------- ----- ------- ------- ----- ------- ------- $F$ $c_1$ $c_2$ $F$ $c_1$ $c_2$ $F$ $c_1$ $c_2$ Sum 1 9.07 1 13.9 1 18.4 Transverse 0.679 8.42 0.724 13.2 0.759 17.7 Sum 1 7.58 1 12.4 1 16.9 Transverse 0.679 7.41 0.724 12.1 0.759 16.5 -- ----------- ----- ------- ------- ----- ------- ------- ----- ------- ------- : Coefficients for estimating the relative importance of each part of the FFs calculated in either the SGF or the NRQCD framework. 
\[table:FFpro\] To estimate the relative contribution of each term to the cross section, we integrate the FFs calculated in this paper against a test function, $$\begin{aligned} \label{eq:numff} \begin{split} \int_0^1 {\mathrm{d}}z \, z^{n} \, D^{\textrm{SGF}}_{g\to H} (z) & = F \cdot \lambda^3 \frac{\alpha_s^3}{m_Q^3} {\langle{{\mathcal O}}^{H}({{{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}})\rangle} (c_1+c_2 \lambda^2 v^2+O(v^4) ) \, , \\ \int_0^1 {\mathrm{d}}z \, z^{n} \, D^{\textrm{NRQCD}}_{g\to H} (z) & = F \cdot \frac{\alpha_s^3}{m_Q^3} {\langle{{\mathcal O}}^{H}({{{^{3}\hspace{-0.6mm}S_{1}^{[1]}}}})\rangle} (c_1+c_2 v^2+O(v^4) ) \, , \end{split}\end{aligned}$$ where we denote $\lambda = m_Q/E$ and $v^2=E^2/m_Q^2-1$. The factors $F$, $c_1$ and $c_2$ depend on $n$, the polarization, and the factorization method. For $n=2, 4, 6$, the corresponding factors are shown in Table \[table:FFpro\]. With larger $n$, the integration in Eq.  probes larger $z$; we then find that $c_2/c_1$ also becomes larger, which is consistent with our observation of the large-$z$ behaviour. We thank Haoyu Liu, Ce Meng, Chenyu Wang, Yujie Zhang and Huaxing Zhu for many useful communications and discussions. The work is supported in part by the National Natural Science Foundation of China (Grants No. 11475005 and No. 11075002), and the National Key Basic Research Program of China (No. 2015CB856700). [^1]: When $p_T$ is not large enough, the next-to-leading power (NLP) contribution will also be important. For quarkonium production, the NLP contribution can also be factorized in terms of a perturbative hard part convolved with double parton fragmentation functions [@Kang:2011mg; @Fleming:2012wy; @Kang:2014tta], which will not be discussed in this paper. [^2]: It needs to be pointed out that $q$ in the amplitude is not the same as that, say $q^\prime$, in the complex conjugate of the amplitude, but $q^2=q^{\prime2}$.
--- abstract: | Let $E$ and $E'$ be elliptic curves over ${{\mathbb C}}$, with Tate parametrizations $p:{{\mathbb C}}^*\to E$, $p':{{\mathbb C}}^*\to E'$. We have the map $p*p':{{\mathbb C}}^*\otimes{{\mathbb C}}^*\to F^2 {{\rm CH}}^2(E\times E')$ sending $u\otimes v$ to the class of the zero cycle $(x,y)-(x,0)-(0,y)+(0,0)$, where $x=p(u)$, $y=p'(v)$. We show that, for general $u\in{{\mathbb C}}^*$, $p*p'(u\otimes(1-u))$ is not zero in ${{\rm CH}}^2$. We also show that the cycle $p*p'(u\otimes(1-u))$ is not detectable by a certain class of cohomology theories, including the cohomology of the analytic motivic complex involving the dilogarithm function defined by S. Bloch in [@Bl]. This is in contrast to its étale version defined by S. Lichtenbaum [@Li], which contains the Chow group.  \ address: | Universität GH Essen\ FB6 Mathematik und Informatik\ 45117 Essen\ Germany\ Department of Mathematics\ Northeastern University\ Boston, MA 02115\ USA author: - Hélène Esnault - Marc Levine title: The Steinberg Curve --- [^1] Tate curves and line bundles ============================ For a scheme $X$ over ${{\mathbb C}}$, we let $X_{{\rm an}}$ denote the set of ${{\mathbb C}}$-points with the classical topology. We let ${{\mathcal O}}_{X_{{\rm an}}}$ denote the sheaf of holomorphic functions on $X_{{\rm an}}$. We begin by describing a construction of the universal analytic Tate curve over ${{\mathbb C}}$. 
We first form the analytic manifold $\hat{{\mathcal C}}^*$ as the quotient of the disjoint union $\sqcup_{i=-\infty}^\infty U_i$, with each $U_i={{\mathbb C}}^2$, by the equivalence relation $$(x,y)\in U_i\setminus\{Y=0\} \sim (\frac{1}{y},xy^2)\in U_{i+1}\setminus\{X=0\}.$$ The function $\tilde\pi(x,y)=xy$ on $\hat{{\mathcal C}}^*$ is globally defined. Letting $D\subset{{\mathbb C}}$ be the disk $\{|z|<1\}$, we define ${{\mathcal C}}^*=\tilde\pi^{-1}(D)$, so $\tilde\pi$ restricts to the analytic map $\pi:{{\mathcal C}}^*\to D$. We let $\tilde 0:D\to{{\mathcal C}}^*$ be the section $z\mapsto (z,1)\in U_0$. Let $D^*\subset D$ be the punctured disk $z\ne0$. Since the map $(x,y)\mapsto(\frac{1}{y},xy^2)$ is an automorphism of $({{\mathbb C}}^*)^2$, the open submanifold $\pi^{-1}(D^*)$ of ${{\mathcal C}}^*$ is isomorphic to $({{\mathbb C}}^*)^2$, and the restriction of the map $\pi$ is just the map $(x,y)\mapsto xy$. Thus, the projection $p_2:({{\mathbb C}}^*)^2\to{{\mathbb C}}^*$ gives an isomorphism of the fiber ${{\mathcal C}}^*_t:=\pi^{-1}(t)$ with ${{\mathbb C}}^*$, for $t\in D^*$. The fiber $\pi^{-1}(0)$, on the other hand, is an infinite union of projective lines. Indeed, define the map $f_i:{{\mathbb C}}{{\mathbb P}}^1\to {{\mathcal C}}^*_0$ by sending $(a:1)\in {{\mathbb C}}{{\mathbb P}}^1\setminus\infty$ to $(0,a)\in U_i$, and $\infty=(1:0)$ to $(0,0)\in U_{i+1}$, and let $C_i=f_i({{\mathbb C}}{{\mathbb P}}^1)$. Then $\pi^{-1}(0)=\cup_{i=-\infty}^\infty C_i$, with $\infty\in C_i$ joined with $0\in C_{i+1}$. Note in particular that the value $\tilde{0}(0)$ of the zero section avoids the singularities of $\pi^{-1}(0)$. Define the automorphism $\phi$ of ${{\mathcal C}}^*$ over $D$ by sending $(x,y)\in U_i$ to $(x,y)\in U_{i-1}$. This gives the action of ${{\mathbb Z}}$ on ${{\mathcal C}}^*$, with $n$ acting by $\phi^n$. 
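The chart gluing and the ${{\mathbb Z}}$-action can be sanity-checked mechanically. The snippet below (our own illustration, not part of the construction) verifies on a sample point that the transition map preserves the fibration $\tilde\pi(x,y)=xy$, that it is invertible, and that relabelling $U_{i-1}\to U_i$ followed by the chart identification multiplies the fiber coordinate $y$ by $t=xy$, matching the statement below that $\phi$ restricts to $z\mapsto tz$ on a fiber over $t\ne0$:

```python
# transition map U_i \ {y=0} -> U_{i+1} \ {x=0} from the gluing relation
def up(x, y):
    return (1.0 / y, x * y * y)

# its inverse, U_{i+1} \ {x=0} -> U_i \ {y=0}
def down(xp, yp):
    return (yp * xp * xp, 1.0 / xp)

x, y = 0.3, 0.7
t = x * y                       # value of the fibration pi(x, y) = x*y

xp, yp = up(x, y)
pi_preserved = abs(xp * yp - t) < 1e-12          # pi is chart-independent
invertible = (abs(down(xp, yp)[0] - x) < 1e-12
              and abs(down(xp, yp)[1] - y) < 1e-12)
# the new y-coordinate is x*y^2 = t*y: the induced fiber action is w -> t*w
action_is_mult_by_t = abs(yp - t * y) < 1e-12
```

The last check is exactly the computation behind the identification of the fiber automorphism with multiplication by $t$.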
It is easy to see that this action is free and proper, so the quotient space ${{\mathcal E}}:={{\mathcal C}}^*/{{\mathbb Z}}$ exists as a bundle $\pi:{{\mathcal E}}\to D$. The section $\tilde 0:D\to{{\mathcal C}}^*$ induces the section $0: D\to{{\mathcal E}}$. Take $t\in D^*$. Identifying ${{\mathcal C}}^*_t$ with ${{\mathbb C}}^*$ as above, we see that $\phi$ restricts to the automorphism $z\mapsto tz$. Thus, the fiber ${{\mathcal E}}_t:=\pi^{-1}(t)$ for $t\in D^*$ is the [*Tate elliptic curve*]{} ${{\mathbb C}}^*/t^{{\mathbb Z}}$, with identity $0(t)$. On ${{\mathcal C}}^*_0$, however, $\phi$ is the union of the “identity" isomorphisms $C_i\to C_{i-1}$. Thus $\phi(\infty\in C_i)=0\in C_i$, so the restriction of ${{\mathcal C}}^*_0\to {{\mathcal E}}_0$ to $C_0$ identifies ${{\mathcal E}}_0$ with the nodal curve ${{\mathbb C}}{{\mathbb P}}^1/0\sim \infty$. We let $*\in {{\mathcal E}}_0$ denote the singular point. Then $\tilde{0}(0) \in {{\mathcal E}}_0 \setminus *$. The map $(t,w)\in D\times{{\mathbb C}}^*\mapsto (\frac{t}{w},w)\in U_0$ gives an isomorphism $\psi:D\times{{\mathbb C}}^*\to U_0\setminus\{Y=0\}$ over $D$. The composition $$D\times{{\mathbb C}}^*\to U_0\setminus\{Y=0\}\subset{{\mathcal C}}^*\xrightarrow{q}{{\mathcal E}}$$ defines the map $p:D\times{{\mathbb C}}^*\to {{\mathcal E}}$ over $D$, with image ${{\mathcal E}}\setminus\{*\}$. Take $u\in{{\mathbb C}}^*$. We have the local system on ${{\mathcal E}}$ $${{\mathcal L}}_u:= {{\mathcal C}}^*\times{{\mathbb C}}/(z,\lambda)\sim(\phi(z),u\lambda)\to{{\mathcal E}},$$ and the associated holomorphic line bundle ${{\mathcal L}}^{{\rm an}}_u$ on ${{\mathcal E}}$. Let $E_t$ be the algebraic elliptic curve associated to the analytic variety ${{\mathcal E}}_t$, let $L_u(t)$ and $L_u^{{\rm an}}(t)$ denote the restriction of ${{\mathcal L}}_u$ and ${{\mathcal L}}_u^{{\rm an}}$ to ${{\mathcal E}}_t$, and let $L_u^{{\rm alg}}(t)$ be the algebraic line bundle on $E_t$ corresponding to $L_u^{{\rm an}}(t)$ via [@Se]. 
The restriction of $p$ to $t\times{{\mathbb C}}^*$ defines the map $p_t:{{\mathbb C}}^*\to E_{t{{\rm an}}}$. For $t\ne0$, $p_t$ is a covering space of $E_{t{{\rm an}}}$. The map $p_0:{{\mathbb C}}^*\to E_{0{{\rm an}}}$ is the analytic map associated to the algebraic open immersion $${{\mathbb P}}^1\setminus\{0,\infty\}\xrightarrow{j}{{\mathbb P}}^1\to {{\mathbb P}}^1/0\sim\infty=E_0.$$ If $E$ is an elliptic curve over ${{\mathbb C}}$, then $E_{{\rm an}}\cong{{\mathbb C}}/\Lambda$, where $\Lambda\subset{{\mathbb C}}$ is a lattice spanned by $1$ and some $\tau$ in the upper half plane. Taking $t=e^{2\pi i\tau}$ gives the isomorphism $E_{{\rm an}}\cong{{\mathcal E}}_t$, so each elliptic curve over ${{\mathbb C}}$ occurs as an $E_t$ for some (in fact for infinitely many) $t\in D^*$. Sending $u\in{{\mathbb C}}^*$ to the isomorphism class of $L_u^{{\rm alg}}(t)$ defines a homomorphism $\tilde p_t:{{\mathbb C}}^*\to{{\rm Pic}}(E_t)$. We denote the identity $0(t)\in E_t$ simply by 0 if $t$ is given. \[C1Comp\] For all $t\in D$, $c_1(L_u^{{\rm alg}}(t))=(p_t(u))-(0)$. We first handle the case $t\ne0$. Let $q:{{\mathbb C}}\to E:=E_t$ be the map $q(z)=p_t(e^{2\pi iz})$, let $\tau\in {{\mathbb C}}$ be an element with $e^{2\pi i\tau}=t$, and let $\Lambda\subset{{\mathbb C}}$ be the lattice generated by $1$ and $\tau$. The map $q$ identifies $E$ with ${{\mathbb C}}/\Lambda$, and $L_u(t)$ with the local system defined by the homomorphism $\rho:\Lambda\to{{\mathbb C}}^*$, $\rho(a+b\tau)=u^b$. There is a unique cocycle $\theta$ in $Z^1(\Lambda,H^0({{\mathbb C}},{{\mathcal O}}^*_{{{\mathbb C}}_{{\rm an}}}))$ with $\theta(1)=1$, $\theta(\tau)=e^{-2\pi iz}$; let $L$ be the corresponding holomorphic line bundle on $E$. Computing $c_1^{\rm top}(L)\in H^2(E,{{\mathbb Z}})$ by using the exponential sequence, we find that $\deg(L)=1$. 
By Riemann-Roch, we have $H^0(E,L)={{\mathbb C}}$; let $\Theta(z)$ be the corresponding global holomorphic function on ${{\mathbb C}}$, i.e., $$\Theta(z+1)=\Theta(z),\ \Theta(z+\tau)=e^{-2\pi iz}\Theta(z),$$ and the divisor of $\Theta$ on $E$ is $(x)$, with $L\cong{{\mathcal O}}_E(x)$. Take $v,w\in{{\mathbb C}}$ with $u=e^{2\pi iv}$ and $q(w)=x$. Let $f(z)=\frac{\Theta(z+w-v)}{\Theta(z+w)}$. Then $$f(z+1)=f(z),\ f(z+\tau)=uf(z),$$ and ${{\rm Div}}(f)=(p(u))-(0)$. Thus, multiplication by $f$ defines an isomorphism $$\times f:{{\mathcal O}}_{E_{{\rm an}}}((p(u))-(0))\to L_u^{{\rm an}}.$$ The proof for $E_0={{\mathbb P}}^1/0\sim\infty$ is essentially the same, where we replace $\frac{\Theta(z+w-v)}{\Theta(z+w)}$ with the rational function $\frac{X-u}{X-1}$. Thus, the image of $\tilde p_t$ in ${{\rm Pic}}(E_t)$ is ${{\rm Pic}}^0(E_t)$. After identifying the smooth locus $E_t^0$ of $E_t$ with ${{\rm Pic}}^0(E_t)$ by sending $x\in E_t^0$ to the class of the invertible sheaf ${{\mathcal O}}_{E_t}((x)-(0))$, we have $\tilde p_t=p_t$. 
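A function with the two functional equations of $\Theta$ can be written down explicitly as $\Theta(z)=\sum_{n\in{{\mathbb Z}}}t^{n(n-1)/2}e^{2\pi inz}$ with $t=e^{2\pi i\tau}$, which converges for $\tau$ in the upper half plane. The following sketch (our own numerical illustration; the sample values of $\tau$, $z$, $w$, $v$ are arbitrary) checks the periodicity, the quasi-periodicity, and the multiplier property $f(z+\tau)=uf(z)$ used in the proof:

```python
import cmath

def theta(z, tau, N=30):
    # Theta(z) = sum_n t^{n(n-1)/2} e^{2 pi i n z},  t = e^{2 pi i tau};
    # n(n-1)/2 >= 0 for every integer n, so |t| < 1 makes the series converge
    t = cmath.exp(2j * cmath.pi * tau)
    return sum(t ** (n * (n - 1) // 2) * cmath.exp(2j * cmath.pi * n * z)
               for n in range(-N, N + 1))

tau = 0.1 + 1.0j                       # Im(tau) > 0, so |t| = e^{-2 pi} < 1
z, w, v = 0.17 + 0.05j, 0.30 + 0.20j, 0.11
u = cmath.exp(2j * cmath.pi * v)

def f(s):
    # f(z) = Theta(z + w - v) / Theta(z + w), as in the proof
    return theta(s + w - v, tau) / theta(s + w, tau)

period_ok = abs(theta(z + 1, tau) - theta(z, tau)) < 1e-9
quasi_ok = abs(theta(z + tau, tau)
               - cmath.exp(-2j * cmath.pi * z) * theta(z, tau)) < 1e-9
multiplier_ok = abs(f(z + tau) - u * f(z)) < 1e-9
```

The multiplier property follows from the quasi-periodicity of the numerator and denominator: the factors $e^{-2\pi i(z+w-v)}$ and $e^{-2\pi i(z+w)}$ divide out to leave exactly $e^{2\pi iv}=u$.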
Choosing an isomorphism $E\cong E_t$, $E'\cong E_{t'}$, we have the covering spaces $p:{{\mathbb C}}^*\to E_{{\rm an}}$, $p':{{\mathbb C}}^*\to E'_{{\rm an}}$, and the map $$\begin{gathered} p*p':{{\mathbb C}}^*\otimes{{\mathbb C}}^* \to F^2{{\rm CH}}_0(E\times E')\label{BasicSurjDef1}\\ u\otimes v \mapsto p(u)*p'(v):=\notag \\ (p(u),p'(v))-(p(u),0)-(0,p'(v))+(0,0).\notag\end{gathered}$$ By the theorem of the cube [@M], the map $p*p'$ is a group homomorphism, and thus is surjective. In case one or both of $E$, $E'$ is the singular curve $E_0$, we will need to use the theory of zero-cycles mod rational equivalence defined in [@LevWeib]. If $X$ is a reduced, quasi-projective variety over a field $k$ with singular locus $X_{{\rm sing}}$, the group ${{\rm CH}}_0(X)$ (denoted ${{\rm CH}}_0(X, X_{{\rm sing}})$ in [@LevWeib]) is defined as the quotient of the free abelian group on the regular closed points of $X$, modulo the subgroup generated by zero-cycles of the form ${{\rm Div}}f$, where $f$ is a rational function on a dimension one closed subscheme $D$ of $X$ such that 1. No irreducible component of $D$ is contained in $X_{{\rm sing}}$. 2. In a neighborhood of each point of $D\cap X_{{\rm sing}}$, the subscheme $D$ is a complete intersection. 3. $f$ is in the subgroup ${{\mathcal O}}_{D,D\cap X_{{\rm sing}}}^*$ of $k(D)^*$. It follows in particular from these conditions that ${{\rm Div}}f$ is a sum of regular points of $X$. For $X$ a reduced curve, sending a regular closed point $x\in X$ to the invertible sheaf ${{\mathcal O}}_X(x)$ extends to give an isomorphism ${{\rm CH}}_0(X)\cong {{\rm Pic}}(X)$.
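Condition (3) is the crucial one in the case of the nodal curve; the following worked example (our illustration, consistent with Lemma \[C1Comp\] and with ${{\rm Pic}}^0(E_0)\cong{{\mathbb C}}^*$) shows how it distinguishes trivial from non-trivial cycles on $E_0$:

```latex
% On E_0 = P^1/(0 ~ infinity), with node *, the local ring at the node is
%   O_{E_0,*} = { g in O_{P^1,{0,infinity}} : g(0) = g(infinity) },
% so a rational function on the normalization P^1 gives a relation in
% CH_0(E_0) only if it is a unit at the node, i.e. takes the same finite,
% nonzero value on both branches.
%
% f(X) = (X-u)/(X-1):  f(0) = u, f(infinity) = 1, so f is NOT a unit at the
% node for u != 1; the cycle (p_0(u)) - (0) survives, in accordance with
% Pic^0(E_0) = C^*.
%
% g(X) = (X-u)(X-u^{-1})/(X-1)^2:  g(0) = g(infinity) = 1, so g IS a unit at
% the node and yields (recall 0 = p_0(1)) the relation
(p_0(u)) + (p_0(u^{-1})) - 2\,(0) \;\sim\; 0
   \quad\text{in } {{\rm CH}}_0(E_0),
% i.e. the classes of u and u^{-1} in Pic^0(E_0) = C^* are mutually inverse,
% as predicted by L_u^{alg}(0) \otimes L_{u^{-1}}^{alg}(0) \cong O_{E_0}.
```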
We extend the definition of $F^i{{\rm CH}}_0$ to $E\times E'$ with either $E=E_0$ or $E'=E_0$ or $E=E'=E_0$, by defining $F^1{{\rm CH}}_0(E\times E')$ as the subgroup of ${{\rm CH}}_0(E\times E')$ generated by the differences $[x]-[y]$, and $F^2{{\rm CH}}_0(E\times E')$ the subgroup generated by expressions $[(x,y)]-[(x,0)]-[(0,y)]+[(0,0)]$, where $x$ is a smooth point of $E$ and $y$ a smooth point of $E'$. The surjection $p*p':{{\mathbb C}}^*\otimes{{\mathbb C}}^*\to F^2{{\rm CH}}_0(E\times E')$ is then defined by the same formula as \[BasicSurjDef1\]. Take $E=E'=E_0$. Then $p(u)*p(1-u)=0$ in ${{\rm CH}}_0(E_0\times E_0)$ for all $u\in {{\mathbb C}}\setminus\{0,1\}$. Let $X$ be a quasi-projective surface over a field $k$. By [@Lev], there is an isomorphism $\phi:H^2(X,{{\mathcal K}}_2)\to {{\rm CH}}_0(X)$. The product ${{\mathcal O}}_X^*\otimes{{\mathcal O}}_X^*\to{{\mathcal K}}_2$ gives the cup product $$H^1(X,{{\mathcal O}}_X^*)\otimes H^1(X,{{\mathcal O}}_X^*)\xrightarrow{\cup} H^2(X,{{\mathcal K}}_2).$$ In addition, let $D$, $D'$ be Cartier divisors which intersect properly on $X$, and suppose that ${{\rm supp}\,}D\cap{{\rm supp}\,}D'\cap X_{{\rm sing}}={\emptyset}$. Then $$\label{ProdComp} \phi({{\mathcal O}}_X(D)\cup{{\mathcal O}}_X(D'))=[D\cdot D'],$$ where $\cdot$ is the intersection product and $[-]$ denotes the class in ${{\rm CH}}_0$. Since $L_u^{{\rm alg}}={{\mathcal O}}_{E_0}((p(u))-(0))$, \[ProdComp\] implies $$p(u)*p(1-u)=\phi(p_1^*L_u^{{\rm alg}}\cup p_2^*L_{1-u}^{{\rm alg}}),$$ so it suffices to show that $p_1^*L_u^{{\rm alg}}\cup p_2^*L_{1-u}^{{\rm alg}}=0$ in $H^2(E_0\times E_0,{{\mathcal K}}_2)$. Write $X$ for $E_0\times E_0$. Let $\bar {{\mathcal K}}_2$ be the image of ${{\mathcal K}}_2$ in the constant sheaf $K_2({{\mathbb C}}(X))$. By Gersten’s conjecture, the surjection $\pi:{{\mathcal K}}_2\to\bar{{\mathcal K}}_2$ is an isomorphism at each regular point of $X$, hence $\pi$ induces an isomorphism on $H^2$.
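The last step deserves a word of justification; here is a routine sketch (our elaboration), using only that the kernel of $\pi$ is supported on the one-dimensional set $X_{{\rm sing}}$ together with Grothendieck's vanishing theorem:

```latex
% Set S = ker(pi), so that there is a short exact sequence of Zariski sheaves
0 \to S \to {{\mathcal K}}_2 \xrightarrow{\;\pi\;} \bar{{\mathcal K}}_2 \to 0
% with S supported on X_sing = (E_0 x *) u (* x E_0), a closed subset of
% dimension one. By Grothendieck's vanishing theorem,
%   H^i(X, S) = H^i(X_sing, S|_{X_sing}) = 0   for i >= 2,
% so in the long exact cohomology sequence
\cdots \to H^2(X,S) \to H^2(X,{{\mathcal K}}_2)
       \xrightarrow{\;\pi_*\;} H^2(X,\bar{{\mathcal K}}_2) \to H^3(X,S) \to \cdots
% the outer terms vanish, giving the asserted isomorphism
H^2(X,{{\mathcal K}}_2) \;\cong\; H^2(X,\bar{{\mathcal K}}_2).
```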
Let $q:{{\mathbb P}}^1\to E_0$ be the normalization, giving the normalization $q\times q:{{\mathbb P}}^1\times{{\mathbb P}}^1\to X$. Let $i:*\to E_0$ be the inclusion of the singular point. We have the exact sequence of sheaves on $E_0$ $$\label{K1Comp} q_*{{\mathcal K}}_1\xrightarrow{\beta} i_*K_1({{\mathbb C}})\to0$$ and the exact sequence of sheaves on $X$: $$\label{K2Comp} (q\times q)_*{{\mathcal K}}_2\xrightarrow{\alpha} (i\times q)_*{{\mathcal K}}_2\oplus(q\times i)_*{{\mathcal K}}_2 \to (i\times i)_* K_2({{\mathbb C}})\to 0,$$ with augmentations $\epsilon_1:{{\mathcal K}}_1\to\eqref{K1Comp}$, $\epsilon_2:\bar{{\mathcal K}}_2\to\eqref{K2Comp}$. The various cup products in $K$-theory give the map of complexes $$\label{CupProd} p_1^*\eqref{K1Comp}\otimes p_2^*\eqref{K1Comp}\to \eqref{K2Comp}$$ over the cup product $$\label{CupProd2} p_1^*{{\mathcal K}}_1\otimes p_2^*{{\mathcal K}}_1\to \bar{{\mathcal K}}_2.$$ The augmentation $\epsilon_1:{{\mathcal K}}_1\to \ker\beta$ is an isomorphism. The augmentation $\epsilon_2:\bar{{\mathcal K}}_2\to\ker\alpha$ is an injection, and the cokernel is supported on $*\times *$, so $\epsilon_2:\bar{{\mathcal K}}_2\to\ker\alpha$ induces an isomorphism on $H^2$. Thus, the complexes \[K1Comp\] and \[K2Comp\] give rise to maps $$\begin{gathered} \delta_2:K_2({{\mathbb C}})\to H^2(X,\ker\alpha)=H^2(X,\bar{{\mathcal K}}_2)=H^2(X,{{\mathcal K}}_2)\\ \delta_1:{{\mathbb C}}^*=K_1({{\mathbb C}})\to H^1(E_0,{{\mathcal K}}_1).\end{gathered}$$ The compatibility of \[CupProd\] with \[CupProd2\] yields the commutativity of the diagram $$\xymatrix{ {{\mathbb C}}^*\otimes{{\mathbb C}}^*\ar[r]^{\cup}\ar[d]_{\delta_1\otimes\delta_1}&K_2({{\mathbb C}})\ar[d]^ {\delta_2}\\ H^1(E_0,{{\mathcal K}}_1)\otimes H^1(E_0,{{\mathcal K}}_1)\ar[r]_-{p_1^*\cup p_2^*}& H^2(X,{{\mathcal K}}_2).
}$$ Since $L_v^{{\rm alg}}=\delta_1(v)$ for each $v\in{{\mathbb C}}^*$, we have $$p_1^*L_u^{{\rm alg}}\cup p_2^*L_{1-u}^{{\rm alg}}=\delta_2(\{u,1-u\})=0.$$ The main point of this section is that the Steinberg relation is [*not*]{} satisfied in ${{\rm CH}}_0(E\times E')$ except in the case $E=E'=E_0$. We first require the following lemma: \[NonAlgLem\] Let $s:{{\mathbb C}}\setminus\{0,1\}\to E\times E'$ be the analytic map $s(u)=(p(u),p'(1-u))$. Then $s({{\mathbb C}}\setminus\{0,1\})$ is not contained in any algebraic curve on $E\times E'$, except in case $E=E'=E_0$. We first consider the case in which both $E$ and $E'$ are smooth elliptic curves, $E=E_t$, $E'=E_{t'}$, where $t$ and $t'$ are in ${{\mathbb C}}^*$ and $|t|<1$, $|t'|<1$. We have the maps $$p:{{\mathbb C}}^*\to E,\ p':{{\mathbb C}}^*\to E',$$ which are group homomorphisms with $\ker p=t^{{\mathbb Z}}$, $\ker p'= t^{\prime{{\mathbb Z}}}$. Suppose that $s({{\mathbb C}}\setminus\{0,1\})$ is contained in an algebraic curve $D\subset E\times E'$. For each $x\in E$, $(x\times E')\cap D$ is a finite set (possibly empty), hence, for each $u\in{{\mathbb C}}\setminus\{0,1\}$, the set of points of ${{\mathbb C}}^*\times{{\mathbb C}}^*$ of the form $(t^nu,1-t^nu)$ has finite image in $E\times E'$. Thus, for each $u$, there are integers $n$, $m$ and $p$, depending on $u$, such that $n\neq m$ and $$\label{FinEq} 1-t^mu=t^{\prime p}(1-t^nu).$$ Since there are uncountably many $u$, there is a single choice of $n$, $m$ and $p$ for which \[FinEq\] holds for uncountably many $u$. But then $$\label{FinEq2} (t^{\prime p}t^n-t^m)u=1-t^{\prime p}.$$ If $t^{\prime p}t^n-t^m=0$, then \[FinEq2\] gives $t^{\prime p}=1$; for $p\neq0$ this forces $|t'|=1$, contradicting the condition $|t'|<1$, while for $p=0$ it gives $t^n=t^m$ and hence $n=m$, again a contradiction. If $t^{\prime p}t^n-t^m\neq0$, then we can solve for $u$, so \[FinEq\] only holds for this single $u$, a contradiction. If say $E'=E_0$, then $p':{{\mathbb C}}^*\to E'$ is injective, and we have the infinite set of points $p'(1-t^nu)$ in the image of $s$, all lying over the single point $p(u)$, contradicting the finiteness of $(p(u)\times E')\cap D$.
\[thm2.3\] Let $E=E_t$, $E'=E_{t'}$, with at least one of $E$, $E'$ non-singular. Then, for all $u$ outside a countable subset of ${{\mathbb C}}\setminus\{0,1\}$, $p(u)*p'(1-u)$ is not a torsion element in $F^2{{\rm CH}}_0(E\times E')$. We first give the proof in case $E$ and $E'$ are both non-singular. For a quasi-projective ${{\mathbb C}}$-scheme $X$, we let $S^nX$ denote the $n$th symmetric power of $X$. For $X$ smooth, we have the map $$\begin{aligned} \rho_n: S^nX({{\mathbb C}})\times S^nX({{\mathbb C}})&\to {{\rm CH}}_0(X)\\ (\sum_{i=1}^nx_i,\sum_{j=1}^ny_j)&\mapsto [\sum_{i=1}^nx_i-\sum_{j=1}^ny_j].\end{aligned}$$ For each integer $n\ge1$, we have the morphism $$\begin{aligned} \phi_n:E\times E'&\to S^{2n}(E\times E')\times S^{2n}(E\times E') \\ (x,y)&\mapsto(n(x,y)+n(0,0),n(x,0)+n(0,y)).\end{aligned}$$ By [@Roit Theorem 1], $(\rho_{2n}\circ\phi_n)^{-1}(0)$ is a countable union of Zariski closed subsets of $E\times E'$. On the other hand, since $p_g(E\times E')=1$, the Albanese kernel $F^2{{\rm CH}}_0(E\times E')$ is “infinite dimensional” [@Mumford]; in particular, $F^2{{\rm CH}}_0(E\times E')_{{\mathbb Q}}\neq0$. Since $F^2{{\rm CH}}_0(E\times E')$ is generated by cycles of the form $p(u)*p'(v)$, it follows that $(\rho_{2n}\circ\phi_n)^{-1}(0)$ is a countable union of [*proper*]{} closed subsets of $E\times E'$. If $D$ is a proper algebraic subset of $E\times E'$, then, by Lemma \[NonAlgLem\], $s^{-1}(D)$ is a proper analytic subset of ${{\mathbb C}}\setminus\{0,1\}$, and hence countable. Thus, the set of $u\in {{\mathbb C}}\setminus\{0,1\}$ such that $p(u)*p'(1-u)$ is torsion is countable, which completes the proof in case both $E$ and $E'$ are non-singular. If say $E'=E_0$, we use essentially the same proof. We let $X$ be the open subscheme $E\times(E_0\setminus\{*\})$ of $E\times E_0$. We have the map $\rho_n: S^nX({{\mathbb C}})\times S^nX({{\mathbb C}}) \to {{\rm CH}}_0(E\times E_0)$ defined as above.
By [@LevWeib Theorem 4.3], $(\rho_{2n}\circ\phi_n)^{-1}(0)$ is a countable union of closed subsets $D_i$ of $X$. By [@Srini], we have an infinite-dimensionality result for ${{\rm CH}}_0(E\times E_0)$ similar to the one in the smooth case, from which it follows that each $D_i$ is a proper closed subset of $X$. Thus, the closure of each $D_i$ in $E\times E_0$ is a proper algebraic subset of $E\times E_0$. The same argument as in the smooth case finishes the proof. Indetectability =============== The zero-cycle $p(u)*p'(1-u)$ is indetectable by cohomology theories built on the sheaf ${{\mathcal O}}_{E_{{\rm an}}\times E'_{{\rm an}}}^*$. We first consider the following abstract situation. Let $\Gamma_0(2)$ be the complex: $$\begin{aligned} {{\mathbb Z}}[{{\mathbb C}}\setminus\{0,1\}]&\to {{\mathbb C}}^*\otimes{{\mathbb C}}^*\\ u&\mapsto u\otimes(1-u),\end{aligned}$$ with ${{\mathbb C}}^*\otimes{{\mathbb C}}^*$ in degree two. Let $X=E\times E'$, and let $\Gamma(2)_{{\rm an}}$ be a complex of sheaves on $X_{{\rm an}}$ with the following properties: $$\label{Props}$$ 1. There is a group homomorphism ${{\rm cl}}:{{\rm CH}}_0(X)\to {{\mathbb H}}^4(X_{{\rm an}},\Gamma(2)_{{\rm an}})$. 2. There is a map in the derived category of sheaves $D^b({\rm Sh}_{X_{{\rm an}}})$, $\rho:{{\mathcal O}}^*_{X_{{\rm an}}}\otimes {{\mathcal O}}^*_{X_{{\rm an}}}[-2]\to \Gamma(2)_{{\rm an}}$. 3. The composition $${{\mathbb C}}^*\otimes{{\mathbb C}}^*[-2]\to {{\mathcal O}}^*_{X_{{\rm an}}}\otimes{{\mathcal O}}^*_{X_{{\rm an}}}[-2]\to\Gamma(2)_{{\rm an}}$$ extends to a map in $D^b({\rm Sh}_{X_{{\rm an}}})$, $\Gamma_0(2)\to \Gamma(2)_{{\rm an}}$. 4.
The composition $$\begin{gathered} {{\rm Pic}}(X)\otimes {{\rm Pic}}(X)\cong H^1(X_{{\rm an}},{{\mathcal O}}^*_{X_{{\rm an}}})\otimes H^1(X_{{\rm an}},{{\mathcal O}}^*_{X_{{\rm an}}})\\ \xrightarrow{\cup}H^2(X_{{\rm an}},{{\mathcal O}}^*_{X_{{\rm an}}}\otimes {{\mathcal O}}^*_{X_{{\rm an}}}) \xrightarrow{\rho}{{\mathbb H}}^4(X_{{\rm an}},\Gamma(2)_{{\rm an}})\end{gathered}$$ agrees with the composition $${{\rm Pic}}(X)\otimes {{\rm Pic}}(X)\xrightarrow{\cup}{{\rm CH}}_0(X)\xrightarrow{{{\rm cl}}}{{\mathbb H}}^4(X_{{\rm an}},\Gamma(2)_{{\rm an}}) .$$ \[thm3.1\] Let $E=E_t$ and $E'=E_{t'}$, and let $\Gamma(2)_{{\rm an}}$ be a complex of sheaves on $E_{{\rm an}}\times E'_{{\rm an}}$ satisfying the conditions \[Props\]. Then ${{\rm cl}}(p(u)*p'(1-u))=0$ for all $u\in{{\mathbb C}}\setminus\{0,1\}$. We give the proof in case both $E$ and $E'$ are non-singular; the singular case is similar, but easier, and is left to the reader. Since $$p(u)*p'(1-u)=[p_1^*c_1(L_u^{{\rm alg}})]\cap[p_2^*c_1(L_{1-u}^{{\rm alg}})],$$ it follows from (4) that we need to show that $\rho([L_u^{{\rm an}}]\cup[L_{1-u}^{{\rm an}}])=0$. The class $[L_u^{{\rm an}}]\in H^1(E_{{\rm an}},{{\mathcal O}}^*_{E_{{\rm an}}})$ is the image of $[L_u]\in H^1(E_{{\rm an}},{{\mathbb C}}^*)$ under the map of sheaves ${{\mathbb C}}^*\to {{\mathcal O}}^*_{E_{{\rm an}}}$, and similarly for $L_{1-u}$ and $L_{1-u}^{{\rm an}}$. Thus, by (3), it suffices to see that $p_1^*[L_u]\cup p_2^*[L_{1-u}]\in H^2(E\times E',{{\mathbb C}}^*\otimes{{\mathbb C}}^*)$ vanishes in ${{\mathbb H}}^4(E\times E',\Gamma_0(2))$. The ${{\mathbb Z}}$-covers $p:{{\mathbb C}}^*\to E=E_t$, $p':{{\mathbb C}}^*\to E'=E_{t'}$ give natural maps $$\begin{gathered} \alpha:H^*({{\mathbb Z}},H^0({{\mathbb C}}^*,{{\mathbb C}}^*))\to H^*(E_{{\rm an}},{{\mathbb C}}^*), \notag\\ \beta:H^*({{\mathbb Z}},H^0({{\mathbb C}}^*,{{\mathbb C}}^*))\to H^*(E'_{{\rm an}},{{\mathbb C}}^*).
\notag\end{gathered}$$ Similarly, the ${{\mathbb Z}}^2$-cover $p\times p':{{\mathbb C}}^*\times{{\mathbb C}}^*\to E\times E'$ gives the natural map $$\gamma:{{\mathbb H}}^*({{\mathbb Z}}^2,H^0({{\mathbb C}}^*\times{{\mathbb C}}^*,\Gamma_0(2)))\to {{\mathbb H}}^*(E_{{\rm an}}\times E'_{{\rm an}},\Gamma_0(2)).$$ Letting $\iota:{{\mathbb C}}^*\otimes{{\mathbb C}}^*\to\Gamma_0(2)$ denote the natural inclusion, the maps above are compatible with the respective cup products: $$\iota\circ(\alpha(a)\cup\beta(b))=\gamma\circ\iota(a\cup b).$$ Each $v\in{{\mathbb C}}^*$ gives the corresponding homomorphism $v:{{\mathbb Z}}\to{{\mathbb C}}^*$, $v(n)=v^n$. Since $[L_u]\in H^1(E_{{\rm an}},{{\mathbb C}}^*)$ is $\alpha(u:{{\mathbb Z}}\to{{\mathbb C}}^*)$ and $[L_{1-u}]\in H^1(E'_{{\rm an}},{{\mathbb C}}^*)$ is $\beta(1-u:{{\mathbb Z}}\to{{\mathbb C}}^*)$, it suffices to show that $\iota(p^*_1u\cup p_2^*(1-u))=0$ in ${{\mathbb H}}^4({{\mathbb Z}}^2,\Gamma_0(2))$, where $p_1^*u,p_2^*(1-u):{{\mathbb Z}}^2\to{{\mathbb C}}^*$ are the respective homomorphisms $(a,b)\mapsto u^a$, and $(a,b)\mapsto (1-u)^b$. We have the spectral sequence $$E_2^{p,q}=H^p({{\mathbb Z}}^2,H^q(\Gamma_0(2)))\Longrightarrow {{\mathbb H}}^{p+q}({{\mathbb Z}}^2,\Gamma_0(2)).$$ Since ${{\mathbb Z}}^2$ has cohomological dimension two, and since $H^q(\Gamma_0(2))=0$ for $q\ne1,2$, it follows that the natural map ${{\mathbb H}}^4({{\mathbb Z}}^2,\Gamma_0(2))\to H^2({{\mathbb Z}}^2,H^2(\Gamma_0(2)))$ is an isomorphism. Since $H^2(\Gamma_0(2))=K_2({{\mathbb C}})$, we need to show that the image of $p_1^*u\cup p_2^*(1-u)$ in $H^2({{\mathbb Z}}^2,K_2({{\mathbb C}}))$ is zero. By definition of the cup product in group cohomology, we have $$\begin{aligned} [p_1^*u\cup p_2^*(1-u)]((a,b),(c,d))&= p_1^*u(a,b)\otimes p_2^*(1-u)(c-a,d-b)\\ &= u^a\otimes (1-u)^{d-b},\end{aligned}$$ which vanishes in $K_2({{\mathbb C}})$: by bilinearity, $u^a\otimes(1-u)^{d-b}=a(d-b)\,(u\otimes(1-u))$, and this maps to the Steinberg symbol $\{u,1-u\}=0$. \[ex2\] In [@Bl], S.
Bloch defines a quotient complex ${{\mathcal B}}(2)$ of the analytic complex $ {{\mathcal O}}^*_{X_{{\rm an}}}(1) \xrightarrow{2\pi i \otimes 1} {{\mathcal O}}_{X_{{\rm an}}} \otimes {{\mathcal O}}^*_{X_{{\rm an}}}$ fulfilling ${{\mathcal H}}^i({{\mathcal B}}(2)) = 0$ for $i \neq 1, 2$, $${{\mathcal H}}^1({{\mathcal B}}(2))= {\rm Im}\Big(r: K_{3, {\rm ind}}({{\mathbb C}}) \to {{\mathbb C}}/{{\mathbb Z}}(2)\Big) =:\Delta^*(1),$$ where $r$ is the regulator map, and ${{\mathcal H}}^2({{\mathcal B}}(2))= {{\mathcal K}}_{2, {\rm an}}$. He shows in the same article that $r(K_{3, {\rm ind}}({{\mathbb C}}))= r(K_{3, {\rm ind}}(\bar{{{\mathbb Q}}}))$, thus $\Delta^*(1)$ is a countable subgroup of ${{\mathbb C}}/{{\mathbb Z}}(2)$, and also that ${{\mathcal B}}(2)$ maps to the complex ${{\mathbb Z}}(2) \to {{\mathcal O}}_{X_{{\rm an}}} \to \Omega^1_{X_{{\rm an}}}$ which computes the Deligne cohomology $H_{{{\mathcal D}}}^*(X, 2)$ when $X$ is projective smooth over $ {{\mathbb C}}$. In fact, the cycle map ${{\rm CH}}^2(X) \to H^4_{{{\mathcal D}}}(X, 2)$ is shown to factor through $H^4_{{{\mathcal D}}}(X_{{\rm an}}, {{\mathcal B}}(2))$ ([@E]). S. Bloch ([@B]) asked whether the cycle map ${{\rm CH}}^2(X) \to H^4_{{{\mathcal D}}}(X_{{\rm an}}, {{\mathcal B}}(2))$ could possibly be injective. The computations of this article show that it is not.
Indeed, by Lemma (1.3) of [@Bl], the complex $\Gamma_0(2)$ maps to the complex $$\epsilon ({{\mathbb Z}}[{{\mathbb C}}\setminus \{0,1\}]) \to {{\mathbb C}}\otimes {{\mathbb C}}^*,$$ where $\epsilon$ is defined via the dilogarithm function $$\epsilon(a)= [\log (1-a) \otimes a] + [2\pi i \otimes {\rm exp} \Big( \frac{-1}{2\pi i} \int_0^a \log (1-t) \frac{dt}{t} \Big)],$$ and the latter complex maps to $${{\mathcal B}}(2)_{X}: {{\mathcal O}}^*_{X_{{\rm an}}}(1) \xrightarrow{2\pi i \otimes 1} {{\mathcal O}}_{X_{{\rm an}}} \otimes {{\mathcal O}}^*_{X_{{\rm an}}}/\epsilon ({{\mathbb Z}}[{{\mathbb C}}\setminus \{0,1\}])$$ for $X= {{\rm Spec \,}}{{\mathbb C}}$. Let us take $\Gamma(2)_{{\rm an}}= {{\mathcal B}}(2)$. We now verify the conditions \[Props\]. Condition 1 is given by [@E]. Indeed, one computes the Leray spectral sequence associated to $\alpha: X_{{\rm an}}\to X_{{\rm Zar}}$ and the first term entering $H^4({{\mathcal B}}(2))$ is $$E^{2,2}= H^2_{\rm Zar}(R\alpha_* {{\mathcal B}}(2))= H^2({{\mathcal K}}_{2, {{\mathbb Z}}}),$$ where ${{\mathcal K}}_{2, {{\mathbb Z}}} := {\rm Ker}\Big( \alpha_*{{\mathcal K}}_{2,{{\rm an}}} \xrightarrow{d\log \wedge d\log} H^2({{\mathbb C}}/{{\mathbb Z}}(2))\Big).$ Then the cycle map ${{\rm cl}}$ is induced by ${{\mathcal K}}_2 \to {{\mathcal K}}_{2, {{\mathbb Z}}}$ on $X_{{\rm Zar}}$, which is obviously compatible with the product in ${\rm Pic}$. Thus we have condition 4. Conditions 2 and 3 have already been discussed. Hence we can apply Theorem \[thm2.3\] to produce a 0-cycle $p(u)*p'(1-u)$ on $E \times E'$, where both $E$ and $E'$ are smooth elliptic curves, which does not die in the Chow group ${{\rm CH}}_0(E \times E')$, whereas by Theorem \[thm3.1\] it dies in ${{\mathbb H}}^4({{\mathcal B}}(2))$. In [@Li], S. Lichtenbaum constructs an étale version $\Gamma(2)$ of S. Bloch’s analytic complex ${{\mathcal B}}(2)$, the cohomology of which contains ${{\rm CH}}^2(X)$. This contrasts with the examples discussed above. Over a $p$-adic field, W.
Raskind and M. Spieß ([@RS]) show that the Albanese kernel modulo $n$ of a product of two Tate elliptic curves is dominated by $K_2(k)/n$. This result is not immediately comparable to ours, but is obviously related. The Relative Situation ====================== In this section, we study the cycles constructed in section 2 on $X=E \times E_0$, where, as there, $E$ is smooth and $E_0$ is a nodal curve. Let $\nu=1\times q : E \times {{\mathbb P}}^1 \to X$ be the normalization. We define $$\begin{gathered} \bar{{{\mathcal K}}_2}= {\rm Ker} \Big(\nu_*{{\mathcal K}}_2 \xrightarrow{|_{E \times 0} - |_{E \times \infty}} {{\mathcal K}}_2|_{E }\Big)\end{gathered}$$ \[lem4.1\] One has $${{\rm CH}}^2(X)= H^2(X, \bar{{{\mathcal K}}_2}),$$ and the Chow group ${{\rm CH}}_0(X)$ fits into an exact sequence $$0 \to H^1(E, {{\mathcal K}}_2) \xrightarrow{\gamma} {{\rm CH}}_0(X) \xrightarrow{\nu^*} {{\rm CH}}_0(E \times {{\mathbb P}}^1)={\rm Pic}(E)\otimes {\rm Pic}({{\mathbb P}}^1) \to 0.$$ Moreover, the map $\gamma$ is defined by $$\gamma(\sum_{x \in E^{(1)}} x \otimes \lambda_x) = \sum_{x \in E^{(1)}} (x, p_0(\lambda_x))-(x,0).$$ The map $\nu^*: {{\mathcal K}}_2 \to \bar{{{\mathcal K}}_2}$ is obviously surjective, and by the Gersten resolution on the smooth points of $X$, the kernel is supported in codimension 1. Thus $\nu^*$ induces an isomorphism on $H^2$. On the other hand, $$H^1(E \times {{\mathbb P}}^1, {{\mathcal K}}_2)= H^1(E, {{\mathcal K}}_2) \oplus H^0(E, {{\mathcal K}}_1) \cup c_1({{\mathcal O}}(1)).$$ The term $H^1(E, {{\mathcal K}}_2)$ maps to $0 \in H^1(E, {{\mathcal K}}_2)$ via the difference of the restrictions to $E \times 0$ and $E \times \infty$, while $c_1({{\mathcal O}}(1))$ restricts to $0$ on both $E\times 0$ and $E \times \infty$. This establishes the exactness of the sequence, via the long exact cohomology sequence associated to the short exact sequence defining $\bar{{{\mathcal K}}_2}$.
Finally, the value $\gamma(x\otimes \lambda_x)$ of the map is given by the boundary morphism ${{\mathbb C}}^* \to H^1(E_0, {{\mathcal O}}^*_{E_0})$ induced by the normalization sequence $$0 \to {{\mathcal O}}^*_{E_0} \to q_*{{\mathcal O}}^*_{{{\mathbb P}}^1} \xrightarrow{|_0 - |_{\infty}} {{\mathbb C}}^* \to 0,$$ applied to the second tensor factor $\lambda_x$. The formula for $\gamma$ thus follows from Lemma \[C1Comp\]. Let ${\rm Nm}: H^1(E, {{\mathcal K}}_2) \to {{\mathbb C}}^*$ be the norm map defined by $$\begin{gathered} \label{Nm} {\rm Nm}\Big(\sum_{x \in E^{(1)}}x \otimes \lambda_x \Big) = \prod_{x \in E^{(1)}} \lambda_x.\end{gathered}$$ We set $$\begin{gathered} \label{dfnV} V(E) ={\rm Ker} {\rm Nm}.\end{gathered}$$ One has \[lem4.2\] $F^2{{\rm CH}}_0(X) = \gamma\Big(V(E)\Big).$ By the definition given in §\[AlbKer\], $F^2{{\rm CH}}_0(X)$ is generated by the expressions $[(x,y)]-[(x,0)]-[(0,y)]+[(0,0)]$, with $x\in E({{\mathbb C}})$ and $y\in E_0({{\mathbb C}})\setminus\{*\}$. By the formula for $\gamma$ given in Lemma \[lem4.1\], this expression is $\gamma(x\otimes y-0\otimes y)$, after identifying $y\in{{\mathbb C}}^*$ with $p_0(y)\in E_0({{\mathbb C}})$. Clearly $V(E)$ is generated by the elements of $H^1(E,{{\mathcal K}}_2)$ of the form $x\otimes y-0\otimes y$, whence the lemma. Next we want to map ${{\rm CH}}_0(X)$ to a relative version of S. Bloch’s analytic motivic cohomology. So we define $$\begin{gathered} \bar{{{\mathcal B}}}(2) = {\rm Ker} \Big(\nu_*{{\mathcal B}}(2) \xrightarrow{|_{E \times 0} - |_{E \times \infty}} {{\mathcal B}}(2)|_{E }\Big)\end{gathered}$$ In particular, $\bar{{{\mathcal B}}}(2)$ is an extension of $$\bar{{{\mathcal K}}}_{2, {{\rm an}}}= {\rm Ker} \Big(\nu_*{{\mathcal K}}_{2,{{\rm an}}} \xrightarrow{|_{E \times 0} - |_{E \times \infty}} {{\mathcal K}}_{2, {{\rm an}}}|_{E }\Big)$$ placed in degree 2, by $\Delta^*(1)$, placed in degree 1.
In other words, ${{\mathcal B}}(2)$ is the pull-back of $\bar{{{\mathcal B}}}(2)$ via the map $ \nu^*:{{\mathcal K}}_{2, {{\rm an}}} \to \bar{{{\mathcal K}}}_{2, {{\rm an}}}$, and in particular, it receives the complex $\Gamma_0(2)$ as explained in Example \[ex2\]. Considering again the Leray spectral sequence attached to the map $\alpha: X_{{\rm an}}\to X_{{\rm Zar}}$, we see that $$\begin{gathered} \bar{{{\mathcal K}}}_{2, {{\mathbb Z}}} := {\rm Ker}\Big(\alpha_* \bar{{{\mathcal K}}}_{2, {{\rm an}}} \to {{\mathcal H}}^2({{\mathbb C}}/{{\mathbb Z}}(2))\Big)\end{gathered}$$ receives $\bar{{{\mathcal K}}_2}$ and that the first map of the spectral sequence is then $$\begin{gathered} H^2(X, \bar{{{\mathcal K}}}_{2, {{\mathbb Z}}}) \to {{\mathbb H}}^4(X_{{\rm an}}, \bar{{{\mathcal B}}}(2)).\end{gathered}$$ In conclusion, we have shown the following. One has a cycle map $$\psi_X: {{\rm CH}}_0(X) \to {{\mathbb H}}^4(X_{{\rm an}}, \bar{{{\mathcal B}}}(2))$$ compatible with the cycle map $$\psi_{E \times {{\mathbb P}}^1}: {{\rm CH}}_0(E \times {{\mathbb P}}^1) \to {{\mathbb H}}^4((E \times {{\mathbb P}}^1)_{{\rm an}}, {{\mathcal B}}(2))$$ on the normalization. Moreover, $\psi_X$ fulfills the conditions described in \[Props\]. We need only verify condition 4 of \[Props\]. From the normalization sequence $$0 \to {{\mathcal O}}^*_X \to \nu_*{{\mathcal O}}^*_{E\times {{\mathbb P}}^1} \xrightarrow{|_{E \times 0} - |_{E \times \infty}} {{\mathcal O}}^*_E \to 0,$$ one has a natural map $${{\mathcal O}}^*_{X_{{\rm an}}} \otimes {{\mathcal O}}^*_{X_{{\rm an}}} \to \bar{{{\mathcal K}}}_{2,{{\rm an}}}$$ which obviously fulfills condition 4 of \[Props\]. Now we can apply Theorem \[thm3.1\] to conclude \[SingVanThm\] The 0-cycles defined by the Steinberg curve on $E\times E_0$ die in the analytic motivic cohomology ${{\mathbb H}}^4(X_{{\rm an}}, \bar{{{\mathcal B}}}(2))$. Let $K$ be a subfield of ${{\mathbb C}}$.
We next consider, for any algebraic variety $Z$ defined over $K$, the cycle map with values in the absolute Hodge cohomology $$\begin{gathered} H^m(Z, {{\mathcal K}}_2) \xrightarrow{d\log \wedge d\log}H^m(Z, \Omega^2_{Z/{{\mathbb Q}}}) \end{gathered}$$ induced by the absolute $d\log$ map $$\begin{gathered} {{\mathcal O}}^*_Z \xrightarrow{d\log} \Omega^1_{Z/{{\mathbb Q}}}.\end{gathered}$$ This cycle map is obviously compatible with the map $\gamma$, and with extension of scalars. Let $E\to{{\rm Spec \,}}K$ be an elliptic curve over a subfield $K$ of ${{\mathbb C}}$. We have the exact sheaf sequence $$0\to{{\mathcal O}}_E\otimes\Omega^1_{K/{{\mathbb Q}}}\to\Omega^1_{E/{{\mathbb Q}}}\to\Omega^1_{E/K}\to0,$$ which induces a two-term filtration $F^*\Omega^2_{E/{{\mathbb Q}}}$ of $\Omega^2_{E/{{\mathbb Q}}}$ with $F^2\Omega^2_{E/{{\mathbb Q}}}={{\mathcal O}}_E\otimes\Omega^2_{K/{{\mathbb Q}}}$. This gives us the natural maps $$\begin{gathered} \gamma_1:H^*(E,{{\mathcal O}}_E)\otimes\Omega^1_{K/{{\mathbb Q}}}\to H^*(E,\Omega^1_{E/{{\mathbb Q}}})\\ \gamma_2:H^*(E,{{\mathcal O}}_E)\otimes\Omega^2_{K/{{\mathbb Q}}}\to H^*(E,\Omega^2_{E/{{\mathbb Q}}}).\end{gathered}$$ We have the norm map ${{\operatorname{Nm}}}:H^1(E,{{\mathcal K}}_2)\to H^0(K,{{\mathcal K}}_1)=K^*$ as in \[Nm\], but over $K$; we let $V(E)\subset H^1(E,{{\mathcal K}}_2)$ be the kernel of ${{\operatorname{Nm}}}$ (see \[dfnV\]). \[AHCompLem\] Let $K$ be an algebraically closed subfield of ${{\mathbb C}}$, $E\to{{\rm Spec \,}}K$ an elliptic curve over $K$. Then the cycle map with values in absolute Hodge cohomology maps $V(E)$ to the subgroup $\gamma_2[H^1(E, {{\mathcal O}}_E) \otimes\Omega^2_{K/{{\mathbb Q}}}]$ of $H^1(E,\Omega^2_{E/{{\mathbb Q}}})$.
The kernel of the composition $${{\rm Pic}}(E)=H^1(E,{{\mathcal K}}_1)\xrightarrow{d\log}H^1(E,\Omega^1_{E/{{\mathbb Q}}})\to H^1(E,\Omega^1_{E/K})\cong K$$ is the kernel of the degree map $${{\rm Pic}}(E)\xrightarrow{\deg}{{\mathbb Z}}\subset K,$$ i.e. ${{\rm Pic}}^0(E)$; hence the $d\log$ map sends ${{\rm Pic}}^0(E)$ to the subgroup $\gamma_1[H^1(E,{{\mathcal O}}_E)\otimes\Omega^1_{K/{{\mathbb Q}}}]$ of $H^1(E,\Omega^1_{E/{{\mathbb Q}}})$. Take $\tau\in{{\rm Pic}}^0(E)$, $u\in H^0(E,{{\mathcal K}}_1)=K^*$, and let $\xi=\tau\cup u\in H^1(E,{{\mathcal K}}_2)$. Then $$d\log(\xi)=d\log(\tau)\cup d\log(u).$$ Since $d\log:K^*\to\Omega^1_{K/{{\mathbb Q}}}$ is just the absolute $d\log$ map, we see that $d\log(\xi)$ lands in the image of the cup product map $$[H^1(E,{{\mathcal O}}_E)\otimes\Omega^1_{K/{{\mathbb Q}}}]\otimes \Omega^1_{K/{{\mathbb Q}}}\to H^1(E,\Omega^2_{E/{{\mathbb Q}}}),$$ which is $\gamma_2(H^1(E,{{\mathcal O}}_E)\otimes\Omega^2_{K/{{\mathbb Q}}})$. Since $K$ is algebraically closed, the cup product ${{\rm Pic}}(E)\otimes K^*\to H^1(E,{{\mathcal K}}_2)$ is surjective, from which one sees that the cup product maps ${{\rm Pic}}^0(E)\otimes K^*$ onto $V(E)$. Combining this with the computation above completes the proof. From the surjectivity of the cup product ${{\rm Pic}}^0(E)\otimes K^*\to V(E)$ for $K$ algebraically closed, we see that the injection $H^1(E,{{\mathcal K}}_2)\to {{\rm CH}}_0(X)$ sends $V(E)$ isomorphically onto $F^2{{\rm CH}}_0(X)$. Let $K$ be a subfield of ${{\mathbb C}}$. We say that an element $\xi$ of ${{\rm CH}}_0(X)$ is [*defined over $K$*]{} if there is a $K$-scheme $X^0$, an element $\xi^0$ of ${{\rm CH}}_0(X^0)$ and an isomorphism $\alpha:X^0_{{\mathbb C}}\to X$ such that $\xi=\alpha_*(\xi^0_{{\mathbb C}})$. From Lemma \[AHCompLem\] and the compatibility of $d\log$ with extension of scalars, we have \[AHVanLem\] Take $K={{\mathbb C}}$, and let $\xi$ be an element of $F^2{{\rm CH}}_0(X)=V(E)$.
If $\xi$ is defined over a field of transcendence degree one over ${{\mathbb Q}}$, then $\xi$ vanishes under the cycle map to absolute Hodge cohomology. If $E$ is an elliptic curve with complex multiplication, then there are non-torsion cycles $\xi \in F^2{{\rm CH}}_0(X)$ dying in the analytic motivic cohomology as well as in absolute Hodge cohomology. By the remark above, we may replace $F^2{{\rm CH}}_0(X)$ with $V(E)$. Let $\bar E$ be a model for $E$, with equation $y^2=4x^3-ax-b$ defined over a number field $K\subset{{\mathbb C}}$. Let $\omega=\frac{dx}{y}$ be the standard global one-form on $\bar E$. Choosing an isomorphism $\bar E_{{\mathbb C}}\cong E_{{\mathbb C}}$ defines the period lattice $L_\omega\subset{{\mathbb C}}$ for $\omega$. Choose a basis for $L_\omega$ of the form $\{\Omega,\tau\Omega\}$, and let $t=e^{2\pi i\tau}$. Let $${{\mathcal P}}:{{\mathbb C}}\to {{\mathbb C}}{{\mathbb P}}^1$$ be the Weierstraß $P$-function for the lattice $L_\omega$. The map $\times\Omega^{-1}:{{\mathbb C}}\to{{\mathbb C}}$ gives rise to the isomorphism of Riemann surfaces $\alpha_{{\rm an}}:\bar E_{{\mathbb C}}^{{\rm an}}\to E_t^{{\rm an}}$ making the diagram $$\xymatrix{ {{\mathbb C}}\ar[r]^{\times\Omega^{-1}}\ar[dd]_{({{\mathcal P}},{{\mathcal P}}')}&{{\mathbb C}}\ar[d]^{\exp}\\ &{{\mathbb C}}^*\ar[d]^p\\ \bar E_{{\mathbb C}}^{{\rm an}}\ar[r]_{\alpha_{{\rm an}}}&E_t^{{\rm an}}}$$ commute, i.e., $$p(u)=\alpha_{{\rm an}}({{\mathcal P}}(\tfrac{\Omega}{2\pi i}\log u), {{\mathcal P}}'(\tfrac{\Omega}{2\pi i}\log u)).$$ We let $$\alpha:\bar E_{{\mathbb C}}\to E_t$$ be the corresponding isomorphism of algebraic elliptic curves over ${{\mathbb C}}$. By [@Be], théorème 1, ${{\mathcal P}}(\tfrac{\Omega}{2\pi i}\log u)$ has transcendence degree 1 over $\bar{{\mathbb Q}}$ for all $u\in {{\mathbb N}}$, $u\ge2$. (We thank Y. André for giving us this reference).
Fix a $u\ge2$, let $K$ be the algebraic closure of the field ${{\mathbb Q}}({{\mathcal P}}(\tfrac{\Omega}{2\pi i}\log u))$, and let $x\in\bar E(K)$ be the point $({{\mathcal P}}(\tfrac{\Omega}{2\pi i}\log u), {{\mathcal P}}'(\tfrac{\Omega}{2\pi i}\log u))$. Then $x$ is a generic point of $\bar E$ over $\bar{{\mathbb Q}}$. We take $$\xi:= p(u)*p(1-u).$$ By construction, $\xi=\alpha(\xi_K\times_K{{\mathbb C}})$, where $\xi_K\in H^1(\bar E,{{\mathcal K}}_2)$ is the element $[(x)-(0)]\cup[1-u]$. Here $[(x)-(0)]$ denotes the class in ${{\rm Pic}}(E)=H^1(\bar E,{{\mathcal K}}_1)$, and $[1-u]$ denotes the class in $H^0(\bar E,{{\mathcal K}}_1)=K^*$. Since $K$ has transcendence degree one over $\bar{{\mathbb Q}}$, the class of $\xi$ in the absolute Hodge cohomology of $E$ vanishes, by Lemma \[AHVanLem\]. By Theorem \[SingVanThm\], $\xi$ dies in the analytic motivic cohomology of $X$ as well. It remains to show that $\xi$ is a non-torsion element of $H^1(E_K,{{\mathcal K}}_2)$. We give an analytic proof of this using the regulator map with values in Deligne-Beilinson cohomology. Let $Y$ be a smooth projective surface over ${{\mathbb C}}$, and let ${{\operatorname{NS}}}(Y)$ denote the Néron-Severi group of divisors modulo homological equivalence. Then Hodge theory implies that $${{\operatorname{NS}}}(Y)= \{(z, \varphi) \in (H^2(Y_{{\rm an}}, {{\mathbb Z}}(1)) \times F^1H^2(Y_{{\rm an}}, {{\mathbb C}})), z\otimes {{\mathbb C}}= \varphi\},$$ and that $${{\operatorname{NS}}}(Y) \cap F^2H^2_{DR}(Y) =0.$$ We note that the map ${{\rm Pic}}(Y)\otimes{{\mathbb C}}^*\to H^3_{{\mathcal D}}(Y,{{\mathbb Z}}(2))$ induced by the cup product in Deligne cohomology factors through ${{\operatorname{NS}}}(Y)\otimes{{\mathbb C}}^*$, and that the induced map $\iota:{{\operatorname{NS}}}(Y)\otimes{{\mathbb C}}^*\to H^3_{{\mathcal D}}(Y,{{\mathbb Z}}(2))$ is injective.
Indeed, $$H^3_{{\mathcal D}}(Y,{{\mathbb Z}}(2))= H^2(Y_{{\rm an}}, {{\mathbb C}}/{{\mathbb Z}}(2))/F^2.$$ Now take $Y=E\times E$, and let $U\subset E$ be the complement of a non-empty finite set $\Sigma$ of points of $E$. Let $[E\times0]$ be the class of $E\times 0$ in ${{\operatorname{NS}}}(Y)$, and let $\gamma:{{\mathbb C}}^*\to {{\operatorname{NS}}}(Y)\otimes{{\mathbb C}}^*$ be the map $\gamma(v)=[E\times0]\otimes v$. Let $$\iota_U:{{\operatorname{NS}}}(Y)\otimes{{\mathbb C}}^*\to H^3_{{\mathcal D}}(E\times U,{{\mathbb Z}}(2))$$ be the composition of $\iota$ with the restriction map $H^3_{{\mathcal D}}(Y,{{\mathbb Z}}(2))\to H^3_{{\mathcal D}}(E\times U,{{\mathbb Z}}(2))$. We claim that the sequence $${{\mathbb C}}^*\xrightarrow{\gamma}{{\operatorname{NS}}}(Y)\otimes{{\mathbb C}}^*\xrightarrow{\iota_U} H^3_{{\mathcal D}}(E\times U,{{\mathbb Z}}(2))$$ is exact. Indeed, we have the localization sequence $$\oplus_{s\in\Sigma}H^1_{{\mathcal D}}(E\times s,{{\mathbb Z}}(1))\xrightarrow{\oplus_s\iota_s} H^3_{{\mathcal D}}(Y,{{\mathbb Z}}(2))\to H^3_{{\mathcal D}}(E\times U,{{\mathbb Z}}(2))\to,$$ the isomorphism $H^1_{{\mathcal D}}(E\times s,{{\mathbb Z}}(1))\cong{{\mathbb C}}^*$ and the identity $$\iota_s(v)=\gamma(v),\ v\in{{\mathbb C}}^*,$$ which proves our claim. In particular, let $[\Xi]=[\Delta-\{0\}\times E]\otimes v$, where $\Delta$ is the diagonal, $v$ is an element of ${{\mathbb C}}^*$ which is not a root of unity, and $[\Delta-\{0\}\times E]$ is the class in ${{\operatorname{NS}}}(Y)$. Since $[\Delta-\{0\}\times E]$ is not torsion in ${{\operatorname{NS}}}(Y)/[E\times \{0\}]$, we see that $[\Xi]$ has non-torsion image $[\Xi_{{{\mathbb C}}(E)}]$ in $$H^3_{{\mathcal D}}(E\times_{{\mathbb C}}{{\mathbb C}}(E),{{\mathbb Z}}(2)):=\lim_{\substack{\to\\{\emptyset}\neq U\subset E}}H^3_{{\mathcal D}}(E\times U,{{\mathbb Z}}(2)),$$ where the limit is over non-empty Zariski open subsets $U$ of $E$. 
Let $\Xi$ be the image of $(\Delta-0\times E)\otimes v$ in $H^1(Y,{{\mathcal K}}_2)$. Then $[\Xi]$ is the image of $\Xi$ under the regulator map $H^1(Y,{{\mathcal K}}_2)\to H^3_{{\mathcal D}}(Y,{{\mathbb Z}}(2))$. Similarly, letting $\Xi_{{{\mathbb C}}(E)}$ be the pull-back of $\Xi$ to $E\times_{{\mathbb C}}{{\mathbb C}}(E)$, $[\Xi_{{{\mathbb C}}(E)}]$ is the image of $\Xi_{{{\mathbb C}}(E)}$ under the regulator map $H^1(E\times_{{\mathbb C}}{{\mathbb C}}(E),{{\mathcal K}}_2)\to H^3_{{\mathcal D}}(E\times_{{\mathbb C}}{{\mathbb C}}(E),{{\mathbb Z}}(2))$. Thus, $\Xi_{{{\mathbb C}}(E)}$ is a non-torsion element of $H^1(E\times_{{\mathbb C}}{{\mathbb C}}(E),{{\mathcal K}}_2)$ for each non-torsion element $v\in{{\mathbb C}}^*$. Let $\bar\Delta$ be the diagonal in $\bar E\times\bar E$, let $\bar\xi$ be the image of $(\bar\Delta-0\times \bar E)\otimes(1-u)$ in $H^1(E,{{\mathcal K}}_2)$, and let $\bar\xi_{\bar{{\mathbb Q}}(E)}$ be the image of $\bar\xi$ in $H^1(\bar E\times_{\bar{{\mathbb Q}}}\bar{{\mathbb Q}}(\bar E),{{\mathcal K}}_2)$. Clearly, after choosing a complex embedding $\bar{{{\mathbb Q}}} \subset {{\mathbb C}}$, $\Xi_{{{\mathbb C}}(E)}$ (for $v=1-u$) is the image of $\bar\xi_{\bar{{\mathbb Q}}(E)}$ under the extension of scalars $\bar{{\mathbb Q}}(\bar E)\to {{\mathbb C}}(\bar E)\cong{{\mathbb C}}(E)$, hence $\bar\xi_{\bar{{\mathbb Q}}(E)}$ is a non-torsion element of $H^1(\bar E\times_{\bar{{\mathbb Q}}}\bar{{\mathbb Q}}(\bar E),{{\mathcal K}}_2)$. Since $x$ is a geometric generic point of $\bar{E}$ over $\bar{{{\mathbb Q}}}$, there is an embedding $\sigma:\bar{{\mathbb Q}}( E)\to{{\mathbb C}}$ such that $x:{{\rm Spec \,}}{{\mathbb C}}\to\bar E$ is the composition ${{\rm Spec \,}}{{\mathbb C}}\to{{\rm Spec \,}}\bar{{\mathbb Q}}(E)\to \bar E$. 
Thus, $\xi$ is the image of $\bar\xi$ under $({{\operatorname{id}}}\times x)^*:H^1(\bar E\times_{\bar{{{\mathbb Q}}}}\bar E,{{\mathcal K}}_2)\to H^1(E,{{\mathcal K}}_2)$, and hence $\xi$ is the image of $\bar\xi_{\bar{{\mathbb Q}}(E)}$ under the map ${{\operatorname{id}}}\times\sigma_*:H^1(\bar E\times_{\bar{{\mathbb Q}}}\bar{{\mathbb Q}}(\bar E),{{\mathcal K}}_2)\to H^1(E,{{\mathcal K}}_2)$ induced by the extension of scalars $\sigma$. Since the kernel of ${{\operatorname{id}}}\times \sigma_*$ is torsion, it follows that $\xi$ is a non-torsion element of $H^1(E,{{\mathcal K}}_2)$, as desired. Going back to $X=E \times E'$, where both elliptic curves are smooth, we are lacking the transcendence theorem which would force the existence of a cycle $0\neq \xi= p(u) * p(1-u)\in F^2{{\rm CH}}_0(X)$ dying both in ${{\mathbb H}}^4(X, {{\mathcal B}}(2))$ and in absolute Hodge cohomology.
[^1]: Partially supported by the NSF and by the DFG-Forschergruppe “Arithmetik und Geometrie”
--- author: - Fatih Nar title: '[SAR]{} Image Despeckling Using Quadratic-Linear Approximated $\ell_1$-Norm' --- Introduction ============ Synthetic aperture radar (SAR) is a microwave sensor system that allows acquiring high-resolution images day or night, and in almost all weather conditions. However, speckle noise degrades SAR image quality and causes difficulties for various image analysis tasks (e.g. edge detection, change detection, segmentation) [@argenti2013] [@ozcan2015sdd]. In this letter, a variational approach to despeckling is employed due to its excellent performance in various image processing tasks [@ozcan2015sdd]. In the literature, an anisotropic diffusion process for edge-preserving noise reduction was proposed by Perona and Malik [@perona1990]. Variational noise reduction, the ROF model, was then proposed in [@rudin1992rof], where the diffusion process is controlled by a data fidelity term. Afterwards, various despeckling methods were proposed for SAR images, such as speckle reducing anisotropic diffusion (SRAD) [@yu2002srad], improved anisotropic diffusion [@fabbrini2013], and sparsity-driven despeckling (SDD) [@ozcan2015sdd]. In this study, the approximation of the TV regularization term in SDD is improved to increase despeckling accuracy while reducing execution time. Proposed method =============== Speckle reduction for the SAR image is defined as the minimization of the following variational cost function: $$\label{TV_denoising_cost_function} J(f) = \frac{1}{2N} \sum_{p=1}^{N} \left[ (f_p - g_p)^2 + \lambda|(\nabla{f})_p| \right]$$ where $g$ is the observed speckled image, $f$ is the desired despeckled image, $N$ is the pixel count, $p$ is the pixel index, $\lambda$ is a positive value determining the smoothing level, and $\nabla$ is the gradient operator.
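As a rough illustration (not the authors' C++ implementation), the cost above can be evaluated in a few lines of NumPy, taking the anisotropic choice $|(\nabla f)_p|=|(\partial_x f)_p|+|(\partial_y f)_p|$ used later in the letter and forward differences that are zero at the right and bottom boundaries; the function name is ours:

```python
import numpy as np

def tv_cost(f, g, lam):
    """Evaluate J(f) = (1/2N) * sum_p [ (f_p - g_p)^2 + lam * |grad f|_p ].

    Anisotropic TV with forward differences; derivatives are set to zero
    at the right and bottom image boundaries.
    """
    dx = np.zeros_like(f)
    dy = np.zeros_like(f)
    dx[:, :-1] = f[:, 1:] - f[:, :-1]   # horizontal forward difference
    dy[:-1, :] = f[1:, :] - f[:-1, :]   # vertical forward difference
    n = f.size
    return ((f - g) ** 2 + lam * (np.abs(dx) + np.abs(dy))).sum() / (2 * n)
```

With the sum taken over both terms, this is consistent with the matrix-vector form of the cost given later in the letter.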
In the cost function, the data fidelity term ensures that $f$ stays similar to $g$ in the $\ell_2$-norm sense, and the total variation (TV) regularization term penalizes changes in image gradients in the $\ell_1$-norm sense. Although $\ell_1$-norm TV regularization preserves details, its efficient minimization is difficult since it is not differentiable. SDD [@ozcan2015sdd] proposed to approximate the non-differentiable term quadratically as below: $$\label{quadratic_approximation} |z| \approx (|\hat{z}| + \varepsilon) ^{-1} {z^2}$$ where $\hat{z}$ is a proxy constant for $z$ and $\varepsilon$ is a small positive constant. The accuracy of the approximation increases as $\varepsilon$ gets closer to $0$. In this study, this quadratic approximation is further improved by combining it with a linear approximation as given in equation (\[quadratic\_linear\_approximation\]). $$\label{quadratic_linear_approximation} |z| \approx (1 - \alpha)(|\hat{z}| + \varepsilon) ^{-1} {z^2} + \alpha sgn(\hat{z})z$$ where $0 \leqslant \alpha \leqslant 1$, $sgn(.)$ is the signum function, and $sgn(\hat{z})z$ is the linear approximation of $|z|$. Equation (\[quadratic\_linear\_approximation\]) is a convex combination of the quadratic and linear approximations and is accurate around $\hat{z}$ (see Fig. \[QL\_L1\_appoximation\_figure\]). As $z$ goes to $0$, the linear term vanishes and the quadratic-linear (QL) approximation becomes quadratic around $0$, which also avoids staircase artifacts.
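The two surrogates of $|z|$ can be checked numerically; a minimal sketch (function names are ours), showing that both are exact at the proxy point $z=\hat{z}$ as $\varepsilon\to0$:

```python
import numpy as np

def quad_approx(z, z_hat, eps=1e-2):
    # quadratic surrogate of |z| around the proxy point z_hat (SDD)
    return z ** 2 / (np.abs(z_hat) + eps)

def ql_approx(z, z_hat, alpha=0.5, eps=1e-2):
    # convex combination of the quadratic and linear surrogates (SDD-QL)
    return (1.0 - alpha) * quad_approx(z, z_hat, eps) + alpha * np.sign(z_hat) * z

# both surrogates are tight near z = z_hat and exact there when eps = 0
print(ql_approx(1.0, 1.0, eps=0.0))   # -> 1.0, equals |z|
```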
If we define $|(\nabla{f})_p|$ as $|(\partial_x{f})_p| + |(\partial_y{f})_p|$ for a 2D SAR image and use the QL approximation given in equation (\[quadratic\_linear\_approximation\]), then the cost function in equation (\[TV\_denoising\_cost\_function\]) becomes: $$\begin{aligned} \label{TV_denoising_cost_function_approximated} J^{(n)}(f) = \frac{1}{2N} & \sum_{p=1}^{N} (f_p - g_p)^2 + (f_p - \hat{f}_p)^2 \\ & + \lambda [ (1 - \alpha)(w_{x,p} {(\partial_x{f})_p^2} + w_{y,p} {(\partial_y{f})_p^2}) \\ & ~~~~~~~ + \alpha (s_{x,p} {(\partial_x{f})_p} + s_{y,p} {(\partial_y{f})_p}) ] \end{aligned}$$ where $n$ is the iteration number, $\hat{f}_p$ is a proxy constant for $f_p$, $(f_p - \hat{f}_p)^2$ is a new regularization term forcing $f_p$ to stay close to $\hat{f}_p$ (since the QL approximation is only accurate around $\hat{f}_p$), $w_{x,p}=(|(\partial_x{\hat{f}})_p| + \varepsilon) ^{-1}$, $s_{x,p}=sgn((\partial_x{\hat{f}})_p)$, and $w_{y,p}$ and $s_{y,p}$ are defined correspondingly. The superscript $n$ in $J^{(n)}(f)$ indicates that the cost function must be minimized in an iterative manner due to the employed QL approximation. Equation (\[TV\_denoising\_cost\_function\_approximated\]) can be represented in matrix-vector form as below: $$\begin{aligned} \label{TV_denoising_cost_function_matrix_vector} J^{(n)}(f) = \frac{1}{2N} & \Big( (v_f - v_g)^{\top}(v_f - v_g) + (v_f - v_{\hat{f}})^{\top}(v_f - v_{\hat{f}}) \\ & + \lambda [ (1 - \alpha)(v_f^{\top} C_x^{\top} W_x C_x v_f + v_f^{\top} C_y^{\top} W_y C_y v_f) \\ & ~~~~~~~ + \alpha (s_x^{\top} C_x v_f + s_y^{\top} C_y v_f) ] \Big) \end{aligned}$$ where $v_g$, $v_f$, $v_{\hat{f}}$, $s_x$, $s_y$ are the vector forms of $g_p$, $f_p$, $\hat{f}_p$, $s_{x,p}$, $s_{y,p}$; $W_x$, $W_y$ are the diagonal matrix forms of $w_{x,p}$, $w_{y,p}$; and $C_x$, $C_y$ are Toeplitz matrices implementing the forward-difference gradient operators, with derivatives set to zero at the right and bottom boundaries respectively.
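The operators and the weighted system can be assembled with SciPy sparse matrices. This is an illustrative sketch (the helper names are ours, not from the letter); `build_system` returns the $A$ and $b$ obtained by setting the gradient of the matrix-vector cost to zero:

```python
import numpy as np
import scipy.sparse as sp

def grad_ops(h, w):
    """Sparse forward-difference operators C_x, C_y for an h-by-w image
    vectorized row by row; the derivative is zero at the right (resp.
    bottom) boundary."""
    def d1(n):
        D = sp.diags([-np.ones(n), np.ones(n - 1)], [0, 1], format="lil")
        D[n - 1, n - 1] = 0.0          # zero derivative at the last sample
        return D.tocsr()
    Cx = sp.kron(sp.identity(h), d1(w), format="csr")  # horizontal diffs
    Cy = sp.kron(d1(h), sp.identity(w), format="csr")  # vertical diffs
    return Cx, Cy

def build_system(v_g, v_hat, Cx, Cy, lam, alpha=0.5, eps=1e-2):
    """Assemble the normal equations A v_f = b of the QL-approximated cost."""
    Wx = sp.diags(1.0 / (np.abs(Cx @ v_hat) + eps))
    Wy = sp.diags(1.0 / (np.abs(Cy @ v_hat) + eps))
    sx, sy = np.sign(Cx @ v_hat), np.sign(Cy @ v_hat)
    A = 2.0 * sp.identity(v_g.size) \
        + lam * (1.0 - alpha) * (Cx.T @ Wx @ Cx + Cy.T @ Wy @ Cy)
    b = v_g + v_hat - lam * (alpha / 2.0) * (Cx.T @ sx + Cy.T @ sy)
    return A.tocsr(), b
```

Note that $A$ is symmetric, which is what makes conjugate-gradient-type solvers applicable.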
Equation (\[TV\_denoising\_cost\_function\_matrix\_vector\]) is strictly convex and differentiable; thus, one can take its derivative with respect to $v_f$ and set it equal to zero to obtain the minimum, which leads to a linear system as given below: $$\begin{aligned} \label{TV_denoising_linear_system} A v_f^{(n+1)} = b \end{aligned}$$ where $A = 2I+ \lambda (1 - \alpha) (C_x^{\top} W_x C_x + C_y^{\top} W_y C_y)$, $I$ is the identity matrix, and $b = v_g + v_{\hat{f}} - \lambda (\alpha / 2) (C_x^{\top} s_x + C_y^{\top} s_y)$. The iteration number is $n$ for $A$, $W_x$, $W_y$, $b$, $v_f$, $v_{\hat{f}}$, $s_x$, and $s_y$ unless explicitly stated otherwise. Pseudo-code of the proposed method is given in algorithm \[QL\_L1\_PseudoCode\], where all the steps are easy and computationally cheap to implement, except for solving the linear system. To solve the linear system efficiently, preconditioned conjugate gradient (PCG) with an incomplete Cholesky preconditioner (ICP) is used, where the maximum number of PCG iterations is set to $10^2$ and the convergence tolerance is set to $10^{-2}$. Note that all the matrices ($C_x$, $C_y$, $W_x$, $W_y$, $A$) in algorithm \[QL\_L1\_PseudoCode\] are sparse.

$v_f \gets v_g \gets g$
**for** $n = 1$ **to** $n_{max}$ **do**
$\quad v_{\hat{f}} \gets v_f$
$\quad W_x \gets [\textit{diag}(|C_x v_{\hat{f}}| + \varepsilon)]^{-1}$
$\quad W_y \gets [\textit{diag}(|C_y v_{\hat{f}}| + \varepsilon)]^{-1}$
$\quad s_x \gets \textit{sgn}(C_x v_{\hat{f}})$
$\quad s_y \gets \textit{sgn}(C_y v_{\hat{f}})$
$\quad A \gets 2I+ \lambda (1 - \alpha) (C_x^{\top} W_x C_x + C_y^{\top} W_y C_y)$
$\quad b \gets v_g + v_{\hat{f}} - \lambda (\alpha / 2) (C_x^{\top} s_x + C_y^{\top} s_y)$
$\quad$ solve $A v_f = b$
**end for**
$f \gets v_f$
**return** $f$

As $\alpha$ gets closer to $1$, $A$ becomes more diagonally dominant and the efficiency of solving the linear system increases. However, in that case the diffusion process is calculated in a local manner, which leads to tiny dithering artifacts in the result.
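Putting the pieces together, the outer loop of the algorithm can be sketched end-to-end with SciPy. This sketch uses a direct sparse solve (`spsolve`) in place of the letter's PCG/ICP solver for brevity, and the function name is ours:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def despeckle_ql(g, lam=100.0, alpha=0.5, eps=1e-2, n_max=5):
    """Outer iterations of SDD-QL: refresh the proxy v_hat, rebuild the
    QL-approximated system A v_f = b, and solve it."""
    h, w = g.shape

    def d1(n):                         # 1-D forward difference, zero at end
        D = sp.diags([-np.ones(n), np.ones(n - 1)], [0, 1], format="lil")
        D[n - 1, n - 1] = 0.0
        return D.tocsr()

    Cx = sp.kron(sp.identity(h), d1(w), format="csr")
    Cy = sp.kron(d1(h), sp.identity(w), format="csr")
    v_g = g.ravel().astype(float)
    v_f = v_g.copy()
    for _ in range(n_max):             # outer loop over the proxy point
        v_hat = v_f
        Wx = sp.diags(1.0 / (np.abs(Cx @ v_hat) + eps))
        Wy = sp.diags(1.0 / (np.abs(Cy @ v_hat) + eps))
        sx, sy = np.sign(Cx @ v_hat), np.sign(Cy @ v_hat)
        A = 2.0 * sp.identity(h * w) + lam * (1.0 - alpha) * (
            Cx.T @ Wx @ Cx + Cy.T @ Wy @ Cy)
        b = v_g + v_hat - lam * (alpha / 2.0) * (Cx.T @ sx + Cy.T @ sy)
        v_f = spsolve(A.tocsc(), b)    # direct solve stands in for PCG+ICP
    return v_f.reshape(h, w)
```

Since $C_x$ and $C_y$ annihilate constant images, the update preserves the image mean exactly while reducing total variation.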
For $\alpha = 1$, $A$ becomes diagonal and the solution of the linear system in equation (\[TV\_denoising\_linear\_system\]) becomes very efficient, but more outer iterations ($n_{max}$) are required. For $\alpha < 1$, $A$ becomes a positive-definite and sparse 5-point Laplacian matrix, which can be solved with an efficient iterative solver such as PCG. As $\alpha$ gets closer to $0$, $A$ becomes less diagonally dominant; therefore, the efficiency of solving the linear system decreases, while the diffusion becomes more global and only a few outer iterations ($n_{max} = 5$) are required. In SDD-QL, the best accuracy and computational efficiency are achieved when $\alpha$ is around $0.5$. Results and analysis ==================== In this section, SDD with the QL approximation (SDD-QL) is analyzed qualitatively and quantitatively to show its despeckling accuracy and computational efficiency. SDD and SDD-QL are both developed in C++ using the coding optimizations given in [@ozcan2015sdd] and compiled as 64-bit executables. In all the experiments, (a) an Intel i7-6700K 4 GHz CPU is used as hardware, (b) a TerraSAR-X sample SAR image of the Visakhapatnam port, India (spot-mode, 16 bit, VV polarization, resolution $\approx$ 1 meter, number of looks $\approx$ 1) is used as the test image, and (c) $\lambda=100$, $\varepsilon=10^{-2}$, $\alpha=0.5$, and $n_{max}=5$ are the default parameters. As seen in Figure \[SDDversusSDDQL\], SDD and SDD-QL produce similar despeckling results since SDD-QL is a variant of SDD. However, SDD-QL preserves the reflectivity levels in each region better due to the applied improvements on SDD, while homogeneous regions are smoothed equivalently. Better preservation of region reflectivities leads to better preservation of details such as point scatterers and edges. The improvements obtained by SDD-QL can be observed with a closer investigation of Figure \[SDDversusSDDQL\]. In Figure \[SyntheticImageExample\], a synthetically generated SAR image of the KFAU logo and its SDD-QL despeckling result are shown.
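The SNR values reported next can be reproduced with a standard definition of SNR in decibels; the letter does not spell out its exact convention, so the formula below (reference-signal power over residual-error power) is an assumption:

```python
import numpy as np

def snr_db(reference, estimate):
    # SNR in dB: reference-signal power divided by residual-error power
    # (one common convention; assumed, not stated in the letter)
    noise = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))
```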
For this synthetic data, Figure \[SNR\_SSIM\_Comparison\] shows signal-to-noise ratio (SNR) and structural similarity (SSIM) index values for different $\lambda$ parameters. In this experiment, the speckled image has SNR=$12.969$dB with SSIM=$0.725$, the best result of SDD-QL has SNR=$21.395$dB with SSIM=$0.956$, and the best result of SDD has SNR=$21.133$dB with SSIM=$0.907$. As seen in Figure \[SNR\_SSIM\_Comparison\], both methods achieve a similar level of SNR, while SDD-QL achieves a higher SSIM value, which shows that SDD-QL preserves edges better than SDD. The quadratic part of the QL approximation, and hence the QL approximation itself, becomes more accurate as $\varepsilon$ gets smaller. However, the $A$ matrix becomes more ill-conditioned as $\varepsilon$ gets smaller; thus, solving the linear system in equation (\[TV\_denoising\_linear\_system\]) takes longer. Note that SDD and SDD-QL use PCG with ICP to solve the linear system, which leads to significantly faster computation compared to no preconditioning. Even so, like other preconditioners, ICP sacrifices some preconditioning performance to obtain an efficiently constructed preconditioner and thus decrease the overall computation in PCG. The SDD-QL method produces a better-conditioned $A$ matrix than the one produced by SDD; thus, SDD-QL is faster. For general despeckling tasks, $\varepsilon$ can be set to $10^{-1}$, where SDD-QL is 2 times faster than SDD; for very accurate despeckling tasks, $\varepsilon$ can be set to $10^{-5}$, where SDD-QL is almost 3 times faster than SDD (see Fig. \[executionTimeComparison\]). For $\varepsilon = 10^{-1}$, SDD-QL despeckles a 512x512-pixel SAR image in 0.28 seconds with a single thread and a 13312x8192-pixel SAR image in 23.20 seconds with 8 threads. Conclusion ========== In this letter, the approximation of the TV regularization term in the SDD method is improved by fusing a quadratic and a linear approximator.
The presented quadratic-linear approximator is derived for the $\ell_1$-norm, but it can easily be extended to other norms that promote sparsity. Experiments show that the proposed method leads to more accurate despeckling with up to 3 times faster execution compared to SDD, even though SDD already uses a satisfactory $\ell_1$-norm approximation and an efficient numerical scheme. Fatih Nar (*Konya Food and Agriculture University (KFAU), Turkey*)   E-mail: fatih.nar@gidatarim.edu.tr Argenti, F. and Lapini, A. and Bianchi, T. and Alparone, L.: ‘A tutorial on speckle reduction in synthetic aperture radar images’, *IEEE Geosci. Remote Sens. Mag.*, 2013, **1**, (3), p. 6-35 Ozcan, C., Sen, B., and Nar, F.: ‘Sparsity-driven despeckling for SAR images’, *IEEE Geosci. Remote Sens. Lett.*, 2015, **13**, (1), p. 115-119 Perona, P. and Malik, J.: ‘Scale-space and edge detection using anisotropic diffusion’, *IEEE Trans. Pattern Anal. Mach. Intell.*, 1990, **12**, (7), p. 629-639 Rudin, L., Osher, S., and Fatemi, E.: ‘Nonlinear total variation based noise removal algorithms’, *Phys. D*, 1992, **60**, p. 259-268 Yu, Y. and Acton, S.T.: ‘Speckle reducing anisotropic diffusion’, *IEEE Trans. Image Process.*, 2002, **11**, (11), p. 1260-1270 Fabbrini, L. and Greco, M. and Messina, M. and Pinelli, G.: ‘Improved anisotropic diffusion filtering for SAR image despeckling’, *Electron. Lett.*, 2013, **49**, (10), p. 672-674
--- abstract: | In this article we study the Gieseker-Maruyama moduli spaces $\mathcal{B}(e,n)$ of stable rank 2 algebraic vector bundles with Chern classes $c_1=e\in\{-1,0\},\ c_2=n\ge1$ on the projective space $\mathbb{P}^3$. We construct two new infinite series $\Sigma_0$ and $\Sigma_1$ of irreducible components of the spaces $\mathcal{B}(e,n)$ for $e=0$ and $e=-1$, respectively. General bundles of these components are obtained as cohomology sheaves of monads, the middle term of which is a rank 4 symplectic instanton bundle in case $e=0$, respectively, twisted symplectic bundle in case $e=-1$. We show that the series $\Sigma_0$ contains components for all big enough values of $n$ (more precisely, at least for $n\ge146$). $\Sigma_0$ yields the next example, after the series of instanton components, of an infinite series of components of $\mathcal{B}(0,n)$ satisfying this property. [**2010 MSC:**]{} 14D20, 14E08, 14J60 [**Keywords:**]{} rank 2 bundles, moduli of stable bundles, symplectic bundles address: - | Faculty of Mathematics\ National Research University Higher School of Economics\ 6 Usacheva Street\ 119048 Moscow, Russia - | Faculty of Mathematics and Physics, Yaroslavl State Pedagogical University named after K.D.Ushinskii, 108 Respublikanskaya Street, 150000 Yaroslavl, Russia\ ${\ \ \ \ }$Koryazhma Branch of Northern (Arctic) Federal University named after M.V.Lomonosov\ 9 Lenin Avenue\ 165651 Koryazhma, Russia - | Faculty of Mathematics\ National Research University Higher School of Economics\ 6 Usacheva Street\ 119048 Moscow, Russia author: - Alexander Tikhomirov - Sergey Tikhomirov - Danil Vasiliev title: 'Construction of stable rank 2 vector bundles on $\mathbb{P}^3$ via symplectic bundles' --- Introduction {#section 1} ============ For $e\in\{-1,0\}$ and $n\in\mathbb{Z}_+$ let $\mathcal{B}(e,n)$ be the Gieseker-Maruyama moduli space of stable rank 2 algebraic vector bundles with Chern classes $c_1=e,\ c_2=n$ on the projective space $\mathbb{P}^3$. 
R. Hartshorne [@H-vb] showed that $\mathcal{B}(e,n)$ is a quasi-projective scheme, nonempty for arbitrary $n\ge1$ in case $e=0$ and, respectively, for even $n\ge2$ in case $e=-1$, and deformation theory predicts that each irreducible component of ${{\mathcal B}}(e,n)$ has dimension at least $8n-3+2e$. In case $e=0$ it is known by now (see [@H-vb], [@ES], [@B1], [@CTT], [@JV], [@T1], [@T2]) that the scheme $\mathcal{B}(0,n)$ contains an irreducible component $I_n$ of expected dimension $8n-3$, and this component is the closure of the smooth open subset of $I_n$ constituted by the so-called mathematical instanton vector bundles. Historically, $\{I_n\}_{n\ge1}$ was the first known infinite series of irreducible components of $\mathcal{B}(0,n)$ having the expected dimension $\dim I_n=8n-3$. In [@H-vb Ex. 4.3.2] R. Hartshorne constructed the first infinite series $\{\mathcal{B}_0(-1,2m)\}_{m\ge1}$ of irreducible components $\mathcal{B}_0(-1,2m)$ of $\mathcal{B}(-1,2m)$ having the expected dimension $\dim\mathcal{B}_0(-1,2m)=16m-5$. Another infinite series of families of vector bundles of dimension $3k^2+10k+8$ from ${{\mathcal B}}(0,2k+1)$ was constructed in 1978 by W. Barth and K. Hulek [@BH], and G. Ellingsrud and S. A. Strømme in [@ES (4.6)-(4.7)] showed that these families are open subsets of irreducible components distinct from the instanton components $I_{2k+1}$. Later, in 1985-87, V. K. Vedernikov [@V1] and [@V2] constructed two infinite series of families of bundles from ${{\mathcal B}}(0,n)$, and one infinite family of bundles from ${{\mathcal B}}(-1,2m)$. A more general series of rank 2 bundles, depending on triples of integers $a,b,c$, appeared in 1984 in the paper of A. Prabhakar Rao [@Rao]. Soon after that, in 1988, L. Ein [@Ein] independently studied these bundles and proved that they constitute open parts of irreducible components of ${{\mathcal B}}(e,n)$ for both $e=0$ and $e=-1$.
New progress in the description of the spaces ${{\mathcal B}}(0,n)$ was made in 2017 by J. Almeida, M. Jardim, A. Tikhomirov and S. Tikhomirov in [@AJTT], where they constructed a new infinite series of irreducible components $Y_a$ of the spaces ${{\mathcal B}}(0,1+a^2)$ for $a\in\{2\}\cup\mathbb{Z}_{\ge4}$. These components have dimensions $\dim Y_a=4\binom{a+3}{3}-a-1$, which are larger than expected. General bundles from these components are obtained as cohomology bundles of rank 1 monads, the middle term of which is a rank 4 symplectic instanton with $c_2=1$, and the lefthand and the righthand terms are ${{\mathcal O}_{\mathbb{P}^{3}}}(-a)$ and ${{\mathcal O}_{\mathbb{P}^{3}}}(a)$, respectively. The aim of the present article is to provide two new infinite series of irreducible components ${{\mathcal M}}_n$ of ${{\mathcal B}}(e,n)$, one for $e=0$ and another for $e=-1$, which in some sense generalize the above construction from [@AJTT]. Namely, in case $e=0$ we construct an infinite series $\Sigma_0$ of irreducible components ${{\mathcal M}}_n$ of ${{\mathcal B}}(0,n)$, such that a general bundle of ${{\mathcal M}}_n$ is a cohomology bundle of a monad of a type similar to the above, the middle term of which is a rank 4 symplectic instanton with arbitrary second Chern class. The first main result of the article, Theorem \[Thm A\], states that the series $\Sigma_0$ contains components ${{\mathcal M}}_n$ for all big enough values of $n$ (more precisely, at least for $n\ge146$). The series $\Sigma_0$ is the first example, besides the instanton series $\{I_n\}_{n\ge1}$, of a series with this property. (For all the other series mentioned above, the question whether they contain components for all big enough values of the second Chern class $n$ is open.)
In case $e=-1$ we construct in a similar way an infinite series $\Sigma_1$ of irreducible components ${{\mathcal M}}_n$ of ${{\mathcal B}}(-1,n)$, such that a general bundle of ${{\mathcal M}}_n$ is a cohomology bundle of a monad of a type similar to the above, in which the lefthand and the righthand terms are ${{\mathcal O}_{\mathbb{P}^{3}}}(-a-1)$ and ${{\mathcal O}_{\mathbb{P}^{3}}}(a)$, respectively, and the middle term is a twisted symplectic rank 4 bundle with first Chern class $-2$. The second main result of the article, Theorem \[Thm B\], states that $\Sigma_1$ contains components ${{\mathcal M}}_n$ asymptotically for almost all big enough values of $n$. (A precise statement about the behaviour of the set of values of $n$ for which ${{\mathcal M}}_n$ is contained in $\Sigma_1$ is given in Remark \[Rem B1\]). We now give a brief sketch of the contents of the article. In Section \[section 2\] we study some properties of pairs $([{{\mathcal E}}_1],[{{\mathcal E}}_2])$ of mathematical instanton bundles and prove the vanishing of certain cohomology groups of their twists by line bundles ${{\mathcal O}_{\mathbb{P}^{3}}}(a)$ and ${{\mathcal O}_{\mathbb{P}^{3}}}(-a)$ (see Proposition \[Prop 1\]). The direct sum $\mathbb{E}= {{\mathcal E}}_1\oplus{{\mathcal E}}_2$ is then used in Section \[section 3\] as a test rank 4 symplectic instanton bundle. This bundle and its deformations are used as middle terms of anti-self-dual monads of the form $0\to{{\mathcal O}_{\mathbb{P}^{3}}}(-a)\to\mathbb{E}\to{{\mathcal O}_{\mathbb{P}^{3}}}(a)\to0$, the cohomology bundles of which provide general bundles of the components ${{\mathcal M}}_n$ of the series $\Sigma_0$ (see Theorem \[Thm A\]). In Section \[section 4\] we study direct sums $\mathbb{E}={{\mathcal E}}_1\oplus{{\mathcal E}}_2$ of vector bundles, where the ${{\mathcal E}}_i$ are bundles from the R. Hartshorne series $\{{{\mathcal B}}_0(-1,2n)\}_{n\ge1}$ mentioned above.
We prove certain vanishing properties for the cohomology of twists of ${{\mathcal E}}_i$ (see Proposition \[Prop 2\]). These properties are then used in Theorem \[Thm B\] in the construction of general vector bundles of components ${{\mathcal M}}_n$ of the series $\Sigma_1$. In Section \[section 5\] we give a list of components ${{\mathcal M}}_n\in\Sigma_0$ for $n\le20$ and of components ${{\mathcal M}}_n\in\Sigma_1$ for $n\le40$. **Conventions and notation**. - Everywhere in this paper we work over the base field of complex numbers $\mathbf{k}=\mathbb{C}$. - $\mathbb{P}^3$ is the projective 3-space over $\mathbf{k}$. - For a stable rank 2 vector bundle $E$ with $c_1(E)=e$, $c_2(E)=n$ on ${{\mathbb{P}^{3}}}$, we denote by $[E]$ its isomorphism class in ${{\mathcal B}}(e,n)$. **Acknowledgements**. A. Tikhomirov was supported by the Academic Fund Program at the National Research University Higher School of Economics (HSE) in 2018-2019 (grant no. 18-01-0037) and by the Russian Academic Excellence Project “5-100”. He also acknowledges the hospitality of the Max Planck Institute for Mathematics in Bonn, where this work was partially done during the winter of 2017. Some properties of mathematical instantons {#section 2} ========================================== Let $a$ and $m$ be two positive integers, where $a\ge2$, and let $\varepsilon\in\{0,1\}$. In this section we prove the following proposition about mathematical instanton vector bundles which will be used in the proof of Theorem \[Thm A\].
\[Prop 1\] A general pair $$\label{2 inst} ([{{\mathcal E}}_1],[{{\mathcal E}}_2])\in I_m\times I_{m+\varepsilon},$$ of instanton vector bundles satisfies the following conditions: $$\label{not equal} [{{\mathcal E}}_1]\ne[{{\mathcal E}}_2];$$ for $i=1,\ m\le a+1,$ respectively, $i=2,\ m+\varepsilon\le a+1$, $$\label{vanish 1} h^1({{\mathcal E}}_i(a))=0,$$ $$\label{vanish 2} h^2({{\mathcal E}}_i(-a))=0,\ \ \ if\ \ \ a\ge12;$$ for $i=1,\ m\le a-4,\ a\ge5,$ respectively, $i=2,\ m+\varepsilon\le a-4,\ a\ge5,$ $$\label{vanish 2a} h^2({{\mathcal E}}_i(-a))=0;$$ for $j\ne1$, $$\label{vanish 3} h^j({{\mathcal E}}_1\otimes{{\mathcal E}}_2)=0.$$ It is clearly enough to treat the case $i=1$, as the case $i=2$ is treated completely similarly. For a general pair of instanton vector bundles, the condition (\[not equal\]) is evidently achieved. We now show that the condition (\[vanish 1\]) can also be satisfied for general bundles $[{{\mathcal E}}_1]\in I_m$ and $[{{\mathcal E}}_2]\in I_{m+\varepsilon}$. For this, consider a smooth quadric surface $S\subset{{\mathbb{P}^{3}}}$, together with an isomorphism $S\simeq{{\mathbb{P}^{1}}}\times{{\mathbb{P}^{1}}}$, and let $Y=\overset{m+1}{\underset{i=1}{\sqcup}}l_i$ be a union of $m+1$ distinct projective lines $l_i$ in ${{\mathbb{P}^{3}}}$ belonging to one of the two rulings on $S$. Considering $Y$ as a reduced scheme, we have ${{\mathcal I}}_{Y,S}\simeq{{\mathcal O}_{\mathbb{P}^{1}}}(-m-1)\boxtimes{{\mathcal O}_{\mathbb{P}^{1}}}$.
Thus the exact triple $0\to{{\mathcal I}}_{S,{{\mathbb{P}^{3}}}}\to{{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}\to{{\mathcal I}}_{Y,S}\to0$ can be rewritten as $$\label{triple of ideals} 0\to{{\mathcal O}_{\mathbb{P}^{3}}}(-2)\to{{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}\to{{\mathcal O}_{\mathbb{P}^{1}}}(-m-1)\boxtimes{{\mathcal O}_{\mathbb{P}^{1}}}\to0.$$ Tensor multiplication of (\[triple of ideals\]) by ${{\mathcal O}_{\mathbb{P}^{3}}}(a+1)$, respectively, by ${{\mathcal O}_{\mathbb{P}^{3}}}(a-3)$ yields an exact triple $$\label{triple(a)} 0\to{{\mathcal O}_{\mathbb{P}^{3}}}(a-1)\to{{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(a+1)\to{{\mathcal O}_{\mathbb{P}^{1}}}(a-m)\boxtimes{{\mathcal O}_{\mathbb{P}^{1}}} (a+1)\to0.$$ $$\label{triple(-a)} 0\to{{\mathcal O}_{\mathbb{P}^{3}}}(a-5)\to{{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(a-3)\to{{\mathcal O}_{\mathbb{P}^{1}}}(a-4-m)\boxtimes {{\mathcal O}_{\mathbb{P}^{1}}}(a-3)\to0.$$ By the Künneth formula $h^1({{\mathcal O}_{\mathbb{P}^{1}}}(a-m)\boxtimes{{\mathcal O}_{\mathbb{P}^{1}}}(a+1))=0$ for $a\ge2$ and $m\le a+1$, and (\[triple(a)\]) implies that $$\label{vanish 4} h^1({{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(a+1))=0.$$ Now consider an extension of ${{\mathcal O}_{\mathbb{P}^{3}}}$-sheaves of the form $$\label{trC} 0\to{{\mathcal O}_{\mathbb{P}^{3}}}(-1)\to{{\mathcal E}}_1\to{{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(1)\to0.$$ Such extensions are classified by the vector space $V=\mathrm{Ext}^1({{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(1),{{\mathcal O}_{\mathbb{P}^{3}}}(-1))$, and it is known that, for a general point $\xi\in V$ the extension sheaf ${{\mathcal E}}_1$ in (\[trC\]) is a locally free instanton sheaf from $I_m$(see, e. g., [@NT]) called a [*’t Hooft instanton*]{}. Now tensoring the triple with ${{\mathcal O}_{\mathbb{P}^{3}}}(a)$ and passing to cohomology, in view of we obtain (\[vanish 1\]) for $i=1$. 
To prove , assume that $m\le a-4$; then similar to we obtain using : $$\label{vanish 5} h^1({{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(a-3))=0,\ \ \ m\le a-4.$$ Tensoring with ${{\mathcal O}_{\mathbb{P}^{3}}}(a-4)$ we obtain the triple $$\label{trC new} 0\to{{\mathcal O}_{\mathbb{P}^{3}}}(a-5)\to{{\mathcal E}}_1(a-4)\xrightarrow{\varepsilon}{{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(a-3)\to0.$$ From and it follows that $$\label{vanish 6} h^1({{\mathcal E}}_1(a-4))=0,\ \ \ m\le a-4.$$ This together with Serre duality for ${{\mathcal E}}_1$ yields for $i=1$. To prove , consider a smooth quadric surface $S'\subset{{\mathbb{P}^{3}}}$, together with an isomorphism $S'\simeq{{\mathbb{P}^{1}}}\times{{\mathbb{P}^{1}}}$, and let $Z=\overset{d}{\underset{i=1}{\sqcup}}\tilde{l}_i$ be a union of $d$ distinct projective lines $\tilde{l}_i$ in ${{\mathbb{P}^{3}}}$, belonging to one of the two rulings on $S'$, where $1\le d\le 5$. Considering $Z$ as a reduced scheme, we have ${{\mathcal I}}_{Z,S'}\simeq{{\mathcal O}_{\mathbb{P}^{1}}}(-d)\boxtimes{{\mathcal O}_{\mathbb{P}^{1}}}$. Without loss of generality we may assume that $Z\cap Y= \emptyset$ and that $Z$ intersects the quadric surface $S$ treated above in $2d$ distinct points $x_1,...,x_{2d}$ such that the points $\mathrm{pr}_2(x_i),\ i=1,...,2d,$ are also distinct, where $\mathrm{pr}_2: S\simeq{{\mathbb{P}^{1}}}\times{{\mathbb{P}^{1}}}\to{{\mathbb{P}^{1}}}$ is the projection onto the second factor. 
Tensoring the exact triple with ${{\mathcal O}_{\mathbb{P}^{3}}}(a-3)$ and restricting it onto $Z$ we obtain a commutative diagram of exact triples $$\label{diagr 1} \xymatrix{ & 0 & 0 & 0 &\\ 0\ar[r] &{{\mathcal O}}_Z(a-5)\ar[r]\ar[u] & {{\mathcal O}}_Z(a-3)\ar[r]\ar[u] & \underset{1}{\overset{2d}{\oplus}}~\mathbf{k}_{x_j}\ar[u]\ar[r] & 0\\ 0\ar[r]& {{\mathcal O}_{\mathbb{P}^{3}}}(a-5)\ar[r]\ar^-f[u] & {{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(a-3)\ar[r] \ar_-{g}[u] &{{\mathcal O}_{\mathbb{P}^{1}}}(a-m-4)\boxtimes{{\mathcal O}_{\mathbb{P}^{1}}}(a-3) \ar^-h[u]\ar[r] & 0,}$$ where $f,\ g$ and $h$ are the restriction maps. The sheaf $\ker f={{\mathcal I}}_{Z,{{\mathbb{P}^{3}}}}(a-5)$ similar to satisfies the exact triple $$\label{triple(b)} 0\to{{\mathcal O}_{\mathbb{P}^{3}}}(a-5)\to\ker f\to{{\mathcal O}_{\mathbb{P}^{1}}}(a-d-5)\boxtimes{{\mathcal O}_{\mathbb{P}^{1}}}(a-5)\to0.$$ Passing to cohomology of this triple we obtain in view of the conditions $1\le d\le 5$ and $a\ge12$ that $h^1(\ker f)=0$, i. e. $$h^0(f):\ H^0({{\mathcal O}_{\mathbb{P}^{3}}}(a-5))\to H^0({{\mathcal O}}_Z(a-5))$$ is an epimorphism. On the other hand, we have: i) $a-m-4\ge0$, ii) $a-3\ge2d-1$, since $a\ge12$ and $d\le5$, and iii) the points $\mathrm{pr}_2(x_i),\ i=1,...,2d,$ are distinct. Therefore, $$h^0(h):\ H^0({{\mathcal O}_{\mathbb{P}^{1}}}(a-m-4)\boxtimes{{\mathcal O}_{\mathbb{P}^{1}}}(a-3))\to H^0(\overset{2d}{\underset{1}{\oplus}}~\mathbf{k}_{x_j})$$ is also an epimorphism. Whence by the diagram we obtain an epimorphism $$\label{h0(g) epi} h^0(g):\ H^0({{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(a-3))\twoheadrightarrow H^0({{\mathcal O}}_Z(a-3)).$$ Now consider the $g\circ\varepsilon:\ {{\mathcal E}}_1(a-4)\twoheadrightarrow{{\mathcal O}}_Z(a-3)$, where $\varepsilon$ is the epimorphism in the triple and set $E:=\ker(g\circ\varepsilon)\otimes{{\mathcal O}_{\mathbb{P}^{3}}}(4-a)$. 
Thus, since ${{\mathcal O}}_Z=\overset{d}{\oplus}{{\mathcal O}}_{\tilde{l}_i}$, we have an exact triple: $$\label{triple with l's} 0\to E(a-4)\to{{\mathcal E}}_1(a-4)\xrightarrow{g\circ\varepsilon} \overset{d}{\underset{1}{\oplus}}{{\mathcal O}}_{\tilde{l}_i}(a-3)\to0.$$ From the triple it follows that $h^0(\varepsilon):H^0({{\mathcal E}}_1(a-4))\to H^0({{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(a-3))$ is an epimorphism, hence by $h^0(g\circ\varepsilon):H^0({{\mathcal E}}_1(a-4)) \to H^0(\overset{d}{\oplus}{{\mathcal O}}_{\tilde{l}_i}(a-3))$ is also an epimorphism. This together with (\[triple with l's\]) and yields that $$\label{vanish 7} h^1(E(a-4))=0.$$ Note that from it follows also that $$\label{c2(E)} c_2(E)=c_2({{\mathcal E}}_1)+d=m+d\le a+1,$$ since $d\le5$ and $m\le a-4$. Let us now show that $$\label{E in closure} [E]\in\bar{I}_{m+d},$$ where $\bar{I}_{m+d}$ is the closure of $I_{m+d}$ in the Gieseker-Maruyama moduli scheme $M(0,m+d,0)$ of semistable rank 2 coherent sheaves with Chern classes $c_1=c_3=0$ and $c_2=m+d$. (Recall that $M(0,m+d,0)$ is a projective scheme containing ${{\mathcal B}}(0,m+d)$ as an open subscheme - see e. g., [@H-vb], [@HL].) It is enough to treat the case $d=2$, since the argument for any $d\le5$ is completely similar. Consider the triple and denote by $E'_0$ the kernel of the composition $${{\mathcal E}}_1\xrightarrow{g\circ\varepsilon} {{\mathcal O}}_{\tilde{l}_1}(1)\oplus{{\mathcal O}}_{\tilde{l}_2}(1) \xrightarrow{\mathrm{pr_1}}{{\mathcal O}}_{\tilde{l}_1}(1).$$ We then obtain an exact triple $$\label{tr E E'0} 0\to E\to E'_0\xrightarrow{\varepsilon'}{{\mathcal O}}_{\tilde{l}_2}(1) \to0.$$ Now we invoke one of the main results of the paper [@JMT] according to which the sheaf $E'_0$ lies in the closure $\bar{I}_{m+1}$ of $I_{m+1}$ in the Gieseker-Maruyama moduli scheme $M(0,m+1,0)$. 
This implies that there exist a pointed curve $(C,0)$ and a coherent ${{\mathcal O}}_{{{\mathbb{P}^{3}}}\times C}$-sheaf $\mathbb{E'}$, flat over $C$, such that the sheaf $E'_t:=\mathbb{E'}|_{{{\mathbb{P}^{3}}}\times\{t\}}$ is an instanton bundle from $I_{m+1}$ for $t\ne0$ and coincides with $E'_0$ for $t=0$. Now, without loss of generality, after possibly shrinking the curve $C$, one can extend the epimorphism $\varepsilon'$ in to an epimorphism $$\mathbf{e}:\mathbb{E'}\twoheadrightarrow{{\mathcal O}}_{\tilde{l}_2}(1) \boxtimes{{\mathcal O}}_C$$ such that $\mathbf{e}\otimes\mathbf{k}(0)=\varepsilon'$. Set $\mathbb{E}=\ker\mathbf{e}$ and denote $E_t=\mathbb{E}|_{{{\mathbb{P}^{3}}}\times\{t\}}$, $t\in C$. Since for $t\ne0$ the sheaf $E'_t$ is an instanton bundle from $I_{m+1}$ and fits in an exact triple $0\to E_t\to E'_t\to{{\mathcal O}}_{\tilde{l}_2}(1)\to0$, the above mentioned result from [@JMT] yields that $[E_t]\in\bar{I}_{m+2}$ for $t\ne0$. Hence, since $\bar{I}_{m+2}$ is projective, it follows that $[E_0]\in\bar{I}_{m+2}$. Now by construction $E_0\simeq E$. Thus, $[E]\in\bar{I}_{m+2}$, i. e. we obtain the desired result for $d=2$. Formula now follows from for a general ${{\mathcal E}}$ by Semicontinuity and Serre duality. To prove the vanishing , consider the triple twisted by ${{\mathcal E}}_2$: $$\label{trC times E2} 0\to{{\mathcal E}}_2(-1)\to{{\mathcal E}}_1\otimes{{\mathcal E}}_2 \to{{\mathcal E}}_2\otimes{{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(1)\to0,$$ and the exact triple $0\to{{\mathcal E}}_2\otimes{{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(1)\to{{\mathcal E}}_2(1)\to \oplus_{i=1}^{m+1}({{\mathcal E}}_2|_{l_i})\to0.$ Since ${{\mathcal E}}_2$ is an instanton bundle, it follows that $$\label{vanish inst} h^2({{\mathcal E}}_2(1))=0,\ \ \ h^2({{\mathcal E}}_2(-1))=0.$$ On the other hand, without loss of generality, by the Grauert-Mülich Theorem [@OSS Ch. 
2] we may assume that ${{\mathcal E}}_2|_{l_i}\simeq{{\mathcal O}_{\mathbb{P}^{1}}}\oplus{{\mathcal O}_{\mathbb{P}^{1}}}$. This together with the last exact triple and the first equality yields $h^2({{\mathcal E}}_2\otimes{{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(1))=0$. Therefore, in view of and the second equality we obtain the equality for $j=1$. Last, this equality for $j=0,3$ follows from and the stability of ${{\mathcal E}}_1$ and ${{\mathcal E}}_2$. \[rem A\] Note that, under the conditions of Proposition \[Prop 1\], the equalities (\[vanish 3\]) together with Riemann-Roch yield $$\label{h1 12} h^1({{\mathcal E}}_1\otimes{{\mathcal E}}_2)=8m+4\varepsilon-4.$$ Construction of stable rank two bundles with even determinant {#section 3} ============================================================= We first recall the notion of symplectic instanton. By a *symplectic structure* on a vector bundle $E$ on a scheme $X$ we mean an anti-self-dual isomorphism $\theta: E \xrightarrow{\sim}E^{\vee},\ \theta^{\vee}=-\theta$, considered modulo proportionality. Clearly, a symplectic vector bundle $E$ has even rank: $$\label{rk E} \operatorname{{rk}}E=2r,\ \ \ r\ge1,$$ and, if $X={{\mathbb{P}^{3}}}$, vanishing odd Chern classes: $$\label{odd c_i} c_1(E)=c_3(E)=0.$$ Following [@AJTT], we call a symplectic vector bundle $E$ on ${{\mathbb{P}^{3}}}$ a *symplectic instanton* if $$\label{def of sympl inst} h^0(E(-1))=h^1(E(-2))=h^2(E(-2))=h^3(E(-3))=0,$$ and $$\label{c_2=n} c_2(E)=n,\ \ \ n\ge1.$$ Consider the instanton bundles ${{\mathcal E}}_1$ and ${{\mathcal E}}_2$ introduced in Section \[section 2\] (see Proposition \[Prop 1\]). 
Since $\det{{\mathcal E}}_1\simeq\det{{\mathcal E}}_2\simeq{{\mathcal O}_{\mathbb{P}^{3}}}$, there are symplectic structures $\theta_i:\ {{\mathcal E}}_i\xrightarrow{\simeq}{{\mathcal E}}_i^{\vee},\ i=1,2,$ which yield a symplectic structure on the direct sum $\mathbb{E}={{\mathcal E}}_1\oplus{{\mathcal E}}_2$: $$\label{sympl str} \theta=\theta_1\oplus\theta_2:\ \mathbb{E}={{\mathcal E}}_1\oplus{{\mathcal E}}_2 \xrightarrow{\simeq}{{\mathcal E}}_1^{\vee}\oplus{{\mathcal E}}_2^{\vee}= \mathbb{E}^{\vee}.$$ Clearly, $\mathbb{E}$ is a symplectic instanton. Now assume that ${{\mathcal E}}_1$ and ${{\mathcal E}}_2$ are chosen in such a way that there exist sections $$\label{empty intersectn} s_i:\ {{\mathcal O}_{\mathbb{P}^{3}}}\to{{\mathcal E}}_i(a), \ \ \ such\ that\ \ \ \dim(s_i)_0=1,\ \ i=1,2,\ \ \ (s_1)_0\cap(s_2)_0=\emptyset.$$ (Such ${{\mathcal E}}_1\in I_m,\ [{{\mathcal E}}_2]\in I_{m+\varepsilon}$ always exist; for instance, two general ’t Hooft instantons ${{\mathcal E}}_1$ and ${{\mathcal E}}_2$ satisfy the property (\[empty intersectn\]) already for $a=1$, hence also for $a\ge2$.) The condition (\[empty intersectn\]) implies that the section $$\label{subbundle} s=(s_1,s_2):\ {{\mathcal O}_{\mathbb{P}^{3}}}(-a)\to\mathbb{E}$$ is a subbundle morphism, hence its transpose $$\label{ta} {}^ts:=s^{\vee}\circ\theta:\ \mathbb{E}\to{{\mathcal O}_{\mathbb{P}^{3}}}(a)$$ is a surjection. As $\theta$ in (\[sympl str\]) is symplectic, the composition ${}^ts\circ s:{{\mathcal O}_{\mathbb{P}^{3}}}(-a)\to{{\mathcal O}_{\mathbb{P}^{3}}}(a)$ is also symplectic. Since ${{\mathcal O}_{\mathbb{P}^{3}}}(\pm a)$ are line bundles, it follows that ${}^ts\circ s=0$. Therefore the complex $$\label{monad} K^{\cdot}:\ \ \ 0\to{{\mathcal O}_{\mathbb{P}^{3}}}(-a)\xrightarrow{s}\mathbb{E}\xrightarrow{{}^ts} {{\mathcal O}_{\mathbb{P}^{3}}}(a)\to0$$ is a monad and its cohomology sheaf $$\label{coho} E=\frac{\ker({}^ts)}{\mathrm{im}(s)}$$ is locally free. 
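For the reader's convenience we record the computation of the invariants of $E$ (a standard verification via the additivity of the Chern character along the monad ): since $c_1({{\mathcal E}}_i)=c_3({{\mathcal E}}_i)=0$, we have $\mathrm{ch}({{\mathcal E}}_1)+\mathrm{ch}({{\mathcal E}}_2)=4-(2m+\varepsilon)h^2$, where $h$ is the class of a plane in ${{\mathbb{P}^{3}}}$, so that $$\mathrm{ch}(E)=\mathrm{ch}(\mathbb{E})-\mathrm{ch}({{\mathcal O}_{\mathbb{P}^{3}}}(-a))-\mathrm{ch}({{\mathcal O}_{\mathbb{P}^{3}}}(a))= \bigl(4-(2m+\varepsilon)h^2\bigr)-\bigl(2+a^2h^2\bigr)=2-(2m+\varepsilon+a^2)h^2.$$ In particular, $c_1(E)=c_3(E)=0$ and $c_2(E)=2m+\varepsilon+a^2$.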
Note that, since the instanton bundles ${{\mathcal E}}_1$ and ${{\mathcal E}}_2$ are stable, they have zero spaces of global sections, hence also $h^0(\mathbb{E})=0$, and and yield $h^0(E)=0$, i. e. $E$ as a rank 2 vector bundle with $c_1=0$ is stable. Besides, since $c_2(\mathbb{E})=c_2({{\mathcal E}}_1)+c_2({{\mathcal E}}_2)=2m+\varepsilon$, it follows from that $c_2(E)=2m+\varepsilon +a^2$. Thus, $$\label{E in B} [E]\in{{\mathcal B}}(0,2m+\varepsilon +a^2),$$ and the deformation theory yields that, for any irreducible component ${{\mathcal M}}$ of ${{\mathcal B}}(0,2m+\varepsilon +a^2)$, $$\label{def theory ineq} \dim{{\mathcal M}}\ge1-\chi(\mathcal{E}nd~E) =8(2m+\varepsilon +a^2)-3.$$ Note that, since ${{\mathcal E}}_1$ and ${{\mathcal E}}_2$ are instanton bundles, for $a\ge2$ one has $h^1({{\mathcal E}}_i(-a))= h^j({{\mathcal E}}_i(a))=0,\ i=1,2,\ j=2,3$, hence by $$\label{h1bbE(-a)=0} h^1(\mathbb{E}(-a))=0,\ \ \ a\ge2,$$ $$\label{h2,3bbE(a)=0} h^j(\mathbb{E}(a))=0,\ \ \ j=2,3,\ \ \ a\ge2.$$ Similarly, in view of , $$\label{h1E(a)=0} h^1(\mathbb{E}(a))=0,\ \ \ m+\varepsilon\le a+1,\ \ \ a\ge2.$$ This together with and Riemann-Roch yields: $$\label{h0bbE(a)=} h^0(\mathbb{E}(a))=\chi(\mathbb{E}(a))= 4\binom{a+3}{3}-(2m+\varepsilon)(a+2),\ \ \ m+\varepsilon\le a+1,\ \ \ a\ge2.$$ Next, $$\label{decomp EndE} \mathcal{E}nd~\mathbb{E}\simeq\mathbb{E}\otimes \mathbb{E}\simeq S^2\mathbb{E}\oplus\wedge^2\mathbb{E},$$ and it follows from (\[sympl str\]) that $$\label{decomp S^2E} S^2\mathbb{E}\simeq S^2\mathcal{E}_1\oplus(\mathcal{E}_1\otimes\mathcal{E}_2)\oplus S^2\mathcal{E}_2,\ \ \ \wedge^2\mathbb{E}\simeq \wedge^2\mathcal{E}_1\oplus(\mathcal{E}_1\otimes\mathcal{E}_2)\oplus \wedge^2\mathcal{E}_2.$$ Now, since $\mathcal{E}nd~\mathcal{E}_i\simeq{{\mathcal E}}_i\otimes {{\mathcal E}}_i\simeq S^2{{\mathcal E}}_i\oplus \wedge^2{{\mathcal E}}_i,\ \wedge^2{{\mathcal E}}_i\simeq{{\mathcal O}_{\mathbb{P}^{3}}},\ i=1,2,$ it follows from [@JV] that 
$h^1(\mathcal{E}nd~\mathcal{E}_1)\simeq h^1(S^2{{\mathcal E}}_1)=8m-3,\ h^1(\mathcal{E}nd~\mathcal{E}_2)\simeq h^1(S^2{{\mathcal E}}_2)=8m+8\varepsilon-3,$ and $h^j(\mathcal{E}nd~\mathcal{E}_i)=h^j(S^2{{\mathcal E}}_i)=0,\ i=1,2,\ j\ge2$. This together with -(\[decomp S\^2E\]), (\[vanish 1\]) and (\[h1 12\]) implies that $$\label{h1 S2bbE} h^1(\mathcal{E}nd~\mathbb{E})=32m+16\varepsilon-14,\ \ \ h^1(S^2\mathbb{E})=24m+12\varepsilon-10,$$ $$\label{hi S2bbE} h^i(\mathcal{E}nd~\mathbb{E})=h^i(S^2\mathbb{E})=0,\ \ \ i\ge2.$$ Now assume that $$\label{basic cond} either\ \ \ 5\le a\le 12,\ 1+\varepsilon\le m+\varepsilon\le a-4,\ \ \ or\ \ \ a\ge12,\ 1+\varepsilon\le m+\varepsilon\le a+1.$$ It follows from (\[vanish 2\]), (\[vanish 2a\]) and that $$\label{h2E(-a)=0} h^2(\mathbb{E}(-a))=0.$$ Consider the total complex $T^{\cdot}$ of the double complex $K^{\cdot}\otimes K^{\cdot}$, where $K^{\cdot}$ is the monad (\[monad\]): $$\label{total T} \begin{split} & T^{\cdot}:\ \ \ 0\to{{\mathcal O}_{\mathbb{P}^{3}}}(-2a)\xrightarrow{d_{-2}} 2\mathbb{E}(-a)\xrightarrow{d_{-1}}\mathbb{E}\otimes \mathbb{E}\oplus2{{\mathcal O}_{\mathbb{P}^{3}}}\xrightarrow{d_0} 2\mathbb{E}(a)\xrightarrow{d_1}{{\mathcal O}_{\mathbb{P}^{3}}}(2a)\to0,\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ E\otimes E=\frac{\ker(d_0)} {\mathrm{im}(d_{-1})}. \end{split}$$ Following Le Potier [@LP], consider the symmetric part $ST^{\cdot}$ of the complex $T^{\cdot}$: $$\label{sym of monad} ST^{\cdot}:\ \ \ 0\to\mathbb{E}(-a)\xrightarrow{\alpha}S^2\mathbb{E}\oplus {{\mathcal O}_{\mathbb{P}^{3}}}\xrightarrow{{}^t\alpha}\mathbb{E}(a)\to0,\ \ \ \ \ \ \ S^2E=\frac{\ker({}^t\alpha)}{\mathrm{im}(\alpha)},$$ where $\alpha$ is the induced subbundle map. 
The inclusion of complexes $ST^{\cdot}\hookrightarrow T^{\cdot}$ induces commutative diagrams $$\label{diagr D1} \xymatrix{ 0\ar[r] & \mathbb{E}(-a)\ar[r]\ar@{_{(}->}[d] & \ker({}^t\alpha)\ar[r]\ar@{_{(}->}[d] & S^2E \ar[r]\ar@{_{(}->}[d] & 0\\ 0\ar[r] & \mathrm{im}(d_{-1})\ar[r]& \ker(d_0) \ar[r] & E\otimes E\ar[r] & 0,}$$ $$\label{diagr D2} \xymatrix{ 0\ar[r] & \ker({}^t\alpha)\ar[r]\ar@{_{(}->}[d] & S^2\mathbb{E}\oplus{{\mathcal O}_{\mathbb{P}^{3}}}\ar[r]\ar@{_{(}->}[d] & \mathbb{E}(a) \ar[r]\ar@{_{(}->}[d] & 0\\ 0\ar[r] & \ker(d_0)\ar[r]& \mathbb{E}\otimes\mathbb{E}\oplus 2{{\mathcal O}_{\mathbb{P}^{3}}} \ar[r] & \mathrm{im}(d_0)\ar[r] & 0,}$$ and an exact triple $$\label{exact ker} 0\to{{\mathcal O}_{\mathbb{P}^{3}}}(-2a)\xrightarrow{d_{-2}}2\mathbb{E}(-a) \to\mathrm{im}(d_{-1})\to0.$$ Passing to cohomology in - and using , , and the equality $h^0(S^2\mathbb{E})=0$, we obtain an equality $h^0(\mathrm{coker}\alpha)=1$ and an exact sequence $$\label{seq S2E} 0\to H^0(\mathbb{E}(a))/\mathbb{C}\to H^1(S^2E)\xrightarrow{\mu}H^1(S^2\mathbb{E})\to0,$$ which fits in a commutative diagram $$\label{diagr D3} \xymatrix{ 0\ar[r] & H^0(\mathbb{E}(a))/\mathbb{C}\ar[r] & H^1(S^2E)\ar[r]^-\mu\ar@{=}[d] & H^1(S^2\mathbb{E}) \ar[r]\ar@{_{(}->}[d] & 0\\ & & H^1(E\otimes E) \ar[r] & H^1(\mathbb{E}\otimes \mathbb{E}). &}$$ From , and it follows that $$\label{h1 S2E} h^1(S^2E)=h^0(\mathbb{E}(a))+24m+12\varepsilon-11= 4\binom{a+3}{3}+(2m+\varepsilon)(10-a)-11.$$ Note that, since $E$ is a stable rank-2 bundle, $H^1({{\mathcal E}}nd~E)=H^1(S^2E)$ is isomorphic to the Zariski tangent space $T_{[E]}{{\mathcal B}}(0,2m+\varepsilon +a^2)$: $$\label{Kod-Sp1} \theta_{E}:\ T_{[E]}{{\mathcal B}}(0,2m+\varepsilon +a^2){\stackrel{\sim}{\to}}H^1({{\mathcal E}}nd~E)=H^1(S^2E).$$ (Here $\theta_{E}$ is the Kodaira-Spencer isomorphism.) 
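Explicitly, the second equality in the formula for $h^1(S^2E)$ above is obtained by substituting the value of $h^0(\mathbb{E}(a))$ computed earlier: $$h^1(S^2E)=\Bigl(4\binom{a+3}{3}-(2m+\varepsilon)(a+2)\Bigr)+12(2m+\varepsilon)-11= 4\binom{a+3}{3}+(2m+\varepsilon)(10-a)-11.$$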
Thus, we can rewrite as $$\label{dim Zar tang sp} \dim T_{[E]}{{\mathcal B}}(0,2m+\varepsilon +a^2)= 4\binom{a+3}{3}+(2m+\varepsilon)(10-a)-11.$$ We will now prove the following main result of this section. \[Thm A\] Under the condition , there exists an irreducible family ${{\mathcal M}}_n(E)\subset{{\mathcal B}}(0,n)$, where $n=2m+\varepsilon +a^2$, of dimension given by the right hand side of and containing the above constructed point $[E]$. Hence the closure ${{\mathcal M}}_n$ of ${{\mathcal M}}_n(E)$ in ${{\mathcal B}}(0,n)$ is an irreducible component of ${{\mathcal B}}(0,n)$. The set $\Sigma_0$ of these components ${{\mathcal M}}_n$ is an infinite series distinct from the series of instanton components $\{I_n\}_{n\ge1}$ and from the series of components described in [@Ein] and [@AJTT]. Furthermore, at least for each $n\ge146$ there exists an irreducible component ${{\mathcal M}}_n$ of ${{\mathcal B}}(0,n)$ belonging to the series $\Sigma_0$. According to J. Bingener [@BH Appendix], the equality $h^2({{\mathcal E}}nd\mathbb{E})=0$ (see ) implies that there exists (over $\mathbf{k}=\mathbb{C}$) a versal deformation of the bundle $\mathbb{E}$, i. e. a smooth variety $B$ of dimension $\dim B=h^1({{\mathcal E}}nd\mathbb{E})$, with a marked point $0\in B$, and a locally free sheaf $\boldsymbol{{{\mathcal E}}}$ on ${{\mathbb{P}^{3}}}\times B$ such that $\boldsymbol{{{\mathcal E}}}|_{{{\mathbb{P}^{3}}}\times\{0\}}\simeq\mathbb{E}$ and the Kodaira-Spencer map $\theta:\ T_{[\mathbb{E}]}B\to H^1({{\mathcal E}}nd\mathbb{E})$ is an isomorphism. 
For $b\in B$ denote $E_b:=\boldsymbol{{{\mathcal E}}}|_{{{\mathbb{P}^{3}}}\times\{b\}}$ and consider in $B$ the subset $$U=\{b\in B\ |\ E_b\ is\ a\ symplectic\ instanton\}.$$ By definition, $U=\tilde{U}\cap B^*$, where $\tilde{U}=\{b\in B\ |\ E_b\ is\ a\ symplectic\ bundle\}$ is a closed subset of $B$ and $$\begin{split} &B^*=\{b\in B\ |\ E_b \ satisfies\ the\ vanishing\ conditions\ \eqref{def of sympl inst}\ and\ the\ condition\ \\ & h^0(E_b)=h^i(E_b(-a))=h^j(E_b(a))=h^k(S^2E_b)=0,\ i=1,2,\ j\ge1,\ k=0,2,3\} \end{split}$$ is an open subset of $B$ by Semicontinuity. (Here $a$ is taken from ). Since $\mathbb{E}$ is symplectic, so that ${{\mathcal E}}nd\mathbb{E}\simeq S^2\mathbb{E}\oplus\wedge^2\mathbb{E}$, it follows from [@R] that the Kodaira-Spencer map $\theta$ yields an isomorphism $\theta:T_{[\mathbb{E}]}U=T_{[\mathbb{E}]}\tilde{U}{\stackrel{\sim}{\to}}H^1(S^2\mathbb{E})$. Thus, $U$ is a smooth variety of dimension $$\dim U=h^1(S^2\mathbb{E})=24m+12\varepsilon-10.$$ (We use Riemann-Roch and the vanishing of $h^i(S^2\mathbb{E}),\ i\ne1$.) Let $p:{{\mathbb{P}^{3}}}\times B\to B$ be the projection. By the Base-Change theorem and the vanishing conditions defining $B^*$, respectively $U$, the sheaf ${{\mathcal A}}:=p_*(\boldsymbol{{{\mathcal E}}}\otimes{{\mathcal O}_{\mathbb{P}^{3}}}(a)\boxtimes{{\mathcal O}}_U)$ is a locally free sheaf of rank $\chi(\mathbb{E}(a))=h^0(\mathbb{E}(a))$ given by . 
Hence $\pi:\ \tilde{X}=\mathbf{Proj} (S^{\cdot}_{{{\mathcal O}}_U}{{\mathcal A}}^{\vee})\to U$ is a projective bundle with the Grothendieck sheaf ${{\mathcal O}}_{\tilde{X}/U}(1)$ and a morphism $\mathbf{s}:\ {{\mathcal O}_{\mathbb{P}^{3}}}\boxtimes{{\mathcal O}}_{\tilde{X}/U}(-1)\to \tilde{\pi}^*(\boldsymbol{{{\mathcal E}}}\otimes{{\mathcal O}_{\mathbb{P}^{3}}}(a) \boxtimes{{\mathcal O}}_U)$ defined as the composition of canonical evaluation morphisms ${{\mathcal O}_{\mathbb{P}^{3}}}\boxtimes{{\mathcal O}}_{\tilde{X}/U}(-1)\to\tilde{p}^*\pi^*{{\mathcal A}}\to\tilde{\pi}^*(\boldsymbol{{{\mathcal E}}}\otimes{{\mathcal O}_{\mathbb{P}^{3}}}(a)\boxtimes {{\mathcal O}}_U)$, where $\tilde{X}\xleftarrow{\tilde{p}}{{\mathbb{P}^{3}}}\times \tilde{X}\xrightarrow{\tilde{\pi}}{{\mathbb{P}^{3}}}\times U$ are the induced projections. Let $X=\{x\in\tilde{X}\ |\ \mathbf{s}^{\vee}|_{{{\mathbb{P}^{3}}}\times\{x\}}\ is\ surjective\}$. This is an open dense subset of the smooth irreducible variety $\tilde{X}$ since it contains the point $x_0=(s:\ {{\mathcal O}_{\mathbb{P}^{3}}}\to\mathbb{E}(a))$ given in . Hence $X$ is smooth and irreducible. In addition, since $\boldsymbol{{{\mathcal E}}}$ is a versal family of bundles, it follows that $X$ is an open subset of the Quot-scheme $\mathrm{Quot}_{{{\mathbb{P}^{3}}}\times B/B}(\boldsymbol{{{\mathcal E}}},P(n))$, where $P(n):=\chi({{\mathcal O}_{\mathbb{P}^{3}}}(a+n))$. Therefore, by [@HL Prop. 
2.2.7] in view of there is an exact triple $$\label{tangent seq} 0\to H^0(\mathbb{E}(a))/\mathbb{C}\to T_{x_0}X\xrightarrow{d\pi}T_{[\mathbb{E}]}B\to0,$$ which is obtained as the cohomology sequence $$\label{coho seq} 0\to H^0(\mathbb{E}(a))/\mathbb{C}\to H^1({{\mathcal H}}om(F,\mathbb{E}))\to H^1({{\mathcal E}}nd~\mathbb{E})\to0$$ of the exact triple $0\to{{\mathcal H}}om(F,\mathbb{E})\to{{\mathcal H}}om(\mathbb{E}, \mathbb{E})\to\mathbb{E}(a)\to0$ obtained by applying the functor ${{\mathcal H}}om(-,\mathbb{E})$ to the exact triple $0\to{{\mathcal O}_{\mathbb{P}^{3}}}(-a)\xrightarrow{s}\mathbb{E}\to F\to0$, where $F:=\mathrm{coker}(s)$. Next, since $\boldsymbol{{{\mathcal E}}}$ is a versal family of bundles, it follows that $\mathbf{E}=\boldsymbol{{{\mathcal E}}}|_{{{\mathbb{P}^{3}}}\times U}$ is a versal family of symplectic instantons. Hence, denoting $Y=U\times_BX$, we extend the exact triple to a commutative diagram $$\label{diagr 4} \xymatrix{ 0\ar[r] & H^0(\mathbb{E}(a))/\mathbb{C}\ar[r] & T_{x_0}X\ar[r] & T_{[\mathbb{E}]}B\ar[r] & 0\\ 0\ar[r] & H^0(\mathbb{E}(a))/\mathbb{C}\ar[r]\ar[r]\ar@{=}[u] & T_{x_0}Y \ar[r]^-{d\pi} \ar@{^{(}->}^-{i_Y}[u] & T_{[\mathbb{E}]}U \ar@{^{(}->}^-{i_U}[u]\ar[r] & 0,}$$ where $i_Y$ and $i_U$ are natural inclusions. (Note that, under the Kodaira-Spencer isomorphisms $\theta:T_{[\mathbb{E}]}U \xrightarrow{\simeq}H^1(S^2\mathbb{E})$ and $T_{[\mathbb{E}]}B \xrightarrow{\simeq}H^1(\mathbb{E}\otimes\mathbb{E})\simeq H^1({{\mathcal E}}nd~\mathbb{E})$ the rightmost inclusions in diagrams and coincide.) Consider the modular morphism $$\Phi:\ Y\to{{\mathcal B}}:={{\mathcal B}}(0,2m+\varepsilon +a^2),\ \ (b,s)\mapsto\bigg[\frac{\mathrm{Ker} ({}^ts)}{\mathrm{Im}(s)}\bigg],$$ where, as before, $s:{{\mathcal O}_{\mathbb{P}^{3}}}(-a)\to E_b$ is a subbundle morphism. 
Its differential $d\Phi$ composed with the Kodaira-Spencer map $\theta_E$ from is a linear map $$\phi=\theta_E\circ d\Phi:\ T_{x_0}Y\to H^1(S^2E)=H^1(E\otimes E).$$ Now from the functorial properties of the Kodaira-Spencer maps $\phi$ and $\theta$ it follows that the triple and the lower triple in the diagram fit in a commutative diagram $$\xymatrix{ 0\ar[r] & H^0(\mathbb{E}(a))/\mathbb{C}\ar[r]\ar[r] & H^1(S^2E) \ar[r]^-{\mu} & H^1(S^2\mathbb{E})\ar[r] & 0\\ 0\ar[r] & H^0(\mathbb{E}(a))/\mathbb{C}\ar[r]\ar[r]\ar@{=}[u] & T_{x_0}Y \ar[r]^-{d\pi}\ar^-{\phi}[u] & T_{[\mathbb{E}]}U \ar^-{\theta}_-{\simeq}[u]\ar[r] & 0.}$$ This implies that $\phi$ is an isomorphism, so that, since $Y$ is smooth at $x_0$ and irreducible, ${{\mathcal M}}_n(E)=\Phi(Y)$ is an open subset of an irreducible component ${{\mathcal M}}_n$ of ${{\mathcal B}}(0,n)$, of dimension given by . It is easy to check that the dimension $\dim{{\mathcal M}}_n$ given by , with $m,\varepsilon$ and $a$ subjected to the condition , satisfies the strict inequality $\dim{{\mathcal M}}_n>8n-3=\dim I_n$. This shows that the series $\Sigma_0$ is distinct from $\{I_n\}_{n\ge1}$. To distinguish $\Sigma_0$ from the series of components described in [@Ein], it is enough to see that the spectra of general bundles of these two series are different. (We leave to the reader a direct verification of this fact.) Note that, for each $a\ge12$ we have $1\le m$, $0\le\varepsilon\le1$ and $m+\varepsilon\le a+1$, so that $n=2m+\varepsilon +a^2$ ranges through all the integers in the interval $[a^2+2,(a+1)^2+1]$. Hence, $n$ takes all integer values $\ge12^2+2=146$. This shows that for each $n\ge146$ there exists an irreducible component ${{\mathcal M}}_n\in\Sigma_0$. Last, remark that, for the series of components ${{\mathcal M}}_n$ described in [@AJTT], $n$ takes values $n=1+k^2,\ k\in\{2\}\cup(4,\infty)$. Hence this series is distinct from $\Sigma_0$. Theorem is proved. 
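As a numerical illustration (the data here are chosen by way of example), take the smallest admissible value $a=12$ with $m=1$, $\varepsilon=0$; then $n=2+144=146$ and $$\dim{{\mathcal M}}_{146}=4\binom{15}{3}+2\cdot(10-12)-11=1820-4-11=1805>8\cdot146-3=1165=\dim I_{146},$$ in accordance with the inequality $\dim{{\mathcal M}}_n>8n-3$.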
Construction of stable rank two bundles with odd determinant {#section 4} ============================================================ In this section we will construct an infinite series of stable vector bundles from ${{\mathcal B}}(-1,2m)$, $m\in\mathbb{Z}_+$. It is known from [@H-vb Example 4.3.2] that, for each $m\ge1$ there exists an irreducible component ${{\mathcal B}}_0(-1,2m)$ of ${{\mathcal B}}(-1,2m)$, of the expected dimension $$\label{dim B0} \dim{{\mathcal B}}_0(-1,2m)=16m-5,$$ which contains bundles ${{\mathcal E}}_1$ obtained via the Serre construction as extensions of the form $$\label{Serre-1} 0\to{{\mathcal O}_{\mathbb{P}^{3}}}(-2)\to{{\mathcal E}}_1\to{{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(1)\to0,$$ where $Y$ is a union of $m+1$ disjoint conics in ${{\mathbb{P}^{3}}}$. Below we will need the following analogue of Proposition \[Prop 1\]. \[Prop 2\] Let $a,m\in\mathbb{Z}_+$, $a\ge2$, and let $\varepsilon\in\{0,1\}$. A general pair $$\label{2 inst a} ([{{\mathcal E}}_1],[{{\mathcal E}}_2])\in {{\mathcal B}}_0(-1,2m)\times {{\mathcal B}}_0(-1,2(m+\varepsilon)),$$ of vector bundles satisfies the following conditions: $$[{{\mathcal E}}_1]\ne[{{\mathcal E}}_2];$$ for $i=1,\ a\ge2m+4,$ respectively, for $i=2,\ a\ge2(m+\varepsilon)+4$, $$\label{vanish 1a} h^1({{\mathcal E}}_i(a))=0,$$ $$\label{vanish 2a a} h^2({{\mathcal E}}_i(-a))=0;$$ $$\label{vanish 2b} h^1({{\mathcal E}}_i(-a))=0;$$ $$\label{vanish 3a} h^j({{\mathcal E}}_1(1)\otimes{{\mathcal E}}_2)=0,\ \ \ \ j\ne1.$$ Let $Y=\sqcup_1^{m+1}C_i$ be a disjoint union of conics $C_i=l_i\cup l'_i$ decomposable into pairs of distinct lines $l_i,l'_i$, such that\ (i) there exist two smooth quadrics $S\simeq{{\mathbb{P}^{1}}}\times{{\mathbb{P}^{1}}}$ and $S'\simeq{{\mathbb{P}^{1}}}\times{{\mathbb{P}^{1}}}$ with the property that $l_1,...,l_{m+1}$, respectively, $l'_1,...,l'_{m+1}$ are the lines of one ruling on $S$, respectively, on $S'$; for instance, denoting $Y_0=l_1\sqcup...\sqcup l_{m+1}, \ 
Y'=l'_1\sqcup...\sqcup l'_{m+1},$ we may assume that $${{\mathcal O}}_S(Y_0)\simeq{{\mathcal O}_{\mathbb{P}^{1}}}(m+1)\boxtimes{{\mathcal O}_{\mathbb{P}^{1}}},\ \ \ {{\mathcal O}}_{S'}(Y')\simeq{{\mathcal O}_{\mathbb{P}^{1}}}(m+1)\boxtimes{{\mathcal O}_{\mathbb{P}^{1}}};$$ (ii) the set of $m+1$ distinct points $Z=(Y'\cap S)\setminus(Y_0\cap Y')$ satisfies the condition that $pr_1(Z)$ is a union of $m+1$ distinct points, where $pr_1:S'\to{{\mathbb{P}^{1}}}$ is the projection. We then have a diagram similar to : $$\label{diagr 6} \xymatrix{ & 0 & 0 & 0 &\\ 0\ar[r] &{{\mathcal O}}_{Y'}(a-4)\ar[r]\ar[u] & {{\mathcal O}}_{Y'}(a-3)\ar[r]\ar[u] & {{\mathcal O}}_{Z}(a-3) \ar[u]\ar[r] & 0\\ 0\ar[r]& {{\mathcal O}_{\mathbb{P}^{3}}}(a-4)\ar[r]\ar^-f[u] & {{\mathcal I}}_{Y_0,{{\mathbb{P}^{3}}}}(a-2)\ar[r] \ar_-{g}[u] &{{\mathcal O}_{\mathbb{P}^{1}}}(a-m-3)\boxtimes{{\mathcal O}_{\mathbb{P}^{1}}}(a-2) \ar^-h[u]\ar[r] & 0.}$$ Under the assumptions $a\ge2m+4$ and $m\ge2$ the cohomology of the lower triple of this diagram yields $$\label{h1(IY0)} h^1({{\mathcal I}}_{Y_0,{{\mathbb{P}^{3}}}}(a-2))=0.$$ Next, similar to we have an exact triple $0\to{{\mathcal O}_{\mathbb{P}^{3}}}(a-6)\to{{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(a-4)\to{{\mathcal O}_{\mathbb{P}^{1}}}(a-5-m)\boxtimes{{\mathcal O}_{\mathbb{P}^{1}}} (a-4)\to0$ which implies that $h^1({{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(a-4))=0$ since $a-5-m\ge0$ for $a\ge2m+4$ and $m\ge1$. Since ${{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(a-4)=\ker f$, it follows that $$\label{h0(f) is surj} h^0(f):\ H^0({{\mathcal O}_{\mathbb{P}^{3}}}(a-4))\to H^0({{\mathcal O}}_{Y'}(a-4))\ \ \ is\ surjective.$$ On the other hand, since $a-3-m\ge m+1=h^0(Z)$, from the above condition (ii) on $Z$ it follows that $h^0(h):\ H^0({{\mathcal O}_{\mathbb{P}^{1}}}(a-m-3)\boxtimes{{\mathcal O}_{\mathbb{P}^{1}}}(a-2))\to H^0({{\mathcal O}}_{Z}(a-3))$ is surjective. 
This together with and diagram yields that $h^0(g):\ H^0({{\mathcal I}}_{Y_0,{{\mathbb{P}^{3}}}}(a-2))\to H^0({{\mathcal O}}_{Y'}(a-3))$ is surjective. Since $\ker g\simeq{{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(a-2)$, it follows by that $$\label{h1(IY)} h^1({{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(a-2))=0.$$ Now, twisting the triple by ${{\mathcal O}_{\mathbb{P}^{3}}}(a-3)$ and using we obtain $h^1({{\mathcal E}}_1(a-3))=0$, hence by Serre duality $h^2({{\mathcal E}}_1(-a))=0$. Besides, $h^1({{\mathcal E}}_1(a-3))=0$ clearly implies $h^1({{\mathcal E}}_1(a))=0$ since $a\ge2m+4.$ Now, by Semicontinuity, this yields and for a general $[{{\mathcal E}}_1]\in{{\mathcal B}}_0(-1,2m)$. The same equalities are clearly true for $i=2$. Next, since $a\ge2$, it follows that $h^0({{\mathcal O}}_{C_i}(1-a))=0$ for any conic $C_i\subset Y$, hence the cohomology of the triple $0\to{{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(1-a)\to{{\mathcal O}_{\mathbb{P}^{3}}}(1-a)\to \oplus_{i=1}^{m+1}{{\mathcal O}}_{C_i}(1-a)\to0$ yields $h^1({{\mathcal I}}_{Y,{{\mathbb{P}^{3}}}}(1-a))=0$; this together with and the Semicontinuity yields for $i=1$ and similarly for $i=2$. Last, the equalities are proved similarly to . \[rem B\] Note that, under the conditions of Proposition \[Prop 2\], the equalities (\[vanish 3a\]) together with Riemann-Roch yield $$\label{h1 12 a} h^1({{\mathcal E}}_1(1)\otimes{{\mathcal E}}_2)=16m+8\varepsilon-6.$$ Now, to construct new series of components of ${{\mathcal B}}(-1,4m+2\varepsilon)$, we proceed along the same lines as in Section \[section 3\]. We first introduce the notion of a twisted symplectic structure on a vector bundle. By a *twisted symplectic structure* on a vector bundle $E$ on ${{\mathbb{P}^{3}}}$ we mean an isomorphism $\theta: E \xrightarrow{\sim}E^{\vee}(-1)$ such that $\theta^{\vee}(1)=-\theta$, considered modulo proportionality. (Here by definition $\theta^{\vee}(1):= \theta^{\vee}\otimes\mathrm{id}_{{{\mathcal O}_{\mathbb{P}^{3}}}(1)}$.) 
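Note that such an isomorphism already constrains the numerical invariants of $E$ (a one-line verification): taking determinants in $\theta$ gives $$\det E\simeq\det\bigl(E^{\vee}(-1)\bigr)\simeq(\det E)^{-1}\otimes{{\mathcal O}_{\mathbb{P}^{3}}}(-\operatorname{{rk}}E),$$ so that $(\det E)^{\otimes2}\simeq{{\mathcal O}_{\mathbb{P}^{3}}}(-\operatorname{{rk}}E)$; hence $\operatorname{{rk}}E=2r$ is even and $c_1(E)=-r$.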
Clearly, a vector bundle $E$ with twisted symplectic structure has even rank:  $\operatorname{{rk}}E=2r,\ r\ge1.$ Consider the vector bundles ${{\mathcal E}}_1$ and ${{\mathcal E}}_2$ introduced in Proposition \[Prop 2\]. Since $\det{{\mathcal E}}_1\simeq\det{{\mathcal E}}_2\simeq{{\mathcal O}_{\mathbb{P}^{3}}}(-1)$, there are twisted symplectic structures $\theta_i:\ {{\mathcal E}}_i\xrightarrow{\simeq}{{\mathcal E}}_i^{\vee}(-1),\ i=1,2,$ which yield a twisted symplectic structure on the direct sum $\mathbb{E}={{\mathcal E}}_1\oplus{{\mathcal E}}_2$: $$\label{twisted sympl str} \theta=\theta_1\oplus\theta_2:\ \mathbb{E}={{\mathcal E}}_1\oplus{{\mathcal E}}_2 \xrightarrow{\simeq}{{\mathcal E}}_1^{\vee}(-1)\oplus{{\mathcal E}}_2^{\vee}(-1)=\mathbb{E}^{\vee}(-1).$$ Now assume that ${{\mathcal E}}_1$ and ${{\mathcal E}}_2$ are chosen in such a way that there exist sections $$\label{empty intersectn a} s_i:\ {{\mathcal O}_{\mathbb{P}^{3}}}\to{{\mathcal E}}_i(a+1), \ \ \ such\ that\ \ \ \dim(s_i)_0=1,\ \ i=1,2,\ \ \ (s_1)_0\cap(s_2)_0=\emptyset.$$ (Such $[{{\mathcal E}}_1]\in{{\mathcal B}}_0(-1,2m),\ [{{\mathcal E}}_2]\in {{\mathcal B}}_0(-1,2(m+\varepsilon))$ always exist, since already for $a=1$, hence also for $a\ge2$ two general bundles of the form satisfy the property (\[empty intersectn a\]).) The assumption (\[empty intersectn a\]) implies that the section $s=(s_1,s_2):\ {{\mathcal O}_{\mathbb{P}^{3}}}(-a-1)\to\mathbb{E}$ is a subbundle morphism, hence its transpose ${}^ts:=s^{\vee}(-1)\circ\theta:\ \mathbb{E}\to{{\mathcal O}_{\mathbb{P}^{3}}}(a)$ is an epimorphism. As $\theta$ in (\[twisted sympl str\]) is twisted symplectic, the composition ${}^ts\circ s:{{\mathcal O}_{\mathbb{P}^{3}}}(-a-1)\to{{\mathcal O}_{\mathbb{P}^{3}}}(a)$ is also twisted symplectic. Therefore, since ${{\mathcal O}_{\mathbb{P}^{3}}}(a)$ and ${{\mathcal O}_{\mathbb{P}^{3}}}(-a-1)$ are line bundles, it follows that ${}^ts\circ s=0$, i. e. 
the complex $$\label{monad a} K^{\cdot}:\ \ \ 0\to{{\mathcal O}_{\mathbb{P}^{3}}}(-a-1)\xrightarrow{s}\mathbb{E}\xrightarrow{{}^ts} {{\mathcal O}_{\mathbb{P}^{3}}}(a)\to0,\ \ \ \ \ E=\frac{\ker({}^ts)}{\mathrm{im}(s)},$$ is a monad and its cohomology sheaf $E$ is locally free. Note that, since the bundles ${{\mathcal E}}_1$ and ${{\mathcal E}}_2$ are stable, they have zero spaces of global sections, hence also $h^0(\mathbb{E})=0$, and yields $h^0(E)=0$, i. e. $E$ as a rank 2 vector bundle with $c_1=-1$ is stable. Besides, since $c_2(\mathbb{E})=c_2({{\mathcal E}}_1)+c_2({{\mathcal E}}_2)+c_1({{\mathcal E}}_1)c_1({{\mathcal E}}_2)=4m+2\varepsilon+1$, it follows from that $c_2(E)=4m+2\varepsilon +a(a+1)$. Thus, $$[E]\in{{\mathcal B}}(-1,4m+2\varepsilon +a(a+1)),$$ and the deformation theory yields that, for any irreducible component ${{\mathcal M}}$ of ${{\mathcal B}}(-1,4m+2\varepsilon +a(a+1))$, $$\dim{{\mathcal M}}\ge1-\chi(\mathcal{E}nd~E)=8(4m+2\varepsilon +a(a+1))-5.$$ Now, as in , consider the symmetric part of the total complex of the double complex $K^{\cdot}\otimes (K^{\cdot})^{\vee}$, where $K^{\cdot}$ is the monad (\[monad a\]): $$\label{sym of monad a} 0\to\mathbb{E}(-a)\xrightarrow{\alpha}S^2\mathbb{E}(1) \oplus {{\mathcal O}_{\mathbb{P}^{3}}}\xrightarrow{{}^t\alpha}\mathbb{E}(a+1)\to0,\ \ \ \ S^2E(1)=\frac{\ker({}^t\alpha)}{\mathrm{im}(\alpha)}.$$ Here $\alpha$ is the induced subbundle map and $S^2E(1)$ is its cohomology sheaf. 
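As in the even case, the invariants of $E$ can be checked via the additivity of the Chern character along the monad (\[monad a\]); in degrees $\le2$ one has $$\mathrm{ch}({{\mathcal E}}_1)+\mathrm{ch}({{\mathcal E}}_2)=4-2h+(1-4m-2\varepsilon)h^2+\ldots,\ \ \ \mathrm{ch}({{\mathcal O}_{\mathbb{P}^{3}}}(-a-1))+\mathrm{ch}({{\mathcal O}_{\mathbb{P}^{3}}}(a))=2-h+\tfrac{(a+1)^2+a^2}{2}h^2+\ldots,$$ where $h$ is the class of a plane; whence $c_1(E)=-1$ and $$c_2(E)=\tfrac12\bigl(c_1(E)^2-2\,\mathrm{ch}_2(E)\bigr)=4m+2\varepsilon+a(a+1).$$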
The monad (\[sym of monad a\]) can be rewritten as a diagram of exact triples similar to : $$\label{diagr 2a} \xymatrix{ & & & 0 &\\ & & & \mathbb{E}(a+1)\ar[u]\\ 0\ar[r]& \mathbb{E}(-a)\ar^-\alpha[r] & S^2\mathbb{E}(1)\oplus{{\mathcal O}_{\mathbb{P}^{3}}}\ar[r]& \mathrm{coker}\alpha\ar[u]\ar[r] & 0\\ & & & S^2E(1)\ar[u]&\\ & & & 0.\ar[u]& }$$ Note that, by and one has $$\label{h1bbE(-a)=0 a} h^1(\mathbb{E}(-a))=0,\ \ \ a\ge2,$$ $$\label{h2,3bbE(a)=0 a} h^j(\mathbb{E}(a+1))=0,\ \ \ j=2,3,\ \ \ a\ge2m+3.$$ Similarly, in view of , $$\label{h1E(a)=0 a} h^1(\mathbb{E}(a+1))=0,\ \ \ a\ge2m+3.$$ This together with and Riemann-Roch yields: $$\label{h0bbE(a)= a} h^0(\mathbb{E}(a+1))=\chi(\mathbb{E}(a+1))= 4\binom{a+3}{3}+2\binom{a+3}{2}-(2m+\varepsilon)(2a+5).$$ Next, $$\label{decomp EndE a} \mathcal{E}nd~\mathbb{E}\simeq\mathbb{E}(1)\otimes \mathbb{E}\simeq S^2\mathbb{E}(1)\oplus\wedge^2\mathbb{E}(1),$$ and it follows from (\[twisted sympl str\]) that $$\label{decomp S^2E a} \begin{split} & S^2\mathbb{E}(1)\simeq S^2\mathcal{E}_1(1)\oplus(\mathcal{E}_1(1)\otimes \mathcal{E}_2)\oplus S^2\mathcal{E}_2(1),\\ & \wedge^2\mathbb{E}(1)\simeq \wedge^2\mathcal{E}_1(1)\oplus(\mathcal{E}_1(1)\otimes \mathcal{E}_2)\oplus\wedge^2\mathcal{E}_2(1). \end{split}$$ Now, since $\mathcal{E}nd~\mathcal{E}_i\simeq{{\mathcal E}}_i(1)\otimes {{\mathcal E}}_i\simeq S^2{{\mathcal E}}_i(1)\oplus \wedge^2{{\mathcal E}}_i(1),\ \wedge^2{{\mathcal E}}_i\simeq{{\mathcal O}_{\mathbb{P}^{3}}},\ i=1,2,$ it follows from [@JV] that $h^1(\mathcal{E}nd~\mathcal{E}_1)\simeq h^1(S^2{{\mathcal E}}_1(1))=16m-5,\ h^1(\mathcal{E}nd~\mathcal{E}_2)\simeq h^1(S^2{{\mathcal E}}_2(1))=16(m+\varepsilon)-5,$ and $h^j(\mathcal{E}nd~\mathcal{E}_i)=h^j(S^2{{\mathcal E}}_i(1))=0,\ i=1,2,\ j\ge2$. 
This together with -(\[decomp S\^2E a\]), (\[vanish 1a\]) and (\[h1 12 a\]) implies that $$\label{h1 S2bbE a} h^1(\mathcal{E}nd~\mathbb{E})=64m+32\varepsilon-22,\ \ \ h^1(S^2\mathbb{E}(1))=48m+24\varepsilon-16,$$ $$\label{hi S2bbE a} h^i(\mathcal{E}nd~\mathbb{E})=h^i(S^2\mathbb{E}(1))=0,\ \ \ i\ge2.$$ It follows from (\[vanish 2a a\]) and that $$\label{h2E(-a)=0 a} h^2(\mathbb{E}(-a))=0.$$ Note that , and , together with the diagram yield an equality $h^0(\mathrm{coker}\alpha)=1$ and an exact sequence: $$0\to H^0(\mathbb{E}(a+1))/\mathbb{C}\to H^1(S^2E(1))\xrightarrow{\mu}H^1(S^2\mathbb{E}(1))\to0,$$ hence by and we have $$\label{h1 S2E a} \begin{split} & h^1(S^2E(1))=h^0(\mathbb{E}(a+1))+48m+24\varepsilon-17=\\ & 4\binom{a+3}{3}+2\binom{a+3}{2}-(2m+\varepsilon)(2a-19) -17. \end{split}$$ Note that, since $E$ is a stable rank-2 bundle, $H^1({{\mathcal E}}nd~E)=H^1(S^2E(1))$ is isomorphic to the Zariski tangent space $T_{[E]}{{\mathcal B}}(-1,4m+2\varepsilon +a(a+1))$: $$\label{Kod-Sp1 a} \theta_{E}:\ T_{[E]}{{\mathcal B}}(-1,4m+2\varepsilon +a(a+1)){\stackrel{\sim}{\to}}H^1({{\mathcal E}}nd~E)=H^1(S^2E(1)).$$ (Here $\theta_{E}$ is the Kodaira-Spencer isomorphism.) Thus, we can rewrite as $$\label{dim Zar tang sp a} \begin{split} & \dim T_{[E]}{{\mathcal B}}(-1,4m+2\varepsilon +a(a+1))=\\ & 4\binom{a+3}{3}+2\binom{a+3}{2}-(2m+\varepsilon)(2a-19) -17. \end{split}$$ \[Thm B\] For $m\ge1$, $\varepsilon\in\{0,1\}$ and $a\ge2(m+\varepsilon)+3$, there exists an irreducible family ${{\mathcal M}}_n(E)\subset{{\mathcal B}}(-1,n)$, where $n=4m+2\varepsilon +a(a+1)$, of dimension given by the right hand side of and containing the above constructed point $[E]$. Hence the closure ${{\mathcal M}}_n$ of ${{\mathcal M}}_n(E)$ in ${{\mathcal B}}(-1,n)$ is an irreducible component of ${{\mathcal B}}(-1,n)$. 
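The dimension formula and the deformation-theoretic lower bound above can be checked numerically. The following sketch (the helper names are ours, not from the paper) evaluates the right-hand side of the tangent-space dimension formula for a sample admissible triple $(m,\varepsilon,a)$ and compares it with the lower bound $8n-5$:

```python
from math import comb

def dim_component(m, eps, a):
    # Right-hand side of the tangent-space dimension formula:
    # 4*C(a+3,3) + 2*C(a+3,2) - (2m+eps)*(2a-19) - 17
    return (4 * comb(a + 3, 3) + 2 * comb(a + 3, 2)
            - (2 * m + eps) * (2 * a - 19) - 17)

def lower_bound(n):
    # deformation-theoretic lower bound 8n - 5 for c_1 = -1
    return 8 * n - 5

m, eps, a = 1, 0, 5                  # satisfies a >= 2(m+eps)+3 = 5
n = 4 * m + 2 * eps + a * (a + 1)    # n = 34
print(n, dim_component(m, eps, a), lower_bound(n))  # 34 281 267
```

For $(m,\varepsilon,a)=(1,0,5)$ this gives $\dim{{\mathcal M}}_{34}=281>267=8\cdot34-5$, matching the list of components with small $c_2$ in Section \[section 5\].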
The set $\Sigma_1$ of these components ${{\mathcal M}}_n$ is an infinite series distinct from the series $\{{{\mathcal B}}_0(-1,n)\}_{n\ge1}$ and from the series of Ein components described in [@Ein]. The proof of this Theorem is completely parallel to the proof of Theorem \[Thm A\], with clear modifications due to the change from $c_1(E)=0$ to $c_1(E)=-1$. It is easy to check that the dimension $\dim{{\mathcal M}}_n$ given by , with $m,\varepsilon$ and $a$ as in Theorem \[Thm B\], satisfies the strict inequality $\dim{{\mathcal M}}_n>8n-5=\dim{{\mathcal B}}_0(-1,n)$ (cf. ). This shows that $\Sigma_1$ is distinct from $\{{{\mathcal B}}_0(-1,n)\}_{n\ge1}$. To distinguish $\Sigma_1$ from the series of Ein components, it is enough to see that the spectra of general bundles of these two series are different. (A direct verification of this fact is left to the reader.) \[Rem B1\] Let ${{\mathcal N}}$ be the set of all values of $n$ for which ${{\mathcal M}}_n\in\Sigma_1$, i.e., $${{\mathcal N}}=\{n\in2\mathbb{Z}_+\ |\ n=4m+2\varepsilon+a(a+1),\ \mathrm{where}\ m\in\mathbb{Z}_+,\ \varepsilon\in\{0,1\},\ a\ge2m+\varepsilon+3 \}.$$ Then one easily sees that $$\lim\limits_{r\to\infty}\frac{\#({{\mathcal N}}\cap\{2,4,...,2r\})}{r}=1.$$ Examples of moduli components of stable vector bundles with small values of $c_2$ {#section 5} ================================================================================= The conditions imposed on the data $(m,\varepsilon,a)$ in Theorem \[Thm A\], respectively, Theorem \[Thm B\] may not be satisfied for small values of these data. However, for some small values of $(m,\varepsilon,a)$ the equalities , , , respectively, , , are still true. Hence, our construction of irreducible components ${{\mathcal M}}_n\in\Sigma_0$, where $n=2m+\varepsilon+a^2$, respectively, ${{\mathcal M}}_n\in\Sigma_1$, where $n=4m+2\varepsilon+a(a+1)$, given in Sections \[section 3\] and \[section 4\] still goes through for these values of $(m,\varepsilon,a)$.
A precise computation of these values is performed using the Serre construction , respectively, for the pairs $([{{\mathcal E}}_1],[{{\mathcal E}}_2])$ from , respectively, from . We thus provide the following list of irreducible components ${{\mathcal M}}_n\in\Sigma_0$ for $n\le20$ and, respectively, ${{\mathcal M}}_n\in\Sigma_1$ for $n\le40$. Components ${{\mathcal M}}_n\in\Sigma_0$ for $n\le20$ ----------------------------------------------------- By $\mathrm{Spec}(E)$ we denote the spectrum of a general bundle $E$ from ${{\mathcal M}}_n$. (Below we use a standard notation $\mathrm{Spec}(E)= (a^p,b^q,...)$ for the spectrum ($\underset{p} {\underbrace{a......a}},~\underset{q} {\underbrace{b......b}},...$).) \(1) $n=6,\ (m,\varepsilon,a)=(1,0,2)$. ${{\mathcal M}}_6$ is a component of the expected (by the deformation theory) dimension $\dim{{\mathcal M}}_6=45$, and $\mathrm{Spec}(E)=(-1,0^4,1)$. This corresponds to the case 6(2) of the Table 5.3 of Hartshorne-Rao [@HR]. \(2) $n=7,\ (m,\varepsilon,a)=(1,1,2)$. ${{\mathcal M}}_7$ is a component of the expected dimension $\dim{{\mathcal M}}_7=53$, and $\mathrm{Spec}(E)=(-1,0^5,1)$ (cf. [@HR Table 5.3, 7(2)]). \(3) $n=8,\ (m,\varepsilon,a)=(2,0,2)$. ${{\mathcal M}}_8$ is a component of the expected dimension $\dim{{\mathcal M}}_8=61$, and $\mathrm{Spec}(E)=(-1,0^6,1)$ (cf. [@HR Table 5.3, 8(2)]). \(4) $n=9,\ (m,\varepsilon,a)=(2,1,2)$. ${{\mathcal M}}_9$ is a component of the expected dimension $\dim{{\mathcal M}}_9=69$, and $\mathrm{Spec}(E)=(-1,0^7,1)$. \(5) $n=10,\ (m,\varepsilon,a)=(3,0,2)$. ${{\mathcal M}}_{10}$ is a component of the expected dimension $\dim{{\mathcal M}}_{10}=77$, and $\mathrm{Spec}(E)=(-1,0^8,1)$. \(6) $n=11,\ (m,\varepsilon,a)=(3,1,2)$. ${{\mathcal M}}_{11}$ is a component of the expected dimension $\dim{{\mathcal M}}_{11}=85$, and $\mathrm{Spec}(E)=(-1,0^9,1)$. \(7) $n=12,\ (m,\varepsilon,a)=(4,0,2)$.
${{\mathcal M}}_{12}$ is a component of the expected dimension $\dim{{\mathcal M}}_{12}=93$, and $\mathrm{Spec}(E)=(-1,0^{10},1)$. \(8) $n=18,\ (m,\varepsilon,a)=(1,0,4)$. ${{\mathcal M}}_{18}$ is a component of the expected dimension $\dim{{\mathcal M}}_{18}=141$, and $\mathrm{Spec}(E)=(-3,-2^2,-1^3,0^6,1^3,2^2,3)$. Components ${{\mathcal M}}_n\in\Sigma_1$ for $n\le40$ ----------------------------------------------------- ${}$\ (1) $n=24,\ (m,\varepsilon,a)=(1,0,4)$. ${{\mathcal M}}_{24}$ is a component of the expected dimension $\dim{{\mathcal M}}_{24}=187$, and $\mathrm{Spec}(E)=(-4,-3^2,-2^3,-1^6,0^6,1^3,2^2,3)$. \(2) $n=34,\ (m,\varepsilon,a)=(1,0,5)$. ${{\mathcal M}}_{34}$ is a component of dimension $\dim{{\mathcal M}}_{34}=281$ larger than expected, and $\mathrm{Spec}(E)=(-5,-4^2,-3^3,-2^4,-1^7,0^7,1^4,2^3,3^2,4)$. \(3) $n=36,\ (m,\varepsilon,a)=(1,1,5)$. ${{\mathcal M}}_{36}$ is a component of dimension $\dim{{\mathcal M}}_{36}=290$ larger than expected, and $\mathrm{Spec}(E)=(-5,-4^2,-3^3,-2^4,-1^8,0^8,1^4,2^3,3^2,4)$. \(4) $n=38,\ (m,\varepsilon,a)=(2,0,5)$. ${{\mathcal M}}_{38}$ is a component of the expected dimension $\dim{{\mathcal M}}_{38}=299$, and $\mathrm{Spec}(E)=(-5,-4^2,-3^3,-2^4,-1^9,0^9,1^4,2^3,3^2,4)$. [99]{} Almeida C., Jardim M., Tikhomirov A., and Tikhomirov S., New moduli components of rank 2 bundles on projective space. arXiv:1702.06520 \[math. AG\]. Barth W., Irreducibility of the space of mathematical instanton bundles with rank 2 and $c_2 = 4$, Math. Ann. 258 (1981), 81–106. Brun J., and Hirschowitz A., Variété des droites sauteuses du fibré instanton général, With an appendix by J. Bingener, Compositio Math. 53, 325–336 (1984). Bruzzo U., Markushevich D., and Tikhomirov A. S., Moduli of symplectic instanton vector bundles of higher rank on projective space $\mathbb{P}^3$. Central European Journal of Mathematics, **10** (2012), No. 4, 1232-1245. Bruzzo U., Markushevich D., and Tikhomirov A.
S., Symplectic instanton bundles on $\mathbb{P}^3$ and ’t Hooft instantons. European Journal of Mathematics, **2** (2016), 73-86. Coanda I., Tikhomirov A., and Trautmann G., Irreducibility and smoothness of the moduli space of mathematical 5-instantons over $\mathbb{P}^3$. Internat. J. Math. **14**:1 (2003), 1–45. Ein L., Generalized null correlation bundles. Nagoya Math. J. **111** (1988), 13–24. Ellingsrud G., and Stromme S. A., Stable rank $2$ vector bundles on ${{\mathbb{P}^{3}}}$ with $c_1 = 0$ and $c_2 = 3$. Math. Ann. **255** (1981), 123–135. Hartshorne R., Stable vector bundles of rank 2 on $\mathbf{P}^3$. Math. Ann. **238** (1978), 229–280. Hartshorne R., and Rao A. P., Spectra and monads of stable bundles. J. Math. Kyoto Univ. [**31**]{}:3 (1991), 789–806. Huybrechts D., and Lehn M., The Geometry of Moduli Spaces of Sheaves, 2nd ed. Cambridge Math. Lib., Cambridge University Press, Cambridge, 2010. Jardim M., Markushevich D., and Tikhomirov A. S., New divisors in the boundary of the instanton moduli space. Moscow Math. J., **18**:1 (2018), 117-148. Jardim M., and Verbitsky M., Trihyperkähler reduction and instanton bundles on ${{\mathbb{P}^{3}}}$. Compositio Math. [**150**]{} (2014), 1836–1868. Okonek Ch., Schneider M., and Spindler H., Vector Bundles on Complex Projective Spaces, 2nd ed. Springer Basel, 2011. Le Potier J., Sur l’ espace de modules des fibres de Yang et Mills. Seminaire E.N.S. (1980-1981), Partie 1, Exp. no. 3. Progress in Math., Birkhäuser [**37**]{} (1983), p. 65-137. Nüssler T., and Trautmann G., Multiple Koszul structures on lines and instanton bundles, Internat. J. Math. **5**:3 (1994), 373–388. Ramanathan A., Stable Principal Bundles on a Compact Riemann Surface. Math. Ann. **213** (1975), 129–152. Rao A. P., A note on cohomology modules of rank two bundles. Journal of Algebra. **86** (1984), 23–34. Tikhomirov A. S., Moduli of mathematical instanton vector bundles with odd $c_2$ on projective space.
Izvestiya: Mathematics **76** (2012), 991–1073. Tikhomirov A. S., Moduli of mathematical instanton vector bundles with even $c_2$ on projective space. Izvestiya: Mathematics **77** (2013), 1331–1355. Tikhomirov S. A., Families of stable bundles of rank 2 with $c_1=-1$ on the space $\mathbb{P}^3$. Siberian Mathematical Journal, **55**:6 (2014), 1137–1143. Vedernikov V. K., Moduli of stable vector bundles of rank 2 on $P_3$ with fixed spectrum. Math. USSR-Izv. **25** (1985), 301–313. Vedernikov V., The Moduli of Super-Null-Correlation Bundles on $\mathbf{P}_3$. Math. Ann. **276** (1987), 365–383.
--- abstract: 'Coding/decoding algorithms are of great importance for improving information security, which has become an increasingly significant problem in recent years. In this paper we introduce two new coding/decoding algorithms using Fibonacci $Q$-matrices and $R$-matrices. Our models are based on blocked message matrices and on the encryption of each message matrix with a different key. These new algorithms will not only increase the security of information but also have high error-correcting ability.' address: - | Balikesir University\ Department of Mathematics\ 10145 Balikesir, TURKEY - | Balikesir University\ Department of Mathematics\ 10145 Balikesir, TURKEY - | Balikesir University\ Department of Mathematics\ 10145 Balikesir, TURKEY author: - 'SÜMEYRA UÇAR\*' - NİHAL TAŞ - NİHAL YILMAZ ÖZGÜR title: A New Cryptography Model via Fibonacci and Lucas Numbers --- [^1] Introduction {#intro} ============ It is well known that the sequences of Fibonacci and Lucas numbers are defined by $$F_{n+1}=F_{n}+F_{n-1}\text{,} \label{eqn1}$$$$L_{n+1}=L_{n}+L_{n-1}$$with the initial terms $F_{0}=0$, $F_{1}=1$ and $L_{0}=2$, $L_{1}=1$, respectively (see [@koshy] for more details).
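As a quick illustration, the two recurrences can be generated side by side from the initial terms above; the following sketch (our own helper, not part of the original algorithms) computes the first few Fibonacci and Lucas numbers:

```python
def fib_lucas(n):
    """Return the lists F_0..F_n and L_0..L_n.

    Both sequences satisfy the same recurrence x_{k+1} = x_k + x_{k-1};
    only the initial terms differ: F_0 = 0, F_1 = 1 and L_0 = 2, L_1 = 1.
    """
    F, L = [0, 1], [2, 1]
    for _ in range(n - 1):
        F.append(F[-1] + F[-2])
        L.append(L[-1] + L[-2])
    return F, L

F, L = fib_lucas(6)
print(F)  # [0, 1, 1, 2, 3, 5, 8]
print(L)  # [2, 1, 3, 4, 7, 11, 18]
```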
The Fibonacci $Q$-matrix is defined in [@gould] and [@hoggat] as follows:$$Q=\left[ \begin{array}{cc} 1 & 1 \\ 1 & 0\end{array}\right] .$$ From [@stakhov; @1999] and [@stakhov; @2006], we know that the $n$th power of the Fibonacci $Q$-matrix is of the following form:$$Q^{n}=\left[ \begin{array}{cc} F_{n+1} & F_{n} \\ F_{n} & F_{n-1}\end{array}\right] \text{.}$$ In [@brugless], Bruggles and Hoggatt introduced the $R$-matrix as follows$: $$$R=\left[ \begin{array}{cc} 1 & 2 \\ 2 & -1\end{array}\right]$$Using the Fibonacci $Q$-matrix and the $R$-matrix, one obtains the matrix $R_{n}$ of the following form$:$ $$R_{n}=RQ^{n}=\left[ \begin{array}{cc} 1 & 2 \\ 2 & -1\end{array}\right] \left[ \begin{array}{cc} F_{n+1} & F_{n} \\ F_{n} & F_{n-1}\end{array}\right] =\left[ \begin{array}{cc} L_{n+1} & L_{n} \\ L_{n} & L_{n-1}\end{array}\right] \text{.}$$ Determinants of the Fibonacci $Q$-matrix and the $R$-matrix are as follows:$$Det(Q^{n})=F_{n+1}F_{n-1}-F_{n}^{2}=(-1)^{n}$$and$$Det(R_{n})=L_{n+1}L_{n-1}-L_{n}^{2}=5(-1)^{n+1}\text{.}$$ Fibonacci coding theory has been studied in various ways. For example, in [@prajat], a new approach for secure information transmission over a communication channel was obtained using the key variability concept in symmetric key algorithms together with the Fibonacci $Q$-matrix. In [@stakhov; @2006], a new coding theory was introduced using the generalization of the Cassini formula for Fibonacci $p$-numbers and $Q_{p}$-matrices. In [@Wang], an application of mobile phone encryption based on the Fibonacci structure of chaos was constructed using the Fibonacci series. In [@prasad-lucas], Prasad developed a new coding and decoding method using the Lucas $p$-numbers given in [@Kuhapatanakul]. Recently, a new cryptography algorithm has been introduced by blocking matrices and Fibonacci numbers in [@Tas]. There are also further studies in the literature (see [@basu], [@stakhov1999-2], [@Tarle] and the references therein for more details).
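The matrix identities above are easy to verify numerically; the sketch below (helper names are ours) builds $Q^{n}$ and $R_{n}$ by plain matrix multiplication and checks the two determinant formulas for $n=4$:

```python
def mat_mul(A, B):
    # product of two 2x2 integer matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(A, n):
    # n-th power of a 2x2 matrix (identity for n = 0)
    result = [[1, 0], [0, 1]]
    for _ in range(n):
        result = mat_mul(result, A)
    return result

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

Q = [[1, 1], [1, 0]]
R = [[1, 2], [2, -1]]

Qn = mat_pow(Q, 4)     # [[F_5, F_4], [F_4, F_3]] = [[5, 3], [3, 2]]
Rn = mat_mul(R, Qn)    # [[L_5, L_4], [L_4, L_3]] = [[11, 7], [7, 4]]
print(Qn, det(Qn))     # det(Q^n) = (-1)^n = 1 for n = 4
print(Rn, det(Rn))     # det(R_n) = 5*(-1)^(n+1) = -5 for n = 4
```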
In this study we introduce two new coding/decoding algorithms using Fibonacci $Q$-matrices and $R$-matrices. The basic idea of our method is to divide the message matrix into block matrices of size $2\times 2$. Since we use a mixed-type algorithm and a differently numbered alphabet for each message, we obtain a safer coding/decoding method. The alphabet is determined by the number of block matrices of the message matrix. Our method will not only increase the security of information but also has high error-correcting ability for data transfer over a communication channel. A New Coding/Decoding Method using $R$-Matrix {#sec:1} ============================================= In this section we introduce a new coding/decoding algorithm using Lucas numbers. We place our message in a square matrix of even size, adding a zero between two words and at the end of the message until the size of the message matrix is even. Dividing the message square matrix $M$ of size $2m\times 2m$ into the block matrices, named $B_{i}$ ($1\leq i\leq m^{2}$), of size $2\times 2$, from left to right, we construct a new coding method. Now we explain the symbols of our coding method. Assume that the matrices $B_{i}$, $E_{i}$, $Q^{n}$ and $R_{n}$ are of the following forms:$$B_{i}=\left[ \begin{array}{cc} b_{1}^{i} & b_{2}^{i} \\ b_{3}^{i} & b_{4}^{i}\end{array}\right] \text{, }E_{i}=\left[ \begin{array}{cc} e_{1}^{i} & e_{2}^{i} \\ e_{3}^{i} & e_{4}^{i}\end{array}\right] \text{, }Q^{n}=\left[ \begin{array}{cc} q_{1} & q_{2} \\ q_{3} & q_{4}\end{array}\right] \text{ and }R_{n}=\left[ \begin{array}{cc} r_{1} & r_{2} \\ r_{3} & r_{4}\end{array}\right] \text{.}$$The number of the block matrices $B_{i}$ is denoted by $b$. According to $b$, we choose the number $n$ as follows:$$n=\left\{ \begin{array}{ccc} b & \text{,} & b\leq 3 \\ \left[ \left\vert \frac{b}{2}\right\vert \right] & \text{,} & b>3\end{array}\right.
\text{.}$$Using the chosen $n$, we write the following character table modulo $30$ (this table can be extended according to the characters used in the message matrix). We assign the value $n$ to the first character. $$\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline A & B & C & D & E & F & G & H & I & J \\ \hline $n$ & $n+1$ & $n+2$ & $n+3$ & $n+4$ & $n+5$ & $n+6$ & $n+7$ & $n+8$ & $n+9$ \\ \hline K & L & M & N & O & P & Q & R & S & T \\ \hline $n+10$ & $n+11$ & $n+12$ & $n+13$ & $n+14$ & $n+15$ & $n+16$ & $n+17$ & $n+18 $ & $n+19$ \\ \hline U & V & W & X & Y & Z & 0 & ! & ? & . \\ \hline $n+20$ & $n+21$ & $n+22$ & $n+23$ & $n+24$ & $n+25$ & $n+26$ & $n+27$ & $n+28 $ & $n+29$ \\ \hline \end{tabular}$$ Now we explain the new coding and decoding algorithms. **Lucas Blocking Algorithm** **Coding Algorithm** **Step 1.** Divide the matrix $M$ into blocks $B_{i}$ $\left( 1\leq i\leq m^{2}\right) $. **Step 2.** Choose $n$. **Step 3.** Determine $b_{j}^{i}$ $\left( 1\leq j\leq 4\right) $. **Step 4.** Compute $\det (B_{i})\rightarrow d_{i}$. **Step 5.** Construct $F=\left[ d_{i},b_{k}^{i}\right] _{k\in \{1,2,4\}} $. **Step 6.** End of algorithm. **Decoding Algorithm** **Step 1.** Compute $R_{n}$. **Step 2.** Determine $r_{j}$ $(1\leq j\leq 4)$. **Step 3.** Compute $r_{1}b_{1}^{i}+r_{3}b_{2}^{i}\rightarrow e_{1}^{i}$ $\left( 1\leq i\leq m^{2}\right) $. **Step 4.** Compute $r_{2}b_{1}^{i}+r_{4}b_{2}^{i}\rightarrow e_{2}^{i}$. **Step 5.** Solve $5\times (-1)^{n+1}\times d_{i}=e_{1}^{i}(r_{2}x_{i}+r_{4}b_{4}^{i})-e_{2}^{i}(r_{1}x_{i}+r_{3}b_{4}^{i}) $. **Step 6.** Substitute for $x_{i}=b_{3}^{i}$. **Step 7.** Construct $B_{i}$. **Step 8.** Construct $M$. **Step 9.** End of algorithm. The above method is similar to the method based on Fibonacci numbers given in [@Tas]. In the following example we give an application of the above algorithm for $b>3$. \[exm1\] Let us consider the message matrix for the following message text$:$$$\text{\textquotedblleft HI!
HOW ARE YOU?\textquotedblright }$$Using the message text, we get the following message matrix $M:$$$M=\left[ \begin{array}{cccc} H & I & ! & 0 \\ H & O & W & 0 \\ A & R & E & 0 \\ Y & O & U & ?\end{array}\right] _{4\times 4}.$$**Coding Algorithm:** **Step 1.** We can divide the message matrix $M$ of size $4\times 4$ into the matrices, named $B_{i}$ $\left( 1\leq i\leq 4\right) $, from left to right, each of size $2\times 2:$$$B_{1}=\left[ \begin{array}{cc} H & I \\ H & O\end{array}\right] \text{, }B_{2}=\left[ \begin{array}{cc} ! & 0 \\ W & 0\end{array}\right] \text{, }B_{3}=\left[ \begin{array}{cc} A & R \\ Y & O\end{array}\right] \text{ and }B_{4}=\left[ \begin{array}{cc} E & 0 \\ U & ?\end{array}\right] \text{.}$$ **Step 2.** Since $b=4>3$, we calculate $n=\left[ \left\vert \frac{b}{2}\right\vert \right] =2$. For $n=2$, we use the following character table for the message matrix $M:$$$\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline $H$ & $I$ & $!$ & $0$ & $H$ & $O$ & $W$ & $0$ \\ \hline $9$ & $10$ & $29$ & $28$ & $9$ & $16$ & $24$ & $28$ \\ \hline $A$ & $R$ & $E$ & $0$ & $Y$ & $O$ & $U$ & $?$ \\ \hline $2$ & $19$ & $6$ & $28$ & $26$ & $16$ & $22$ & $0$ \\ \hline \end{tabular}\text{.}$$ **Step 3.** We have the elements of the blocks $B_{i}$ $\left( 1\leq i\leq 4\right) $ as follows:$$\begin{tabular}{|l|l|l|l|} \hline $b_{1}^{1}=9$ & $b_{2}^{1}=10$ & $b_{3}^{1}=9$ & $b_{4}^{1}=16$ \\ \hline $b_{1}^{2}=29$ & $b_{2}^{2}=28$ & $b_{3}^{2}=24$ & $b_{4}^{2}=28$ \\ \hline $b_{1}^{3}=2$ & $b_{2}^{3}=19$ & $b_{3}^{3}=26$ & $b_{4}^{3}=16$ \\ \hline $b_{1}^{4}=6$ & $b_{2}^{4}=28$ & $b_{3}^{4}=22$ & $b_{4}^{4}=0$ \\ \hline \end{tabular}.$$ **Step 4.** Now we calculate the determinants $d_{i}$ of the blocks $B_{i}:$$$\begin{tabular}{|l|} \hline $d_{1}=\det (B_{1})=54$ \\ \hline $d_{2}=\det (B_{2})=140$ \\ \hline $d_{3}=\det (B_{3})=-462$ \\ \hline $d_{4}=\det (B_{4})=-616$ \\ \hline \end{tabular}.$$ **Step 5.** Using Step 3 and Step 4 we obtain the following matrix
$F:$$$F=\left[ \begin{array}{cccc} 54 & 9 & 10 & 16 \\ 140 & 29 & 28 & 28 \\ -462 & 2 & 19 & 16 \\ -616 & 6 & 28 & 0\end{array}\right] .$$ **Step 6.** End of algorithm. **Decoding algorithm:** **Step 1.** It is known that$$R_{2}=RQ^{2}=\left[ \begin{array}{cc} 4 & 3 \\ 3 & 1\end{array}\right] \text{.}$$ **Step 2.** The elements of $R_{2}$ are denoted by$$r_{1}=4\text{, }r_{2}=3\text{, }r_{3}=3\text{ and }r_{4}=1\text{.}$$ **Step 3.** We compute the elements $e_{1}^{i}$ to construct the matrix $E_{i}:$$$e_{1}^{1}=66\text{, }e_{1}^{2}=200\text{, }e_{1}^{3}=65\text{ and }e_{1}^{4}=108\text{.}$$ **Step 4.** We compute the elements $e_{2}^{i}$ to construct the matrix $E_{i}:$$$e_{2}^{1}=37\text{, }e_{2}^{2}=115\text{, }e_{2}^{3}=25\text{ and }e_{2}^{4}=46\text{.}$$ **Step 5.** We calculate the elements $x_{i}:$$$\begin{aligned} 5(-1)^{3}(54) &=&66(3x_{1}+16)-37(4x_{1}+48) \\ &\Rightarrow &x_{1}=9\text{.}\end{aligned}$$$$\begin{aligned} 5(-1)^{3}(140) &=&200(3x_{2}+28)-115(4x_{2}+84) \\ &\Rightarrow &x_{2}=24\text{.}\end{aligned}$$$$\begin{aligned} 5(-1)^{3}(-462) &=&65(3x_{3}+16)-25(4x_{3}+48) \\ &\Rightarrow &x_{3}=26\text{.}\end{aligned}$$$$\begin{aligned} 5(-1)^{3}(-616) &=&108(3x_{4}+0)-46(4x_{4}+0) \\ &\Rightarrow &x_{4}=22\text{.}\end{aligned}$$ **Step 6.** We rename $x_{i}$ as follows$:$$$x_{1}=b_{3}^{1}=9\text{, }x_{2}=b_{3}^{2}=24\text{, }x_{3}=b_{3}^{3}=26\text{ and }x_{4}=b_{3}^{4}=22\text{.}$$ **Step 7.** We construct the block matrices $B_{i}:$$$B_{1}=\left[ \begin{array}{cc} 9 & 10 \\ 9 & 16\end{array}\right] \text{, }B_{2}=\left[ \begin{array}{cc} 29 & 28 \\ 24 & 28\end{array}\right] \text{, }B_{3}=\left[ \begin{array}{cc} 2 & 19 \\ 26 & 16\end{array}\right] \text{ and }B_{4}=\left[ \begin{array}{cc} 6 & 28 \\ 22 & 0\end{array}\right] \text{.}$$ **Step 8.** We obtain the message matrix $M:$$$M=\left[ \begin{array}{cccc} 9 & 10 & 29 & 28 \\ 9 & 16 & 24 & 28 \\ 2 & 19 & 6 & 28 \\ 26 & 16 & 22 & 0\end{array}\right] =\left[ \begin{array}{cccc} H & I & !
& 0 \\ H & O & W & 0 \\ A & R & E & 0 \\ Y & O & U & ?\end{array}\right] .$$ **Step 9.** End of algorithm. A Mixed Model: Minesweeper Model {#sec:2} ================================ In this section we present a new coding/decoding approach, called the Minesweeper Model, using Fibonacci $Q^{n}$-matrices and $R$-matrices. The main idea of this model is to decode the blocks of the message matrix using Fibonacci and Lucas numbers randomly. The following model is constructed by decoding the blocks with odd indices $i$ using Fibonacci $Q^{n}$-matrices and the blocks with even indices $i$ using $R$-matrices. **Minesweeper Algorithm** **Coding Algorithm** **Step 1.** Divide the matrix $M$ into blocks $B_{i}$ $\left( 1\leq i\leq m^{2}\right) $. **Step 2.** Choose $n$. **Step 3.** Determine $b_{j}^{i}$ $\left( 1\leq j\leq 4\right) $. **Step 4.** Compute $\det (B_{i})\rightarrow d_{i}$. **Step 5.** Construct $F=\left[ d_{i},b_{k}^{i}\right] _{k\in \{1,2,3\}} $. **Step 6.** End of algorithm. **Decoding Algorithm** **Step 1.** Compute $Q^{n}$. **Step 2.** Compute $R_{n}$. **Step 3.** Compute $q_{1}b_{1}^{i}+q_{3}b_{2}^{i}\rightarrow e_{1}^{i}, $ $i=2l+1$ for $0\leq l\leq 2m$. **Step 4.** Compute $r_{1}b_{1}^{i}+r_{3}b_{2}^{i}\rightarrow e_{1}^{i}, $ $i=2l$ for $1\leq l\leq 2m$. **Step 5.** Compute $q_{2}b_{1}^{i}+q_{4}b_{2}^{i}\rightarrow e_{2}^{i}, $ $i=2l+1$ for $0\leq l\leq 2m$. **Step 6.** Compute $r_{2}b_{1}^{i}+r_{4}b_{2}^{i}\rightarrow e_{2}^{i}, $ $i=2l$ for $1\leq l\leq 2m$. **Step 7.** Solve $(-1)^{n}\times d_{i}=e_{1}^{i}(q_{2}b_{3}^{i}+q_{4}x_{i})-e_{2}^{i}(q_{1}b_{3}^{i}+q_{3}x_{i}),i=2l+1 $ for $0\leq l\leq 2m$. **Step 8.** Solve $5\times (-1)^{n+1}\times d_{i}=e_{1}^{i}(r_{2}b_{3}^{i}+r_{4}x_{i})-e_{2}^{i}(r_{1}b_{3}^{i}+r_{3}x_{i}), $ $i=2l$ for $1\leq l\leq 2m$. **Step 9.** Substitute for $x_{i}=b_{4}^{i}$. **Step 10.** Construct $B_{i}$. **Step 11.** Construct $M$. **Step 12.** End of algorithm.
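To make the decoding steps concrete, the sketch below (the function name and integer-division convention are ours) recovers the missing entry $b_{4}^{i}$ of a single block from the transmitted data $(d_{i},b_{1}^{i},b_{2}^{i},b_{3}^{i})$, using $Q^{n}$ for odd-indexed blocks and $R_{n}$ for even-indexed ones, as in Steps 3-9 above; the division is exact whenever the received data is consistent:

```python
def decode_block(d, b1, b2, b3, n, odd_index):
    """Recover b4 of a 2x2 block from its determinant d and entries b1, b2, b3."""
    # Fibonacci numbers F_0 .. F_{n+2}
    F = [0, 1]
    while len(F) < n + 3:
        F.append(F[-1] + F[-2])
    if odd_index:
        # Q^n = [[F_{n+1}, F_n], [F_n, F_{n-1}]], det(Q^n) = (-1)^n
        m1, m2, m3, m4 = F[n + 1], F[n], F[n], F[n - 1]
        target = (-1) ** n * d
    else:
        # R_n = [[L_{n+1}, L_n], [L_n, L_{n-1}]], det(R_n) = 5*(-1)^(n+1),
        # using L_k = F_{k-1} + F_{k+1}
        m1, m4 = F[n] + F[n + 2], F[n - 2] + F[n]
        m2 = m3 = F[n - 1] + F[n + 1]
        target = 5 * (-1) ** (n + 1) * d
    e1 = m1 * b1 + m3 * b2
    e2 = m2 * b1 + m4 * b2
    # solve target = e1*(m2*b3 + m4*x) - e2*(m1*b3 + m3*x) for x = b4
    return (target - b3 * (e1 * m2 - e2 * m1)) // (e1 * m4 - e2 * m3)

# blocks B_1 (odd, Q-matrix) and B_2 (even, R-matrix) of the worked example below
print(decode_block(96, 16, 12, 16, 4, True))    # 18
print(decode_block(160, 27, 8, 7, 4, False))    # 8
```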
Now, we give an application of the above algorithm for $b>3$. \[exm2\] Let us consider the message matrix for the following message text$:$$$\text{"MIXED MODELLING FOR CRYPTOGRAPHY"}$$Using the message text, we get the following message matrix $M:$$$M=\left[ \begin{array}{cccccc} M & I & X & E & D & 0 \\ M & O & D & E & L & L \\ I & N & G & 0 & F & O \\ R & 0 & C & R & Y & P \\ T & O & G & R & A & P \\ H & Y & 0 & 0 & 0 & 0\end{array}\right] _{6\times 6}.$$**Coding Algorithm:** **Step 1.** We can divide the message matrix $M$ of size $6\times 6$ into the matrices, named $B_{i}$ $\left( 1\leq i\leq 9\right) $, from left to right, each of size $2\times 2:$$$\begin{aligned} B_{1} &=&\left[ \begin{array}{cc} M & I \\ M & 0\end{array}\right] \text{, }B_{2}=\left[ \begin{array}{cc} X & E \\ D & E\end{array}\right] \text{, }B_{3}=\left[ \begin{array}{cc} D & 0 \\ L & L\end{array}\right] , \\ B_{4} &=&\left[ \begin{array}{cc} I & N \\ R & 0\end{array}\right] \text{, }B_{5}=\left[ \begin{array}{cc} G & 0 \\ C & R\end{array}\right] \text{, }B_{6}=\left[ \begin{array}{cc} F & O \\ Y & P\end{array}\right] ,\text{ } \\ B_{7} &=&\left[ \begin{array}{cc} T & O \\ H & Y\end{array}\right] \text{, }B_{8}=\left[ \begin{array}{cc} G & R \\ 0 & 0\end{array}\right] \text{, }B_{9}=\left[ \begin{array}{cc} A & P \\ 0 & 0\end{array}\right] .\text{ }\end{aligned}$$ **Step 2.** Since $b=9>3$, we calculate $n=\left[ \left\vert \frac{b}{2}\right\vert \right] =4$.
For $n=4$, we use the following character table for the message matrix $M:$$$\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline $M$ & $I$ & $X$ & $E$ & $D$ & $0$ & $M$ & $O$ & $D$ & $E$ & $L$ & $L$ \\ \hline $16$ & $12$ & $27$ & $8$ & $7$ & $0$ & $16$ & $18$ & $7$ & $8$ & $15$ & $15$ \\ \hline $I$ & $N$ & $G$ & $0$ & $F$ & $O$ & $R$ & $0$ & $C$ & $R$ & $Y$ & $P$ \\ \hline $12$ & $17$ & $10$ & $0$ & $9$ & $18$ & $21$ & $0$ & $6$ & $21$ & $28$ & $19$ \\ \hline $T$ & $O$ & $G$ & $R$ & $A$ & $P$ & $H$ & $Y$ & $0$ & $0$ & $0$ & $0$ \\ \hline $23$ & $18$ & $10$ & $21$ & $4$ & $19$ & $11$ & $28$ & $0$ & $0$ & $0$ & $0$ \\ \hline \end{tabular}.$$ **Step 3.** We have the elements of the blocks $B_{i}$ $\left( 1\leq i\leq 9\right) $ as follows$:$$$\begin{tabular}{|l|l|l|l|} \hline $b_{1}^{1}=16$ & $b_{2}^{1}=12$ & $b_{3}^{1}=16$ & $b_{4}^{1}=18$ \\ \hline $b_{1}^{2}=27$ & $b_{2}^{2}=8$ & $b_{3}^{2}=7$ & $b_{4}^{2}=8$ \\ \hline $b_{1}^{3}=7$ & $b_{2}^{3}=0$ & $b_{3}^{3}=15$ & $b_{4}^{3}=15$ \\ \hline $b_{1}^{4}=12$ & $b_{2}^{4}=17$ & $b_{3}^{4}=21$ & $b_{4}^{4}=0$ \\ \hline $b_{1}^{5}=10$ & $b_{2}^{5}=0$ & $b_{3}^{5}=6$ & $b_{4}^{5}=21$ \\ \hline $b_{1}^{6}=9$ & $b_{2}^{6}=18$ & $b_{3}^{6}=28$ & $b_{4}^{6}=19$ \\ \hline $b_{1}^{7}=23$ & $b_{2}^{7}=18$ & $b_{3}^{7}=11$ & $b_{4}^{7}=28$ \\ \hline $b_{1}^{8}=10$ & $b_{2}^{8}=21$ & $b_{3}^{8}=0$ & $b_{4}^{8}=0$ \\ \hline $b_{1}^{9}=4$ & $b_{2}^{9}=19$ & $b_{3}^{9}=0$ & $b_{4}^{9}=0$ \\ \hline \end{tabular}.$$ **Step 4.** Now we calculate the determinants $d_{i}$ of the blocks $B_{i}:$$$\begin{tabular}{|l|} \hline $d_{1}=\det (B_{1})=96$ \\ \hline $d_{2}=\det (B_{2})=160$ \\ \hline $d_{3}=\det (B_{3})=105$ \\ \hline $d_{4}=\det (B_{4})=-357$ \\ \hline $d_{5}=\det (B_{5})=210$ \\ \hline $d_{6}=\det (B_{6})=-333$ \\ \hline $d_{7}=\det (B_{7})=446$ \\ \hline $d_{8}=\det (B_{8})=0$ \\ \hline $d_{9}=\det (B_{9})=0$ \\ \hline \end{tabular}.$$ **Step 5.** Using Step 3 and Step 4 we obtain the following matrix $F:$$$F=\left[ 
\begin{array}{cccc} 96 & 16 & 12 & 16 \\ 160 & 27 & 8 & 7 \\ 105 & 7 & 0 & 15 \\ -357 & 12 & 17 & 21 \\ 210 & 10 & 0 & 6 \\ -333 & 9 & 18 & 28 \\ 446 & 23 & 18 & 11 \\ 0 & 10 & 21 & 0 \\ 0 & 4 & 19 & 0\end{array}\right] .$$ **Step 6.** End of algorithm. **Decoding algorithm:** **Step 1.** It is known that$$Q^{4}=\left[ \begin{array}{cc} F_{5} & F_{4} \\ F_{4} & F_{3}\end{array}\right] =\left[ \begin{array}{cc} 5 & 3 \\ 3 & 2\end{array}\right] \text{.}$$ **Step 2.** It is known that$$R_{4}=RQ^{4}=\left[ \begin{array}{cc} L_{5} & L_{4} \\ L_{4} & L_{3}\end{array}\right] =\left[ \begin{array}{cc} 1 & 2 \\ 2 & -1\end{array}\right] \left[ \begin{array}{cc} 5 & 3 \\ 3 & 2\end{array}\right] =\left[ \begin{array}{cc} 11 & 7 \\ 7 & 4\end{array}\right] \text{.}$$ **Step 3.** If $i$ is an odd number, we use the Fibonacci $Q$-matrix. Now we compute the elements $e_{1}^{i},$ for $i=1,3,5,7,9,$ in order to construct the matrix $E_{i}:$$$e_{1}^{1}=116\text{, }e_{1}^{3}=35\text{, }e_{1}^{5}=50,\text{ }e_{1}^{7}=169\text{ and }e_{1}^{9}=77\text{.}$$ **Step 4.** If $i$ is an even number, we use the $R$-matrix. Now we compute the elements $e_{1}^{i},$ for $i=2,4,6,8$ in order to construct the matrix $E_{i}:$$$e_{1}^{2}=353\text{, }e_{1}^{4}=251\text{, }e_{1}^{6}=225\text{ and }e_{1}^{8}=257\text{.}$$ **Step 5.** If $i$ is an odd number, we use the Fibonacci $Q$-matrix. Now we compute the elements $e_{2}^{i},$ for $i=1,3,5,7,9,$ in order to construct the matrix $E_{i}:$$$e_{2}^{1}=72,e_{2}^{3}=21,e_{2}^{5}=30,e_{2}^{7}=105\text{ and }e_{2}^{9}=50\text{.}$$ **Step 6.** If $i$ is an even number, we use the $R$-matrix. Now we compute the elements $e_{2}^{i},$ for $i=2,4,6,8,$ in order to construct the matrix $E_{i}:$$$e_{2}^{2}=221\text{, }e_{2}^{4}=152\text{, }e_{2}^{6}=135,\text{ and }e_{2}^{8}=154\text{.}$$ **Step 7.** If $i$ is an odd number, we use the Fibonacci $Q$-matrix.
Now, we calculate the elements $x_{i}$ for $i=1,3,5,7,9.$$$\begin{aligned} (-1)^{4}96 &=&116(48+2x_{1})-72(80+3x_{1}) \\ &\Rightarrow &x_{1}=18\text{.}\end{aligned}$$$$\begin{aligned} (-1)^{4}105 &=&35(45+2x_{3})-21(75+3x_{3}) \\ &\Rightarrow &x_{3}=15\text{.}\end{aligned}$$$$\begin{aligned} (-1)^{4}210 &=&50(18+2x_{5})-30(30+3x_{5}) \\ &\Rightarrow &x_{5}=21\text{.}\end{aligned}$$$$\begin{aligned} (-1)^{4}446 &=&169(33+2x_{7})-105(55+3x_{7}) \\ &\Rightarrow &x_{7}=28\text{.}\end{aligned}$$$$\begin{aligned} (-1)^{4}0 &=&77(0+2x_{9})-50(0+3x_{9}) \\ &\Rightarrow &x_{9}=0\text{.}\end{aligned}$$ **Step 8.** If $i$ is an even number, we use the $R-$matrix. Now, we calculate the elements $x_{i}$ for $i=2,4,6,8.$$$\begin{aligned} 5(-1)^{5}160 &=&353(49+4x_{2})-221(77+7x_{2}) \\ &\Rightarrow &x_{2}=8\text{.}\end{aligned}$$$$\begin{aligned} 5(-1)^{5}\left( -357\right) &=&251(147+4x_{4})-152(231+7x_{4}) \\ &\Rightarrow &x_{4}=0\text{.}\end{aligned}$$$$\begin{aligned} 5(-1)^{5}\left( -333\right) &=&225(196+4x_{6})-135(308+7x_{6}) \\ &\Rightarrow &x_{6}=19\text{.}\end{aligned}$$$$\begin{aligned} 5(-1)^{5}0 &=&257(0+4x_{8})-154(0+7x_{8}) \\ &\Rightarrow &x_{8}=0\text{.}\end{aligned}$$ **Step 9.** We rename $x_{i}$ as follows$:$$$\begin{aligned} x_{1} &=&b_{4}^{1}=18\text{, }x_{2}=b_{4}^{2}=8\text{, }x_{3}=b_{4}^{3}=15,\text{ }x_{4}=b_{4}^{4}=0\text{, }x_{5}=b_{4}^{5}=21, \\ x_{6} &=&b_{4}^{6}=19\text{, }x_{7}=b_{4}^{7}=28\text{, }x_{8}=b_{4}^{8}=0\text{ and }x_{9}=b_{4}^{9}=0\text{.
}\end{aligned}$$ **Step 10.** We construct the block matrices $B_{i}:$$$\begin{aligned} B_{1} &=&\left[ \begin{array}{cc} 16 & 12 \\ 16 & 18\end{array}\right] \text{, }B_{2}=\left[ \begin{array}{cc} 27 & 8 \\ 7 & 8\end{array}\right] \text{, }B_{3}=\left[ \begin{array}{cc} 7 & 0 \\ 15 & 15\end{array}\right] ,\text{ } \\ B_{4} &=&\left[ \begin{array}{cc} 12 & 17 \\ 21 & 0\end{array}\right] \text{, }B_{5}=\left[ \begin{array}{cc} 10 & 0 \\ 6 & 21\end{array}\right] \text{, }B_{6}=\left[ \begin{array}{cc} 9 & 18 \\ 28 & 19\end{array}\right] ,\text{ } \\ B_{7} &=&\left[ \begin{array}{cc} 23 & 18 \\ 11 & 28\end{array}\right] \text{, }B_{8}=\left[ \begin{array}{cc} 10 & 21 \\ 0 & 0\end{array}\right] \text{, }B_{9}=\left[ \begin{array}{cc} 4 & 19 \\ 0 & 0\end{array}\right] .\text{ }\end{aligned}$$ **Step 11.** We obtain the following message matrix $M:$$$M=\left[ \begin{array}{cccccc} 16 & 12 & 27 & 8 & 7 & 0 \\ 16 & 18 & 7 & 8 & 15 & 15 \\ 12 & 17 & 10 & 0 & 9 & 18 \\ 21 & 0 & 6 & 21 & 28 & 19 \\ 23 & 18 & 10 & 21 & 4 & 19 \\ 11 & 28 & 0 & 0 & 0 & 0\end{array}\right] =\left[ \begin{array}{cccccc} M & I & X & E & D & 0 \\ M & O & D & E & L & L \\ I & N & G & 0 & F & O \\ R & 0 & C & R & Y & P \\ T & O & G & R & A & P \\ H & Y & 0 & 0 & 0 & 0\end{array}\right] .$$ **Step 12.** End of algorithm. Comparisons and Conclusion {#sec:3} ========================== In this section we give the differences between the method given in [@Tas] and the above methods. At first, in [@Tas], the number $n$ is defined by $$n=\left\{ \begin{array}{ccc} 3 & \text{,} & b\leq 3 \\ b & \text{,} & b>3\end{array}\right. \text{.}$$On the other hand, in our methods the number $n$ has been given in different ways as we have explained in Section \[sec:1\]. Because of the selection method of $n$, we work with smaller numbers to calculate the matrices $Q^{n}$, $R_{n}$ and to form the character table. Hence we obtain simpler methods than the method given in [@Tas].
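The two selection rules for $n$ can be summarized side by side; the following sketch (function names are ours) implements the rule of this paper and the rule from [@Tas] and shows that the new rule yields smaller values for large $b$:

```python
def n_new(b):
    # rule used in this paper: n = b if b <= 3, else floor(|b/2|)
    return b if b <= 3 else abs(b) // 2

def n_old(b):
    # rule from the earlier Fibonacci blocking method: n = 3 if b <= 3, else b
    return 3 if b <= 3 else b

for b in (2, 4, 9, 16):
    print(b, n_new(b), n_old(b))
```

For instance, for the $b=9$ blocks of Example \[exm2\] the new rule gives $n=4$, whereas the old rule would give $n=9$, so the entries of $Q^{n}$ and $R_{n}$ stay much smaller.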
Furthermore, if we use the minesweeper model, then the security is increased compared with the Lucas Blocking Model given in Section \[sec:1\] and the Fibonacci Blocking Model given in [@Tas]. In particular, the security can be increased further by randomly switching the decoding method of the blocks between Fibonacci and Lucas numbers. [99]{} M. Basu, B. Prasad, *The generalized relations among the code elements for Fibonacci coding theory,* Chaos Solitons Fractals 41 (2009), no. 5, 2517–2525. I. D. Ruggles, V. E. Hoggatt Jr., *A primer for the Fibonacci numbers, Part IV,* Fibonacci Quart. 1 (1963), no. 4, 65–71. H. W. Gould, *A history of the Fibonacci* $\emph{Q}$*-matrix and a higher-dimensional problem*, Fibonacci Quart. 19 (1981), no. 3, 250–257. V. E. Hoggatt, *Fibonacci and Lucas Numbers,* Houghton-Mifflin, Palo Alto (1969). T. Koshy, *Fibonacci and Lucas numbers with applications*, New York, NY: John Wiley and Sons, 2001. K. Kuhapatanakul, *The Lucas* $\emph{p}$*-matrix*, Internat. J. Math. Ed. Sci. Techn. (2015) http://dx.doi.org/10.1080/0020739X.2015.1026612 S. Prajapat, A. Jain, R. S. Thakur, *A Novel Approach For Information Security With Automatic Variable Key Using Fibonacci* $\emph{Q}$*-Matrix,* IJCCT 3 (2012), no. 3, 54–57. B. Prasad, *Coding theory on Lucas* $p$*-numbers,* Discrete Mathematics, Algorithms and Applications 8 (2016), no. 4, 17 pages. A. P. Stakhov, *A generalization of the Fibonacci Q-matrix,* Rep. Natl. Acad. Sci. Ukraine 9 (1999), 46–49. A. Stakhov, V. Massingue, A. Sluchenkov, *Introduction into Fibonacci Coding and Cryptography,* Osnova, Kharkov (1999). A. P. Stakhov, *Fibonacci matrices, a generalization of the Cassini formula and a new coding theory,* Chaos Solitons Fractals 30 (2006), no. 1, 56–66. B. S. Tarle, G. L. Prajapati, *On the information security using Fibonacci series,* International Conference and Workshop on Emerging Trends in Technology (ICWET 2011)-TCET, Mumbai, India. N. Taş, S. Uçar, N. Y. Özgür, Ö. Ö.
Kaymak, *A new coding/decoding algorithm using Fibonacci numbers*, submitted for publication. F. Wang, J. Ding, Z. Dai, Y. Peng, *An application of mobile phone encryption based on Fibonacci structure of chaos,* 2010 Second WRI World Congress on Software Engineering. [^1]: \*Corresponding author: S. UÇAR\ Balikesir University, Department of Mathematics, 10145 Balikesir, TURKEY\ e-mail: sumeyraucar@balikesir.edu.tr
--- title: Calibration and Reconstruction Performance of the HAWC Observatory --- The HAWC Observatory {#the-hawc-observatory .unnumbered} ==================== Gamma-ray astronomy has become a field of rapid progress, with imaging air Cherenkov telescopes and satellite detectors continuing to probe the sky at MeV to TeV energies. In a complementary approach, the Milagro Observatory [@bib:Milagro] has proven that the water Cherenkov technique allows a ground-based TeV gamma-ray detector to operate with a high duty cycle and a wide field of view. The HAWC Observatory is being built as Milagro’s successor, based on the same principle but surpassing Milagro’s sensitivity by a factor of 15; see [@bib:HAWCgrb],[@bib:ICRC13general],[@bib:ICRC13sensi] for details. The higher altitude of 4,100 m above sea level at the HAWC site on the Sierra Negra volcano near Puebla, Mexico, improves the low-energy response compared to Milagro, widening the accessible energy range to approximately 50 GeV to 100 TeV. HAWC’s modular design comprises a total of 300 water Cherenkov detectors (WCDs), each equipped with three 8-inch PMTs and one central 10-inch PMT. These detectors record the times and multiplicities of Cherenkov photons from high-energy particles passing through the array and are used to reconstruct directions and energies of air shower primaries. HAWC has a duty cycle of $>90$ % due to its fully enclosed WCD design and thus serves as a powerful instrument to monitor and survey the TeV sky. In September 2012, the first 30 WCDs of HAWC became operational, making it possible to start data taking while construction continues. The experiment will transition smoothly to full scientific operation with 100 WCDs in August 2013 and 300 WCDs in the summer of 2014. The main data acquisition (DAQ) system is composed of time-to-digital converters (TDCs) with one channel for each PMT.
When a photon-induced pulse in a PMT crosses the low hardware threshold ($\sim0.35$ PE) or a high threshold ($\sim8$ PE), hit times are recorded as TDC counts ($10.24$ counts $=1$ ns) relative to a trigger time. The time-over-threshold (ToT) of a pulse for the low or high threshold can be converted into charge values, as described in section \[charge\]. The calibrated charges of all PMT pulses that are part of an air shower event are used as weights to locate the core of the shower with a Gaussian fit of the charge distribution. This core fit serves as input to a second algorithm, the angle fit, that calculates the curved shower front based on the photon arrival times and thus yields a direction result for the event. The ability to reconstruct this direction of an air shower primary depends crucially on a precise time record for each PMT pulse. Statistical fluctuations introduce an irreducible spread of photon hit times on the order of a few ns around the passing shower front, setting the goal of calibrating the pulse timing to at least this accuracy. The following sections discuss how these charge and timing calibrations are achieved in HAWC with a laser system. The Laser Calibration System {#the-laser-calibration-system .unnumbered} ============================ The light source components of the HAWC calibration system were installed between December 2012 and March 2013 in the electronics facility in the center of the array. The calibration system relies on short laser pulses with a width of 300 ps, created with a [*Teem Photonics*]{} laser at a wavelength of 532 nm. An optical splitter cube located next to the laser creates a separate [*return light path*]{}, while the primary, or [*light-to-tanks*]{}, path leads through a series of three filter wheels. Each of these wheels carries six neutral-density filter disks of different absorption strength that can be cycled to attenuate the laser light.
Aside from an open and an opaque setting, the range of filter wheel combinations covers optical depths from $0.2$ to $8.0$. The primary light-to-tanks path after the filter wheels fans out into optical fibers via [*Dicon*]{} switches that can distribute the light into any of 150 separate channels, with up to 10 channels being illuminated simultaneously. At an optical patch panel each channel is connected to a splitter that passes the same laser pulses into a pair of long, 170 m optical fibers going out to a connection box near a pair of WCDs. Light from the two outputs is then directed through short fibers into the two WCDs and exits through a Teflon diffuser, hanging from a float $\sim3$ m above the central PMT and illuminating all PMTs at the bottom of the tank. The return light path, splitting off before the filter wheels, triggers a [*Thorlabs*]{} photo diode to generate a start time record for each laser pulse and fans out into 150 separate optical fibers that go out to the connection boxes near each tank pair. Inside each box, the return light path connects to a 30 m fiber, replicating the length of the fiber connection between box and tank, and loops back through another 170 m fiber into the laser room. Here, each channel can be selected individually via a [*Dicon*]{} switch to measure the laser pulse return time with a [*Hamamatsu*]{} photomultiplier tube. Knowledge of these individual time constants of the detector array makes it possible to monitor even local variations, for example due to temperature-dependent expansion or contraction of the fibers. A schematic of the laser system layout can be found in [@bib:ICRC11cal].
Calibration Method {#calibration-method .unnumbered} ================== Data Taking {#data-taking .unnumbered} ----------- For a calibration run, the system cycles through a wide range of optical filter wheel combinations that vary the laser intensity over more than 4 orders of magnitude up to several thousand photoelectrons (PE). 2000 laser pulses at each intensity setting generate sufficient statistics for charge and slewing calibrations. The laser can be operated with frequencies up to 500 Hz, but initial studies at the HAWC test WCD in Fort Collins, Colorado, US, showed that operation at more than 200 Hz can lead to undesirable intensity variation. Using a frequency of 200 Hz and generating 2000 laser hits per filter wheel setting, a calibration cycle with 63 different intensities takes approximately 15 minutes. This has to be repeated for 15 switch settings, each illuminating up to 10 tank pairs simultaneously. The duration of a calibration run for the whole array will thus be less than 4 hours. Furthermore, these runs do not increase the experiment’s dead time significantly, since data taking is done with the normal TDC DAQ which receives electronic trigger flags for the start time of each laser pulse. Only a window of $\sim 10~\mu$s around the PMT response to each laser hit is tagged and excluded from air shower reconstruction. Calibration runs are scheduled on a weekly basis and long-term studies of the calibration stability will show whether longer intervals are acceptable in the future. \[timing\] Timing Calibration {#timing-timing-calibration .unnumbered} ----------------------------- The photo diode measurement of the laser pulse start time provides the trigger signal for the DAQ. A separate [*Berkeley Nucleonics*]{} counter stores the time between this start signal and the second signal produced by the light from the return loop.
After averaging, half this time difference is a reliable measurement for the time between the calibration trigger and the instance the laser photons reach the diffuser inside the tank. Subtracting this delay and the time for traversing $\sim3$ m of water from the response time between trigger and TDC pulse measurement yields the time offset due to electronics for each individual PMT. The correction of these time constants for each pulse time that is part of an air shower event reduces timing mismatches and improves the angle fit of air shower fronts. While not all fiber connections are installed, an alternative method to obtain time offset corrections without laser signals is used. By reconstructing a large number of air showers, a systematic shift of pulse times relative to the fitted shower front can be calculated for each individual PMT channel. The details and results from this approach are discussed in the separate contribution [@bib:ICRC13timecal]. The exact relationship between the impact of a photon on the PMT cathode and the subsequent TDC record of an electronic threshold crossing depends also on the pulse shape and its amplitude. A pulse with a higher charge, and therefore larger ToT, has a shorter rise time and thus a reduced intrinsic delay, called slewing, for the TDC time record when compared to a pulse of lower energy. Using the wide intensity range produced in a laser run, the relation between either low threshold or high threshold ToT measurements and the slewing offset can be mapped as a two-dimensional histogram. After subtracting the constant time offset, an empirical fit produces slewing curves for both thresholds for each PMT. In the performance analysis in section \[performance\], individual slewing results were not yet available for all active HAWC PMTs. Instead, average low and high threshold slewing curves were used, derived from data collected at the HAWC test WCD. 
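Schematically, the constant offset and the slewing correction described above combine per PMT as follows. This is our sketch only: the numbers, the refractive index, and the toy slewing curve are illustrative stand-ins, not HAWC calibration constants, which come from per-channel empirical fits.

```python
# Sketch of the per-PMT time calibration chain; all values illustrative.
N_WATER = 1.33                         # approximate refractive index of water
NS_PER_M = 1e9 * N_WATER / 3e8         # light travel time in water, ns per m

def electronics_offset(round_trip_ns, water_path_m, response_ns):
    """Half the return-loop round trip approximates the trigger-to-diffuser
    delay; subtracting it and the water transit from the trigger-to-TDC
    response leaves the electronics-only time offset."""
    return response_ns - (round_trip_ns / 2.0 + water_path_m * NS_PER_M)

def toy_slew(tot_ns):
    """Toy slewing curve: larger ToT -> shorter rise time -> smaller delay."""
    return 10.0 / (1.0 + tot_ns / 50.0)

def calibrated_time(raw_ns, offset_ns, tot_ns):
    """Apply the constant offset, then the ToT-dependent slewing correction."""
    return raw_ns - offset_ns - toy_slew(tot_ns)

offset = electronics_offset(round_trip_ns=1700.0, water_path_m=3.0,
                            response_ns=1100.0)
t_cal = calibrated_time(raw_ns=1200.0, offset_ns=offset, tot_ns=100.0)
```

In the real system the slewing curve is fitted separately for the low and high thresholds of each PMT.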
Details and examples of the slewing procedure and results are presented in [@bib:ICRC13timecal]. \[charge\] Charge Calibration {#charge-charge-calibration .unnumbered} ----------------------------- ![image](icrc2013-0566-01){width="90.00000%"} In HAWC, following the techniques used in the predecessor Milagro, the charge, and thus the weight, of individual PMT pulses in an air shower event is measured as a ToT signal for two thresholds. Both values have to be calibrated with the laser system. Cycling through 63 different attenuation settings provides a wide laser intensity range that is monitored with a [*LaserProbe*]{} radiometer. It receives light through a splitting cube in the light-to-tanks path and is usually operated in a mode that averages over the 2000 laser pulses at a fixed filter wheel setting. The occupancy $\eta=m/n$ for each laser intensity setting is obtained by counting the number of TDC responses $m$ in the predetermined coincidence time window after the trigger signal, divided by the number of laser pulses $n$. The averaged occupancy at a fixed intensity $\lambda$ in PE is equivalent to the Poisson probability of producing at least a $1$ PE pulse, $\eta = 1 - \exp(-\lambda)$ . By substituting ${\lambda = a \cdot r_i/r_n}$, the measured occupancies are expressed as a function of the laser radiometer measurements $r_i$, normalized by an arbitrary value $r_n$. A fit of the occupancies to low intensities $\leq \sim2$ PE is performed to obtain the $a$ parameter, avoiding the regime where the uncertainty on $\lambda$ diverges. The conversion factor $a$ can then be used to translate any relative radiometer measurement into a mean PE intensity $\lambda$. A Poisson distribution for each such $\lambda$, smeared with the energy resolution of the PMTs, produces an expected distribution of numbers of PE ($\mbox{n}_{\mbox{\tiny PE}}$) that can be matched with quantiles of a histogram of the TDC ToT measurements at that intensity. 
For either the low or the high threshold response, all these $\mbox{n}_{\mbox{\tiny PE}}$-ToT pairs are merged together in profile histograms and fitted with an empirical function. Using the calibration data from cycling through the full intensity spectrum of $<0.1$ PE to $>1000$ PE (exact values are channel dependent) thus yields individual low and high threshold charge calibration curves for each PMT. The fitted ToT-PE conversions are applied analytically to every PMT pulse that is part of a triggered air shower event to provide the calibrated charge as an accurate weight for the reconstruction algorithms. Whenever a pulse crosses the high threshold, the corresponding ToT is the more reliable charge estimator and is used instead of the low threshold ToT. All PMTs were characterized before installation at the site and are grouped in batches with voltages chosen to produce approximately the same gain of $1.4\cdot 10^7$. This procedure also results in charge calibration curves that are generally well aligned when comparing individual PMTs, see Fig. \[fig::chargecurve\] for some examples. This similarity allows for use of an average charge calibration curve, derived from data collected at the HAWC WCD test site, that will be replaced with individual curves when the completion of the optical fiber network allows for a calibration of all PMTs at the HAWC site. The average charge calibration was used for the performance analysis shown in section \[performance\]. \[performance\] Calibration Performance {#performance-calibration-performance .unnumbered} ======================================= ![image](icrc2013-0566-02){width="85.00000%"} The first data collected with 30 HAWC WCDs has limited sensitivity and statistics for gamma ray observations, but a verification of the calibration performance can be achieved by mapping air shower event directions, dominated by charged primary particles, around the position of the moon. 
The moon blocks these cosmic rays and produces a deficit. A detailed analysis of this observation is presented in [@bib:ICRC13moon]. Here, a qualitative comparison of the position and shape of the moon shadow deficit for data from 30 HAWC WCDs with different processing conditions highlights the improvement of event reconstruction through laser calibration. A subset of data collected with the partial HAWC array between September and December 2012 with a total live time of 24.8 days was processed with four different calibration settings: - (a) All charges set to 1 PE and no timing calibration; - (b) Charge calibration applied with average curve; - (c) Timing calibration applied with average slewing curve and individual shower residual offsets, all charges set to 1 PE; - (d) Charge calibration as in (b) and timing calibration as in (c) applied. For these data sets, all air shower events were reconstructed. Besides requiring that the angle fit succeeded, the only cut applied to the results was removing events with fewer than 32 PMT channels participating, to exclude low-energy showers that reduce the deficit significance. To correct for a known inaccuracy in the survey of PMT positions, an overall shift was applied to all direction results, chosen in such a way that the maximum of the angular shower distribution realigns with the local zenith. A check was performed to confirm that this correction slightly increases the deficit significance but does not strongly affect the shape of the moon shadow in any of the four cases. The results for the four cases are shown in Fig. \[fig:maps\] as maps of binned statistical significances with a radial smoothing of $3^{\circ}$ applied. In the first map (a), no unambiguous moon shadow is visible and no deficit with a significance of at least $-5.0 \sigma$ can be found.
After applying only the charge calibration (b), a deficit (peak significance $-5.0 \sigma$) is visible but is both of irregular shape and offset from the moon position by several degrees. Fig. \[fig:maps\] (c) shows that using only the timing calibration and no charge calibration deepens the deficit only slightly ($-5.1 \sigma$) but produces a clearer image of the moon shadow closer to the actual location. This was expected due to the importance of individual pulse times for fitting the shower front. The final comparison with the map (d) in which both charge and timing calibrations are applied reveals a much more pronounced deficit ($-6.4 \sigma$) and a more symmetric and well centered moon shadow. Counting events over the full sky coverage, the last map contains $\sim 1.1 \cdot 10^9$ events and thus $\sim 3 \cdot 10^8$ more than the maps from cases (a), (b) and (c), because more shower angle fits succeed when the full calibrations are used. Conclusions {#conclusions .unnumbered} =========== The main components of the HAWC laser calibration system are installed and operational and calibration runs can be performed without significantly increasing the array’s dead time. The experience gained from a WCD test setup and systematic time residuals derived from shower fits made it possible to calibrate early HAWC data even before calibration results for all individual PMT channels are available. Both charge and timing calibrations significantly improve air shower reconstruction as is shown here based on the mapping of the moon shadow deficit for a subset of early HAWC data. The optical fiber network will continue to grow with the HAWC array and provides the means for regular calibrations of all PMTs to guarantee a strong and stable performance in gamma-ray observations.
Acknowledgments {#acknowledgments .unnumbered} =============== We acknowledge the support from: US National Science Foundation (NSF); US Department of Energy Office of High-Energy Physics; The Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory; Consejo Nacional de Ciencia y Tecnología (CONACyT), México; Red de Física de Altas Energías, México; DGAPA-UNAM, México; and the University of Wisconsin Alumni Research Foundation. R. Atkins et al., Astrophys. J., [**608**]{} 680 (2004). A. Abeysekara et al. (HAWC Collaboration), Astropart. Phys., [**35**]{} 641 (2012). M. Mostafa et al. (HAWC Collaboration), these proc. J. Pretz et al. (HAWC Collaboration), these proc. P. Huentmeyer et al. (HAWC Collaboration), Proc. of ICRC 2011 doi:10.7529/ICRC2011/V09/0767. H. Ayala Solares et al. (HAWC Collaboration), these proc. D. Fiorino et al. (HAWC Collaboration), these proc.
**The Elliptic Function in Statistical Integrable Models** Kazuyasu Shigemoto[^1] **Abstract** We examine the group-theoretical reason why various two dimensional statistical integrable models, such as the Ising model, the chiral Potts model and the Belavin model, become integrable. The symmetry of these integrable models is $SU(2)$, and the Boltzmann weight can be parametrized by the elliptic function in many cases. In this paper, we examine the connection between the $SU(2)$ symmetry and the elliptic function in statistical integrable models. Keywords: elliptic function, elliptic theta function, Ising model, chiral Potts model, Belavin model, Heisenberg algebra [**Contents**]{}\ I Introduction\ II The Ising model and the elliptic function\ III The chiral Potts model and the elliptic function\ IV The Belavin model and the elliptic theta function\ V The Heisenberg algebra\ VI Summary and discussion\ Introduction ============ The two dimensional integrable statistical models are classified into three types: the spin model, the vertex model and the face model[@Baxter]. The typical $N$-state spin model is the chiral Potts model[@chiral], which includes the Ising model[@Onsager] as the special $N=2$ state model. The origin of the integrability of this non-linear model is a Lie group symmetry, a generalization of $SU(2)$, namely the cyclic representation of $SU(2)$[@H-S]. In this model, we can parametrize the Boltzmann weight by an elliptic function that has the difference property only in the $N=2$ Ising case. The typical $Z_N \times Z_N$ vertex model is the Belavin model[@Belavin], which includes Baxter’s 8-vertex model[@Baxter1] as the special $Z_2 \times Z_2$ vertex model.
In this model, the origin of the integrability comes from the symmetry of the cyclic representation of $SU(2)$[@Tracy; @Cherednik]. In this model, we can parametrize the Boltzmann weight by the elliptic theta function with characteristics. The typical face model (IRF model) is the $A^{(1)}_{N-1}$ model[@Jimbo], which is a generalization of Baxter’s IRF model[@Baxter2]. This face model, however, is equivalent to the Belavin model through the vertex-face correspondence using the intertwining vector[@Jimbo]. Thus the origin of the integrability of the two dimensional statistical models comes from the cyclic $SU(2)$ symmetry, and we have the elliptic representation of the Boltzmann weight in many important cases. We therefore expect a correspondence between the cyclic $SU(2)$ symmetry and the elliptic function; in other words, we expect that the symmetry of the elliptic function is the cyclic $SU(2)$ symmetry[@Mumford]. In this paper, we examine various statistical integrable models in the context of this correspondence between the cyclic $SU(2)$ symmetry and the elliptic function. The Ising model and the elliptic function ========================================= The star-triangle relation (the integrability condition) in the Ising model is written in the form $$\begin{aligned} &&\sum_{d=\pm 1} \exp\{d(L_1 a+K_2 b +L_3 c) \} =R\exp(K_1bc +L_2 ca +K_3 ab ) .
\label{e2-1}\end{aligned}$$ This relation is written in the form $$\begin{aligned} &&\exp( L^*_3\sigma_x ) \exp( K_2\sigma_z ) \exp( L^*_1\sigma_x ) =\exp( K_1\sigma_z ) \exp(L^*_2\sigma_x ) \exp( K_3\sigma_z ) , \label{e2-2}\end{aligned}$$ where we use $\tanh X^*=\exp(-2 X )$ , which gives $\sinh 2X \sinh 2X^*=1$.\ If $\{L_i,K_i\}(i=1,2,3)$ satisfies the above integrable condition, we have $$\begin{aligned} &&\exp(2L^*_3 J_x ) \exp( 2K_2 J_z ) \exp( 2L^*_1 J_x) =\exp( 2K_1 J_z ) \exp( 2L^*_2 J_x ) \exp( 2K_3 J_z ) , \label{e2-3}\end{aligned}$$ for arbitrary spin of $SU(2)$ with the commutation relation $$\begin{aligned} &&[J_x,J_y]=i J_z,\quad [J_y,J_z]=i J_x,\quad [J_z,J_x]=i J_y . \label{e2-4}\end{aligned}$$ If we define $J_{\pm}=J_x \pm i J_y$, the above commutation relation is written in the form $$\begin{aligned} &&[J_z,J_{\pm}]=\pm J_{\pm},\quad [J_+,J_-]=2J_z . \label{e2-5}\end{aligned}$$ We will show that the integrability condition does not depend on the magnitude of the spin of $SU(2)$ in the following way. We denote $U$ and $V$ as the left-hand side and the right-hand side of the integrability condition respectively, that is, $$\begin{aligned} U=\exp( 2L^*_3 J_x ) \exp(2K_2 J_z ) \exp( 2L^*_1 J_x ),\ V=\exp( 2K_1 J_z ) \exp( 2L^*_2 J_x ) \exp( 2K_3 J_z ) . 
\label{e2-6}\end{aligned}$$ From the relation $$\begin{aligned} UJ_xU^{-1}=VJ_xV^{-1},\ UJ_yU^{-1}=VJ_yV^{-1}, \ UJ_zU^{-1}=VJ_zV^{-1} , \label{e2-7}\end{aligned}$$ we have $$\begin{aligned} && \cosh 2K_k=\cosh 2K_i \cosh 2K_j +\sinh 2K_i \sinh 2K_j \cosh 2L^*_k , \label{e2-8}\\ && \cosh 2L^*_k=\cosh 2L^*_i \cosh 2L^*_j +\sinh 2L^*_i \sinh 2L^*_j \cosh 2K_k , \label{e2-9}\\ && \sinh 2K_k \cosh 2L^*_i=\cosh 2K_i \sinh 2K_j +\sinh 2K_i \cosh 2K_j \cosh 2L^*_k , \label{e2-10}\\ && \sinh 2L^*_k \cosh 2K_i=\cosh 2L^*_i \sinh 2L^*_j +\sinh 2L^*_i \cosh 2L^*_j \cosh 2K_k , \label{e2-11}\\ && \sinh 2L^*_i \sinh 2L^*_j+\cosh 2L^*_i \cosh 2L^*_j \cosh 2K_k \nonumber\\ &&=\sinh 2K_i \sinh 2K_j+\cosh 2K_i \cosh 2K_j \cosh 2L^*_k , \label{e2-12}\\ &&\frac{\sinh 2K_i}{\sinh 2L^*_i}=\frac{\sinh 2K_j}{\sinh 2L^*_j} , \label{e2-13}\\ &&(k=2,\ i \ne j =1,3) , \nonumber\end{aligned}$$ where we use only the commutation relation Eq.(\[e2-4\]). Conversely, if Eq.(\[e2-7\]) is satisfied, $$\begin{aligned} [V^{-1}U,J_x]=0, \ [V^{-1}U,J_y]=0, \ [V^{-1}U,J_z]=0 , \label{e2-14}\end{aligned}$$ which gives $V^{-1}U={ \rm const.} 1 $ by Schur’s lemma. If we consider the special case $K_1=0$, $K_2=0$, $K_3=0$, $L^*_1=0$, $L^*_2=0$, $L^*_3=0$, the proportionality constant in $V^{-1} U$ becomes $1$, and we have the integrability condition $U=V$, which is independent of the magnitude of the spin.\ We parametrize the Boltzmann weight of the Ising model with the elliptic function. In that parametrization, we adopt the following symmetry ansatz:\ i) $K_i \Leftrightarrow L_i$ corresponds to $u \Leftrightarrow K-u$ of the argument of the elliptic function.\ ii) $K_i \Leftrightarrow K^*_i$ corresponds to ${{\rm sn}}(u) \Leftrightarrow {{\rm cn}}(u)$\ iii) $L_i \Leftrightarrow L^*_i$ corresponds to ${{\rm sn}}(K-u) \Leftrightarrow {{\rm cn}}(K-u)$\ We use Eq.(\[e2-13\]) as the starting point.
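As an independent numerical check (ours, not part of the paper's argument), the spin-$1/2$ identity $U=V$ can be verified with SciPy's Jacobi elliptic functions, using the parametrization derived below, $\cosh 2K_i=1/{{\rm sn}}(u_i)^{-1}$-type relations of Eqs.(\[e2-17\])-(\[e2-18\]) together with $u_2=u_1+u_3$ of Eq.(\[e2-21\]):

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import ellipj, ellipk

# Numerical check that the spin-1/2 star-triangle identity U = V holds
# under cosh 2K_i = 1/cn(u_i), cosh 2L*_i = 1/sn(K - u_i), u_2 = u_1 + u_3.
m = 0.5                                   # elliptic parameter m = k^2
Kc = float(ellipk(m))                     # quarter period K
u1, u3 = 0.3, 0.4
u2 = u1 + u3                              # the constraint of Eq.(2-21)

def couplings(u):
    _, cn, _, _ = ellipj(u, m)
    snK = ellipj(Kc - u, m)[0]
    K = 0.5 * np.arccosh(1.0 / cn)        # cosh 2K = 1/cn(u)
    Ls = 0.5 * np.arccosh(1.0 / snK)      # cosh 2L* = 1/sn(K - u)
    return K, Ls

K1, Ls1 = couplings(u1)
K2, Ls2 = couplings(u2)
K3, Ls3 = couplings(u3)

sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # sigma_x = 2 J_x for spin 1/2
sz = np.array([[1.0, 0.0], [0.0, -1.0]])  # sigma_z = 2 J_z for spin 1/2
U = expm(Ls3 * sx) @ expm(K2 * sz) @ expm(Ls1 * sx)
V = expm(K1 * sz) @ expm(Ls2 * sx) @ expm(K3 * sz)
assert np.allclose(U, V)                  # star-triangle relation holds
```

Both sides agree to machine precision, and $\det U=\det V=1$ since the generators are traceless, consistent with the proportionality constant being exactly $1$.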
Then we parametrize $$\begin{aligned} &&\sinh 2K_i=F({{\rm sn}}(u_i),{{\rm cn}}(u_i)), \nonumber\\ &&\sinh 2K^*_i=\frac{1}{\sinh 2K_i}=F({{\rm cn}}(u_i),{{\rm sn}}(u_i)) =\frac{1}{F({{\rm sn}}(u_i),{{\rm cn}}(u_i))} , \nonumber\\ && \sinh 2L_i=F({{\rm sn}}(K-u_i),{{\rm cn}}(K-u_i)), \nonumber\\ &&\sinh 2K_i \sinh 2L_i=F({{\rm sn}}(u_i),{{\rm cn}}(u_i))F({{\rm sn}}(K-u_i),{{\rm cn}}(K-u_i)) =(i-{\rm independent}) . \nonumber\end{aligned}$$ From the relation $\sinh 2K_i \sinh 2K^{*}_i=F({{\rm sn}}(u_i),{{\rm cn}}(u_i))F({{\rm cn}}(u_i),{{\rm sn}}(u_i))=1$, we take $$\begin{aligned} \sinh 2K_i=F({{\rm sn}}(u_i),{{\rm cn}}(u_i))=\frac{{{\rm sn}}(u_i)}{{{\rm cn}}(u_i)} . \label{e2-15}\end{aligned}$$ (Another possibility is $\sinh 2K_i=F({{\rm sn}}(u_i),{{\rm cn}}(u_i))={{\rm cn}}(u_i)/{{\rm sn}}(u_i)$ but we do not take this possibility here.)\ In this representation, we have $$\begin{aligned} &&\frac{\sinh 2K_i}{\sinh 2L^*_i}=\sinh 2K_i \sinh 2L_i =\frac{{{\rm sn}}(u_i)}{{{\rm cn}}(u_i)}\frac{{{\rm sn}}(K-u_i)}{{{\rm cn}}(K-u_i)} =\frac{1}{k'}={\rm const.} . \label{e2-16}\end{aligned}$$ Then we take $$\begin{aligned} &&\cosh 2K_i=\frac{1}{{{\rm cn}}(u_i)} , \quad \sinh 2K_i=\frac{{{\rm sn}}(u_i)}{{{\rm cn}}(u_i)} , \label{e2-17}\\ &&\cosh 2L^*_i=\frac{1}{{{\rm sn}}(K-u_i)} , \quad \sinh 2L^*_i=\frac{{{\rm cn}}(K-u_i)}{{{\rm sn}}(K-u_i)}, \label{e2-18}\\ &&(i=1,2,3) . \nonumber\end{aligned}$$ From Eq.(\[e2-8\]) and Eq.(\[e2-9\]), we have $$\begin{aligned} &&(1-\sinh 2K_1 \sinh 2K_3 \sinh 2L^*_1 \sinh 2L^*_3) \cosh 2K_2 \nonumber\\ &&=\cosh 2K_1 \cosh 2K_3 +\sinh 2K_1 \sinh 2K_3 \cosh 2L^*_1 \cosh 2L^*_3 . 
\label{e2-19}\end{aligned}$$ Using the above parametrization Eq.(\[e2-17\]) and Eq.(\[e2-18\]), Eq.(\[e2-19\]) is written in the form $$\begin{aligned} &&{{\rm cn}}(u_2)=\frac{{{\rm cn}}^2(u_1) {{\rm cn}}^2(u_3)-{k'}^2 {{\rm sn}}^2(u_1) {{\rm sn}}^2(u_3)} {{{\rm cn}}(u_1) {{\rm cn}}(u_3)+ {{\rm sn}}(u_1){{\rm dn}}(u_1) {{\rm sn}}(u_3){{\rm dn}}(u_3)} \nonumber\\ &&=\frac{{{\rm cn}}(u_1) {{\rm cn}}(u_3)- {{\rm sn}}(u_1){{\rm dn}}(u_1) {{\rm sn}}(u_3){{\rm dn}}(u_3)} {1-k^2 {{\rm sn}}^2(u_1) {{\rm sn}}^2(u_3)} . \label{e2-20}\end{aligned}$$ By the addition theorem of the elliptic function, we obtain the relation among $u_1,\ u_2,\ u_3$ in the form $$\begin{aligned} u_2=u_1+u_3 . \label{e2-21}\end{aligned}$$ Using Eq.(\[e2-17\]), Eq.(\[e2-18\]), and Eq.(\[e2-21\]), we have checked that Eq.(\[e2-8\])-Eq.(\[e2-13\]) are really satisfied. The chiral Potts model and the elliptic function ================================================ The chiral Potts model is the integrable $N$-state spin model. The star-triangle relation(integrable condition) in the chiral Potts model is given by $$\begin{aligned} &&\sum^{N-1}_{d=0} \overline{W}_{qr}(b-d) W_{pr}(a-d) \overline{W}_{pq}(d-c) =R_{pqr} W_{pq}(a-b) \overline{W}_{pr}(b-c) W_{qr}(a-c), \label{e3-1}\\ && W_{pq}(k)=\prod_{l=1}^{k} \left( \frac{d_p b_q - a_p c_q \omega^{l}} {b_p d_q - c_p a_q \omega^{l}} \right), \quad \overline{W}_{pq}(k)=\prod_{l=1}^{k} \left( \frac{\omega a_p d_q - d_p a_q \omega^{l}} {c_p b_q - b_p c_q \omega^{l}} \right), \label{e3-2}\\ &&a^N_p+k' b^N_p=k d^N_p, \quad k' a^N_p+b^N_p=k c^N_p . 
\label{e3-3} \end{aligned}$$ This condition is rewritten into a nice form, which is expressed with the Lie group element in the form [@H-S] $$\begin{aligned} &&T_{pq} S_{pr} T_{qr}=S_{qr} T_{pr} S_{pq}, \label{e3-4} \\ &&T_{pq}=\sum_{k=1}^{N} \widetilde{W}_{pq}(k) Z^k, \quad S_{pq}=\sum_{k=1}^{N} \overline{W}_{pq}(k) X^k, \label{e3-5} \\ &&\widetilde{W}_{pq}(k)=\sum^{N-1}_{l=0}\omega^{kl}W_{pq}(l) = \prod_{l=1}^{k} \left( \frac{b_p d_q - d_p b_q \omega^{l-1}} {c_p a_q - a_p c_q \omega^{l}} \right), \label{e3-6}\\ &&\overline{W}_{pq}(k)=\prod_{l=1}^{k} \left( \frac{a_p d_q \omega - d_p a_q \omega^{l}} {c_p b_q - b_p c_q \omega^{l}} \right), \label{e3-7}\end{aligned}$$ where $Z$ and $X$ are elements of the cyclic representation of $SU(2)$, which satisfy $Z X=\omega X Z$, $(\omega=e^{2 \pi i/N})$. In order to show Eq.(\[e3-4\]), we have used the following relation $$\begin{aligned} &&P(Z)(\alpha Z +\beta)X=(\gamma Z +\delta) X P(Z), \label{e3-8} \\ &&P(Z)=\sum_{k=1}^{N} p_k Z^k, \quad p_k=\prod_{l=1}^{k} \left(\frac{\gamma \omega -\alpha \omega^{l}} {\beta \omega^{l} -\delta} \right) , \label{e3-9}\\ &&Q(X)(\alpha X +\beta)Z=(\gamma X +\delta) Z Q(X), \label{e3-10} \\ &&Q(X)=\sum_{k=1}^{N} q_k X^k, \quad q_k=\prod_{l=1}^{k} \left( \frac{\gamma \omega^{l-1} -\alpha} {\beta -\delta \omega^{l}} \right). \label{e3-11}\end{aligned}$$ The Ising model --------------- The Ising model is the special $N=2$ case of the chiral Potts model. The parametrization, which satisfies Eq.(\[e3-3\]) and has the difference property $W_{p,q}(n)=f_n(p-q)$, $\widetilde{W}_{p,q}(n)=g_n(p-q)$, is given by[@Perk] $$\begin{aligned} && (a_p,b_p,c_p,d_p) =(\theta_{11}(p/2K),\theta_{10}(p/2K),\theta_{00}(p/2K),\theta_{01}(p/2K)) . 
\label{e3-12} \end{aligned}$$ Using the relation, $$\begin{aligned} {{\rm sn}}(u)=-\frac{1}{\sqrt{k}} \frac{\theta_{11}(u/2K)}{\theta_{01}(u/2K)} , \quad {{\rm cn}}(u)=\sqrt{\frac{k'}{k}} \frac{\theta_{10}(u/2K)}{\theta_{01}(u/2K)} , \quad {{\rm dn}}(u)=\sqrt{k'}\frac{\theta_{00}(u/2K)}{\theta_{01}(u/2K)} , \label{e3-13} \end{aligned}$$ we obtain the Ising model from the $N=2$ chiral Potts model by using the addition theorem of the elliptic function in the form $$\begin{aligned} &&\overline{W}_{pq}(1)/\overline{W}_{pq}(0)=\frac{\sinh{L^*}}{\cosh{L^*}} =e^{-2L} \nonumber\\ &&=\frac{-a_p d_q +d_p a_q}{c_p b_q+b_p c_q} =\frac{{{\rm dn}}(p-q)-{{\rm cn}}(p-q)}{k' {{\rm sn}}(p-q)} , \label{e3-14}\\ &&\widetilde{W}_{pq}(1)/\widetilde{W}_{pq}(0)=\frac{\sinh{K}}{\cosh{K}} =e^{-2K^*} \nonumber\\ &&=\frac{b_p d_q -d_p b_q}{c_p a_q+a_p c_q} =\frac{1-{{\rm cn}}(p-q)}{{{\rm sn}}(p-q)}. \label{e3-15}\end{aligned}$$ The Belavin model and the elliptic theta function ================================================= The Belavin model is the integrable $Z_N \times Z_N$ vertex type model. The Yang-Baxter equation, which is the integrability condition in this vertex model, is given by $$\begin{aligned} S_{12}(u_1-u_2) S_{13}(u_1-u_3) S_{23}(u_2-u_3) =S_{23}(u_2-u_3) S_{13}(u_1-u_3) S_{12}(u_1-u_2) . 
\label{e4-1}\end{aligned}$$ The Boltzmann weight $S(u)$ is given by $$\begin{aligned} &&S(u)=\sum^{N-1}_{\alpha_1,\alpha_2=0} w_{\alpha_1, \alpha_2}(u) I_{\alpha_1, \alpha_2} \otimes I^{-1}_{\alpha_1, \alpha_2} , \label{e4-2}\\ &&w_{\alpha_1, \alpha_2}(u)=\frac{1}{N} \frac{\sigma_{\alpha_1, \alpha_2}(u+\eta/N)\sigma_{0,0}(\gamma \eta)} {\sigma_{\alpha_1, \alpha_2}(\eta/N)\sigma_{0,0}(u+\gamma \eta)} , \label{e4-3}\\ &&\sigma_{\alpha_1, \alpha_2}(u) =\theta_{\frac{\alpha_2}{N}+\frac{1}{2}, \frac{\alpha_1}{N} +\frac{1}{2}}(u,\tau) , \label{e4-4}\\ &&I_{\alpha_1, \alpha_2} =Z^{\alpha_1} X^{\alpha_2} , \label{e4-5}\end{aligned}$$ where $Z$ and $X$ are elements of the cyclic representation of $SU(2)$ in the form $$\begin{aligned} &&Z=\left( \begin{array} {ccccc} 1 & & & & \\ & \omega & & & \\ & & \omega & & \\ & & & \cdots & \\ & & & & \omega^{N-1} \\ \end{array} \right), \quad X=\left( \begin{array} {ccccc} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ & & \cdots & & \\ 0 & 0 & \cdots & 1 & 0 \\ \end{array} \right), \label{e4-6}\\ &&ZX=\omega XZ, \quad \omega=\exp( 2 \pi i /N ), \quad \omega^{N}=1. \label{e4-7}\end{aligned}$$ The index independent factor $\sigma_{0,0}(\gamma \eta)/\sigma_{0,0}(u+\gamma \eta)$ in $w_{\alpha_1. \alpha_2}(u)$ is trivially factor out in the integrability condition Eq.(\[e4-1\]), but we put this factor in order that $S(u)$ satisfy the Zamolodchikov algebra by choosing $\gamma$ to be the appropriate value. The Zamolodchikov algebra, which is the fundamental relation of the integrability condition in the Belavin model, will be discussed later.\ The theta function with characteristics $\theta_{r_1,r_2}(u,\tau)$ in the above is given by $$\begin{aligned} \theta_{r_1,r_2}(u,\tau)=\sum_{n\in Z} e^{ i \pi (n+r_1)^2 \tau + 2 \pi i (n+r_1)(u+r_2)} . 
\label{e4-8}\end{aligned}$$ The $u$-independent but index-dependent factor $1/\sigma_{\alpha_1,\alpha_2}(\eta/N)$ is necessary to make $w_{\alpha_1 , \alpha_2}(u)$ periodic in the index $\alpha_2$. This property comes from the relation $$\begin{aligned} &&\sigma_{\alpha_1, \alpha_2+N}(u+\eta/N) =e^{2 \pi i \alpha_1/N}\sigma_{\alpha_1,\alpha_2}(u+\eta/N), \nonumber\\ &&\sigma_{\alpha_1, \alpha_2+N}(\eta/N) =e^{2 \pi i \alpha_1/N}\sigma_{\alpha_1,\alpha_2}(\eta/N), \nonumber\\ &&\frac{\sigma_{\alpha_1, \alpha_2+N}(u+\eta/N)} {\sigma_{\alpha_1, \alpha_2+N}(\eta/N)} =\frac{\sigma_{\alpha_1, \alpha_2}(u+\eta/N)} {\sigma_{\alpha_1, \alpha_2}(\eta/N)} . \nonumber\end{aligned}$$ In the special $u=0$ case, $S_{12}(0)$ becomes the permutation operator\ $\displaystyle{P_{12}=\frac{1}{N}\sum_{a,b}Z^a X^b \otimes X^{-b} Z^{-a}}$. Then the Yang-Baxter equation is trivially satisfied in the special $u_1=u_2=u_3=0$ case. The cyclic and the ordinary spin representation in $SU(2)$ ---------------------------------------------------------- The $N$-state cyclic representation of $SU(2)$ is given by $$\begin{aligned} &&(Z)_{a,b}=\delta_{a,b} \exp( 2\pi i a/N ), \quad (X)_{a,b}=\delta_{a,b+1} +\delta_{a,b+1-N}, \label{e4-9}\\ &&(a,b=0,\cdots,N-1) . \nonumber\end{aligned}$$ The ordinary spin $J$ representation of $SU(2)$ is given by $$\begin{aligned} &&(J_z)_{a,b}=\delta_{a,b}(J-a),\quad (J_+)_{a,b}=\delta_{a,b+1} ,\quad (J_-)_{a,b}=\delta_{a,b-1} ,\quad \label{e4-10}\\ &&(a,b=0,\cdots,N-1) , \quad (N=2J+1) . \nonumber\end{aligned}$$ Then we have the relation between the $N$-state cyclic representation and the ordinary spin $J$ representation in the form $$\begin{aligned} &&Z=\exp( 2\pi i (J-J_z)/N ), \quad X=J_{+} +J_{-}^{N-1} .
\label{e4-11}\end{aligned}$$ In the above relation, the elements of the cyclic representation can be interpreted as quantum-group-like objects, in the sense that $Z$ is an element of the Lie group while $X$ is a sum of elements of the Lie algebra.\ In the special $N=2$ case, $Z, X$ become elements of the Lie algebra, $Z=\sigma_z, X=\sigma_x$. The Zamolodchikov algebra ------------------------- The fundamental integrability relation in the Belavin model is the following Zamolodchikov algebra[@Cherednik] $$\begin{aligned} A(u_1)\otimes A(u_2+\eta)=S(u_1-u_2)A(u_1+\eta )\otimes A(u_2) . \label{e4-12}\end{aligned}$$ We define $T(u,v)$ as $$\begin{aligned} T(u,v)=S_{12}(u) S_{13}(u+v) S_{23}(v)-S_{23}(v) S_{13}(u+v) S_{12}(u) . \label{e4-13}\end{aligned}$$ If the Zamolodchikov algebra Eq.(\[e4-12\]) is satisfied, we have $$\begin{aligned} T(u,v) A(w+u-\eta) \otimes A(w) \otimes A(w-v+\eta)=0 . \label{e4-14}\end{aligned}$$ Then we obtain the Yang-Baxter relation $T(u,v)=0$, provided that $A(u)$ satisfies the Zamolodchikov algebra and the $A(u)$ form a complete basis.\ We will construct this $A(u)$ from the elliptic theta function, and we will show that the $A(u)$ form a complete basis. In order to construct $A(u)$ from the elliptic theta function, we examine the quasi-periodic property of the Boltzmann weight $S(u)$. If both sides of the Zamolodchikov algebra have the same quasi-periodicity, it suggests that the left-hand side is equal to the right-hand side up to a constant factor.\ Using the property $$\begin{aligned} \theta_{r_1, r_2}(u+\xi_1 +\xi_2 \tau,\tau) =e^{-i \pi \xi_2(\xi_2 \tau+2u)} e^{2 \pi i (r_1 \xi_1-r_2 \xi_2)} \theta_{r_1, r_2}(u,\tau), \quad (\xi_1, \xi_2 \in Z) .
\label{e4-15}\end{aligned}$$ We have the transformation of the Boltzmann weight $$\begin{aligned} &&w_{\alpha_1, \alpha_2}(u+\xi_1 +\xi_2 \tau)= \frac{\theta_{\frac{\alpha_2}{N}+\frac{1}{2},\frac{\alpha_1}{N}+\frac{1}{2}} (u+\xi_1 +\xi_2 \tau+\eta/N,\tau)} {\theta_{\frac{\alpha_2}{N}+\frac{1}{2},\frac{\alpha_1}{N}+\frac{1}{2}} (u+\eta/N,\tau)} \nonumber\\ && \times \frac{\theta_{\frac{1}{2},\frac{1}{2}}(u+\gamma \eta,\tau)} {\theta_{\frac{1}{2},\frac{1}{2}}(u+\xi_1 +\xi_2 \tau+\gamma \eta, \tau)} w_{\alpha_1, \alpha_2}(u) \nonumber\\ &&= e^{2 \pi i \xi_2 \eta(\gamma-1/N)} e^{2 \pi i (\xi_1 \alpha_2 -\xi_2 \alpha_1)/N} w_{\alpha_1, \alpha_2}(u) = e^{2 \pi i \xi_2 \eta(\gamma-1/N)} \omega^{<{\bf \xi},{\bf \alpha}>} w_{\alpha_1, \alpha_2}(u) , \label{e4-16}\end{aligned}$$ where we use the notation $<{\bf \xi},{\bf \alpha}>=\xi_1 \alpha_2 -\xi_2 \alpha_1$. Because of the index independent but $u$ dependent factor $\sigma_{0,0}(\gamma \eta)/\sigma_{0,0}(u+\gamma \eta)$, which is trivially factor out in the Yang-Baxter equation Eq.(\[e4-1\]), the multiplied factor in the right-hand side of Eq.(\[e4-16\]) becomes $u$ independent, which is necessary to satisfy the Zamolodchikov algebra. Therefore the Boltzmann weight transforms into the form $$\begin{aligned} &&S(u+\xi_1+\xi_2 \tau) =\sum^{N-1}_{\alpha_1,\alpha_2=0}w_{\alpha_1, \alpha_2}(u+\xi_1+\xi_2 \tau) Z^{\alpha_1}X^{\alpha_2}\otimes X^{-\alpha_2}Z^{-\alpha_1} \nonumber\\ &&=e^{2 \pi i \xi_2 \eta(\gamma-1/N)} \sum^{N-1}_{\alpha_1,\alpha_2=0}w_{\alpha_1, \alpha_2}(u) \omega^{<{\bf \xi},{\bf \alpha}>}Z^{\alpha_1}X^{\alpha_2} \otimes X^{-\alpha_2}Z^{-\alpha_1} , \nonumber\end{aligned}$$ Then we have $$\begin{aligned} &&e^{-2 \pi i \xi_2 \eta(\gamma-1/N)} S(u+\xi_1+\xi_2 \tau) \nonumber\\ &&=(I_{{\bf \xi}}\otimes 1 ) S(u) (I^{-1}_{{\bf \xi}}\otimes 1 ) =(1 \otimes I^{-1}_{{\bf \xi}}) S(u) (1 \otimes I_{{\bf \xi}}) . 
\label{e4-17}\end{aligned}$$ Construction of the state vector $A(u)$ --------------------------------------- Next we construct the state vector $A(u)$ by the elliptic theta function. For this purpose we consider the transformation of $\theta_{\frac{a}{N}, Bk}(u+\xi_1+\xi_2 \tau; C \tau)$, where $B, C$ are constant. $$\begin{aligned} \theta_{\frac{a}{N}, Bk}(u+\xi_1 +\xi_2 \tau;C \tau) =\omega^{a \xi_1} e^{-i \pi \tau \xi_2^2/C-2 \pi i \xi_2 (u+B k)/C} \theta_{\frac{a}{N}+\frac{\xi_2}{C}, B k}(u ;C \tau) . \label{e4-18}\end{aligned}$$ In order that $\theta_{\frac{a}{N}, Bk}(u;C \tau), \quad (a, k \in Z_N)$ is closed under the transformation, and also the prefactor $\omega^{a \xi_1} e^{-i \pi \tau \xi_2^2/C-2 \pi i \xi_2 (u+B k)/C}$ becomes $k$ independent, we have two possibilities $$\begin{aligned} &&{\rm case\ 1)}: B=C=N/(N-1) \nonumber\\ &&\theta_{\frac{a}{N}, B k}(u; C \tau) =\theta_{\frac{a}{N}, \frac{N k}{N-1}}(u; N\tau/(N-1)) , \label{e4-19}\\ &&\theta_{\frac{a}{N}, \frac{N k}{N-1}}(u+\xi_1+\xi_2 \tau; N\tau/(N-1)) \nonumber\\ &&=e^{-i \pi \tau \xi_2^2 (N-1)/N-2 \pi i \xi_2 u (N-1)/N} \omega^{a \xi_1} \theta_{\frac{a-\xi_2}{N}, \frac{N k}{N-1}}(u;N\tau/(N-1)) . \label{e4-20}\\ &&{\rm case\ 2)}: B=C=N \nonumber\\ &&\theta_{\frac{a}{N}, B k}(u; C \tau) =\theta_{\frac{a}{N}, Nk}(u; N \tau)=\theta_{\frac{a}{N},0}(u; N \tau) =(k-{\rm independent}) , \label{e4-21}\\ &&\theta_{\frac{a}{N}, 0}(u+\xi_1+\xi_2 \tau; N \tau) =e^{-i \pi \tau \xi_2^2/N-2 \pi i \xi_2 u/N} \omega^{a \xi_1} \theta_{\frac{a+\xi_2}{N}, 0}(u; N \tau) . 
\label{e4-22}\end{aligned}$$ Then we define $A_k(u)$ in the case 1) in the form, $$\begin{aligned} && A_k(u) =\left( \begin{array} {c} \theta_{0, \frac{kN}{N-1}}(u; N \tau/(N-1)) \\ \theta_{\frac{1}{N}, \frac{kN}{N-1}} (u; N \tau/(N-1))\\ \theta_{\frac{2}{N}, \frac{kN}{N-1}}(u; N \tau/(N-1))\\ \cdots \\ \theta_{\frac{N-1}{N}, \frac{kN}{N-1}}(u; N \tau/(N-1)) \end{array} \right) , \label{e4-23}\\ &&A_k(u+\xi_1+\xi_2 \tau)= e^{-i \pi \tau \xi_2^2 \frac{N-1}{N}-2 \pi i \xi_2 u \frac{N-1}{N}} Z^{\xi_1} X^{\xi_2} A_k(u), \quad (\xi_1, \xi_2\in Z). \label{e4-24}\end{aligned}$$ The transformation of the Zamolodchikov algebra ----------------------------------------------- We examine the transformation $u_1\rightarrow u_1+\xi_1+\xi_2 \tau, \ (\xi_1,\xi_2\in Z)$ and $u_2\rightarrow u_2+\zeta_1+\zeta_2 \tau, \ (\zeta_1,\zeta_2\in Z)$ of the Zamolodchikov algebra in the case 1) $$\begin{aligned} A(u_1)\otimes A(u_2+\eta)=S(u_1-u_2)A(u_1+\eta )\otimes A(u_2) . \label{e4-25}\end{aligned}$$ Under the transformation $u_1\rightarrow u_1+\xi_1+\xi_2 \tau, \ (\xi_1,\xi_2\in Z)$, the Zamolodchikov algebra tranforms into the form $$\begin{aligned} &&({ \rm left-hand\ side})=A(u_1+\xi_1+\xi_2 \tau)\otimes A(u_2+\eta) \nonumber\\ &&=e^{-i \pi \tau \xi_2^2 \frac{N-1}{N}-2 \pi i \xi_2 u_1 \frac{N-1}{N}} (I_{{\bf \xi}} \otimes 1 )A(u_1)\otimes A(u_2+\eta) , \label{e4-26}\\ && \nonumber\\ &&({ \rm right-hand\ side}) =S(u_1+u_1+\xi_1+\xi_2 \tau-u_2) A(u_1+\xi_1+\xi_2 \tau+\eta ) \otimes A(u_2) \nonumber\\ &&=e^{2 \pi i \xi_2 \eta (\gamma-1/N)} e^{-i \pi \tau \xi_2^2 \frac{N-1}{N} -2 \pi i \xi_2 (u_1+\eta) \frac{N-1}{N}} \nonumber\\ && \times (I_{{\bf \xi}} \otimes 1 )S(u_1-u_2)(I^{-1}_{{\bf \xi}} \otimes 1 ) (I_{{\bf \xi}} \otimes 1 )A(u_1+\eta)\otimes A(u_2) \nonumber\\ &&=e^{-i \pi \tau \xi_2^2 \frac{N-1}{N}-2 \pi i \xi_2 u_1 \frac{N-1}{N}} e^{ 2 \pi i \xi_2 \eta (\gamma-1)} (I_{{\bf \xi}} \otimes 1 )S(u_1-u_2)A(u_1+\eta)\otimes A(u_2) . 
\label{e4-27}\end{aligned}$$ In order that the transformation is the same in the left-hand and the right-hand side of the Zamolodchikov algebra, we have $\gamma=1$ in case 1).\ By the similar calculation, we have shown that the Zamolodchikov algebra has the same form under the transformation $u_2\rightarrow u_2+\zeta_1+\zeta_2 \tau, \ (\zeta_1,\zeta_2\in Z)$. The possibility of another Zamolodchikov algebra ------------------------------------------------ Using the vector of case 2), we have the possibility of another Zamolodchikov algebra. In the case 2), we denote the state vector as $\widetilde{A}_k(u)$, which is given by $$\begin{aligned} && \widetilde{A}_0(u) =\left( \begin{array} {c} \theta_{0, 0} (u; N \tau)\\ \theta_{\frac{1}{N}, 0} (u; N \tau)\\ \theta_{\frac{2}{N}, 0} (u; N \tau)\\ \cdots \\ \theta_{\frac{N-1}{N}, 0} (u; N \tau) \end{array} \right) , \label{e4-28}\\ &&\widetilde{A}_0(u+\xi_1+\xi_2 \tau)= e^{-i \pi \tau \xi_2^2/ N-2 \pi i \xi_2 u/ N} Z^{\xi_1} X^{-\xi_2} \widetilde{A}_0(u), \quad (\xi_1, \xi_2 \in Z). \label{e4-29}\end{aligned}$$ Correspondingly, we define the another Boltzmann weight $\widetilde{S}(u)$ in the form $$\begin{aligned} &&\tilde{S}(u)=\sum^{N-1}_{\alpha_1,\alpha_2=0} \widetilde{w}_{\alpha_1, \alpha_2}(u) J^{-1}_{\alpha_1, \alpha_2} \otimes J_{\alpha_1, \alpha_2} , \label{e4-30}\\ &&\widetilde{w}_{\alpha_1, \alpha_2}(u)=\frac{1}{N} \frac{\sigma_{\alpha_1, \alpha_2}(u+\eta/N)\sigma_{0,0}(\gamma \eta)} {\sigma_{\alpha_1, \alpha_2}(\eta/N)\sigma_{0,0}(u+\gamma \eta)} , \label{e4-31}\\ &&J_{\alpha_1, \alpha_2} =Z^{\alpha_1} X^{-\alpha_2} . \label{e4-32}\end{aligned}$$ We consider another Zamolodchikov algebra $$\begin{aligned} \widetilde{A}(u_1)\otimes \widetilde{A}(u_2+\eta) =\widetilde{S}(u_1-u_2) \widetilde{A}(u_1+\eta )\otimes \widetilde{A}(u_2) . 
\label{e4-33}\end{aligned}$$ Under the transformation $u_1\rightarrow u_1+\xi_1+\xi_2 \tau, \ (\xi_1,\xi_2\in Z)$, another Zamolodchikov algebra transforms in the form $$\begin{aligned} &&({ \rm left-hand\ side})=\tilde{A}(u_1+\xi_1+\xi_2 \tau) \otimes \tilde{A}(u_2+\eta) \nonumber\\ &&=e^{-i \pi \tau \xi_2^2/N-2 \pi i \xi_2 u_1/N} (J_{{\bf \xi}} \otimes 1 ) \widetilde{A}(u_1) \otimes \tilde{A}(u_2+\eta) , \label{e4-34}\\ &&({ \rm right-hand\ side}) =\widetilde{S}(u_1+\xi_1+\xi_2 \tau-u_2) \widetilde{A}(u_1+\xi_1+\xi_2 \tau+\eta ) \otimes \widetilde{A}(u_2) \nonumber\\ &&=e^{2 \pi i \xi_2 \eta (\gamma-1/N)} e^{-i \pi \tau \xi_2^2 /N -2 \pi i \xi_2 (u_1+\eta) /N} \nonumber\\ && \times (J_{{\bf \xi}} \otimes 1 ) \widetilde{S}(u_1-u_2) (J^{-1}_{{\bf \xi}} \otimes 1 ) (J_{{\bf \xi}} \otimes 1 ) \widetilde{A}(u_1+\eta) \otimes \widetilde{A}(u_2) \nonumber\\ &&=e^{-i \pi \tau \xi_2^2 /N-2 \pi i \xi_2 u_1 /N} e^{ 2 \pi i \xi_2 \eta (\gamma-2/N)} (J_{{\bf \xi}} \otimes 1 ) \widetilde{S}(u_1-u_2) \widetilde{A}(u_1+\eta) \otimes \widetilde{A}(u_2) . \label{e4-35}\end{aligned}$$ In order that the transformation is the same in the left-hand side and the right-hand side of the Zamolodchikov algebra, we have $\gamma=2/N$ in case 2).\ By the similar calculation, we have shown that the Zamolodchikov algebra has the same form under the transformation $u_2\rightarrow u_2+\zeta_1+\zeta_2 \tau, \ (\zeta_1,\zeta_2\in Z)$.\ The numerical calculation by REDUCE suggests that another Zamolodchikov algebra Eq.(\[e4-33\]) is satisfied for $N=2$ but not satisfied for $N\ge3$. 
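The algebraic backbone of these constructions, the cyclic representation Eq.(\[e4-6\]) and the permutation operator $P_{12}$, is easy to check numerically. The following Python sketch (illustrative; $N=3$ is an arbitrary choice) builds $Z$ and $X$, verifies the Weyl relation $ZX=\omega XZ$, and confirms that $\frac{1}{N}\sum_{a,b}Z^a X^b \otimes X^{-b} Z^{-a}$ swaps the two tensor factors:

```python
import cmath

N = 3                              # illustrative choice; any N >= 2 works
w = cmath.exp(2j * cmath.pi / N)   # omega = e^{2 pi i / N}

# Cyclic representation: Z = diag(1, w, ..., w^{N-1}), X a cyclic shift
Z = [[w ** a if a == b else 0 for b in range(N)] for a in range(N)]
X = [[1 if a == (b + 1) % N else 0 for b in range(N)] for a in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(A, k):
    # A^N = 1 for Z and X, so negative powers reduce to k mod N
    R = [[1 if i == j else 0 for j in range(len(A))] for i in range(len(A))]
    for _ in range(k % N):
        R = matmul(R, A)
    return R

def kron(A, B):
    m = len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(len(A) * m)] for i in range(len(A) * m)]

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(len(A)) for j in range(len(A[0])))

# Weyl commutation relation Z X = w X Z
ZX = matmul(Z, X)
wXZ = [[w * x for x in row] for row in matmul(X, Z)]
assert close(ZX, wXZ)

# P_12 = (1/N) sum_{a,b} Z^a X^b (x) X^{-b} Z^{-a} is the swap operator
P = [[0.0] * (N * N) for _ in range(N * N)]
for a in range(N):
    for b in range(N):
        K = kron(matmul(matpow(Z, a), matpow(X, b)),
                 matmul(matpow(X, -b), matpow(Z, -a)))
        P = [[P[i][j] + K[i][j] / N for j in range(N * N)] for i in range(N * N)]

v, u = [1, 2, 3], [4, 5, 6]
vu = [v[i // N] * u[i % N] for i in range(N * N)]   # v (x) u
uv = [u[i // N] * v[i % N] for i in range(N * N)]   # u (x) v
Pvu = [sum(P[i][j] * vu[j] for j in range(N * N)) for i in range(N * N)]
assert all(abs(Pvu[i] - uv[i]) < 1e-9 for i in range(N * N))
```

A symbolic package such as REDUCE, as used above, is of course needed for the full Zamolodchikov algebra; this numerical check only covers the representation-theoretic ingredients.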
The Heisenberg algebra and the elliptic theta function ====================================================== In this section, we first review the well-known relation between the Heisenberg algebra and the elliptic theta function[@Mumford].\ If $N$ is the square of some integer, that is, $N=l^2, (l\in Z)$, we can connect the state vector $\tilde{A}_k(u)$, which appears in the Zamolodchikov algebra of the Belavin model, with the representation of the Heisenberg algebra by the theta function with characteristics.\ The Heisenberg algebra is constructed from two operators $S_b,\ T_a$, which are defined by $$\begin{aligned} && S_b f(u)=f(u+b) , \label{e5-1}\\ && T_a f(u)=\exp( \pi i a^2 \tau + 2\pi i a u) f(u+a\tau) , \label{e5-2}\end{aligned}$$ for the function $f(u)$. We define the theta function with characteristics in the form $$\begin{aligned} && \theta_{a,b}(u,\tau)=S_b T_a \theta(u,\tau) =\exp\{ \pi i a^2 \tau +2\pi i a (u+b)\}\theta(u+a\tau+b,\tau) \nonumber\\ &&=\sum_{n\in Z} \exp\{ \pi i (n+a)^2 \tau +2\pi i (n+a) (u+b)\} . \label{e5-3}\end{aligned}$$ Then we have $$\begin{aligned} && S_{\frac{1}{l}} \sum_{n\in \frac{1}{l}Z} c_n \exp(\pi i n^2 \tau +2 \pi i n u) =\sum_{n\in \frac{1}{l}Z} c_n \exp(2 \pi i n/l) \exp(\pi i n^2 \tau +2 \pi i n u) , \label{e5-4}\\ && T_{\frac{1}{l}} \sum_{n\in \frac{1}{l}Z} c_n \exp(\pi i n^2 \tau +2 \pi i n u) =\sum_{n\in \frac{1}{l}Z} c_{n-\frac{1}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) .
\label{e5-5}\end{aligned}$$ The action of $\displaystyle{S_{\frac{1}{l}}}$is given by $$\begin{aligned} && S_{\frac{1}{l}} \sum_{n\in l Z} \exp(\pi i n^2 \tau +2 \pi i n u) =\sum_{n\in lZ} \exp(\pi i n^2 \tau +2 \pi i n u) , \nonumber\\ && S_{\frac{1}{l}} \sum_{n\in lZ+\frac{1}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) =e^{2 \pi i/l^2}\sum_{n\in lZ+\frac{1}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) , \nonumber\\ && \cdots \nonumber\\ && S_{\frac{1}{l}} \sum_{n\in lZ+\frac{l^2-1}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) =e^{2 \pi i(l^2-1)/l^2}\sum_{n\in lZ+\frac{l^2-1}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) . \label{e5-6}\end{aligned}$$ The action of $\displaystyle{T_{\frac{1}{l}}}$is given by $$\begin{aligned} && T_{\frac{1}{l}} \sum_{n\in l Z} \exp(\pi i n^2 \tau +2 \pi i n u) =\sum_{n\in lZ+\frac{1}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) , \nonumber\\ && T_{\frac{1}{l}} \sum_{n\in lZ+\frac{1}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) =\sum_{n\in lZ+\frac{2}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) , \nonumber\\ && \cdots \nonumber\\ && T_{\frac{1}{l}} \sum_{n\in lZ+\frac{l^2-1}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) =\sum_{n\in lZ} \exp(\pi i n^2 \tau +2 \pi i n u) . \label{e5-7}\end{aligned}$$ Then we take the basis, which is closed under actions $S_{\frac{1}{l}}$ and $T_{\frac{1}{l}}$, in the form $$\begin{aligned} && {\bf v}(u) =\left( \begin{array} {c} v_0(u) \\ v_1(u) \\ v_2(u) \\ \cdots \\ v_{l^2-1}(u) \end{array} \right) =\left( \begin{array} {c} \sum_{n\in lZ} \exp(\pi i n^2 \tau +2 \pi i n u) , \\ \sum_{n\in lZ+\frac{1}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) , \\ \sum_{n\in lZ+\frac{2}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) , \\ \cdots \\ \sum_{n\in lZ+\frac{l^2-1}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) \end{array} \right) . 
\label{e5-8}\end{aligned}$$ Here we use $Z$ and $X$, which is the $l^2 \times l^2$ cyclic representation of $SU(2)$, in the form $$\begin{aligned} &&Z=\left( \begin{array} {ccccc} 1 & & & & \\ & \omega & & & \\ & & \omega^2 & & \\ & & & \cdots & \\ & & & & \omega^{l^2-1} \\ \end{array} \right), \quad X=\left( \begin{array} {ccccc} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ & & \cdots & & \\ 0 & 0 & \cdots & 1 & 0 \\ \end{array} \right), \label{e5-9}\\ &&{\rm with}\quad ZX=\omega XZ, \quad \omega=e^{2 \pi i /l^2}, \quad \omega^{l^2}=1. \label{e5-10}\end{aligned}$$ The Heisenberg algebra, acting on the basis ${\bf v}$, is expressed in the form $$\begin{aligned} S_{\frac{1}{l}}\left( \begin{array} {c} v_0(u) \\ v_1(u) \\ v_2(u) \\ \cdots \\ v_{l^2-1}(u) \end{array} \right) =\left( \begin{array} {ccccc} 1 & & & & \\ & \omega & & & \\ & & \omega^2 & & \\ & & & \cdots & \\ & & & & \omega^{l^2-1} \\ \end{array} \right) \left( \begin{array} {c} v_0(u) \\ v_1(u) \\ v_2(u) \\ \cdots \\ v_{l^2-1}(u) \end{array} \right) =Z \left( \begin{array} {c} v_0(u) \\ v_1(u) \\ v_2(u) \\ \cdots \\ v_{l^2-1}(u) \end{array} \right) , \label{e5-11}\\ T_{\frac{1}{l}}\left( \begin{array} {c} v_0(u) \\ v_1(u) \\ v_2(u) \\ \cdots \\ v_{l^2-1}(u) \end{array} \right) =\left( \begin{array} {ccccc} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ & & & \cdots & \\ 0 & 0 & 0 & \cdots & 1\\ 1 & 0 & 0 & \cdots & 0 \\ \end{array} \right) \left( \begin{array} {c} v_0(u) \\ v_1(u) \\ v_2(u) \\ \cdots \\ v_{l^2-1}(u) \end{array} \right) =X^{-1} \left( \begin{array} {c} v_0(u) \\ v_1(u) \\ v_2(u) \\ \cdots \\ v_{l^2-1}(u) \end{array} \right) . \label{e5-12}\end{aligned}$$ Noticing that $ S_{\frac{1}{l}},T_{\frac{1}{l}}$ as the operator on the basis but not the matrix, we have $$\begin{aligned} S_{\frac{1}{l}} T_{\frac{1}{l}} {\bf v}(u) =S_{\frac{1}{l}} X^{-1}{\bf v}(u)=X^{-1} S_{\frac{1}{l}}{\bf v}(u) =X^{-1} Z {\bf v}(u) . 
\label{e5-13}\end{aligned}$$ $$\begin{aligned} \omega T_{\frac{1}{l}} S_{\frac{1}{l}}{\bf v}(u) =\omega T_{\frac{1}{l}} Z {\bf v}(u) =\omega Z T_{\frac{1}{l}} {\bf v}(u) =\omega Z X^{-1} {\bf v}(u) . \label{e5-14}\end{aligned}$$ Using $ZX=\omega XZ$, we have the fundamental relation of the Heisenberg algebra $$\begin{aligned} S_{\frac{1}{l}} T_{\frac{1}{l}}=\omega T_{\frac{1}{l}} S_{\frac{1}{l}} . \label{e5-15}\end{aligned}$$ The standard basis is Eq.(\[e5-8\]), of dimension $l^2$. This is expressed as the linear combination of $\theta_{\frac{a}{l}, \frac{b}{l}}(u,\tau)$ with $(a=0, 1, 2, \cdots, l-1)$ and $(b=0, 1, 2, \cdots, l^2-1)$ in the form $$\begin{aligned} &&\theta_{\frac{a}{l}, \frac{b}{l}}(u,\tau) =\omega^{ab}\sum_{n\in lZ+\frac{a}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) \nonumber\\ &&+\omega^{ab}\omega^{lb}\sum_{n\in lZ+\frac{a+l}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) \nonumber\\ &&\cdots \nonumber\\ && +\omega^{ab}\omega^{l(l-1)b}\sum_{n\in lZ+\frac{a+l(l-1)}{l}} \exp(\pi i n^2 \tau +2 \pi i n u) \nonumber\\ &&= \omega^{ba}v_a+\omega^{b(a+l)}v_{a+l} +\cdots+\omega^{b\left(a+l(l-1)\right)}v_{a+l(l-1)} . \label{e5-16}\end{aligned}$$ Applying $\displaystyle{ \frac{1}{l^2}\sum^{l^2-1}_{b=0} \omega^{-bc}}$ to the above relation, we have $$\begin{aligned} &&v_a(u)=\frac{1}{l^2} \sum^{l^2-1}_{b=0}\omega^{-ba} \theta_{\frac{a}{l}, \frac{b}{l}}(u,\tau), \nonumber\\ &&v_{a+l}(u)=\frac{1}{l^2} \sum^{l^2-1}_{b=0}\omega^{-b(a+l)} \theta_{\frac{a}{l}, \frac{b}{l}}(u,\tau), \nonumber\\ && \cdots \nonumber\\ &&v_{a+l(l-1)}(u)=\frac{1}{l^2} \sum^{l^2-1}_{b=0} \omega^{-b\left(a+l(l-1)\right)} \theta_{\frac{a}{l}, \frac{b}{l}}(u,\tau) , \label{e5-17}\\ &&(a=0,1, \cdots,l-1) , \nonumber\end{aligned}$$ where we use $\displaystyle{ \frac{1}{l^2}\sum^{l^2-1}_{b=0} \omega^{-bc} \theta_{\frac{a}{l},\frac{b}{l}}(u,\tau) =\delta_{c,a} v_a+\delta_{c,a+l} v_{a+l}+\cdots +\delta_{c,a+l(l-1)} v_{a+l(l-1)} } $ . In this way, we obtain two kinds of bases.
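The operator actions Eq.(\[e5-11\]), Eq.(\[e5-12\]) and the fundamental relation Eq.(\[e5-15\]) can be checked numerically on truncated theta series. In the sketch below, $l=2$, $\tau=i$, the truncation range and the sample point are all illustrative choices:

```python
import cmath

l = 2
N = l * l                      # N = l^2 = 4 states
tau = 1j                       # illustrative modulus with Im(tau) > 0
w = cmath.exp(2j * cmath.pi / N)
M = 20                         # truncation of the series

def v(p, u):
    # v_p(u): sum over n in l*Z + p/l of exp(i pi n^2 tau + 2 pi i n u)
    return sum(cmath.exp(1j * cmath.pi * n * n * tau + 2j * cmath.pi * n * u)
               for n in (m * l + p / l for m in range(-M, M + 1)))

def S(f, b):
    # S_b f(u) = f(u + b), Eq. (e5-1)
    return lambda u: f(u + b)

def T(f, a):
    # T_a f(u) = exp(i pi a^2 tau + 2 pi i a u) f(u + a tau), Eq. (e5-2)
    return lambda u: cmath.exp(1j * cmath.pi * a * a * tau
                               + 2j * cmath.pi * a * u) * f(u + a * tau)

u0 = 0.3 + 0.1j                # arbitrary sample point
for p in range(N):
    f = lambda u, p=p: v(p, u)
    # S_{1/l} v_p = w^p v_p  and  T_{1/l} v_p = v_{p+1}, Eq. (e5-20)
    assert abs(S(f, 1 / l)(u0) - w ** p * v(p, u0)) < 1e-9
    assert abs(T(f, 1 / l)(u0) - v((p + 1) % N, u0)) < 1e-9

# Fundamental relation S_{1/l} T_{1/l} = w T_{1/l} S_{1/l}, Eq. (e5-15)
f = lambda u: v(1, u)
assert abs(S(T(f, 1 / l), 1 / l)(u0)
           - w * T(S(f, 1 / l), 1 / l)(u0)) < 1e-9
```

The commutation check is independent of the test function, as it should be for an operator identity.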
The state $v_p(u),\ (p=0,1, \cdots, l^2-1)$ is the vector basis of the Heisenberg algebra with dimension $l^2$, while the theta function with characteristics $\theta_{\frac{a}{l},\frac{b}{l}}(u,\tau),\ (a,b=0,1, \cdots (l-1))$ is the matrix basis with dimension $ l \times l$. We summarize the bases of the Heisenberg algebra in the form $$\begin{aligned} &&\hskip -10mm \theta_{\frac{a}{l}, \frac{b}{l}}(u,\tau) =\sum^{l-1}_{c=0}\omega^{b(a+lc)} v_{a+lc}(u), (a=0,1,2,\cdots, l-1) , (b=0,1,2,\cdots, l^2-1) \label{e5-18}\\ &&\hskip -10mm v_p(u)=\frac{1}{l^2}\sum^{l^2-1}_{b=0} \omega^{-bp} \theta_{\frac{p}{l}, \frac{b}{l}}(u,\tau), \quad (p=0,1,2,\cdots, l^2-1). \label{e5-19}\end{aligned}$$ The action of the Heisenberg algebra on these bases is $$\begin{aligned} &&S_{\frac{1}{l}} v_p(u)=\omega^p v_p(u), \quad T_{\frac{1}{l}} v_p(u)= v_{p+1}(u) , \label{e5-20}\\ &&S_{\frac{1}{l}}\theta_{\frac{a}{l}, \frac{b}{l}}(u,\tau) = \theta_{\frac{a}{l}, \frac{b+1}{l}}(u,\tau) , \quad T_{\frac{1}{l}}\theta_{\frac{a}{l}, \frac{b}{l}}(u,\tau) =\omega^{-b} \theta_{\frac{a+1}{l}, \frac{b}{l}}(u,\tau) . \label{e5-21}\end{aligned}$$ A property of the vector $v_p(u;\tau)$ -------------------------------------- Here we will show that $\theta_{\frac{p}{l^2},0}(l u; l^2\tau)=v_p(u; \tau), (p=0,1,\cdots, l^2-1), (N=l^2)$.
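This identity can be verified numerically with truncated theta series before turning to the analytic derivation. The sketch below checks both the direct sublattice sum Eq.(\[e5-8\]) and the linear combination Eq.(\[e5-19\]); here $l=2$, $\tau=i$ and the sample point are illustrative choices:

```python
import cmath

l = 2                          # illustrative choice, N = l^2 = 4
tau = 1j                       # sample modulus with Im(tau) > 0
w = cmath.exp(2j * cmath.pi / l ** 2)
M = 20                         # truncation of the series

def theta(r1, r2, u, t):
    # theta_{r1,r2}(u, t) = sum_n exp(i pi (n+r1)^2 t + 2 pi i (n+r1)(u+r2))
    return sum(cmath.exp(1j * cmath.pi * (n + r1) ** 2 * t
                         + 2j * cmath.pi * (n + r1) * (u + r2))
               for n in range(-M, M + 1))

def v(p, u):
    # v_p(u; tau): sum over n in l*Z + p/l, Eq. (e5-8)
    return sum(cmath.exp(1j * cmath.pi * n * n * tau + 2j * cmath.pi * n * u)
               for n in (m * l + p / l for m in range(-M, M + 1)))

u0 = 0.2 - 0.05j               # arbitrary sample point
for p in range(l * l):
    # theta_{p/l^2, 0}(l u; l^2 tau) against the direct sublattice sum
    assert abs(theta(p / l ** 2, 0, l * u0, l ** 2 * tau) - v(p, u0)) < 1e-9
    # and against the linear combination of Eq. (e5-19)
    alt = sum(w ** (-b * p) * theta(p / l, b / l, u0, tau)
              for b in range(l ** 2)) / l ** 2
    assert abs(alt - v(p, u0)) < 1e-9
```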
Then we can connect the state vector $\widetilde{A}_0(u)$, which is the special case $k=0$ of the state vector which appears in the Zamolodchikov algebra of the Belavin model, with the basis of the Heisenberg algebra represented by the theta function with characteristics.\ Starting from $v_p(u;\tau)$, we have $$\begin{aligned} &&v_p(u;\tau)=\frac{1}{l^2}\sum^{l^2-1}_{b=0} \omega^{-bp} \theta_{\frac{p}{l}, \frac{b}{l}}(u,\tau) =\frac{1}{l^2}\sum^{l^2-1}_{b=0} e^{-2 \pi i bp/l^2} \sum_{n\in Z}e^{i \pi \tau (n+\frac{p}{l})^2 +2 \pi i (n+\frac{p}{l})(u+\frac{b}{l})} \nonumber\\ &&= \frac{1}{l^2} \sum^{l^2-1}_{b=0} e^{-2 \pi i bp/l^2} \sum^{l-1}_{c=0} \sum_{m\in Z} e^{i \pi l^2 \tau (m+\frac{c}{l}+\frac{p}{l^2})^2 +2 \pi i (m+\frac{c}{l}+\frac{p}{l^2})(lu+b)} \nonumber\\ &&=\frac{1}{l^2}\sum^{l^2-1}_{b=0} e^{-2 \pi i bp/l^2} \sum^{l-1}_{c=0} e^{2 \pi i b(\frac{c}{l}+\frac{p}{l^2})} \theta_{\frac{c}{l}+\frac{p}{l^2},0}(lu;l^2\tau) \nonumber\\ &&=\frac{1}{l^2}\sum^{l^2-1}_{b=0} \sum^{l-1}_{c=0} e^{2 \pi i b c/l} \theta_{\frac{c}{l}+\frac{p}{l^2},0}(lu;l^2\tau) =\sum^{l-1}_{c=0} \delta_{c,0} \theta_{\frac{c}{l}+\frac{p}{l^2},0}(lu;l^2\tau) \nonumber\\ &&= \theta_{\frac{p}{l^2},0}(lu;l^2\tau), \quad (p=0,1,2,\cdots, l^2-1). \label{e5-22}\end{aligned}$$ If we notice that $N=l^2$, we have $$\begin{aligned} \theta_{\frac{p}{N},0}(lu; N\tau)=v_p(u; \tau),\quad (p=0,1,\cdots, N-1), \label{e5-23}\end{aligned}$$ which means that if we redefine $u$ as $lu$ in the Boltzmann weight $S(u)$, the state vector $v_p(u,\tau)$ in the Zamolodchikov algebra in case 2) is nothing but the representation of the Heisenberg algebra by the theta function with characteristics. Summary and discussion ====================== In order to understand the fundamental mechanism of why various two-dimensional statistical models are integrable, we have considered the typical spin type models, the Ising model and the chiral Potts model, and the typical vertex type model, the Belavin model.
In many cases, these typical integrable models are parametrized by elliptic functions. In this paper, we have therefore examined the connection between the elliptic function and the integrability condition. We have shown that the integrability comes from the cyclic $SU(2)$ symmetry of the model, which comes from the elliptic parametrization of the Boltzmann weight.\ The connection between the Heisenberg algebra and the elliptic theta function is well known[@Mumford]: Mumford has shown that the theta function with characteristics gives the matrix representation of the Heisenberg algebra. In this paper, we have found the vector representation of the Heisenberg algebra. In the $N=l^2$ case, we have further shown that this vector representation of the Heisenberg algebra is equal to the state vector of another Zamolodchikov algebra of the Belavin model. [\[00\]]{} R. J. Baxter, “Exactly Solved Models in Statistical Mechanics”, (Academic, New York), 1982. R. J. Baxter, J. H. H. Perk and H. Au-Yang, [*Phys. Lett.*]{}, [**A128**]{}(1988), 138. L. Onsager, [*Phys. Rev.*]{}, [**65**]{}(1944), 117. M. Horibe and K. Shigemoto, [*Nuovo Cimento*]{}, [**116B**]{}(2001), 1017. A. A. Belavin, [*Nucl. Phys.*]{}, [**B180**]{}(1981), 189. R. J. Baxter, [*Ann. Phys.*]{}, [**76**]{}(1973), 1. C. A. Tracy, [*Physica*]{}, [**D16**]{}(1985), 203. I. V. Cherednik, [*Sov. J. Nucl. Phys.*]{}, [**36**]{}(1982), 320. M. Jimbo, T. Miwa and M. Okado, [*Nucl. Phys.*]{}, [**B300\[FS22\]**]{}(1988), 74. R. J. Baxter, [*Ann. Phys.*]{}, [**76**]{}(1973), 25. D. Mumford, “Tata Lectures on Theta I”, (Birkhäuser, Boston $\cdot$ Basel $\cdot$ Stuttgart), 1983. H. Au-Yang and J. H. H. Perk, [*Adv. Stud. in Pure Math.*]{}, [**19**]{}(1989), 57. [^1]: E-mail address: shigemot@tezukayama-u.ac.jp
--- author: - 'A. Morgenthaler[^1]' - 'P. Petit' - 'J. Morin' - 'M. Aurière' - 'B. Dintrans' - 'R. Konstantinova-Antova' - 'S. Marsden' title: 'Direct observation of magnetic cycles in Sun-like stars' --- Introduction ============ Sun-like stars are characterized by convective envelopes in which large-scale plasma flows (related, in particular, to radial and latitudinal differential rotation and to the Coriolis force) are able to trigger a global dynamo (Parker 1955). This continuous generation of a large-scale field is related to surface variability affecting a wide range of temporal and spatial scales, including quasi-periodic polarity reversals associated with magnetic cycles. Recent numerical models, in particular global MHD simulations, are able to mimic some characteristics of this cyclic behaviour for Sun-like stars (Ghizaru et al. 2010, Brown et al. 2011). This magnetic activity is observable through many observational proxies, from the photosphere (e.g. broad-band visible photometry) to the corona (X-ray and radio emissions). The long-term monitoring of the chromospheric activity of tens of solar-type stars, carried out at Mount Wilson since 1965 (Wilson 1978), has allowed for the detection of periodic variations in a number of objects (Baliunas et al. 1995). Cyclic patterns were observed with a variety of cycle lengths, along with seemingly more erratic activity fluctuations in other objects (in particular young stars), or no detectable activity at all in some others. More recently, asteroseismology has demonstrated its ability to investigate magnetic variability through variations of the p-mode amplitudes and frequencies (Garcia et al. 2010), with variations of these quantities detected over a few tens of days for the F5V star HD 49933, presumably linked to magnetic activity.
In addition to this wealth of indirect indicators of stellar activity, spectropolarimetry now enables us to perform direct measurements of surface magnetic fields and to follow the long-term temporal evolution of large-scale magnetic geometries. So far, it has allowed the observation in Sun-like stars of one global polarity switch (Petit et al. 2009) and of a full magnetic cycle (Fares et al. 2009). Our aim is to study the long-term variations of the magnetic field properties of a sample of solar-type stars, using both direct and indirect measurements. Our observed sample includes 19 FGK-type stars on the main sequence, monitored since 2007. We probe here stellar masses between 0.6 and 1.4 $M_{\odot}$, and rotation periods between 3.4 and 43 days. After a brief description of the instrumental setup, data reduction and multi-line extraction of Zeeman signatures, we explain the reconstruction technique of the large-scale topology of the stars, and the computation of a chromospheric activity indicator. We then highlight three representative examples of different types of variability observed in our sample. We finally discuss the results derived from our measurements. Instrumental setup, data reduction, and extraction of Zeeman signatures ======================================================================= We use data from the NARVAL spectropolarimeter (Aurière 2003), installed at Telescope Bernard Lyot[^2] (Pic du Midi, France). The instrumental setup is strictly identical to the one described by Petit et al. (2008). The spectrograph unit of NARVAL benefits from a spectral resolution of 65,000 and covers the whole wavelength domain from near-ultraviolet (370 nm) to near-infrared (1,000 nm). Thanks to the polarimetric module, NARVAL can provide intensity, circularly or linearly polarized spectra. In the present study, we restrict the measurements to Stokes I and V.
The circularly polarized spectra allow the detection of large-scale photospheric magnetic fields, thanks to the Zeeman effect. However, when observing cool dwarfs, the signal-to-noise ratio of circularly polarized spectra produced by NARVAL is not sufficiently high to reach the detection threshold of typical Zeeman signatures (whose amplitude does not exceed $10^{-4}I_c$ for low-activity stars, where $I_c$ is the continuum intensity). To solve this problem, we calculate from the reduced spectrum a single, cross-correlated photospheric line profile using the Least-Squares-Deconvolution (LSD) multi-line technique (detailed by Donati et al. 1997 and Kochukhov et al. 2010). Thanks to the large number of available photospheric lines in cool stars (several thousand in the spectral domain of NARVAL), the noise level is reduced by a factor of about 30 with respect to the initial spectrum. As an illustration, Fig. \[fig:stokes\] shows the resulting LSD signatures for successive observations of the Sun-like star $\xi$ Boo A. ![Normalized Stokes V profiles of $\xi$ Boo A for the summer of 2007, after correction of the mean radial velocity of the star. Black lines represent the data and red lines correspond to synthetic profiles of our magnetic model. Successive profiles are shifted vertically for display clarity. Rotational phases of observations are indicated in the right part of the plot and error bars are shown on the left of each profile.[]{data-label="fig:stokes"}](ksibooa_stokesvjul07.eps) Magnetic mapping and chromospheric emission =========================================== The Stokes I and Stokes V LSD profiles allow the derivation of various quantities to study the temporal variations of the magnetic field properties. Here we focus on the reconstruction of the surface distribution of the magnetic vector and on the computation of a chromospheric activity index.
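The multi-line noise gain quoted above can be illustrated with a toy simulation: averaging $K$ noisy copies of a common line profile reduces the noise roughly as $1/\sqrt{K}$, so $K\sim 1000$ lines yields a factor of order 30. This captures only the statistical essence of LSD, not the actual weighted deconvolution of Donati et al. (1997); all numbers below are illustrative:

```python
import math
import random

random.seed(42)    # deterministic toy run

K = 1000           # number of photospheric lines (order of magnitude from the text)
npix = 50          # pixels across the line profile
sigma = 1e-3       # per-pixel noise in units of the continuum

# Common underlying profile: a weak Gaussian absorption line (illustrative shape)
profile = [1.0 - 0.1 * math.exp(-((i - npix / 2) / 5.0) ** 2) for i in range(npix)]

# Simulate K noisy copies of the profile and average them pixel by pixel
mean = [0.0] * npix
for _ in range(K):
    for i in range(npix):
        mean[i] += (profile[i] + random.gauss(0.0, sigma)) / K

# Residual noise of the averaged profile versus the single-line noise
residual = [mean[i] - profile[i] for i in range(npix)]
rms = math.sqrt(sum(r * r for r in residual) / npix)
gain = sigma / rms
print("noise reduction factor ~ %.0f (sqrt(K) = %.0f)" % (gain, math.sqrt(K)))
assert 20 < gain < 50   # ~ sqrt(1000) ~ 32, consistent with the factor ~30 quoted
```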
Magnetic maps ------------- To reconstruct the surface magnetic geometry of the stars, we use Zeeman-Doppler Imaging (ZDI). This tomographic inversion technique is based on the modelling of the rotational modulation of the circularly polarized signal (Semel 1989). The time series of polarized signatures are iteratively compared to artificial profiles corresponding to a synthetic magnetic geometry, until a good fit is obtained between the model and the observations (Donati & Brown 1997, Donati et al. 2006). Thus, ZDI makes it possible to recover, to some extent, the location of magnetic regions, as well as the strength and orientation of the magnetic vector in magnetic spots. The application of this technique to cool stars with low $v$sin$i$ and moderate to low magnetic activity is described by Petit et al. (2008). In this case, ZDI is only sensitive to low-order field components, unlike the chromospheric flux, which also includes the contribution of smaller-scale magnetic elements. The resulting maps for the three stars presented here are illustrated in Fig. \[fig:hd78366\], \[fig:hd190771\] and \[fig:ksiboo\]. $N_{CaII H}$-index ------------------ From the Stokes I profiles, we construct an index to quantify the chromospheric emission changes in the CaII H line. The complete pipeline of the computation is described in detail in a forthcoming paper (Morgenthaler et al., submitted). We follow the methods of Duncan et al. (1991) and Wright et al. (2004), who define indexes based on Mount Wilson observations, and we calculate an $N_{CaIIH}$-index for our NARVAL observations. To ease the comparison of our chromospheric estimates with older studies, we calibrated the index against the values derived at Mount Wilson. The $N_{CaIIH}$-indexes we obtained for HD 78366, HD 190771 and $\xi$ Boo A are detailed in Tab. \[tab:indice\].
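Schematically, a Mount-Wilson-style index is the ratio of the flux collected in a narrow (here triangular) bandpass centred on the line core to the flux in a nearby continuum band. The sketch below is a toy version of such a computation; the band centres and widths are illustrative stand-ins in the spirit of Duncan et al. (1991), not the calibrated values used for $N_{CaIIH}$:

```python
# Illustrative band definitions (assumptions, not the actual N_CaIIH calibration)
H_CENTRE = 396.847   # Ca II H line centre [nm]
H_FWHM = 0.109       # FWHM of the triangular core bandpass [nm]
R_CENTRE = 400.107   # red continuum band centre [nm]
R_WIDTH = 2.0        # continuum band width [nm]

def ca_index(wave, flux):
    """Triangular-weighted H-core flux over mean continuum flux."""
    core, wsum, cont, n = 0.0, 0.0, 0.0, 0
    for lam, f in zip(wave, flux):
        w = max(0.0, 1.0 - abs(lam - H_CENTRE) / H_FWHM)  # triangle weight
        core += w * f
        wsum += w
        if abs(lam - R_CENTRE) < R_WIDTH / 2:
            cont += f
            n += 1
    return (core / wsum) / (cont / n)

# Toy spectrum: flat continuum with a deep H absorption trough
wave = [395.0 + 0.001 * i for i in range(7000)]
flux = [1.0 - 0.8 * max(0.0, 1.0 - abs(lam - H_CENTRE) / 0.3) for lam in wave]
quiet = ca_index(wave, flux)

# Chromospheric emission partially fills in the line core, so the index rises
flux_active = [f + 0.2 * max(0.0, 1.0 - abs(lam - H_CENTRE) / 0.05)
               for lam, f in zip(wave, flux)]
assert ca_index(wave, flux_active) > quiet
```

The key qualitative point is that chromospheric emission fills in the line core, raising the index, which is why the index tracks magnetic activity.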
Results =======

  Star          $T_{eff}$ (K)   Mass ($M_{\odot}$)   Radius ($R_{\odot}$)   $v$sin$i$ (km/s)   $P^{eq}_{rot}$ (d)   inclination (degrees)
  ------------- --------------- -------------------- ---------------------- ------------------ -------------------- -----------------------
  HD 78366      $6014 \pm 50$   $1.34 \pm 0.13$      $1.03 \pm 0.02$        $3.9 \pm 0.5$      $11.4 \pm 0.1$       $60 \pm 15$
  HD 190771     $5834 \pm 50$   $0.96 \pm 0.13$      $0.98 \pm 0.02$        $4.3 \pm 0.5$      $8.8 \pm 0.1$        $50 \pm 10$
  $\xi$ Boo A   $5551 \pm 20$   $0.86 \pm 0.07$      $0.80 \pm 0.03$        $3.0 \pm 0.4$      $6.43$               $28 \pm 5$

\ \[tab:param\] Since the monitoring began a few years ago, long-term changes in the magnetic properties have become observable in some of our targets. Both the magnetic quantities derived from ZDI and the chromospheric index exhibit temporal fluctuations over a wide range of timescales, due to rotational modulation and longer-term magnetic trends. Three representative examples of the different kinds of stellar variability we observed are described hereafter.

  Star          Set of obs.   $N_{CaIIH}$
  ------------- ------------- -------------------
  HD 78366      2008.09       $0.273 \pm 0.004$
                2010.04       $0.291 \pm 0.006$
                2011.08       $0.278 \pm 0.003$
  HD 190771     2007.59       $0.335 \pm 0.006$
                2008.67       $0.338 \pm 0.011$
                2009.47       $0.337 \pm 0.007$
                2010.50       $0.345 \pm 0.006$
  $\xi$ Boo A   2007.59       $0.443 \pm 0.008$
                2008.09       $0.420 \pm 0.008$
                2010.48       $0.403 \pm 0.006$
                2010.59       $0.402 \pm 0.011$

  : Chromospheric activity indices for HD 78366, HD 190771 and $\xi$ Boo A for each corresponding set of observations. \ \[tab:indice\] Short magnetic cycle: HD 78366 ------------------------------ A simple type of variability is observed for HD 78366. This target is slightly more massive than the Sun and rotates about two times faster (Tab. \[tab:param\]). The data sets for this object were collected over three distant seasons. The corresponding magnetic maps are shown in Fig. \[fig:hd78366\]. 
We observe two polarity switches, especially visible in the polar area of the radial field component, which is of negative polarity in 2008.09, positive in 2010.04 (and associated at that time with a more complex magnetic field geometry), and negative again in 2011.08. After the two observed polarity reversals, the magnetic field recovers its initial configuration. Assuming that the magnetic variability of this star is not much faster than the temporal sampling imposed by the right ascension of the star (visible only during winters), this first time-series suggests that HD 78366 may follow a magnetic cycle of about three years. We note that the chromospheric activity indicator $N_{CaII H}$ (Tab. \[tab:indice\]) seems to increase with the complexity of the large-scale magnetic field, with a clear maximum in 2010.04. Fast polarity reversals: HD 190771 ---------------------------------- A more complex type of variability is illustrated by HD 190771. It has a mass similar to the Sun's, but a rotation period of 8.8 days (Tab. \[tab:param\]). In Fig. \[fig:hd190771\], we plot the magnetic maps derived for this star. A polarity reversal is visible in the strong azimuthal component between 2007.59 and 2008.67 (Petit et al. 2009). Between 2008.67 and 2009.47, the magnetic geometry changed in a different manner: the magnetic field, which was mainly toroidal in 2008.67, became mostly poloidal in 2009.47. A second polarity reversal took place between 2009.47 and 2010.50, this time in the radial field component. In this case, the two successive polarity switches do not imply that the initial magnetic state is reached again, so the observed variability does not take the form of a cycle. In addition, we observe that the magnetic field intensity is correlated with the chromospheric emission. In the first three years, both the field strength and the chromospheric flux are roughly stable. We then observe a significant increase of these quantities in 2010.50 (Tab. 
\[tab:indice\]). Fast and complex variability: $\xi$ Bootis A --------------------------------------------- Finally, another, more complex type of variability is observed for $\xi$ Boo A, the least massive and most rapidly rotating star of our three examples (Tab. \[tab:param\]). It was observed at seven epochs, for which the magnetic field geometry and $N_{CaII H}$ were derived (Morgenthaler et al., submitted). Here we highlight two results of this long-term monitoring. The first concerns the 2007.59 and 2008.09 data sets (top part of Fig. \[fig:ksiboo\]). We observe that within a six-month interval, the intensity of the magnetic field decreased by about 50% and the magnetic geometry, which was quite simple in 2007.59 with an aligned dipole and a prominent ring of azimuthal field, became more complex and less axisymmetric in 2008.09, with a less pronounced toroidal surface component. The decrease of the large-scale field strength is also observed in the chromospheric flux, with a sharp drop of emission between the two epochs (Tab. \[tab:indice\]). The second example is visible in the set of observations collected during the summer of 2010, which we decided to split into two subsets (2010.48 and 2010.59) to take into account the fast variations of the Zeeman signatures over this short timespan. In the corresponding magnetic maps (bottom part of Fig. \[fig:ksiboo\]), the most striking evolution is a sharp increase of the azimuthal magnetic field. These changes take place at a roughly constant level of chromospheric emission. $\xi$ Boo A therefore undergoes fast and complex surface changes that differ from those of the two previous stars, and are reminiscent of the complex behaviour of other rapid rotators observed in the past (e.g. Donati et al. 2003). Discussion ========== All stars of our sample show variability over the four years of our monitoring, but of different types. 
Stars showing at least one field reversal over this timespan have in common a fast rotation period (at least twice the solar one) and masses equal to or slightly larger than that of the Sun (Fig. \[fig:m\_prot\]). In Fig. \[fig:m\_prot\], we include $\tau$ Bootis, which is not part of our sample but which is reported to be affected by a short magnetic cycle of two years at most (Fares et al. 2009). We also stress that active stars with masses below our lower mass boundary (in particular, mid-M dwarfs with masses just below the fully convective limit) are reported to possess strong, simple and stable surface magnetic fields (Morin et al. 2008a,b). ![Rotation period versus mass for the stellar sample. Pink symbols stand for stars with at least one polarity switch.[]{data-label="fig:m_prot"}](cycle_masse_prot.eps) $\tau$ Boo and HD 78366 were also observed at Mount Wilson as chromospherically active stars. For $\tau$ Boo, Baliunas et al. (1995) report a cycle of twelve years, versus two years from spectropolarimetry. Concerning HD 78366, periods of six and twelve years were identified using the Mount Wilson time-series, against about three years in our investigation. We therefore note that, at least for these two examples, the cycle lengths derived from chromospheric activity seem to be longer than those derived from spectropolarimetry. We suggest that this apparent discrepancy may be linked to the different temporal sampling inherent in the two approaches, so that the sampling adopted at Mount Wilson may not have been sufficiently tight to unveil short activity cycles. Future observations of our stellar sample will allow us to investigate longer timescales of the stellar magnetic evolution. The sample includes several solar twins (Petit et al. 2008) which have not shown cycles yet, and which will help us to determine how small departures from the solar fundamental parameters may affect the characteristics of magnetic cycles. 
More generally, a regular monitoring of our targets over more than one decade will enable us to determine more precisely the relation between the length/occurrence of magnetic cycles and the rotation/mass of Sun-like stars. This research made use of the POLLUX database (http://pollux.graal.univ-monpt2.fr) operated at LUPM (Université Montpellier II - CNRS, France, with support of the PNPS and INSU). We are grateful to the staff of TBL for their efficient help during the many nights dedicated to this observing project.

Aurière, M.: 2003, in Magnetism and Activity of the Sun and Stars, ed. J. Arnaud & N. Meunier, EAS Publ. Ser., 9, 105
Baliunas, S.L., Donahue, R.A., Soon, W.H., et al.: 1995, ApJ, 438, 269
Brown, B.P., Miesch, M.S., Browning, M.K., Brun, A.S., & Toomre, J.: 2011, ApJ, 731, 69
Donati, J.-F., & Brown, S.F.: 1997, A&A, 326, 1135
Donati, J.-F., Collier Cameron, A., Semel, M., et al.: 2003, MNRAS, 345, 1145
Donati, J.-F., Howarth, I.D., Jardine, M.M., et al.: 2006, MNRAS, 370, 629
Donati, J.-F., Semel, M., Carter, B.D., Rees, D.E., & Collier Cameron, A.: 1997, MNRAS, 291, 658
Duncan, D.K., Vaughan, A.H., Wilson, O.C., et al.: 1991, ApJS, 76, 383
Fares, R., Donati, J.-F., Moutou, C., et al.: 2009, MNRAS, 398, 1383
Fernandes, J., Lebreton, Y., Baglin, A., & Morel, P.: 1998, A&A, 338, 455
García, R.A., Mathur, S., Salabert, D., et al.: 2010, Science, 329, 1032
Ghizaru, M., Charbonneau, P., & Smolarkiewicz, P.K.: 2010, ApJ, 715, L133
Gray, D.F.: 1984, ApJ, 281, 719
Kochukhov, O., Makaganiuk, V., & Piskunov, N.: 2010, A&A, 524, A5
Morgenthaler, A., Petit, P., Saar, S.H., et al.: 2011, A&A, submitted
Morin, J., Donati, J.-F., Forveille, T., et al.: 2008a, MNRAS, 384, 77
Morin, J., Donati, J.-F., Petit, P., et al.: 2008b, MNRAS, 390, 567
Parker, E.N.: 1955, ApJ, 122, 293
Petit, P., Dintrans, B., Morgenthaler, A., et al.: 2009, A&A, 508, L9
Petit, P., Dintrans, B., Solanki, S.K., et al.: 2008, MNRAS, 388, 80
Petit, P., Donati, J.-F., Aurière, M., et al.: 2005, 
MNRAS, 361, 837
Semel, M.: 1989, A&A, 225, 456
Toner, C.G., & Gray, D.F.: 1988, ApJ, 334, 1008
Valenti, J.A., & Fischer, D.A.: 2005, ApJS, 159, 141
Wilson, O.C.: 1978, ApJ, 226, 379
Wright, J.T., Marcy, G.W., Butler, R.P., & Vogt, S.S.: 2004, ApJS, 152, 261

[^1]: Corresponding author:

[^2]: The Bernard Lyot Telescope is operated by the Institut National des Sciences de l’Univers of the Centre National de la Recherche Scientifique of France.
--- abstract: 'The recent experimental realization of spin-orbit coupling for ultracold atomic gases provides a powerful platform for exploring many interesting quantum phenomena. In these studies, spin represents the spin vector (spin-1/2 or spin-1) and orbit represents the linear momentum. Here we propose a scheme to realize a new type of spin-tensor–momentum coupling (STMC) in spin-1 ultracold atomic gases. We study the ground-state properties of interacting Bose-Einstein condensates (BECs) with STMC and find interesting new types of stripe superfluid phases and multicritical points for phase transitions. Furthermore, STMC makes it possible to study quantum states with dynamical stripe orders that display density modulation with a long tunable period and high visibility, paving the way for the direct experimental observation of a new dynamical supersolid-like state. Our scheme for generating STMC can be generalized to other systems and may open the door for exploring novel quantum physics and device applications.' author: - 'Xi-Wang Luo' - Kuei Sun - Chuanwei Zhang title: 'Spin-tensor–momentum-coupled Bose-Einstein condensates' --- [^1] *Introduction*.—The coupling between matter and gauge fields plays a crucial role in many fundamental quantum phenomena and practical device applications in condensed matter [@xiao2010berry; @hasan2010colloquium; @qi2011topological] and atomic physics [@dalibard2011colloquium]. A prominent example is spin-orbit coupling, the coupling between a particle’s spin and orbital (e.g., momentum) degrees of freedom, which is responsible for important physics such as topological insulators and superconductors [@hasan2010colloquium; @qi2011topological]. 
In this context, the recent experimental realization of spin-orbit coupling in ultracold atomic gases [@lin2011spin; @zhang2012collective; @qu2013observation; @olson2014tunable; @wang2012spin; @cheuk2012spin; @Williams2013; @huang2016experimental; @wu2016realization] opens a completely new avenue for investigating quantum many-body physics under gauge fields [@Stanescu2008; @zhang2008p; @Wu2011; @wang2010spin; @ho2011bose; @li2012quantum; @zhang2012mean; @hu2012spin; @ozawa2012stability; @Gong2011; @Hu2011; @Yu2011; @Qu2013b; @Zhang2013b; @galitski2013spin]. So far, in most works on spin-orbit coupling in solid-state and cold-atom systems, the spin degrees of freedom are taken as rank-1 spin vectors $F_{i}$ ($i=x,y,z$), such as the electron spin-1/2 or pseudospins formed by atomic hyperfine states that can be large (e.g., spin-1 or 3/2). Experimentally, spin-orbit coupling for spin-1 Bose-Einstein condensates (BECs) has been realized recently [@campbell2015itinerant; @luo2016tunable] and interesting magnetic physics has been observed [@lan2014raman; @Natu2015; @sun2016interacting; @yu2016phase; @martone2016tricriticalities]. Mathematically, it is well known that there exist not only spin vectors, but also spin tensors \[e.g., the irreducible rank-2 spin-quadrupole tensor $N_{ij}=\left( F_{i}F_{j}+F_{j}F_{i}\right) /2-\delta_{ij}\mathbf{F}^2/3$\] in a large-spin ($\geq 1$) system. Therefore two natural questions arise: *i*) Can the coupling between the spin tensors of particles and their linear momenta be realized in experiments? *ii*) What new physics may emerge from such spin-tensor–momentum coupling (STMC)? In this Letter, we address these two questions by proposing a simple experimental scheme for realizing STMC in spin-1 ultracold atomic gases. Our scheme is based on a slight modification of the previous experimental setup [@campbell2015itinerant] and is experimentally feasible. 
The STMC changes the band structure dramatically, leading to interesting new physics in the presence of many-body interactions between atoms. Although both bosons and fermions can be studied, here we only consider spin-1 BECs to illustrate the effects of STMC. Our main results are: *i*) The single-particle band structure with STMC consists of two bright-state bands (top and bottom) and one dark-state middle band \[Fig. \[fig:sys\](b)\], where the dark-state band is not coupled to the two bright-state bands through the Raman coupling. However, the dark-state band plays an important role in both the ground-state and dynamical properties of the interacting BECs. ![(a) Top: Experimental scheme to generate STMC in a BEC. Bottom: Raman transitions between three hyperfine spin states with detuning $\Delta $. (b) Single-particle band structure for Raman strength $\Omega =0.5$ and detuning $\Delta =0.1$. The (dominant) spin components $\left\vert 0\right\rangle $ and $\left\vert \pm \right\rangle =\frac{1}{\protect\sqrt{2}}(\left\vert \uparrow \right\rangle \pm \left\vert \downarrow \right\rangle )$ are indicated around the corresponding band minima.[]{data-label="fig:sys"}](Fig1_sys_f.pdf){width="1.0\linewidth"} *ii*) We study the ground-state phase diagrams with exotic plane-wave and stripe phases, where the dark-state middle band can be partially populated despite not being the single-particle ground state. The stripe phase is a coherent superposition of two or more plane-wave states. It possesses both the superfluid property of a BEC and a crystal-like density modulation that spontaneously breaks the translational symmetry of the Hamiltonian, satisfying two major criteria for supersolid order [@pomeau1994dynamics]. Experimentally, the stripe order has recently been observed indirectly using Bragg reflection [@li2017stripe]. We find that the transitions between different phases exhibit interesting multicriticality phenomena with triple, quadruple and even quintuple points. 
*iii*) The existence of the dark middle band makes it possible to study quantum states with *dynamical supersolid-like stripe orders*. In particular, we show how to dynamically generate a stripe state with a long tunable period ($\sim 5\mu $m) and high visibility ($\sim 100\%$) of the density modulation, which may be directly measured in experiments (such direct measurement is still challenging for the ground-state stripe patterns due to their short period and low visibility [@li2014superstripes]). Although not the ground state, the dynamical stripe state is a superfluid BEC that possesses interesting stripe patterns breaking the translational symmetry of the Hamiltonian, resembling a dynamical supersolid-like order. *The model*.—We consider a setup similar to that in the recent experiment [@campbell2015itinerant] but with a slightly different laser configuration, as shown in Fig. \[fig:sys\](a), where three Raman lasers with wavenumber $k_{\text{R}}$ are employed to generate STMC. The three lasers induce two Raman transitions between the hyperfine spin states $|0\rangle $ and $\left\vert \uparrow (\downarrow )\right\rangle $, both of which have the same recoil momentum $2k_{\text{R}}$ along the $x$ direction. 
The single-particle Hamiltonian in the spin-1 basis $(\left\vert \uparrow \right\rangle ,|0\rangle ,\left\vert \downarrow \right\rangle )^{T}$ is (we set $\hbar=1$) $$\widetilde{H}_{0}=-\frac{{\mathbf{\nabla }}^{2}}{2m}+\Delta F_{z}^{2}+\left( \sqrt{2}\Omega e^{i2k_{\text{R}}x}|0\rangle \langle +|+h.c.\right) ,$$where $F_{z}^{2}=\left\vert \uparrow \right\rangle \left\langle \uparrow \right\vert +\left\vert \downarrow \right\rangle \left\langle \downarrow \right\vert $ is equivalent to the spin tensor $N_{zz}$ (up to a constant), $|+\rangle \equiv \frac{1}{\sqrt{2}}(\left\vert \uparrow \right\rangle +\left\vert \downarrow \right\rangle )$, $\Omega $ is the Raman coupling strength, and $\Delta $ is the detuning for both the $\left\vert \uparrow \right\rangle $ and $\left\vert \downarrow \right\rangle $ states. We see that the other spin state $|-\rangle \equiv \frac{1}{\sqrt{2}}(\left\vert \uparrow \right\rangle -\left\vert \downarrow \right\rangle )$ is always an eigenstate that couples to neither $|0\rangle $ nor $|+\rangle $ through $\Omega $, and is thus a dark state. Since the BEC wavefunction in the $y$ and $z$ directions is not affected by the Raman lasers, we can consider the physics only along the $x$ direction [@sun2016interacting; @yu2016phase; @martone2016tricriticalities]. After a unitary transformation $U=\exp (-i2k_{\text{R}}xF_{z}^{2})$ to the quasi-momentum basis, we write the Hamiltonian in energy and momentum units $\frac{k_{R}^{2}}{2m}$ and $k_{R}$, respectively, as $$H_{0}=-\partial _{x}^{2}+(\Delta +4+4i\partial _{x})F_{z}^{2}+\sqrt{2}\Omega F_{x}, \label{eq:H0}$$where $\Omega $ and $\Delta $ are the dimensionless transverse-Zeeman and spin-tensor potentials, respectively, and $(i\partial _{x})F_{z}^{2}$ describes the coupling between the spin tensor $F_{z}^{2}$ and the linear momentum, *i.e.*, STMC. The single-particle Hamiltonian has three energy bands \[see a typical structure in Fig. \[fig:sys\](b)\]. 
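As a quick numerical check of these statements (a sketch in the dimensionless units of Eq. (\[eq:H0\]); the spin-1 matrices are written in the $(\left\vert \uparrow \right\rangle ,|0\rangle ,\left\vert \downarrow \right\rangle )$ basis), one can diagonalize $H_{0}$ for plane waves $e^{ikx}$, for which $-\partial _{x}^{2}\rightarrow k^{2}$ and $i\partial _{x}\rightarrow -k$, and verify that the middle band is exactly the $\Omega$-independent dark-state dispersion $(k-2)^{2}+\Delta$:

```python
import numpy as np

# Spin-1 operators in the (|up>, |0>, |down>) basis
Fx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Fz2 = np.diag([1.0, 0.0, 1.0])  # F_z^2

def H0(k, Omega, Delta):
    """H0(k) = k^2 + (Delta + 4 - 4k) F_z^2 + sqrt(2) Omega F_x,
    in units of k_R^2/2m (energy) and k_R (momentum)."""
    return k**2 * np.eye(3) + (Delta + 4.0 - 4.0 * k) * Fz2 \
        + np.sqrt(2.0) * Omega * Fx

Omega, Delta = 0.5, 0.1  # parameters of Fig. 1(b)
ks = np.linspace(-2.0, 5.0, 701)
E = np.array([np.linalg.eigvalsh(H0(k, Omega, Delta)) for k in ks])

# The middle band (index 1 in ascending order) is the dark-state branch
assert np.allclose(E[:, 1], (ks - 2.0) ** 2 + Delta)
```

The dark eigenvalue $k^{2}+\Delta+4-4k$ always lies between the two bright-band eigenvalues $k^{2}+\delta/2\pm\sqrt{\delta^{2}/4+2\Omega^{2}}$ (with $\delta=\Delta+4-4k$), so it is the middle band for any $k$ and any finite $\Omega$.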
The dark-state middle band always has the spin state $|-\rangle $ and spectrum $(k-2)^{2}+\Delta $, which are independent of $\Omega $. The top and bottom bright-state bands exhibit the same behavior as the known spin-orbit-coupled spin-$1/2$ system with spin states $|0\rangle $ and $|+\rangle $. The decoupling of the middle band is protected by the spin-tensor symmetry $[F_{x}^{2},H_{0}]=0$, under which the middle band (top and bottom bands) corresponds to $\langle F_{x}^{2}\rangle =0$ (1). Although the single-particle ground state always selects the bottom band, the atomic interactions can break the symmetry and drastically change the BEC’s ground state as well as dynamical properties by involving the middle band. Under the Gross-Pitaevskii (GP) mean-field approximation, the energy density becomes $$\varepsilon =\frac{1}{V}\int dx\left[ \Psi ^{\dag }H_{0}\Psi +\frac{g_{0}}{2}(\Psi ^{\dag }\Psi )^{2}+\frac{g_{2}}{2}(\Psi ^{\dag }\mathbf{F}_{U}\Psi )^{2}\right] , \label{eq:energydensity}$$with $V$ the system volume, and $\Psi $ the three-component condensate wavefunction normalized by the average particle number density $\bar{n}=V^{-1}\int dx\Psi ^{\dag }\Psi $. The interaction strengths $g_{0,2}$ represent density and spin interactions in spinor condensates [@ho1998spinor; @Ohmi1998Bose], respectively. $\mathbf{F}_{U}=U^{\dag }\mathbf{F}U$ is the unitarily transformed spin operator, whose $x$ and $y$ components exhibit spatial modulation that cannot be eliminated through any local spin rotation (different from previous models [@sun2016interacting; @yu2016phase; @martone2016tricriticalities]). Such modulation is essential for the stripe phases in the system. 
We consider a variational ansatz [@SM]$$\Psi =\sqrt{\bar{n}}\left( |c_{1}|\chi _{1}e^{ik_{1}x}+|c_{2}|\chi _{2}e^{ik_{2}x+i\alpha }\right) \label{eq:ans}$$to find the ground state, with $|c_{1}|^{2}+|c_{2}|^{2}=1$, and spinors $\chi _{j}=(\cos \theta _{j}\cos \phi _{j},-\sin \theta _{j},\cos \theta _{j}\sin \phi _{j})^{T}$. The energy density now becomes a functional of eight variational parameters $|c_{1}|$, $k_{1}$, $k_{2}$, $\theta _{1}$, $\theta _{2}$, $\phi _{1}$, $\phi _{2}$, and $\alpha $, and its minimization ($\varepsilon_{\text{g}}=\min\{\varepsilon\}$) leads to the ground state [@SM]. The quantum phase diagram can be characterized by the variational wavefunction, the experimental observables $\langle F_{z}\rangle $ and $\langle F_{z}^{2}\rangle $, and the symmetry $\langle F_{x}^{2}\rangle $. The derivative of the ground-state energy $\frac{\partial \varepsilon_{\text{g}}}{\partial \Delta}= \langle F^2_z\rangle$ ($\frac{\partial^2 \varepsilon_{\text{g}}}{\partial \Delta^2}= \frac{\partial \langle F^2_z\rangle}{\partial \Delta}$) displays a discontinuity as $\Delta$ varies across a first-order (second-order) phase boundary [@SM]. This argument also applies to $\frac{\partial \varepsilon_{\text{g}}}{\partial \Omega}$ ($\frac{\partial^2 \varepsilon_{\text{g}}}{\partial \Omega^2}$) [@SM]. We also numerically solve the GP equation using imaginary time evolution to obtain the ground states, which are in good agreement with the variational results. ![ (a) Ground state phase diagram in the $\Omega$-$\Delta$ plane with $g\bar{n}=3$. The dashed line is a crossover boundary. (b) Zoom-in of the framed region in (a). (c) \[(d)\] Ground state phase diagram in the $g$-$\Delta $ ($g$-$\Omega$) plane with $\Omega=0.16$ ($\Delta=0$). Solid (dotted) lines represent first (second) order phase transitions. 
The interaction ratio is $g_0=-50g_2\equiv g$.[]{data-label="fig:phase"}](Fig2_phase_f.pdf){width="1.0\linewidth"} *Phase diagram*.—For ferromagnetic interaction $g_{2}<0$ (e.g., $^{87}$Rb), the BEC has three plane-wave ($|c_{1}c_{2}|=0$) and two stripe ($|c_{1}c_{2}|\neq 0$) phases (Fig. \[fig:phase\]): (I) plane-wave phase in $k<1$, having $\langle F_{z}\rangle =0$ (spin unpolarized), $\langle F_{z}^{2}\rangle <0.5$, and $\langle F_{x}^{2}\rangle =1$ (middle band unpopulated); (II) plane-wave phase in $k>1$, having $\langle F_{z}\rangle =0$, $\langle F_{z}^{2}\rangle >0.5$, and $\langle F_{x}^{2}\rangle =1$; (III) spin-polarized plane-wave phase in $k>1$, having $\langle F_{z}\rangle \neq 0$ and $\langle F_{x}^{2}\rangle <1$ (middle band populated); (IV) mix-band stripe phase, having $k_{1}<1$, $k_{2}>1$, and $\langle F_{x}^{2}\rangle <1$; (V) bottom-band stripe phase, same as (IV) except $\langle F_{x}^{2}\rangle =1$. The last three phases exhibit $Z_{2}$ ferromagnetism: phases (III), (IV), and (V) all have twofold degenerate ground states with global ferromagnetic order $\pm \langle F_{z}\rangle \neq 0$, $\pm \langle F_{y}\rangle \neq 0$, and $\pm \langle F_{x}\rangle \neq 0$, respectively. Note that these orders are calculated in the laboratory frame (the basis of ${\widetilde{H}}_{0}$) and reflect the energetic preference of the ferromagnetic interaction. For anti-ferromagnetic interaction $g_{2}>0$ (e.g., $^{23}$Na), the system has a relatively simple phase diagram containing only the two plane-wave phases (I) and (II), separated by a first-order phase boundary at $\Delta =0$. Hereafter we focus on the ferromagnetic case. In Fig. \[fig:phase\](a) we plot the phase diagram in the $\Omega $-$\Delta $ plane. 
At a sufficiently large $\Omega $, the middle band does not participate in the ground state, so the phase diagram is similar to that of the spin-orbit-coupled spin-$1/2$ system: the two plane-wave phases (I) and (II) are separated by a first-order-transition boundary (solid line along $\Delta =0$) if $\Omega <\Omega _{c}$ or a crossover one (dashed line) if $\Omega >\Omega _{c}$. As $\Omega $ decreases, the middle-band minimum gets closer to the right minimum of the bottom band \[Fig. \[fig:sys\](b)\]. If the BEC originally stays in the plane-wave phase (II) ($\Delta <0$), it starts to partially occupy the middle band \[Fig. \[fig:phase\](b), bottom inset\], undergoing a second-order transition (dotted curve) to the polarized phase (III). From the energetic point of view, the BEC populates a slightly higher single-particle energy state to become polarized and reduce the ferromagnetic interaction energy. Note that phase (III) is still a plane-wave phase since the BEC occupies both bands at the same $k$. ![(a) \[(b)\] The local density modulations of phase (IV) \[(V)\] in Fig. \[fig:phase\](b), with $\Omega=0.16$ and $\Delta=0.006$ ($\Delta=0.023$). (c) \[(d)\] $\langle F_z^2\rangle$ (blue-solid line) and $\langle F_z\rangle$ (red-dashed line) vs $\Omega$ ($\Delta$) along the path $\Delta=-0.018$ ($\Omega=0.16$) in Fig. \[fig:phase\](b). Dots (lines) are obtained from the imaginary-time GP equation (variational method). []{data-label="fig:GP"}](Fig3_GP_f.pdf){width="1.0\linewidth"} At a small $\Omega $ and $\Delta >0$, the energy difference between the single-particle band minimum \[plane wave (I)\] and the other bottom-band minimum \[plane wave (II)\] or the middle-band minimum is comparable to the interaction energy, so the BEC may favor the co-occupation of (I) and a higher-energy local minimum as long as the total energy can be reduced more by the interaction. In Fig. \[fig:phase\](b), we zoom in on the framed region of Fig. 
\[fig:phase\](a) and show the emergence of two stripe phases. The mix-band stripe phase (IV) is the superposition of plane wave (I) and the one around the middle-band minimum (top inset). Phase (IV) exhibits spin-density waves due to the superposition \[Fig. \[fig:GP\](a)\] and a global ferromagnetic order $\langle F_{y}\rangle \neq 0$ that reduces the $g_{2}$ interaction energy, compensating the higher middle-band energy. Note that phase (IV) has a uniform total density due to the orthogonality between the middle- and bottom-band spins, but the spin-density waves form a stripe pattern. The bottom-band stripe phase (V), which appears at even weaker $\Omega $ and $\Delta $, is the superposition of two bottom-band plane waves (I) and (III) \[Fig. \[fig:phase\](d) inset\]. Phase (V) exhibits a total-density wave \[Fig. \[fig:GP\](b)\], which, compared with (IV), increases the $g_{0}$ interaction energy, but the total energy is favorable due to the pure bottom-band occupation and the global ferromagnetic order $\langle F_{x}\rangle \neq 0$. We remark that the superposition of three plane waves (with co-occupation of three band minima) is never energetically favorable because it cannot maximize the ferromagnetic order. Returning to the phase diagram Fig. \[fig:phase\](b), the (I)–(IV) phase boundary corresponds to a second-order transition, which meets the (II)–(III) boundary at a quadruple point $C_{\mathrm{quad}}$ at $\Delta =0$. The (IV)–(V) boundary corresponds to a first-order transition, which encounters phase (III) at a triple point $C_{3}^{\mathrm{T}}$ at $\Delta =0$. To study the dependence on interaction, we plot the phase diagram in the $\Delta $-$g$ plane in Fig. \[fig:phase\](c), with a fixed ratio $g_{0}=-50g_{2}\equiv g$. We see that the stripe region increases with $g$ (due to the increasing $g_{2}$), and phase (IV) is more favorable than (V) in the large-$g$ region (due to the large $g_{0}$). 
For the plane-wave phases (II) and (III), the latter has global ferromagnetic order $\langle F_{z}\rangle \neq 0$ and is hence favorable at strong interaction. The $\Delta $-$g$ diagram also shows first-order transitions between any two of the (III), (IV), and (V) phases, second-order transitions between any other adjacent phases, and four triple points $C_{1,2,3,4}^{\mathrm{T}}$ at the (I)-(II)-(V), (II)-(III)-(V), (III)-(IV)-(V), and (I)-(IV)-(V) encounters, respectively. In Fig. \[fig:phase\](d), we show how the encounters of phases along $\Delta =0$ change with the interaction. We see that phases (III) and (IV) survive at large $g$, while (I) and (II) survive at large $\Omega $, in agreement with the energetic argument. The boundaries represent three traces of the triple points $C_{1,3}^{\mathrm{T}}$ and the quadruple point $C_{\mathrm{quad}}$, respectively, which intercept at a quintuple point $C_{\mathrm{quin}}$ as the joint of all five phases. In Figs. \[fig:GP\](a) and (b), we plot the spatial profiles of each spin component’s density $\rho _{\downarrow ,0,\uparrow }$ and the total density $\rho _{t}$ for stripe phases (IV) and (V), respectively. Phase (IV) shows out-of-phase modulations between $\rho _{\uparrow }$ and $\rho _{\downarrow }$, representing a spin-vector ($F_z$) density wave, and uniform $\rho _{0}$ and $\rho _{t}$, while (V) shows in-phase modulations of all components and hence of $\rho _{t}$, in which $\rho _{\uparrow ,\downarrow }$ overlap each other, representing a spin-tensor ($F^2_{z}$) density wave. The modulation wavevector matches the lasers’ recoil momentum $2k_{R}$ (i.e., $|k_{2}-k_{1}|=2k_{R}$). This can be understood in the quasi-momentum frame: the minimization of the $g_{2}$ interaction energy requires matched modulations between the spin components and the spin operator $\mathbf{F}_{U}$ in Eq. (\[eq:energydensity\]). 
Since the separation between the two band minima is smaller than $2k_{R}$ at finite $\Omega $, the two plane-wave components of the stripe phases do not exactly stay at the band minima. In Figs. \[fig:GP\](c) and (d), we plot $\langle F_{z}\rangle $ (squares) and $\langle F_{z}^{2}\rangle $ (circles) along the (III)-(II) and (III)-(V)-(IV)-(I) transition paths in Fig. \[fig:phase\](b), respectively. The discontinuity in the spin-tensor polarization $\langle F_{z}^{2}\rangle $ (its first derivative) indicates the occurrence of a first-order (second-order) phase transition. *Dynamical stripe state*.—The middle-band minimum and the right bottom-band minimum are close to each other (both near $k=2$). Therefore a coherent superposition of plane waves at these two minima leads to a long-period stripe state, which can be directly measured in experiments. To generate such a stripe state, we consider $^{\text{87}}$Rb atoms in a harmonic trap $\omega =2\pi \times 50$Hz, initially prepared in the spin state $\left\vert \uparrow \right\rangle $ with the Raman lasers off and $\Delta <0$ \[the initial state belongs to phase (III) since the two minima coincide and are equally populated, as $\left\vert \uparrow \right\rangle =\frac{1}{\sqrt{2}}(\left\vert +\right\rangle +\left\vert -\right\rangle )$\]. The 800-nm Raman lasers are gradually turned on such that $\Omega $ increases from $0$ to $\Omega _{\text{f}}$ within a time $T$ and then remains constant. For an adiabatic process, where the ramping rate of $\Omega $ is much slower than the energy scale of the spin-interaction strength $g_{2}\bar{n}$, the system will stay in the ground-state plane-wave phase (III) until $\Omega $ exceeds the critical value where a transition to plane-wave phase (II) occurs. 
For a dynamical process, in contrast, where the ramping rate of $\Omega $ is much faster than the spin-interaction strength (but much slower than other energy scales such as the trapping frequency), the system no longer stays in the ground state, and the BEC components at the two band minima are expected to split in momentum space, leading to the stripe state. ![ (a)-(c) Averaged momentum $\bar{k}_{m}$ ($\bar{k}_{b}$) and percentage population (colorbar) of atoms in the middle (bottom) band. The thin dashed line in (a) shows the band minima. (d)-(f) The initial (dashed line) and final (solid line) spin density $\protect\rho _{\uparrow }$ corresponding to (a)-(c), respectively. The interactions are $(g_{0}\bar{n},g_{2}\bar{n})=(0,0)$, $(0.5,0)$ and $(4,-0.2)$ for (a), (b) and (c). Other parameters are $T=100$ms, $\Delta =-0.05$, and $\Omega_\text{f}=0.7$.[]{data-label="fig:Dyn"}](Fig4_Dyn_f.pdf){width="1.0\linewidth"} Figs. \[fig:Dyn\](a) and (d) show the results of real-time GP simulations for non-interacting atoms. The averaged momenta $\bar{k}_{\text{b}}$ and $\bar{k}_{\text{m}}$ of atoms in the bottom and middle bands follow their respective band minima, with $\bar{k}_{\text{b}}$ displaying a slight dipole oscillation [@chen2012collective] at $t>T$ due to the collective excitations caused by the finite increasing rate of $\Omega $. The final state is a stripe state similar to phase (IV) but with a much higher visibility and a longer period, and the stripe pattern is moving rather than stationary due to the dynamical phases of the two bands [@SM]. For atoms with realistic interactions $|g_{2}|\ll g_{0}$, and for a dynamical process much faster than $g_{2}\bar{n}$, we can neglect the spin interaction and focus on the density-interaction effects. The density interaction preserves the symmetry $F_{x}^{2}$ and thus the atom populations of the two bands remain unchanged. 
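The scale of the stripe period can be estimated from the single-particle bands alone (a rough consistency sketch under the assumptions of this section: 800-nm lasers, $\Omega _{\text{f}}=0.7$, $\Delta =-0.05$, interactions and the finite ramp neglected): the beat wavevector of the two condensate components is the separation of the two minima in units of $k_{R}=2\pi /800\,$nm.

```python
import numpy as np

# Spin-1 operators in the (|up>, |0>, |down>) basis
Fx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Fz2 = np.diag([1.0, 0.0, 1.0])

def bands(k, Omega, Delta):
    """Bands of H0(k) = k^2 + (Delta + 4 - 4k) F_z^2 + sqrt(2) Omega F_x
    (dimensionless units of k_R^2/2m and k_R), in ascending order."""
    H = k**2 * np.eye(3) + (Delta + 4.0 - 4.0 * k) * Fz2 \
        + np.sqrt(2.0) * Omega * Fx
    return np.linalg.eigvalsh(H)

Omega_f, Delta = 0.7, -0.05
ks = np.linspace(1.0, 3.0, 4001)           # scan around the right minima
E = np.array([bands(k, Omega_f, Delta) for k in ks])

k_b = ks[np.argmin(E[:, 0])]               # right bottom-band minimum
k_m = ks[np.argmin(E[:, 1])]               # dark middle-band minimum (k = 2)
k_R = 2.0 * np.pi / 0.8                    # recoil wavenumber for 800 nm, um^-1
period = 2.0 * np.pi / (abs(k_m - k_b) * k_R)  # stripe period in um
print(f"minima at k = {k_b:.3f} and {k_m:.3f}; period ~ {period:.1f} um")
```

This single-particle estimate gives a period of several $\mu$m, consistent in scale with the GP result below; the full simulation also includes interaction and ramp effects.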
However, $\bar{k}_{\text{m}}$ shifts together with $\bar{k}_{\text{b}}$ at the beginning; then they separate and eventually return to their respective band minima. At $t>T$, the density interaction induces synchronous dipole oscillations of $\bar{k}_{\text{m}}$ and $\bar{k}_{\text{b}}$ with a frequency different from the single-particle case \[see Fig. \[fig:Dyn\](b)\]. Nevertheless, we obtain a stripe state as the final state \[see Fig. \[fig:Dyn\](e)\] with a long period ($\sim 5\mu $m for $\Omega _{\text{f}}=0.7$) and high visibility (close to 100%). For $^{\text{87}}$Rb with $g_{2}=-0.005g_{0}$, such dynamical stripe states can always be obtained in the region where $|g_{2}|\bar{n}\ll T^{-1}$ [@SM]. Also, the stripe period can be tuned by changing the value of $\Omega _{\text{f}}$ (e.g. $\Omega _{\text{f}}=1$ leads to a period of $\sim 3\mu $m) [@SM]. Such periodic density modulations of the dynamical stripe phases break the translational symmetry of the Hamiltonian, showing dynamical supersolid-like properties. In the opposite region, where the dynamical process is slow compared to the spin interaction, the system follows the plane-wave ground state. As $\Omega $ increases, atoms are transferred from the middle to the bottom band until a transition to phase (II) occurs. Thus the final state has no middle-band population and no stripe state is obtained, as shown in Figs. \[fig:Dyn\](c) and (f), with tiny stripes caused by weak excitations. *Conclusions.*—In summary, we propose a scheme to realize STMC in a spin-1 BEC, and study its ground-state and dynamical properties. The interplay between STMC and atomic interactions leads to many interesting quantum phases and multicritical points for phase transitions. The STMC offers a simple way to generate a new type of dynamical stripe state with high visibility and a long, tunable period, paving the way for direct experimental observation of the long-sought stripe states.
The proposed STMC for ultracold atoms opens the door for exploring many other interesting phenomena, such as STMC fermionic superfluids, Bogoliubov excitations with an interesting roton spectrum [@khamehchi2014measurement; @Ji2015], non-Abelian STMC (similar to Rashba spin-orbit coupling), and STMC in optical lattices (where nontrivial topological bands may emerge). **Acknowledgements**: We thank P. Engels for helpful discussion. This work is supported by AFOSR (FA9550-16-1-0387), NSF (PHY-1505496), and ARO (W911NF-17-1-0128). Supplementary Materials {#supplementary-materials .unnumbered} ======================= ### Validation of the ansatz {#validation-of-the-ansatz .unnumbered} [The top and bottom bright-state bands exhibit the same physics as the known spin-orbit-coupled spin-$1/2$ system: the two spin branches $|0\rangle $ and $|+\rangle $ with relative energy difference $\Delta $ are separated by $2k_{R}$ at $\Omega =0$, and mix to form the top/bottom bands with a gap at finite $\Omega $. At $\Delta =0$, the bottom band has degenerate double minima for $\Omega <\sqrt{2}E_{\text{R}}$, above which the band makes a transition to a single-minimum structure. The decoupling of the middle band is protected by the spin-tensor symmetry $F_{x}^{2}$, under which the middle band (top and bottom bands) corresponds to $\langle F_{x}^{2}\rangle =0$ (1). Therefore, even if the gap between the middle- and bottom-band minima is small \[$\sim O(\Omega ^{2})$ at weak $\Omega $\], the single-particle ground state always selects one minimum on the bottom band. However, the atomic interactions can break the symmetry and drastically change the BEC’s ground state by involving the middle band.]{} The ground state is mainly determined by the two lower bands, with three minima in total.
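This band structure can be verified with a minimal numerical sketch. It diagonalizes the momentum-space form of the single-particle Hamiltonian used in the variational calculation, $H(k)=k^{2}+(\Delta +4-4k)F_{z}^{2}+\sqrt{2}\Omega F_{x}$, in dimensionless recoil units; the grid resolution and parameter values below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Spin-1 matrices; dimensionless units (energies in E_R, momenta in k_R).
# H(k) = k^2 + (Delta + 4 - 4k) F_z^2 + sqrt(2) Omega F_x is the momentum-space
# form of H_0 = -d^2/dx^2 + (Delta + 4 + 4i d/dx) F_z^2 + sqrt(2) Omega F_x.
Fx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Fz2 = np.diag([1.0, 0.0, 1.0])

def bands(k, Omega, Delta):
    H = k**2 * np.eye(3) + (Delta + 4 - 4 * k) * Fz2 + np.sqrt(2) * Omega * Fx
    return np.linalg.eigvalsh(H)          # sorted: bottom, middle, top

Omega, Delta = 0.5, 0.0                   # illustrative weak coupling
ks = np.linspace(-1, 3, 4001)
E = np.array([bands(k, Omega, Delta) for k in ks])

# Middle band: the decoupled dark state (1, 0, -1)/sqrt(2) with
# E_m(k) = (k - 2)^2 + Delta, i.e. a single minimum at k = 2 of depth Delta.
k_m = ks[np.argmin(E[:, 1])]

# Bottom band: double minima (near k = 0 and k = 2) at weak Omega.
n_min = np.sum((E[1:-1, 0] < E[:-2, 0]) & (E[1:-1, 0] <= E[2:, 0]))
print(k_m, n_min)                         # three minima in total
```

For $\Delta=0$ and weak $\Omega$ this reproduces the decoupled middle band with its single minimum at $k=2$ and the degenerate double minima of the bottom band, i.e. the three minima discussed above.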
So we may consider a more general ansatz $$\begin{aligned} \Psi=\sqrt{\bar{n}}\left(|c_1|\chi_1e^{ik_1x} +|c_2|\chi_2e^{ik_2x+i\alpha}+|c_3|\chi_3e^{ik_3x+i\beta}\right), \label{eq:ans-sm}\end{aligned}$$ with $k_1\simeq 0$, $k_{2,3}\simeq 2$, and $\chi_i=(\cos\theta_i\cos\phi_i,-\sin\theta_i,\cos\theta_i\sin\phi_i)^T$. The stripe phase is supposed to lower the spin interaction $g_2(\Psi^{\dag}\mathbf{F}_U\Psi)^2$ by generating ferromagnetic order. The ferromagnetic order is maximized when $k_{2}=k_3=k_1+2$, that is, when the modulation of the spin density is equal to the modulation of the spin operator $\mathbf{F}_U$. Then Eq. (\[eq:ans-sm\]) is reduced to the ansatz given in the main text. These arguments are verified numerically with the ansatz of Eq. (\[eq:ans-sm\]): we always find $k_3=k_2=k_1+2$ for the ground state. ### Variational energy density {#variational-energy-density .unnumbered} In the following, we give a detailed derivation of the variational energy density, using the variational ansatz $$\begin{aligned} \Psi=\left(\begin{array}{c} \psi_{+1} \\ \psi_{0} \\ \psi_{-1} \\ \end{array}\right)=\sqrt{\bar{n}}|c_1|\left( \begin{array}{c} \cos(\theta_1)\cos(\phi_1) \\ -\sin(\theta_1) \\ \cos(\theta_1)\sin(\phi_1) \\ \end{array} \right)e^{ik_1x} +\sqrt{\bar{n}}|c_2|\left( \begin{array}{c} \cos(\theta_2)\cos(\phi_2) \\ -\sin(\theta_2) \\ \cos(\theta_2)\sin(\phi_2) \\ \end{array} \right)e^{ik_2x+i\alpha}.\end{aligned}$$ The single-particle energy density is $$\begin{aligned} \varepsilon_0=\frac{1}{V}\int\Psi^\dag H_0\Psi dx=\frac{1}{V}\int\Psi^\dag[(-\partial^2_x) + (\Delta+4+4i\partial_x) F_z^2 + \sqrt{2}\Omega F_x]\Psi dx.\end{aligned}$$ We have $$\frac{1}{V}\int\Psi^\dag (-\partial^2_x)\Psi dx=\frac{1}{V}\int\sum_{j=0,\pm1}\psi^*_j (-\partial^2_x)\psi_j dx=\bar{n}\sum_{i=1}^2|c_i|^2k_i^2,$$ and similarly we obtain $$\frac{1}{V}\int\Psi^\dag \sqrt{2}\Omega F_x\Psi dx=-\bar{n}\sqrt{2}\Omega 
\sum_{i=1}^2|c_i|^2\sin(2\theta_i)\sin(\phi_i+\frac{\pi}{4}),$$ and $$\frac{1}{V}\int\Psi^\dag (\Delta+4+4i\partial_x) F_z^2\Psi dx=\bar{n}\sum_{i=1}^2(\Delta+4-4k_i)|c_i|^2\cos^2(\theta_i).$$ The density-interaction energy is $$\begin{aligned} \varepsilon_{\text{d}}&=&\frac{1}{V}\int dx \frac{g_0}{2} (\Psi^{\dag} \Psi)^2= \frac{g_0}{2}\frac{1}{V}\int dx \left(\sum_{j=0,\pm1}|\psi_j|^2\right)^2 \nonumber\\ &=&\bar{n}\frac{g_0\bar{n}}{2}\left\{1+2|c_1|^2|c_2|^2[\sin(\theta_1)\sin(\theta_2)+\cos(\theta_1)\cos(\theta_2)\cos(\phi_1-\phi_2)]^2\right\},\end{aligned}$$ and the spin-interaction energy is $$\begin{aligned} \varepsilon_\text{s}=\frac{1}{V}\int dx \frac{g_2}{2} (\Psi^{\dag}\mathbf{F}_U\Psi)^2,\end{aligned}$$ with spatially modulated spin operator $\mathbf{F}_U=(F_U^x,F_U^y,F_U^z)$, $$F_U^x=\frac{1}{\sqrt{2}}\left( \begin{array}{ccc} 0 & e^{i2k_Rx} & 0 \\ e^{-i2k_Rx} & 0 & e^{-i2k_Rx} \\ 0 & e^{i2k_Rx} & 0 \\ \end{array} \right)$$ $$F_U^y=\frac{1}{\sqrt{2}}\left( \begin{array}{ccc} 0 & -ie^{i2k_Rx} & 0 \\ ie^{-i2k_Rx} & 0 & -ie^{-i2k_Rx} \\ 0 & ie^{i2k_Rx} & 0 \\ \end{array} \right)$$ and $F_U^z=F_z$. 
Thus we have $$\begin{aligned} \varepsilon_\text{s} &=&\frac{g_2}{2}\frac{1}{V}\int dx \left[\left(|\psi_{+1}|^2-|\psi_{-1}|^2\right)^2+2\left|\psi_0^*\psi_{+1}e^{-i2k_{\text{R}}x}+\psi_{-1}^*\psi_0e^{i2k_{\text{R}}x}\right|^2\right]\nonumber \\ &=&\bar{n}\frac{g_2\bar{n}}{2} \big\{2|c_1c_2|^2\cos^2(\theta_1)\cos^2(\theta_2)\cos^2(\phi_1+\phi_2) + |c_1c_2|^2\sin(2\theta_1)\sin(2\theta_2)\cos(\phi_1-\phi_2) \nonumber\\ & & +\left[\sum\nolimits_i|c_i|^2\cos^2(\theta_i)\cos(2\phi_i)\right]^2 +2\left[\sum\nolimits_i|c_i|^2\sin^2(\theta_i)\right]\left[\sum\nolimits_i|c_i|^2\cos^2(\theta_i)\right] \nonumber\\ & & +2\delta_{k_1,k_2-2}|c_1c_2|^2\sin^2(\theta_1)\cos^2(\theta_2)\sin(2\phi_2)\cos(2\alpha)\big\}.\end{aligned}$$ Then we obtain the total energy density $$\varepsilon=\varepsilon_0+\varepsilon_\text{d}+\varepsilon_\text{s}.$$ The stripe phase is supposed to lower the spin-interaction energy density $\varepsilon_\text{s}$, which contains a term proportional to $\delta_{k_1,k_2-2}$. This gives the mathematical reason why we always have $k_2-k_1=2$ in the stripe phases. [The variational ansatz leads to an energy density which is a functional of eight parameters. Such an energy density plays the role of a Ginzburg-Landau potential, and the ground state and the corresponding energy density are obtained by minimizing the Ginzburg-Landau potential with respect to all eight parameters.]{} [The quantum phase diagram can be characterized by the variational wavefunction, the experimental observables $\langle F_{z}\rangle $ and $\langle F_{z}^{2}\rangle $, and the symmetry property $\langle F_{x}^{2}\rangle $. The phase transitions in our system are determined based on the Ehrenfest classification, with the order of the phase transition given by the lowest derivative of the ground-state energy density $\varepsilon_\text{g}=\min\{\varepsilon\} $ that is discontinuous at the transition. 
In particular, we examine the derivatives $\frac{\partial \varepsilon_\text{g} }{\partial \Delta }= \langle F_{z}^{2}\rangle $ and $\frac{\partial ^{2}\varepsilon_\text{g} }{\partial \Delta ^{2}}= \frac{\partial \langle F_{z}^{2}\rangle }{\partial \Delta }$ (one can apply the Hellmann-Feynman theorem to obtain these relations); $\langle F_{z}^{2}\rangle $ ($\frac{\partial \langle F_{z}^{2}\rangle }{\partial \Delta }$) displays a discontinuity as $\Delta $ varies across a first-order (second-order) phase boundary \[see Figs. 3(c) and (d) in the main text\]. This argument also applies to the derivatives $\frac{\partial \varepsilon_\text{g}}{\partial \Omega}$ ($\frac{\partial^2 \varepsilon_\text{g}}{\partial \Omega^2}$), as shown in Fig. \[fig:sm0\], though they are less experimentally accessible. For a crossover, all these derivatives are continuous.]{} ![ [First- (blue solid line) and second- (red dashed line) order derivatives of the ground-state energy with respect to the Raman coupling strength $\Omega$. The discontinuity in the second-order derivative implies that the transition between phases (II) and (III) is second order (with the boundary given by the black dotted vertical line). Other parameters are the same as in Fig. 3(c) in the main text.]{}[]{data-label="fig:sm0"}](Fig_sm0_f.pdf){width="0.5\linewidth"} ### Perturbation analysis {#perturbation-analysis .unnumbered} We consider the regime where $\Omega$ and $\Delta$ are small and the interactions are weak. For the ground-state properties, we can safely omit the high-energy top band and consider only the two lower bands.
The middle band has a minimum at $k=2$ with spin state $$\begin{aligned} \chi_{\text{m}}=\left( \begin{array}{ccc} \frac{1}{\sqrt{2}}, & 0, & \frac{-1}{\sqrt{2}} \end{array} \right)^T.\end{aligned}$$ The bottom band has two minima, one at $k\simeq0$ with spin state $$\begin{aligned} \chi_{\text{b,l}}=\left( \begin{array}{ccc} -\frac{\Omega}{4}, & 1-\frac{\Omega^2}{16}, & -\frac{\Omega}{4} \end{array} \right)^T,\end{aligned}$$ and the other at $k\simeq2$ with spin state $$\begin{aligned} \chi_{\text{b,r}}=\left( \begin{array}{ccc} \frac{1-\frac{\Omega^2}{16}}{\sqrt{2}}, & -\frac{\Omega}{2\sqrt{2}}, & \frac{1-\frac{\Omega^2}{16}}{\sqrt{2}} \end{array} \right)^T.\end{aligned}$$ ![ (a) Phase diagram in the $\Omega$-$\Delta$ plane with $g=1.5$. (b) Phase diagram in the $\Delta$-$g$ plane with $\Omega=0.16$. Solid lines represent first-order phase transitions while dotted lines represent second-order phase transitions. The phase diagram is obtained using perturbation analysis with interaction ratio $g_0=-50g_2\equiv g$.[]{data-label="fig:sm1"}](Fig_sm1_f.pdf){width="0.75\linewidth"} As discussed above, the ground state may contain at most two plane waves, so we consider a perturbation ansatz $$\begin{aligned} \Psi_\text{p}=|c_1|\chi_{\text{b,l}}e^{ik_1x} +\left(|c_2|\chi_{\text{b,r}}e^{i\alpha} +|c_3|\chi_{\text{m}}e^{i\beta}\right)e^{i(k_1+2)x},\end{aligned}$$ with $|c_1|^2+|c_2|^2+|c_3|^2=1$. The energy density now becomes $$\begin{aligned} \varepsilon&=&-|c_1|^2\frac{\Omega^2}{2}+|c_2|^2(\Delta-\frac{\Omega^2}{2})+|c_3|^2\Delta + \frac{g_0}{2}\left(1+|c_1|^2|c_2|^2\Omega^2\right) \nonumber\\ & &+g_2\left[|c_1|^2(|c_2|^2+|c_3|^2)+2|c_2c_3|^2\cos^2(\alpha-\beta)\right] +g_2\left[|c_1c_2|^2\cos(2\alpha)-|c_1c_3|^2\cos(2\beta)\right].\end{aligned}$$ According to the second partial derivative test, it can be proven that the minima of $\varepsilon$ always satisfy $c_1c_2c_3=0$, which means that the co-occupation of all three band minima is never energetically favorable.
So there are three cases: \(1) $c_1=0$ and $\cos^2(\alpha-\beta)=1$: $\Psi_\text{p}$ describes a plane-wave state in phase (II) or a polarized plane-wave state in phase (III), with energy density $$\begin{aligned} \varepsilon_{23}&\equiv&\varepsilon|_{c_1=0} =|c_2|^2(\Delta-\frac{\Omega^2}{2})+|c_3|^2\Delta + \frac{g_0}{2}+2g_2|c_2c_3|^2. \label{eq:E23}\end{aligned}$$ \(2) $c_2=0$ and $\sin^2(\beta)=1$: $\Psi_\text{p}$ describes a plane-wave state in phase (I) or a stripe state in phase (IV), with energy density $$\begin{aligned} \varepsilon_{31}&\equiv&\varepsilon|_{c_2=0} =-|c_1|^2\frac{\Omega^2}{2}+|c_3|^2\Delta + \frac{g_0}{2}+2g_2|c_1c_3|^2. \label{eq:E31}\end{aligned}$$ \(3) $c_3=0$ and $\cos^2(\alpha)=1$: $\Psi_\text{p}$ describes a plane-wave state in phase (I) or a stripe state in phase (V), with energy density $$\begin{aligned} \varepsilon_{12}&\equiv&\varepsilon|_{c_3=0} =-|c_1|^2\frac{\Omega^2}{2}+|c_2|^2(\Delta-\frac{\Omega^2}{2}) + \frac{g_0}{2}\left(1+|c_1|^2|c_2|^2\Omega^2\right) +2g_2|c_1c_2|^2. \label{eq:E12}\end{aligned}$$ [Generally, the Ginzburg-Landau potential can *not* be written as a functional of a single scalar order parameter for the interacting multi-component bosonic fields considered here. Nevertheless, by assuming a perturbative ansatz with a fixed spin state and a reduced parameter space, the effective Ginzburg-Landau potential can be written as a functional of a single scalar order parameter (either $c_1$ or $c_2$) for certain phase transitions, as can be seen from Eqs. (\[eq:E23\]), (\[eq:E31\]), and (\[eq:E12\]).]{} Therefore, the ground state is determined by minimizing $\varepsilon_{12},\varepsilon_{23},\varepsilon_{31}$, with the ground-state energy density given by $\varepsilon_{\text{g}}=\min\{\varepsilon_{12},\varepsilon_{23},\varepsilon_{31}\}$. Using Eqs. (\[eq:E23\]), (\[eq:E31\]), and (\[eq:E12\]), it is straightforward to calculate the ground state and the corresponding energy density $\varepsilon_{\text{g}}$.
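As a quick consistency check on Eq. (\[eq:E23\]), the sketch below minimizes $\varepsilon_{23}$ over the single order parameter $x=|c_2|^2$ for $g_2<0$; the interaction values are illustrative choices, not taken from the paper:

```python
import numpy as np

def eps23(x, Omega, Delta, g0, g2):
    # Eq. (E23) with x = |c_2|^2 and |c_3|^2 = 1 - x
    return x * (Delta - Omega**2 / 2) + (1 - x) * Delta + g0 / 2 + 2 * g2 * x * (1 - x)

g0, g2, Delta = 1.5, -0.03, 0.0        # illustrative values, g2 < 0
x = np.linspace(0.0, 1.0, 100001)

def minimizer(Omega):
    return x[np.argmin(eps23(x, Omega, Delta, g0, g2))]

Oc = np.sqrt(-4 * g2)                  # candidate boundary Omega^2 = -4 g_2
x_below, x_above = minimizer(0.9 * Oc), minimizer(1.1 * Oc)
# Below the boundary the minimum is interior, x = 1/2 - Omega^2/(8 g_2)
# (both plane waves occupied); above it the minimum is pinned at x = 1.
print(x_below, x_above)
```

Below $\Omega^2=-4g_2$ the interior minimum describes the polarized plane-wave phase (III); above it, $x=1$ gives the plane-wave phase (II). Since $x$ evolves continuously to $1$ as $\Omega$ grows, the transition is second order.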
The phase boundaries can be obtained by examining the ground state or the derivatives of $\varepsilon_{\text{g}}$ with respect to $\Omega, \Delta, \cdots$. As shown in Figs. \[fig:sm1\](a) and (b), we find that the phase boundary between (I) and (II) \[(III) and (IV)\] is $\Delta=0$, the phase boundary between (I) and (IV) is $\Delta=-2g_2-\Omega^2/2$, the boundary between (II) and (III) is $\Omega^2=-4g_2$, and the boundary between (V) and (I) \[(II)\] is $\pm\Delta=2g_2+g_0\Omega/2$. The phase diagrams obtained by perturbation analysis, as well as the behavior of the multicriticalities, are in good qualitative agreement with the full variational calculation, though the exact phase boundaries are slightly different. This is because the perturbation results are valid only to the order of $(g_2,\Delta,\Omega^2)$, and generally the spin states of interacting BECs differ slightly from the spin states in the perturbation ansatz. ![ (a) Ground-state phase diagram in the $\Omega$-$\Delta$ plane for $^{87}$Rb BECs. Solid lines represent first-order phase transitions while dotted lines represent second-order phase transitions. (b) Density modulations of stripe phase (IV) in the presence of a harmonic trap, with $\Omega=0.08$, $\Delta=0.02$. (c) Density modulations of stripe phase (V) in the presence of a harmonic trap, with $\Omega=0.08$, $\Delta=0.005$. In (a-c), the typical $^{87}$Rb interaction ratio $g_0=-200g_2$ is used, with $g_0\bar{n}=2.2$.[]{data-label="fig:sm2"}](Fig_sm2_f.pdf){width="0.8\linewidth"} ### Effects of interaction ratio and harmonic trap {#effects-of-interaction-ratio-and-harmonic-trap .unnumbered} In the main text, we have fixed the interaction ratio as $g_0=-50g_2$; a stronger (weaker) $g_2$ enlarges (shrinks) the regions of the stripe and polarized plane-wave phases, but does not qualitatively change the phase diagram structure. To show this, in Fig. \[fig:sm2\](a), we give the phase diagram for the interaction ratio $g_0=-200g_2$ of $^\text{87}$Rb atoms.
Typically, the atomic density is about $10^{15}$cm$^{-3}$; for an s-wave scattering length of 100.48$a_0$ ($a_0$ is the Bohr radius) and a Raman-laser wavelength of $800$nm, the corresponding interaction strength is $g_0\bar{n} = 2.2$. Moreover, in realistic experiments the BECs are confined by a harmonic trap; we consider a trapping frequency $\omega=2 \pi\times 0.2$kHz and calculate the ground state using imaginary-time evolution of the GP equation. Figs. \[fig:sm2\](b) and (c) show the ground-state density modulations corresponding to the stripe phases (IV) and (V). ![ (a) and (b) The evolution of the spin density corresponding to Figs. 4(d) and (e) in the main text. (c) and (d) The same as in Figs. 4(b) and (e) in the main text except that the interaction ratio of $^{87}$Rb ($g_0=-200g_2$) is used, with $g_0\bar{n}=0.5$.[]{data-label="fig:sm3"}](Fig_sm3_f.pdf){width="0.75\linewidth"} ![ (a)-(c) Averaged momentum $\bar{k}_{m}$ ($\bar{k}_{b}$) and percentage population (colorbar) of atoms in the middle (bottom) band. The thin dashed line in (a) shows the band minima. (d)-(f) The initial (dashed line) and final (solid line) spin density $\protect\rho _{\uparrow }$ corresponding to (a)-(c), respectively. The interactions are $(g_{0}\bar{n},g_{2}\bar{n})=(0,0)$, $(0.5,0)$ and $(4,-0.2)$ for (a), (b) and (c). The final value of $\Omega$ is $\Omega_\text{f}=1$, leading to a stripe period of $\sim3\mu$m. $\omega=2\pi\times100$Hz is used to reduce the time period, with $T=65$ms. 
The detuning is $\Delta =-0.05$.[]{data-label="fig:sm4"}](Fig_sm4_f.pdf){width="0.8\linewidth"} ### Dynamical stripe states {#dynamical-stripe-states .unnumbered} [Our dynamical process (where the ramping rate of $\Omega$ is much faster than the spin-interaction strength) leads to a final state that is a nearly equal superposition of two plane waves (with an overall Gaussian-packet form in the presence of a harmonic trap), $$\Psi =c_{\text{b}}\chi _{\text{b}}e^{ik_{\text{b}}x}+c_{\text{m}}\chi _{\text{m}}e^{ik_{\text{m}}x+i\phi _{\text{m}}(t)},$$where "b" ("m") labels the bottom (middle) band, with spin states $\chi _{\text{b(m)}}$, momenta $k_{\text{b(m)}}$ and coefficients $c_{\text{b(m)}}\simeq \frac{1}{\sqrt{2}}$. Although the equal superposition remains over time, $\Psi $ differs from the ordinary stripe state by a dynamical phase $\phi _{\text{m}}(t)$ originating from the energy difference between the middle and bottom bands.]{} [Nevertheless, this state has a uniform total density and a striped, sinusoidal spin density. Although the spin-density modulation propagates in space due to the dynamical phase $\phi _{\text{m}}(t)$, the visibility and period of the spin-density modulation do not change, as shown in Figs. \[fig:sm3\](a) and (b). Furthermore, the dynamical stripe state itself has the superfluid property and breaks the translational symmetry of the Hamiltonian, showing a supersolid-like property. Note that the density-modulation period is tunable and long enough for direct experimental observation of such a dynamical stripe state.]{} Long-period and high-visibility dynamical stripe states can always be obtained as long as the spin interaction is weak, as shown in Figs. \[fig:sm3\](c) and (d), where we consider weakly interacting $^\text{87}$Rb atoms with $g_0\bar{n}=0.5$ and the typical interaction ratio $g_0=-200g_2$.
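The uniform total density and near-unity stripe visibility of such an equal superposition can be checked directly. The sketch below uses the lowest-order spin states $\chi_{\text{m}}$ and $\chi_{\text{b,r}}$ from the perturbation analysis; the momentum separation `dk` is a hypothetical illustrative value, and the dynamical phase is set to zero since it only translates the pattern:

```python
import numpy as np

Omega = 0.7
# Lowest-order spin states of the two nearby band minima (perturbation analysis)
chi_m = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)                         # middle band
chi_b = np.array([1 - Omega**2 / 16, -Omega / 2, 1 - Omega**2 / 16]) / np.sqrt(2)
chi_b /= np.linalg.norm(chi_b)                                          # bottom band, k ~ 2

dk = 0.16            # hypothetical momentum separation of the two minima (units of k_R)
xs = np.linspace(0.0, 4 * np.pi / dk, 4000)

# Equal superposition of the two plane waves; the dynamical phase phi_m(t)
# is set to zero because it only shifts the pattern in space.
Psi = (np.outer(np.exp(1j * 0.0 * xs), chi_b) +
       np.outer(np.exp(1j * dk * xs), chi_m)) / np.sqrt(2)

n_tot = np.sum(np.abs(Psi)**2, axis=1)    # uniform: chi_b and chi_m are orthogonal
rho_1 = np.abs(Psi[:, 0])**2              # one spin component: sinusoidal stripes
vis = (rho_1.max() - rho_1.min()) / (rho_1.max() + rho_1.min())
print(n_tot.std(), vis)
```

Because $\chi_{\text{b,r}}$ and $\chi_{\text{m}}$ are orthogonal, the interference term cancels in the total density, while each spin component carries a sinusoidal modulation of period $2\pi/\delta k$ with visibility close to 1.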
The population percentage of the bottom band is slightly increased, since some middle-band atoms are scattered into the bottom band by the spin interaction, and the stripe pattern in Fig. \[fig:sm3\](d) moves similarly to that in Fig. \[fig:sm3\](b). Moreover, the modulation period can be tuned by changing the value of $\Omega_{\text{f}}$, as shown in Fig. \[fig:sm4\] with $\Omega_{\text{f}}=1$ and a corresponding stripe period of $\sim3\mu$m. In Fig. \[fig:sm4\](e), the stripe visibility is slightly reduced, because the spin component $\vert0\rangle$ in the bottom band increases slightly with $\Omega_\text{f}$. [^1]: Corresponding author.\ Email: <chuanwei.zhang@utdallas.edu>
--- abstract: 'I discuss an upper bound on the boost and the energy of elementary particles. The limit is derived utilizing the core principle of relativistic quantum mechanics stating that there is a lower bound for the localization of an elementary quantum system, together with the assumption that when the localization scale reaches the Planck length, elementary particles are removed from the $S$-matrix observables. The limits for the boost and energy, $M_{\mathrm{Planck}}/m$ and $M_{\mathrm{Planck}}c^{2}\approx\,8.6\cdot 10^{27}\,\mathrm{eV}$, are defined in terms of fundamental constants and the mass of the elementary particle and do not involve any dynamical scale. These bounds imply that the cosmic ray flux of any flavor may stretch up to energies of order $10^{18}$ GeV and will cut off around this value.' author: - George Japaridze title: Maximal attainable boost and energy of elementary particles as a manifestation of the limit of localizability of elementary quantum systems --- Introduction ============ This letter presents a scenario establishing an ultimate upper bound on the Lorentz boost and energy of elementary particles with non-zero mass. Cosmic ray detectors measure ever higher energies; recently the IceCube collaboration released data containing extraterrestrial neutrino events with $E_{\nu}\sim$ PeV ($10^{15}$ eV) [@icecube]. The region of energies probed is steadily increasing: cosmic ray experiments such as HiRes, Telescope Array, and Auger report events with unprecedented energies of up to $10^{20}$ eV [@CR]. It is natural to ask whether, as follows from classical relativity, there is indeed no upper bound on Lorentz boosts and energies. Accounting for the quantum features of elementary particles and for gravity may halt the increase of the boost and energy of elementary particles.
Though no deviation from Lorentz invariance has been observed, implying that the space-time symmetry is described by a noncompact group and that boosts and energies can acquire arbitrarily large values, there have been numerous attempts to tame the unbounded growth of the boost and energy, see e.g. [@mat], [@alan]. The present note addresses this question. The suggested mechanism for the maximal attainable boost and energy of elementary particles is based on the fundamental concept of the limit of localizability of an elementary quantum system in relativistic quantum mechanics [@nw]. This limit is combined with the conjecture that when the localization scale of an elementary particle reaches a critical value, defined by the maximum of the Planck length and the Schwarzschild radius, the particle can not be observed as a quantum of an asymptotic [*in, out*]{} field and thus is no longer an $S$-matrix observable. For brevity, let us call this assumption the quantum hoop conjecture, since it can be viewed as a quantum counterpart to the well-known hoop conjecture, which suggests that when a system is localized inside a volume whose size is of the order of its classical gravitational length, the Schwarzschild radius, the system undergoes gravitational collapse [@hoop]. Combining the quantum hoop conjecture with the limit of localizability of an elementary particle leads to an upper bound on the Lorentz boost and energy of elementary particles. Assuming that the mass $m$ of the elementary quantum system is less than the Planck mass $M_{\mathrm{P}}$, this model predicts a maximal attainable boost $\Gamma^{\mathrm{max}}=M_{\mathrm{P}}/m$ and a maximal attainable energy $E^{\mathrm{max}}=M_{\mathrm{P}}c^{2}$. When the boost reaches $\Gamma^{\mathrm{max}}$, the contracted localization scale reaches the critical value and the elementary particle is no longer an $S$-matrix observable.
The limits on boost and energy depend only on fundamental constants and the mass, and can therefore be considered the ultimate bounds for the boost and energy of an elementary particle. Maximal attainable boost and maximal attainable energy ====================================================== To begin with, let us recall the fundamental units. In this letter the Planck mass $M_{\mathrm{P}}$ is defined as the value of the mass parameter at which the Compton wavelength of a system with mass $m$, $\lambda_{q}(m)=\hbar/mc$, is equal to the Schwarzschild radius, $\lambda_{\mathrm{gr}}(m)=2Gm/c^{2}$ (throughout, a four-dimensional space-time with no extra dimensions is considered): $$\label{Plank} \lambda_{q}(M_{\mathrm{P}})\equiv{\hbar\over M_{\mathrm{P}}c}=\lambda_{\mathrm{gr}}(M_{\mathrm{P}})\equiv{2GM_{\mathrm{P}}\over c^{2}},$$ where $\hbar$ is the reduced Planck constant, $G$ is the gravitational constant and $c$ is the speed of light. This results in $$\label{m1} M_{{\mathrm{P}}}=\sqrt{{\hbar c\over 2 G}}\,\simeq\,1.5\cdot 10^{-8}\mathrm{kg}\simeq8.6\,\cdot\,10^{27}\,\mathrm{eV}/c^{2}.$$ The Planck length $\lambda_{{\mathrm{P}}}$ is defined as the Compton wavelength of a system with the Planck mass, evidently coinciding with the Schwarzschild radius of a system with the Planck mass. From Eq. (\[Plank\]) it follows that $$\label{plength} \lambda_{{\mathrm{P}}}\,\equiv {\hbar\over M_{\mathrm{P}}c}\,=\,{2GM_{\mathrm{P}}\over c^{2}}\,=\,\sqrt{{2\,G\,\hbar\over c^{3}}}\simeq \,2.3\cdot 10^{-35}\,\mathrm{m}.$$ These values differ by a factor of $\sqrt{2}$ from the ones established on a purely dimensional basis.
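The numerical values in Eqs. (\[m1\]) and (\[plength\]) follow directly from the fundamental constants; a quick check (standard CODATA values assumed) is:

```python
import math

hbar = 1.054571817e-34      # reduced Planck constant, J s
c = 2.99792458e8            # speed of light, m/s
G = 6.67430e-11             # gravitational constant, m^3 kg^-1 s^-2
eV = 1.602176634e-19        # J per eV

# Factor-of-2 convention of Eqs. (1)-(3): Compton wavelength = Schwarzschild radius
M_P = math.sqrt(hbar * c / (2 * G))       # Planck mass, kg
l_P = math.sqrt(2 * G * hbar / c**3)      # Planck length, m
E_P = M_P * c**2 / eV                     # Planck energy, eV

print(M_P, l_P, E_P)   # ~1.5e-8 kg, ~2.3e-35 m, ~8.6e27 eV
```

The same numbers divided by $\sqrt{2}$ recover the conventional dimensional-analysis Planck units.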
Note the relation expressing the mass-independent Planck length in terms of the mass-dependent $\lambda_{q}(m)$ and $\lambda_{\mathrm{gr}}(m)$: $$\label{lll} \lambda_{q}(m)\,\lambda_{\mathrm{gr}}(m)\,=\,\lambda^{2}_{{\mathrm{P}}}.$$ We say that a system is localized inside a volume with linear size $L_{0}$ when the probability of finding the system inside a ball of linear size $L_{0}$ is $1$. Observation is described by exchanging quanta of [*in, out*]{} fields between the localized system and an outside observer, separated from the system by a distance much greater than $L_{0}$. In classical physics the rearrangement of the physical degrees of freedom when the localization scale reaches its critical value is formulated by the hoop conjecture, which states that a black hole forms whenever the amount of energy $mc^{2}$ is compacted inside a region that in no direction extends outside a circle of circumference (roughly) equal to $2\pi\lambda_{\mathrm{gr}}(m)$ [@hoop]. In other words, no signal from a ball of radius $a$ can reach an external observer when $a<\lambda_{\mathrm{gr}}(m)$. Originally, the hoop conjecture was put forward for astrophysical bodies, macroscopic objects which can be reasonably described by the classical theory of gravity [@hoop]. To establish the maximal attainable boost we introduce and utilize a quantum counterpart to the hoop conjecture, which we call throughout the quantum hoop conjecture. The quantum hoop conjecture states that whenever the localization scale of an elementary quantum system approaches the Planck length $\lambda_{{\mathrm{P}}}$, the system is not observed as an elementary particle, i.e. as a quantum of the asymptotic [*in, out*]{} fields. The idea that below the Planck length the notions of space-time and length cease to exist is not new, see [@mead]-[@sabine]; we just express it in terms of elementary particles, quanta of [*in, out*]{} fields.
In the solution of the Heisenberg equations of motion of quantum field theory [@qft], $$\label{HHH} \langle \alpha|\hat{\Psi}|\beta \rangle = \langle \alpha|\sqrt{Z}\hat{\Psi}_{in,\,out}|\beta \rangle+\langle\alpha|{\hat R}|\beta \rangle,$$ the term $\langle \alpha|\sqrt{Z}\hat{\Psi}_{in,\,out}|\beta \rangle$ describes an incoming/outgoing free particle, the quantum of the asymptotic field, and $\hat{R}$ stands for the rest of the solution, which in the case of a classical source $j(x)$ is $\hat{R}(x)=\int\,d^{4}y\,j(y)\triangle_{ret,\,adv}(x-y)$. In terms of quantum field theory, the quantum hoop conjecture translates into the statement that when the localization scale reaches the Planck length, the matrix element $\langle \alpha| \sqrt{Z}\hat{\Psi}_{in,\,out}|\beta\rangle$ vanishes, $$\label{LLLKK} \lim_{\Gamma\to\Gamma^{\mathrm{max}}}\,\langle \alpha(\Gamma)| \sqrt{Z}\hat{\Psi}_{in,\,out}|\beta(\Gamma)\rangle\,=\,0.$$ Note that the flux of the incoming/outgoing particles and, consequently, the $S$-matrix elements are defined by $\hat{\Psi}_{in,\,out}$; if a matrix element of $\hat{\Psi}_{in,\,out}$ vanishes, the corresponding $S$-matrix element vanishes as well [@qft]. According to the quantum hoop conjecture, the $S$-matrix element corresponding to the scattering of a particle with a boost exceeding the maximal attainable boost, $\Gamma(m)\,>\,\Gamma^{\mathrm{max}}(m)$, should vanish.
The simplest way to realize this is to modify the expression for the operator of the [*in*]{} field: $$\label{psi1} \hat{\Psi}_{in}\,=\,\sum_{{\bf k}}\,{a_{in}({\bf k})\,e^{-ikx}+a^{\dagger}_{in}({\bf k})\,e^{ikx}\over 2k_{0}}\,\to\,\sum_{{\bf k}}\,{\widetilde{a}_{in}({\bf k})\,e^{-ikx}+\widetilde{a}^{\dagger}_{in}({\bf k})\,e^{ikx}\over 2k_{0}},$$ where $k_{0}=\sqrt{{\bf k}^{2}+m^{2}}$, $\widetilde{a}_{in}({\bf k})\equiv a_{in}({\bf k})\,\Theta(E^{\mathrm{max}}\,-\,k_{0})$, $a_{in}({\bf k}),\,a^{\dagger}_{in}({\bf p})$ are the annihilation and creation operators of the elementary particle (the quantum of the [*in*]{} field) satisfying the usual commutator relation $[a_{in}({\bf k}),\,a^{\dagger}_{in}({\bf p})]=\delta({\bf k}-{\bf p})$, $\Theta(E^{\mathrm{max}}\,-\,k_{0})$ is the Heaviside step function and $E^{\mathrm{max}}$ is the energy of a particle of mass $m$ boosted with $\Gamma^{\mathrm{max}}(m)$. From (\[psi1\]) it follows that the matrix element of the asymptotic field $\hat{\Psi}_{in}(x)$ between the vacuum and the one-particle state $|k\rangle=a^{\dagger}_{in}(k)|0\rangle$ is $$\label{inset3} \langle 0|\hat{\Psi}_{in}(x)|k\rangle\,\sim\,\sqrt{Z}\,e^{ikx}\,\Theta(E^{\mathrm{max}}\,-\,k_{0}),$$ which vanishes when $k_{0}\,\geq \,E^{\mathrm{max}}$. As long as $k_{0}\leq E^{\mathrm{max}}$, i.e. when $\Gamma\leq \Gamma^{\mathrm{max}}(m)$, the vacuum-to-one-particle matrix element of an [*in, out*]{} field exists, i.e. a particle can be observed as a quantum of $\hat{\Psi}_{in,\,out}$, and the flux and the $S$-matrix exist. The phenomenological condition (\[psi1\]) has to be derived from a future theory of quantum gravity, in a regime where an elementary particle having energy of the order of the maximal attainable energy $E^{\mathrm{max}}$ in some reference frame would presumably interact with the degrees of freedom of the unknown underlying theory of quantum gravity. Eqs. (\[psi1\]) and (\[inset3\]) are written in a preferred reference frame.
Like any other scenario invoking a maximum attainable boost and energy, and thus postulating the existence of a preferred reference frame, the present model requires a reference frame to which the maximum boost is compared. In this work, motivated by cosmological considerations, the cosmic rest frame is chosen as such a reference frame. This is the reference frame in which the cosmic microwave background (CMB) is at rest and its temperature is homogeneous, equal to 2.73 K. The Earth rest frame moves relative to the CMB with a peculiar velocity of $\sim 370$ km/s $\approx 0.0012\,c$, as follows from measurements of the CMB dipole anisotropy [@earth]. Because of the low value of $\Gamma_{\mathrm {Earth-CMB}}\sim 1$, to a good approximation the Earth rest frame can be identified with the preferred reference frame, the one in which, as $\Gamma\,\rightarrow\,\Gamma^{\mathrm{max}}$, no elementary particle can be observed. The maximal attainable boost for an elementary particle with mass $m$, $\Gamma^{{\mathrm max}}(m)$, is derived from the requirement that the Lorentz-contracted localization scale remains larger than both the Schwarzschild radius and the Planck length, so that the elementary particle is still observable as a quantum of the [*in, out*]{} fields. From classical special relativity it follows that, when boosted with $\Gamma$, the localization scale is spatially contracted and becomes $L=L_{0}/\Gamma$. This relation holds in relativistic quantum mechanics as well: the position operator acquires an overall factor of $1/\Gamma$ when the reference frame is boosted with $\Gamma$ [@nw1].
From the requirement that the Lorentz-contracted size of the system is still larger than the critical value $L_{\mathrm{critical}}$, given by either the gravitational radius or the Planck scale, $$\label{boost0} L={L_{0}\over \Gamma}\,\geq\,L_{\mathrm{critical}}\equiv \mathrm{max}(\lambda_{\mathrm{P}},\,\lambda_{\mathrm{gr}}(m)),$$ it follows that $$\label{boost1} \Gamma\,\leq\, \Gamma^{\mathrm{max}}(m)={L_{0}\over L_{\mathrm{critical}}}\,=\,{L_{0}\over \lambda_{\mathrm{q}}(m)}\,{\lambda_{\mathrm{q}}(m)\over L_{\mathrm{critical}}}.$$ We utilize the well-known fact that in the framework of relativistic quantum mechanics a particle at rest cannot be localized with an accuracy better than its Compton wavelength $\lambda_{\mathrm{q}}(m)$, i.e. $\mathrm{min}(L_{0})=\lambda_{\mathrm{q}}(m)$ [@nw], [@qft]. Then the inequality (\[boost1\]) turns into $$\label{boost2} \Gamma\,\leq\, \Gamma^{\mathrm{max}}(m)\,=\,{\lambda_{\mathrm{q}}(m)\over L_{\mathrm{critical}}}.$$ We assume that the mass of the elementary quantum system is bounded from above, $m\leq M_{\mathrm{P}}$. This constraint is realized as an inequality ordering the spatial scales as follows $$\label{scales} \lambda_{\mathrm{gr}}(m)\leq \lambda_{\mathrm{P}}\leq \lambda_{\mathrm{q}}(m),$$ and consequently $L_{\mathrm{critical}}=\mathrm{max}(\lambda_{\mathrm{P}},\,\lambda_{\mathrm{gr}}(m))=\lambda_{\mathrm{P}}$. Therefore, the maximal attainable boost for a particle with mass $m\leq M_{\mathrm{P}}$ is $$\label{boost6} \Gamma^{\mathrm{max}}(m)={\lambda_{\mathrm{q}}(m)\over \lambda_{\mathrm{P}}}={\lambda_{\mathrm{P}}\over \lambda_{\mathrm{gr}}(m)}={M_{\mathrm{P}}\over m}.$$ The hoop conjecture, which may serve as a physical insight into the rearrangement of the space of physical degrees of freedom, appeals to the Schwarzschild radius $\lambda_{\mathrm{gr}}$ as the minimal localization length.
The connection between the Planck length and gravity effects was established long ago: in the presence of gravity it is impossible to measure the position of a particle with an error smaller than $\lambda_{\mathrm{P}}$ [@mead], [@garay], [@carlo]. We have assumed that, in analogy with the hoop conjecture, which suggests that an object collapses into a black hole when localized in a region smaller than its classical gravitational radius $2Gm/c^{2}$, an elementary particle is removed from the $S$-matrix observables when localized in a volume of size less than $\lambda_{\mathrm{P}}$, the latter determined by both $G$ and $\hbar$. When the mass of the elementary particle approaches $M_{\mathrm{P}}$, as seen from Eq. (\[lll\]), the Planck length and the Schwarzschild radius coincide, $$\label{joe} \lambda_{\mathrm{P}}\,=\,\lambda_{\mathrm{gr}}(M_{\mathrm{P}}),$$ i.e. the quantum and the classical hoop conjectures merge into the same supposition. According to Eq. (\[boost6\]), the value of the maximal attainable boost varies from particle to particle, e.g. $\Gamma^{\mathrm{max}}(\mathrm{proton})=M_{\mathrm{P}}/m_{\mathrm{proton}}\approx 9.2\cdot 10^{18}$, $\Gamma^{\mathrm{max}}(\mathrm{electron})\approx 1.7\cdot 10^{22}$. Not so for the maximal attainable energy: $E^{\mathrm{max}}$ is the same for all particles and is given by the Planck energy $E_{\mathrm{P}}\equiv M_{\mathrm{P}}c^{2}$: $$\label{maxe} E^{\mathrm{max}}=\Gamma^{\mathrm{max}}(m)\,mc^{2}\,=\,M_{\mathrm{P}}c^{2}\approx\,8.6\cdot\,10^{27}\,\mathrm{eV}.$$ It is important to note that the above results are obtained and are valid for systems with $m\leq M_{\mathrm{P}}$, which we consider as elementary particles, i.e. quanta of the [*in, out*]{} fields. The disappearance of the [*in, out*]{} fields from the solution of the Heisenberg equations of motion, in other words, removing elementary particles from the $S$-matrix observables when $\Gamma\,>\, \Gamma^{\mathrm{max}}$, seems to violate the $S$-matrix unitarity.
Indeed, asymptotic completeness, according to which the [*in*]{} and [*out*]{} states span the same Hilbert space, which is also assumed to agree with the Hilbert space of the interacting theory [@qft], $$\label{Hilb} {\cal H}_{in}\,=\,{\cal H}_{out}\,={\cal H}_{\mathrm{interacting}},$$ is not satisfied. The condition of asymptotic completeness is nontrivial already in the framework of standard quantum field theory: if particles can form bound states, the structure of the space of states is modified, and the $S$-matrix unitarity is restored only after bound states are accounted for in the unitarity condition [@qft]. Accounting for gravity brings in another reason for the violation of the $S$-matrix unitarity. It has previously been observed that [*in*]{} and [*out*]{} states, which are related by a unitary transformation, cannot be defined in the presence of an arbitrary metric [@parker]. Applying the quantum hoop conjecture to elementary particles drives this observation to the extreme, stating that as soon as the localization region becomes smaller than $\lambda_{\mathrm{P}}$, the space of physical degrees of freedom is rearranged and elementary particles, the quanta of the [*in, out*]{} fields, are no longer observables. In this case, the unitarity condition has to be formulated not in terms of elementary particles, but in terms of the new physical degrees of freedom of quantum gravity, a task which is beyond the scope of this letter. As for the predictions of the suggested scenario, the only clear one is the cut-off of particle beams and cosmic-ray fluxes at the limiting value $E_{\mathrm{P}}\sim 10^{18}$ GeV, an energy which is not accessible to modern accelerators and cosmic-ray observatories. Up to $E_{\mathrm{P}}$, i.e. up to $\Gamma^{\mathrm{max}}$, while the localization region is still larger than the Planck length, the physical degrees of freedom, the observables, are elementary particles.
When $E\geq E_{\mathrm{P}}$, elementary particles are removed from the observables (in full analogy with a system collapsing into a black hole according to the hoop conjecture); thus the cosmic-ray flux should vanish when $E\rightarrow 10^{18}$ GeV. As mentioned above, the maximum attainable boost varies from particle to particle and equals $M_{\mathrm{P}}/m$, but the maximum attainable energy is the same, $E_{\mathrm{P}}$, for any type of elementary particle. These bounds are “kinematical” in the sense that no dynamical scale related to any particular interaction is involved in establishing them. Of course, some concrete conditions may alter the maximal observed energy, e.g. the well-known GZK limit on the energy of cosmic rays from distant sources, caused by the omnipresent target, the cosmic microwave background [@gzk]. However, the presented bounds are ultimate, derived from the quantum hoop conjecture and the basic principles of the relativistic theory of quantum systems; in other words, the statement is that, independently of dynamics, the kinematic parameters describing elementary particles cannot exceed these bounds. This is in contrast with the scenario for the boost and energy cut-off which was recently put forward [@learned]. In [@learned] it is suggested that the maximum attainable boost and maximum attainable energy for the neutrino are $$\label{learn} \Gamma^{\mathrm{max}}_{\nu}={M_{\mathrm{P}}\over M_{\mathrm{weak}}};\quad E^{\mathrm{max}}_{\nu}=m_{\nu}{M_{\mathrm{P}}\over M_{\mathrm{weak}}},$$ where $M_{\mathrm{P}}$ is the Planck scale and $M_{\mathrm{weak}}$ is the scale of weak interactions ($\sim 100$ GeV). The main point of [@learned] is that the upper bound on energy is defined by the weak scale; it follows from (\[learn\]) that the neutrino spectrum cuts off at energies of a few PeV. Our prediction is $\Gamma^{\mathrm{max}}_{\nu}=M_{\mathrm{P}}/m_{\nu}$, i.e. much higher than the one from Eq. (\[learn\]).
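The two scenarios can be compared numerically; a short sketch (the neutrino mass of 0.1 eV is purely illustrative, the helper names are hypothetical, and $E_{\mathrm{P}}$ and $M_{\mathrm{weak}}$ are taken from the text):

```python
# Compare the two neutrino cut-off scenarios numerically.
# Assumptions: E_P = M_P c^2 = 8.6e27 eV as in Eq. (maxe); M_weak c^2 = 100 GeV;
# a neutrino rest energy of 0.1 eV is taken purely for illustration.
E_P_EV = 8.6e27
M_WEAK_EV = 100e9
M_NU_EV = 0.1  # hypothetical neutrino rest energy, eV

# Present scenario: Gamma^max = M_P / m_nu, E^max = E_P for every particle.
gamma_ours = E_P_EV / M_NU_EV
e_max_ours = E_P_EV

# Learned-Weiler scenario, Eq. (learn): Gamma^max = M_P / M_weak,
# E^max = m_nu * M_P / M_weak, which lands at a few PeV.
gamma_lw = E_P_EV / M_WEAK_EV
e_max_lw = M_NU_EV * E_P_EV / M_WEAK_EV

print(f"ours: Gamma^max ~ {gamma_ours:.1e}, E^max ~ {e_max_ours:.1e} eV")
print(f"LW:   Gamma^max ~ {gamma_lw:.1e}, E^max ~ {e_max_lw:.1e} eV")
# For comparison, Gamma^max(proton) = E_P / (m_p c^2) ~ 9.2e18:
print(f"Gamma^max(proton) ~ {E_P_EV / 938.272e6:.2e}")
```

With these inputs the Learned-Weiler cut-off comes out at $\sim 8.6$ PeV, while the present bound sits at the Planck energy itself.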
Regarding neutrino energies, as an example we quote the estimate from the analysis of the energetics of gamma-ray bursts: maximum neutrino energies may reach $10^{16}-5\cdot 10^{19}$ eV [@grb]. This value does not contradict Eq. (\[maxe\]), $E^{\mathrm{max}}_{\nu}= 8.6\cdot 10^{27}$ eV, and exceeds the upper bound of a few PeV on neutrino energies suggested in [@learned]. Discussion ========== We have combined the lower limit of localizability of an elementary quantum system, Lorentz contraction, and the quantum hoop conjecture to derive upper bounds on the Lorentz boost and energy of massive particles. When the upper bound is reached, elementary particles, the quanta of the asymptotic [*in, out*]{} fields, disappear from the spectrum of $S$-matrix observables, presumably replaced by the local physical degrees of freedom of quantum gravity. In the derivation of this upper bound we used the property of Lorentz contraction, i.e. the validity of the theory of relativity applied to elementary particles up to $\Gamma^{\mathrm{max}}$ is assumed. Though the limiting values of boost and energy are Lorentz invariant, assigning physical meaning to $\Gamma^{\mathrm{max}}$ and $E^{\mathrm{max}}$ implies the existence of a preferred reference frame. In this work it is postulated that the preferred reference frame is the CMB rest reference frame. Since the Earth rest reference frame almost coincides with the CMB rest frame, we predict that in the Earth rest reference frame the cosmic-ray spectrum continues all the way up to the Planck energy $\sim 10^{18}$ GeV, where the cut-off of the flux occurs. Lastly, let us note that since the boost is assumed to be bounded from above, i.e. for a massive particle the (classical) limit $v=c$ cannot be reached, the classical hoop conjecture remains intact. This is because the metric remains that of a boosted Schwarzschild space-time and cannot be approximated by a plane impulsive gravitational wave, as happens for states moving with $c$ [@light].
I thank V.A. Petrov for fruitful discussions and for pointing out to me the works on boosted gravitational fields of massless particles [@light]. [100]{} M. G. Aartsen et al. \[IceCube Collaboration\], [*Phys. Rev. Lett.*]{}, [**113**]{}, 101101, (2014); arXiv:1405.5303. P. Blasi, Plenary talk at the 33rd International Cosmic Ray Conference, 2013, Rio de Janeiro, Brazil; arXiv:1312.1590 \[astro-ph.HE\]. D. Mattingly, [*Living Rev. Rel.*]{}, [**8**]{}, 5, (2005); arXiv:gr-qc/0502097; S. Liberati, [*Class. Quantum Grav.*]{}, [**30**]{}, 133001, (2013). A. Kostelecky and N. Russell, [*Rev. Mod. Phys.*]{}, [**83**]{}, 11, (2011). T.D. Newton and E.P. Wigner, [*Rev. Mod. Phys.*]{}, [**21**]{}, 400, (1949). K.S. Thorne, in [*Magic Without Magic*]{}, ed. J.R. Klauder, Freeman, San Francisco, 231, (1972). C. A. Mead, [*Phys. Rev.*]{}, [**135**]{}, B849, (1964); [*Phys. Rev.*]{}, [**143**]{}, 990, (1966). L. Garay, [*Int. J. Mod. Phys.*]{}, [**A10**]{}, 145, (1995). S. Doplicher, K. Fredenhagen and J.E. Roberts, [*Commun. Math. Phys.*]{}, [**172**]{}, 187, (1995); C. Rovelli and L. Smolin, [*Nucl. Phys.*]{}, [**B442**]{}, 593, (1995). S. Hossenfelder, [*Living Rev. Rel.*]{}, [**16**]{}, 2, (2013); arXiv:0806.0339v2 \[gr-qc\]. S. Weinberg, [*The Quantum Theory of Fields*]{}, Cambridge University Press, 2005. C. L. Bennett et al., [*Astrophys. J. Supp.*]{}, [**148**]{}, 1, (2003). A.H. Monahan and M. McMillan, [*Phys. Rev. A*]{}, [**56**]{}, 2563, (1997). L. Parker, [*Phys. Rev.*]{}, [**183**]{}, 1057, (1969); S. A. Fulling, L. Parker and B. L. Hu, [*Phys. Rev. D*]{}, [**10**]{}, 3905, (1974). K. Greisen, [*Phys. Rev. Lett.*]{}, [**16**]{}, 748, (1966); G. Zatsepin and V. Kuzmin, [*JETP Lett.*]{}, [**4**]{}, 78, (1966). J.G. Learned and T.J. Weiler, arXiv:1407.0739 \[astro-ph.HE\]. P. Kumar and B. Zhang, [*Phys. Rep.*]{}, [**561**]{}, 1, (2015). W.B. Bonnor, [*Commun. Math. Phys.*]{}, [**13**]{}, 163, (1969); P.C. Aichelburg and R.U. Sexl, [*Gen. Rel. Grav.*]{}, [**2**]{}, 303, (1971).
--- author: - 'Artem N. Shevlyakov' title: On irreducible algebraic sets over linearly ordered semilattices --- Introduction ============ This paper is devoted to the following problem. One can define a notion of an equation over a linearly ordered semilattice $L_l=\{a_1,a_2,\ldots,a_l\}$ (the formal definition of an equation is given below). A set $Y$ is [*algebraic*]{} if it is the solution set of some system of equations over $L_l$. Let us consider an equation $t(X)=s(X)$ over $L_l$, and let $Y$ be its solution set. One can find algebraic sets $Y_1,Y_2,\ldots,Y_m$ such that $Y=\bigcup_{i=1}^m Y_i$. One can decompose each $Y_i$ into a union of other algebraic sets, etc. This process terminates after a finite number of steps and gives a decomposition of $Y$ into a union of [*irreducible*]{} algebraic sets $Y_i$ (the sets $Y_i$ are called the [*irreducible components*]{} of $Y$). Roughly speaking, irreducible algebraic sets are “atoms” from which any algebraic set is built. The size and the number of such “atoms” are important characteristics of the semilattices $L_l$, since there are connections between irreducible algebraic sets and the universal theory of linearly ordered semilattices (see [@uni_Th_II]). Moreover, the number of irreducible components has been used in estimating lower bounds on algorithmic complexity (see [@ben-or] for more details). In this paper (Section \[sec:decomposition\_properties\]) we study the properties of the irreducible components of the solution set $Y$ of an equation $t(X)=s(X)$. Precisely, we prove that the union of irreducible algebraic sets $Y=\bigcup_{i=1}^m Y_i$ is redundant, i.e. the intersections $\bigcap_{i\in I}Y_i$ ($|I|<m$) consist of many points (Proposition \[pr:redundant\]).
Moreover, for any equation $t(X)=s(X)$ in $n$ variables we count the number $m$ of irreducible components (see (\[eq:Irr(k\_1,k\_2,n,l)\])), and in Section \[sec:average\] we count the average number $\overline{{{\mathrm{Irr}}}}(n,l)$ of irreducible components of the solution sets of equations in $n$ variables. Main definitions ================ Let $L_l=\{a_1,a_2,\ldots,a_l\}$ be the linearly ordered semilattice of $l$ elements with $a_1<a_2<\ldots <a_l$. The multiplication in $L_l$ is defined by $a_i\cdot a_j=a_{\min(i,j)}$. Obviously, the linear order on $L_l$ can be expressed by the multiplication as follows $$a_i\leq a_j\Leftrightarrow a_ia_j=a_i.$$ A [*term*]{} $t(X)$ in variables $X=\{x_1,x_2,\ldots,x_n\}$ is a commutative word in the letters $x_i$. Let ${{\mathrm{Var}}}(t)$ be the set of all variables occurring in a term $t(X)$. Following [@uni_Th_II], an [*equation*]{} is an equality of terms $t(X)=s(X)$. Below we consider inequalities $t(X)\leq s(X)$ as equations, since $t(X)\leq s(X)$ is the short form of $t(X)s(X)=t(X)$. Notice that we consider equations as [*ordered pairs*]{} of terms, i.e. the expressions $t(X)=s(X)$, $s(X)=t(X)$ are [*different*]{} equations. Let $Eq(n)$ denote the set of all equations in the variables $X=\{x_1,x_2,\ldots,x_n\}$ (we assume that each $t(X)=s(X)\in Eq(n)$ contains occurrences of all variables $x_1,x_2,\ldots,x_n$). An equation $t(X)=s(X)\in Eq(n)$ is said to be a [*$(k_1,k_2)$-equation*]{} if $|{{\mathrm{Var}}}(t)\setminus{{\mathrm{Var}}}(s)|=k_1$ and $|{{\mathrm{Var}}}(s)\setminus{{\mathrm{Var}}}(t)|=k_2$. For example, $x_1x_2=x_1x_3x_4$ is a $(1,2)$-equation. Let $Eq(k_1,k_2,n)\subseteq Eq(n)$ be the set of all $(k_1,k_2)$-equations in $n$ variables.
Obviously, $$Eq(n)=\bigcup_{(k_1,k_2)\in K_n}Eq(k_1,k_2,n), \label{eq:Eq(n)}$$ where $$K_n=\{(k_1,k_2)\mid k_1+k_2\leq n\}\setminus\{(0,n),(n,0)\}.$$ Each equation $t(X)=s(X)\in Eq(k_1,k_2,n)$ is uniquely defined by $k_1$ variables in the left part and by $k_2$ other variables in the right part (the remaining $n-k_1-k_2$ variables occur in both parts of the equation). Thus, $$\#Eq(k_1,k_2,n)=\binom{n}{k_1}\binom{n-k_1}{k_2}.$$ By (\[eq:Eq(n)\]), one can compute $$\#Eq(n)=3^n-2.$$ In this paper we consider only equations $t(X)=s(X)$ with $n>l$, i.e. the number of variables occurring in $t(X)=s(X)$ is greater than the order of the semilattice $L_l$. The case $n\leq l$ requires a different technique and was announced in [@malov]. A point $P\in L_l^n$ is a [*solution*]{} of an equation $t(X)=s(X)$ if $t(P),s(P)$ define the same element of the semilattice $L_l$. By the properties of linearly ordered semilattices, a point $P=(p_1,p_2,\ldots,p_n)$ is a solution of $t(X)=s(X)$ iff there exist variables $x_i\in{{\mathrm{Var}}}(t)$, $x_j\in{{\mathrm{Var}}}(s)$ such that $p_i=p_j$ and $p_i\leq p_k$ for all $1\leq k\leq n$. The set of all solutions of an equation $t(X)=s(X)$ is denoted by ${{\mathrm{V}}}(t(X)=s(X))$. An arbitrary set of equations is called a [*system*]{}. The set of all solutions ${{\mathrm{V}}}({{\mathbf{S}}})$ of a system ${{\mathbf{S}}}=\{t_i(X)=s_i(X)\mid i\in I\}$ is defined as $\bigcap_{i\in I}{{\mathrm{V}}}(t_i(X)=s_i(X))$. A set $Y\subseteq L_l^n$ is called [*algebraic over*]{} $L_l$ if there exists a system ${{\mathbf{S}}}$ in $n$ variables with ${{\mathrm{V}}}({{\mathbf{S}}})=Y$. An algebraic set $Y$ is [*irreducible*]{} if $Y$ is not a proper finite union of other algebraic sets. Any algebraic set $Y$ over $L_l$ is a finite union of irreducible sets $$Y=Y_1\cup Y_2\cup \ldots\cup Y_m,\quad Y_i\nsubseteq Y_j \mbox{ for all $i\neq j$}, \label{eq:union_Y_general}$$ and this decomposition is unique up to a permutation of components.
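The solution criterion can be checked by brute force. A small sketch (helper names hypothetical): elements of $L_l$ are encoded as integers $1,\ldots,l$, so the semilattice product becomes $\min$ and a term is identified with the set of its variables; the equivalence with the minimum-comparison uses the standing assumption that every variable occurs in the equation.

```python
from itertools import product

def term_value(term_vars, P):
    # t(P) = min over the variables occurring in t, since a_i * a_j = a_min(i,j)
    return min(P[i] for i in term_vars)

def solutions(t_vars, s_vars, n, l):
    """Brute-force solution set V(t(X) = s(X)) over L_l."""
    return {P for P in product(range(1, l + 1), repeat=n)
            if term_value(t_vars, P) == term_value(s_vars, P)}

def criterion(t_vars, s_vars, P):
    """P solves t = s iff the global minimum of P is attained both on Var(t) and Var(s)."""
    m = min(P)
    return any(P[i] == m for i in t_vars) and any(P[j] == m for j in s_vars)

# Example: the equation x1 x2 x3 = x1 over L_2 (variables indexed 0, 1, 2).
t_vars, s_vars, n, l = {0, 1, 2}, {0}, 3, 2
V = solutions(t_vars, s_vars, n, l)
assert all(criterion(t_vars, s_vars, P) == (P in V)
           for P in product(range(1, l + 1), repeat=n))
print(len(V))  # 5 points
```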
A semilattice $S$ is [*equationally Noetherian*]{} if for any infinite system ${{\mathbf{S}}}$ in variables $X=\{x_1,x_2,\ldots,x_n\}$ there exists a finite subsystem ${{\mathbf{S}}}^{{\prime}}\subseteq {{\mathbf{S}}}$ with the same solution set. According to [@uni_Th_II], the decomposition (\[eq:union\_Y\_general\]) holds for any algebraic set $Y$ over an equationally Noetherian semilattice $S$. Thus, it is sufficient to prove that $L_l$ is equationally Noetherian. However, the condition $|Eq(n)|<\infty$ implies that there are no infinite systems over $L_l$. Thus, $L_l$ is equationally Noetherian. The subsets $Y_i$ from the union (\[eq:union\_Y\_general\]) are called the [*irreducible components*]{} of $Y$. Let $Y$ be an algebraic set over $L_l$ defined by a system ${{\mathbf{S}}}(X)$. One can define an equivalence relation $\sim_Y$ over the set of all terms in variables $X$ as follows $$t(X)\sim_Y s(X)\Leftrightarrow t(P)=s(P) \mbox{ for any point $P\in Y$}.$$ The set of $\sim_Y$-equivalence classes is called [*the coordinate semilattice of $Y$*]{} and denoted by $\Gamma(Y)$ (see [@uni_Th_II] for more details). The following statement describes the coordinate semilattices of irreducible algebraic sets. A set $Y$ is irreducible over $L_l$ iff $\Gamma(Y)$ is embedded into $L_l$. \[pr:gamma\_is\_embedded\_for\_irr\] Following [@uni_Th_II], $\Gamma(Y)$ is discriminated by $L_l$ iff $Y$ is irreducible (see [@uni_Th_II] for the definition of discrimination). However, for a finite semilattice $L_l$ discrimination is equivalent to embedding. There are different algebraic sets over $L_l$ with isomorphic coordinate semilattices. Such sets are called [*isomorphic*]{}.
For example, the following sets $$Y_1={{\mathrm{V}}}(\{x_1\leq x_2\leq x_3\}),\; Y_2={{\mathrm{V}}}(\{x_3\leq x_2\leq x_1\})$$ have isomorphic coordinate semilattices $$\Gamma(Y_1)={{\langle}}x_1,x_2,x_3\mid x_1\leq x_2\leq x_3{{\rangle}}\cong L_3,$$ $$\Gamma(Y_2)={{\langle}}x_1,x_2,x_3\mid x_3\leq x_2\leq x_1{{\rangle}}\cong L_3.$$ Thus, $Y_1,Y_2$ are isomorphic. Example ======= Let $n=3$, $l=2$. We have exactly $\#Eq(3)=3^3-2=25$ equations in three variables over $L_2$. The following table contains the information about such equations over $L_2$. The second column contains the systems which define the irreducible components of the solution set of the equation in the first column. A cell of the table contains $\uparrow$ if the information in this cell is similar to the cell above.

  Equations                                    Irreducible components (IC)                                                                                                                 Number of IC
  -------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------- --------------
  $x_1x_2x_3=x_1x_2x_3$                        $x_1\leq x_2=x_3 \cup x_1=x_2\leq x_3 \cup x_2\leq x_1=x_3 \cup x_3\leq x_1=x_2 \cup x_1=x_3\leq x_2 \cup x_2=x_3\leq x_1$                    $6$
  $x_1=x_1x_2x_3$, $x_1x_2x_3=x_1$             $x_1\leq x_2=x_3 \cup x_1=x_2\leq x_3 \cup x_1=x_3\leq x_2$                                                                                  $3$
  $x_2=x_1x_2x_3$, $x_1x_2x_3=x_2$             $\uparrow$                                                                                                                                    $3$
  $x_3=x_1x_2x_3$, $x_1x_2x_3=x_3$             $\uparrow$                                                                                                                                    $3$
  $x_1=x_2x_3$, $x_2x_3=x_1$                   $x_1=x_2\leq x_3 \cup x_1=x_3\leq x_2$                                                                                                       $2$
  $x_2=x_1x_3$, $x_1x_3=x_2$                   $\uparrow$                                                                                                                                    $2$
  $x_3=x_1x_2$, $x_1x_2=x_3$                   $\uparrow$                                                                                                                                    $2$
  $x_1x_2=x_1x_3$, $x_1x_3=x_1x_2$             $x_1=x_2\leq x_3 \cup x_1=x_3\leq x_2 \cup x_1\leq x_2=x_3 \cup x_2=x_3\leq x_1$                                                             $4$
  $x_1x_2=x_2x_3$, $x_2x_3=x_1x_2$             $\uparrow$                                                                                                                                    $4$
  $x_1x_3=x_2x_3$, $x_2x_3=x_1x_3$             $\uparrow$                                                                                                                                    $4$
  $x_1x_2=x_1x_2x_3$, $x_1x_2x_3=x_1x_2$       $x_1=x_2\leq x_3 \cup x_1=x_3\leq x_2 \cup x_1\leq x_2=x_3 \cup x_2=x_3\leq x_1 \cup x_2\leq x_1=x_3$                                        $5$
  $x_1x_3=x_1x_2x_3$, $x_1x_2x_3=x_1x_3$       $\uparrow$                                                                                                                                    $5$
  $x_2x_3=x_1x_2x_3$, $x_1x_2x_3=x_2x_3$       $\uparrow$                                                                                                                                    $5$

One can directly compute the average number of irreducible components of algebraic sets defined by equations in
three variables: $$\overline{{{\mathrm{Irr}}}}(3,2)=\frac{6+2(3+3+3+2+2+2+4+4+4+5+5+5)}{25}=\frac{90}{25}=3.6 \label{eq:Irr(3,2)_handy}$$ Recall that in Section \[sec:average\] we obtain the general expression (\[eq:Irr\]) for $\overline{{{\mathrm{Irr}}}}(n,l)$. Clearly, (\[eq:Irr\]) gives (\[eq:Irr(3,2)\_handy\]) for $n=3$, $l=2$ (see the proof in (\[eq:Irr\_n\_2\]) and (\[eq:Irr\_3\_2\_from\_formula\])). Decompositions of algebraic sets {#sec:decomposition_properties} ================================ Let $Y$ denote the solution set of an equation $t(X)=s(X)$ over the semilattice $L_l=\{a_1,a_2,\ldots,a_l\}$. The table above shows that any irreducible component divides the variables $X$ into $l$ classes and sorts the classes in some order. The following definition formalizes these properties of irreducible components. A disjoint partition $\sigma=(X_1,X_2,\ldots,X_l)$ of the set $X=\{x_1,x_2,\ldots,x_n\}$ is called *ordered* if there is a linear order $\leq_\sigma$ on $\sigma$: $X_1\leq_\sigma X_2\leq_\sigma\ldots\leq_\sigma X_l$. Let $\chi_{{\sigma}}(x_i)$ denote the class $X_k$ with $x_i\in X_k$. We shall write $x_i=_{{\sigma}}x_j$ ($x_i\leq_{{\sigma}}x_j$) if $\chi_{{\sigma}}(x_i)=\chi_{{\sigma}}(x_j)$ (respectively, $\chi_{{\sigma}}(x_i)\leq_{{\sigma}}\chi_{{\sigma}}(x_j)$). An ordered partition $\sigma$ is $Y$-*irreducible* if the set $X_1$ (the minimal set of the order $\leq_\sigma$) contains a variable from $t(X)$ and a variable from $s(X)$. For example, the equation $x_1x_2x_3=x_1$ over $L_2$ has the following $Y$-irreducible partitions: $(\{x_1\},\{x_2,x_3\})$, $(\{x_1,x_2\},\{x_3\})$, $(\{x_1,x_3\},\{x_2\})$. These partitions obviously correspond to the irreducible components of ${{\mathrm{V}}}(x_1x_2x_3=x_1)$ in the table above.
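These partitions, and the fact that the corresponding sets cover the whole solution set, can be recovered mechanically; a brute-force sketch (helper names hypothetical, elements of $L_2$ encoded as integers $1,2$):

```python
from itertools import product

def ordered_partitions(n, l):
    """All ordered partitions (X_1,...,X_l) of {0,...,n-1} into l non-empty classes."""
    for labels in product(range(l), repeat=n):
        if set(labels) == set(range(l)):
            yield tuple(frozenset(x for x in range(n) if labels[x] == k)
                        for k in range(l))

def Y_sigma(sigma, n, l):
    """Points of L_l^n satisfying the system S_sigma defined by the partition sigma."""
    cls = {x: k for k, part in enumerate(sigma) for x in part}
    def ok(P):
        return all(not (cls[i] == cls[j] and P[i] != P[j]) and
                   not (cls[i] < cls[j] and P[i] > P[j])
                   for i in range(n) for j in range(n))
    return {P for P in product(range(1, l + 1), repeat=n) if ok(P)}

# Equation x1 x2 x3 = x1 over L_2 (variables indexed 0, 1, 2).
t_vars, s_vars, n, l = {0, 1, 2}, {0}, 3, 2
Y = {P for P in product(range(1, l + 1), repeat=n)
     if min(P[i] for i in t_vars) == min(P[j] for j in s_vars)}

# Y-irreducible: the minimal class X_1 meets both Var(t) and Var(s).
components = [s for s in ordered_partitions(n, l)
              if s[0] & t_vars and s[0] & s_vars]
assert len(components) == 3                                     # as in the table
assert set().union(*(Y_sigma(s, n, l) for s in components)) == Y  # they cover Y
```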
Any $Y$-irreducible partition $\sigma$ defines an algebraic set $Y_\sigma$ as follows $$Y_\sigma={{\mathrm{V}}}({{\mathbf{S}}}_\sigma)={{\mathrm{V}}}(\bigcup_{x_i=_{{\sigma}}x_j}\{x_i=x_j\}\,\cup\,\bigcup_{x_i<_{{\sigma}}x_j}\{x_i\leq x_j\}).$$ For example, the partition ${{\sigma}}=(\{x_2,x_3\},\{x_1\})$ defines the system $${{\mathbf{S}}}_{{\sigma}}=\{x_2=x_3,x_2\leq x_1,x_3\leq x_1\}$$ for $Y={{\mathrm{V}}}(\{x_1x_2=x_1x_3\})$. The set $Y_\sigma$ defined by a $Y$-irreducible partition $\sigma$ is an irreducible algebraic set, and moreover $\Gamma(Y_{{\sigma}})\cong L_l$. \[l:Y\_sigma\_is\_irreducible\] By the definition of a coordinate semilattice, $\Gamma(Y_\sigma)$ is generated by the elements $\{x_1,x_2,\ldots,x_n\}$ and has the following defining relations $$\{x_i=x_j\mid x_i=_{{\sigma}}x_j\}\cup \{x_i\leq x_j\mid x_i\leq_{{\sigma}}x_j\}.$$ It is easy to see that all the elements $x_i$ are linearly ordered in $\Gamma(Y_{{\sigma}})$. Thus, $\Gamma(Y_\sigma)$ is a linearly ordered semilattice, and it is isomorphic to $L_l$. By Proposition \[pr:gamma\_is\_embedded\_for\_irr\], the set $Y_\sigma$ is irreducible. The following lemma gives the decomposition of the set $Y={{\mathrm{V}}}(t(X)=s(X))$ via ordered partitions. The set $Y={{\mathrm{V}}}(t(X)=s(X))$ is the union $$\label{eq:union_of_Y} Y=\bigcup_{\mbox{$\sigma$ is $Y$-irreducible}}Y_\sigma$$ \[l:union\_of\_Y\] Let $P=(p_1,p_2,\ldots,p_n)\in Y$. One can define an equivalence relation $\sim_P$ as follows $$x_i\sim_P x_j\Leftrightarrow p_i=p_j.$$ Thus, we obtain equivalence classes $\{X_1^P,X_2^P,\ldots,X_k^P\}$. Since $p_i\in L_l$, we have $k\leq l$. One can define a linear order $x_i\leq_P x_j$ if $p_i\leq p_j$. The order $\leq_P$ induces a linear order over the classes $\{X_i\}$.
Let us fix a pair of variables $x_t,x_s\in X_1^P$ (possibly, $x_t$ and $x_s$ are the same variable) such that $x_t\in{{\mathrm{Var}}}(t)$ and $x_s\in{{\mathrm{Var}}}(s)$ (such a pair $(x_t,x_s)$ always exists, since $P$ satisfies the equation $t(X)=s(X)$). Let us find a set $Y_{{\sigma}}$ with $P\in Y_{{\sigma}}$ by the following procedure. [**Procedure**]{} Input: a set of $k$ equivalence classes ${{\sigma}}_0=(X_1^P,X_2^P,\ldots,X_k^P)$ with the linear order $\leq_P$. Output: ${{\sigma}}=(X_1,X_2,\ldots,X_l)$ with a linear order $\leq_{{\sigma}}$. Step 0: Put ${{\sigma}}=\sigma_0$. If $l=k$, terminate the procedure; otherwise go to Step 1. Step $j$ ($1\leq j\leq l-k$): 1. Take an arbitrary equivalence class $X_i\in\sigma=(X_1,X_2,\ldots,X_{k+j-1})$ such that $|X_i|\geq 2$ and $X_i$ contains a variable $x\in X\setminus\{x_t,x_s\}$. Such a class always exists, since $n>l>k+j-1$. 2. Move $x$ from $X_i$ to a new class $X^{{\prime}}$ and extend the linear order $\leq_{{\sigma}}$ by $X_i\leq_{{\sigma}}X^{{\prime}}\leq_{{\sigma}} X_{i+1}$. Put ${{\sigma}}=(X_1,X_2,\ldots,X_{i},X^{{\prime}},X_{i+1},\ldots,X_{k+j-1})$. Go to the next step. Roughly speaking, the procedure increases the number of classes while preserving the relation $<_{{\sigma}}$. After the procedure we obtain an ordered partition ${{\sigma}}$ of $l$ equivalence classes $X_i$. The procedure does not move the variables $x_t,x_s$; therefore $x_t,x_s\in X_1$ and $\sigma$ is a $Y$-irreducible partition. Let us prove $P\in Y_{{\sigma}}={{\mathrm{V}}}({{\mathbf{S}}}_{{\sigma}})$. An equation $x_i\leq x_j\in {{\mathbf{S}}}_{{\sigma}}$ (one can similarly consider an equality $x_i=x_j\in{{\mathbf{S}}}_{{\sigma}}$) is not satisfied by $P$ only if $p_i> p_j$, or equivalently $x_j<_P x_i$. Since the procedure preserves the relation $<_{{\sigma}}$, we would have $x_j<_{{\sigma}}x_i$, and by the definition of ${{\mathbf{S}}}_{{\sigma}}$ the equation $x_i\leq x_j$ cannot occur in ${{\mathbf{S}}}_{{\sigma}}$. Thus, we come to a contradiction.
Let us now prove $Y_{{\sigma}}\subseteq Y$ for each ${{\sigma}}$. Consider a point $P=(p_1,p_2,\ldots,p_n)\in Y_{{\sigma}}$. Since ${{\sigma}}=(X_1,X_2,\ldots,X_l)$ is a $Y$-irreducible partition, the class $X_1$ contains variables $x_t\in{{\mathrm{Var}}}(t)$, $x_s\in{{\mathrm{Var}}}(s)$ and $p_t=p_s$. Since $X_1$ is the minimal class of the order $\leq_{{\sigma}}$, $$x_t\leq x_i\in{{\mathbf{S}}}_{{\sigma}}, \; x_s\leq x_i\in{{\mathbf{S}}}_{{\sigma}}\mbox{ for any }i\in [1,n]\setminus\{t,s\}.$$ Thus, $p_t=p_s\leq p_i$ for any $1\leq i\leq n$, and we have $$t(P)=p_t=p_s=s(P)\Rightarrow P\in{{\mathrm{V}}}(t(X)=s(X))=Y.$$ Let $\sigma=(X_1,X_2,\ldots,X_l)$ be a $Y$-irreducible partition of $X$. Let us define a point $P_\sigma=(p_1,p_2,\ldots,p_n)\in L_l^n$ by $$p_i=a_k\mbox{ if $x_i\in X_k$}.$$ The point $P_\sigma$ belongs to the set $Y_\sigma$, and $P_\sigma\notin Y_{\sigma^{{\prime}}}$ for each $Y$-irreducible partition $\sigma^{{\prime}}\neq \sigma$. Thus, in the union (\[eq:union\_of\_Y\]) we have $Y_{\sigma}\nsubseteq Y_{\sigma^{{\prime}}}$ for distinct partitions $\sigma,\sigma^{{\prime}}$. \[l:about\_point\_P\_sigma\] One can directly check that $P_\sigma\in{{\mathrm{V}}}({{\mathbf{S}}}_\sigma)=Y_{{\sigma}}$. Let us take a $Y$-irreducible partition $${{\sigma}}^{{\prime}}=(X_1^{{\prime}},X_2^{{\prime}},\ldots,X_l^{{\prime}})\neq{{\sigma}}=(X_1,X_2,\ldots,X_l).$$ There exist variables $x_i,x_j$ such that $x_i<_{{{\sigma}}} x_j$ but $x_i\geq_{{{\sigma}}^{{\prime}}} x_j$. For the point $P_{{\sigma}}$ we have $p_i<p_j$; therefore $P_{{\sigma}}$ does not satisfy the corresponding relation of ${{\mathbf{S}}}_{{{\sigma}}^{{\prime}}}$ ($x_i=x_j$ or $x_j\leq x_i$), and $P_{{\sigma}}\notin Y_{{{\sigma}}^{{\prime}}}$. According to Lemmas \[l:Y\_sigma\_is\_irreducible\], \[l:union\_of\_Y\], \[l:about\_point\_P\_sigma\], we obtain the following statement. The number of $Y$-irreducible partitions of a set $Y={{\mathrm{V}}}(t(X)=s(X))$ is equal to the number of irreducible components of $Y$.
\[th:number\_of\_irr\_compionents\] The next statement describes the properties of the union (\[eq:union\_of\_Y\]). Let (\[eq:union\_of\_Y\]) be the union of the irreducible components of a set $Y={{\mathrm{V}}}(t(X)=s(X))$ over $L_l$. Then 1. a point $P$ belongs to all $Y_{{\sigma}}$ iff $P=(a,a,\ldots,a)$ for some $a\in L_l$; 2. $$Y_{{\sigma}}\setminus\bigcup_{{{\sigma}}^{{\prime}}\neq{{\sigma}}}Y_{{{\sigma}}^{{\prime}}}=\{P_{{\sigma}}\}$$ (it follows that the decomposition (\[eq:union\_of\_Y\]) is redundant, i.e. each point of $Y\setminus\bigcup_{{\sigma}}\{P_{{\sigma}}\}$ is covered by at least two irreducible components); 3. all irreducible components are isomorphic to each other; 4. $|Y_{{\sigma}}|=\binom{2l-1}{l}$ for each ${{\sigma}}$. \[pr:redundant\] 1. Obviously, $P=(a,a,\ldots,a)$ satisfies all systems ${{\mathbf{S}}}_{{\sigma}}$, so $P\in\bigcap_{{\sigma}}Y_{{\sigma}}$. Let us consider a point $Q=(q_1,q_2,\ldots,q_n)$ with $q_i<q_j$. It is clear that $Q$ does not belong to any set $Y_{{\sigma}}$ with $x_i\geq_{{\sigma}}x_j$. Thus, $Q\notin\bigcap_{{\sigma}}Y_{{\sigma}}$. 2. In Lemma \[l:about\_point\_P\_sigma\] we proved $P_{{\sigma}}\in Y_{{\sigma}}$. By the definition, only the point $P_{{\sigma}}$ makes all the inequalities $\leq$ of the system ${{\mathbf{S}}}_{{\sigma}}$ strict. Thus, for any point $P=(p_1,p_2,\ldots,p_n)\in Y_{{\sigma}}\setminus\{ P_{{\sigma}}\}$ there exists an equation $x_i\leq x_j\in{{\mathbf{S}}}_{{\sigma}}$ such that $p_i=p_j$. Below we find a $Y$-irreducible partition ${{{\sigma}}^{{\prime}}}$ with $P\in Y_{{{\sigma}}^{{\prime}}}$. Let ${{\sigma}}=(X_1,X_2,\ldots,X_l)$, $x_i\in X_{i^{{\prime}}}$, and without loss of generality one can assume that $x_j\in X_{i^{{\prime}}+1}$.
If $i^{{\prime}}\neq 1$ we put ${{\sigma}}^{{\prime}}=(X_1^{{\prime}},X_2^{{\prime}},\ldots,X_l^{{\prime}})$, where $$X_k^{{\prime}}=\begin{cases}\ X_k\mbox{ if $k\neq i^{{\prime}}$, $k\neq i^{{\prime}}+1$},\\ (X_{i^{{\prime}}+1}\setminus\{x_{j}\})\cup\{x_i\}\mbox{ if $k=i^{{\prime}}+1$},\\ (X_{i^{{\prime}}}\setminus\{x_{i}\})\cup\{x_j\}\mbox{ if $k=i^{{\prime}}$} \end{cases} \label{eqq:ast}$$ Since $X_1^{{\prime}}=X_1$, ${{\sigma}}^{{\prime}}$ is a $Y$-irreducible partition. The system ${{\mathbf{S}}}_{{{\sigma}}^{{\prime}}}$ contains $x_j\leq x_i$ instead of $x_i\leq x_j\in{{\mathbf{S}}}_{{\sigma}}$. Since the other relations in the systems ${{\mathbf{S}}}_{{{\sigma}}^{{\prime}}},{{\mathbf{S}}}_{{\sigma}}$ are the same, $P\in {{\mathrm{V}}}({{\mathbf{S}}}_{{{\sigma}}^{{\prime}}})=Y_{{{\sigma}}^{{\prime}}}$. Suppose now $i^{{\prime}}=1$. Without loss of generality we assume $x_i\in{{\mathrm{Var}}}(t)$. By the definition of a $Y$-irreducible partition, there exists a variable $x_k\in X_1\cap {{\mathrm{Var}}}(s)$. If $x_j\in{{\mathrm{Var}}}(t)$, we can define ${{\sigma}}^{{\prime}}$ by (\[eqq:ast\]). In this case $X_1^{{\prime}}$ contains the variables $x_j\in{{\mathrm{Var}}}(t)$, $x_k\in{{\mathrm{Var}}}(s)$, so ${{\sigma}}^{{\prime}}$ is a $Y$-irreducible partition and $P\in Y_{{{\sigma}}^{{\prime}}}$. Otherwise ($x_j\in{{\mathrm{Var}}}(s)$), one can take $x_k$ instead of $x_i$ and repeat the reasoning above. 3. The statement immediately follows from Lemma \[l:Y\_sigma\_is\_irreducible\]. 4. For ${{\sigma}}=(X_1,X_2,\ldots,X_l)$ the number $|Y_{{\sigma}}|$ equals the number of monotone sequences of values $v_1\leq v_2\leq\ldots\leq v_l$ assigned to the classes $X_1,X_2,\ldots,X_l$, where $v_i\in\{a_1,a_2,\ldots,a_l\}$. By a standard combinatorial argument (combinations with repetition), the number of such monotone sequences is $\binom{2l-1}{l}$. Average number of irreducible components {#sec:average} ======================================== Let ${\genfrac{\{}{\}}{0pt}{}{n}{m}}$ be the Stirling number of the second kind.
By the definition, ${\genfrac{\{}{\}}{0pt}{}{n}{m}}$ is the number of all partitions of an $n$-element set into $m$ non-empty unlabelled subsets. The number ${\genfrac{\{}{\}}{0pt}{}{n}{m}}^\ast=m!{\genfrac{\{}{\}}{0pt}{}{n}{m}}$ obviously equals the number of all partitions of an $n$-element set into $m$ [*labelled*]{} non-empty subsets. Thus, there are exactly ${\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast$ ordered partitions ${{\sigma}}=(X_1,X_2,\ldots,X_l)$ of the set of variables $X$, $|X|=n$, into $l$ equivalence classes. [*An ordered partition ${{\sigma}}=(X_1,X_2,\ldots,X_l)$ is not $Y$-irreducible if and only if either $X_1\subseteq {{\mathrm{Var}}}(t)\setminus{{\mathrm{Var}}}(s)$ or $X_1\subseteq {{\mathrm{Var}}}(s)\setminus{{\mathrm{Var}}}(t)$.*]{} For a $(k_1,k_2)$-equation $t(X)=s(X)$ there exist $$\sum_{i=1}^{k_1}\binom{k_1}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast$$ partitions ${{\sigma}}$ with $X_1\subseteq {{\mathrm{Var}}}(t)\setminus{{\mathrm{Var}}}(s)$. Similarly, there exist $$\sum_{i=1}^{k_2}\binom{k_2}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast$$ partitions ${{\sigma}}$ with $X_1\subseteq {{\mathrm{Var}}}(s)\setminus{{\mathrm{Var}}}(t)$. By Theorem \[th:number\_of\_irr\_compionents\], for a $(k_1,k_2)$-equation $t(X)=s(X)$ the number of irreducible components ($Y$-irreducible partitions) equals $${{\mathrm{Irr}}}(k_1,k_2,n,l)={\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast-\sum_{i=1}^{k_1}\binom{k_1}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast-\sum_{i=1}^{k_2}\binom{k_2}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast.
\label{eq:Irr(k_1,k_2,n,l)}$$ The average number of irreducible components of algebraic sets defined by equations from $Eq(n)$ is $$\begin{gathered} \overline{{{\mathrm{Irr}}}}(n,l)=\frac{\sum_{(k_1,k_2)\in K_n}\#Eq(k_1,k_2,n){{\mathrm{Irr}}}(k_1,k_2,n,l)}{\#Eq(n)}=\\ \frac{\sum_{k_1=0}^{n-1}\sum_{k_2=0}^{n-k_1}\#Eq(k_1,k_2,n){{\mathrm{Irr}}}(k_1,k_2,n,l)-\#Eq(0,n,n){{\mathrm{Irr}}}(0,n,n,l)}{\#Eq(n)}.\end{gathered}$$ Below we compute $\overline{{{\mathrm{Irr}}}}$ using the following notation: 1. $A\stackrel{(1)}{=}B$: the expression $B$ is obtained from $A$ by the binomial theorem $$(a+b)^n=\sum_{i=0}^n\binom{n}{i}a^ib^{n-i}.$$ 2. $A\stackrel{(2)}{=}B$: the expression $B$ is obtained from $A$ by the following identity of binomial coefficients $$\binom{a}{b}\binom{b}{c}=\binom{a}{c}\binom{a-c}{b-c}.$$ 3. $A\stackrel{(3)}{=}B$: the expression $B$ is obtained from $A$ by the recurrence relation of Stirling numbers $${\genfrac{\{}{\}}{0pt}{}{a+1}{b}}=b{\genfrac{\{}{\}}{0pt}{}{a}{b}}+{\genfrac{\{}{\}}{0pt}{}{a}{b-1}}.$$ 4. $A\stackrel{(4)}{=}B$: the expression $B$ is obtained from $A$ by the following identity of Stirling numbers $${\genfrac{\{}{\}}{0pt}{}{a+1}{b+1}}=\sum_{i=0}^a\binom{a}{i}{\genfrac{\{}{\}}{0pt}{}{i}{b}}.$$ Note that in the last formula one can change the sum $\sum_{i=0}^a$ to $\sum_{i=c}^a$ for any $c\leq b$, since ${\genfrac{\{}{\}}{0pt}{}{i}{b}}=0$ for $i<b$.
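The counting formula for ${{\mathrm{Irr}}}(k_1,k_2,n,l)$ above can be checked numerically. The following is a minimal sketch (the function names and the small test sizes are our own): it compares the closed form against a brute-force enumeration of ordered partitions into $l$ labelled non-empty blocks, using the characterization that a partition fails to be $Y$-irreducible exactly when its first block lies inside ${{\mathrm{Var}}}(t)\setminus{{\mathrm{Var}}}(s)$ or ${{\mathrm{Var}}}(s)\setminus{{\mathrm{Var}}}(t)$.

```python
from itertools import product
from math import comb, factorial

def stirling2(n, m):
    # Explicit formula for the Stirling number of the second kind.
    return sum((-1) ** (m - j) * comb(m, j) * j ** n for j in range(m + 1)) // factorial(m)

def irr_formula(k1, k2, n, l):
    # Closed form: l!*S(n,l) minus the partitions whose first block
    # sits inside Var(t)\Var(s) or Var(s)\Var(t).
    total = factorial(l) * stirling2(n, l)
    for k in (k1, k2):
        total -= sum(comb(k, i) * factorial(l - 1) * stirling2(n - i, l - 1)
                     for i in range(1, k + 1))
    return total

def irr_bruteforce(k1, k2, n, l):
    # Variables 0..k1-1 occur only in t, k1..k1+k2-1 occur only in s.
    A, B = set(range(k1)), set(range(k1, k1 + k2))
    count = 0
    for labels in product(range(l), repeat=n):
        if len(set(labels)) != l:          # every block must be non-empty
            continue
        X1 = {v for v in range(n) if labels[v] == 0}
        if not (X1 <= A or X1 <= B):       # Y-irreducibility test on the first block
            count += 1
    return count

for (k1, k2, n, l) in [(1, 1, 4, 2), (2, 1, 5, 2), (1, 2, 5, 3)]:
    assert irr_formula(k1, k2, n, l) == irr_bruteforce(k1, k2, n, l)
```

For instance, a $(1,1)$-equation in $n=4$ variables with $l=2$ classes gives $2!\cdot 7-1-1=12$ irreducible components by either route.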
We have $$\begin{gathered} \#Eq(0,n,n){{\mathrm{Irr}}}(0,n,n,l)=\binom{n}{0}\binom{n}{n}\left({\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast-\sum_{i=1}^{n}\binom{n}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast \right)=\\ {\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast-\sum_{i=1}^{n}\binom{n}{n-i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast= {\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast-\sum_{j=0}^{n-1}\binom{n}{j}{\genfrac{\{}{\}}{0pt}{}{j}{l-1}}^\ast= {\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast-(l-1)!\sum_{j=0}^{n-1}\binom{n}{j}{\genfrac{\{}{\}}{0pt}{}{j}{l-1}}\stackrel{(4)}{=}\\ {\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast-(l-1)!\left({\genfrac{\{}{\}}{0pt}{}{n+1}{l}}-{\genfrac{\{}{\}}{0pt}{}{n}{l-1}}\right)\stackrel{(3)}{=}{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast-(l-1)!l{\genfrac{\{}{\}}{0pt}{}{n}{l}}=0,\end{gathered}$$ $$\begin{gathered} \sum_{k_1=0}^{n-1}\sum_{k_2=0}^{n-k_1}\#Eq(k_1,k_2,n){{\mathrm{Irr}}}(k_1,k_2,n,l)=\\ \sum_{k_1=0}^{n-1}\sum_{k_2=0}^{n-k_1}\binom{n}{k_1}\binom{n-k_1}{k_2} \left({\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast-\sum_{i=1}^{k_1}\binom{k_1}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast-\sum_{i=1}^{k_2}\binom{k_2}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast\right)=\\ \sum_{k_1=0}^{n-1}\sum_{k_2=0}^{n-k_1}\binom{n}{k_1}\binom{n-k_1}{k_2}{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast- \sum_{k_1=0}^{n-1}\sum_{k_2=0}^{n-k_1}\binom{n}{k_1}\binom{n-k_1}{k_2}\sum_{i=1}^{k_1}\binom{k_1}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast-\\ \sum_{k_1=0}^{n-1}\sum_{k_2=0}^{n-k_1}\binom{n}{k_1}\binom{n-k_1}{k_2}\sum_{i=1}^{k_2}\binom{k_2}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast=S_1-S_2-S_3,\end{gathered}$$ where $$S_1={\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast\sum_{k_1=0}^{n-1}\binom{n}{k_1}2^{n-k_1}\stackrel{(1)}{=}{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast(3^n-1),$$ $$\begin{gathered} S_2\stackrel{(2)}{=} \sum_{k_1=0}^{n-1}\sum_{i=1}^{k_1}\binom{n}{k_1}\binom{k_1}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast\sum_{k_2=0}^{n-k_1}\binom{n-k_1}{k_2}\stackrel{(1)}{=}\\
\sum_{k_1=0}^{n-1}\sum_{i=1}^{k_1}\binom{n}{i}\binom{n-i}{k_1-i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast 2^{n-k_1}= \sum_{i=1}^{n-1}\binom{n}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast\sum_{k_1=i}^{n-1}\binom{n-i}{k_1-i} 2^{n-k_1}=\\ \sum_{i=1}^{n-1}\binom{n}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast\sum_{j=0}^{n-i-1}\binom{n-i}{j} 2^{n-i-j}= \sum_{i=1}^{n-1}\binom{n}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast\left(\sum_{j=0}^{n-i}\binom{n-i}{n-i-j} 2^{n-i-j} -1\right)\stackrel{(1)}{=}\\ \sum_{i=1}^{n-1}\binom{n}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast\left(3^{n-i} -1\right).\end{gathered}$$ Computing $$\begin{gathered} \sum_{i=1}^{n-1}\binom{n}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast=(l-1)!\sum_{j=1}^{n-1}\binom{n}{j}{\genfrac{\{}{\}}{0pt}{}{j}{l-1}}\stackrel{(4)}{=} (l-1)!\left({\genfrac{\{}{\}}{0pt}{}{n+1}{l}}- {\genfrac{\{}{\}}{0pt}{}{n}{l-1}}\right)\stackrel{(3)}{=}\\ (l-1)!l{\genfrac{\{}{\}}{0pt}{}{n}{l}}={\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast,\end{gathered}$$ we obtain $$S_2=\sum_{i=1}^{n-1}\binom{n}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast 3^{n-i}-{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast=S(n,l)-{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast,$$ where $$S(n,l)=\sum_{i=1}^{n-1}\binom{n}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast 3^{n-i}.$$ Let us compute $$\begin{gathered} S_3=\sum_{k_1=0}^{n-1}\sum_{i=1}^{n-k_1}\sum_{k_2=i}^{n-k_1}\binom{n}{k_1}\binom{n-k_1}{i}\binom{n-k_1-i}{k_2-i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast=\\ \sum_{k_1=0}^{n-1}\sum_{i=1}^{n-k_1}\binom{n}{k_1}\binom{n-k_1}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast\sum_{k_2=i}^{n-k_1}\binom{n-k_1-i}{k_2-i}\stackrel{(1)}{=} \sum_{k_1=0}^{n-1}\sum_{i=1}^{n-k_1}\binom{n}{k_1}\binom{n-k_1}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast 2^{n-k_1-i}\stackrel{(2)}{=}\\ \sum_{k_1=0}^{n-1}\sum_{i=1}^{n-k_1}\binom{n}{i}\binom{n-i}{n-k_1-i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast 2^{n-k_1-i}= \sum_{i=1}^{n}\binom{n}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast 2^{n-i}\sum_{k_1=0}^{n-i}\binom{n-i}{k_1} 
2^{-k_1}\stackrel{(1)}{=}\\ \sum_{i=1}^{n}\binom{n}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast 2^{n-i}\left(1+\frac{1}{2}\right)^{n-i}= \sum_{i=1}^{n}\binom{n}{i}{\genfrac{\{}{\}}{0pt}{}{n-i}{l-1}}^\ast 3^{n-i}=S(n,l)+\binom{n}{n}{\genfrac{\{}{\}}{0pt}{}{n-n}{l-1}}^\ast=S(n,l).\end{gathered}$$ Finally, we obtain $$\begin{gathered} \label{eq:Irr} \overline{{{\mathrm{Irr}}}}(n,l)=\frac{S_1-S_2-S_3-0}{3^n-2}= \frac{{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast(3^n-1)-(S(n,l)-{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast)-S(n,l)}{3^n-2}=\\ \frac{3^n{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast-2S(n,l)}{3^n-2}.\end{gathered}$$ Let us compute $\overline{{{\mathrm{Irr}}}}(n,2)$ using the following identities of Stirling numbers: $${\genfrac{\{}{\}}{0pt}{}{n}{1}}=1,\; {\genfrac{\{}{\}}{0pt}{}{n}{2}}=2^{n-1}-1.$$ We have $$S(n,2)=\sum_{i=1}^{n-1}\binom{n}{i}\cdot 1\cdot 3^{n-i}=\sum_{i=1}^{n-1}\binom{n}{i}3^{n-i}\stackrel{(1)}{=}4^n-3^n-1,$$ therefore $$\overline{{{\mathrm{Irr}}}}(n,2)=\frac{3^n\cdot 2(2^{n-1}-1)-2(4^n-3^n-1)}{3^n-2}=\frac{6^n-2\cdot 4^n+2}{3^n-2}. \label{eq:Irr_n_2}$$ In particular, $n=3$ gives $$\label{eq:Irr_3_2_from_formula} \overline{{{\mathrm{Irr}}}}(3,2)=\frac{6^3-2\cdot 4^3+2}{3^3-2}=\frac{90}{25}=3.6,$$ which coincides with (\[eq:Irr(3,2)\_handy\]). The following statement gives an estimate of $\overline{{{\mathrm{Irr}}}}(n,l)$.
The number $\overline{{{\mathrm{Irr}}}}(n,l)$ satisfies $$\frac{1}{3}{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast\leq\overline{{{\mathrm{Irr}}}}(n,l)\leq {\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast.$$ \[pr:Irr\_double\_ineq\] One can bound $S(n,l)$ as follows: $$\begin{gathered} S(n,l)\leq 3^{n-1}\sum_{j=1}^{n-1}\binom{n}{j}{\genfrac{\{}{\}}{0pt}{}{j}{l-1}}^\ast\stackrel{(4)}{=}3^{n-1}(l-1)!\left({\genfrac{\{}{\}}{0pt}{}{n+1}{l}}-{\genfrac{\{}{\}}{0pt}{}{n}{l-1}}\right)\stackrel{(3)}{=}\\ 3^{n-1}(l-1)!l{\genfrac{\{}{\}}{0pt}{}{n}{l}}=3^{n-1}{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast,\end{gathered}$$ and similarly $$S(n,l)\geq 3\sum_{j=1}^{n-1}\binom{n}{j}{\genfrac{\{}{\}}{0pt}{}{j}{l-1}}^\ast= 3{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast.$$ Thus, $$\overline{{{\mathrm{Irr}}}}(n,l)\leq\frac{3^n{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast-2\cdot 3{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast}{3^n-2}={\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast\frac{3^n-6}{3^n-2}\leq{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast,$$ and $$\overline{{{\mathrm{Irr}}}}(n,l)\geq\frac{3^n{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast-2\cdot 3^{n-1}{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast}{3^n-2}={\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast\frac{3^n-2\cdot 3^{n-1}}{3^n-2}\geq {\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast\frac{3^n-2\cdot 3^{n-1}}{3^n}=\frac{1}{3}{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast.$$ For a fixed $l$ and $n\to\infty$ we have $$\overline{{{\mathrm{Irr}}}}(n,l)=\Theta(l^n).$$ Using the explicit formula for Stirling numbers $${\genfrac{\{}{\}}{0pt}{}{n}{l}}=\frac{1}{l!}\sum_{j=0}^l(-1)^{l-j}\binom{l}{j}j^n,$$ we obtain ${\genfrac{\{}{\}}{0pt}{}{n}{l}}\sim l^n/l!$ for fixed $l$ and $n\to\infty$, hence ${\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast=l!{\genfrac{\{}{\}}{0pt}{}{n}{l}}\sim l^n$. By Proposition \[pr:Irr\_double\_ineq\], $\overline{{{\mathrm{Irr}}}}(n,l)$ lies between $\frac{1}{3}{\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast$ and ${\genfrac{\{}{\}}{0pt}{}{n}{l}}^\ast$, so $\overline{{{\mathrm{Irr}}}}(n,l)=\Theta(l^n)$. Author information: Artem N. Shevlyakov, Sobolev Institute of Mathematics, 644099 Russia, Omsk, Pevtsova st. 13. Phone: +7-3812-23-25-51. E-mail: `a_shevl@mail.ru`
--- author: - | Yunhong Ding$^{1, 2}$\*, Zhao Cheng$^{1, 6}$, Xiaolong Zhu$^{5}$, Kresten Yvind$^{1, 2}$, Jianji Dong$^{6}$, Michael Galili$^{1, 2}$, Hao Hu$^{1, 2}$,\ N. Asger Mortensen$^{3,4}$, Sanshui Xiao$^{1, 4}$, Leif Katsuo Oxenl[ø]{}we$^{1, 2}$ title: '****' --- The fast development of silicon photonics makes it feasible to construct optical interconnects that can replace electrical interconnects for chip-level data communications with low energy consumption and large bandwidth [@KCS1982; @DAM2009]. However, photodetection in silicon-based optical interconnects still requires integration with another absorbing material, e.g., germanium or a III-V compound semiconductor [@MJ2010; @IH2004], which poses a major challenge for direct monolithic integration with complementary metal-oxide-semiconductor (CMOS) technology and limits the achievable bandwidth owing to the absorbing materials' poor electrical properties. Graphene, a unique CMOS-compatible two-dimensional (2D) material, holds great potential for the realization of high-performance optoelectronic devices [@ML2011; @VS2018; @YD2017; @TG2012; @SY2017; @ZS2010]. In particular, significant efforts have been devoted to graphene photodetectors (PDs) [@TM2010; @XG2013; @AP2013; @XW2013; @DS2014; @CL2014; @SS2016; @IG2016; @MP2018arxiv]. The distinct properties of graphene in terms of ultrahigh carrier mobility [@KB2008; @VD2010], a zero-bandgap property that enables wavelength-independent light absorption over a very wide spectral range [@RN2008; @JD2008; @SCakmakyapan2018], and tunable optoelectronic properties [@ZL2008; @FW2008] make it possible to realize graphene photodetectors with large spectral bandwidth and high speed. Graphene PDs rely on devices with broken inversion symmetry [@TM2010; @XG2013; @FK2014].
For graphene-based integrated PDs, the inversion symmetry can be conveniently broken through an asymmetric positioning of the waveguide with respect to the graphene coverage, and several such devices [@XG2013; @RJS2015] have been reported. However, due to the modest light-matter interaction between the single-layer graphene (SLG) and the waveguide mode, the device size has to be at least tens to hundreds of microns to achieve a reasonable responsivity, thus limiting high-speed operation. A small device, on the other hand, gives weak absorption of light and hence low responsivity. This counteracting effect represents a major challenge for graphene-based PDs aiming at both high responsivity and large bandwidth. So far, the state-of-the-art bandwidth of graphene PDs is $\sim$76GHz, albeit with a modest responsivity of 1mA/W [@DS2017]. Another promising scheme is to break the symmetry of the potential profile of the device by different metal-induced doping [@GG2008] near the metal-graphene contact regions. In this scheme, an internal (built-in) electric field [@KN2004] is formed to separate photo-generated carriers [@FX2009; @TM2010]. A milestone for high-speed photodetectors based on SLG has been demonstrated with the free-space top-illumination technique [@TM2010]. However, the internal built-in electric field only exists in narrow regions of $\sim$200nm adjacent to the electrode/graphene interfaces [@FX2009; @TM2010]. The large distance of 1$\mu$m between the two electrodes [@TM2010] limits the collection efficiency of photo-generated carriers and thus the responsivity. Moreover, the very high-quality graphene required is typically obtained by the exfoliation method [@KN2004], which restricts the potential for large-scale integration. Here, we report an ultra-compact, on-chip, and high-speed graphene photodetector based on a plasmonic slot waveguide [@MA2017; @YD2017; @YS2018].
The subwavelength confinement of the plasmonic mode gives rise to enhanced light-graphene interactions, and the narrow plasmonic slot of 120nm enables short drift paths for photogenerated carriers. The smallest integrated graphene photodetector demonstrated, with a graphene-coverage length of 2.7$\mu$m, shows no response drop up to a frequency of 110GHz and an intrinsic responsivity of 25mA/W. Increasing the device size to 19$\mu$m results in an increased intrinsic responsivity of 360mA/W, equivalent to a high external quantum efficiency of 29$\%$. This performance is comparable with that of a state-of-the-art commercial 100GHz semiconductor photodetector [@XPDV412xR], and can be improved significantly further. Given the extremely broad absorption band of graphene, our device shows great potential. Moreover, the use of chemical-vapour-deposition (CVD)-grown graphene allows for scalable fabrication, and we believe that our work pushes 2D materials towards practical applications, e.g., optical interconnects and high-speed optical communications. ![**Principle of the graphene-plasmonic integrated photodetector.** **A**. Schematic of the proposed graphene-plasmonic hybrid photodetector. **B**. The potential profile of the device showing the drift of the photo-generated carriers. $\Delta\phi_{\rm Pd}$ and $\Delta\phi_{\rm Ti}$ are the differences between the Dirac point energy and the Fermi level in palladium- and titanium-doped graphene, respectively. **C**. Cross-section of the device with its corresponding plasmonic slot mode at $\lambda=1.55~\mu m$. []{data-label="fig:principle"}](Fig1.pdf){width="46.00000%"} Results {#results .unnumbered} ======= Principle {#principle .unnumbered} --------- The schematic of the proposed graphene-plasmonic hybrid photodetector is shown in Fig.
\[fig:principle\](A), where the light from a fiber is first coupled to a silicon waveguide through a grating coupler, and further to the plasmonic slot waveguide by a short taper structure [@YD2017; @YS2018]. The plasmonic slot waveguide, see the cross-section of the device in Fig. \[fig:principle\](C), consists of two asymmetric metallic contacts of Au(90nm)/Pd(5nm) and Au(90nm)/Ti(5nm), resulting in different doping of the graphene [@TM2010]. The potential difference indicated in Fig. \[fig:principle\](B) gives rise to efficient separation of the photo-generated carriers and formation of a photocurrent. The responsivity and speed of the proposed photodetector take full advantage of the plasmonic slot waveguide with its narrow gap $g$ of 120nm. Firstly, the plasmonic slot waveguide provides sub-wavelength light confinement at the nanometer scale, as shown in Fig. \[fig:principle\](C), resulting in extremely strong graphene-light interaction and thus high responsivity. With the optimum geometry of a small gap and thin Au thickness of the plasmonic slot waveguide, the single-layer graphene leads to extremely high light absorption of $\sim$1dB/$\mu$m, see the Supplementary Material, which is at least one order of magnitude higher than that of graphene-silicon waveguide photodetectors [@XG2013; @DS2017]. Secondly, the internal electric field mentioned above covers the whole plasmonic slot region of 120nm. Thus, photo-generated carriers can be effectively separated, leading to high responsivity. Furthermore, the narrow plasmonic slot gives short drift paths for the carriers, resulting in ultra-fast carrier transit through the photodetection region and thus high speed. Experimental results {#experimental-results .unnumbered} -------------------- Fig. \[fig:fabricated\](A) shows the fabricated graphene-plasmonic photodetector, where the dashed lines represent the graphene coverage boundary. The Raman spectrum shown in Fig.
\[fig:fabricated\](B) illustrates moderate degradation after the wet-transfer process, which is described in the Method section. Graphene-plasmonic hybrid photodetectors with different graphene-coverage lengths were fabricated, and the cut-back method yields an absorption coefficient of 0.8dB/$\mu$m in the detection region, as presented in Fig. \[fig:fabricated\](C). ![**Characterization of the fabricated device.** **A**. An example of a fabricated graphene-plasmonic PD with a graphene coverage length of 2.7$\mu$m. **B**. Measured Raman spectrum of the graphene after the wet-transfer process. The inset shows the scanning-electron microscope (SEM) image of the 2.7$\mu$m device after the graphene patterning process. **C**. Analysis of coupling and propagation loss by the cut-back method. The error bars are obtained by measuring six copies of the device with the same graphene coverage length. []{data-label="fig:fabricated"}](Fig2.pdf){width="48.00000%"} ![image](Fig3.pdf){width="65.00000%"} A key parameter, the intrinsic responsivity $R_{\rm ph}$ of the graphene-plasmonic waveguide photodetector referenced to the power in the plasmonic waveguide, was characterized with a light power of –4dBm (400$\mu$W) in the plasmonic waveguide. The photocurrent was measured by switching the light on and off while recording the current difference. Measurements were performed at different bias voltages $V_B$ (Au/Ti electrode relative to Au/Pd electrode) for devices with different graphene-coverage lengths $L(=2.7, 3.5, 9, 19~\mu m)$, as shown in Fig. \[fig:responsivity\](A). The potential profiles at different bias voltages and the corresponding output electrical signals, due to injection of an optical pulse into the chip, are also presented in Figs. \[fig:responsivity\](B) and \[fig:responsivity\](C), respectively. At zero bias voltage, an intrinsic responsivity of $\sim$1.3mA/W is observed for the device with the graphene-coverage length of 19$\mu$m, see the insets of Fig.
\[fig:responsivity\](A). This is due to the asymmetric potential profile between the two electrodes, which is also addressed in [@TM2010]. The zero responsivity measured at a negative bias of around $-$0.1V indicates a flat potential profile, as presented in Fig. \[fig:responsivity\](B), and no change of the optical signal is observed, see the second subplot in Fig. \[fig:responsivity\](C). Further increasing the negative bias voltage results in an increased responsivity, and a clear electrical pulse is obtained at $-$0.8V, as presented in Fig. \[fig:responsivity\](C). Increasing the positive bias voltage also increases the responsivity and yields a clear output electrical pulse with opposite polarity compared to the negative-bias case. A bias voltage beyond 1.5V leads to a significant increase of the responsivity and a much larger amplitude of the electrical pulse. Moreover, as the graphene-coverage length $L$ increases, the responsivity also improves, since a larger fraction of the optical power is absorbed, leading in turn to a higher photocurrent. As presented in Fig. \[fig:responsivity\](A), the smallest device with 2.7$\mu$m graphene coverage gives a responsivity of $\sim$26mA/W at a bias voltage of 2V. The quantum yield, represented by the external quantum efficiency (EQE), is defined by ${\rm EQE}=R_{\rm ph}\times \hbar\omega/e$, where $\hbar\omega$ is the photon energy, with $\hbar$ being the reduced Planck constant and $\omega$ the angular frequency of light, while $e$ is the electron charge. Thus, a corresponding EQE of $2\%$ is obtained. For the device with 19$\mu$m graphene coverage, the highest responsivity of 360mA/W is obtained at a bias voltage of 2.2V, corresponding to a high EQE of $29\%$. ![image](Fig4.pdf){width="70.00000%"} We further measured the bandwidth of these graphene-plasmonic photodetectors, as presented in Figs. \[fig:bandwidth\](A)–(D). The experimental setup can be found in the Supplementary Material.
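The responsivity-to-EQE conversion quoted above is a one-line calculation; a quick numerical check (the 1.55$\mu$m operating wavelength is taken from Fig. \[fig:principle\], the constants are standard CODATA values):

```python
# EQE = R_ph * (photon energy)/e = R_ph * h*c/(lambda*e), since hbar*omega = h*c/lambda.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
e = 1.602176634e-19   # elementary charge, C
lam = 1.55e-6         # operating wavelength, m

def eqe(responsivity_A_per_W):
    """External quantum efficiency from the intrinsic responsivity."""
    return responsivity_A_per_W * h * c / (lam * e)

print(f"R = 360 mA/W -> EQE = {eqe(0.360):.1%}")   # about 29%
print(f"R =  26 mA/W -> EQE = {eqe(0.026):.1%}")   # about 2%
```

At 1.55$\mu$m the photon energy is $\sim$0.8eV, so each A/W of responsivity corresponds to roughly 80% quantum efficiency, consistent with the numbers in the text.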
All measurements were carried out at a bias voltage of 1.6V to obtain a good signal-to-noise ratio. At frequencies below 40GHz, we used a vector network analyzer (VNA, 40GHz bandwidth) to measure the current (optical) frequency response $R_{\rm op}$. No response drop is found within the 40GHz bandwidth, see the blue lines in Figs. \[fig:bandwidth\](A)–(D), indicating a large detection bandwidth. The devices were then further characterized by an alternative method: fast Fourier transformation (FFT) of the impulse response of the detectors. Within the 40GHz bandwidth, the frequency response obtained by the impulse-response method, see the solid red lines, agrees quite well with the VNA method mentioned above. With the aid of the impulse-response method, we obtain a measurement-system optical bandwidth of at least 85GHz for the smallest device with a graphene coverage length of 2.7$\mu$m. Increasing the device size results in a broader impulse response (see the Supplementary Material) and thus a smaller bandwidth. The device with a graphene coverage length of 19$\mu$m exhibits a measurement-system optical bandwidth of at least 75GHz. All the impulse-response measurements can be found in the Supplementary Material. Note that this system bandwidth is significantly influenced by the bandwidth of the oscilloscope, RF cables, and bias tee. Thus, we have employed a third method (FB+ESA) utilizing frequency beating (FB) between two coherent light fields to measure the RF power in a wide-band electrical spectrum analyzer (ESA, 110GHz bandwidth). As presented in Figs. \[fig:bandwidth\](A)–(D), for frequencies near 40GHz, the response obtained by the FB+ESA method (the triangular marks) overlaps quite well with that obtained by the VNA method (the blue curves). Within 110GHz, no drop in detection response is observed for the device with a graphene coverage length of 2.7$\mu$m, indicating a bandwidth >110GHz.
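The impulse-response method mentioned above is conceptually simple: the magnitude of the FFT of the recorded electrical pulse gives the frequency response. A toy sketch on synthetic data (the single-pole response shape, decay constant, and sampling rate are illustrative assumptions, not measured values):

```python
import numpy as np

fs = 1e12                        # assumed sampling rate: 1 TS/s
t = np.arange(1000) / fs         # 1 ns record
tau = 4e-12                      # assumed single-pole decay constant
h = np.exp(-t / tau)             # toy impulse response of the detector

H = np.abs(np.fft.rfft(h))       # frequency response magnitude
f = np.fft.rfftfreq(len(h), 1 / fs)
resp_db = 20 * np.log10(H / H[0])

# For a single-pole response the 3-dB bandwidth is ~1/(2*pi*tau), here ~40 GHz.
f3db = f[np.argmax(resp_db < -3)]
print(f"3-dB bandwidth ~ {f3db / 1e9:.0f} GHz")
```

In practice the measured response must additionally be de-embedded from the bandwidth of the oscilloscope, cables, and bias tee, which is exactly why the FB+ESA method is used above at the highest frequencies.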
A longer device leads to a slight drop in detection response at high frequencies around 100GHz. The small frequency dip at $\sim$46GHz is attributed to impedance mismatch between the electrode pads and the RF probe. The device with a graphene coverage length of 19$\mu$m, see Fig. \[fig:bandwidth\](D), exhibits a 1.5-dB optical bandwidth of 110GHz. As the graphene coverage length increases, the responsivity improves significantly, as demonstrated in Fig. \[fig:responsivity\](A), while the current frequency response $R_{\rm op}$, shown by the red triangular marks, drops slightly at high frequency, indicating a tradeoff between responsivity and bandwidth. The detector with a graphene coverage length of 19$\mu$m was further used to receive 10, 20, and 40Gbit/s return-to-zero (RZ) optical signals with a pseudo-random binary sequence (PRBS) length of $2^{31}-1$, amplified by an erbium-doped fiber amplifier (EDFA) and injected into the chip. The output electrical signal from the graphene photodetector was electrically amplified (40GHz-bandwidth electrical amplifier) and fed to a high-speed oscilloscope (70GHz bandwidth) to record the eye diagrams. Clear and open eye diagrams were obtained for all 10, 20, and 40Gbit/s signals, as exhibited in Figs. \[fig:bandwidth\](E)–(G), indicating no pattern effect of the detector and proving the feasibility of using such graphene photodetectors in realistic optical communication applications. Discussion {#discussion .unnumbered} ---------- Table \[SOTA\] summarizes the state-of-the-art waveguide-coupled integrated graphene photodetectors, as well as a commercially available photodetector. The performance of more than 110GHz bandwidth combined with a high responsivity of 360mA/W significantly exceeds that of previous graphene-silicon waveguide photodetectors and is comparable to state-of-the-art commercial high-speed photodetectors [@XPDV412xR].
With a narrower plasmonic gap as well as a thinner Au layer, higher absorption by graphene is expected, promising even higher responsivity, as analyzed in the Supplementary Material. Furthermore, tuning the Fermi level with a top gate [@CL2014] would further improve the absorption of graphene and thus the responsivity. Moreover, the use of CVD-grown graphene enables large-scale integration. It should be noted that the narrow plasmonic slot of 120nm results in an ultra-fast transit of carriers to the electrodes. The transit-time-limited bandwidth of the photodetector is given by $f_t=3.5/(2\pi t_{\rm tr})$ [@FX2009], where $t_{\rm tr}$ is the transit time through the photodetection region. At zero bias voltage, the difference in Fermi level between the palladium- and titanium-doped graphene is 0.05V [@TM2010], resulting in a built-in electric field of $\sim$0.4V/$\mu$m. With a corresponding carrier velocity of 1.1$\times 10^7$cm/s [@VD2010; @IM2008; @MB2017], it takes only 1.1ps for the carriers to transit through the 120nm plasmonic slot gap. Thus, a transit-time-limited bandwidth of 520GHz is expected, much larger than the measured bandwidth of 110GHz. This implies that there are possibilities to further optimize the performance of the graphene photodetector.

| Reference | Responsivity | EQE | Bandwidth | Type of graphene |
|---|---|---|---|---|
| [@XG2013] | 100mA/W | 7.9% | 20GHz | Exfoliation |
| [@DS2014] | 7mA/W | 0.56% | 41GHz | CVD |
| [@NY2014] | 57mA/W | 4.56% | 3GHz | CVD |
| [@RJS2015] | 360mA/W | 29% | 42GHz | Exfoliation |
| [@SS2016] | 76mA/W | 6.4% | 65GHz | Exfoliation |
| [@IG2016] | 370mA/W | 29.5% | – | CVD |
| [@DS2017] | 1mA/W | 0.08% | 76GHz | CVD |
| (\*)[@XPDV412xR] | 500mA/W | 40% | 100GHz | – |
| This work | 360mA/W | 29% | >110GHz | CVD |

\[SOTA\]

In summary, we have demonstrated an on-chip ultra-high-bandwidth photodetector based on a single-layer CVD graphene and plasmonic slot waveguide hybrid structure.
The narrow plasmonic slot waveguide not only enhances the light-graphene interaction, but also enables effective separation of the photo-generated carriers, leading to high responsivity and large bandwidth. An optical bandwidth larger than 110GHz with a large responsivity of 360mA/W is achieved. The devices demonstrated here are fully CMOS compatible and can easily be integrated with the silicon platform, and the use of CVD-grown graphene paves a promising way towards multi-functional integrated graphene devices for optical interconnects. Method {#method .unnumbered} ====== Fabrication process {#fabrication-process .unnumbered} ------------------- The device was fabricated on a commercial silicon-on-insulator sample with a top silicon layer of 250nm and a buried oxide layer of 3$\mu$m. The top silicon layer was first thinned down to 100nm by dry-etching (STS Advanced-Silicon-Etching machine, ASE) in order to obtain a good coupling efficiency with the plasmonic slot waveguide. The grating couplers and silicon waveguides were patterned by electron-beam lithography (EBL, E-Beam Writer JBX-9500FSZ) and fully etched by an ASE dry-etching process. After that, the graphene layer was wet-transferred and patterned by standard ultraviolet (UV) lithography (Aligner: MA6-2) and oxygen (O$_2$) plasma etching. Then, the Au/Pd (90nm/3nm)-graphene contact was patterned by a second EBL step, and obtained by metal deposition and a lift-off process. Finally, the Au/Ti (90nm/3nm)-graphene contact was patterned by a third EBL step, and obtained by metal deposition and a lift-off process. Graphene wet-transferring process {#graphene-wet-transferring-process .unnumbered} --------------------------------- Single-layer graphene grown by CVD on Cu foil (GRAPHENE SUPERMARKET) was transferred onto the devices by a wet-chemistry method. The SLG sheet was transferred by the following steps.
Firstly, a layer of photo-resist (AZ5200 series) was spin-coated (3500rpm for 1 minute) onto the SLG, and the photo-resist/SLG stack was then released by etching the underlying Cu foil in a Fe(NO$_3$)$_3$ solution (17wt%) at room temperature. The stack floating in the solution was then washed with deionized (DI) water several times and transferred onto the target device, followed by drying at room temperature for at least 24 hours. Afterwards, the photo-resist on the SLG was dissolved in acetone at room temperature. Finally, the graphene device was cleaned with ethanol and DI water and then dried before further processing. Acknowledgments {#acknowledgments .unnumbered} =============== The authors would like to thank Vitaliy Zhurbenko for the help with calibrating the RF cables and bias tee, and thank Prof. Fengnian Xia and Dr. Peter David Girouard for constructive discussions. The work is supported by the Center for Silicon Photonics for Optical Communication (SPOC, DNRF123) and the Center for Nanostructured Graphene (CNG, DNRF103), both sponsored by the Danish National Research Foundation, and the mid-chip project sponsored by VILLUM FONDEN (No. 13367). N. A. M. is a VILLUM Investigator supported by VILLUM FONDEN (No. 16498) and X. Z. is supported by VILLUM Experiment (No. 17400). Supplementary Materials {#supplementary-materials .unnumbered} ======================= Supplementary materials are available upon request. [10]{} K. C. Saraswat and F. Mohammadi. “Effect of scaling of interconnects on the time delay of VLSI circuits,” *IEEE Trans. Electron. Devices* 4, 645–650 (1982). D. A. Miller. “Device requirements for optical interconnects to silicon chips,” *Proc. IEEE* 97, 1166–1185 (2009). J. Michel, J. Liu, and L. C. Kimerling. “High-performance Ge-on-Si photodetectors,” *Nature Photon.* 4, 526–534 (2010). H. Ito et al. “High-speed and high-output InP-InGaAs unitraveling-carrier photodiodes,” *IEEE J. Sel. Top. Quantum Electron.* 10, 709–727 (2004). M.
--- abstract: 'In this work we study a simple way of controlling the emitted fields of sub-wavelength nanometric sources. The system studied consists of arrays of nanoparticles (NPs) embedded in optically active media. The key concept is the careful tuning of the NPs’ damping factors, which changes the decay rates of the eigenmodes of the whole array. At long times, this inevitably leads to a locking of the relative phases and frequencies of the individual localized surface plasmons (LSPs) and thus controls the emitted field. The amplitude of the LSP oscillations can be kept constant by embedding the system in an optically active medium. In the case of full loss compensation, this implies that not only the relative phases but also the amplitudes of the LSPs remain fixed, leading us, additionally, to interpret the process as a new example of synchronization. The proposed approach can be used as a general way of controlling and designing the electromagnetic fields emitted by nanometric sources, which can find applications in optoelectronics, nanoscale lithography, and probing microscopy.' author: - 'Raúl A. Bustos-Marún$^{1,2}$, Axel D. Dente$^{1}$, Eduardo A. Coronado$^{2}$, and Horacio M. Pastawski$^{1}$' title: 'Tailoring optical fields emitted by nanometric sources.' --- INTRODUCTION. ============= Advances in recent decades in the fabrication and characterization of nanometric devices have given rise to a revolution, fueled by the new and intriguing properties of matter at this size scale. Among the new fields that rapidly became central emerged the promise of plasmonics, with applications that range from ultra-sensitive nano-sensors to plasmonic circuitry. [@Maier-book; @Novotny-book; @NanoscaleEdu; @CRNordlander; @SPCircuitry] Many of those promises have become a reality nowadays, but the advances do not seem to slow down, and new ideas are still emerging in this field.
One interesting example is the combination of plasmonic devices with active media that partially or totally compensate the system’s losses. [@ExamGain1; @ExamGain2; @ExamGain3; @ExamGain4; @ExamGain5; @ExamGain6; @ExamGain7; @Li; @Spaser2003; @Spaser2010; @Spaser2011; @Soukoulis; @expSpasers1; @expSpasers2; @expSpasers3; @expSpasers4; @expSpasers5; @expSpasers6; @Spa-Mode-Sel1; @Spa-Mode-Sel2] Active media are made of dye molecules or semiconductor nanocrystals, where the population inversion is created optically or electrically. The concept of the spaser (surface plasmon amplification by stimulated emission of radiation), also known as a surface plasmon laser in a wider context, is an example of that. Originally proposed by Bergman and Stockman in 2003,[@Spaser2003] and finally implemented experimentally in 2009,[@expSpasers1; @expSpasers2; @expSpasers3] it is basically a source of electromagnetic fields, containing both propagating and evanescent waves, formed by the interaction of surface plasmons with an active medium that fully compensates the losses of the plasmonic system.[@Spaser2003; @Spaser2010; @Spaser2011; @Soukoulis] Spasers open many possibilities for prospective applications in nanoscience and nanotechnology, in particular for near-field nonlinear-optical probing and nanomodification. In this respect, it is desirable to control and to design *a priori* the electromagnetic fields generated by those hybrid systems. If the plasmonic system consists of arrays of NPs, the design of the electromagnetic fields implies control over the synchronized oscillation of the individual localized surface plasmons (LSPs), which leads to another interesting aspect.
Essentially, as in those systems not only the phases and frequencies of the individual LSPs but also their amplitudes remain fixed, the whole phenomenon can be interpreted as another example of synchronization. The phenomenon of synchronization, usually defined as the adjustment of rhythms of self-sustained oscillating objects because of their mutual interaction,[@SyncBook] has been observed in many physical and biological systems: from the coupled pendulum clocks first described by Christiaan Huygens[@Huygens] to chemical or biological examples, such as fireflies that flash in unison.[@SyncBook] However, to the best of our knowledge, it has never been described in the context of plasmonics. In this work we study plasmonic systems, consisting of metallic nanoparticle (NP) arrays, where losses are partially or fully compensated by an active medium. We not only find that the localized surface plasmons (LSPs) of individual NPs can be kept oscillating with a fixed amplitude and a fixed relative phase, becoming a new example of synchronization, but we also show that it should be relatively easy to control their asymptotic states by controlling the NPs’ damping. The manipulation of the system’s state at long times implies control of the NPs’ dipolar moments and thus of their emitted electromagnetic field. Therefore, our approach is a general way of designing the interference patterns of sources of optical fields at the sub-wavelength scale, which can have applications in several areas of nanotechnology. The paper is organized as follows: In section \[SecCDA\] we develop the basic tools used in our calculations.
In section \[SecResults\] we present the main results, analyzed through two simple examples of NP arrays, and discuss them in terms of: non-Hermiticity of the dynamical matrix and asymptotic states, phase and frequency locking, role of active media, gain-loss compensation, amplitude locking, and generalization to more complex structures, subsections \[subSecR0\] to \[subSecR5\]. Finally, in section \[SecConclusions\] we summarize the main conclusions. COUPLED DIPOLE APPROXIMATION FOR ELLIPSOIDS WITH RADIATION DAMPING. {#SecCDA} =================================================================== The systems studied are different arrays of metallic NPs, which are modeled through the well-known coupled dipole approximation.[@Bustos1; @Bustos2; @CDA1; @CDA2; @CDA3; @CDA4; @CDA5] In this model, each $i^{\mathrm{th}}$ NP is described by a dipole $\vec{P}_{i}$ induced by the electric field produced by the other dipoles, $\vec{E}_{j,i}$, and by the external source, $\vec{E}_{i}^{(\mathrm{ext})}$. We assume a generic ellipsoidal shape for the NPs, whose polarizabilities $\alpha$ are described in a quasi-static approximation, [@Ellipsoids; @KellyCoronadoSchatz] $$\alpha=\frac{\epsilon _{0}V(\epsilon -\epsilon _{m})}{\left[ \epsilon _{m}+L(\epsilon -\epsilon _{m})\right] } \label{alpha},$$ where $V$ is the volume, $\epsilon_{0}$ is the free-space permittivity, $\epsilon_{m}$ is the dielectric constant of the host medium, and $L$ is a geometric factor that depends on the shape of the ellipsoidal NP and on the direction of $E$. The dielectric constant of the NP, $\epsilon$, is described by a Drude-Sommerfeld-like model $$\epsilon = \epsilon_{\infty}-\frac{\omega_{_\mathrm{P}}^{2}}{(\omega ^{2}+i\omega \eta )},$$ where $\epsilon_{\infty}$ is a material-dependent constant that takes into account the contribution of the bound electrons to the polarizability, $\omega_{_\mathrm{P}}^{}$ is the plasmon frequency, and $\eta$ is the electronic damping factor.
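As an aside, the quasi-static polarizability and the Drude-Sommerfeld model above are straightforward to evaluate numerically. The following sketch, with illustrative (roughly Ag-like) parameter values that are not taken from the text, checks that $|\alpha(\omega)|$ peaks near the LSP resonance frequency:

```python
import numpy as np

# Sketch: quasi-static polarizability of a spherical NP with a Drude-Sommerfeld
# dielectric. All parameter values are illustrative, not taken from the text.
eps0 = 8.854e-12          # F/m, free-space permittivity
eps_inf = 3.7             # bound-electron contribution (assumed)
eps_m = 1.77              # host dielectric constant (assumed)
omega_p = 1.37e16         # rad/s, plasmon frequency (assumed)
eta = 3.2e13              # 1/s, electronic damping (assumed)
L = 1.0 / 3.0             # geometric factor of a sphere
V = 4.0 / 3.0 * np.pi * (15e-9) ** 3   # volume of a 15 nm radius NP

def eps_np(omega):
    """Dielectric constant of the NP (Drude-Sommerfeld-like model)."""
    return eps_inf - omega_p**2 / (omega**2 + 1j * omega * eta)

def alpha(omega):
    """Quasi-static ellipsoid polarizability, Eq. (alpha)."""
    e = eps_np(omega)
    return eps0 * V * (e - eps_m) / (eps_m + L * (e - eps_m))

# LSP frequency predicted by the quasi-static resonance condition
# (denominator of alpha minimal); |alpha| should peak close to it.
omega_sp = omega_p * np.sqrt(L / (eps_m + L * (eps_inf - eps_m)))
w = np.linspace(0.5, 1.5, 4001) * omega_sp
w_peak = w[np.argmax(np.abs(alpha(w)))]
print(w_peak / omega_sp)   # close to 1 for small damping
```

For small $\eta/\omega_{SP}$ the peak of $|\alpha|$ sits essentially at the quasi-static resonance, which is the regime assumed throughout this section.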
Assuming for simplicity a linear array of NPs and a near-field approximation for $\vec{E}_{i,j}$ yields $$\vec{E}_{i,j}=- \frac{\gamma ^{T,L} \vec{P}_{j}}{4\pi \epsilon _{0}\epsilon_{m}d^{3}}, \label{E-quasi}$$ where $d$ is the distance between NPs, and $\gamma$ is a constant that depends on the orientation of the NP array relative to the direction of $E$: $\gamma ^{T}=1$ if it is perpendicular and $\gamma ^{L}=-2$ if it is parallel. Taking these considerations into account, all $\vec{P}$s and $\vec{E}^{(\mathrm{ext})}$s can be arranged as vectors $\mathbf{P}$ and $\mathbf{E}$, resulting in:[@Bustos1; @Bustos2] $$\mathbf{P} = \left( \mathbb{I}\omega ^{2}-\mathbb{M}\right)^{-1} \mathbb{R} \mathbf{E}= \mathbf{\chi} \mathbf{E},\label{MatrixP}$$ where $\mathbf{\chi}$ is the response function, $\mathbb{M}$ is the dynamical matrix, and $\mathbb{R}$ is a diagonal matrix that rescales the external applied field according to local properties: $$R_{i,i}=-\epsilon _{0}V_{i} \omega_{_\mathrm{P}i}^{2} f,\label{Rii}$$ with $$f= \frac{ \left[ 1 - (\epsilon_{\infty}-\epsilon _{m,i}) \left( \omega ^{2}+i\omega \eta_{i}\right) /\omega_{_\mathrm{P}i}^{2} \right] } {\left[\epsilon _{m,i}+L_{i}(\epsilon_{\infty}-\epsilon _{m,i})\right]}.$$ To understand the physical meaning of $f$, first note that Eq. \[MatrixP\] resembles the equation of motion of a set of coupled harmonic oscillators. In the quasi-electrostatic limit, Eq. \[E-quasi\], and for a negligible radiation damping term, see Eq. \[Gamma\], this similarity is strict for $f$ equal to 1. Thus, this factor essentially accounts for deviations from the ideal model of coupled harmonic oscillators.
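The linear-system structure of Eq. \[MatrixP\] can be illustrated with a minimal frequency-domain sketch. All numbers below are toy values in units where $\omega_{SP} = 1$, and $\mathbb{R}$ is set to the identity for simplicity, so this shows only the algebraic structure, not a specific device:

```python
import numpy as np

# Minimal frequency-domain sketch of Eq. (MatrixP): P = (I w^2 - M)^{-1} R E.
# Toy numbers in units where omega_SP = 1; R is taken as the identity.
N = 3
w_sp2 = np.ones(N)                       # squared LSP frequencies
gamma = np.full(N, 0.02)                 # damping terms Gamma_i (illustrative)
w_x2 = 0.1                               # nearest-neighbour coupling omega_X^2

M = np.diag(w_sp2 - 1j * gamma)          # M_ii = omega_SP^2 - i Gamma
for i in range(N - 1):
    M[i, i + 1] = M[i + 1, i] = -w_x2    # M_ij = -omega_X^2 for i != j

E = np.array([1.0, 0.0, 0.0])            # external field driving only NP 1

def dipoles(w):
    """chi(w) E: solve (I w^2 - M) P = R E with R = I."""
    return np.linalg.solve(np.eye(N) * w**2 - M, E)

P = dipoles(1.0)                         # drive at the bare LSP frequency
print(np.abs(P))                         # magnitudes of the induced dipoles
```

Even with only NP 1 driven, all three dipoles respond, since the resolvent $(\mathbb{I}\omega^2-\mathbb{M})^{-1}$ mixes the sites through the couplings.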
The coupling constants, $M_{i,j}= - \omega_{_\mathrm{X}i,j}^{2}$ (for $i \neq j$), and the LSP complex squared frequencies, $M_{i,i}=\omega_{_\mathrm{SP}i}^{2}-\mathrm{i} \Gamma_i(\omega)$, are given by:[@Bustos1; @Bustos2] $$\omega_{_\mathrm{X}i,j}^{2} =\frac{\gamma ^{_{T,L}} V_{i} \omega_{_\mathrm{P}i}^{2}} {4\pi \epsilon_{m}d_{i,j}^{3}} f, \label{OmegaX}$$ $$\omega_{_\mathrm{SP}i}^{2} =\frac{\omega_{_\mathrm{P}i}^{2}L_{i}}{\left[ \epsilon _{m,i}+L_{i}(\epsilon_{\infty}-\epsilon _{m,i})\right] }, \label{OmegaSP}$$ and $$\Gamma(\omega)=\eta \omega + \eta _R \omega ^3, \label{Gamma}$$ where $\eta$ is the electronic damping and $\eta _R$ the radiation damping. The electronic damping $\eta$ can be calculated from the Fermi velocity $v_f$, the bulk mean free path $l_{bulk}$, the volume $V$, and the surface $S$ of the NP by using Matthiessen's rule $\eta= v_f (1/l_{bulk}+C/l_{eff})$ with $C \approx 1$, and the Coronado-Schatz formula $l_{eff}=4V/S$.[@CoronadoSchatz] The value of $\eta _R$ can be calculated from the ellipsoid's semi-axes $a$, $b$, and $c$: $\eta _R=2/9(a b c/v^3) \omega_P^2 f$, where $v$ is the speed of light in the host medium. This extra damping term appears when the polarizability $\alpha$ is corrected by using the modified long-wavelength approximation, $\alpha'=\alpha \left[1- \mathrm{i}\, k^{3} \alpha /(6 \pi \epsilon_0) \right]^{-1}$.[@KellyCoronadoSchatz] In the examples analyzed here, dynamic depolarization is negligible and is thus not included in the equations for simplicity. Retardation effects change the coupling terms, which now should be determined by the true dipole-induced electric field, *i.e.*: $$\begin{aligned} \vec {E} = \frac { e^{ ik d } |P|} { 4\pi \epsilon_0 \epsilon_m d^{ 3 } } & \left\{ (k d)^{ 2 }(\hat { d } \times \hat { p } )\times \hat { d } \right. \\ & + \left. \left[ 3\hat { d } (\hat { d } \cdot \hat { p } )-\hat { p } \right] \left( 1- ikd \right) \right\},\end{aligned}$$ where $k$ is the wavenumber in the dielectric, $k=\omega/v$ (with $v$ the speed of light in the medium), $\hat { d }$ is the unit vector in the direction of $\vec {d}$ (the position of the observation point with respect to the position of the dipole), $\hat { p }$ is the unit vector in the direction of $\vec {P}$, and $|P|$ is its modulus. If the system consists of a linear array of NPs where the spheroid axes are aligned with respect to the direction of the array, transversal ($T$) and longitudinal ($L$) excitations do not mix, which allows us to preserve the form of Eq. \[OmegaX\] by simply replacing $\gamma^{_{T,L}}$ by $\widetilde{\gamma }^{_{T,L}}$, where: $$\begin{aligned} & \widetilde{\gamma }^{_{L}}_{i,j}=-2[1-ikd_{i,j}]e^{ ikd_{i,j} } \notag \\ & \widetilde{\gamma }^{_{T}}_{i,j}=[1-ikd_{i,j}-(kd_{i,j})^{ 2 }]e^{ ikd_{i,j} }.\end{aligned}$$ We use this final form of the equations in all the calculations shown here; however, the qualitative results do not change if the quasi-static approximation is used. The temporal evolution of the dipolar moments of the individual NPs can be evaluated by Fourier transforming the response function $\chi(\omega)$ into $\chi(t)$ and using the convolution theorem: $$P_i(t) = \sum_{j} \int_{0}^{t} \chi_{i,j} (t-\tau) E_{j}^{(ext)}(\tau) d \tau. \label{fourier}$$ The functions $\chi_{i,j}(t)$ were numerically computed from $\chi_{i,j}(\omega)$ by using a fast Fourier transform algorithm.[@Numerical] Here, when using active media, one must be careful not to exceed the loss-compensation condition, since the response function $\chi_{i,j}$ is assumed to be square integrable. RESULTS. {#SecResults} ======== Non-Hermiticity of $\mathbb{M}$ and asymptotic states.
{#subSecR0} ------------------------------------------------------ In the type of system studied here, frequency and phase locking may appear as a natural consequence of the properties of non-Hermitian matrices. While isolated systems are described by a typical Hermitian dynamical matrix $\mathbb{M}$, where the final state depends on the initial conditions, the presence of an “environment” leads to a non-Hermitian dynamical matrix. [@Bustos1; @Bustos2; @Rotter; @PastPhysB] This interaction may produce asymptotic states that are independent of the initial conditions. An illustrative example is the case of a pair of piano strings in a unison group.[@pianos] There, the slightly detuned strings are coupled through the bridge, which, in turn, is coupled to a dissipative soundboard. Within a certain critical parametric range, this dissipative coupling induces the synchronous oscillation of both strings [@pianos] and gives the piano its characteristic and persistent aftersound. The dissipative coupling can be modeled by an imaginary coupling term which, at a critical strength, produces the collapse of the pair of originally mistuned eigenfrequencies into a single tone. Simultaneously, the originally identical dampings split into a short-lived and a long-lived mode. The effect of this is that the long-time evolution is dominated, for almost any initial condition, by the normal mode whose eigenvalue has the smallest imaginary part. The same analysis can be straightforwardly applied to plasmonic systems represented by Eq. \[MatrixP\], where the analogy also includes the concepts of dissipative couplings, frequency collapses, and damping splittings (see Appendix). However, in plasmonic systems the asymptotic state of a given system is not always obvious, which makes its control even less so. The situation worsens if we consider that parameters such as NP shapes and separations are usually not accurately determined.
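The frequency collapse and damping splitting just described can be reproduced with a two-by-two toy model: two detuned squared eigenfrequencies coupled by a purely imaginary term. A minimal sketch (arbitrary units; the numerical values are chosen only for illustration):

```python
import numpy as np

# Toy model of the piano-string analogy: two detuned modes, squared frequencies
# w1sq and w2sq, with a purely imaginary (dissipative) coupling -i*g.
def modes(g, w1sq=1.00, w2sq=1.02):
    M = np.array([[w1sq, -1j * g],
                  [-1j * g, w2sq]])
    return np.linalg.eigvals(M)          # complex squared eigenfrequencies

weak = modes(0.001)    # below the critical coupling |w2sq - w1sq| / 2 = 0.01
strong = modes(0.05)   # above it

# Below the critical strength the two real parts (tones) stay distinct; above
# it they collapse into a single tone while the imaginary parts (decay rates)
# split into a short-lived and a long-lived mode.
print(np.ptp(weak.real), np.ptp(strong.real), np.ptp(strong.imag))
```

The transition at $g = |\omega_2^2-\omega_1^2|/2$ is the critical strength mentioned above: beyond it, one mode decays much more slowly and dominates the long-time evolution for almost any initial condition.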
Besides, unlike the discussed case of coupled piano strings, the amplitude of the oscillations of the LSPs decays so fast that it would be quite difficult to observe the phase and frequency locking. Therefore, two main features are desirable: the control at will of the asymptotic state of the system, and the ability to keep the amplitude of the LSP oscillations over a long period of time. In the following sections we address each of these points sequentially. Phase and frequency locking. {#subSecR1} ---------------------------- ![(Color online) **- A)**. Dipolar moment $P_i$ of NPs 1 and 3 (in arbitrary units) vs time (in units of $\omega_{SP}^{-1}$) for three aligned and identical NPs. Between $t=0$ and $62$ (mark in green) an external field of frequency $\omega=\omega_{SP}$, with direction parallel to the array, is applied locally to the first NP to initialize the system. The parameters used correspond to spheroidal Ag NPs of radii $30$, $30$ and $8$ nm, separated by $32$ nm, with $\epsilon_m=1.77$. **B)**. The same but with the middle NP having a different shape ($90\times90\times8$ nm). **Upper insets**: Detail of the main figure. **Bottom insets**: Detail of the decay rate of the different oscillation modes. $S1=(P_1+\sqrt{2}P_2+P_3)/2$, $A=(P_1-P_3)/\sqrt{2}$, and $S2=(P_1-\sqrt{2}P_2+P_3)/2$. **Side figures**: Schemes of the NP arrays.[]{data-label="Figure1"}](Figure1.eps){width="3.5in"} As mentioned, the plasmonic dynamical matrix $\mathbb{M}$ resembles that of coupled harmonic oscillators. This can be used to analyze certain systems in simple terms, as we will see. Assuming the quasi-electrostatic limit, negligible damping terms, and $f \approx 1$, it is easy to evaluate the normal modes of $\mathbb{M}$. In the case of three equal NPs, aligned linearly and equally spaced, the normal modes can be written as: $(P_1-P_3)/\sqrt{2}$, $(P_1+\sqrt{2}P_2+P_3)/2$ and $(P_1-\sqrt{2}P_2+P_3)/2$.
Here $P_1$, $P_2$, and $P_3$ stand for the dipolar moments, in some given direction, of NPs 1, 2, and 3, respectively. If the difference in frequency between the NPs at the ends and the central one is small, these expressions are still approximately valid. Let us analyze this simple example of three aligned NPs and assume that we want to ensure an asymptotic state in which the NPs at the ends remain oscillating in anti-phase. In this case, one only needs to add a larger damping factor to the middle NP. The normal mode of $\mathbb{M}$ that has zero weight over the NP with the high damping factor, $\approx (P_1-P_3)/\sqrt{2}$, has a small decay rate compared with the other two, $\approx(P_1+\sqrt{2}P_2+P_3)/2$ and $\approx(P_1-\sqrt{2}P_2+P_3)/2$, which both have finite weights over the highly dissipative nanostructure (NP 2 in this example). The strategy is then clear: the key to controlling the phase and frequency locking is the careful design of the damping factors of the NPs, in such a way that one normal mode (the one that will define the desired phase relationship and frequency) is left with the smallest, ideally zero, weight over the regions of the array with the largest damping factors. There are of course several ways of increasing the damping factor of NPs, not only by changing their shape or material but also by “connecting” them to waveguides, for example.[@Bustos1; @Bustos2] Here we use the shape of the NPs to control the damping factors. According to the parameters chosen, the radiation damping term is the dominant one for the NP with the high damping factor, while the electronic damping term is the dominant one for the others. In Fig. \[Figure1\], we evaluate the temporal evolution of the dipolar moment $P_i(t)$ of each NP, by using Eq. \[fourier\], in two examples that illustrate how tuning the damping factor of the NPs can be used to control the asymptotic state of the system.
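Before turning to the full simulations, the mode-selection argument can be checked on a toy dynamical matrix. In units where $\omega_{SP}^2 = 1$ and with illustrative coupling and damping values, the slowest-decaying eigenmode of a three-NP chain with a lossy middle NP is indeed the anti-phase mode:

```python
import numpy as np

# Toy check of the mode-selection argument: three identical NPs in a line,
# with extra damping on the middle one. Units where omega_SP^2 = 1; the
# coupling and damping values are illustrative.
w_x2 = 0.05                                # nearest-neighbour coupling
gamma = np.array([0.01, 0.20, 0.01])       # middle NP made highly dissipative

M = np.diag(1.0 - 1j * gamma)
M[0, 1] = M[1, 0] = M[1, 2] = M[2, 1] = -w_x2

vals, vecs = np.linalg.eig(M)
slowest = np.argmin(-vals.imag)            # mode with the smallest decay rate
v = np.abs(vecs[:, slowest])
print(v)    # ~ (0.707, 0.000, 0.707): the anti-phase mode (P1 - P3)/sqrt(2)
```

The mode $(P_1-P_3)/\sqrt{2}$ has exactly zero weight on NP 2 and so keeps only the small edge damping, while the two symmetric modes acquire part of the large middle damping and die out first.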
In these examples we explicitly take into account the material and shape of the NPs, always within the coupled dipole approximation described in section \[SecCDA\] and including the full dependence of $\omega_{_\mathrm{X}}^{2}$ and $\Gamma$ on $\omega$. The results essentially show what was discussed above: after the external source is switched off the LSPs decay very fast, but, as indicated in the lower insets, different modes decay at different rates, which leads to a natural phase and frequency locking of the LSPs of the individual NPs. The asymptotic state of case **A** is not easily seen in the upper inset, but a more careful analysis, depicted in the lower inset, reveals that the mode with the lowest decay rate is “S1”. A comparison of cases **A** and **B** shows that the asymptotic state changes as a consequence of the increased damping factor of the middle NP. It should be mentioned that the only role of the external source of electric field is to initialize the system. This could have been done in many different ways in the simulation, for example by using a pulse of electromagnetic radiation. However, as long as all normal modes are excited, the final state will be the same, up to an amplitude factor of course. Role of active media. {#subSecR2} --------------------- As previously mentioned, there is a problem with the phase and frequency locking mechanism described above: everything occurs too fast. Note that in Fig. \[Figure1\] the time scale is in units of $\omega_{SP}^{-1}$ of NP 1, which for the NPs used corresponds to around 0.2 fs. This implies that the whole process starts and finishes in less than approximately 0.1 ps. The system must therefore be kept oscillating for longer periods of time in order to reasonably envision possible applications. This can be done by embedding the system in an optically active medium.
If the gain of the active medium is below the loss-compensation threshold, its effect can be modeled phenomenologically on the basis of classical electrodynamics, without explicitly taking into account the quantum dynamics of the chromophores. This is done by treating the medium as a dielectric with a negative imaginary part in the refractive index $n$, i.e. $n=n_0-i \kappa$.[@ExamGain1; @ExamGain2; @ExamGain3; @ExamGain4; @ExamGain5; @ExamGain6; @ExamGain7; @Li; @Spaser2010] Within this model, the active medium is consistent with a homogeneous distribution of the dye molecules, or nanocrystal quantum dots, and with a wide-band approximation for its response. The wide-band approximation implies that the eigenfrequencies of the modes are close together compared with the scale of the frequency dependence of the active medium. If this condition is not fulfilled, each mode will experience a different value of $\kappa$, or even a null one if the frequency of the mode is far enough from the maximum of the medium’s stimulated emission spectrum. In this case the analysis of mode compensation is direct, as it can be based on the mode frequencies alone. On the contrary, if the eigenfrequencies of the modes are close enough that all modes experience approximately the same value of $\kappa$, it is in principle not obvious which mode will be compensated first, and even less obvious how to control this. This is why the wide-band approximation allows us to explore alternatives for controlling the system’s asymptotic states, beyond the mechanisms based on the frequency response of the active medium or the use of spatial inhomogeneities in its distribution around the system.[@Spa-Mode-Sel1; @Spa-Mode-Sel2] As mentioned, it may not be obvious how an active medium would affect the phenomenon depicted in Fig. \[Figure1\], mainly because $n$ enters non-linearly in the equations, see Eqs. \[MatrixP\]-\[Gamma\], and this could in principle change the expected asymptotic state.
However, as we are precisely considering gain media without an explicit spatial distribution or frequency dependence, it is reasonable to expect that all modes will be excited similarly. Thus, if there are appreciable differences in the natural decay rates, the asymptotic states with an active medium should be determined directly by them. Fig. \[Figure2\] shows essentially that: incorporating an optical gain medium does not change the asymptotic states discussed in the previous section, even though a value of $\kappa$ that almost completely compensates losses has been used. In the two examples analyzed, the slowest decaying mode remains as such, modes “S1” and “A” for cases **A** and **B**, respectively. The only effect of the active medium in those examples, besides keeping the system oscillating for longer periods of time, is that it systematically increases even further the differences in the decay rates, making phase and frequency locking occur even earlier. As the system remains oscillating for longer periods, it is easier to see in the figures (upper insets) the phase locking and how it is affected by changing the damping factors. In case **A**, the NPs at the edges (NPs 1 and 3 in Fig. \[Figure2\]) end up oscillating in phase, while, if we increase the damping factor of the middle NP as in case **B**, the NPs at the edges end up oscillating in anti-phase. As mentioned before, the reason for this is simply that the anti-phase oscillation of the NPs at the edges interferes destructively over the middle NP, where the largest damping factor is present. The other two normal modes, which have some weight on the middle NP, increase their decay rates. Besides the examples shown in the figures, we also tried other possibilities, such as other NP arrays or different system parameters. However, the results were always the same: when there are appreciable differences in the decay rates with $\kappa = 0$, which is for example the case of Fig.
\[Figure1\]-**B**, a homogeneous active medium is not able to change the expected asymptotic state. Only for systems like case **A** of Fig. \[Figure1\], where the decay rates for $\kappa = 0$ are very close, did we observe, for some system parameters, that the active medium changes the expected asymptotic state. ![(Color online) - The same as Fig. \[Figure1\] but considering an optically active medium with $\kappa=0.11$ and $0.12$ for subfigures **A** and **B** respectively.[]{data-label="Figure2"}](Figure2.eps){width="3.5in"} Gain-loss compensation. {#subSecR3} ----------------------- At this point, it is important to discuss the limiting value of $\kappa$, $\kappa_{\mathrm{lim}}$, for which losses are exactly compensated, and the experimental feasibility of reaching it. The value of $\kappa_{\mathrm{lim}}$ can be evaluated from the poles of Eq. \[MatrixP\] by looking for the pole with the smallest imaginary part; $\kappa_{\mathrm{lim}}$ is then the value of $\kappa$ for which the imaginary part of this pole equals zero. In some cases it is easy to obtain approximate analytical expressions, but in general one must resort to numerical evaluation. In case **B** of Figs. \[Figure1\] and \[Figure2\], the eigenvalue $\omega_{\mathrm{eig-A}}^2$ of the “A” eigenmode, $(P_1-P_3)/\sqrt{2}$, can be obtained easily by assuming a wide-band approximation: $$\omega_{\mathrm{eig-A }}^2 \approx \omega _{_\mathrm{SP}}^{2}-\mathrm{i} \Gamma, \label{polos}$$ where $\omega _{_\mathrm{SP}}^{2}$ is the squared LSP resonance frequency of one of the NPs at the ends, and $\Gamma$ is its damping factor. Then, the value of $\kappa_{\mathrm{lim}}$ can be obtained by using Eq. \[OmegaSP\] with $\epsilon_m=n^2$ and assuming a small $\kappa$.
The result is: $$\kappa_{\mathrm{lim}} \approx \frac {\Gamma [n_0^2+L(\epsilon_{\infty}-n_0^2)]} {2 n_0 \omega_{_\mathrm{SP}}^2 (1-L)}, \label{klim}$$ which, according to the parameters used, $n_0=1.33$, $L=0.689$, $\epsilon_{\infty}=3.7$, and $\Gamma / \omega_{_\mathrm{SP}}^2 \approx 0.032$, gives $\kappa_{\mathrm{lim}} \approx 0.121$. For case **A** of Figs. \[Figure1\] and \[Figure2\], it is more difficult to obtain simple analytical solutions, as $ \omega_{_\mathrm{X}}^2$ also enters the equations and depends on $\kappa$. However, the poles can always be evaluated numerically. From the simulations, we estimated the value of $\kappa_{\mathrm{lim}}$ as approximately $0.11$ and $0.12$ for cases **A** and **B**, respectively, which should be close to experimental possibilities.[@expSpasers1; @expSpasers2; @expSpasers3; @expSpasers4; @expSpasers5; @expSpasers6; @GainExp1; @GainExp2; @GainExp3; @GainExp4] Note the agreement between the numerical and analytical results for case **B**. The value of $\kappa$ is a phenomenological coefficient that represents the ability of some media to coherently amplify an electromagnetic field. It is related to the amplification coefficient $g$ by $g = 4 \pi \kappa / \lambda$. Gain media in plasmonics are made of chromophores that overlap spatially and spectrally with the surface plasmon modes of the nanostructure. These chromophores can be semiconductor nanocrystals, dye molecules, rare-earth ions, or electron-hole excitations of a bulk semiconductor. The gain coefficient can be written as $g = N \sigma_e$, where $N$ is the concentration of electron-hole pairs in the case of semiconductors, or the concentration of molecules times their population inversion in the case of dye molecules.
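As a quick sanity check, Eq. \[klim\] can be evaluated directly with the parameters quoted above for case **B**; the result comes out at about 0.12, consistent with the estimate from the simulations:

```python
# Direct numerical check of Eq. (klim) with the parameters quoted in the text
# for case B: n0 = 1.33, L = 0.689, eps_inf = 3.7, Gamma / omega_SP^2 = 0.032.
n0 = 1.33
L = 0.689
eps_inf = 3.7
gamma_over_wsp2 = 0.032

kappa_lim = gamma_over_wsp2 * (n0**2 + L * (eps_inf - n0**2)) / (2 * n0 * (1 - L))
print(round(kappa_lim, 3))   # ~0.12
```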
The symbol $\sigma_e$ is the stimulated emission cross section, which in turn depends on the dipolar moment of the transition.[@ExamGain1; @ExamGain2; @ExamGain3; @ExamGain4; @ExamGain5; @ExamGain6; @ExamGain7; @Li; @Spaser2003; @Spaser2010; @Spaser2011; @Soukoulis] Here we should clarify one point. Up to now, we have been discussing and comparing the decay rates of different modes that all share the same direction of the electric field, parallel to the array. However, there are two other sets of modes, those with the electric field perpendicular to the array, that can also enter the analysis of the system’s asymptotic state. If the dipolar moment of the transition of the molecules or semiconductor nanocrystals that constitute the active medium has a preferential direction, then the medium can only feed back some of the modes, those with a finite overlap between the mode’s electric field and the dipolar moment.[@Spaser2010; @Spaser2011; @Soukoulis] In this case, only some modes, ideally those that oscillate in the preferred direction, should be considered. On the contrary, if the dipolar moment of the transition has a random orientation, then one has to analyze the full picture, i.e., all nine modes of the arrays. In this last case, the shape of the NPs acquires a central role, because it determines in which direction the system will remain oscillating. To see this, note that the $L_i$ factor of Eq. \[alpha\] depends on both the shape of the NP and the direction of the electric field, and this parameter enters not only the eigenfrequency of the mode but also the damping term $\Gamma$, through $\eta_R$ and $f$. For example, let us consider a system of three equal NPs of $30\times20\times8$ nm aligned in the direction of the minor axis and separated by 24 nm. Here, the mode that is compensated first by the active medium is the one where all LSPs oscillate synchronously in phase and parallel to the major axis. In this case $\kappa_{\mathrm{lim}} \approx 0.024$.
The equivalent modes for the other directions, those where the LSPs oscillate in phase and parallel to the second largest axis and to the minor axis, have values of $\kappa_{\mathrm{lim}}$ of approximately $0.029$ and $0.073$, respectively. Amplitude locking. {#subSecR4} ------------------ ![Electric field $E$ for the asymptotic state of cases **A** and **B** of Fig. \[Figure2\]. The strength of $E$ is normalized to its maximum value in each figure.[]{data-label="Figure3"}](Figure3.eps){width="2.7in"} According to our equations up to now, for $\kappa > \kappa_{\mathrm{lim}}$, $P(t)$ should grow exponentially *ad infinitum*, which is of course not realistic. At some point the pumping mechanism that maintains the population inversion must be overcome by the decay of the molecules from their excited state toward their ground state. The realistic situation is that the amplitude of the surface plasmon oscillations should stabilize at some point. This is so because the stimulated emission that depletes the excited states depends on $|E|^2$ which, in turn, depends on $P_i$, while the mechanism that restores the population inversion is fixed and independent of $P_i$.[@Spaser2010; @Spaser2011; @Soukoulis] A complete treatment would require solving the quantum dynamics of each chromophore under the influence of the electromagnetic field at its position, together with the coupled equations of motion of the surface plasmon dynamics. This is beyond the scope of this work and, besides, has already been addressed by other authors in the context of spasers.[@Spaser2010; @Soukoulis] The important result of these previous works, for the present purposes, is that the system evolves in a somewhat complex way until a stationary regime is reached.
This stationary regime corresponds to a net amplification equal to zero, which means that gain exactly compensates losses,[@Spaser2003; @Spaser2010; @Spaser2011; @Soukoulis] a condition expressed in our case by Eq. \[klim\] in terms of $\kappa_{\mathrm{lim}}$. Essentially, the convergence towards a stationary regime where losses are compensated implies amplitude locking. The asymptotic value of the amplitude may be complex to evaluate, but the important point is that, sooner or later, it is reached, and it is nonzero for $\kappa_{\mathrm{initial}} > \kappa_{\mathrm{lim}}$ and initial conditions different from the trivial one, $P_i = 0$. The other important point is that, once the system is in the stationary regime, the population inversion freezes, fixing the gain coefficient $g$, and thus $\kappa$, at $\kappa = \kappa_{\mathrm{lim}}$. Then, independently of how or when this stationary regime is reached, in the end one should see the type of behavior shown in the context of Fig. \[Figure2\], i.e. different normal modes are compensated differently by the active medium. Therefore, while the slowest decaying mode is exactly compensated, the others will be undercompensated, which inevitably leads to phase, frequency, and also amplitude locking. Note that, because of this, the plasmonic systems studied can be considered a new example of synchronization. The above analysis also has another important consequence: the gain medium cannot, in general, exactly compensate the losses of all eigenmodes at the same time. Let us assume the system has three eigenmodes, each with a different value of $\kappa_{\mathrm{lim}}$: $\kappa_{1} < \kappa_{2} < \kappa_{3}$. Then, if one tries to compensate the second or the third mode, $\kappa = \kappa_{2}$ or $\kappa = \kappa_{3}$, the first one will be overcompensated, which cannot define a stationary state as it would grow indefinitely.
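A toy linearized sketch (ours, not the paper's coupled-mode model) makes this threshold argument concrete: if, near threshold, each mode's amplitude behaves as $\exp[s_n t]$ with a net rate $s_n \propto (\kappa - \kappa_n)$, then pumping to $\kappa = \kappa_{2}$ leaves mode 1 growing without bound while mode 3 still decays:

```python
# Toy linearized picture: net growth rate of mode n is (kappa - kappa_n)
# in arbitrary rate units. Pumping to the second threshold overcompensates
# the first mode, so no stationary state exists at fixed kappa = kappa_2.
from math import exp

kappa_lims = [0.11, 0.12, 0.15]   # kappa_1 < kappa_2 < kappa_3 (illustrative values)
kappa = kappa_lims[1]             # try to compensate mode 2 exactly
t = 50.0                          # arbitrary time in units of the rate scale

amps = [exp((kappa - kl) * t) for kl in kappa_lims]
# mode 1 grows, mode 2 is marginal, mode 3 decays:
print(amps[0] > 1.0, abs(amps[1] - 1.0) < 1e-12, amps[2] < 1.0)   # True True True
```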
The realistic situation is that the population inversion of the active medium will be depleted by the increasing electromagnetic field of the first mode, reducing the value of $\kappa$ until it reaches $\kappa_{1}$. As this argument is very general, we believe its consequences should be present in the majority of such systems, provided the necessary ingredients are present: the eigenfrequencies of the modes must be close enough compared with the frequency response of the medium, and the different modes should somehow share the same dye molecules or semiconductor nanocrystals. We plan to address this interesting issue in a future work. Figs. \[Figure3\]**-A** and \[Figure3\]**-B** show the electric field generated by the examples shown in Figs. \[Figure1\] and \[Figure2\] for $t \rightarrow \infty$. The former corresponds to the system with three equal NPs and the latter to the system with the middle NP having a larger damping factor. Note the great differences in the emitted electric fields. The upper case shows the typical interference pattern of a point dipole source, while the lower one shows that of a quadrupole. This example highlights the fact that amplitude locking turns our system into not only another example of synchronization, but also a nanometric source of both evanescent and propagating waves with a predetermined and controllable interference pattern. Generalization to more complex structures. {#subSecR5} ------------------------------------------ ![(Color online) - Schemes of other NP arrays. Small spheres stand for NPs with large damping factors while the large ones represent NPs with small damping factors.[]{data-label="Figure4"}](Figure4.eps){width="3.0in"} The proposed synchronization mechanism can easily be extended to more complex nanostructures. The key is to build the system such that all normal modes but one have some weight on the highly dissipative NPs, the middle one in Fig. \[Figure2\]-**B** for example.
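This zero-weight construction can be checked directly. In the following sketch (our toy matrix, assuming nearest-neighbor couplings only, not the paper's full $\mathbb{M}$), a three-NP chain has an anti-phase mode $(1, 0, -1)$ with no weight on the middle NP, so that mode never feels the middle NP's damping, however large it is made:

```python
# Toy 3-NP chain with nearest-neighbor coupling c. In the basis of the three
# LSP amplitudes the dynamical matrix is
#   M = [[w2,  c,  0 ],
#        [ c, wm2, c ],
#        [ 0,  c, w2 ]]
# with w2 the squared eigenfrequency of the outer NPs and wm2 that of the
# (lossy) middle NP. The anti-phase mode v = (1, 0, -1) has zero weight on
# the middle NP, so M v = w2 v regardless of wm2 (here complex, i.e. damped).

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

w2, wm2, c = 1.0, 0.9 - 0.2j, 0.05   # wm2 complex: heavily damped middle NP
M = [[w2, c, 0.0], [c, wm2, c], [0.0, c, w2]]
v = [1.0, 0.0, -1.0]

Mv = matvec(M, v)   # equals w2 * v: the mode is blind to the lossy middle NP
```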
Then, if the damping factor of the highly dissipative NP is large enough, the slowest decaying normal mode, which controls the relative phases of the LSPs in the asymptotic state, will be the one having the smallest weight on these NPs. In Fig. \[Figure4\] we present just one possible set of examples for NP arrays of arbitrary size. The examples assume nearest-neighbor interactions. The small spheres represent equal NPs with large damping factors while the large ones represent equal NPs with small damping factors. In all these cases, one can show that there is always one eigenmode of $\mathbb{M}$ that has zero weight on the small NPs. This normal mode corresponds to the one where the LSPs of the large NPs oscillate in anti-phase with respect to their nearest large-NP neighbors. Thus, as this mode has the slowest decay rate, it will determine the phase locking at long times. Other asymptotic states are also possible in these systems. One only has to evaluate the weight of individual NPs on each normal mode and, based on that, selectively increase the damping factors of certain NPs to achieve the desired asymptotic state. CONCLUSIONS. {#SecConclusions} ============ In this work we have shown a simple way of controlling phase and frequency locking of the self-sustained oscillations of NP LSPs, by tuning the damping factors of individual NPs. Furthermore, we have shown that it should be possible to keep the system oscillating with constant amplitude by including a properly tuned optically active medium. We interpret this as a new example of synchronization, as we are in the presence of clearly separable self-sustained oscillating objects that display phase, frequency, as well as amplitude locking as a consequence of their mutual interaction.
Since it is possible to control the asymptotic state of these NP arrays with self-sustained LSPs, our approach provides a general way of designing the interference patterns of sources of optical fields at the sub-wavelength scale. This can surely find applications in optoelectronics, nanoscale lithography, and probe microscopy. In addition, the proposed method can naturally be combined with other alternatives, such as using the frequency dependence of the active medium or controlling its spatial distribution. ACKNOWLEDGEMENTS. ================= The authors acknowledge the financial support from CONICET, SeCyT-UNC, ANPCyT, and MinCyT-Córdoba. E.A. Coronado thanks the financial support provided by CONICET PIP (2012) 112-201101-00430 and by the FONCYT Program BID PICT 2012-2286. Appendix: Dissipative couplings and dynamical phase transitions. ================================================================ We mentioned that in the case of coupled piano strings there is a dissipative coupling between the strings, which can be modeled by an imaginary coupling term in the dynamical matrix $\mathbb{M}$. Purely imaginary, or at least complex, couplings have interesting effects on the properties of the eigenvalues of $\mathbb{M}$. At some critical values of the system’s parameters, there can be a collapse of the real part of the eigenvalues of $\mathbb{M}$ and a bifurcation of their imaginary part at points called “exceptional points”.
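A minimal two-mode sketch (our own $2\times2$ example, not the paper's full matrix) shows this collapse and bifurcation explicitly. Two modes with squared eigenfrequencies $\omega_0^2 \pm \delta$ coupled by a purely imaginary (dissipative) term $i\gamma$ have eigenvalues $\omega_0^2 \pm \sqrt{\delta^2 - \gamma^2}$: the real parts split for $\delta > \gamma$, while for $\delta < \gamma$ they collapse and the imaginary parts bifurcate, with the exceptional point at $\delta = \gamma$:

```python
# Eigenvalues of the 2x2 matrix [[w0^2 + delta, i*gamma], [i*gamma, w0^2 - delta]]
# are w0^2 +/- sqrt(delta^2 - gamma^2); delta = gamma is the exceptional point,
# where the two eigenvectors coalesce and the matrix becomes defective.
import cmath

def eigvals(w0sq, delta, gamma):
    root = cmath.sqrt(delta**2 - gamma**2)
    return w0sq + root, w0sq - root

w0sq, gamma = 1.0, 0.05

# Below the critical mistuning: degenerate real parts, split imaginary parts.
lam1, lam2 = eigvals(w0sq, delta=0.03, gamma=gamma)
print(lam1.real == lam2.real, lam1.imag != lam2.imag)   # True True

# Above it: split real parts, equal imaginary parts.
mu1, mu2 = eigvals(w0sq, delta=0.08, gamma=gamma)
print(mu1.real != mu2.real, mu1.imag == mu2.imag)       # True True
```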
At these exceptional points, among other effects, $\mathbb{M}$ becomes singular and the system’s eigenvectors behave oddly in their vicinity.[@Rotter; @Nimrod] Since the dynamical observables have a non-analytic dependence on the system’s parameters, this results in what is called a dynamical phase transition, DPT.[@Rotter; @PastPhysB; @Bustos1; @Bustos2] In the case of plasmonic systems, such as those shown in this work, the complex coupling can be seen as simply the consequence of the effective interaction between two parts of a system connected through a bridging dissipative subsystem. For example, if three NPs are aligned, one can always calculate an effective coupling between the NPs at the ends.[@RevMex] The result is a complex effective coupling, a consequence of the damping term of the NP in the middle.[@Bustos1; @Bustos2] Fig. \[FigureA1\] shows that the eigenvalues of $\mathbb{M}$ present a collapse of their real part accompanied by a splitting of their imaginary part, just as in the example of the coupled piano strings. This case corresponds to a very large value of the damping term of the middle NP and a mistuning parameter, $\delta$, below a critical value. Here, it should be mentioned that what really sets the decay rates is the imaginary part of the poles, $\mathrm{Im} \left ( \omega _{\mathrm{pole}} \right )$, of the response function $\chi(\omega )$, and not the imaginary part of the eigenvalues of $\mathbb{M}$. In the wide-band approximation, the latter coincide with $\mathrm{Im} \left( \omega_{\mathrm{pole}}^{2} \right)$. This distinction can be quite irrelevant in some situations but becomes fundamental in others. In Fig. \[FigureA2\] we consider the case of two interacting NPs. We can see that, although the eigenvalues of $\mathbb{M}$ have exactly the same imaginary part, which would preclude the synchronization mechanism depicted in the main section of the article, there is a difference in the imaginary part of $\omega_{\mathrm{pole}}$.
Although this difference is very small compared with the case shown in Fig. \[FigureA1\], it is enough to give rise to a characteristic asymptotic state and, thus, it can be used to induce phase and frequency locking. In this example, the mode with the longest lifetime is the antisymmetric one. This, at sufficiently long times, implies that the LSPs of both NPs will end up oscillating in anti-phase. In general, systems with dynamical phase transitions are expected to have large differences in the imaginary parts of the eigenfrequencies, as in the case of coupled piano strings or in the example shown in Fig. \[FigureA1\]. However, phase and frequency locking is not a phenomenon exclusive to this situation. For the particular case of metallic nanoparticle arrays, the values of the damping terms needed to achieve the DPT described here are far from the realistic situation. Thus, the cases discussed in the main section of the article correspond to systems that do not present a DPT. ![(Color online) - The same as Fig. \[FigureA1\] but for a system of two NPs with equal damping, $\Gamma=0.03$. Notice that in spite of the small differences in decay rates as compared to those in Fig. \[FigureA1\], they can be enough to produce an observable phase locking through the use of an active medium.[]{data-label="FigureA2"}](FigureA2.eps){width="3.2in"} [99]{} Maier SA (2007) Plasmonics: fundamentals and applications. Springer, New York Novotny L and Hecht B (2007) Principles of Nano-Optics.
Cambridge University Press, Cambridge Coronado E, Encina E, and Stefani FD (2011) Optical properties of metallic nanoparticles: manipulating light, heat and forces at the nanoscale. Nanoscale 3:4042 Halas N, Lal S, Chang W-S, Link S, and Nordlander P (2011) Plasmons in strongly coupled metallic nanostructures. Chem. Rev. 111:3913 Ebbesen T, Genet C, and Bozhevolnyi S (2008) Surface-plasmon circuitry. Physics Today 61:44 Cao Y, Wei Z, Li W, Fang A, Li H, Jiang X, Chen H, and Chan CT (2013) Light Amplification with Low-Gain Material: Harvesting Harmonic Resonance Modes of Surface Plasmon Polaritons on a Magnetic Meta-Surface. Plasmonics 8:793 Hess O, Pendry JB, Maier SA, Oulton RF, Hamm JM, and Tsakmakidis KL (2012) Active nanoplasmonic metamaterials. Nature Mater. 11:573 Tao J, Wang QJ, and Huang XG (2011) All-Optical Plasmonic Switches Based on Coupled Nano-disk Cavity Structures Containing Nonlinear Material. Plasmonics 6:753 Krasavin A, Phong Vo T, Dickson W, Bolger P, and Zayats A (2011) All-plasmonic modulation via stimulated emission of copropagating surface plasmon polaritons on a substrate with gain. Nano Lett. 11:2231 Kottos T (2010) Optical physics: Broken symmetry makes light work. Nature Physics 6:166 Wuestner S, Pusch A, Tsakmakidis KL, Hamm JM, and Hess O (2010) Overcoming Losses with Gain in a Negative Refractive Index Metamaterial. Phys. Rev. Lett. 105:127401 Noginov M, Zhu G, Belgrave A, Bakker R, Shalaev V, Narimanov E, Stout S, Herz E, Suteewong T, and Wiesner U (2009) Demonstration of a spaser-based nanolaser. Nature 460:1110 Li Z-Y and Xia Y (2010) Metal nanoparticles with gain toward single-molecule detection by surface-enhanced Raman scattering. Nano Lett. 10:243 Bergman DJ and Stockman MI (2003) Surface Plasmon Amplification by Stimulated Emission of Radiation: Quantum Generation of Coherent Surface Plasmons in Nanosystems. Phys. Rev. Lett. 90:027402 Stockman MI (2010) Spaser as Nanoscale Quantum Generator and Ultrafast Amplifier. J. Opt.
12:024004 Stockman MI (2011) Spaser Action, Loss Compensation, and Stability in Plasmonic Systems with Gain. Phys. Rev. Lett. 106:156802 Fang A, Koschny T, and Soukoulis CM (2010) Lasing in metamaterial nanostructures. J. Opt. 12:024013 Oulton RF, Sorger VJ, Zentgraf T, Ma R-M, Gladden C, Dai L, Bartal G, and Zhang X (2009) Nature 461:629 Noginov M, Zhu G, Belgrave A, Bakker R, Shalaev V, Narimanov E, Stout S, Herz E, Suteewong T, and Wiesner U (2009) Demonstration of a spaser-based nanolaser. Nature 460:1110 Hill MT, et al. (2009) Lasing in metal-insulator-metal sub-wavelength plasmonic waveguides. Opt. Express 17:11107 Yong Suh J, Hoon Kim C, Zhou W, Huntington MD, Co DT, Wasielewski MR, and Odom TW (2012) Plasmonic bowtie nanolaser arrays. Nano Lett. 12:5769 Ma R-M, Yin X, Oulton RF, Sorger VJ, and Zhang X (2012) Multiplexed and electrically modulated plasmon laser circuit. Nano Lett. 12:5396 Wu C-Y, Kuo C-T, Wang C-Y, He C-L, Lin M-H, Ahn H, and Gwo S (2011) Plasmonic green nanolaser based on a metal-oxide-semiconductor structure. Nano Lett. 11:4256 Li J, Zhang Y, Mei T, and Fiddy M (2010) Surface plasmon laser based on metal cavity array with two different modes. Opt. Express 18:23626 Kitur JK, Podolskiy VA, and Noginov MA (2011) Stimulated Emission of Surface Plasmon Polaritons in a Microcylinder Cavity. Phys. Rev. Lett. 106:183903 Pikovsky A, Rosenblum M, and Kurths J (2001) Synchronization: A Universal Concept in Nonlinear Sciences. Cambridge University Press, Cambridge. Hugenii C (1673) Horoloquium Oscilatorium. Apud F. Muguet, Parisiis. Bustos-Marún RA, Coronado EA, and Pastawski HM (2010) Buffering plasmons in nanoparticle waveguides at the virtual-localized transition. Phys. Rev. B 82:035434 Bustos-Marún RA, Coronado EA, and Pastawski HM (2012) Excitation-Transfer Plasmonic Nanosensors Based on Dynamical Phase Transitions. J. Phys. Chem.
C 116:18937 Brongersma ML, Hartman JW, and Atwater HA (2000) Electromagnetic energy transfer and switching in nanoparticle chain arrays below the diffraction limit. Phys. Rev. B 62:R16356 Citrin DS (2004) Coherent Excitation Transport in Metal-Nanoparticle Chains. Nano Lett. 4:1561 Burin L, Cao H, Schatz GC, and Ratner MA (2004) High-quality optical modes in low-dimensional arrays of nanoparticles: application to random lasers. J. Opt. Soc. Am. B 21:121 Garcia de Abajo FJ (2007) Colloquium: Light scattering by particle and hole arrays. Rev. Mod. Phys. 79:1267 Malyshev AV, Malyshev VA, and Knoester J (2008) Frequency-Controlled Localization of Optical Signals in Graded Plasmonic Chains. Nano Lett. 8:2369 Jones RC (1945) A Generalization of the Dielectric Ellipsoid Problem. Phys. Rev. 68:93 Kelly K, Coronado EA, Zhao L, and Schatz GC (2003) The Optical Properties of Metal Nanoparticles: The Influence of Size, Shape, and Dielectric Environment. J. Phys. Chem. B 107:668 Coronado EA and Schatz GC (2003) Surface plasmon broadening for arbitrary shape nanoparticles: A geometrical probability approach. J. Chem. Phys. 119:3926 Press W, Teukolsky S, Vetterling W, and Flannery B (1998) Numerical Recipes in Fortran 77: The Art of Scientific Computing. Cambridge University Press, Cambridge Rotter I (2009) A non-Hermitian Hamilton operator and the physics of open quantum systems. J. Phys. A: Mathematical and Theoretical 42:153001 Gilary I, Mailybaev AA, and Moiseyev N (2013) Time-asymmetric quantum-state-exchange mechanism. Phys. Rev. A 88:R010102 Pastawski HM (2007) Revisiting the Fermi Golden Rule: Quantum dynamical phase transition as a paradigm shift. Physica B 398:278 Weinreich G (1977) Coupled piano strings. J. Acoust. Soc. Am. 62:1474 Pastawski HM and Medina E (2001) ‘Tight Binding’ methods in quantum transport through molecules and small devices: From the coherent to the decoherent description. Rev. Mex. Fis.
47s1:1 Pisignano D, Anni M, Gigli G, Cingolani R, Zavelani-Rossi M, Lanzani G, Barbarella G, and Favaretto L (2002) Amplified spontaneous emission and efficient tunable laser emission from a substituted thiophene-based oligomer. Appl. Phys. Lett. 81:3534 Carrere H, Marie X, Lombez L, and Amand T (2006) Optical gain of InGaAsN/InP quantum wells for laser applications. Appl. Phys. Lett. 89:181115 Seidel J, Grafström S, and Eng L (2005) Stimulated Emission of Surface Plasmons at the Interface between a Silver Film and an Optically Pumped Dye Solution. Phys. Rev. Lett. 94:177401 Noginov MA, Zhu G, Mayy M, Ritzo BA, Noginova N, and Podolskiy VA (2008) Stimulated Emission of Surface Plasmon Polaritons. Phys. Rev. Lett. 101:226806
--- abstract: 'We summarize results of a search for X-ray-emitting binary stars in the massive globular cluster $\omega$ Centauri (NGC 5139) using Chandra and HST. ACIS-I imaging reveals 180 X-ray sources, of which we estimate that $45-70$ are associated with the cluster. We present 40 identifications, most of which we have obtained using ACS/WFC imaging with HST that covers the central $10\arcmin \times 10\arcmin$ of the cluster. Roughly half of the optical IDs are accreting binary stars, including 9 very faint blue stars that we suggest are cataclysmic variables near the period limit. Another quarter comprise a variety of different systems all likely to contain coronally active stars. The remaining 9 X-ray-bright stars are an intriguing group that appears redward of the red giant branch, with several lying along the anomalous RGB. Future spectroscopic observations should reveal whether these stars are in fact related to the anomalous RGB, or whether they instead represent a large group of “sub-subgiants” such as have been seen in smaller numbers in other globular and open clusters.' author: - Daryl Haggard - 'Adrienne M. Cool' - Tersi Arias - 'Michelle B. Brochmann' - Jay Anderson - 'Melvyn B.
Davies' title: 'A Deep Multiwavelength View of Binaries in $\omega$ Centauri' --- [ address=[Center for Interdisciplinary Exploration and Research in Astrophysics, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA; dhaggard@northwestern.edu]{} ]{} [ address=[Department of Physics and Astronomy, San Francisco State University, 1600 Holloway Avenue, San Francisco, CA 94132, USA]{} ]{} [ address=[Department of Physics and Astronomy, San Francisco State University, 1600 Holloway Avenue, San Francisco, CA 94132, USA]{} ]{} [ address=[Department of Physics and Astronomy, San Francisco State University, 1600 Holloway Avenue, San Francisco, CA 94132, USA]{} ]{} [ address=[Space Telescope Science Institute, Baltimore, MD 21218, USA]{} ]{} [ address=[Lund Observatory, Box 43, SE-221 00 Lund, Sweden]{} ]{} Introduction ============ Globular clusters (GCs) host a variety of binary star systems, formed both primordially and dynamically via stellar encounters. These binaries play a crucial role in the dynamical evolution of GCs, providing an energy reservoir that can delay core collapse for many times the half-mass relaxation time [e.g., @Fregeau07]. The dense cluster environment also dramatically alters the evolution of GC binaries [e.g., @Ivanova06; @Fregeau03; @Pooley06; @Fregeau08]. X-ray-emitting systems have, in particular, emerged as a promising source of information about the history of binary formation and destruction in galactic GCs. The [*Chandra X-ray Observatory*]{}’s high spatial resolution and resulting sensitivity to point sources make it possible to obtain nearly complete samples of compact accreting binaries in nearby globular clusters. The ability to pinpoint sources to $< 1\arcsec$ also means that the stars responsible for the X-ray emission may be recovered at other wavelengths even in the crowded fields of GCs.
While the high-luminosity X-ray binaries ($L_x = 10^{36-38}$ erg s$^{-1}$) are understood to be accreting neutron stars [@Brown98; @Heinke03a; @Heinke03b], the low X-ray-luminosity sources are now known to comprise several distinct populations: cataclysmic variables (CVs), quiescent neutron stars (qNS, or qLMXB), millisecond pulsars (MSPs), and binaries with chromospherically active stars, i.e. active binaries [ABs; @Pooley02b; @Heinke05; @Lugger07; @Haggard09]. Of these, only the quiescent NSs, with their distinctive soft X-ray spectra, can be identified uniquely on the basis of X-ray observations alone [@Pooley02a; @Rutledge02]. For the others, optical (or radio, in the case of MSPs) follow-up is essential. $\omega$ Cen is the most massive GC in the Milky Way. At 4.9 kpc, it is relatively nearby, making it possible to detect low-luminosity X-ray sources in modest exposure times with Chandra. Its unusually complex stellar populations have prompted debate as to whether $\omega$ Cen is a GC at all — it is instead likely to be the remnant of a dwarf galaxy accreted by the Milky Way [@Bedin04; @Gratton04; @Piotto05; @Villanova07] — and controversy continues over the existence of an intermediate-mass black hole in its core, see @Noyola08 vs. @Anderson10. Regardless of $\omega$ Cen’s origins, the binary stars that it contains play a crucial role in its dynamical evolution and can in turn shed light on the impact that a cluster has on its binary population. Here we summarize results of our search for X-ray-emitting binary stars in $\omega$ Cen using the [*Chandra X-ray Observatory*]{} and [*Hubble Space Telescope*]{} (). The Chandra results have been reported by @Haggard09 and the complete HST results will appear in @Cool10. Chandra Observations and Results ================================ The Chandra observations were made using the Advanced CCD Imaging Spectrometer (ACIS), whose field of view (FOV) is $17\arcmin \times 17\arcmin$. For comparison, the half-mass radius of $\omega$ Cen is $4.2\arcmin$ [@Harris96] and its core radius is $2.6\arcmin$ [@Trager95].
With a total exposure of 70 ksec, we detected 180 sources to a limiting X-ray flux of $4.3 \times 10^{-16}$ erg cm$^{-2}$ s$^{-1}$. At the distance of $\omega$ Cen (4.9 kpc), this corresponds to $L_x \approx 1.2 \times 10^{30}$ erg s$^{-1}$. Because $\omega$ Cen is very large on the sky, X-ray sources anywhere in the ACIS-I field can potentially be cluster members. However, given the large FOV and faint limiting flux of the observations, significant numbers of AGN will be present. After a statistical accounting of AGN as well as foreground stars, we estimated that $45-70$ of the Chandra sources are associated with the cluster. Based on nine optical IDs we projected that perhaps $20-35$ of the sources were cataclysmic variables (CVs), with most of the remainder being binaries containing coronally active stars [see @Haggard09 for details]. Figure \[xray\_cmd\] shows an X-ray color-magnitude diagram for all the sources for which counts were recorded in three bands: “soft” ($0.5-1.5$ keV), “medium” ($0.5-4.5$ keV), and “hard” ($1.5-6.0$ keV). Black symbols indicate the radial offset of each source from the cluster center; colored symbols mark X-ray sources for which optical identifications have been obtained. In Fig. \[xray\_cmd\] and the descriptions below, we exclude sources for which the identifications suggest they are not associated with the cluster (several AGN and a few foreground stars). We also exclude objects whose optical signatures are ambiguous. The complete set of optical IDs will be presented by @Cool10. HST Observations and Analysis ============================= The HST data consist of 9 pointings with the ACS Wide Field Camera (WFC) covering $10\arcmin \times 10\arcmin$, approximately centered on the cluster; this field encompasses 109 of the Chandra sources. At each pointing we obtained four F625W (R$_{625}$), four F435W (B$_{435}$), and four F658N (H$\alpha$) exposures. The broad-band exposures include one short exposure to measure the bright stars.
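The flux-to-luminosity conversion quoted at the start of this section can be cross-checked directly (our arithmetic, using the standard kpc-to-cm conversion), via $L_x = 4 \pi d^2 F$:

```python
# Convert the limiting ACIS-I flux to a luminosity at the distance of omega Cen.
from math import pi

KPC_CM = 3.086e21            # 1 kpc in cm (standard conversion)
d = 4.9 * KPC_CM             # distance to omega Cen, cm
F_lim = 4.3e-16              # limiting flux, erg cm^-2 s^-1

L_x = 4 * pi * d**2 * F_lim  # erg s^-1
print(f"{L_x:.1e}")          # 1.2e+30, matching the value quoted in the text
```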
To map the positions of Chandra sources onto the ACS/WFC images, we corrected the WFC images for distortion and then constructed mosaics in each filter. We used the star lists of @Kaluzny96 and @vanLeeuwen00 to map R.A. and Dec. onto the mosaic images and then back onto the original “flt”-format images. We did the photometric analysis using DAOPHOT/ALLSTAR [@Stetson87] and ALLFRAME [@Stetson94] on the “flt”-format images. We first extracted $20\arcsec \times 20\arcsec$ “patches” centered on each X-ray source from each of the ACS/WFC exposures available at the corresponding position. For most sources, this meant analyzing a total of 12 images. Because of the possibility that an interesting optical counterpart could be missed in a fully automated process, we carefully scrutinized the error circle region and iterated several times to ensure that all objects identified within it were real and nothing was missed [details in @Cool10]. Initially we adopted $1.0\arcsec$-radius error circles. Once several identifications had been made we performed a boresight correction, which enabled us to reduce the radius to $0.6\arcsec$. Finally, we constructed B$_{435}$ $-$ R$_{625}$ vs. R$_{625}$ and H$\alpha$ $-$ R$_{625}$ vs. R$_{625}$ color-magnitude diagrams (CMDs) for each patch. Once we had constructed CMDs for the patch around each X-ray source, we carried out a systematic evaluation of potentially interesting objects. All objects that did not lie on or very near the main sequence or giant branch in both CMDs were considered potentially interesting and evaluated for reliability of the photometry. We examined them in all the individual images to check the potential impact of near neighbors, cosmic rays, and diffraction spikes, and checked how cleanly DAOPHOT removed them from each of the images. We also took account of the consistency of multiple independent measurements in each filter. We then assigned a numerical index to represent the quality of the photometry (0 = best, 3 = worst); here we report only quality 0 and 1 candidates. Figure \[opt\_cmd\] shows a combined B$_{435}$ $-$ R$_{625}$ vs. R$_{625}$ and H$\alpha$ $-$ R$_{625}$ vs. R$_{625}$
CMD for 35 of the optical IDs obtained using the HST data. Black dots mark the stars in the error circles of these 35 objects. To better delineate the turnoff and giant branch we also plot stars brighter than R$_{625}$ = 19 from the full $20\arcsec \times 20\arcsec$ patch associated with the qLMXB. Optical Counterparts ==================== ![X-ray “color-magnitude” diagram for 164 Chandra sources in $\omega$ Cen with non-zero counts in both hard and soft bands. Round symbols indicate where in the cluster a source lies (large solid dots $=$ core; large open circles $=$ $1-2$ core radii; small open circles $=$ $2-3$ core radii; dots $=$ outside 3 core radii). Special symbols indicate optical IDs: a quiescent neutron star (red cross); previously known and newly identified CVs (large blue triangles); less certain new CVs (smaller blue triangles); very faint CV candidates with no H$\alpha$ detection (cyan inverted triangles); active binaries, i.e. BY Dra systems (green diamonds); possible sub-subgiants, also sometimes called red stragglers (red pentagons); and a candidate blue straggler (blue asterisk). Bold symbols indicate optical counterparts identified by @Carson00 and @Haggard09, including 5 ABs from the @Kaluzny04 variable star catalog.[]{data-label="xray_cmd"}](wcen_DH_conf2010_fig1.eps){height=".4\textheight"} ![Color-magnitude diagram in B$_{435}$ $-$ R$_{625}$ vs. R$_{625}$ (left panel) and H$\alpha$ $-$ R$_{625}$ vs. R$_{625}$ (right panel) showing all stars that appear in the error circles of 35 X-ray sources for which promising optical identifications have been made. Black dots represent all the stars in the error circles. To better delineate the turnoff region and red giant branch, all stars with R$_{625}$ $<$ 19 in the full $20\arcsec \times 20\arcsec$ patch around the qLMXB are shown as blue dots. Suggested optical counterparts of X-ray sources are marked with symbols as in Fig. \[xray\_cmd\].
Note that the 5 ABs obtained by matching X-ray sources to the @Kaluzny04 variable star catalog are not shown as they either lie outside the HST FOV or were not distinctive in the HST data.[]{data-label="opt_cmd"}](wcen_DH_conf2010_fig2.eps){height=".56\textheight"} Accreting Binary Stars: Cataclysmic Variables and a qLMXB --------------------------------------------------------- Thirteen of the X-ray sources have an optical counterpart that is both blue and H$\alpha$-bright (Fig. \[opt\_cmd\]). This combination of signatures is strongly indicative of a compact accreting binary, with the H$\alpha$ excess attributable to an emission line from an accretion disk and the blueness to the disk and/or the white dwarf. One of these is the object reported by @Haggard04, which was first identified as a qNS on the basis of its X-ray spectrum [@Rutledge02]; it is marked with a red cross in Figures \[xray\_cmd\] and \[opt\_cmd\]. This is the only X-ray source in $\omega$ Cen with both the soft spectrum and the luminosity characteristic of a quiescent neutron star (see red cross in Fig. \[xray\_cmd\]). The remaining 12 optical counterparts that are both blue and H$\alpha$-bright have X-ray colors and luminosities typical of cataclysmic variables (CVs; see blue triangles). Three of these (bold blue triangles) were known from previous studies [@Carson00; @Haggard04]; nine are new. The best of the new candidates (large blue triangles) are all easily confirmed visually as being blue, and all individual H$\alpha$ measurements are consistent in showing that they are H$\alpha$-bright. The remaining six CV candidates (small blue triangles) are visually confirmed as either blue or H$\alpha$-bright but have somewhat lower confidence associated with either their H$\alpha$ excess or blueness. Nevertheless, we think it probable that most, if not all, of these stars are the optical counterparts of the X-ray sources and are CVs. In addition to these 12 CV candidates, we find very faint blue stars in the error circles of nine of the Chandra sources (see inverted cyan triangles).
These nine stars are exceedingly faint ($= 24.5-26.3$). Most are seen in  only because they are so blue; main-sequence stars of comparable  magnitude are below the detection limit. All but one were confirmed visually as being blue by blinking  vs. images. However, as none of these stars are detected in , they do not appear in the  diagram (see Fig. \[opt\_cmd\]). Given that these stars lie in the region of the CMDs generally occupied by white dwarfs (WDs), we have considered the possibility that they could be WDs that have landed by chance in the  error circles and are unrelated to the X-ray sources. Our statistical analysis [@Cool10] suggests that perhaps one could be such a chance coincidence, but that it is unlikely that many more could be explained in this way. We therefore suggest that these objects are also CVs. It is important to note that the lack of  detections does not necessarily imply that they are not -bright; they may simply be too faint to be detected, even in the presence of an  emission line. Binaries Containing Stars with Active Coronae --------------------------------------------- X-ray imaging with  of nearby GCs is also sensitive enough to pick up binaries containing coronally active stars. Falling broadly into this category are the five optical identifications by @Haggard09 based on comparison of the variable star catalog of @Kaluzny04 with the  source list. These five sources, which include two eclipsing Algols and a long-term variable, are shown as bold green diamonds in Figure \[xray\_cmd\]. They do not appear in Figure \[opt\_cmd\], as four are outside the  FOV, and the fifth was not picked up as being distinctive in the CMDs. Narrow-band imaging with  also enables us to search for the elevated levels of coronal activity associated with certain types of binary stars (e.g., BY Dra and RS CVn). In the field, this activity is typically the result of fast spin rates induced by tidal synchronization [see @Makarov09 and references therein]. 
However,  equivalent widths for such stars are much lower than for CVs, $1-3$ Angstroms [@Young89]. Given the 80 Angstrom width of the ACS/WFC  filter, such stars will appear only very slightly -bright (at best) in the present study. To limit the number of false positives in our search, we required that a star show both an  signature and also be above the main sequence in the individual  CMD associated with its patch. The latter requirement excludes binaries whose mass ratio is much less than unity. The three candidate BY Dra stars we found in this way are shown in Fig. \[opt\_cmd\] (green diamonds). In Fig. \[xray\_cmd\] it can be seen that, on average, these are relatively faint and moderately soft X-ray sources. A Blue Straggler or Turnoff Binary? ----------------------------------- One of the candidates (blue asterisk in Fig. \[opt\_cmd\]) appears above the turnoff, to the blue side of the subgiant branch, a location which is suggestive of a blue straggler. The star has a 0.25-magnitude  excess, which strongly suggests that it is the X-ray source. However, the star also lies 0.75 magnitudes above the turnoff and thus may instead be a detached binary containing two turnoff stars. We suggest that this star probably falls in the broad category of active stars, in the sense that its X-ray emission is most likely to be associated with an active corona. However, we give it its own symbol in the CMDs to distinguish it as a special case. Sub-subgiants or Anomalous RGB Stars? ------------------------------------- Nine of the candidates we have identified appear to the red side of the main-sequence turnoff and subgiant and giant branches (Fig. \[opt\_cmd\]; red pentagons). Several show signs of  in emission, which is strongly associated with enhanced X-ray emission and supports an association between these stars and the X-ray sources. 
Given their location in the CMD, we tentatively identify these stars as sub-subgiants [SSG; @Mathieu03], also sometimes called red stragglers. However, a close inspection of the CMD shows that 7 of the stars lie along the metal-rich “anomalous” red giant branch [@Pancino00; @Villanova07]. Thus it is possible that these stars are instead a subset of the anomalous RGB stars which for some reason are unusually X-ray bright. Discussion ========== Using a combination of  and  imaging in blue, red and  filters, we have identified a total of 40 X-ray-emitting binary stars in . Five were found as X-ray counterparts of variable stars reported by @Kaluzny04 and four had been found in earlier  and/or  studies. The remaining 31 are newly reported here. Accreting binary stars make up just over half of the identifications: one qLMXB and 21 candidate CVs. The remaining identifications are evenly split into two broad classes: 9 active binaries and 9 objects that are possible sub-subgiants. The active binaries include an assortment of different types of systems, all of whose X-ray emission is likely due to active coronae. These include two eclipsing Algol systems, three possible BY Dra stars, and a blue straggler. Of particular interest among the CV candidates are the nine very faint blue stars shown as inverted triangles in Figures 1 and 2. Given the distance modulus to , their absolute magnitudes are in the range $M_{625} = 10.9-12.7$. This is comparable to the absolute magnitudes of the short-period CVs recently uncovered in the Sloan Digital Sky Survey [SDSS; @Gansicke09]. Thus the systems in  could be short-period systems with very low-mass secondaries, as is expected for very old CVs. Their positions in the X-ray CMD (see Fig. \[xray\_cmd\]) generally support the CV interpretation. Alternatively, some of these objects could be helium white dwarfs with MSP companions; a few such systems are known in globular clusters [@Edmonds01]. 
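The absolute-magnitude range quoted above follows from $M = m - (m-M)$, where $(m-M)$ is the distance modulus; a minimal sketch of the conversion, assuming a distance modulus of about 13.6 mag (a value implied by the quoted apparent and absolute ranges, not stated explicitly in this text):

```python
# Apparent -> absolute magnitude: M = m - (m - M).
# DIST_MOD = 13.6 mag is an assumption implied by the quoted ranges
# (24.5 - 13.6 = 10.9 and 26.3 - 13.6 = 12.7); it is not taken from this text.
DIST_MOD = 13.6  # assumed distance modulus (mag)

def absolute_mag(apparent_mag: float, dist_mod: float = DIST_MOD) -> float:
    """Absolute magnitude from apparent magnitude and distance modulus."""
    return apparent_mag - dist_mod

for m in (24.5, 26.3):
    print(f"m = {m:.1f} -> M = {absolute_mag(m):.1f}")
```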
Deeper  imaging and/or multiwavelength broad-band imaging is needed to distinguish between these possibilities. Given that the faintest CVs detected in this study are at the detection limit in both the X-ray and optical images, it is likely that more CVs remain to be discovered in . How many more depends on the relative numbers of faint vs. bright CVs — a ratio that depends both on the evolution of CVs and their formation history in the cluster. The present study shows that even very faint CVs can be found in the crowded environs of ; deeper observations should allow a more complete census to be made. The present census of active binaries in  is undoubtedly very incomplete. At X-ray wavelengths we are likely seeing just the tip of the iceberg. In the optical, we are hampered by the weakness of the  emission lines. Observations with a narrower  filter can help, as demonstrated by the large number of BY Dra stars identified by @Taylor01 in NGC 6397 using a 20 Angstrom-wide filter. Perhaps the most intriguing set of X-ray-emitting stars identified in this study are the nine that lie redward of the turnoff and giant branch. While their close proximity to the evolutionary sequences in  suggests that they are associated with the cluster, proper motions are needed to be sure. If spectroscopic observations reveal that some or all are members of the anomalous RGB, then it will be important to understand why this subpopulation in  is prone to producing X-ray-bright stars. If instead these stars turn out to be bona fide SSGs, then  should provide a valuable testing ground for studying this as-yet poorly understood class of X-ray-emitting binary systems. J. M. [Fregeau]{}, and F. A. [Rasio]{}, ** **658**, 1047–1061 (2007), . N. [Ivanova]{}, C. O. [Heinke]{}, F. A. [Rasio]{}, R. E. [Taam]{}, K. [Belczynski]{}, and J. [Fregeau]{}, ** **372**, 1043–1059 (2006), . J. M. [Fregeau]{}, M. A.
[G[ü]{}rkan]{}, K. J. [Joshi]{}, and F. A. [Rasio]{}, ** **593**, 772–787 (2003), . D. [Pooley]{}, and P. [Hut]{}, ** **646**, L143–L146 (2006), . J. M. [Fregeau]{}, ** **673**, L25–L28 (2008), . E. F. [Brown]{}, L. [Bildsten]{}, and R. E. [Rutledge]{}, ** **504**, L95 (1998), . C. O. [Heinke]{}, J. E. [Grindlay]{}, D. A. [Lloyd]{}, and P. D. [Edmonds]{}, ** **588**, 452–463 (2003), . C. O. [Heinke]{}, J. E. [Grindlay]{}, P. M. [Lugger]{}, H. N. [Cohn]{}, P. D. [Edmonds]{}, D. A. [Lloyd]{}, and A. M. [Cool]{}, ** **598**, 501–515 (2003), . D. [Pooley]{}, W. H. G. [Lewin]{}, F. [Verbunt]{}, L. [Homer]{}, B. [Margon]{}, B. M. [Gaensler]{}, V. M. [Kaspi]{}, J. M. [Miller]{}, D. W. [Fox]{}, and M. [van der Klis]{}, ** **573**, 184–190 (2002), . C. O. [Heinke]{}, J. E. [Grindlay]{}, P. D. [Edmonds]{}, H. N. [Cohn]{}, P. M. [Lugger]{}, F. [Camilo]{}, S. [Bogdanov]{}, and P. C. [Freire]{}, ** **625**, 796–824 (2005), . P. M. [Lugger]{}, H. N. [Cohn]{}, C. O. [Heinke]{}, J. E. [Grindlay]{}, and P. D. [Edmonds]{}, ** **657**, 286–301 (2007), . D. [Haggard]{}, A. M. [Cool]{}, and M. B. [Davies]{}, ** **697**, 224–236 (2009), . D. [Pooley]{}, W. H. G. [Lewin]{}, L. [Homer]{}, F. [Verbunt]{}, S. F. [Anderson]{}, B. M. [Gaensler]{}, B. [Margon]{}, J. M. [Miller]{}, D. W. [Fox]{}, V. M. [Kaspi]{}, and M. [van der Klis]{}, ** **569**, 405–417 (2002), . R. E. [Rutledge]{}, L. [Bildsten]{}, E. F. [Brown]{}, G. G. [Pavlov]{}, and V. E. [Zavlin]{}, ** **578**, 405–412 (2002), . G. [Meylan]{}, “[The Globular Cluster [$\omega$]{} Centauri: A General Overview]{},” in *Omega Centauri, A Unique Window into Astrophysics*, edited by [F. van Leeuwen, J. D. Hughes, & G. Piotto]{}, 2002, vol. 265 of *Astronomical Society of the Pacific Conference Series*, pp. 3. L. R. [Bedin]{}, G. [Piotto]{}, J. [Anderson]{}, S. [Cassisi]{}, I. R. [King]{}, Y. [Momany]{}, and G. [Carraro]{}, ** **605**, L125–L128 (2004), . R. [Gratton]{}, C. [Sneden]{}, and E. [Carretta]{}, ** **42**, 385–440 (2004). G. 
[Piotto]{}, S. [Villanova]{}, L. R. [Bedin]{}, R. [Gratton]{}, S. [Cassisi]{}, Y. [Momany]{}, A. [Recio-Blanco]{}, S. [Lucatello]{}, J. [Anderson]{}, I. R. [King]{}, A. [Pietrinferni]{}, and G. [Carraro]{}, ** **621**, 777–784 (2005), . S. [Villanova]{}, G. [Piotto]{}, I. R. [King]{}, J. [Anderson]{}, L. R. [Bedin]{}, R. G. [Gratton]{}, S. [Cassisi]{}, Y. [Momany]{}, A. [Bellini]{}, A. M. [Cool]{}, A. [Recio-Blanco]{}, and A. [Renzini]{}, ** **663**, 296–314 (2007), . E. [Noyola]{}, K. [Gebhardt]{}, and M. [Bergmann]{}, ** **676**, 1008–1015 (2008), . J. [Anderson]{}, and R. P. [van der Marel]{}, ** **710**, 1032–1062 (2010), . A. M. [Cool]{}, D. [Haggard]{}, T. [Arias]{}, M. [Brochmann]{}, J. [Dorfman]{}, M. V. [White]{}, and J. [Anderson]{}, *in preparation* (2010). W. E. [Harris]{}, *VizieR Online Data Catalog* **7195**, 0 (1996). S. C. [Trager]{}, I. R. [King]{}, and S. [Djorgovski]{}, ** **109**, 218–241 (1995). J. [Kaluzny]{}, M. [Kubiak]{}, M. [Szymanski]{}, A. [Udalski]{}, W. [Krzeminski]{}, and M. [Mateo]{}, ** **120**, 139–152 (1996), . F. [van Leeuwen]{}, R. S. [Le Poole]{}, R. A. [Reijns]{}, K. C. [Freeman]{}, and P. T. [de Zeeuw]{}, ** **360**, 472–498 (2000). P. B. [Stetson]{}, ** **99**, 191–222 (1987). P. B. [Stetson]{}, ** **106**, 250–280 (1994). J. E. [Carson]{}, A. M. [Cool]{}, and J. E. [Grindlay]{}, ** **532**, 461–466 (2000). J. [Kaluzny]{}, A. [Olech]{}, I. B. [Thompson]{}, W. [Pych]{}, W. [Krzemi[ń]{}ski]{}, and A. [Schwarzenberg-Czerny]{}, ** **424**, 1101–1110 (2004), . D. [Haggard]{}, A. M. [Cool]{}, J. [Anderson]{}, P. D. [Edmonds]{}, P. J. [Callanan]{}, C. O. [Heinke]{}, J. E. [Grindlay]{}, and C. D. [Bailyn]{}, ** **613**, 512–516 (2004), . V. V. [Makarov]{}, and P. P. [Eggleton]{}, ** **703**, 1760–1765 (2009). A. [Young]{}, F. [Ajir]{}, and G. [Thurman]{}, ** **101**, 1017–1031 (1989). R. D. [Mathieu]{}, M. [van den Berg]{}, G. [Torres]{}, D. [Latham]{}, F. [Verbunt]{}, and K. [Stassun]{}, ** **125**, 246–259 (2003), . E. 
[Pancino]{}, F. R. [Ferraro]{}, M. [Bellazzini]{}, G. [Piotto]{}, and M. [Zoccali]{}, ** **534**, L83–L87 (2000), . B. T. [G[ä]{}nsicke]{}, M. [Dillon]{}, J. [Southworth]{}, J. R. [Thorstensen]{}, P. [Rodr[í]{}guez-Gil]{}, A. [Aungwerojwit]{}, T. R. [Marsh]{}, P. [Szkody]{}, S. C. C. [Barros]{}, J. [Casares]{}, D. [de Martino]{}, P. J. [Groot]{}, P. [Hakala]{}, U. [Kolb]{}, S. P. [Littlefair]{}, I. G. [Mart[í]{}nez-Pais]{}, G. [Nelemans]{}, and M. R. [Schreiber]{}, ** **397**, 2170–2188 (2009), . P. D. [Edmonds]{}, R. L. [Gilliland]{}, C. O. [Heinke]{}, J. E. [Grindlay]{}, and F. [Camilo]{}, ** **557**, L57–L60 (2001), . J. M. [Taylor]{}, J. E. [Grindlay]{}, P. D. [Edmonds]{}, and A. M. [Cool]{}, ** **553**, L169–L172 (2001).
--- abstract: | By generalizing the Chern–Simons topological current and the Gauss–Bonnet–Chern theorem, the purpose of this paper is to provide a non-Abelian gauge field theory foundation for the topological current of $\tilde{p}$-branes formulated in our previous work. Using the $\phi $–mapping topological current theory proposed by Professor Duan, we find that topological $\tilde p$-branes are created at every isolated zero of the vector field $\vec \phi (x)$. It is shown that the topological charges carried by $\tilde p$-branes are topologically quantized and labeled by the Hopf index and Brouwer degree, i.e., the winding number of the $\phi $–mapping. The action of topological $\tilde p$–branes is obtained and is just the Nambu action for multistrings when $D- \tilde d=2$. address: | Institute of Theoretical Physics,\ School of Physical Science and Technology,\ Lanzhou University, Lanzhou, 730000, P. R. China author: - 'Yi-Shi Duan' - 'Ji-Rong Ren' title: | The Non-Abelian Topological Gauge\ Field Theory of $\tilde{p}$–Branes --- , $\tilde{p}$–branes, Non-Abelian Topological Gauge Field Theory, Gauss-Bonnet-Chern theorem, $\phi $–mapping topological current theory Introduction ============ Extended objects with $p$ spatial dimensions, known as $p$-branes, play an essential role in revealing the nonperturbative structure of superstring theories and M–theory[@b1; @b2; @b3; @b4; @StelleTownsend1987; @SchwarzSeibergRMP1999]. Antisymmetric tensor gauge fields determine all of the features of a $p$–brane and have been widely studied in the theory of $p$–branes [@DiamantiniPLB1996; @a1; @a2; @a3; @duff].
In the context of the effective $D=10$ or $D=11$ supergravity theory, a $p$-brane is a $p$-dimensional extended source for a $(p+2)$-form gauge field strength $F.$ It is well known that the $(p+2)$-form strength $F$ satisfies the field equation$$\label{No.1} \nabla _\mu F^{\mu \mu _1\cdots \mu _{p+1}}=j^{\mu _1\cdots \mu _{p+1}},$$ where $j^{\mu _1\cdots \mu _{p+1}}$ is a $(p+1)$-form tensor current corresponding to an “electric source”, while the dual field strength $^{*}F$ satisfies$$\label{TCCTM} \nabla _\mu ~^{*}F^{\mu \mu _1\cdots \mu _{\tilde p+1}}=\tilde j^{\mu _1\cdots \mu _{\tilde p+1}},$$ in which $\tilde j^{\mu _1\cdots \mu _{\tilde p+1}}$ is an extended $(\tilde p+1)$-form topological tensor current corresponding to a “magnetic source” [@strom; @duff; @hull]. In Refs.[@DiamantiniPLB1996; @strom; @duff; @DvaliPRD2000; @CembranosPRD2002], the topological theories of $\tilde p$–branes in $M$–theory were also studied from the perspective of a higher-dimensional theory. The $\phi $–mapping topological current theory proposed by Professor Duan plays a crucial role in studying the structure of topological defects[@DuanSlac1984; @DuanRenYang2003; @DuanLiuJHEP2004; @DuanMengJMP1993; @DuanLiYangNPB1998; @DuanFuJMP1998; @DuanLiuFuPRD2003; @DUANZHangLiPRB1998; @DUANLiuZHangJP2002; @DuanZhangFuPRE1999; @DuanZhangPRE1999; @DuanFuZhangPRD2000; @DuanFuJiaJMP2000; @DuanLiuHopkins1987; @DuanZhangMPLA2001]. In our previous work[@DuanFuJiaJMP2000], using the $\phi $–mapping topological current theory, we presented a new topological tensor current of $\tilde p$–branes. It was shown that the current is identically conserved and behaves as $\delta (\vec \phi )$, with every isolated zero of the field $\vec \phi (x)$ corresponding to a “magnetic” $\tilde p$–brane. It must be pointed out that the study of extended $\tilde p$–branes usually proceeds by generalizing the Kalb-Ramond Abelian gauge field[@DiamantiniPLB1996; @NepomechiePRD1985].
That is a kind of $U(1)$ gauge field theory, from which the topological current cannot be strictly derived. In fact the present work is a generalization of the $GBC$ topological current of a moving point defect[@DuanMengJMP1993; @DuanLiYangNPB1998]. By making use of the generalization of the Gauss-Bonnet-Chern theorem and $\phi $–mapping field theory, we find an $SO(N)$ non-Abelian topological field theory of $\tilde p$–branes, in which the dual field strength $^* F^{\mu \mu _1 \cdots \mu _{\tilde p +1}}$ of eq.(\[TCCTM\]) rigorously creates a topological current of $\tilde p$–branes in a natural way. In this $SO(N)$ non-Abelian topological gauge field theory, we also investigate the inner structure of the topological current of $\tilde p$–branes and show that the topological charges of $\tilde p$–branes are topologically quantized and labeled by the Hopf index and Brouwer degree, i.e., the winding number of the $\phi $–mapping. We also find that in this $SO(N)$ gauge field theory, when $N$ is even, the topological tensor current of $\tilde p$–branes $\tilde j^{\mu _1\cdots \mu _{\tilde p+1}} $ can be looked upon as the generalization of the Chern-Simons topological current that we formulated in [@DuanFuJiaJMP2000]. The non-Abelian field theory of $\tilde{p}$–branes ================================================== In this paper the study of the non-Abelian gauge field theory and topological current of $\tilde{p}$–branes is based on the Gauss–Bonnet–Chern ($GBC$) theorem and the generalization of the Chern–Simons topological current. It is well known that the Gauss-Bonnet-Chern theorem generalizes the Euler number density of the two-dimensional Gauss–Bonnet theorem to arbitrary even dimensions, relating the curvature of a compact and oriented even–dimensional Riemannian manifold $M$ to an important topological invariant, the Euler-Poincaré characteristic $\chi {(M)}$.
The $GBC$–form corresponding to the Euler number density is given by $$\label{GBCform1} \Lambda = {\frac{(-1)^{\frac{N}{2}-1}}{2^N\pi ^{\frac N2}\left(\frac{N}{2}\right)!}} \varepsilon_{A_1A_2\cdots {A_{N-1}A_N}}F^{{A}_1{A}_2}\wedge \cdots \wedge {F^{{A}_{N-1}{A}_N}},$$ in which $F^{AB}$ is the curvature tensor of the $SO(N)$ principal bundle of the Riemannian manifold $M$, i.e., the $SO(N)$ gauge field 2–form $$\label{} F^{AB}=d\omega ^{AB}-\omega ^{AC}\wedge \omega ^{CB},$$ where $\omega ^{AB}$ is the spin connection $1$-form. In 1944, an elegant intrinsic proof of the theorem was given by Chern[@Cher1], whose instructive idea was to work on the sphere bundle $S^{N-1}(M)$. Using a recursion method, Chern proved that the $GBC$–form is exact on $S^{N-1}(M)$: $$\label{GBC} \Lambda =d \Omega ,$$ where the $(N-1)$–form $\Omega$ is called the $Chern$–form $$\Omega = {1 \over {\left(2\pi \right)^{\frac{N}{2}} }}\sum\limits_{k = 0}^{\frac{N}{2} - 1} {( - 1)^k {{2^{ - k} } \over {(N - 2k - 1)!!k!}}\Theta _k },$$ in which $$\label{theta1} \begin{array}{ll} \Theta _k =& \varepsilon _{A_1 A_2 \cdots A_{N - 2k} A_{N - 2k + 1} A_{N - 2k + 2} \cdots A_{N - 1} A_N } n^{A_1 } \theta ^{A_2 }\wedge \cdots \vspace{.4cm} \\ & \wedge \theta ^{A_{N - 2k} } \wedge F^{A_{N - 2k + 1} A_{N - 2k + 2} } \wedge \cdots \wedge F^{A_{N - 1} A_N }, \end{array}$$ $$\theta ^A \equiv Dn^A = dn^A - \omega ^{AB} n^B,$$ and $n^A$ is the section of the sphere bundle $S^{N-1}(M)$ $$n:\partial M \to S^{N - 1}(M).$$ A detailed review of Chern’s proof of the $GBC$ theorem was presented in Ref.[@Dowk], and one great advance in this field is the discovery of the relationship between supersymmetry and the index theorem[@Alva]. It must be pointed out that the $GBC$ theorem is formulated in terms of exterior differential forms[@Cher1]. Differential forms constitute a vector space (with a $C^\infty$ topology) and therefore have a dual space.
Submanifolds represent elements of this dual space via integration, so it is common to say that they lie in the dual space of forms, which is the space of currents[@Weissteink-Form]. Let $(X,g)$ be a $D$–dimensional manifold and $F^{AB}_{\mu \nu }$ the curvature tensor of the $SO(N)$ principal bundle; then we can define a $(\tilde p+1)$–dimensional topological tensor current on the manifold $X$ $$\begin{aligned} \label{Chernformcompdual1} \tilde j^{\lambda \lambda _1 \cdots \lambda _{\tilde p} }=&&\frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu _1 \mu _{2} \cdots \mu _{N}}}{\sqrt{g}} {\frac{(-1)^{\frac{N}{2}-1}N!}{2^N (2\pi )^{\frac N 2}\left(\frac{N}{2}\right)!}} \nonumber \\ && \cdot \varepsilon_{A_1A_2\cdots {A_{N-1}A_N}} F^{{A}_1{A}_2}_{\mu _1\mu _2 } \cdots F^{{A}_{N-1}{A}_N}_{\mu _{N-1}\mu _{N}},\end{aligned}$$ where $N$ is the dimension of a submanifold $M$. It is easy to see that eq.(\[Chernformcompdual1\]) is just the generalization of the Chern–Simons $SO(2)$ topological current[@DuanFuZhangPRD2000] $$\label{} \tilde j^{\lambda }=\frac{1}{8\pi} \frac{\varepsilon ^{\lambda \mu \nu}}{\sqrt{g}} \varepsilon_{AB} F^{AB }_{\mu \nu}.$$ Using the Bianchi identity $$\label{Bianchip1} D _\mu F_{\nu \lambda }^{AB} + D _\nu F_{\lambda \mu }^{AB} + D _\lambda F_{\mu \nu }^{AB} = 0,$$ one finds $$\label{Bianchip2} \varepsilon ^{\mu _1 \cdots \mu _{i - 1} \mu _i \mu _{i + 1} \cdots \mu _{N+k} } D _{\mu _{i - 1} } F_{\mu _i \mu _{i + 1} }^{AB} = 0 .$$ From (\[Chernformcompdual1\]) it can be proved[@spivak1975] that $$\nabla _{\lambda }\tilde j^{ \lambda \lambda _1 \cdots \lambda _{\tilde p}} =0,$$ i.e., the antisymmetric topological tensor current $\tilde j^{ \lambda \lambda _1 \cdots \lambda _{\tilde p}} $ is identically conserved.
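The identical conservation can be checked symbolically in the simplest case $N=2$, where the $SO(2)$ current above is Abelian (for $N=2$ the quadratic term $\omega ^{AC}\wedge \omega ^{CB}$ vanishes, so $F^{12}_{\mu \nu }=\partial _\mu \omega ^{12}_\nu -\partial _\nu \omega ^{12}_\mu $); a minimal sympy sketch, our own check rather than part of the text, of $\varepsilon ^{\lambda \mu \nu }\partial _\lambda F^{12}_{\mu \nu }=0$ for an arbitrary connection, which implies $\partial _\lambda \bigl(\sqrt{g}\,\tilde j^{\lambda }\bigr)=0$:

```python
# Symbolic check of the N = 2 (Abelian) case: for an arbitrary SO(2)
# connection component w_mu, the curvature F_{mu nu} = d_mu w_nu - d_nu w_mu
# satisfies eps^{lam mu nu} d_lam F_{mu nu} = 0, because two symmetric
# second derivatives are contracted with the antisymmetric eps symbol.
import sympy as sp
from sympy import LeviCivita

x = sp.symbols('x0 x1 x2')
w = [sp.Function(f'w{i}')(*x) for i in range(3)]  # arbitrary connection components

def F(mu, nu):
    """Abelian field strength F_{mu nu} = d_mu w_nu - d_nu w_mu."""
    return sp.diff(w[nu], x[mu]) - sp.diff(w[mu], x[nu])

divergence = sum(LeviCivita(l, m, n) * sp.diff(F(m, n), x[l])
                 for l in range(3) for m in range(3) for n in range(3))
print(sp.simplify(divergence))
```

The general (non-Abelian, even $N$) statement requires the Bianchi identity above; the $N=2$ check only needs commuting partial derivatives.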
It is also easy to see that the topological tensor current $\tilde j^{ \lambda \lambda _1 \cdots \lambda _{\tilde p}} $ is the dual of the $GBC$ tensor defined in eq.(\[GBCform1\]): $$\label{Chernformcompdual2} \tilde j^{\lambda \lambda _1 \cdots \lambda _{\tilde p} }= \frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu _1 \mu _{2} \cdots \mu _{N}}}{\sqrt{g}}\Lambda _{\mu _{1} \mu _{2} \cdots \mu _{N} }.$$ Furthermore, by virtue of the tensor form of the $GBC$ theorem (\[GBCform1\]) and (\[GBC\]) $$\label{CPartofK} \Lambda _{\mu _{1} \mu _{2} \cdots \mu _{N} } =\partial_{[\mu _1}F_{\mu _{2} \cdots \mu _{N}] },$$ we find that the dual tensor of the $Chern$–tensor $F_{\mu _{2} \cdots \mu _{N} }$ is $$\label{DualfieldGBC} {\; ^{\star } F}^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu _1}= \frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu _1 \mu _{2} \cdots \mu _{N}}}{\sqrt{g}}F_{\mu _{2} \cdots \mu _{N} }.$$ It is obvious that if in eq.(\[TCCTM\]) the dual field tensor ${\; ^{\star } F}^{\mu \mu _1 \cdots \mu _{\tilde p +1} }$ takes the form (\[DualfieldGBC\]) deduced from the $GBC$ theorem, then we obtain the conserved topological tensor current (\[Chernformcompdual1\]). Therefore, for even $N$ the antisymmetric tensor current (\[Chernformcompdual1\]) constructed in terms of the $SO(N)$ gauge field tensor $F^{AB}_{\mu \nu }$ is just the topological current creating $\tilde{p}$-branes. The field tensor $F^{\mu \mu _1\cdots \mu _{p+1}}$ in (\[No.1\]) and the dual tensor $~^{*}F^{\mu \mu _1\cdots \mu _{\tilde p+1}} $ in (\[TCCTM\]) can both be found by making use of the Chern-form. In the following we will show that the tensor current defined by (\[Chernformcompdual1\]) is just the $\phi$–mapping topological tensor current of $\tilde p$–branes in [@DuanFuJiaJMP2000]. This is a novel foundation of the non-Abelian topological gauge field theory of $\tilde p$–branes.
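As a concrete orientation point (our own specialization, not written out in the text): for $N=2$ and $D=3$, so that $\tilde p=0$, the $Chern$–tensor is a 1-form $F_{\mu }$ which locally (in a frame where the spin connection vanishes) reads $F_{\mu }=\frac{1}{2\pi }\varepsilon _{AB}\,n^{A}\partial _{\mu }n^{B}$, and the topological current becomes $$\tilde j^{\lambda }=\frac{\varepsilon ^{\lambda \mu \nu }}{\sqrt{g}}\,\partial _{\mu }F_{\nu }=\frac{1}{2\pi }\,\frac{\varepsilon ^{\lambda \mu \nu }}{\sqrt{g}}\,\varepsilon _{AB}\,\partial _{\mu }n^{A}\,\partial _{\nu }n^{B},$$ i.e., the familiar point-defect ($0$–brane) current, whose charges count the winding of $n^{A}$ around its isolated zeros.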
The early work of Chern[@Cher1] showed that in a neighborhood of an arbitrary point $P$ on $M$ one can choose a family of frames such that the spin connection $\omega ^{AB}=0$. This locally Euclidean homeomorphism immediately gives an important consequence of the $GBC$ theorem on the sphere bundle $S^{N-1}(X)$: $$\Omega = {1 \over {\left(2\pi \right)^{\frac{N}{2}} }} { {{1 } \over {(N - 1)!!}}\varepsilon _{A_1 A_2 \cdots A_N } n^{A_1 } dn ^{A_2 } \wedge \cdots \wedge dn ^{A_{N} } }. \label{Chernfform2}$$ Using the unit sphere area formula $ A(S^{N-1})={{2\pi ^{N/2}}/{{\Gamma (\frac N2)}}} \label{area} $ and the following relation $$\left(2\pi \right) ^\frac{N}{2} (N-1)!! = A(S^{N-1})(N-1)!,$$ the $Chern$–form expressed by (\[Chernfform2\]) can be locally reduced to $$\Omega = \frac{1}{ A(S^{N-1})(N-1)!} \varepsilon _{A_1 A_2 \cdots A_N } n^{A_1 } dn ^{A_2 } \wedge \cdots \wedge dn ^{A_{N} }. \label{Chernform4}$$ We see that the expression (\[Chernform4\]) is nothing but the ratio of the area element to the total area $A(S^{N-1})$ of the unit sphere $S^{N-1}$. This is the essence of the $GBC$ theorem.
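The normalizations used here can be checked numerically; a minimal sketch (our own check, using only the formulas stated above) verifying $A(S^{N-1})=2\pi ^{N/2}/\Gamma (\frac N2)$ and the identity $\left(2\pi \right)^{N/2}(N-1)!!=A(S^{N-1})(N-1)!$ for the first few even $N$:

```python
# Numerical check of the unit-sphere area formula A(S^{N-1}) = 2 pi^{N/2} / Gamma(N/2)
# and of the relation (2 pi)^{N/2} (N-1)!! = A(S^{N-1}) (N-1)!  (N even),
# which is what reduces the Chern-form to its local expression.
import math

def sphere_area(N):
    """Total area of the unit sphere S^{N-1} embedded in R^N."""
    return 2.0 * math.pi ** (N / 2) / math.gamma(N / 2)

def double_factorial(n):
    """n!! = n (n-2) (n-4) ... down to 1 or 2."""
    return math.prod(range(n, 0, -2)) if n > 0 else 1

for N in (2, 4, 6, 8):
    lhs = (2.0 * math.pi) ** (N / 2) * double_factorial(N - 1)
    rhs = sphere_area(N) * math.factorial(N - 1)
    assert math.isclose(lhs, rhs), (N, lhs, rhs)

print(sphere_area(2), 2 * math.pi)       # circumference of S^1
print(sphere_area(4), 2 * math.pi ** 2)  # area of S^3
```

For example, $A(S^1)=2\pi$ and $A(S^3)=2\pi^2$, as the last two lines confirm.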
Using the $Chern$–form (\[Chernform4\]), it is easy to prove that the $Chern$ tensor field can be simply written as $$\label{CherenformKcomp} F_{\mu _{2} \cdots \mu _{N} }= \frac{1}{ A(S^{N-1})(N-1)!} \varepsilon _{A_1 A_2 \cdots A_N } n^{A_1 } \partial _{\mu _2 }n ^{A_2 } \cdots \partial _{\mu _N}n ^{A_{N} },$$ and the $(\tilde p+1)$–dimensional tensor current (\[Chernformcompdual1\]) can also be expressed as follows $$\begin{aligned} \label{Chernformcompdual3} \tilde j^{\lambda \lambda _1 \cdots \lambda _{\tilde p}} = && \frac{1}{A(S^{N-1})(N-1)!} \varepsilon_{A_1A_2\cdots {A_{N-1}A_N}} \nonumber \\ && \cdot \frac{\varepsilon ^{ \lambda \lambda _1 \cdots \lambda _{\tilde p} \mu _1 \mu _2 \cdots \mu _N}}{\sqrt{g}} \partial _{\mu _1}n^{A_1}\cdots \partial _{\mu _N}n^{A_N}.\end{aligned}$$ This is just the topological tensor current of $\tilde p$–branes in [@DuanFuJiaJMP2000], i.e., the tensor current creating a $\tilde p$–dimensional manifold[@JiangDuanJMP2000; @YangJiangDuanCPL2001]. In the case of the $SO(N+1)$ gauge field theory on the $D$–dimensional manifold $X$, we can define a new field theory by $$\begin{aligned} \label{LambdaPrimeComp1} F_{\mu _1 \cdots \mu _N} = && {\frac{(-1)^{\frac{N}{2}-1}N!}{2^N (2\pi )^{\frac N 2}\left(\frac{N}{2}\right)!}} \varepsilon_{AA_1A_2\cdots {A_{N-1}A_N}} \nonumber \\ && \cdot n^AF^{{A}_1{A}_2}_{\mu _1\mu _2 } \cdots F^{{A}_{N-1}{A}_N}_{\mu _{N-1}\mu _{N}},\end{aligned}$$ $$\label{ChernformcompdualN+1} {{\; ^{\star } F}}^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu }= \frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu \mu _1 \mu _{2} \cdots \mu _{N}}}{\sqrt{g}}{F}_{\mu _{1} \mu _{2} \cdots \mu _{N} },$$ where $n^A $ is the section of the sphere bundle $S^{N}(X)$.
Using (\[LambdaPrimeComp1\]) and the Bianchi identity (\[Bianchip2\]), we find that the topological tensor current can be defined as $$\label{TopoTenCurrSON+1} \begin{array}{ll} & \displaystyle {\tilde j}^{\lambda \lambda _1 \cdots \lambda _{\tilde p}}\\ \vspace{4mm} = & \displaystyle \frac{1}{\sqrt{g}}\partial _{\mu } \left({\sqrt{g} {\; ^{\star } F}}^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu } \right) \\\vspace{4mm} = & \displaystyle \frac{1}{\sqrt{g}}\partial _{\mu } \left(\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu \mu _1 \mu _{2}\cdots \mu _{N}}F_{\mu _1 \mu _{2}\cdots \mu _{N} } \right) \\ \vspace{4mm} = & \displaystyle \frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu \mu _1 \mu _{2}\cdots \mu _{N}}}{\sqrt{g}}D _{\mu } \left(F_{\mu _1 \mu _{2}\cdots \mu _{N} } \right) \\ \vspace{4mm} = &\displaystyle {\frac{(-1)^{\frac{N}{2}-1}N!}{2^N (2\pi )^{\frac N 2}\left(\frac{N}{2}\right)!}} \varepsilon_{AA_1A_2\cdots {A_{N-1}A_N}} \frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu \mu _1 \cdots \mu _{N}}}{\sqrt{g}}\\ & \cdot D_\mu n^AF^{{A}_1{A}_2}_{\mu _1\mu _2 } \cdots F^{{A}_{N-1}{A}_N}_{\mu _{N-1}\mu _{N}}. \end{array}$$ In the like manner, it can be proved that the covariant divergence of this topological tensor current is $$\label{TopoTenCurrSON+1CON} \begin{array}{ll} \nabla _{\lambda } {\tilde j}^{\lambda \lambda _1 \cdots \lambda _{\tilde p}} = &\displaystyle \frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu \mu _1 \cdots \mu _{N}}}{\sqrt{g}} \\ & \cdot D_{\lambda } D_\mu n^AF^{{A}_1{A}_2}_{\mu _1\mu _2 } \cdots F^{{A}_{N-1}{A}_N}_{\mu _{N-1}\mu _{N}}. 
\end{array}$$ Using the definition of the curvature tensor $F^{AB}_{\mu \nu}$ of the $SO(N+1)$ principal bundle, $$(D_\mu D_\nu -D_\nu D_\mu)n^A=-F^{AB}_{\mu \nu}n^B,$$ we obtain $$\begin{array}{ll} & \nabla _{\lambda } \displaystyle {\tilde j}^{\lambda \lambda _1 \cdots \lambda _{\tilde p}}= {\frac{(-1)^{\frac{N}{2}}N!}{2^{(N+1)} (2\pi )^{\frac N 2}\left(\frac{N}{2}\right)!}} \varepsilon_{AA_1A_2\cdots {A_{N-1}A_N}} \\ & \displaystyle \cdot \frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu \mu _1 \cdots \mu _{N}}}{\sqrt{g}} F^{AB}_{\lambda \mu }n^B F^{{A}_1{A}_2}_{\mu _1\mu _2 } \cdots F^{{A}_{N-1}{A}_N}_{\mu _{N-1}\mu _{N}} , \end{array}$$ where $B, A, A_1, A_2, \cdots ,{A_{N-1}, A_N}$ are $SO(N+1)$ vector indices. Let $$\begin{aligned} \Delta ^B = && \varepsilon_{AA_1A_2\cdots {A_{N-1}A_N}} \frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu \mu _1 \cdots \mu _{N}}}{\sqrt{g}} \nonumber \\ && \cdot F^{AB}_{\lambda \mu } F^{{A}_1{A}_2}_{\mu _1\mu _2 } \cdots F^{{A}_{N-1}{A}_N}_{\mu _{N-1}\mu _{N}} .\end{aligned}$$ For fixed $B$: if $A=B$, it is obvious that $\Delta ^B=0$; otherwise one of $A_1, A_2, \cdots ,{A_{N-1}, A_N}$ must equal $B$, so that $$\begin{aligned} \Delta ^B = && \varepsilon_{AA_1A_2\cdots {A_{N-1}A_N}} \frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu \mu _1 \cdots \mu _{N}}}{\sqrt{g}} \nonumber \\ && \cdot F^{AB}_{\lambda \mu } \cdots F^{{A}_iB}_{\mu _i\mu _{i+1} } \cdots F^{{A}_{N-1}{A}_N}_{\mu _{N-1}\mu _{N}} ,\end{aligned}$$ or $$\begin{aligned} \Delta ^B = && \varepsilon_{AA_1A_2\cdots {A_{N-1}A_N}} \frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu \mu _1 \cdots \mu _{N}}}{\sqrt{g}} \nonumber \\ && \cdot F^{AB}_{\lambda \mu } \cdots F^{B{A}_{i+1}}_{\mu _i\mu _{i+1} } \cdots F^{{A}_{N-1}{A}_N}_{\mu _{N-1}\mu _{N}} ,\end{aligned}$$ where $F^{AB}_{\lambda \mu }$ and $F^{{A}_iB}_{\mu _i\mu _{i+1} }$, or $ F^{AB}_{\lambda \mu }$ and $F^{B{A}_{i+1}}_{\mu _i\mu _{i+1}
} $ can be exchanged symmetrically, but the exchange between $A$ and $A_i$ or $A$ and $A_{i+1}$ is antisymmetric, so we find that $\Delta ^B$ equals zero as well. Therefore, the $SO(N+1)$ topological tensor current $\tilde j^{ \lambda \lambda _1 \cdots \lambda _{\tilde p}} $ is identically conserved $$\nabla _{\lambda } \displaystyle {\tilde j}^{\lambda \lambda _1 \cdots \lambda _{\tilde p}}=0.$$ In this odd-dimensional case the dual field tensor $~^{*}F^{\mu \mu _1\cdots \mu _{\tilde p+1}} $ in (\[TCCTM\]) can be directly expressed in terms of the field tensor $F^{AB}_{\mu \nu }$ through expression (\[ChernformcompdualN+1\]). As above, eq.(\[TopoTenCurrSON+1\]) can be locally written in the simple form $$\begin{aligned} \label{TopoTenCurrSON+12} && {\tilde j}^{\lambda \lambda _1 \cdots \lambda _{\tilde p}} = \frac{1}{ 2A(S^{N-1})(N-1)!} \varepsilon _{AA_1 A_2 \cdots A_N } \nonumber \\ && \cdot \frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu \mu _1 \cdots \mu _{N}}}{\sqrt{g}} \partial _{\mu } n^{A } \partial _{\mu _1 } n^{A_1 } \partial _{\mu _2 }n ^{A_2 } \cdots \partial _{\mu _N}n ^{A_{N} }.\end{aligned}$$ Multiplying (\[TopoTenCurrSON+1\]) and (\[TopoTenCurrSON+12\]) by ${2A(S^{N-1})}/{NA(S^N)}$ and defining $$\label{} d= \left\{ \begin{array}{ll}N,&for \quad d \quad even \vspace{.4cm} \\ N+1,&for \quad d \quad odd \end{array} \right.$$ where $N$ is even and $D=d+\tilde p+1$ is the dimension of the total manifold $X$, we can obtain a unified topological tensor current on the $D$–dimensional smooth manifold $X$ $$\begin{aligned} \label{SO(d)TTC} \tilde j^{\lambda \lambda _1 \cdots \lambda _{\tilde p}} = && \frac{1}{A(S^{d-1})(d-1)!} \varepsilon_{A_1A_2\cdots {A_{d-1}A_d}}\nonumber \\ && \cdot \frac{\varepsilon ^{ \lambda \lambda _1 \cdots \lambda _{\tilde p} \mu _1 \mu _2 \cdots \mu _d}}{\sqrt{g}} \partial _{\mu _1}n^{A_1}\cdots \partial _{\mu _d}n^{A_d}.
\end{aligned}$$ Here it is easy to prove that the above topological currents are locally conserved: $$\label{} \nabla _{\lambda } {\tilde j}^{\lambda \lambda _1 \cdots \lambda _{\tilde p}}=\frac{1}{\sqrt{g}}\partial _{\lambda }\left(\sqrt{g}{\tilde j}^{\lambda \lambda _1 \cdots \lambda _{\tilde p}}\right)=0.$$ As is well known, the $\phi $-mapping is a $d$-dimensional smooth vector field on $X$, $$\label{phi} \phi ^A=\phi ^A(x),\quad A=1,2,\cdots ,d,$$ and the direction field of $\vec \phi (x)$ is $$\label{directionfield} n^A=\frac{\phi ^A}{||\phi ||},\quad \quad ||\phi ||=\sqrt{\phi ^A\phi ^A},$$ i.e., $n^A$ is a section of the sphere bundle $S^{d-1}(X)$. Substituting (\[directionfield\]) into (\[SO(d)TTC\]) and considering that $$\partial _\mu n^A=\frac{\partial _\mu \phi ^A}{||\phi ||}+\phi ^A \partial _\mu \left( \frac 1{||\phi ||}\right) ,$$ we have $$\small \begin{array}{rl} \tilde j^{\lambda \lambda _1 \cdots \lambda _{\tilde p}} (x) = & \displaystyle\frac{1}{ A(S^{d-1})(d-1)!} \varepsilon _{A_1 \cdots A_d } \frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu _1 \cdots \mu _d }}{\sqrt{g}} \\ & \displaystyle \cdot \partial _{\mu _1}\left( \frac{\phi ^{A_1 }}{ \left\| \phi \right\|^d} \partial _{\mu _2}\phi ^{A_2}\cdots \partial _{\mu _d}\phi ^{A_d}\right) \\ = & \displaystyle \frac{1}{ A(S^{d-1})(d-1)!} \varepsilon _{A_1 \cdots A_d } \frac{\varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu _1 \cdots \mu _d }}{\sqrt{g}} \\ & \displaystyle \cdot \partial _{\mu _1}\left( \frac{\phi ^{A_1 }}{ \left\| \phi \right\|^d} \right) \partial _{\mu _2}\phi ^{A_2}\cdots \partial _{\mu _d}\phi ^{A_d}. 
\end{array}$$ Defining the $rank$–$(\tilde p+1)$ Jacobian tensor $J^{\lambda \lambda _1 \cdots \lambda _{\tilde p}} \left( {{\phi \over x}} \right)$ of $\vec \phi$ as $$\begin{array}{ll} & \varepsilon ^{\lambda \lambda _1 \cdots \lambda _{\tilde p} \mu _1 \cdots \mu _d} \partial _{\mu _1 } \phi ^A\partial _{\mu _2 } \phi ^{A_2 } \cdots \partial _{\mu _d } \phi ^{A_d } \\ = & J^{\lambda \lambda _1 \cdots \lambda _{\tilde p}} \left( {{\phi \over x}} \right)\varepsilon ^{AA_2 \cdots A_d }, \label{JacobiaofPhi} \end{array}$$ and noticing $$\varepsilon _{A_1 A_2 \cdots A_d } \varepsilon ^{AA_2 \cdots A_d } = \delta _{A_1 }^A (d- 1)!,$$ it follows that $$\tilde j^{\lambda \lambda _1 \cdots \lambda _{\tilde p} } (x) =\frac{1}{ A(S^{d-1})\sqrt{g}} {\partial \over {\partial \phi ^A }}\left( \frac{\phi ^{A }}{ \left\| \phi \right\|^d} \right) J^{\lambda \lambda _1 \cdots \lambda _{\tilde p} } \left( {{\phi \over x}} \right). \label{monopolecurr2}$$ Using the Green functions in $\vec \phi $–space $${{\phi ^A } \over {\left\| \phi \right\|^d }} = \left\{ \begin{array}{ll} - {1 \over {(d- 2)}}{\partial \over {\partial \phi ^A }}\left( {{1 \over {\left\| \phi \right\|^{d - 2} }}} \right) & \;\;for \;\; d>2 ,\vspace{.4cm} \\ {\partial \over {\partial \phi ^A }} ln\left\| \phi \right\| &\;\; for \;\;d=2, \end{array} \right.$$ and $$\Delta _\phi \left( {{1 \over {\left\| \phi \right\|^{d - 2} }}} \right) = - \left( {d - 2} \right)A\left( {S^{d - 1} } \right)\delta \left( {\vec \phi } \right),$$ $$\Delta _\phi \left( ln \left\| \phi \right\| \right) = 2\pi \delta \left( {\vec \phi } \right),$$ where $\Delta _{\phi} =\frac{\partial ^2}{\partial \phi ^A\partial \phi ^A} $ is the $d$-dimensional Laplacian operator in $\vec \phi $ space, we obtain a $\delta $-function like topological tensor current $$\label{current} \tilde j^{\lambda \lambda _1 \cdots \lambda _{\tilde p}}=\frac{1}{\sqrt{g}}\delta (\vec \phi )J^{\lambda \lambda _1 \cdots \lambda _{\tilde p}}(\frac \phi x),$$ and find 
that $ \tilde j^{\lambda \lambda _1 \cdots \lambda _{\tilde p}}\neq 0$ only when $\vec \phi =0$. So, it is essential to discuss the solutions of the equations $$\phi ^A(x)=0,\quad A=1,\cdots ,d.$$ This kind of solution plays a crucial role in the realization of the $\tilde p$–brane scenario. Suppose that the vector field $\vec \phi (x)$ possesses $\ell$ isolated zeroes. According to the deduction of Ref.[@DuanLiuHopkins1987] and the implicit function theorem[@Goursat04; @yang76], when the zeroes are regular points of the $\phi $–mapping, i.e., the rank of the Jacobian matrix $[\partial _\mu \phi ^A]$ is $ d$, the solutions of $\vec \phi (x)=0$ can be parameterized as $$\label{solutdi} x^\mu =z_i^\mu (u^0,u^1,\cdots ,u^{\tilde p}),\quad i=1,\cdots ,\ell, \quad \mu =1,\cdots ,D,$$ where the subscript $i$ represents the $i$–th solution and the parameters $u^I (I=0,1,2,\cdots ,\tilde{p})$ span a $(\tilde p+1)$–dimensional submanifold which is called the $i$–th singular submanifold $N_i$ in the total spacetime manifold $X$. These spatially $\tilde p$–dimensional singular submanifolds $N_i$ are just the world volumes of the topological $\tilde p$–branes in $M$–theory. The number of solutions $\ell$ plays the role of the brane number. This is the novel result of the present work. Therefore, by making use of the $GBC$ theorem and the $\phi $–mapping topological current theory, we have established a novel field theory for the creation of topological $\tilde p$–branes. We must point out that, based on the description of $\tilde p$–branes as topological defects in space-time[@DiamantiniPLB1996; @duff], the vector field $\phi ^A(x)$ $(A=1,\cdots ,d)$ can be looked upon as the order parameter fields of the $\tilde p$–branes. Inner topological structure of $\tilde p$–branes ================================================ From the above discussion, we see that the kernel of the $\phi $–mapping plays a crucial role in the creation of topological $\tilde p$–branes. 
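As a toy numerical illustration of regular zeroes (my own hypothetical example, not from the text above), take $D=3$, $d=2$ and the order-parameter field $\phi(x) = (x_1^2 + x_2^2 - 1,\; x_3)$. Its zero set is the unit circle in the plane $x_3=0$, a $(D-d) = (\tilde p+1) = 1$ dimensional submanifold (the world line of a point defect), and the Jacobian matrix $[\partial_\mu \phi^A]$ has full rank $d=2$ at every zero:

```python
# Hypothetical example field phi: R^3 -> R^2 (my illustration, not from the paper).
import math

def phi(x):
    # zero set: the unit circle x1^2 + x2^2 = 1 in the plane x3 = 0
    return [x[0]**2 + x[1]**2 - 1.0, x[2]]

def jacobian(x):
    # the 2x3 matrix of partial derivatives [d_mu phi^A]
    return [[2.0 * x[0], 2.0 * x[1], 0.0],
            [0.0,        0.0,        1.0]]

# The zero set is parameterized by z(u) = (cos u, sin u, 0) ...
for k in range(8):
    u = 2.0 * math.pi * k / 8
    z = [math.cos(u), math.sin(u), 0.0]
    assert max(abs(c) for c in phi(z)) < 1e-12
    # ... and at each zero the Jacobian has rank d = 2, since det(J J^T) != 0:
    J = jacobian(z)
    gram = [[sum(a * b for a, b in zip(r1, r2)) for r2 in J] for r1 in J]
    det = gram[0][0] * gram[1][1] - gram[0][1] * gram[1][0]
    assert det > 1e-9

print("every zero is a regular point; the zero set is a 1-dimensional world line")
```

Here the rank test via the Gram determinant $\det(JJ^{T})$ is just a convenient numerical stand-in for the rank condition stated above.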
Here we will focus on the zero points of the order parameter field $\vec \phi $ and search for the inner topological structure of the $\tilde{p}$–branes. It can be proved that there exists a $d$-dimensional submanifold $M_i$ in $X$ with local parametric equation $$\label{trcol}x^\mu =x^\mu (v^1,\cdots ,v^d),\quad \mu =1,\cdots ,D,$$ which is transversal to every $N_i$ at the point $p_i$ in the sense that $$\label{normal} g_{\mu \nu }\left.B^{\mu }_{I}B^{\nu }_{a}\right|_{p_i} =0, \quad I=0,1,\cdots ,\tilde p,\quad a=1,\cdots ,d,$$ where $$\frac{\partial x^\mu }{\partial u^I}=B^{\mu }_{I} , \quad \frac{\partial x^\nu}{\partial v^a} =B^{\nu }_{a},\quad \quad \mu ,\nu =1,2,\cdots ,D,$$ are tangent vectors of $N_i$ and $M_i$ respectively. As we have pointed out in Ref.[@DuanMengJMP1993], the unit vector field defined in (\[directionfield\]) gives a Gauss map $n:\partial M_i\rightarrow S^{d-1}$, and the generalized winding number is given by $$\label{} W_i = \frac 1{A(S^{d-1})(d-1)!}\int_{\partial M_i}n^{*}(\varepsilon _{A_1\cdots A_d}n^{A_1}dn^{A_2}\wedge \cdots \wedge dn^{A_d}),$$ where $n^*$ denotes the pull-back of the map $n$ and $\partial M_i$ is the boundary of the neighborhood $M_i$ of $p_i$ on $X$ with $p_i\notin \partial M_i,$ $M_i\cap M_j=\emptyset $. This means that, as a point traverses $\partial M_i$ once, the unit vector $n$ covers a region of the unit sphere $S^{d-1}$ whose area is $W_i$ times $A(S^{d-1})$, i.e., $n$ covers the unit sphere $S^{d-1}$ $W_i$ times. 
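In the simplest case $d=2$ this covering count can be checked numerically. The sketch below is my own illustration (the example fields are hypothetical, not taken from the text): it accumulates the angle of $\vec\phi$ along the unit circle surrounding an isolated zero and divides by $2\pi$, and the sign of the result distinguishes a defect from an antidefect.

```python
# Winding number of n = phi/||phi|| around an isolated zero, for d = 2.
import math

def winding_number(phi, steps=2000):
    """Accumulate the (unwrapped) angle of phi along the unit circle."""
    total, prev = 0.0, None
    for k in range(steps + 1):
        t = 2.0 * math.pi * k / steps
        a, b = phi(math.cos(t), math.sin(t))
        ang = math.atan2(b, a)
        if prev is not None:
            d = ang - prev
            # unwrap jumps across the atan2 branch cut
            if d > math.pi:
                d -= 2.0 * math.pi
            if d < -math.pi:
                d += 2.0 * math.pi
            total += d
        prev = ang
    return round(total / (2.0 * math.pi))

print(winding_number(lambda x, y: (x, y)))              # simple zero: W = 1
print(winding_number(lambda x, y: (x*x - y*y, 2*x*y)))  # double zero: W = 2
print(winding_number(lambda x, y: (x, -y)))             # antidefect:  W = -1
```

The second field is $z^2$ in complex notation, so $n$ covers the circle twice; the third reverses orientation, giving a negative count.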
Using Stokes’ theorem for exterior differential forms and duplicating the derivation of (\[current\]), we obtain $$\label{wind} W_i=\int_{M_i}\delta (\vec \phi (v))J(\frac \phi v)d^dv$$ where $J(\frac \phi v)$ is the usual Jacobian determinant of $\vec \phi $ with respect to $v$ $$\varepsilon ^{A_1\cdots A_d}J(\frac \phi v)=\varepsilon ^{\mu _1\cdots \mu _d}\partial _{\mu _1}\phi ^{A_1}\partial _{\mu _2}\phi ^{A_2}\cdots \partial _{\mu _d}\phi ^{A_d}.$$ According to the $\delta $-function theory[@Schouten] and the $\phi $–mapping theory, we know that $\delta (\vec \phi)$ can be expanded as $$\label{ff} \delta (\vec \phi )=\sum_{i=1}^\ell c_i \delta (N_i)$$ where the coefficients $c_i$ must be positive, i.e., $c_i=|c_i|$. $\delta (N_i)$ is the $\delta $–function in $X$ on the submanifold $N_i$,[@Schouten; @Gelfand] $$\delta (N_i)=\int_{N_i}\delta ^D(x-z_i(u))\sqrt{g_u}d^{(\tilde p+1)}u,\quad \quad i=1,\cdots ,\ell ,$$ where $g_u=det(g_{IJ})$. Substituting (\[ff\]) into (\[wind\]) and calculating the integral, we obtain the expression for $c_i$, $$\label{} c_i=\frac{\beta _i}{\left|J \left(\frac{\phi }{v}\right)\right|_{p_i}}=\frac{\beta _i\eta _i}{\left. J \left(\frac{\phi }{v}\right)\right|_{p_i}},$$ where the positive integer $\beta _i=|W_i|$ is called the Hopf index of the $\phi $–mapping on $M_i$, and $\eta _i=sgn(J(\frac \phi v))|_{p_i}=\pm 1$ is the Brouwer degree[@DuanMengJMP1993; @Hopf]. Thus we find the relation between the Hopf index $\beta _i$, the Brouwer degree $\eta _i$, and the winding number $W_i$: $$W_i=\beta _i\eta _i.$$ Therefore, the general topological current of the $ \tilde p$–branes can be expressed directly as $$\begin{array}{rl} \label{recurdi} \tilde j^{\lambda \lambda _1 \cdots \lambda _{\tilde p}} = & \frac{1}{\sqrt{g}}J^{\lambda \lambda _1 \cdots \lambda _{\tilde p}}(\frac{\phi }{x}) \\ & \cdot \sum_{i=1}^{\ell }\beta _i\eta _i\int_{N_i} \delta ^D(x-z_i(u))\sqrt{g_u}d^{(\tilde p+1)}u. 
\end{array}$$ From the above equation, we conclude that the $(\tilde p+1)$-dimensional singular submanifolds $N_i\;(i=1,2,\cdots ,\ell) $ are the world volumes of the $\tilde p$–branes, and the inner topological structure of the $\tilde p$–brane current is labelled by the total expansion of $\delta (\vec \phi)$, which includes the topological information $\beta _i \eta _i$. In detail, $\beta _i$ characterizes the absolute value of the topological charge of every $\tilde p$–brane, and $\eta _i=+1$ corresponds to a $\tilde p$–brane while $\eta _i=-1$ to an anti-$\tilde p$–brane. Taking the parameter $u^0$ and $u^I \;\; (I=1,2,\cdots , \tilde p)$ as the timelike evolution parameter and spacelike parameters respectively, the topological current of $\tilde p$–branes just represents $\tilde p$–dimensional topological defects with topological charges $\beta _i \eta _i$ moving in the $D$–dimensional total manifold $X$. We can define a Lagrangian of the $\tilde p$–branes as $$\label{GenNieLag} \label{nestring}L=\sqrt{\frac 1{(\tilde p+1)!}g_{\mu _0\nu _0}g_{\mu _1\nu _1}\cdots g_{\mu _{\tilde p}\nu _{\tilde p}} \tilde j^{\mu _0 \mu _1\cdots \mu _{\tilde p}} \tilde j^{\nu _0\nu _1\cdots \nu _{\tilde p}}},$$ which is just the generalization of Nielsen’s Lagrangian of the string[@niel] and includes the total information of the arbitrary-dimensional $\tilde p$–branes in $X$. It is obvious that the Euler equations corresponding to the Lagrangian (\[GenNieLag\]) will give the dynamics of the $\tilde{p}$-branes. From the above deductions, we can prove that $$L=\frac 1{\sqrt{g}}\delta (\vec \phi (x)).$$ Then, the action takes the form $$\begin{aligned} \label{aa} S && =\int_XL\sqrt{g}d^Dx =\int_X\delta (\vec \phi (x))d^Dx \nonumber \\ && =\sum_{i=1}^l\beta _i\eta _i\int_{N_i}\sqrt{g_u}d^{(\tilde p+1)}u,\end{aligned}$$ i.e. $$S=\sum_{i=1}^l\eta _iS_i,$$ where $S_i=\beta _i\int_{N_i}\sqrt{g_u}d^{(\tilde p+1)}u.$ This is just the straightforward generalization of the Nambu action for the string world sheet [@nambu]. 
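As a small numerical check of the last formula (my own toy example, with a hypothetical field whose zero set is the unit circle, $\beta_i = 1$, $\eta_i = +1$): the action reduces to the induced volume of the world line, here the circumference $2\pi$.

```python
# The action S = sum_i beta_i * eta_i * integral over N_i of sqrt(g_u) d^(p~+1)u
# for a single circular world line z(u) = (cos u, sin u, 0) is its arc length.
import math

def z(u):
    return (math.cos(u), math.sin(u), 0.0)

def length(curve, a, b, steps=100000):
    """Approximate arc length by summing chord lengths."""
    total, prev = 0.0, curve(a)
    for k in range(1, steps + 1):
        pt = curve(a + (b - a) * k / steps)
        total += math.dist(pt, prev)
        prev = pt
    return total

beta, eta = 1, +1
S = beta * eta * length(z, 0.0, 2.0 * math.pi)
print(S)   # ~ 2*pi = 6.2831...
```

For a circle the chord-sum converges quadratically in the step size, so the printed value agrees with $2\pi$ to well below $10^{-6}$.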
Here this action for multiple $\tilde p$–branes is obtained directly by the $\phi $-mapping theory, and it is easy to see that this action is just the Nambu action for multistrings when $D-d=2$ [@DuanLiuHopkins1987]. [99]{} J. H. Schwarz, hep-th/9607201. E. Witten, Nucl. Phys. [**B 443**]{}, 85(1995). P. K. Townsend, Phys. Lett. [**B 350**]{}, 184(1995). C. M. Hull, Nucl. Phys. [**B 468**]{}, 113(1996). K.S. Stelle and P.K. Townsend, “Are 2-branes better than 1?” in Proc. CAP Summer Institute, Edmonton, Alberta, July 1987, KEK library accession number 8801076. J. H. Schwarz and N. Seiberg, Rev. Mod. Phys. **71**, S112(1999). M. C. Diamantini, Phys. Lett. **B388**, 273(1996), arXiv:hep-th/9607090. J. Scherk and J. H. Schwarz, Phys. Lett. [**B52**]{}, 347(1974); J.H. Schwarz, Phys. Rep. [**89**]{}, 223(1982). Y. Nambu, Phys. Rep. [**23**]{}, 250(1976). R. I. Nepomechie, Phys. Rev. [**D31**]{}, 1921(1985). A. Strominger, Nucl. Phys. [**B 343**]{}, 167(1990). M. J. Duff, R.R. Khuri and J. X. Lu, Phys. Rep. [**256**]{}, 213(1995). C. M. Hull, Nucl. Phys. [**B509**]{}, 216(1998). G. R. Dvali, I. I. Kogan, and M. A. Shifman, Phys. Rev. **D62**, 106001(2000). J. A. R. Cembranos, A. Dobado, A. L. Maroto, Phys. Rev. **D65**, 026005(2002); L. Perivolaropoulos, hep-ph/0307269. Y. S. Duan, SLAC-PUB-3301/84. Y. S. Duan, J. R. Ren and J. Yang, Chinese Phys. Lett. [**20**]{}, 2133(2003). Y. S. Duan and X. Liu, J. High Energy Phys. **A041**, 0403(2004). Y. S. Duan, X. H. Meng, J. Math. Phys., [**34**]{}, 1149(1993). Y. S. Duan, S. Li, G. H. Yang, Nucl. Phys. [**B 514**]{}, 705(1998). Y. S. Duan, L. B. Fu, J. Math. Phys., [**39**]{}, 4343 (1998). Y. S. Duan, X. Liu and L. B. Fu, Phys. Rev. **D67**, 085022(2003). Y. S. Duan, H. Zhang, and S. Li, Phys. Rev. **B58**, 125(1998). Y. S. Duan, X. Liu and P. M. Zhang, J. Phys.: Condens. Matter **14**, 7941(2002). Y. S. Duan, H. Zhang and L. B. Fu, Phys. Rev. **E59**, 528(1999). Y. S. Duan and H. Zhang, Phys. Rev. [**E 60**]{}, 2568(1999). L. B. 
Fu, Y. S. Duan, H. Zhang, Phys. Rev. [**D61**]{}, 045004(2000). Y. S. Duan, L. B. Fu, G. Jia, J. Math. Phys., **41**, 4379(2000). Y. S. Duan and J. C. Liu, *Proceedings of the 11th Johns Hopkins Workshop on Current Problems in Particle Theory*, **11**(1987)183, Lanzhou, China, edited by Y. S. Duan, G. Domokos, and S. Kovesi-Domokos, World Scientific, Singapore, (1988). Y. S. Duan and P. M. Zhang, Mod. Phys. Lett. **A16**, 2483(2001). Y. Jiang, Y. S. Duan, J. Math. Phys. **41**, 6463(2000). G. H. Yang, Y. Jiang, Y. S. Duan, Chin. Phys. Lett. **18**, 631(2001). R. I. Nepomechie, Phys. Rev. **D31**, 1921(1985). S. S. Chern, Ann. Math. [**45**]{}, 747(1944); [**[46]{}**]{}, 674(1945); S.S. Chern, [*Differentiable Manifolds.*]{} Lecture Notes Univ. of Chicago, (1959). J. S. Dowker and J. P. Schofield, J. Math. Phys. [**31**]{}, 808(1990). L. Alvarez-Gaumé, Commun. Math. Phys. [**90**]{}, 161(1983). Eric W. Weisstein et al. “Differential k-Form.” From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/Differentialk-Form.html M. Spivak, *Differential Geometry*, Publish or Perish, Boston, (1975). É. Goursat, [*A Course in Mathematical Analysis,*]{} translated by Earle Raymond Hedrick, (Ginn & Company, Boston, 1905) Vol. I. L. V. Toralballa, [*Theory of Functions*]{}, (Charles E. Merrill Books, Inc., Columbus, Ohio, 1963). J. A. Schouten, [*Tensor Analysis for Physicists*]{}, Oxford, Clarendon Press, (1951). I.M. Gelfand and G.E. Shilov, *Generalized Functions*, National Press of Mathematics Literature, Moscow, (1958). H. Hopf, Math. Ann. **96**, 209(1929). H. B. Nielsen and P. Olesen, Nucl. Phys. [**B57**]{}, 367(1973). Y. Nambu, [*Lectures at the Copenhagen Symposium*]{}, 1970.
--- abstract: 'For a class of piecewise hyperbolic maps in two dimensions, we propose a combinatorial definition of topological entropy by counting the maximal, open, connected components of the phase space on which iterates of the map are smooth. We prove that this quantity dominates the measure theoretic entropies of all invariant probability measures of the system, and then construct an invariant measure whose entropy equals the proposed topological entropy. We prove that our measure is the unique measure of maximal entropy, that it is ergodic, gives positive measure to every open set, and has exponential decay of correlations against Hölder continuous functions. As a consequence, we also prove a lower bound on the rate of growth of periodic orbits. The main tool used in the paper is the construction of anisotropic Banach spaces of distributions on which the relevant weighted transfer operator has a spectral gap. We then construct our measure of maximal entropy by taking a product of left and right maximal eigenvectors of this operator.' address: 'Department of Mathematics, Fairfield University, Fairfield CT 06824, USA' author: - 'Mark F. Demers' title: Uniqueness and Exponential Mixing for the Measure of Maximal Entropy for Piecewise Hyperbolic Maps --- [^1] Introduction {#sec:intro} ============ There has been a flurry of activity recently in establishing the existence and uniqueness of equilibrium states for broad classes of potentials and systems outside the uniformly hyperbolic setting. This topic traces back to the work of Margulis [@Mar0], who proved that the number of periodic orbits of length $L$ for the geodesic flow on a compact manifold of strictly negative curvature grow at an exponential rate determined by the topological entropy ${h_{\scriptsize{\mbox{top}}}}$ of the flow. 
To prove this result, Margulis constructed an invariant measure ${\mu_{\tiny{\mbox{top}}}}$ via conditional measures on the local stable and unstable manifolds of the flow which scaled by $e^{\pm t {h_{\scriptsize{\mbox{top}}}}}$. An important feature of the measure ${\mu_{\tiny{\mbox{top}}}}$ is that it is the unique measure of maximal entropy for the flow: its measure-theoretic entropy equals the topological entropy ${h_{\scriptsize{\mbox{top}}}}$. These results were generalized and further developed for broader classes of Anosov and Axiom A flows and diffeomorphisms through the work of Sinai, Bowen, Ruelle and many others using thermodynamic formalism [@Sin72; @BoRu; @ruelle1], topological techniques [@Bo1; @Bo2; @Bo3; @Bo4], and dynamical zeta functions [@PaPo83; @ruelle2]. Later, Dolgopyat’s proof of exponential decay of correlations for some geodesic flows [@Do] led to more precise asymptotics for counting periodic orbits [@PS]. Recent attempts to extend proofs of existence and uniqueness of equilibrium states in general, and measures of maximal entropy in particular, to the nonuniformly hyperbolic setting have employed symbolic dynamics [@Sar1; @Sar2; @lima; @BuSa], as well as adapting the approach of Bowen via a notion of non-uniform specification [@BCFT; @CFT; @CliWar; @CPZ]. These works have greatly broadened the classes of systems for which one can prove the existence and uniqueness of equilibrium states, yet they do not usually provide rates of mixing for the equilibrium states. Simultaneously, there have been advances made in the study of the transfer operator associated with hyperbolic systems with singularities, including dispersing billiards [@demzhang11; @demzhang13; @demzhang14]. This approach, which avoids the coding associated with Markov partitions or extensions, exploits the hyperbolicity of the system to prove that the action of the transfer operator on appropriately defined Banach spaces has good spectral properties. 
It was used recently to prove exponential decay of correlations for the finite horizon Sinai billiard flow [@bdl], adapting ideas of Dolgopyat [@Do] and Liverani [@Li2]. It was then applied to prove the existence and uniqueness of a measure of maximal entropy for finite horizon Sinai billiard maps [@max], establishing a variational principle for this class of billiards. For hyperbolic systems with discontinuities, a priori results that guarantee the existence of an invariant measure maximizing the entropy, or even a simple definition of topological entropy, are not available as they are for continuous maps and flows. Indeed, in order to overcome this shortcoming, one approach is to redefine the map as a continuous map on a noncompact space, and then apply generalized definitions of topological entropy in this setting. Yet such definitions can be cumbersome to work with, and the resulting entropy can depend on the choice of metric in the reduced space. To simplify matters, the first step in [@max] is to define an intuitive notion of growth in complexity given by the number of domains of continuity ${\mathcal{M}}_0^n$ for the map $T^n$. This leads to an asymptotic quantity $h_*$, which plays the role of topological entropy [@max Definition 2.1] (see also Definition \[def:h\_\*\] below). This quantity is proved to equal the supremum of the measure-theoretic entropies of the invariant measures for the billiard map, and a unique measure $\mu_*$ whose entropy achieves this maximum is constructed by taking a product of left and right maximal eigenvectors of an associated weighted transfer operator ${\mathcal{L}}$, following the methods in [@GL2] which generalize the classical Parry construction. Despite this success, the weight in the relevant transfer operator in [@max] is unbounded due to the unbounded expansion and contraction that occur near grazing collisions in dispersing billiards. 
The presence of this weight forced significant changes in the Banach spaces from [@demzhang11] on which the operator acted, and it was not possible to establish a spectral gap in this context. Indeed, the rate of mixing for the measure of maximal entropy is an open question for billiards. The purpose of the present paper is to demonstrate that under the additional assumption that the derivative of the map is bounded, the techniques employed in [@max] are sufficient to prove the existence and uniqueness of a measure of maximal entropy that is exponentially mixing. To this end, we study a class of piecewise hyperbolic maps, defined in Section \[setting\]. The existence and statistical properties of Sinai-Ruelle-Bowen (SRB) measures[^2] for this class of maps has been studied via a variety of techniques [@Pesin1; @Li1; @Y98; @demers; @liv]; yet currently there are no results regarding measures of maximal entropy, nor more general equilibrium states. In structure, this paper mainly follows the approach in [@max]. Yet there are several key differences between the class of piecewise hyperbolic maps studied here and dispersing billiards. The primary simplification is that our maps have bounded derivatives, as mentioned above, and this key fact permits us to prove a spectral gap for the relevant transfer operator, which leads to exponential decay of correlations for the measure of maximal entropy $\mu_*$. However, there are two additional difficulties in the current setting that are not present in Sinai billiards. - We do not assume that the singularity curves for our map $T$ satisfy the [*continuation of singularities*]{} property enjoyed by billiards. - We do not assume the map is associated with a continuous flow. Point (i) creates significant complications in the study of the rate of growth of $\# {\mathcal{M}}_0^n$, the number of maximal, connected domains of continuity of $T^n$. 
In particular, the submultiplicative property of $\# {\mathcal{M}}_0^n$ proved in [@max Lemma 3.3], and often exploited in that work, fails in the present context. Indeed, the uniform exponential upper and lower bounds on $\# {\mathcal{M}}_0^n$ proved in Proposition \[prop:M0n\] are completed only after the spectral gap for the operator ${\mathcal{L}}$ is established. Point (ii) has several consequences. The first is that the continuous flow provides a linear bound on the growth in complexity, which is exploited in [@max]. In the present work, this property is replaced by the complexity assumption (P1) introduced in Section \[sec:pw\]; while the growth in complexity may be exponential for our class of maps, it is slow relative to the minimum hyperbolicity constant for the map (and therefore also relative to $h_*$ by Lemma \[lem:growth\](d)). The second consequence of (ii) is that in [@max], there is a positive minimum distance between orbits that belong to different elements of ${\mathcal{M}}_0^n$. In the present context this may fail, so in Section \[sec:pw\] we define an adapted metric which we use to define the dynamical Bowen balls instrumental in the estimation of the entropy of $\mu_*$ in Section \[sec:max\]. The structure of the paper is as follows. We begin by defining in Definition \[def:h\_\*\] the exponential rate of growth in complexity, $h_*$, which counts the number of domains of continuity ${\mathcal{M}}_0^n$ of $T^n$. This quantity dominates the measure-theoretic entropies of the invariant measures (Theorem \[thm:initial\]). We then proceed to study the action of a weighted transfer operator, defined in Section \[sec:transfer\]. The Banach spaces we use are similar to those defined in [@demers; @liv] (not [@max]) for this class of maps, yet the operator has significant differences from the transfer operator with respect to the SRB measure studied in [@demers; @liv]. 
By proving a series of growth and fragmentation lemmas in Sections \[sec:growth\] and \[sec:lower\] that control the prevalence of short and long connected components of $T^{-n}W$ for local stable manifolds $W$, we are able to establish that the operator has a spectral gap in Section \[sec:spec\]. Finally, in Section \[sec:max\], we construct a measure $\mu_*$ out of the left and right eigenvectors of the transfer operator and show that it has exponential decay of correlations and that it is the unique invariant measure with entropy equal to $h_*$. The properties of the measure $\mu_*$ are summarized in Theorem \[thm:mu\]. Setting, Definitions and Results {#setting} ================================ In this section, we introduce a set of formal assumptions on our class of piecewise hyperbolic maps and state the principal results of the paper. Piecewise Hyperbolic Maps {#sec:pw} ------------------------- Let $M$ be a compact two-dimensional Riemannian manifold, possibly with boundary and not necessarily connected, and let $T:M \circlearrowleft$ be a piecewise uniformly hyperbolic map in the sense described below. There exist a finite number of pairwise disjoint open regions $\{ M^+_i \}_i$ such that $\cup_i \overline{M^+_i} = M$ and $\partial M^+_i$ comprises finitely many ${\mathcal{C}}^1$ curves of finite length. We will refer to ${\mathcal{S}}^+ = M \setminus \cup_i M_i^+$ as the singularity set for $T$. Define $M_i^- = T(M^+_i)$. We assume that $\cup_i \overline{M^-_i} = M$ and refer to the set ${\mathcal{S}}^- = M \backslash \cup_i M_i^-$ as the singularity set for $T^{-1}$. We require that $T\in\operatorname{Diff}^2(M \backslash {\mathcal{S}}^+,M \backslash {\mathcal{S}}^-)$ and that on each $M^+_i$, $T$ has a ${\mathcal{C}}^2$ extension[^3] to $\overline{M_i^+}$. Since the extension of $T$ is defined on $\partial M_i^+$, we will write $T({\mathcal{S}}^+)$ to denote the set of images of these boundary curves (on which the extension of $T$ may be multi-valued). 
In this notation, $T({\mathcal{S}}^+) = {\mathcal{S}}^-$ and $T^{-1}({\mathcal{S}}^-) = {\mathcal{S}}^+$. On each $M_i^+$, $T$ is uniformly hyperbolic: i.e., there exist constants $\Lambda>1$, $\kappa \in (0,1)$ and two continuous $DT$-strictly-invariant families of cones $C^s$ and $C^u$ defined on $M\backslash ({\mathcal{S}}^+ \cup \partial M)$ which satisfy $$\label{eq:exp def} \begin{split} & \inf_{x\in M\backslash {\mathcal{S}}^+} \; \inf_{v \in C^u} \frac{\| DTv \|}{\|v\|} \; \ge \Lambda \, , \quad \inf_{x\in M\backslash {\mathcal{S}}^-} \; \inf_{v \in C^s} \frac{\| DT^{-1}v \|}{\|v\|} \; \ge \; \Lambda \, , \\ & \qquad \qquad \mbox{and} \quad \kappa \; := \; \inf_{x\in M\backslash {\mathcal{S}}^+} \; \inf_{v \in C^s} \frac{\| DTv \|}{\|v\|} \, . \end{split}$$ The strict invariance of the cone field together with the smoothness properties of the map implies that the stable and unstable directions are well-defined for each point whose trajectory does not meet a singularity line. In Section \[sec:admissible\], we define narrower cones with the same names and refer to them as the stable and unstable cones of $T$ respectively. We assume that the tangent vectors to the singularity curves in ${\mathcal{S}}^-$ are bounded away from $C^s$ and that those of ${\mathcal{S}}^+$ are bounded away from $C^u$. This class of maps is similar to that studied in [@Pesin1; @Li1; @Y98; @demers; @liv]; see also [@LiWo] for the symplectic case. (Doubling boundary points.) \[con:pointwise\] It will be convenient in what follows to have $T$ defined pointwise on $M$, but a priori it is defined only on $\cup_i M_i^+$. Since $T$ is $C^2$ up to the closure of each $M_i^+$, we may extend $T$ to be defined on $\partial M_i^+$, making $T$ multivalued where these boundaries overlap. Following [@Li1], we adopt the convention that the image of any subset of $M$ under $T$ contains all such points, and continue to call this extended space $M$. 
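The cone condition above can be made concrete with a single linear branch. The sketch below is my own toy example (the matrix $A$ and the cone aperture $1/2$ are hypothetical, not drawn from the class of maps considered here): it verifies numerically that $A$ maps the cone $C^u = \{|v_2| \le |v_1|/2\}$ strictly into itself with uniform expansion, and that $A^{-1}$ does the same for $C^s = \{|v_1| \le |v_2|/2\}$.

```python
# Strict cone invariance and uniform expansion for one hyperbolic matrix.
import math

A    = [[3.0, 1.0], [1.0, 1.0]]
Ainv = [[0.5, -0.5], [-0.5, 1.5]]   # inverse of A (det A = 2)

def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])

def norm(v):
    return math.hypot(v[0], v[1])

exp_u, exp_s, slope_u, slope_s = [], [], [], []
for k in range(101):
    t = -0.5 + k / 100.0                 # sweep the cone aperture: |t| <= 1/2
    vu, vs = (1.0, t), (t, 1.0)          # v in C^u, v in C^s
    wu, ws = apply(A, vu), apply(Ainv, vs)
    exp_u.append(norm(wu) / norm(vu))
    exp_s.append(norm(ws) / norm(vs))
    slope_u.append(abs(wu[1] / wu[0]))   # must stay strictly below 1/2
    slope_s.append(abs(ws[0] / ws[1]))   # must stay strictly below 1/2

print("Lambda_u >=", min(exp_u))         # uniform expansion of A on C^u
print("Lambda_s >=", min(exp_s))         # uniform expansion of A^{-1} on C^s
print("strictly invariant:", max(slope_u) < 0.5 and max(slope_s) < 0.5)
```

The minima over the sweep play the role of $\Lambda$; for this matrix both are bounded away from $1$, while the image slopes stay strictly inside the cone.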
We remark that although this convention is made for convenience, it follows from Theorem \[thm:mu\](a) that the measure $\mu_*$ is independent of how $T$ is defined on $\partial M_i^+$. Let $d(\cdot, \cdot)$ denote the Riemannian metric on $M$. The following related metric is better adapted to the dynamics. Define $$\label{eq:bar d} \bar d(x,y) = d(x,y), \quad \mbox{whenever $x,y$ belong to the same component $\overline M_i^+$},$$ and $\bar d(x,y) = 10\, {\mbox{diam}}(M)$ otherwise. Since we have doubled boundary points in $M$ according to Convention \[con:pointwise\], the extended space $M$ is compact in the metric $\bar d$. Denote by ${\mathcal{S}}_n^+ = \cup_{i=0}^{n-1} T^{-i} {\mathcal{S}}^+$ the set of singularity curves for $T^n$ and by ${\mathcal{S}}_n^- = \cup_{i=0}^{n-1} T^i {\mathcal{S}}^-$ the set of singularity curves for $T^{-n}$. Let $K(n)$ denote the maximum number of singularity curves in ${\mathcal{S}}_n^-$ or in ${\mathcal{S}}_n^+$ which intersect at a single point. We make the following assumption regarding the complexity of $T$. Condition (P1) can always be satisfied if $K(n)$ has polynomial growth (as is the case with a Sinai billiard on a torus); however, since (P1) is required only for some fixed $n_0$, it is not necessary to control $K(n)$ for all $n$ in order to verify the condition. \[rem:iterate p1\] If property (P1) holds for $\alpha_0$, then it holds for all $0 < \alpha < \alpha_0$ with the same $n_0$. Notice also that $K(kn_0) \leq K(n_0)^k$ which implies that the inequality in (P1) can be iterated to make $(\Lambda \kappa^{\alpha_0})^{-kn_0} K(kn_0)$ arbitrarily small once (P1) is satisfied for some $n_0$. In Section \[sec:admissible\] we will define a set of admissible stable curves ${\widehat{{\mathcal{W}}}}^s$, with tangent vectors belonging to the stable cone, which we will use to define our norms. For $W \in {\widehat{{\mathcal{W}}}}^s$, let $K_n$ denote the number of smooth connected components of $T^{-n}W$. 
For a fixed $N$, by shrinking the maximum length $\delta_0$ of leaves in ${\widehat{{\mathcal{W}}}}^s$, we can require that $K_N \leq K(N)+1$. This implies that choosing $N=kn_0$, we can make $(\Lambda \kappa^{\alpha_0})^{-N}K_N$ arbitrarily small. \[convention: n\_0=1\] In what follows, we will assume that $n_0 = 1$. If this is not the case, we may always consider a higher iterate of $T$ for which this is so by assumption (P1). We then choose $\delta_0$ small enough that $K_1 \Lambda^{-1} \kappa^{-\alpha_0}=:\rho<1$. We also assume the following. Property (P1) is standard for piecewise hyperbolic maps, and has been used in [@Pesin1; @Li1; @Y98; @demers; @liv]. The most common form is only to require the complexity bound in one direction, for example on ${\mathcal{S}}_n^-$ in [@Li1; @demers; @liv]. Here, we assume the symmetric version on both ${\mathcal{S}}_n^-$ and ${\mathcal{S}}_n^+$ in order to prove the super-multiplicativity property for $\# {\mathcal{M}}_0^n$, Proposition \[prop:super\]. In fact, the requirement for ${\mathcal{S}}_n^+$ is used only in the proof of Lemma \[lem:long elements\]. It follows from the piecewise hyperbolicity of $T$ and (P1) that $T$ admits an SRB measure [@Pesin1 Theorem 1]. The requirement that ${\mu_{\tiny{\mbox{SRB}}}}$ be smooth in Property (P2) is less essential to our argument. We use ${\mu_{\tiny{\mbox{SRB}}}}$ as our reference measure rather than the Riemannian volume $m$ in order to simplify the estimates involving the transfer operator. Assuming that ${\mu_{\tiny{\mbox{SRB}}}}$ is smooth allows us to prove the embedding lemma, Lemma \[lem:embed\], connecting our Banach spaces to the standard spaces of distributions. Our assumptions on the hyperbolicity of $T$ imply the following uniform expansion and bounded distortion properties along stable curves, which we record for future use. 
There exists $C_e>0$ such that for any $W \in {\widehat{{\mathcal{W}}}}^s$ and $n \ge 0$, $$\label{eq:one grow} |T^{-n}W| \ge C_e \Lambda^n |W| \, ,$$ where $|W|$ denotes the arc length of $W$ in the metric induced by the Riemannian metric on $M$. Suppose $W \in {\widehat{{\mathcal{W}}}}^s$ is such that $T^n$ is smooth on $W$ and $T^iW \in {\widehat{{\mathcal{W}}}}^s$, for $i = 0, \ldots, n$. By $J_WT^n$ we denote the Jacobian of $T^n$ along $W$ with respect to arc length. There exists $C_d>0$, independent of $W$, such that for all $x,y \in W$ and all $n \ge 0$, $$\label{eq:distortion} \left| \frac{J_WT^n(x)}{J_WT^n(y)} - 1 \right| \le C_d d_W(x,y) \, ,$$ where $d_W(\cdot, \cdot)$ denotes arc length distance along $W$. A Definition of Topological Entropy {#sec:ent def} ----------------------------------- Following [@max], for $k, n \ge 0$, let ${\mathcal{M}}_{-k}^n$ denote the set of maximal connected components of $M \setminus ({\mathcal{S}}^+_n \cup {\mathcal{S}}^-_k)$, where we define ${\mathcal{S}}^{\pm}_0 = \emptyset$. Note that by definition, elements of ${\mathcal{M}}_{-k}^n$ are open in $M$. With this notation, ${\mathcal{M}}_0^n$ denotes the set of maximal, open, connected components of $M$ on which $T^n$ is continuous, while ${\mathcal{M}}_{-n}^0$ has the analogous property for $T^{-n}$. (Topological entropy of $T$.) \[def:h\_\*\] Define ${\displaystyle}h_*(T) = \limsup_{n \to \infty} \frac 1n \log \left( \# {\mathcal{M}}_0^n \right)$. By definition, if $A \in {\mathcal{M}}_0^n$, then $T^nA \in {\mathcal{M}}_{-n}^0$, so that $\# {\mathcal{M}}_0^n = \# {\mathcal{M}}_{-n}^0$. Thus $h_*(T) = h_*(T^{-1})$, i.e. this definition is symmetric in time. Indeed, the limsup in the definition is in fact a limit, which follows from Proposition \[prop:M0n\].
In order to connect $h_* = h_*(T)$ to the dynamical refinements of a fixed partition, for each $k \in \mathbb{N}$, define ${\mathcal{P}}_k$ to be the maximal connected components of $M$ on which $T^k$ and $T^{-k}$ are continuous. That is, ${\mathcal{P}}_k$ is the partition of $M$ defined by $M \setminus ({\mathcal{S}}_k^+ \cup {\mathcal{S}}_k^-)$, together with the boundary curves associated to each element, according to Convention \[con:pointwise\]. If we let ${\mathring{{\mathcal{P}}}}_k$ denote the collection of interiors of elements of ${\mathcal{P}}_k$, then we have ${\mathring{{\mathcal{P}}}}_k = {\mathcal{M}}_{-k}^k$. For $n \ge 1$, define ${\mathcal{P}}_k^n = \bigvee_{i=0}^n T^{-i}{\mathcal{P}}_k$. ${\mathcal{P}}_k^n$ is still a pointwise partition of $M$, yet its elements may not be open sets, and it may occur that ${\mathcal{P}}_k^n$ contains isolated points due to multiple boundary curves intersecting at one point. Furthermore, we do not assume that the elements of ${\mathcal{P}}_k^n$ are connected sets.[^4] Thus, although the collection of interiors ${\mathring{{\mathcal{P}}}}_k^n$ is a partition of $M \setminus ({\mathcal{S}}_{k+n}^+ \cup {\mathcal{S}}_k^-)$, it may be that ${\mathring{{\mathcal{P}}}}_k^n \neq {\mathcal{M}}_{-k}^{k+n}$. Our first lemma provides a rough upper bound on the number of isolated points that can be created by refinements of ${\mathcal{P}}_k$. Let $\# {\mathcal{S}}^{\pm}$ denote the number of smooth components of ${\mathcal{S}}^{\pm}$. \[lem:isolated\] For each $k, n \ge 1$, the number of isolated points in ${\mathcal{P}}_k^n$ is at most $$2 (\# {\mathcal{S}}^- + \# {\mathcal{S}}^+) \sum_{j=1}^{k+n} \# {\mathcal{M}}_0^j.$$ By Convention \[con:pointwise\], there are no isolated points in ${\mathcal{P}}_1$. Next, for each $n \ge1$, at time $n$, isolated points in ${\mathcal{P}}_1^n$ can be produced by intersections of corner points in the boundary of ${\mathcal{P}}_1^{n-1}$ with elements of ${\mathcal{S}}^-$.
Moreover, each pair of smooth curves $S \in {\mathcal{S}}_n^+$ and $S' \in {\mathcal{S}}^-$ intersects at most twice per element of ${\mathcal{M}}_0^n$. Thus the number of new isolated points created at time $n$ is at most $2 \# {\mathcal{S}}^- \# {\mathcal{M}}_0^n$. Applying this estimate inductively, we have $$\mbox{number of isolated points in ${\mathcal{P}}_1^n$} \le 2 \# {\mathcal{S}}^- \sum_{j=1}^n \# {\mathcal{M}}_0^j \, .$$ Next, for each $k$, applying a similar inductive argument to $T^{-1}$, we have $$\begin{split} \mbox{number of isolated points in ${\mathcal{P}}_k$}& \le 2 \# {\mathcal{S}}^+ \sum_{j=1}^k \# {\mathcal{M}}_{-j}^0 + 2 \# {\mathcal{S}}^- \sum_{j=1}^k \# {\mathcal{M}}_0^j \\ & \le 2(\# {\mathcal{S}}^+ + \# {\mathcal{S}}^-) \sum_{j=1}^k \# {\mathcal{M}}_0^j \, , \end{split}$$ where we have used the fact that $\# {\mathcal{M}}_0^j = \# {\mathcal{M}}_{-j}^0$. Finally, refining ${\mathcal{P}}_k$, we create at most $2 \# {\mathcal{S}}^- \# {\mathcal{M}}_0^{k+j}$ new isolated points in ${\mathcal{P}}_k^j$ at time $j$. Summing over $j \le n$, we complete the proof of the lemma. Statement of Main Results {#sec:main} ------------------------- Our first result establishes a connection between the rates of growth of $\# {\mathcal{P}}_k^n$ and $\# {\mathcal{M}}_0^n$, and uses this to prove that $h_*$ dominates the measure-theoretic entropies of the invariant measures of $T$. \[thm:initial\] Let $T$ be a piecewise hyperbolic map as defined in Section \[sec:pw\], but not necessarily satisfying conditions (P1) and (P2). - For each $k,n \ge 1$, $\# {\mathring{{\mathcal{P}}}}_k^n \le \#{\mathcal{M}}_{-k}^{k+n}$ and $\# {\mathcal{P}}_k^n \le C (k+n) \# {\mathcal{M}}_{-k}^{k+n}$, for some $C>0$ depending only on $T$. - For all $k \ge 1$, ${\displaystyle}\limsup_{n \to \infty} \frac 1n \log (\# {\mathcal{M}}_{-k}^n) = h_*$.
- ${\displaystyle}\sup_k \lim_{n \to \infty} \frac 1n \log \# {\mathcal{P}}_k^n = \sup_k \lim_{n \to \infty} \frac 1n \log \# {\mathring{{\mathcal{P}}}}_k^n\le h_*$. - ${\displaystyle}h_* \ge \sup \{ h_\mu(T) : \mu \mbox{ is an invariant probability measure for $T$} \} .$ a\) The first inequality is straightforward since by definition, both ${\mathring{{\mathcal{P}}}}_k^n$ and ${\mathcal{M}}_{-k}^{k+n}$ are partitions of $M \setminus ({\mathcal{S}}^+_{n+k} \cup {\mathcal{S}}^-_k)$, yet ${\mathring{{\mathcal{P}}}}_k^n$ may have disconnected components. Thus ${\mathcal{M}}_{-k}^{k+n}$ is a refinement of ${\mathring{{\mathcal{P}}}}_k^n$. The second inequality follows by noting that ${\mathcal{P}}_k^n$ equals ${\mathring{{\mathcal{P}}}}_k^n$ plus isolated points, and then applying Lemma \[lem:isolated\]. b\) The value of the limsup is the same for each $k$ since by definition, $A \in {\mathcal{M}}_{-k}^n$ if and only if $T^kA \in {\mathcal{M}}_0^{n+k}$. Thus $\# {\mathcal{M}}_{-k}^n = \# {\mathcal{M}}_0^{n+k}$. c\) We first remark that $\# {\mathcal{P}}_k^{n+m} \le \# {\mathcal{P}}_k^n \# {\mathcal{P}}_k^m$, and also $\# {\mathring{{\mathcal{P}}}}_k^{n+m} \le \# {\mathring{{\mathcal{P}}}}_k^n \# {\mathring{{\mathcal{P}}}}_k^m$ (which can be proved as in [@max Lemma 3.3]), thus the two limits in part (c) exist by subadditivity. The fact that both limits are bounded by $h_*$ follows from parts (a) and (b) of the theorem. d) Let $\mu$ denote a $T$-invariant probability measure. Due to the uniform hyperbolicity of $T$, the family of partitions ${\mathcal{P}}_k$ is a sufficient family, i.e. ${\mathcal{P}}_{k+1}$ is a refinement of ${\mathcal{P}}_k$, and $\bigvee_{k=1}^\infty {\mathcal{P}}_k$ separates points in $M$. By [@walters Theorem 4.22], $h_\mu(T) = \lim_{k\to \infty} h_\mu(T, {\mathcal{P}}_k)$. 
Next, $$h_\mu(T, {\mathcal{P}}_k) = \lim_{n \to \infty} \frac 1n H_\mu({\mathcal{P}}_k^n) \le \lim_{n \to \infty} \frac 1n \log (\# {\mathcal{P}}_k^n) \le h_* \, ,$$ by part (c) of the Theorem. Thus $h_\mu(T) \le h_*$. \[thm:mu\] Let $T$ be a piecewise hyperbolic map as defined in Section \[sec:pw\], satisfying conditions (P1) and (P2). There exists a $T$-invariant probability measure $\mu_*$ with the following properties. - The measure $\mu_*$ has no atoms, and there exists $C>0$ such that for any ${\varepsilon}>0$, $$\mu_*({\mathcal{N}}_{\varepsilon}({\mathcal{S}}^{\pm})) \le C {\varepsilon}^{1/p} \, ,$$ where $p>1$ is from and ${\mathcal{N}}_{\varepsilon}(\cdot)$ denotes the ${\varepsilon}$-neighborhood of a set in the Riemannian metric on $M$. This implies in particular, that $\mu_*$-a.e. $x \in M$ has a stable and unstable manifold of positive length, and that $x$ approaches ${\mathcal{S}}^{\pm}$ at a subexponential rate. - $\mu_*(O) > 0$ for any open set $O \subset M$. - $(T^n, \mu_*)$ is ergodic for all $n \in \mathbb{Z}^+$. - $\mu_*$ has exponential decay of correlations against Hölder continuous functions. - The measure $\mu_*$ is the unique $T$-invariant probability measure satisfying $h_{\mu_*}(T) = h_*$. Theorem \[thm:mu\] will be proved in Section \[sec:max\]. In particular, items (a)-(c) are proved in Section \[sec:hyper\], item (d) is proved in Proposition \[prop:decay\], and item (e) is proved in Sections \[sec:entropy\] and \[sec:unique\]. Let $T$ be a piecewise hyperbolic map as defined in Section \[sec:pw\], satisfying conditions (P1) and (P2). 
$T$ satisfies the following variational principle: For all $k \ge 0$, $$\lim_{n \to \infty} \frac 1n \log \left( \# {\mathcal{M}}_{-k}^n \right) = h_* = \sup \{ h_\mu(T) : \mu \mbox{ is an invariant probability measure for $T$} \} .$$ The fact that the limit defining $h_*$ exists (rather than simply the $\limsup$ from Definition \[def:h\_\*\]) follows from Proposition \[prop:M0n\], and the independence from $k$ follows from Theorem \[thm:initial\](b). The second equality follows from Theorem \[thm:initial\](c) together with Theorem \[thm:mu\](e). Theorem \[thm:mu\](a) implies that $\int_M |\log d(x, {\mathcal{S}}^{\pm})| \, d\mu_* < \infty$ (see Corollary \[cor:atomic\](c)), so that $\mu_*$ is $T$-adapted in the language of [@lima]. This allows us to make the following connection to the growth of periodic orbits of $T$. Let $P_n(T) = \{ x \in M : \# \{T^kx : k \in \mathbb{Z} \} = n \}$ denote the set of points of prime period $n$ for $T$. \[cor:per\] Under the assumptions of Theorem \[thm:mu\], $\displaystyle \liminf_{n \to \infty} \# P_n(T) e^{-n h_*} = 1$. The proof relies on the construction of a countable Markov partition for hyperbolic maps with singularities carried out in [@lima]. The class of maps in the present paper satisfies conditions (A1)-(A6) in [@lima], which are general enough to admit dispersing billiards. Since $\mu_*$ is $T$-adapted and hyperbolic (see Corollary \[cor:atomic\]), we may apply [@lima Corollary 1.2] to conclude that there exist $p \ge 1$ and $C>0$ such that the number of points of period $np$ for $T$ is at least $C e^{np h_*}$ for all $n \ge 1$. Next, applying [@Bu Main Theorem] as in [@Bu Theorem 1.5], we conclude that we may take $p=1$ and asymptotically, $C=1$ for large $n$. In the course of proving the growth lemmas in Section \[sec:banach\], we establish the following uniform bounds on the growth of $\# {\mathcal{M}}_0^n$, which may be of independent interest, and are needed for the proof of uniqueness in Section \[sec:unique\].
\[prop:M0n\] There exists a constant $C_\#>0$ such that for all $n \ge 1$, $$C_\# e^{n h_*} \le \# {\mathcal{M}}_0^n = \# {\mathcal{M}}_{-n}^0 \le C_\#^{-1} e^{n h_*} \, .$$ The upper bound is Corollary \[cor:upper M\], while the lower bound is Lemma \[lem:lower M\]. \[cor:uniform growth\] There exists $\bar C>0$ such that for all stable curves $W \in {\widehat{{\mathcal{W}}}}^s$ with $|W| \ge \delta_1/3$ and all $n \ge n_1$, where both $\delta_1>0$ and $n_1$ are from , we have $$\bar C e^{n h_*} \le |T^{-n}W| \le \bar C^{-1} e^{n h_*} \, .$$ Let $W \in {\widehat{{\mathcal{W}}}}^s$ with $|W| \ge \delta_1/3$. We use the notation of Section \[sec:growth\] regarding the connected components ${\mathcal{G}}_n(W)$ of $T^{-n}W$. Lemma \[lem:growth\](b), Lemma \[lem:lower\] and Proposition \[prop:M0n\] together yield, $$c_0 C_\# e^{n h_*} \le c_0 \# {\mathcal{M}}_0^n \le \# {\mathcal{G}}_n(W) \le C \delta_0^{-1} \# {\mathcal{M}}_0^n \le C \delta_0^{-1} C_\#^{-1} e^{n h_*} \, .$$ Then on the one hand, $$|T^{-n}W| = \sum_{W_i \in {\mathcal{G}}_n(W)} |W_i| \le \delta_0 \# {\mathcal{G}}_n(W) \, ,$$ since each element of ${\mathcal{G}}_n(W)$ has length at most $\delta_0$, completing the upper bound of the corollary. On the other hand, by , $$|T^{-n}W| = \sum_{W_i \in {\mathcal{G}}_n^{\delta_1}(W)} |W_i| \ge \tfrac{2\delta_1}{9} \# {\mathcal{G}}_n(W) \, ,$$ proving the lower bound. Banach Spaces and Growth Lemmas {#sec:banach} =============================== In this section we define the Banach spaces we will use in the analysis of the transfer operator and prove several key lemmas controlling the growth in complexity of $T^n$. Stable Curves {#sec:admissible} ------------- We begin with a definition of stable curves as graphs of functions in local charts, following [@demers; @liv]. 
We will use the fact that the uniform hyperbolicity of $T$ guarantees the existence of stable $E^s(x)$ and unstable $E^u(x)$ directions in the tangent space ${\mathcal{T}}_xM$ at Lebesgue-almost-every $x \in M$. For $\tau$ sufficiently small, we define the stable cone at $x \in M$ by $$\hat C^s(x) = \{ u + v \in {\mathcal{T}}_xM : u \in E^s(x), v \perp E^s(x), \| v \| \le \tau \| u \| \} \, .$$ Define $\hat C^u(x)$ analogously. These families of cones are strictly invariant, $DT^{-1}(x) \hat C^s(x) \subsetneq \hat C^s(T^{-1}x)$ and $DT(x) \hat C^u(x) \subsetneq \hat C^u(Tx)$. For each $i$, we choose a finite number of coordinate charts $\{ \chi_j \}_{j=1}^L$, whose domains $R_j$ are either $(-r_j, r_j)^2$ if $\chi_j$ maps only to the interior of $M_i^+$, or $(-r_j, r_j)$ restricted to one side of a piecewise ${\mathcal{C}}^1$ curve (the preimage of a piece of $\partial M_i^+$) which we place so that it passes through the origin. For each $j$, $R_j$ has a centroid $x_j$, and $\chi_j$ satisfies, - $D\chi_j(x_j)$ is an isometry; - $D\chi_j(x_j) \cdot (\mathbb{R} \times 0) = E^s(\chi_j(x_j))$; - The ${\mathcal{C}}^2$-norm of $\chi_j$ and its inverse are bounded by $1+\tau$; - There exists $c_j \in (\tau, 2\tau)$ such that the cone $C_j = \{ u+v \in \mathbb{R}^2 : u \in \mathbb{R} \times \{ 0 \}, v \in \{ 0 \} \times \mathbb{R}, \| v \| \le c_j \| u \| \}$ satisfies: For each $y \in R_j$ such that $\chi_j(y) \notin {\mathcal{S}}^-$, $D\chi_j(y) C_j \supset \hat C^s(\chi_j(y))$, and $DT^{-1}(D\chi_j(y)C_j) \subset \hat C^s(T^{-1}(\chi_j(y)))$; - $M_i^+ \subset \cup_{j=1}^L \chi_j(R_j \cap (-\frac{r_j}{2}, \frac{r_j}{2})^2)$. Choose $r_0 \le \frac 12 \min_j r_j$; $r_0$ may be further reduced later, depending on $\delta$. Fix $B < \infty$ and consider the set of functions $$\Xi := \{ F \in {\mathcal{C}}^2([-r,r], \mathbb{R}) : r \in (0, r_0], F(0)=0, |F|_{{\mathcal{C}}^1} \le \tau, |F|_{{\mathcal{C}}^2} \le B \} \, .$$ Define $I_r = (-r,r)$.
For $x \in R_j \cap (-r_j/2, r_j/2)^2$ such that $x + (t, F(t)) \in R_j$ for $t \in I_r$, define $G(x,r,F)(t) := \chi_j(x + (t, F(t)))$ for $t \in I_r$, i.e. $G(x,r,F)$ is the lift of the graph of $F$ to $M$. To abbreviate notation, we will refer to $G(x,r,F)$ as $G_F$. It follows from the construction that $|G_F|_{{\mathcal{C}}^1} \le (1+\tau)^2$ and $|G_F^{-1}|_{{\mathcal{C}}^1} \le 1 + \tau$. Our set of admissible stable curves is defined by, $${\widehat{{\mathcal{W}}}}^s := \{ W = G(x,r,F)(I_r) : x \in R_j \cap (-r_j/2, r_j/2)^2, r \le r_0, F \in \Xi \} \, .$$ If necessary, we reduce $r_0$ so that $\sup_{W \in {\widehat{{\mathcal{W}}}}^s} |W| \le \delta_0$, where $\delta_0$ is the length scale chosen in Convention \[convention: n\_0=1\]. Due to the uniform hyperbolicity of $T$, if $T^{-n}{\widehat{{\mathcal{W}}}}^s$ represents the connected components of $T^{-n}W$ for $W \in {\widehat{{\mathcal{W}}}}^s$, then choosing $B$ large enough, it follows that $T^{-n}{\widehat{{\mathcal{W}}}}^s \subset {\widehat{{\mathcal{W}}}}^s$, up to subdivision of long curves. With this choice of $B$, the set of real local stable manifolds of length at most $\delta_0$, which we denote by ${\mathcal{W}}^s$, satisfies ${\mathcal{W}}^s \subset {\widehat{{\mathcal{W}}}}^s$. Next, we define two notions of distance[^5] which are used in the definition of our norms, namely the strong unstable norm. For two curves $W_1(\chi_{i_1}, x_1, r_1, F_1)$ and $W_2(\chi_{i_2}, x_2, r_2, F_2)$, we define the distance between them to be, $$d_{{\mathcal{W}}^s}(W_1, W_2) = \eta(i_1, i_2) + |x_1-x_2| + |r_1-r_2| + |F_1 - F_2|_{{\mathcal{C}}^1(I_{r_1} \cap I_{r_2})},$$ where $\eta(i_1,i_2) = 0$ if $i_1=i_2$ and $\eta(i_1, i_2) = \infty$ otherwise, i.e. we only compare curves in the same chart.
Given $W_1, W_2$ with $d_{{\mathcal{W}}^s}(W_1, W_2) < \infty$ and two functions $\psi_i \in {\mathcal{C}}^0(W_i)$, we define the distance between them to be $$d_0(\psi_1, \psi_2) = |\psi_1 \circ G_{F_1} - \psi_2 \circ G_{F_2} |_{{\mathcal{C}}^0(I_{r_1} \cap I_{r_2})} \, .$$ Transfer operator {#sec:transfer} ----------------- The main tool we will use to construct the measure of maximal entropy is a weighted transfer operator, ${\mathcal{L}}$. Because we do not have a conformal measure at our disposal a priori, we will define the transfer operator acting on distributions defined via local stable manifolds. Let $\widetilde {\mathcal{W}}^s \subset {\widehat{{\mathcal{W}}}}^s$ denote the set of maximal connected local stable manifolds of $T$. Note that such manifolds have uniformly bounded length due to the finite diameter of $M$ and the assumption that ${\mathcal{S}}^-$ is transverse to $C^s(x)$. Due to the uniform hyperbolicity of $T$, ${\mu_{\tiny{\mbox{SRB}}}}$-almost every point in $M$ has a stable manifold of positive length. For any local stable manifold $W$, and $\alpha \in (0,1]$, define the $\alpha$-Hölder norm of a test function $\psi : M \to \mathbb{C}$ by $$\label{eq:holder def} | \psi |_{{\mathcal{C}}^\alpha(W)} = | \psi |_{{\mathcal{C}}^0(W)} + H_W^\alpha(\psi) := \sup_W |\psi| + \sup_{x \neq y \in W} \frac{|\psi(x) - \psi(y)|}{d_W(x,y)^\alpha} \, ,$$ where $d_W(\cdot, \cdot)$ denotes distance induced by the Riemannian metric restricted to $W$. Let $\tilde {\mathcal{C}}^\alpha(W)$ denote the set of functions in ${\mathcal{C}}^0(W)$ with finite $| \cdot |_{{\mathcal{C}}^\alpha(W)}$ norm. With this notation, $\tilde {\mathcal{C}}^1(W)$ denotes the set of Lipschitz functions on $W$.
Analogously, for each $n \ge 0$, define $H^\alpha_{\widetilde {\mathcal{W}}^s}(\psi) = \sup_{W \in \widetilde {\mathcal{W}}^s} H^\alpha_W(\psi)$, and $$\tilde {\mathcal{C}}^\alpha(\widetilde {\mathcal{W}}^s) = \{ \psi : M \to \mathbb{C} \mid |\psi|_\infty + H^\alpha_{\widetilde {\mathcal{W}}^s}(\psi) < \infty \} \, .$$ The set $\tilde {\mathcal{C}}^\alpha(\widetilde {\mathcal{W}}^s)$ together with the norm $|\psi|_{{\mathcal{C}}^\alpha(\widetilde {\mathcal{W}}^s)} := |\psi|_\infty + H^\alpha_{\widetilde {\mathcal{W}}^s}(\psi)$ is a Banach space. Since stable manifolds cannot be cut under $T^n$, if $W \in \widetilde {\mathcal{W}}^s$, then $T^n W \subset V \in \widetilde {\mathcal{W}}^s$ for each $n \ge 0$. This together with the uniform hyperbolicity of $T$ and implies that if $\psi \in {\mathcal{C}}^\alpha(\widetilde {\mathcal{W}}^s)$, then $\psi \circ T \in {\mathcal{C}}^\alpha(\widetilde {\mathcal{W}}^s)$ (see also ). Then if $f \in (\tilde {\mathcal{C}}^\alpha(\widetilde {\mathcal{W}}^s))^*$ belongs to the dual of ${\mathcal{C}}^\alpha(\widetilde {\mathcal{W}}^s)$, the operator ${\mathcal{L}}: (\tilde {\mathcal{C}}^\alpha(\widetilde {\mathcal{W}}^s))^* \to (\tilde {\mathcal{C}}^\alpha(\widetilde {\mathcal{W}}^s))^*$ is defined by, $$\label{eq:trans def} {\mathcal{L}}f (\psi) = f \left( \frac{\psi \circ T}{J^sT} \right) \quad \forall \psi \in {\mathcal{C}}^\alpha(\widetilde {\mathcal{W}}^s) \, ,$$ where $J^sT$ denotes the stable Jacobian of $T$. Note that by , $J^sT^n \in \tilde {\mathcal{C}}^1(\widetilde {\mathcal{W}}^s)$ for each $n \ge 1$. If $f \in {\mathcal{C}}^0(M)$, then we identify $f$ with a signed measure absolutely continuous with respect to ${\mu_{\tiny{\mbox{SRB}}}}$. We denote this action by, $$f(\psi) = \int_M \psi \, f \, d{\mu_{\tiny{\mbox{SRB}}}}\, ,$$ for $\psi \in {\mathcal{C}}^0(M)$. With this identification, we consider ${\mathcal{C}}^0(M) \subset (\tilde {\mathcal{C}}^\alpha(\widetilde {\mathcal{W}}^s))^*$. 
Then also by , for any $n \ge 1$, ${\mathcal{L}}^n f$ is absolutely continuous with respect to ${\mu_{\tiny{\mbox{SRB}}}}$ with density, $$\label{eq:trans} {\mathcal{L}}^n f = \frac{f \circ T^{-n}}{J^sT^n \circ T^{-n}} \, .$$ Definition of Norms {#sec:norms} ------------------- Let ${\mathcal{W}}^s$ denote those local stable manifolds having length at most $\delta_0$, where $\delta_0$ is from Convention \[convention: n\_0=1\]. Note that ${\mathcal{W}}^s \subset {\widehat{{\mathcal{W}}}}^s$, yet ${\mathcal{W}}^s \not\subset \widetilde {\mathcal{W}}^s$ since $\widetilde {\mathcal{W}}^s$ contains only maximal local stable manifolds (which are necessarily disjoint), while ${\mathcal{W}}^s$ contains stable manifolds of any length less than $\delta_0$, many of which may overlap. We will define our norms by integrating on elements of ${\mathcal{W}}^s$ against Hölder continuous test functions. For $W \in {\mathcal{W}}^s$ and $\alpha >0$, let ${\mathcal{C}}^\alpha(W)$ denote the closure of $\tilde {\mathcal{C}}^1(W)$ in the ${\mathcal{C}}^\alpha$ norm, defined in .[^6] In this notation, then ${\mathcal{C}}^1(W) = \tilde {\mathcal{C}}^1(W)$. Now given a function $f \in {\mathcal{C}}^1(M)$, define the weak norm of $f$ by $$|f|_w = \sup_{W \in {\mathcal{W}}^s} \sup_{\substack{\psi \in {\mathcal{C}}^1(W) \\ |\psi|_{{\mathcal{C}}^1(W)} \le 1}} \int_W f \, \psi \, dm_W \, ,$$ where $m_W$ denotes arc length along $W$. Let $|W| = m_W(W)$. 
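As an aside, the density formula for ${\mathcal{L}}^n f$ at the end of the previous subsection can be checked directly; the following is a sketch for $n=1$, using only the $T$-invariance of ${\mu_{\tiny{\mbox{SRB}}}}$ (substituting $y = Tx$):

```latex
{\mathcal{L}}f(\psi)
= \int_M \frac{\psi \circ T}{J^sT}\, f \, d{\mu_{\tiny{\mbox{SRB}}}}
= \int_M \psi \cdot \frac{f \circ T^{-1}}{J^sT \circ T^{-1}} \, d{\mu_{\tiny{\mbox{SRB}}}} \, ,
```

so that ${\mathcal{L}}f$ acts on ${\mathcal{C}}^0(M)$ with density $\frac{f \circ T^{-1}}{J^sT \circ T^{-1}}$; iterating gives the formula for general $n \ge 1$.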
Next, choose $\alpha, \beta < 1$ and $p > 1$ such that $$\label{eq:restrict} 0 < 2\beta \le 1/p \le 1 - \alpha \le \alpha_0, \qquad \mbox{and} \quad 1/p < \alpha \, .$$ Define the strong stable norm of $f$ by $$\| f \|_s = \sup_{W \in {\mathcal{W}}^s} \sup_{\substack{\psi \in {\mathcal{C}}^\alpha(W) \\ |\psi|_{{\mathcal{C}}^\alpha(W)} \le |W|^{-1/p}}} \int_W f \, \psi \, dm_W \, .$$ Recalling the notion of distance $d_{{\mathcal{W}}^s}(\cdot , \cdot)$ between curves $W \in {\mathcal{W}}^s$ and the distance $d_0(\cdot, \cdot)$ between test functions on nearby curves defined in Section \[sec:admissible\] and fixing ${\varepsilon}_0 \le r_0$, we define the strong unstable norm of $f$ by, $$\| f \|_u = \sup_{{\varepsilon}\le {\varepsilon}_0} \sup_{\substack{W_1, W_2 \in {\mathcal{W}}^s \\ d_{{\mathcal{W}}^s}(W_1, W_2) \le {\varepsilon}}} \sup_{\substack{|\psi_i|_{{\mathcal{C}}^1(W_i)} \le 1 \\ d_0(\psi_1, \psi_2) = 0}} {\varepsilon}^{-\beta} \left| \int_{W_1} f \, \psi_1 \, dm_{W_1} - \int_{W_2} f \, \psi_2 \, dm_{W_2} \right| \, .$$ Define the strong norm of $f$ by $\| f \|_{{{\mathcal B}}} = \| f\|_s + c_u \| f \|_u$, where $c_u >0$ is a constant to be chosen in the proof of Lemma \[lem:radius\]. Finally, our weak space ${{\mathcal B}}_w$ is defined to be the completion of ${\mathcal{C}}^1(M)$ in the weak norm, $| \cdot |_w$, while our strong space ${{\mathcal B}}$ is defined to be the completion of ${\mathcal{C}}^1(M)$ in the strong norm $\| \cdot \|_{{{\mathcal B}}}$. \[rem:why real\] The definition of our spaces ${{\mathcal B}}$ and ${{\mathcal B}}_w$ is nearly the same as that in [@demers; @liv Section 2.2], the key difference being that the norms in [@demers; @liv] integrate along cone-stable curves ${\widehat{{\mathcal{W}}}}^s$, while our norms here integrate on local stable manifolds ${\mathcal{W}}^s$.
This change is necessary since the potential for our weighted transfer operator, $1/J^sT$, is Hölder continuous along real stable manifolds, yet may only be measurable along arbitrary stable curves. By restricting our norms to this smaller set of curves, we are able to prove the essential Lasota-Yorke inequalities, Proposition \[prop:LY\]. Preliminary facts about the Banach spaces {#sec:prelim} ----------------------------------------- \[lem:piece\] Let ${\mathcal{Q}}$ be a (mod 0 w.r.t. ${\mu_{\tiny{\mbox{SRB}}}}$) finite partition of $M$ into open, simply connected sets such that there exist constants $\bar K, C_{{\mathcal{Q}}} > 0$ such that for each $Q \in {\mathcal{Q}}$, and $W \in {\mathcal{W}}^s$, $Q \cap W$ comprises at most $\bar K$ connected components and for any ${\varepsilon}>0$, $m_W({\mathcal{N}}_{\varepsilon}(\partial Q) \cap W) \le C_{{\mathcal{Q}}} {\varepsilon}^{1/2}.$ - Let $\gamma > \beta/(1-\beta)$ and suppose ${\varphi}$ is a function on $M$ such that $\sup_{Q \in {\mathcal{Q}}} |{\varphi}|_{{\mathcal{C}}^\gamma(Q)} < \infty$. Then ${\varphi}\in {{\mathcal B}}$. - There exists $C>0$ such that if ${\varphi}$ is such that $\sup_{Q \in {\mathcal{Q}}} |{\varphi}|_{{\mathcal{C}}^1(Q)} < \infty$ and $f \in {{\mathcal B}}$, then ${\varphi}f \in {{\mathcal B}}$ and $\| {\varphi}f \|_{{{\mathcal B}}} \le C \| f \|_{{{\mathcal B}}} \sup_{Q \in {\mathcal{Q}}} |{\varphi}|_{{\mathcal{C}}^1(Q)}$. To prove (a), a function ${\varphi}$ as in the statement of the lemma can be approximated by ${\mathcal{C}}^1$ functions using mollification precisely as in [@demzhang14 Lemma 3.5]. Part (b) follows along similar lines using [@demzhang14 Lemma 5.3]. Both proofs use the restrictions in we have assumed for the parameters appearing in the norms. In particular, we need $\beta \le 1/(2p)$, rather than simply $\beta \le 1/p$, due to the weak transversality condition assumed on $\partial {\mathcal{Q}}$.
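For orientation, here is a minimal worked example with the strong stable norm (a sketch, not used elsewhere): for a constant function $f \equiv c$, any $W \in {\mathcal{W}}^s$ and any test function $\psi \in {\mathcal{C}}^\alpha(W)$ with $|\psi|_{{\mathcal{C}}^\alpha(W)} \le |W|^{-1/p}$,

```latex
\left| \int_W c \, \psi \, dm_W \right|
\le |c| \, |W| \, |\psi|_{{\mathcal{C}}^0(W)}
\le |c| \, |W|^{1-1/p}
\le |c| \, \delta_0^{1-1/p} \, ,
```

so $\| c \|_s \le |c| \, \delta_0^{1-1/p}$. The weight $|W|^{-1/p}$ enlarges the class of test functions on short curves, which is why $\| \cdot \|_s$ dominates $| \cdot |_w$ (cf. the proof of Lemma \[lem:include\]) while constants, and more generally the piecewise Hölder functions of Lemma \[lem:piece\](a), still have finite strong norm.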
\[lem:embed\] Let $f \in {\mathcal{C}}^1(M)$ and $\psi \in \tilde {\mathcal{C}}^1(\widetilde {\mathcal{W}}^s)$. Then, $$|f(\psi)| = \left| \int_M f \, \psi \, d{\mu_{\tiny{\mbox{SRB}}}}\right| \le C |f|_w (|\psi|_\infty + H^1_{\widetilde {\mathcal{W}}^s}(\psi)) \, .$$ Let $f \in {\mathcal{C}}^1(M)$ and $\psi \in \tilde {\mathcal{C}}^1(\widetilde {\mathcal{W}}^s)$. We will estimate $$f(\psi) = \int_M f \, \psi \, d{\mu_{\tiny{\mbox{SRB}}}}\, .$$ To this end, we choose a foliation ${\mathcal{F}}= \{ W_\xi \}_{\xi \in \Xi} \subset {\mathcal{W}}^s$ of maximal local stable manifolds subdivided according to the length scale $\delta_0$. We then disintegrate the measure ${\mu_{\tiny{\mbox{SRB}}}}$ into conditional measures ${\mu_{\tiny{\mbox{SRB}}}}^\xi$ on $W_\xi \in {\mathcal{F}}$ and a factor measure ${\hat \mu_{\tiny{\mbox{SRB}}}}(\xi)$ on the index set $\Xi$ of stable manifolds. Since ${\mu_{\tiny{\mbox{SRB}}}}$ is smooth by assumption $(P2)$, it follows from [@Pesin1 Proposition 6] (see also [@chernov; @pw eq. (3.7)]) that the conditional measures ${\mu_{\tiny{\mbox{SRB}}}}^\xi$ are absolutely continuous with respect to arc length, $d{\mu_{\tiny{\mbox{SRB}}}}^\xi = |W_\xi|^{-1} g_\xi dm_{W_\xi}$, where $g_\xi$ is given by[^7] $$\frac{g_\xi(x)}{g_\xi(y)} = \lim_{n \to \infty} \frac{J_{W_\xi}T^n(x)}{J_{W_\xi}T^n(y)} \quad \mbox{for all } x,y \in W_\xi \, .$$ It follows from a standard estimate[^8] and that $g_\xi$ is uniformly log-Lipschitz continuous on $W_\xi$, i.e. 
there exists $C_g \ge 1$ such that $$\label{eq:log g} 0 < C_g^{-1} \le \inf_{\xi \in \Xi} \inf_{W_\xi} g_\xi \le \sup_{\xi \in \Xi} |g_\xi|_{{\mathcal{C}}^1(W_\xi)} \le C_g < \infty \, .$$ Using this disintegration, we write, $$\label{eq:decomp} \begin{split} |f(\psi)| & = \left| \int_{\xi \in \Xi} \int_{W_\xi} f \, \psi \, g_\xi \, |W_\xi|^{-1} dm_{W_\xi} d{\hat \mu_{\tiny{\mbox{SRB}}}}(\xi) \right| \\ & \le \int_{\xi \in \Xi} |f|_w |\psi|_{{\mathcal{C}}^1(W_\xi)} |g_\xi|_{{\mathcal{C}}^1(W_\xi)} |W_\xi|^{-1} d{\hat \mu_{\tiny{\mbox{SRB}}}}(\xi) \\ & \le C_g |f|_w \big(|\psi|_\infty + H^1_{\widetilde {\mathcal{W}}^s}(\psi) \big) \int_{\xi \in \Xi} |W_\xi|^{-1} d{\hat \mu_{\tiny{\mbox{SRB}}}}(\xi) \, . \end{split}$$ To bound this last integral, we will apply some results of [@chernov; @pw], which studies hyperbolic maps with singularities in an axiomatic context (Assumptions (H.1)-(H.5) in that paper), which include the class of maps in the present paper, in addition to many dispersing and semi-dispersing billiards. Indeed, the final integral in is precisely the ${\mathcal{Z}}$-function, ${\mathcal{Z}}_1({\mathcal{F}})$, defined in [@chernov; @pw eq. (4.7)] which governs the average length of stable manifolds in the family ${\mathcal{F}}$. (See also [@chernov; @book Exercise 7.15 and Proposition 7.17] for a similar application of these ideas.) The parameters $p$ and $q$ in [@chernov; @pw] are both equal to 1 in our context, due to our property (P1) and Convention \[convention: n\_0=1\], which imply that $T$ satisfies the one-step expansion condition, [@chernov; @pw Condition (H.5)] with parameter $q=1$, $$\label{eq:one step} \sup_{W \in {\mathcal{W}}^s} \sum_{V_i \subset T^{-1}W} \left( \frac{|W|}{|V_i|} \right)^q \, \frac{|TV_i|}{|W|} \le K_1 \Lambda^{-1} \le \rho <1\, ,$$ where $V_i$ are the maximal, connected components of $T^{-1}W$. 
The required bound on ${\mathcal{Z}}_1({\mathcal{F}})$ follows from [@chernov; @pw Lemma 4] (again with $q=1$) since ${\mu_{\tiny{\mbox{SRB}}}}$ is obtained as the limit of standard pairs with finite-valued ${\mathcal{Z}}$-function. \[lem:include\] There is a sequence of continuous inclusions, $${\mathcal{C}}^1(M) \hookrightarrow {{\mathcal B}}\hookrightarrow {{\mathcal B}}_w \hookrightarrow ({\mathcal{C}}^\alpha({\mathcal{W}}^s))^* \, .$$ The first two inclusions are injective. The continuity of the first inclusion follows from Lemma \[lem:piece\] and its injectivity is obvious. The continuity of the second inclusion follows from $| \cdot |_w \le \| \cdot \|_s$. Its injectivity is a result of the fact that we have defined $\| \cdot \|_s$ with respect to ${\mathcal{C}}^\alpha(W)$ rather than $\tilde {\mathcal{C}}^\alpha(W)$, and ${\mathcal{C}}^1(W)$ is dense in ${\mathcal{C}}^\alpha(W)$. Finally, the continuity of the third inclusion follows from Lemma \[lem:embed\]. \[lem:compact\] The unit ball of ${{\mathcal B}}$ is compactly embedded in ${{\mathcal B}}_w$. The lemma follows from [@demers; @liv Lemma 3.5]. The fact that [@demers; @liv Lemma 3.5] uses the family of admissible curves ${\widehat{{\mathcal{W}}}}^s$ while we use the smaller set ${\mathcal{W}}^s \subset {\widehat{{\mathcal{W}}}}^s$ does not affect the argument since the family of functions defining ${\mathcal{W}}^s$ in each chart is still compact in the ${\mathcal{C}}^1$-metric. Growth Lemmas {#sec:growth} ------------- In this section, we prove several growth lemmas which will be instrumental in establishing precise upper and lower bounds on the spectral radius of our transfer operator. Given a curve $W \in {\widehat{{\mathcal{W}}}}^s$, let ${\mathcal{G}}_1(W)$ denote the maximal connected components of $T^{-1}W$ on which $T$ is smooth, with long pieces subdivided so that they have length between $\delta_0/2$ and $\delta_0$.
In particular, elements of ${\mathcal{G}}_1(W)$ must belong to a single element of ${\mathcal{M}}_0^1$, i.e. to a single component $M_i^+$ of $M$. Inductively, define ${\mathcal{G}}_n(W)$ to denote the collection of maximal connected components of $T^{-1}V$, where $V \in {\mathcal{G}}_{n-1}(W)$, again subdividing long pieces into curves of length between $\delta_0/2$ and $\delta_0$. We call ${\mathcal{G}}_n(W)$ the $n$th generation of $W$. For each $n$, let $L_n(W)$ denote those elements of ${\mathcal{G}}_n(W)$ having length at least $\delta_0/3$. Let ${\mathcal{I}}_n(W)$ denote those elements $W_i \in {\mathcal{G}}_n(W)$ such that for each $0 \le k \le n-1$, $T^kW_i \subset V \in {\mathcal{G}}_{n-k}(W)$ and $|V| < \delta_0/3$, i.e. ${\mathcal{I}}_n(W)$ represents those elements in ${\mathcal{G}}_n(W)$ that have always been short from time $1$ to time $n$. \[lem:growth\] There exists $C>0$ such that for all $W \in {\widehat{{\mathcal{W}}}}^s$, and all $n \ge 0$, - ${\displaystyle}\# {\mathcal{I}}_n(W) \le K_1^n \le \rho^n \kappa^{\alpha_0 n} \Lambda^n \;$ ; - ${\displaystyle}\# {\mathcal{G}}_n(W) \le C \delta_0^{-1} \# {\mathcal{M}}_0^n \;$ ; - ${\displaystyle}\sum_{W_i \in {\mathcal{G}}_n(W)} \frac{|W_i|^{1/p}}{|W|^{1/p}} \le C \delta_0^{-1+1/p} \kappa^{-n/p} (\# {\mathcal{M}}_0^n)^{1-1/p} \;$ ; - ${\displaystyle}\# {\mathcal{M}}_0^n \ge C \delta_0 \Lambda^n \,$ . \(a) This estimate follows from the fact that curves $W_i \in {\mathcal{I}}_n(W)$ have always been contained in a short element of ${\mathcal{G}}_{n-k}(W)$ for each $k$ between 0 and $n-1$. Thus property (P1) (recalling also Convention \[convention: n\_0=1\]) can be applied inductively in $k$ to each element of ${\mathcal{I}}_{n-k}(W)$, yielding the claimed bound on the cardinality of these elements. \(b) The bound is trivial since each element of ${\mathcal{G}}_n(W)$ belongs by definition to one element of ${\mathcal{M}}_0^n$.
Since the stable diameter of each component of ${\mathcal{M}}_0^n$ is uniformly bounded in $n$, the connected components of $T^{-n}W$ are subdivided into at most $C \delta_0^{-1}$ curves to form the elements of ${\mathcal{G}}_n(W)$, for some uniform $C>0$. \(c) Note that for $W_i \in {\mathcal{G}}_n(W)$, using , $$|T^nW_i| = \int_{W_i} J_{W_i}T^n \, dm_{W_i} \ge |W_i| \kappa^n \, .$$ Thus, $$\begin{split} \sum_{W_i \in {\mathcal{G}}_n(W)} \frac{|W_i|^{1/p}}{|W|^{1/p}} & \le \kappa^{-n/p} \sum_{W_i \in {\mathcal{G}}_n(W)} \frac{|T^nW_i|^{1/p}}{|W|^{1/p}} \le \kappa^{-n/p} \left( \sum_{W_i \in {\mathcal{G}}_n(W)} 1 \right)^{1-1/p} \\ & \le C \kappa^{-n/p} \delta_0^{-1+1/p} (\# {\mathcal{M}}_0^n )^{1-1/p} \, , \end{split}$$ where we have used the Hölder inequality and part (b) of the lemma. \(d) Applying part (b) of the lemma, we have $$|T^{-n}W| = \sum_{W_i \in {\mathcal{G}}_n(W)} |W_i| \le \delta_0 \# {\mathcal{G}}_n(W) \le C \# {\mathcal{M}}_0^n \, .$$ Then recalling and applying this to $W \in {\mathcal{W}}^s$ with $|W| = \delta_0$ completes the proof of the lemma. Next we proceed to show that most elements of ${\mathcal{G}}_n(W)$ are long, if the length scale is chosen appropriately. For $\delta \in (0, \delta_0)$ and $W \in {\widehat{{\mathcal{W}}}}^s$, define ${\mathcal{G}}_n^\delta(W)$ to be the smooth components of $T^{-n}W$, with pieces longer than $\delta$ subdivided to have length between $\delta/2$ and $\delta$, i.e. ${\mathcal{G}}_n^\delta(W)$ is defined precisely like ${\mathcal{G}}_n(W)$, but with $\delta_0$ replaced by $\delta$. Define $L_n^\delta(W)$ to be the set of curves in ${\mathcal{G}}_n^\delta(W)$ having length at least $\delta/3$, and let $S^\delta_n(W) = {\mathcal{G}}_n^\delta(W) \setminus L_n^\delta(W)$. Similarly, let ${\mathcal{I}}_n^\delta(W)$ denote those elements of $S^\delta_n(W)$ that have no ancestors of length at least $\delta/3$. 
\[lem:most grow\] For all ${\varepsilon}>0$, there exist $\delta \in (0, \delta_0)$ and $n_1 \in \mathbb{N}$ such that for all $n \ge n_1$, $$\# L_n^\delta(W) \ge (1-{\varepsilon}) \# {\mathcal{G}}_n^\delta(W) \, , \quad \mbox{ for all $W \in {\widehat{{\mathcal{W}}}}^s$ with $|W| \ge \delta/3$.}$$ Fix ${\varepsilon}>0$ and by Property (P1) and Remark \[rem:iterate p1\], choose $n_1$ sufficiently large that $3 C_e^{-1} (K(n_1) + 1)\Lambda^{-n_1} < {\varepsilon}$ and $\Lambda^{n_1} \ge e$, where $C_e \le 1$ is from . Next, choose $\delta>0$ sufficiently small that if $W \in {\widehat{{\mathcal{W}}}}^s$ with $|W| \le \delta$, then $T^{-n}W$ comprises at most $K(n)+1$ smooth components of length at most $\delta_0$ for all $n \le 2n_1$. Now let $W \in {\widehat{{\mathcal{W}}}}^s$ with $|W| \ge \delta/3$. We shall prove the following inequality for $n \ge n_1$, $$\# S_n^\delta(W) \le \frac{{\varepsilon}}{1-{\varepsilon}} \# {\mathcal{G}}_n^\delta(W) \, .$$ Applying this inequality with ${\varepsilon}'>0$ sufficiently small that $\frac{{\varepsilon}'}{1-{\varepsilon}'} < {\varepsilon}$ in place of ${\varepsilon}$ then implies the required estimate, since $\# L_n^\delta(W) = \# {\mathcal{G}}_n^\delta(W) - \# S_n^\delta(W) \ge \big( 1 - \tfrac{{\varepsilon}'}{1-{\varepsilon}'} \big) \# {\mathcal{G}}_n^\delta(W) \ge (1-{\varepsilon}) \# {\mathcal{G}}_n^\delta(W)$. For $n \ge n_1$, write $n = kn_1 + \ell$ for some $0 \le \ell < n_1$. If $k=1$, the above inequality follows immediately since there are at most $K(n_1+\ell)+1$ elements of $S^\delta_{n_1+\ell}(W)$ by choice of $\delta$, while by , $|T^{-n_1-\ell}W| \ge C_e \Lambda^{n_1 +\ell} |W| \ge C_e \Lambda^{n_1+\ell} \delta/3$. Thus ${\mathcal{G}}_n^\delta(W)$ must contain at least $C_e \Lambda^{n_1+\ell}/3$ curves since each has length at most $\delta$. Thus, $$\frac{\# S_{n_1+\ell}^\delta(W)}{\# {\mathcal{G}}_n^\delta(W)} \le 3 C_e^{-1} \frac{K(n_1+\ell)+1}{\Lambda^{n_1+\ell}} \le 3C_e^{-1} \frac{K(n_1)+1}{\Lambda^{n_1}} < {\varepsilon}\, ,$$ where we have used that $\frac{1}{n_1} \le \log \Lambda$, which holds by our assumption that $\Lambda^{n_1} \ge e$, in order to drop the term involving $\ell$.
On the other hand, if $k >1$ then we split $n$ into $k-1$ blocks of length $n_1$ and one block of length $n_1+\ell$. We group elements $W_i \in S^\delta_{kn_1+\ell}(W)$ by most recent long ancestor $V_j \in L^\delta_{t n_1}(W)$: $t$ is the greatest index $\le k-1$ such that $T^{(k-t)n_1+\ell}W_i \subset V_j$ and $V_j \in L_{t n_1}^\delta(W)$. Note that we only consider ancestors occurring in blocks of length $n_1$. It is irrelevant for our estimate whether $W_i$ has a long ancestor at an intermediate time. Since each $|V_j| \ge \delta/3$, it follows that ${\mathcal{G}}_{(k-t)n_1+ \ell}^\delta(V_j)$ must contain at least $C_e \Lambda^{(k-t)n_1}/3$ curves of length at most $\delta$. Thus using Lemma \[lem:growth\](a), we have $$\label{eq:est short} \begin{split} \frac{\# S_{kn_1+\ell}^\delta(W)}{\# {\mathcal{G}}_{kn_1+\ell}^\delta(W)} & = \frac{\# {\mathcal{I}}_{kn_1+\ell}^\delta(W)}{\# {\mathcal{G}}_{kn_1+\ell}^\delta(W)} + \frac{\sum_{t=1}^{k-1} \sum_{V_j \in L_{tn_1}^\delta(W)} \# {\mathcal{I}}_{(k-t)n_1+\ell}^\delta(V_j) } {\# {\mathcal{G}}_{kn_1+\ell}^\delta(W)} \\ & \le \frac{(K(n_1)+1)^k}{C_e \Lambda^{kn_1}/3} + \sum_{t=1}^{k-1} \frac{ \sum_{V_j \in L_{tn_1}^\delta(W)} (K(n_1)+1)^{k-t}}{\sum_{V_j \in L_{tn_1}^\delta(W)} C_e \Lambda^{(k-t)n_1}/3} \\ & \le 3 C_e^{-1} \sum_{t=1}^k (K(n_1)+1)^t \Lambda^{-tn_1} \le \sum_{t=1}^k {\varepsilon}^t \le \frac{{\varepsilon}}{1-{\varepsilon}} \, . \end{split}$$ The following corollary extends Lemma \[lem:most grow\] to arbitrarily short curves, and is used in Lemma \[lem:leafwise\] to prove the positivity of our maximal eigenvector on all elements of ${\mathcal{W}}^s$.
\[cor:most grow\] There exists $C_2 > 0$ such that for any ${\varepsilon}, \delta$ and $n_1$ as in Lemma \[lem:most grow\], $$\# L_n^\delta(W) \ge (1-2{\varepsilon}) \# {\mathcal{G}}_n^\delta(W) \, , \quad \forall W \in {\widehat{{\mathcal{W}}}}^s, \; \forall n \ge C_2 n_1 \frac{|\log(|W|/\delta)|}{|\log {\varepsilon}|} \, .$$ Fix ${\varepsilon}, \delta$ and $n_1$ from Lemma \[lem:most grow\]. Suppose $W \in {\widehat{{\mathcal{W}}}}^s$ has $|W| < \delta/3$, and let $n > n_1$. We decompose ${\mathcal{G}}_n^\delta(W)$ as in Lemma \[lem:most grow\], and estimate the second sum in precisely as before. The first term on the right hand side of , $\# {\mathcal{I}}_n^\delta(W)/ \# {\mathcal{G}}_n^\delta(W)$, is handled differently. Let $n_2$ denote the least integer $\ell$ such that ${\mathcal{G}}_\ell^\delta(W)$ contains at least one element of length at least $\delta/3$. Since $|T^{-\ell}W| \ge C_e \Lambda^\ell |W|$ by , and $\# {\mathcal{G}}_\ell^\delta(W) \le K_1^\ell$ by (P1) and Convention \[convention: n\_0=1\] while $|T^{-\ell}W| \le \delta_0$, at least one element of ${\mathcal{G}}_\ell^\delta(W)$ must have length at least $\frac{C_e \Lambda^\ell |W|}{K_1^\ell} \ge C_e \rho^{-\ell} |W|$. Thus $$n_2 \le \frac{|\log (3C_e |W| \delta^{-1})|}{|\log \rho|} \, .$$ Then calling $V$ the element of ${\mathcal{G}}_{n_2}^\delta(W)$ having length at least $\delta/3$, we have $$\# {\mathcal{G}}_n^\delta(W) \ge \# {\mathcal{G}}_{n-n_2}^\delta(V) \ge C_e \Lambda^{n-n_2}/3 \, .$$ Thus $$\frac{\# {\mathcal{I}}_n^\delta(W)}{\# {\mathcal{G}}_n^\delta(W)} \le \frac{3 (K(n_1)+1)^{\lfloor n/n_1 \rfloor}}{C_e \Lambda^n} \Lambda^{n_2} \le {\varepsilon}^{\lfloor n/n_1 \rfloor} \Lambda^{n_2} \, .$$ Finally, since $n_2 = \mathcal{O}(|\log (|W|/\delta)|)$, we may choose $C_2$ sufficiently large that if $n \ge C_2 n_1 \frac{|\log (|W|/\delta)|}{|\log {\varepsilon}|}$, then the quantity on the right is at most ${\varepsilon}$, completing the proof of the corollary.
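For the reader's convenience, we indicate how the final condition on $n$ arises (one sufficient, by no means optimal, computation): taking logarithms, the bound ${\varepsilon}^{\lfloor n/n_1 \rfloor} \Lambda^{n_2} \le {\varepsilon}$ holds as soon as $$\Big\lfloor \frac{n}{n_1} \Big\rfloor |\log {\varepsilon}| \ \ge \ n_2 \log \Lambda + |\log {\varepsilon}| \, ,$$ and since $n_2 \le |\log(3C_e |W| \delta^{-1})|/|\log \rho|$, the left hand side dominates the right whenever $n \ge C_2 n_1 \frac{|\log(|W|/\delta)|}{|\log {\varepsilon}|}$, provided $C_2$ is chosen large enough, in terms of the uniform constants $\log \Lambda$, $|\log \rho|$ and $C_e$, to absorb the lower-order additive terms.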
Choosing ${\varepsilon}= 1/3$, we let $\delta_1>0$ and $n_1$ be the corresponding quantities from Lemma \[lem:most grow\]. Fixing this choice of $\delta_1$ and $n_1$, we have $$\label{eq:delta_1} \# L_n^{\delta_1}(W) \ge \tfrac 23 \# {\mathcal{G}}_n^{\delta_1}(W), \quad \mbox{ for all $W \in {\widehat{{\mathcal{W}}}}^s$ with $|W| \ge \delta_1/3$ and all $n \ge n_1$.}$$ Our next lemma shows that a positive fraction of elements of ${\mathcal{M}}_0^n$ and ${\mathcal{M}}_{-n}^0$ have length at least $\delta_1$ in some direction. This will be essential to establishing the lower bounds of Section \[sec:lower\]. For $A \subset M$, let ${\mbox{diam}}^s(A)$ denote the stable diameter of $A$, i.e. the length of the longest stable curve in $A$. Similarly, define the unstable diameter ${\mbox{diam}}^u(A)$ to be the length of the longest unstable curve in $A$. The boundary of the partition defined by ${\mathcal{M}}_{-n}^0$ is comprised of unstable curves belonging to ${\mathcal{S}}^-_n = \cup_{i=0}^{n-1} T^i({\mathcal{S}}^-)$. Similarly, $\partial {\mathcal{M}}_0^n$ is comprised of the stable curves ${\mathcal{S}}^+_n = \cup_{i=0}^{n-1} T^{-i}({\mathcal{S}}^+)$. In what follows, we will find it convenient to invoke Convention \[con:pointwise\] regarding the definition of $T^{\pm 1}$ on each smooth component of ${\mathcal{S}}^{\pm}$. Let $L_u({\mathcal{M}}_{-n}^0)$ denote those elements of ${\mathcal{M}}_{-n}^0$ whose unstable diameter is at least $\delta_1/3$, and let $L_s({\mathcal{M}}_0^n)$ denote those elements of ${\mathcal{M}}_0^n$ whose stable diameter is at least $\delta_1/3$. The following lemma is the analogue of Lemma \[lem:most grow\] for these dynamically defined partitions.
\[lem:long elements\] There exist $C_{n_1}>0$ and $n_2 \ge n_1$ such that for all $n \ge n_2$, $$\# L_s({\mathcal{M}}_0^n) \ge C_{n_1} \delta_1 \# {\mathcal{M}}_0^n \quad \mbox{and} \quad \# L_u({\mathcal{M}}_{-n}^0) \ge C_{n_1} \delta_1 \# {\mathcal{M}}_{-n}^0 \, .$$ We prove the bound for $L_s({\mathcal{M}}_0^n)$. In order to prove the lemma, we will use the fact that the boundary of ${\mathcal{M}}_0^n$ is the set $\cup_{j=0}^{n-1} T^{-j}{\mathcal{S}}^+$. Let $S_s({\mathcal{M}}_0^n)$ denote the elements of ${\mathcal{M}}_0^n$ whose stable diameter is less than $\delta_1/3$. We have ${\mathcal{M}}_0^n = L_s({\mathcal{M}}_0^n) \cup S_s({\mathcal{M}}_0^n)$. Similarly, let $S_s(T^{-j} {\mathcal{S}}^+)$ denote the set of stable curves in $T^{-j} {\mathcal{S}}^+$ whose length is less than $\delta_1/3$. The following sublemma will prove useful for establishing a key claim in the proof. \[sub:cross\] If a smooth stable curve $V_i \in T^{-i}{\mathcal{S}}^+$ intersects a smooth curve $V_j \subset T^{-j}{\mathcal{S}}^+$ for $i<j$, then $V_j$ must terminate on $V_i$. Suppose, seeking a contradiction, that such an intersection occurs for $j >i$ and that $V_j$ crosses $V_i$ rather than terminating on it. Then $T^{i+1}(V_i) \subset {\mathcal{S}}^-$ is an unstable curve, while $T^{i+1}(V_j) \subset {\mathcal{S}}^+_{j-i-1}$ is a stable curve. Thus $T^{i+1}(V_j)$ must cross ${\mathcal{S}}^-$ transversally, and so $T^{i}(V_j)$ will be split into at least two smooth components since ${\mathcal{S}}^-$ is the singularity set for $T^{-1}$. This implies that $V_j$ cannot be a single smooth curve, a contradiction. Using the sublemma, we establish the following claim: $\# S_s({\mathcal{M}}_0^n) \le 2 \sum_{j=0}^{n-1} \# S_s(T^{-j}{\mathcal{S}}^+) + B_1 n$, for some $B_1 >0$. According to the sublemma, if $A \in S_s({\mathcal{M}}_0^n)$, then either $\partial A$ contains a short curve in $T^{-j}{\mathcal{S}}^+$ or $\partial A$ contains an intersection point of two curves in $T^{-j}{\mathcal{S}}^+$, for some $0 \le j \le n-1$.
But intersections of curves within $T^{-j}{\mathcal{S}}^+$ are images of intersections of curves within ${\mathcal{S}}^+$, and the cardinality of cells created by such intersections is bounded by some uniform constant $B_1>0$ depending only on ${\mathcal{S}}^+$. Since each short curve in $T^{-j}{\mathcal{S}}^+$ belongs to the boundary of at most two elements of $S_s({\mathcal{M}}_0^n)$, the claim follows. Now, we subdivide ${\mathcal{S}}^+$ into $\ell_0$ smooth curves $V_i$ of length between $\delta_1/3$ and $\delta_1$. For $j \ge n_1$, recalling the notation $S_j^{\delta_1}(V_i)$ for the short elements of the $j$th generation ${\mathcal{G}}_j^{\delta_1}(V_i)$ of subcurves in $T^{-j}V_i$, we have by , $$\label{eq:short bound} \# S_s(T^{-j}{\mathcal{S}}^+) = \sum_{i=1}^{\ell_0} \# S_j^{\delta_1}(V_i) \le \tfrac 12 \sum_{i=1}^{\ell_0} \# L_j^{\delta_1}(V_i) \, ,$$ where the last inequality holds since $\# S_j^{\delta_1}(V_i) \le \tfrac 13 \# {\mathcal{G}}_j^{\delta_1}(V_i) = \tfrac 13 \big( \# S_j^{\delta_1}(V_i) + \# L_j^{\delta_1}(V_i) \big)$. Next, using , we estimate the sum over $j$ in the claim by splitting it over two parts, $$\label{eq:j split} \# S_s({\mathcal{M}}_0^n) \le B_1n + 2 \sum_{j=0}^{n_1-1} \# S_s(T^{-j}{\mathcal{S}}^+) + \sum_{j=n_1}^{n-1} \sum_{i=1}^{\ell_0} \# L_j^{\delta_1}(V_i) \, .$$ The first sum, over $0 \le j \le n_1-1$, is bounded by some constant $\bar C_{n_1}$ depending only on the map $T$ and $n_1$, but independent of $n$. Next, we wish to relate $\# L_j^{\delta_1}(V_i)$ to $\# L_s({\mathcal{M}}_0^n)$ for $j \ge n_1$. Note that if $V' \in L_j^{\delta_1}(V_i)$, then $|T^{-(n-j)}V'| \ge C \Lambda^{n-j} \delta_1/3$, so that $\# {\mathcal{G}}_{n-j}^{\delta_1}(V') \ge C \Lambda^{n-j}/3$.
Now for each $j$ such that $n_1 \le j \le n-1-n_1$, and $V' \in L_j^{\delta_1}(V_i)$, we may apply , so that $$\label{eq:relate j} \# L_{n-1}^{\delta_1}(V_i) \ge \sum_{V' \in L_j^{\delta_1}(V_i)} \# L_{n-1-j}^{\delta_1}(V') \ge C' \Lambda^{n-1-j} \# L_j^{\delta_1}(V_i) \, .$$ For $n-n_1 \le j \le n-1$, we compare $j$ with $j+n_1$ so that we may again apply , $$\# L_{j+n_1}^{\delta_1}(V_i) \ge C' \Lambda^{n_1} \# L_j^{\delta_1}(V_i) \, .$$ Putting together this estimate with in , we estimate, $$\label{eq:almost} \begin{split} \# S_s({\mathcal{M}}_0^n) & \le B_1 n + \bar C_{n_1} + \sum_{j=n_1}^{n-1-n_1} \Lambda^{j+1-n} \# L_s(T^{-n+1} {\mathcal{S}}^+) + \sum_{j=n-n_1}^{n-1} \Lambda^{-n_1} \# L_s(T^{-j-n_1}{\mathcal{S}}^+) \\ & \le B_1 n + \bar C_{n_1} + C \delta_1^{-1} \# L_s({\mathcal{M}}_0^n) + \sum_{j=n-n_1}^{n-1} C \delta_1^{-1} \# L_s({\mathcal{M}}_0^{j+n_1+ 1}) \, , \end{split}$$ where in the second line we have used the fact that $\# L_s(T^{-k} {\mathcal{S}}^+) \le C \delta_1^{-1} \# L_s({\mathcal{M}}_0^{k+1})$ for $k \ge n-1$, which follows from Sublemma \[sub:cross\]. To estimate the last sum in note that if $A \in L_s({\mathcal{M}}_0^{n+\ell})$, for $1 \le \ell \le n_1$, then $A \subset A' \in L_s({\mathcal{M}}_0^n)$. Furthermore, since there is a one-to-one correspondence between elements of ${\mathcal{M}}_0^{n+\ell}$ and elements of ${\mathcal{M}}_{-\ell}^n$, each element $A' \in L_s({\mathcal{M}}_0^n)$ can be subdivided into at most $B_{n_1}$ elements of ${\mathcal{M}}_0^{n+\ell}$, where $B_{n_1}$ is a constant depending only on ${\mathcal{S}}_{n_1}^-$ and not on $n$.
Thus, $$\sum_{j=n-n_1}^{n-1} \# L_s({\mathcal{M}}_0^{j+n_1+ 1}) \le n_1 B_{n_1} \# L_s({\mathcal{M}}_0^n) \,$$ and so $$\# S_s({\mathcal{M}}_0^n) \le B_1 n + \bar C_{n_1} + C \delta_1^{-1} ( 1 + n_1 B_{n_1} ) \# L_s({\mathcal{M}}_0^n) \, .$$ Finally, since $\# {\mathcal{M}}_0^n = \# L_s({\mathcal{M}}_0^n) + \# S_s({\mathcal{M}}_0^n)$, we estimate, $$\# L_s({\mathcal{M}}_0^n) \ge \frac{ \# {\mathcal{M}}_0^n - \bar C_{n_1} - B_1 n}{1 + C \delta_1^{-1}(1+n_1B_{n_1})} \, .$$ Since $\# {\mathcal{M}}_0^n \ge C\delta_0 \Lambda^n$ by Lemma \[lem:growth\](d) and $n_1$ is fixed, we may choose $n_2 \in \mathbb{N}$ such that $\# {\mathcal{M}}_0^n - \bar C_{n_1} - B_1 n \ge \frac 12 \# {\mathcal{M}}_0^n$, for all $n \ge n_2$. We conclude that there exists $C_{n_1} > 0$ such that for $n \ge n_2$, $\# L_s({\mathcal{M}}_0^n) \ge C_{n_1} \delta_1 \# {\mathcal{M}}_0^n$, completing the proof of the lemma for $L_s({\mathcal{M}}_0^n)$. The lower bound for $\# L_u({\mathcal{M}}_{-n}^0)$ follows similarly, using the fact that (P1) also allows us to control the evolution of unstable curves under $T^n$ by controlling the complexity of ${\mathcal{S}}_n^+$. Note that the analogue of Lemma \[lem:most grow\] holds for forward iterates of unstable curves using precisely the same proof. The constant $\kappa$ does not appear in this argument, i.e. the fact that the rate of expansion has a maximum is not needed for the proof. Lower bounds on growth {#sec:lower} ---------------------- The prevalence of long pieces established in Lemmas \[lem:most grow\] and \[lem:long elements\] has the following important consequences. \[lem:lower\] Let $\delta_1$ be the length scale from . There exists $c_0>0$, depending on $\delta_1$, such that for all $W \in {\widehat{{\mathcal{W}}}}^s$ with $|W| \ge \delta_1/3$ and $n\ge 1$, we have $\# {\mathcal{G}}_n(W) \ge c_0 \# {\mathcal{M}}_0^n$. This, in turn, implies the supermultiplicativity property for $\# {\mathcal{M}}_0^n$.
\[prop:super\] There exists $c_1>0$ such that for all $j, n \in \mathbb{N}$ with $j \le n$, $$\# {\mathcal{M}}_0^n \ge c_1 \# {\mathcal{M}}_0^{n-j} \# {\mathcal{M}}_0^j \, .$$ In order to establish Lemma \[lem:lower\], we recall the construction of Cantor rectangles. For $x \in M$, let $W^s(x)$ and $W^u(x)$ denote the maximal smooth components of the local stable and unstable manifolds of $x$. We begin by defining a solid rectangle $D \subset M$ to be a closed region whose boundary comprises exactly two stable manifolds and two unstable manifolds of positive length. Given such a region $D$, define the [*locally maximal Cantor rectangle $R$ in $D$*]{} to be the union of all points in $D$ whose local stable and unstable manifolds completely cross $D$. Locally maximal Cantor rectangles are endowed with a natural product structure: for any $x,y \in R$, $W^u(x) \cap W^s(y)$ belongs to $R$. Such rectangles are closed, so their boundary coincides with the boundary of $D$. In this case, we write $D = D(R)$ to denote the fact that $D$ is the smallest solid rectangle containing $R$. Following [@Li1], for a Cantor rectangle $R$, we define the [*core*]{} of $R$ to be $R \cap D_{1/4}$, where $D_{1/4}$ is an approximately concentric rectangle in $D(R)$ with side lengths $1/4$ the side lengths of $D$. For a locally maximal Cantor rectangle $R$, we say that a stable (respectively unstable) curve $W$ [*properly crosses*]{} $R$ if $W$ intersects the rectangle $D_{1/4}(R)$, but does not terminate in $D(R)$, and $W$ does not cross the stable (resp. unstable) boundaries of either $D(R)$ or $D_{1/4}(R)$.
Applying [@Li1 Theorem 4.10], we may choose locally maximal Cantor rectangles ${{\mathcal R}}_{\delta_1} = \{ R_1, \cdots, R_k \}$, with ${\mu_{\tiny{\mbox{SRB}}}}(R_i)>0$, whose stable and unstable boundaries have length at most $\frac{1}{10} \delta_1$ such that any stable or unstable curve of length at least $\delta_1/3$ properly crosses at least one of them.[^9] Furthermore, we may choose the rectangles sufficiently small that both $R_i$ and $R_i \cap D_{1/4}(R_i)$ have positive ${\mu_{\tiny{\mbox{SRB}}}}$-measure for each $i$. The number of rectangles $k$ depends on $\delta_1$. For brevity, denote by $R_i^* = R_i \cap D_{1/4}(R_i)$ the core of $R_i$. Due to the mixing property of $(T, {\mu_{\tiny{\mbox{SRB}}}})$, there exist ${\varepsilon}>0$ and $n_3 \in \mathbb{N}$ such that for all $n \ge n_3$, and all $1 \le i,j \le k$, ${\mu_{\tiny{\mbox{SRB}}}}(R_i^* \cap T^{-n}R_j) \ge {\varepsilon}$. We claim that for each $n$, at least one Cantor rectangle $R_i \in {{\mathcal R}}_{\delta_1}$ is fully crossed in the unstable direction by at least $\frac 1k \#L_u({\mathcal{M}}_{-n}^0)$ elements of ${\mathcal{M}}_{-n}^0$. This is because if $A \in {\mathcal{M}}_{-n}^0$, then $\partial A$ is comprised of unstable curves belonging to ${\mathcal{S}}_n^-$. Since unstable manifolds cannot be cut under iteration by $T^{-n}$, ${\mathcal{S}}_n^-$ cannot intersect the unstable boundaries of $R_i$. Thus if $A \cap R_i \neq \emptyset$, then either $\partial A$ terminates inside $R_i$ or $A$ fully crosses $R_i$. This implies that each element of $L_u({\mathcal{M}}_{-n}^0)$ fully crosses at least one $R_i$, and so at least one $R_i$ must be fully crossed by at least a $\frac 1k$ fraction of such elements. With the claim established, for each $n$, let $R_{i_n}$ denote a Cantor rectangle that is fully crossed by at least $\frac 1k \# L_u({\mathcal{M}}_{-n}^0)$ elements of ${\mathcal{M}}_{-n}^0$. Now take $W \in {\widehat{{\mathcal{W}}}}^s$ with $|W| \ge \delta_1/3$.
By construction, there exists $R_j \in {{\mathcal R}}_{\delta_1}$ such that $W$ properly crosses $R_j$ in the stable direction. For $n \in \mathbb{N}$, using mixing, we have ${\mu_{\tiny{\mbox{SRB}}}}(R_{i_n}^* \cap T^{-n_3}R_j) \ge {\varepsilon}$. By [@Li1 Lemma 4.13], there is a curve $V \in {\mathcal{G}}_{n_3}^{\delta_1}(W)$ that properly crosses $R_{i_n}$ in the stable direction. By choice of $R_{i_n}$, this implies that $\# {\mathcal{G}}_n(V) \ge \frac 1k \# L_u({\mathcal{M}}_{-n}^0)$. Thus, $$\# {\mathcal{G}}_{n+n_3}(W) \ge \tfrac 1k \# L_u({\mathcal{M}}_{-n}^0) \implies \# {\mathcal{G}}_n(W) \ge \tfrac{C'}{k} \# L_u({\mathcal{M}}_{-n}^0 ) \, ,$$ where $C'$ is a constant depending only on $n_3$ since, as in the proof of Lemma \[lem:long elements\], the refinement from ${\mathcal{M}}_{-n}^0$ to ${\mathcal{M}}_{-n-n_3}^0$ depends only on the cardinality of ${\mathcal{M}}_0^{n_3}$, which is independent of $n$. Finally, by Lemma \[lem:long elements\], $\# L_u({\mathcal{M}}_{-n}^0) \ge C_{n_1} \delta_1 \# {\mathcal{M}}_{-n}^0$, which proves the lemma for $n \ge \max \{ n_2, n_3 \}$ since $\# {\mathcal{M}}_{-n}^0 = \# {\mathcal{M}}_0^n$. The lemma extends to all $n \in \mathbb{N}$ by possibly reducing the constant $c_0$ since there are only finitely many values to correct for. Recall that since $T^{-j}({\mathcal{S}}^-_j \cup {\mathcal{S}}^+_{n-j}) = {\mathcal{S}}^+_n$, there is a one-to-one correspondence between elements of ${\mathcal{M}}_{-j}^{n-j}$ and ${\mathcal{M}}_0^n$ for each $j < n$. Thus $\# {\mathcal{M}}_0^n = \# {\mathcal{M}}_{-j}^{n-j}$, and this latter partition is obtained by taking the maximal connected components of ${\mathcal{M}}_{-j}^0 \bigvee {\mathcal{M}}_0^{n-j}$. To prove the lemma, we will show that a positive fraction, independent of $j$ and $n$, of elements of ${\mathcal{M}}_0^{n-j}$ intersects a positive fraction of elements of ${\mathcal{M}}_{-j}^0$.
Recall that $L_u({\mathcal{M}}_{-j}^0)$ denotes those elements of ${\mathcal{M}}_{-j}^0$ with unstable diameter of length at least $\delta_1/3$ while $L_s({\mathcal{M}}_0^{n-j})$ denotes those elements of ${\mathcal{M}}_0^{n-j}$ with stable diameter of length at least $\delta_1/3$. If $A \in L_s({\mathcal{M}}_0^{n-j})$ and $V \subset A$ is a stable curve with $|V| \ge \delta_1/3$, then $\# {\mathcal{G}}_j(V) \ge c_0 \# {\mathcal{M}}_0^j$ by Lemma \[lem:lower\]. Remark that up to subdivision of long pieces, each component of ${\mathcal{G}}_j(V)$ corresponds to one component of $V \setminus {\mathcal{S}}^-_j$. Thus $V$ intersects at least $c_0 \# {\mathcal{M}}_0^j = c_0 \# {\mathcal{M}}_{-j}^0$ elements of ${\mathcal{M}}_{-j}^0$. Applying this estimate to each $A \in L_s({\mathcal{M}}_0^{n-j})$, we obtain $$\# {\mathcal{M}}_0^n \ge \# L_s({\mathcal{M}}_0^{n-j}) \cdot c_0 \# {\mathcal{M}}_0^j \ge C_{n_1}\delta_1 c_0 \# {\mathcal{M}}_0^{n-j} \# {\mathcal{M}}_0^j \, ,$$ where we have applied Lemma \[lem:long elements\] in the second inequality. This proves the lemma when $n-j \ge n_2$. For $n-j < n_2$, since $\# {\mathcal{M}}_0^{n-j} \le \# {\mathcal{M}}_0^{n_2}$, we obtain the lemma by possibly decreasing the value of $c_1$ since there are only finitely many values to correct for. \[cor:upper M\] For all $n \in \mathbb{N}$, $\# {\mathcal{M}}_0^n \le 2 c_1^{-1} e^{n h_*}$, where $c_1>0$ is from Proposition \[prop:super\]. The proof follows using Proposition \[prop:super\], precisely as in [@max Proposition 4.6]. Spectral Properties of ${\mathcal{L}}$ {#sec:spec} ====================================== In this section, we prove the following theorem. \[thm:spectral\] The operator ${\mathcal{L}}$ acting on ${{\mathcal B}}$ is quasi-compact, with spectral radius equal to $e^{h_*}$ and essential spectral radius bounded by $\max \{ \Lambda^{-\beta}, \rho \} e^{h_*}$.
Since $T$ is topologically mixing, ${\mathcal{L}}$ has a spectral gap: $e^{h_*}$ is a simple eigenvalue (multiplicity 1 and no Jordan blocks) and the rest of the spectrum of ${\mathcal{L}}$ is contained in a disk of radius strictly smaller than $e^{h_*}$. Let $\nu_0 \in {{\mathcal B}}$ be an eigenfunction for eigenvalue $e^{h_*}$ defined by . Then $\nu_0$ is a non-negative Radon measure on $M$. The quasi-compactness of ${\mathcal{L}}$ is proved in Lemma \[lem:radius\], following the Lasota-Yorke inequalities of Proposition \[prop:LY\]. The fact that ${\mathcal{L}}$ has a spectral gap is proved in Lemma \[lem:gap\], while the characterization of $\nu_0$ is proved in Lemma \[lem:peripheral\]. Lasota-Yorke Inequalities {#sec:LY} ------------------------- The following proposition is the key component in establishing the quasi-compactness of ${\mathcal{L}}$. \[prop:LY\] There exists $C>0$ such that for all $n \ge 0$ and $f \in {{\mathcal B}}$, $$\begin{aligned} |{\mathcal{L}}^n f |_w & \le & C \delta_0^{-1} (\# {\mathcal{M}}_0^n) |f|_w \, , \label{eq:weak norm} \\ \| {\mathcal{L}}^n f \|_s & \le & C \delta_0^{-2} (\# {\mathcal{M}}_0^n) ( \max \{ \Lambda^{-\alpha n} , \rho^n \} \| f \|_s + \kappa^{-n/p} |f|_w ) \, , \label{eq:stable norm} \\ \| {\mathcal{L}}^n f \|_u & \le & C \delta_0^{-1} (\# {\mathcal{M}}_0^n) (\Lambda^{-\beta n} \| f \|_u + \kappa^{-n/p}\| f \|_s ) \, . \label{eq:unstable norm} \end{aligned}$$ By density, it suffices to prove the proposition for $f \in {\mathcal{C}}^1(M)$. ### Weak norm bound Take $f \in {\mathcal{C}}^1(M)$, $W \in {\mathcal{W}}^s$ and $\psi \in {\mathcal{C}}^1(W)$, $|\psi|_{{\mathcal{C}}^1(W)} \le 1$. 
Recalling that ${\mathcal{G}}_n(W)$ denotes the decomposition of $T^{-n}W$ into elements of ${\mathcal{W}}^s$, we estimate for $n \ge 1$, $$\int_W {\mathcal{L}}^n f \, \psi \, dm_W = \sum_{W_i \in {\mathcal{G}}_n(W)} \int_{W_i} f \, \psi \circ T^n \, dm_{W_i} \le |f|_w \sum_{W_i \in {\mathcal{G}}_n(W)} |\psi \circ T^n|_{C^1(W_i)} \, ,$$ where we have applied the weak norm of $f$ to the integral on each $W_i$. Next, using the uniform contraction of $T$ along stable curves, we have $$\label{eq:holder} \frac{|\psi \circ T^n(x) - \psi \circ T^n(y)|}{d_{W_i}(x,y)} = \frac{|\psi \circ T^n(x) - \psi \circ T^n(y)|}{d_W(T^nx, T^ny)} \frac{d_W(T^nx, T^ny)}{d_{W_i}(x,y)} \le C |J^sT^n|_{{\mathcal{C}}^0(W_i)} H^1_W(\psi) \, ,$$ for some uniform constant $C>0$, using . Then since $|\psi \circ T^n|_{{\mathcal{C}}^0(W_i)} \le |\psi |_{{\mathcal{C}}^0(W)}$, we have $|\psi \circ T^n|_{{\mathcal{C}}^1(W_i)} \le C |\psi|_{{\mathcal{C}}^1(W)} \le C$. Finally, applying Lemma \[lem:growth\](b) to the sum over ${\mathcal{G}}_n(W)$ and taking the supremum over $\psi \in {\mathcal{C}}^1(W)$ and $W \in {\mathcal{W}}^s$ completes the proof of . ### Strong stable norm bound Let $f \in {\mathcal{C}}^1(M)$, $W \in {\mathcal{W}}^s$ and $\psi \in {\mathcal{C}}^\alpha(W)$ with $|\psi|_{{\mathcal{C}}^\alpha(W)} \le |W|^{-1/p}$. Let $n \ge 1$. For each $W_i \in {\mathcal{G}}_n(W)$, define ${\overline{\psi}}_i = |W_i|^{-1} \int_{W_i} \psi \circ T^n \, dm_{W_i}$. 
Proceeding as before, we estimate $$\label{eq:stable split} \int_W {\mathcal{L}}^n f \, \psi \, dm_W = \sum_{W_i \in {\mathcal{G}}_n(W)} \int_{W_i} f \, (\psi \circ T^n - {\overline{\psi}}_i) \, dm_{W_i} + \sum_{W_i \in {\mathcal{G}}_n(W)} {\overline{\psi}}_i \int_{W_i} f \, dm_{W_i} \, .$$ To each term in the first sum on the right hand side, we apply the strong stable norm, $$\int_{W_i} f \, (\psi \circ T^n - {\overline{\psi}}_i) \le \| f \|_s |W_i|^{1/p} |\psi \circ T^n - {\overline{\psi}}_i|_{{\mathcal{C}}^\alpha(W_i)} \le C \| f \|_s \frac{|W_i|^{1/p}}{|W|^{1/p}} |J^sT^n|_{{\mathcal{C}}^0(W_i)}^\alpha \, ,$$ where we have applied the analogous estimate to the difference $\psi \circ T^n - {\overline{\psi}}_i$ with the exponent $\alpha$. Since $\alpha > 1/p$, using bounded distortion , we estimate $$|W_i|^{1/p} |J^sT^n|_{{\mathcal{C}}^0(W_i)}^\alpha \le C |T^nW_i|^{1/p} \Lambda^{-n(\alpha - 1/p)} \, .$$ Finally, summing over $W_i$, we obtain, $$\label{eq:first stable} \begin{split} \sum_{W_i \in {\mathcal{G}}_n(W)} & \int_{W_i} f \, (\psi \circ T^n - {\overline{\psi}}_i) \le C \| f\|_s \Lambda^{-n(\alpha - 1/p)} \sum_{W_i \in {\mathcal{G}}_n(W)} \frac{|T^nW_i|^{1/p}}{|W|^{1/p}} \\ & \le C \| f\|_s \Lambda^{-n(\alpha - 1/p)} \left( \sum_i \frac{|T^nW_i|}{|W|} \right)^{1/p} ( \# {\mathcal{G}}_n(W))^{1- 1/p} \\ & \le C \delta_0^{-1+1/p} \| f\|_s \Lambda^{-n(\alpha - 1/p)} (\# {\mathcal{M}}_0^n)^{1-1/p} \le C \delta_0^{-1} \Lambda^{- \alpha n } \| f \|_s \# {\mathcal{M}}_0^n \, , \end{split}$$ where in the second line we have used the Hölder inequality and in the third we have used Lemma \[lem:growth\](b) and (d). Next, we estimate the second sum in . For this estimate, we group $W_i \in {\mathcal{G}}_n(W)$ by most recent long ancestor as follows. Recall that $L_k(W)$ denotes those elements of ${\mathcal{G}}_k(W)$ whose length is at least $\delta_0/3$.
If $V_j \in L_k(W)$ is such that $T^{n-k}(W_i) \subset V_j$ and $k \le n$ is the largest such index with this property, then we say that $V_j$ is the most recent long ancestor of $W_i$. Let ${\mathcal{I}}_{n-k}(V_j)$ denote those elements of ${\mathcal{G}}_n(W)$ whose most recent long ancestor is $V_j$. If no such ancestor exists, then $W_i \in {\mathcal{I}}_n(W)$. Thus, $$\sum_{W_i \in {\mathcal{G}}_n(W)} {\overline{\psi}}_i \int_{W_i} f \, dm_{W_i} = \sum_{k=1}^n \sum_{V_j \in L_k(W)} \sum_{W_i \in {\mathcal{I}}_{n-k}(V_j)} {\overline{\psi}}_i \int_{W_i} f \, dm_{W_i} + \sum_{W_i \in {\mathcal{I}}_n(W)} {\overline{\psi}}_i \int_{W_i} f \, dm_{W_i} \, .$$ We use the strong stable norm to estimate the terms in ${\mathcal{I}}_n(W)$, $$\label{eq:short stable} \begin{split} \sum_{W_i \in {\mathcal{I}}_n(W)} & {\overline{\psi}}_i \int_{W_i} f \, dm_{W_i} \le \| f\|_s \sum_{W_i \in {\mathcal{I}}_n(W)} \frac{|W_i|^{1/p}}{|W|^{1/p}} \le \| f \|_s \kappa^{-n/p} \sum_{W_i \in {\mathcal{I}}_n(W)} \frac{|T^nW_i|^{1/p}}{|W|^{1/p}} \\ & \le \|f \|_s \kappa^{-n/p} K_1^{n(1-1/p)} \; \le \; \| f\|_s \kappa^{-n/p} \rho^n \kappa^{\alpha_0 n} \Lambda^n \; \le \; \| f\|_s \rho^n C\delta_0^{-1} \# {\mathcal{M}}_0^n \, , \end{split}$$ where we have used for the second inequality, the Hölder inequality and Lemma \[lem:growth\](a) for the third and fourth inequalities, and the fact that $\alpha_0 \ge 1/p$ (from ) and Lemma \[lem:growth\](d) for the last inequality. 
For the remainder of the terms, we use the weak norm of $f$, and sum using Lemma \[lem:growth\](a) from time $k$ to time $n$, $$\begin{split} \sum_{k=1}^n & \sum_{V_j \in L_k(W)} \sum_{W_i \in {\mathcal{I}}_{n-k}(V_j)} {\overline{\psi}}_i \int_{W_i} f \, dm_{W_i} \le \sum_{k=1}^n \sum_{V_j \in L_k(W)} \sum_{W_i \in {\mathcal{I}}_{n-k}(V_j)} |W|^{-1/p} |f|_w \\ & \le \sum_{k=1}^n \sum_{V_j \in L_k(W)} 3 \delta_0^{-1/p} K_1^{n-k} \frac{|V_j|^{1/p}}{|W|^{1/p}} |f|_w \le \sum_{k=1}^n C \delta_0^{-1} K_1^{n-k} \kappa^{-k/p} (\# {\mathcal{M}}_0^k)^{1-1/p} |f|_w \\ & \le C \delta_0^{-1} |f|_w \kappa^{-n/p} \sum_{k=1}^n \rho^{n-k} \kappa^{\alpha_0(n-k)} \Lambda^{n-k} \# {\mathcal{M}}_0^k \le C \delta_0^{-2} c_1^{-1} \kappa^{-n/p} \# {\mathcal{M}}_0^n |f|_w \, , \end{split}$$ where we have used Lemma \[lem:growth\](c) to sum over $V_j \in L_k(W)$, as well as the fact that $$\Lambda^{n-k} \# {\mathcal{M}}_0^k \le C \delta_0^{-1} \# {\mathcal{M}}_0^{n-k} \# {\mathcal{M}}_0^k \le C \delta_0^{-1} c_1^{-1} \# {\mathcal{M}}_0^n \, ,$$ by Proposition \[prop:super\]. Putting this estimate together with and in yields, $$\int_W {\mathcal{L}}^n f \, \psi \, dm_W \le C \delta_0^{-2} \big( (\Lambda^{-\alpha n} + \rho^n) \|f \|_s + \kappa^{-n/p} |f|_w \big) \# {\mathcal{M}}_0^n \, ,$$ and taking the appropriate suprema over $W$ and $\psi$ completes the proof of . ### Strong unstable norm bound Let $f \in {\mathcal{C}}^1(M)$ and ${\varepsilon}\in (0, {\varepsilon}_0)$. Take $W^1, W^2 \in {\mathcal{W}}^s$ with $d_{{\mathcal{W}}^s}(W^1, W^2) \le {\varepsilon}$, and $\psi_k \in {\mathcal{C}}^1(W^k)$ such that $|\psi_k|_{{\mathcal{C}}^1(W^k)} \le 1$ and $d_0(\psi_1, \psi_2) = 0$. For $n \ge1$, we subdivide ${\mathcal{G}}_n(W^k)$ into matched and unmatched pieces as follows. 
To each $W^1_i \in {\mathcal{G}}_n(W^1)$, we associate a family of vertical (in the chart) segments $\{ \gamma_x \}_{x \in W^1_i}$ of length at most $C \Lambda^{-n} {\varepsilon}$ such that if $\gamma_x$ is not cut by an element of ${\mathcal{S}}_n^+$, its image $T^n\gamma_x$ will have length $C{\varepsilon}$ and will intersect $W^2$. Due to the uniform transversality of stable and unstable cones, such a segment $T^i \gamma_x$ will belong to the unstable cone for each $i = 0, \ldots, n$, and so undergo the uniform expansion due to . In this way, we obtain a partition of $W^1$ into intervals for which $T^n\gamma_x$ is not cut and intersects $W^2$ and subintervals for which this is not the case. This defines an analogous partition of $T^{-n}W^1$ and $T^{-n}W^2$. We call two curves $U^1_j \subset T^{-n}W^1$ and $U^2_j \subset T^{-n}W^2$ [*matched*]{} if they are connected by the foliation $\gamma_x$ and their images under $T^n$ are connected by $T^n\gamma_x$. We call the remaining components of $T^{-n}W^k$ [*unmatched*]{} and denote them by $V^k_i$. With this decomposition, there is at most one matched piece and two unmatched pieces for each $W^k_i \in {\mathcal{G}}_n(W^k)$, and we may write $T^{-n}W^k = (\cup_j U^k_j) \cup (\cup_i V^k_i)$. We proceed to estimate, $$\label{eq:unstable split} \left| \int_{W^1} {\mathcal{L}}^n f \, \psi_1 - \int_{W^2} {\mathcal{L}}^n f \, \psi_2 \right| \le \sum_j \left| \int_{U^1_j} f \, \psi_1 \circ T^n - \int_{U^2_j} f \, \psi_2 \circ T^n \right| + \sum_{k,i} \left| \int_{V^k_i} f \, \psi_k \circ T^n \right| \, .$$ We begin by estimating the contribution from unmatched pieces. We say a curve $V^1_i$ is created at time $j$, $1 \le j \le n$, if $j$ is the first time that $T^{n-j}V^1_i$ is not part of a matched curve in $T^{-j}W^1$. 
Define, $${\mathcal{V}}_{j,\ell} = \{ i : V^1_i \mbox{ is created at time $j$ and $T^{n-j}V^1_i \subset W^1_\ell \in {\mathcal{G}}_j(W^1)$} \} \, .$$ Note that $\cup_{i \in {\mathcal{V}}_{j,\ell}} T^{n-j} V^1_i = W^1_\ell$. Due to the expansion of $T$ in the unstable cone and the uniform transversality of ${\mathcal{S}}_j^-$ with the stable cone, it follows that $|W^1_\ell| \le C \Lambda^{-j} {\varepsilon}$. Now applying the strong stable norm to each such curve at the time it is created, $$\label{eq:unmatched} \begin{split} \sum_i \int_{V^1_i} f \, \psi_1 \circ T^n & = \sum_{j=1}^n \sum_{W^1_\ell \in {\mathcal{G}}_j(W^1)} \int_{W^1_\ell} {\mathcal{L}}^{n-j} f \, \psi_1 \circ T^{n-j} \\ & \le \sum_{j=1}^n \sum_{W^1_\ell \in {\mathcal{G}}_j(W^1)} |W^1_\ell|^{1/p} \| {\mathcal{L}}^{n-j} f \|_s |\psi_1 \circ T^{n-j}|_{{\mathcal{C}}^\alpha(W^1_\ell)} \\ & \le \sum_{j=1}^n \sum_{W^1_\ell \in {\mathcal{G}}_j(W^1)} C \Lambda^{-j/p} {\varepsilon}^{1/p} \delta_0^{-1} \kappa^{-(n-j)/p} (\# {\mathcal{M}}_0^{n-j}) \| f \|_s \\ & \le C \delta_0^{-1} {\varepsilon}^{1/p} \| f \|_s \kappa^{-n/p} \sum_{j=1}^n \Lambda^{-j/p} \# {\mathcal{M}}_0^j \# {\mathcal{M}}_0^{n-j} \le C \delta_0^{-1} {\varepsilon}^{1/p} \| f \|_s \kappa^{-n/p} \# {\mathcal{M}}_0^n \, , \end{split}$$ where we have applied in the second inequality (actually, a simpler version suffices with no need to subtract the average of the test function on each $W_i$), and Proposition \[prop:super\] in the fourth. A similar estimate holds over the curves $V^2_i$. Next, we estimate the matched pieces. Recall that according to our notation in Section \[sec:admissible\] the curve $U^1_j$ is associated with the quadruple $(i_j, x_j, r_j, F^1_j)$ so that $F^1_j$ is defined in the chart $\chi_{i_j}$ and $U^1_j = G(x_j, r_j, F^1_j)(I_{r_j})$.
By definition of our matching process, it follows that $U^2_j = G(x_j, r_j, F^2_j)(I_{r_j})$ for some function $F^2_j$ defined in the same chart, so that the point $x_j + (t, F^1_j(t))$ is associated with the point $x_j + (t, F^2_j(t))$ by the vertical line $\{(0,s)\}_{s \in \mathbb{R}}$ in the chart. Recall that $G_{F^k_j}(t) = \chi_{i_j}(x_j + (t, F^k_j(t)))$, for $t \in I_{r_j}$. Define $${\widetilde{\psi}}_j = \psi_1 \circ T^n \circ G_{F^1_j} \circ G_{F^2_j}^{-1} \, .$$ The function ${\widetilde{\psi}}_j$ is well-defined on $U^2_j$ and $d_0({\widetilde{\psi}}_j , \psi_1 \circ T^n) = 0$. We can then estimate, $$\label{eq:unstable second split} \sum_j \left| \int_{U^1_j} f \, \psi_1 \circ T^n - \int_{U^2_j} f \, \psi_2 \circ T^n \right| \le \sum_j \left| \int_{U^1_j} f \, \psi_1 \circ T^n - \int_{U^2_j} f \, {\widetilde{\psi}}_j \right| + \left| \int_{U^2_j} f \, ({\widetilde{\psi}}_j - \psi_2 \circ T^n) \right| .$$ We estimate the first term on the right side of using the strong unstable norm. It follows from the uniform hyperbolicity of $T$ and the usual graph transform arguments (see [@demers; @liv Section 4.3]), that $$d_{{\mathcal{W}}^s}(U^1_j, U^2_j) \le C \Lambda^{-n} {\varepsilon}\, .$$ Moreover, by definition $G_{F^1_j}, G_{F^2_j}^{-1} \in {\mathcal{C}}^1$ so that by , $|{\widetilde{\psi}}_j |_{{\mathcal{C}}^1(U^2_j)} \le C |\psi_1|_{{\mathcal{C}}^1(W^1)}$ for some uniform constant $C$. Thus, $$\label{eq:match unstable} \sum_j \left| \int_{U^1_j} f \, \psi_1 \circ T^n - \int_{U^2_j} f \, {\widetilde{\psi}}_j \right| \le C {\varepsilon}^\beta \Lambda^{-\beta n} \| f\|_u \delta_0^{-1} \# {\mathcal{M}}_0^n \, ,$$ where we have used Lemma \[lem:growth\](b) to sum over the matched pieces since there is at most one matched piece per element of ${\mathcal{G}}_n(W^1)$.
We estimate the second term on the right side of using the strong stable norm, $$\sum_j \left| \int_{U^2_j} f \, ({\widetilde{\psi}}_j - \psi_2 \circ T^n) \right| \le \sum_j \| f \|_s |U^2_j|^{1/p} |{\widetilde{\psi}}_j - \psi_2 \circ T^n|_{{\mathcal{C}}^\alpha(U^2_j)} \, .$$ It follows from [@demers; @liv Lemma 4.2 and eq. (4.20)] that, $$|{\widetilde{\psi}}_j - \psi_2 \circ T^n|_{{\mathcal{C}}^\alpha(U^2_j)} \le C {\varepsilon}^{1-\alpha} \, .$$ Putting this together with the above estimate and summing over $j$ yields, $$\sum_j \left| \int_{U^2_j} f \, ({\widetilde{\psi}}_j - \psi_2 \circ T^n) \right| \le C {\varepsilon}^{1-\alpha} \| f \|_s \delta_0^{-1} \# {\mathcal{M}}_0^n \, .$$ Finally, collecting the above estimate with in and adding the estimate over unmatched pieces from , yields by , $$\left| \int_{W^1} {\mathcal{L}}^n f \, \psi_1 - \int_{W^2} {\mathcal{L}}^n f \, \psi_2 \right| \le C \delta_0^{-1} \big({\varepsilon}^\beta \Lambda^{-\beta n} \| f \|_u + {\varepsilon}^{1-\alpha} \| f\|_s + {\varepsilon}^{1/p} \kappa^{-n/p} \|f \|_s \big) \# {\mathcal{M}}_0^n \, .$$ Then, since $\beta \le \min \{ 1- \alpha, 1/p \}$ according to , we may divide through by ${\varepsilon}^\beta$, and take the appropriate suprema to complete the proof of .

A spectral gap for ${\mathcal{L}}$ {#sec:spectral}
----------------------------------

We prove that ${\mathcal{L}}$ has a spectral gap in a series of lemmas, first establishing its quasi-compactness, Lemma \[lem:radius\], then characterizing elements of its peripheral spectrum, Lemmas \[lem:peripheral\] and \[lem:leafwise\], and finally concluding the existence of a spectral gap, Lemma \[lem:gap\]. \[lem:radius\] The spectral radius of ${\mathcal{L}}$ on ${{\mathcal B}}$ is $e^{h_*}$, while its essential spectral radius is at most $\sigma e^{h_*}$ for any $\sigma > \max \{ \Lambda^{-\beta}, \rho \}$. Thus ${\mathcal{L}}$ is quasi-compact on ${{\mathcal B}}$.
Moreover, the peripheral spectrum of ${\mathcal{L}}$ contains no Jordan blocks. First we establish the upper bound on the spectral radius of ${\mathcal{L}}$ using Proposition \[prop:LY\] and Corollary \[cor:upper M\]. Fix $\sigma <1$ such that $\sigma > \max \{ \Lambda^{-\beta }, \rho \}$. Next, choose $N>0$ such that $C \delta_0^{-2} 2 c_1^{-1} \max \{ \Lambda^{-\beta N}, \rho^N \} \le \frac 12 \sigma^N$. Finally, choose $c_u >0$ such that $c_u C \delta_0^{-2} 2 c_1^{-1} \kappa^{-N/p} \le \frac 12 \sigma^N$. Then, $$\begin{split} \| {\mathcal{L}}^N f \|_{{{\mathcal B}}} & = \| {\mathcal{L}}^N f \|_s + c_u \| {\mathcal{L}}^N f \|_u \\ & \le \left( \tfrac 12 \sigma^N \| f \|_s + C \delta_0^{-2} 2 c_1^{-1} \kappa^{-N/p} |f|_w + c_u \tfrac 12 \sigma^N \| f \|_u + c_u C \delta_0^{-1} 2 c_1^{-1} \kappa^{-N/p} \| f \|_s \right) e^{N h_*} \\ & \le \left( \sigma^N \| f \|_{{{\mathcal B}}} + C' \delta_0^{-2} \kappa^{-N/p} |f|_w \right) e^{N h_*} \, . \end{split}$$ This is the standard Lasota-Yorke inequality for ${\mathcal{L}}$, which, coupled with the compactness of the unit ball of ${{\mathcal B}}$ in ${{\mathcal B}}_w$, is sufficient to conclude [@hennion] that the essential spectral radius of ${\mathcal{L}}$ is at most $\sigma e^{h_*}$, and its spectral radius is at most $e^{h_*}$. To prove the lower bound on the spectral radius, we estimate using and Lemma \[lem:lower\]. Take $W \in {\mathcal{W}}^s$ with $|W| \ge \delta_1/3$. Then for $n \ge n_1$ we have, $$\label{eq:lower spec} \begin{split} \| {\mathcal{L}}^n 1\|_{{{\mathcal B}}} & \ge \int_W {\mathcal{L}}^n 1 \, dm_W = \sum_{W_i \in {\mathcal{G}}_n^{\delta_1}} |W_i| \ge \sum_{W_i \in L_n^{\delta_1}(W)} \delta_1/3 \\ & \ge \frac{2 \delta_1}{9} \# {\mathcal{G}}_n(W) \ge \frac{2 \delta_1}{9} c_0 \# {\mathcal{M}}_0^n \, . 
\end{split}$$ Then taking the limit as $n \to \infty$ and using the definition of $h_*$, $$\limsup_{n \to \infty} \frac 1n \log \| {\mathcal{L}}^n \|_{{{\mathcal B}}} \ge \limsup_{n \to \infty} \frac 1n \log \big( \| {\mathcal{L}}^n 1 \|_{{{\mathcal B}}} / \| 1 \|_{{{\mathcal B}}} \big) \ge \limsup_{n \to \infty} \frac 1n \log \big( \# {\mathcal{M}}_0^n \big) = h_* \, ,$$ which proves that the spectral radius of ${\mathcal{L}}$ is at least $e^{h_*}$. We conclude that the spectral radius of ${\mathcal{L}}$ is in fact $e^{h_*}$ and so ${\mathcal{L}}$ is quasi-compact since its essential spectral radius is bounded by $\sigma e^{h_*}$. Finally, the lack of Jordan blocks stems from Corollary \[cor:upper M\] and Proposition \[prop:LY\], which together imply $\| {\mathcal{L}}^n \|_{{{\mathcal B}}} \le C e^{n h_*}$ for all $n \ge 0$. Let $\mathbb{V}_\theta$ denote the eigenspace associated to the eigenvalue $e^{h_* + 2\pi i \theta}$. Due to the quasi-compactness of ${\mathcal{L}}$ and the absence of Jordan blocks, the spectral projector $\Pi_\theta : {{\mathcal B}}\to \mathbb{V}_\theta$ is well-defined in the uniform topology of $L({{\mathcal B}}, {{\mathcal B}})$ and can be realized as, $$\label{eq:theta} \Pi_\theta = \lim_{n \to \infty} \frac 1n \sum_{k=0}^{n-1} e^{-k h_* } e^{-2\pi i \theta k} {\mathcal{L}}^k \, .$$ Let ${\mathbb{V}}= \oplus_{\theta} {\mathbb{V}}_\theta$, where the sum is taken over $\theta$ corresponding to eigenvalues of ${\mathcal{L}}$. Note that ${\mathbb{V}}$ is finite dimensional by the quasi-compactness of ${\mathcal{L}}$. Analogously, define $$\label{eq:nu def} \nu_0 = \Pi_0 1 := \lim_{n \to \infty} \frac 1n \sum_{k=0}^{n-1} e^{-k h_* } {\mathcal{L}}^k 1 \, .$$ Since we have proved uniform bounds of the form $\| {\mathcal{L}}^k \|_{{{\mathcal B}}} \le C e^{kh_*}$, the limit above exists and satisfies ${\mathcal{L}}\nu_0 = e^{h_*} \nu_0$. A priori, however, $\nu_0$ may be 0 (if $e^{h_*}$ is not in the spectrum of ${\mathcal{L}}$).
The following lemma shows this is not the case, and provides an important characterization of the peripheral spectrum of ${\mathcal{L}}$. (Peripheral spectrum of ${\mathcal{L}}$) \[lem:peripheral\]

- The distribution $\nu_0 = \Pi_0 1 \neq 0$ is a non-negative Radon measure and $e^{h_*}$ is in the spectrum of ${\mathcal{L}}$.

- All elements of ${\mathbb{V}}$ are signed measures, absolutely continuous with respect to $\nu_0$.

- The spectrum of $e^{-h_*} {\mathcal{L}}$ consists of a finite number of cyclic groups; in particular, each $\theta$ is rational.

\(a) By density of ${\mathcal{C}}^1(M)$ in ${{\mathcal B}}$, since $\mathbb{V}_\theta$ is finite-dimensional, it follows that $\Pi_\theta {\mathcal{C}}^1(M) = \mathbb{V}_\theta$. Thus for each $\nu \in {\mathbb{V}}$, $\nu \neq 0$, there exists $f \in {\mathcal{C}}^1(M)$ such that $\Pi_\theta f = \nu$. Moreover, for every $\psi \in {\mathcal{C}}^1(M)$, we have $$\label{eq:proj} | \nu(\psi) | = | \Pi_\theta f(\psi) | \le \lim_{n \to \infty} \frac 1n \sum_{k=0}^{n-1} e^{-h_* k} |{\mathcal{L}}^k f(\psi)| \le |f|_\infty \Pi_0 1(|\psi|) \, ,$$ so that $\Pi_0 1 \neq 0$ since $\nu \neq 0$. In particular, $e^{h_*}$ is an eigenvalue of ${\mathcal{L}}$. Moreover, since $\Pi_0 1$ is positive as an element of $({\mathcal{C}}^1(M))^*$, it follows from [@Sch Sect. I.4] that $\nu_0 = \Pi_0 1$ is a non-negative Radon measure on $M$.

\(b) Applying again to $\nu \in {\mathbb{V}}_\theta$, we conclude that every element of ${\mathbb{V}}_\theta$ is a signed measure, absolutely continuous with respect to $\nu_0$. Moreover, setting $f_\nu = \frac{d\nu}{d\nu_0}$, it follows that $f_\nu \in L^\infty(M, \nu_0)$.

\(c) Suppose $\nu \in {\mathbb{V}}_\theta$.
Then using part (b), for any $\psi \in {\mathcal{C}}^1(M)$, $$\label{eq:density} \begin{split} \int_M \psi \, f_\nu \, d\nu_0 & = \nu(\psi) = e^{-h_*} e^{-2\pi i \theta} {\mathcal{L}}\nu (\psi) = e^{-h_*} e^{-2\pi i \theta} \nu (\frac{\psi \circ T}{J^sT}) \\ & = e^{-h_*} e^{-2\pi i \theta} \nu_0 (f_\nu \frac{\psi \circ T}{J^sT}) = e^{-h_*} e^{-2\pi i \theta} {\mathcal{L}}\nu_0 (\psi f_\nu \circ T^{-1}) \\ & = e^{-2 \pi i \theta} \int_M \psi \, f_\nu \circ T^{-1} \, d\nu_0 \, . \end{split}$$ Thus $f_\nu \circ T^{-1} = e^{2 \pi i \theta} f_\nu$, $\nu_0$-a.e. Define $f_{\nu, k} = (f_\nu)^k \in L^\infty(\nu_0)$. It follows as in [@demers; @liv Lemma 5.5], that $d\nu_k := f_{\nu,k} d\nu_0 \in {{\mathcal B}}$ for each $k \in \mathbb{N}$. Then since ${\mathcal{L}}\nu_k = e^{h_*} e^{2\pi i k \theta} \nu_k$, it follows that $e^{2 \pi i k \theta}$ is in the peripheral spectrum of $e^{-h_*} {\mathcal{L}}$ for each $k$. By the quasi-compactness of ${\mathcal{L}}$, this set must be finite, and so $\theta$ must be rational. We remark that elements of ${{\mathcal B}}_w$ can be viewed as both distributions on $M$, as well as families of [*leafwise distributions*]{} on stable manifolds as follows (cf. [@max Definition 7.5]). For $f \in {\mathcal{C}}^1(M)$, the map defined by $${\mathcal{K}}_{(W, f)}(\psi) = \int_W f \psi \, dm_W, \qquad \psi \in {\mathcal{C}}^1(W) \, ,$$ can be viewed as a distribution of order 1 on $W$. Since ${\mathcal{K}}_{(W,f)}(\psi) \le |f|_w |\psi|_{{\mathcal{C}}^1(W)}$, ${\mathcal{K}}_{(W, \cdot)}$ can be extended to $f \in {{\mathcal B}}_w$. We denote this extension by $\int_W \psi f$, and we call the associated family of distributions the [*leafwise distribution*]{} $(f, W)_{W \in {\mathcal{W}}^s}$ corresponding to $f$. If, in addition, $f \in {{\mathcal B}}_w$ satisfies $\int_W \psi f \ge 0$ for all $\psi \ge 0$, then by [@Sch Section I.4], the leafwise distribution is in fact a leafwise measure.
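To make the eigenvalue equation for $\nu_k$ explicit, one can mirror the computation establishing $f_\nu \circ T^{-1} = e^{2\pi i \theta} f_\nu$; the chain below is a sketch added for the reader (not part of the original argument, but using only the identities already introduced):

```latex
% Sketch: for \varphi \in \mathcal{C}^1(M), using in turn the action of \mathcal{L} on measures,
% the eigenvalue equation \mathcal{L}\nu_0 = e^{h_*}\nu_0, and f_{\nu,k}\circ T^{-1} = e^{2\pi i k\theta} f_{\nu,k}:
\begin{split}
\mathcal{L}\nu_k(\varphi)
  & = \nu_0\Big( f_{\nu,k}\, \frac{\varphi \circ T}{J^sT} \Big)
    = \mathcal{L}\nu_0\big( \varphi \, (f_{\nu,k} \circ T^{-1}) \big)
    = e^{h_*}\, \nu_0\big( \varphi \, (f_{\nu,k} \circ T^{-1}) \big) \\
  & = e^{h_*} e^{2\pi i k \theta}\, \nu_0( \varphi \, f_{\nu,k} )
    = e^{h_*} e^{2\pi i k \theta}\, \nu_k(\varphi) \, ,
\end{split}
```

so that $e^{2\pi i k \theta}$ is an eigenvalue of the normalized operator $e^{-h_*}{\mathcal{L}}$, which is precisely what the finiteness argument for the peripheral spectrum uses.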
Recall the disintegration of ${\mu_{\tiny{\mbox{SRB}}}}$ used in the proof of Lemma \[lem:embed\] into conditional measures ${\mu_{\tiny{\mbox{SRB}}}}^\xi$ on the family of stable manifolds ${\mathcal{F}}= \{ W_\xi \}_{\xi \in \Xi}$, and a factor measure ${\hat \mu_{\tiny{\mbox{SRB}}}}$ on the index set $\Xi$. We have $d{\mu_{\tiny{\mbox{SRB}}}}^\xi = |W_\xi|^{-1} g_\xi dm_{W_\xi}$, where $g_\xi$ is uniformly log-Hölder continuous by . \[lem:leafwise\] Let $\nu_0^\xi$ and $\hat \nu_0$ denote the conditional measures on $W_\xi$ and factor measure on $\Xi$, respectively, obtained by disintegrating $\nu_0$ on the family of stable manifolds ${\mathcal{F}}$. For all $\psi \in {\mathcal{C}}^1(M)$, $$\int_{W_\xi} \psi \, d\nu^\xi_0 = \frac{\int_{W_\xi} \psi \, g_\xi \, \nu_0}{\int_{W_\xi} g_\xi \, \nu_0} \quad \mbox{for all $\xi \in \Xi$, and} \quad d\hat\nu_0(\xi) = |W_\xi|^{-1} \Big( \int_{W_\xi} g_\xi \, \nu_0 \Big) \, d{\hat \mu_{\tiny{\mbox{SRB}}}}(\xi) \, .$$ Moreover, viewed as a leafwise measure, $\nu_0(W) > 0$ for all $W \in {\mathcal{W}}^s$. We prove the last claim first. For $W \in {\mathcal{W}}^s$, let $n_2 \le \bar C_2 |\log(|W|/\delta_1)|$ be the constant from the proof of Corollary \[cor:most grow\] applied in the case ${\varepsilon}= 1/3$ and $\delta_1$ as chosen in . Let $V \in {\mathcal{G}}_{n_2}^{\delta_1}(W)$ have $|V| \ge \delta_1/3$. Then using and Lemma \[lem:lower\], $$\begin{split} \int_W \nu_0 & = \lim_{n \to \infty} \frac 1n \sum_{k=0}^{n-1} e^{-k h_*} \int_W {\mathcal{L}}^k 1 \, dm_W \ge \lim_{n \to \infty} \frac 1n \sum_{k=n_1+n_2}^{n-1} e^{-k h_*} \sum_{W_i \in {\mathcal{G}}_{k-n_2}(V)} |W_i| \\ & \ge \lim_{n \to \infty} \frac 1n \sum_{k=n_1+n_2}^{n-1} e^{-k h_*} \tfrac{2\delta_1}{9} c_0 \# {\mathcal{M}}_0^{k-n_2} \ge \tfrac{2c_0 \delta_1}{9} e^{-(n_1+n_2)h_*}\lim_{n \to \infty} \frac 1n \sum_{k=0}^{n-1} e^{-k h_*} \# {\mathcal{M}}_0^k \, . \end{split}$$ We claim that the last limit cannot be 0. For suppose it were 0.
Then for any $W \in {\mathcal{W}}^s$, $\psi \in {\mathcal{C}}^1(W)$, we would have by Lemma \[lem:growth\](b), $$\begin{split} \int_W \psi \, \nu_0 & = \lim_{n \to \infty} \frac 1n \sum_{k=0}^{n-1} e^{-k h_*} \int_W \psi \, {\mathcal{L}}^k 1 \, dm_W \le \lim_{n \to \infty} \frac 1n \sum_{k=0}^{n-1} e^{-k h_*} \sum_{W_i \in {\mathcal{G}}_k(W)} |\psi|_\infty |W_i| \\ & \le \lim_{n \to \infty} \frac 1n \sum_{k=0}^{n-1} e^{-k h_*} C \# {\mathcal{M}}_0^k = 0 \, , \end{split}$$ which would imply $\nu_0 = 0$, a contradiction. This proves the claim, and recalling the definition of $n_2$, we conclude that $$\label{eq:nu positive} \nu_0(W) \ge C' |W|^{h_* \bar C_2} \qquad \mbox{for all $W \in {\mathcal{W}}^s$.}$$ With established, the remainder of the proof follows from the definition of convergence in the weak norm, precisely as in [@max Lemma 7.7]. We are now ready to prove the final point of our characterization of the peripheral spectrum of ${\mathcal{L}}$. \[lem:gap\] ${\mathcal{L}}$ has a spectral gap on ${{\mathcal B}}$. Recalling Lemma \[lem:peripheral\](c), suppose $\nu_q \in {\mathbb{V}}_{p/q}$. Then ${\mathcal{L}}^q \nu_q = e^{q h_*}\nu_q$ and ${\mathcal{L}}^q \nu_0 = e^{q h_*} \nu_0$. Since $T^q$ is also mixing, it suffices to prove that mixing implies the eigenspace corresponding to $e^{h_*}$ is simple in order to conclude that ${\mathcal{L}}$ can have no other eigenvalues of modulus $e^{h_*}$, i.e. ${\mathcal{L}}$ has a spectral gap. We proceed to prove this claim. Suppose $\nu_1 \in {\mathbb{V}}_0$. We will show that $\nu_1 = c \nu_0$ for some constant $c >0$. By , there exists $f_1 \in L^\infty(\nu_0)$ such that $\nu_1 = f_1 \nu_0$ and $f_1 \circ T = f_1$, $\nu_0$-a.e. Letting $$S_nf_1(x) = \sum_{k=0}^{n-1} f_1 \circ T^k(x) \, ,$$ it follows that the ergodic average $\frac 1n S_nf_1 = f_1$ for all $n \ge 1$. This implies that $f_1$ is constant on stable manifolds.
In addition, since by Lemma \[lem:leafwise\] and , the factor measure $\hat \nu_0$ is equivalent to ${\hat \mu_{\tiny{\mbox{SRB}}}}$ on the index set $\Xi$, we have that $f_1 = f_1 \circ T$ on ${\hat \mu_{\tiny{\mbox{SRB}}}}$ a.e. $W_\xi \in {\mathcal{F}}$, i.e. $f_1 = f_1 \circ T$, ${\mu_{\tiny{\mbox{SRB}}}}$-a.e. By the ergodicity of ${\mu_{\tiny{\mbox{SRB}}}}$, $f_1=$ constant ${\mu_{\tiny{\mbox{SRB}}}}$-a.e. But since this constant value holds on each stable manifold $W_\xi \in {\mathcal{F}}$, using again the equivalence of $\hat\nu_0$ and ${\hat \mu_{\tiny{\mbox{SRB}}}}$, we conclude that $f_1$ is constant $\nu_0$-a.e.

Construction and Properties of the Measure of Maximal Entropy {#sec:max}
=============================================================

Since ${\mathcal{L}}: {{\mathcal B}}\to {{\mathcal B}}$ has a spectral gap, we may decompose ${\mathcal{L}}$ as $$\label{eq:L decomp} {\mathcal{L}}^n f = e^{n h_*} \Pi_0 f + R^n f \, \mbox{ for any $n \ge 1$, $f \in {{\mathcal B}}$},$$ where $\Pi_0^2 = \Pi_0$, $\Pi_0 R = R \Pi_0 = 0$ and there exists $\bar \sigma <1$ and $C>0$ such that $\| e^{-n h_*} R^n \|_{{{\mathcal B}}} \le C \bar \sigma^n$. Indeed, we may recharacterize the definition of the spectral projector $\Pi_0$ in as, $$\Pi_0 f = \lim_{n \to \infty} e^{-n h_*} {\mathcal{L}}^n f \, ,$$ where convergence is in the ${{\mathcal B}}$ norm. Now, letting $W \in {\mathcal{W}}^s$ with $|W| \ge \delta_1/3$, we have by Lemma \[lem:growth\](b), $$\begin{split} 0 < \nu_0(W) & = \lim_{n \to \infty} e^{-n h_*} \int_W {\mathcal{L}}^n 1 \, dm_W = \lim_{n \to \infty} e^{-n h_*} \sum_{W_i \in {\mathcal{G}}_n(W)} |W_i| \\ & \le \liminf_{n \to \infty} C e^{-n h_*} \# {\mathcal{M}}_0^n \, . \end{split}$$ This implies the final limit cannot be 0. We have proved the following. \[lem:lower M\] There exists $\bar c_1>0$ such that $\# {\mathcal{M}}_0^n \ge \bar c_1 e^{n h_*}$ for all $n \ge 1$.
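As a side remark, Lemma \[lem:lower M\] complements the upper bound used in the proof of Lemma \[lem:radius\] (there, Corollary \[cor:upper M\] and Proposition \[prop:LY\] gave $\| {\mathcal{L}}^n \|_{{{\mathcal B}}} \le C e^{n h_*}$); assuming, as that usage suggests, that Corollary \[cor:upper M\] provides $\# {\mathcal{M}}_0^n \le C e^{n h_*}$, the growth of $\# {\mathcal{M}}_0^n$ is pinned down up to uniform multiplicative constants:

```latex
% Two-sided bound (under the stated assumption on Corollary [cor:upper M]):
% the exponential growth rate h_* of \#\mathcal{M}_0^n is attained with bounded multiplicative error.
\bar c_1 \, e^{n h_*} \;\le\; \# \mathcal{M}_0^n \;\le\; C \, e^{n h_*}
\qquad \mbox{for all } n \ge 1 \, .
```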
Next, consider the dual operator, ${\mathcal{L}}^* : {{\mathcal B}}^* \to {{\mathcal B}}^*$, which also has a spectral gap. Recalling our identification of $f \in {\mathcal{C}}^1(M)$ with the measure $f d{\mu_{\tiny{\mbox{SRB}}}}$ from Section \[sec:transfer\], define $$\label{eq:tnu def} {\tilde\nu}_0 := \lim_{n \to \infty} e^{-n h_*} ({\mathcal{L}}^*)^n d{\mu_{\tiny{\mbox{SRB}}}}\, ,$$ where convergence is in the dual norm, $\| \cdot \|_{{{\mathcal B}}^*}$. Clearly, ${\tilde\nu}_0 \in {{\mathcal B}}^*$, and ${\mathcal{L}}^* {\tilde\nu}_0 = e^{h_*} {\tilde\nu}_0$. By the positivity of the operator ${\mathcal{L}}^*$, we have ${\tilde\nu}_0(f) \ge 0$ for each $f \in {\mathcal{C}}^1(M)$ (recalling ${\mathcal{C}}^1(M) \subset {{\mathcal B}}$). Thus again applying [@Sch Section I.4], we conclude that ${\tilde\nu}_0$ is a Radon measure on $M$. Next, defining $f_n = e^{-n h_*} {\mathcal{L}}^n 1 \in {{\mathcal B}}$ for $n \ge 1$, we have, $${\tilde\nu}_0(f_n) = \lim_{k \to \infty} e^{-k h_*} \langle f_n, ({\mathcal{L}}^*)^k d{\mu_{\tiny{\mbox{SRB}}}}\rangle = \lim_{k \to \infty} e^{-k h_*} \langle {\mathcal{L}}^k f_n, d{\mu_{\tiny{\mbox{SRB}}}}\rangle \, ,$$ where $\langle \cdot, \cdot \rangle$ denotes the pairing between an element of ${{\mathcal B}}$ and an element of ${{\mathcal B}}^*$. 
Then, decomposing ${\mu_{\tiny{\mbox{SRB}}}}$ into its conditional measures ${\mu_{\tiny{\mbox{SRB}}}}^\xi$ and factor measure ${\hat \mu_{\tiny{\mbox{SRB}}}}$ on $W_\xi$, $\xi \in \Xi$, as in the proof of Lemma \[lem:embed\], and letting $\Xi^{\delta_1} \subset \Xi$ denote the set of indices such that $|W_\xi| \ge \delta_1/3$, we estimate $$\begin{split} {\tilde\nu}_0(f_n) & = \lim_{k \to \infty} \int_M f_{n+k} \, d{\mu_{\tiny{\mbox{SRB}}}}= \lim_{k \to \infty} \int_{\Xi} d{\hat \mu_{\tiny{\mbox{SRB}}}}(\xi) \, e^{-(n+k)h_*} \int_{W_\xi} {\mathcal{L}}^{n+k} 1 \, g_\xi \, dm_{W_\xi} |W_\xi|^{-1} \\ & \ge \lim_{k \to \infty} \int_{\Xi^{\delta_1}} d{\hat \mu_{\tiny{\mbox{SRB}}}}(\xi) \, e^{-(n+k) h_*} \sum_{W_{\xi, i} \in L_{n+k}^{\delta_1}(W_\xi)} \inf_{W_\xi} g_\xi \; |W_{\xi, i}| |W_\xi|^{-1} \\ & \ge \lim_{k \to \infty} \int_{\Xi^{\delta_1}} d{\hat \mu_{\tiny{\mbox{SRB}}}}(\xi) \, e^{-(n+k) h_*} C_g^{-1} \tfrac{2 c_0}{9} \# {\mathcal{M}}_0^{n+k} \ge {\hat \mu_{\tiny{\mbox{SRB}}}}(\Xi^{\delta_1}) C_g^{-1} \tfrac{2 c_0}{9} \bar c_1 \, , \end{split}$$ for all $n \ge 1$, where we have used and Lemma \[lem:lower\] for the second inequality, and Lemma \[lem:lower M\] for the third. Since this lower bound is independent of $n$, we have ${\tilde\nu}_0(\nu_0) > 0$. We can at last formulate the following definition. \[def:mu\_\*\] For $\psi \in {\mathcal{C}}^1(M)$, define, $$\mu_*(\psi) := \frac{\langle \psi \nu_0 , {\tilde\nu}_0 \rangle}{\langle \nu_0, {\tilde\nu}_0 \rangle} \, .$$ The measure $\mu_*$ is a probability measure on $M$ due to the positivity of $\nu_0$ and ${\tilde\nu}_0$, and since $\langle \nu_0, {\tilde\nu}_0 \rangle \neq 0$. Moreover, $\mu_*(\psi \circ T) = \mu_*(\psi)$ so that $\mu_*$ is an invariant measure for $T$. 
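The invariance of $\mu_*$ can be checked directly from the eigenvalue equations ${\mathcal{L}}\nu_0 = e^{h_*}\nu_0$ and ${\mathcal{L}}^* {\tilde\nu}_0 = e^{h_*} {\tilde\nu}_0$, together with the standard pull-out identity ${\mathcal{L}}\big((\psi \circ T) f\big) = \psi \, {\mathcal{L}}f$; the following chain is a sketch added for the reader (suppressing the regularity bookkeeping needed to justify each pairing):

```latex
% Sketch: first equality after the definition uses \mathcal{L}^*\tilde\nu_0 = e^{h_*}\tilde\nu_0,
% the next the pull-out identity, and the last \mathcal{L}\nu_0 = e^{h_*}\nu_0.
\mu_*(\psi \circ T)
  = \frac{\big\langle (\psi \circ T)\, \nu_0, \, \tilde\nu_0 \big\rangle}{\langle \nu_0, \tilde\nu_0 \rangle}
  = \frac{e^{-h_*} \big\langle \mathcal{L}\big((\psi \circ T)\, \nu_0\big), \, \tilde\nu_0 \big\rangle}{\langle \nu_0, \tilde\nu_0 \rangle}
  = \frac{e^{-h_*} \big\langle \psi \, \mathcal{L}\nu_0, \, \tilde\nu_0 \big\rangle}{\langle \nu_0, \tilde\nu_0 \rangle}
  = \frac{\langle \psi\, \nu_0, \tilde\nu_0 \rangle}{\langle \nu_0, \tilde\nu_0 \rangle}
  = \mu_*(\psi) \, .
```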
We may also characterize the spectral projector $\Pi_0$ in terms of this pairing: for any $f \in {{\mathcal B}}$, it follows from and that, $$\label{eq:projector} \Pi_0 f = \frac{\langle f, {\tilde\nu}_0 \rangle}{\langle \nu_0, {\tilde\nu}_0 \rangle } \nu_0 \, .$$ It follows immediately from the spectral gap of ${\mathcal{L}}$ that $\mu_*$ has exponential decay of correlations. \[prop:decay\] For all $q>0$, there exist constants $C = C(q)$ and $\gamma = \gamma(q) >0$ such that for all ${\varphi}, \psi \in {\mathcal{C}}^q(M)$, $$\left| \int_M {\varphi}\, \psi \circ T^n \, d\mu_* - \int_M {\varphi}\, d\mu_* \int_M \psi \, d\mu_* \right| \le C |{\varphi}|_{{\mathcal{C}}^q(M)} |\psi|_{{\mathcal{C}}^q(M)} e^{-\gamma n} \quad \mbox{ for all $n \ge 0$.}$$ We prove the proposition for ${\varphi}, \psi \in {\mathcal{C}}^1(M)$. The result for $q \in (0,1)$ then follows by a standard approximation argument. First we verify that $\psi \circ T^n {\tilde\nu}_0$ is an element of ${{\mathcal B}}^*$ for $\psi \in {\mathcal{C}}^1(M)$ and $n \ge 1$. We do this by noting that for any $\psi \in {\mathcal{C}}^1(M)$, $\psi {\tilde\nu}_0 \in {{\mathcal B}}^*$ by simply defining, $$\langle f, \psi {\tilde\nu}_0 \rangle := \langle \psi f, {\tilde\nu}_0 \rangle \quad \mbox{ for any $f \in {{\mathcal B}}$,}$$ and the expression on the right is bounded by $|\psi|_{{\mathcal{C}}^1} \| f \|_{{{\mathcal B}}} \| {\tilde\nu}_0 \|_{{{\mathcal B}}^*}$ by Lemma \[lem:piece\](b), and so the pairing defines a bounded, linear functional on ${{\mathcal B}}$, with norm at most $| \psi|_{{\mathcal{C}}^1} \| {\tilde\nu}_0 \|_{{{\mathcal B}}^*}$.
Next, define for $n \ge 1$, $$\label{eq:dual} \langle f , \psi \circ T^n {\tilde\nu}_0 \rangle := \langle e^{-n h_*} {\mathcal{L}}^n f, \psi {\tilde\nu}_0 \rangle = \langle \psi \, e^{-n h_*} {\mathcal{L}}^n f, {\tilde\nu}_0 \rangle \, .$$ The expression on the right is bounded by $$\| \psi e^{-n h_*} {\mathcal{L}}^n f \|_{{{\mathcal B}}} \| {\tilde\nu}_0 \|_{{{\mathcal B}}^*} \le |\psi|_{{\mathcal{C}}^1(M)} e^{-n h_*} \| {\mathcal{L}}^n f \|_{{{\mathcal B}}} \| {\tilde\nu}_0 \|_{{{\mathcal B}}^*} \le C |\psi|_{{\mathcal{C}}^1(M)} \| f \|_{{{\mathcal B}}} \| {\tilde\nu}_0 \|_{{{\mathcal B}}^*} \, ,$$ where we have used Lemma \[lem:piece\](b) for the first inequality and for the second, which implies $e^{-n h_*} \| {\mathcal{L}}^n f \|_{{{\mathcal B}}} \le C$. Thus defines a bounded, linear functional on ${{\mathcal B}}$, so $\psi \circ T^n {\tilde\nu}_0 \in {{\mathcal B}}^*$. Finally, using Definition \[def:mu\_\*\] and , noting that ${\varphi}\nu_0 \in {{\mathcal B}}$ by Lemma \[lem:piece\](b), and recalling , we write $$\begin{split} \int_M {\varphi}\, \psi \circ T^n & \, d\mu_* = \frac{\langle {\varphi}\, \nu_0, \psi \circ T^n {\tilde\nu}_0 \rangle }{\langle \nu_0, {\tilde\nu}_0 \rangle} = \frac{ \langle e^{-n h_*} {\mathcal{L}}^n( {\varphi}\, \nu_0) , \psi {\tilde\nu}_0 \rangle }{\langle \nu_0, {\tilde\nu}_0 \rangle} \\ & = \frac{\langle \Pi_0({\varphi}\nu_0) + e^{-n h_*} R^n({\varphi}\nu_0), \psi {\tilde\nu}_0 \rangle }{\langle \nu_0, {\tilde\nu}_0 \rangle} = \frac{\langle {\varphi}\nu_0, {\tilde\nu}_0 \rangle}{\langle \nu_0, {\tilde\nu}_0 \rangle} \frac{\langle \nu_0, \psi {\tilde\nu}_0 \rangle}{\langle \nu_0, {\tilde\nu}_0 \rangle} + \frac{ \langle e^{-n h_*} R^n({\varphi}\nu_0), \psi {\tilde\nu}_0 \rangle }{\langle \nu_0, {\tilde\nu}_0 \rangle} \, , \end{split}$$ where we have used . The first term on the right is simply $\int_M {\varphi}\, d\mu_* \int_M \psi \, d\mu_*$.
The second term is bounded by, $$e^{-n h_*} \| R^n({\varphi}\nu_0) \|_{{{\mathcal B}}} \| \psi {\tilde\nu}_0 \|_{{{\mathcal B}}^*} \le C \bar \sigma^n \| {\varphi}\nu_0 \|_{{{\mathcal B}}} | \psi |_{{\mathcal{C}}^1} \| {\tilde\nu}_0 \|_{{{\mathcal B}}^*} \le C' \bar \sigma^n |{\varphi}|_{{\mathcal{C}}^1} |\psi|_{{\mathcal{C}}^1} \, ,$$ where we have used Lemma \[lem:piece\](b) and $C'$ depends on $\| \nu_0 \|_{{{\mathcal B}}}$ and $\| {\tilde\nu}_0 \|_{{{\mathcal B}}^*}$.

Hyperbolicity and Ergodicity of $\mu_*$ {#sec:hyper}
---------------------------------------

We begin by showing that $\mu_*$ gives small measure to ${\varepsilon}$-neighborhoods of the singularity sets ${\mathcal{S}}_n^{\pm}$. \[lem:control\] For any $k \in \mathbb{N}$, there exists $C_k >0$ such that $$\mu_*({\mathcal{N}}_{\varepsilon}({\mathcal{S}}_k^{\pm})) \le C_k {\varepsilon}^{1/p} \, .$$ In particular, for any $\gamma > p$ and $k \in \mathbb{N}$, for $\mu_*$-a.e. $x \in M$, there exists $C>0$ such that $$\label{eq:approach} d(T^nx, {\mathcal{S}}_k^{\pm}) \ge C n^{-\gamma} \, , \quad \mbox{for all $n \ge 0$.}$$ First we prove the claimed bounds with respect to $\nu_0$ for each ${\mathcal{S}}^-_k$, $k \ge 1$. Let $1_{k, {\varepsilon}}$ denote the indicator function of the set ${\mathcal{N}}_{\varepsilon}({\mathcal{S}}^-_k)$. Since ${\mathcal{S}}_k^-$ comprises finitely many smooth curves, all uniformly transverse to the stable cone, by Lemma \[lem:piece\](b), $1_{k, {\varepsilon}} \nu_0 \in {{\mathcal B}}$, and in particular, $1_{k, {\varepsilon}} \nu_0 \in {{\mathcal B}}_w$. We claim that, $$\label{eq:nu bound} \nu_0({\mathcal{N}}_{\varepsilon}({\mathcal{S}}_k^-)) \le C |1_{k, {\varepsilon}} \nu_0|_w \le C_k {\varepsilon}^{1/p} \, .$$ Indeed, the first inequality follows from Lemma \[lem:embed\]. To prove the second inequality, let $W \in {\mathcal{W}}^s$ and $\psi \in {\mathcal{C}}^1(W)$ with $| \psi|_{{\mathcal{C}}^1(W)} \le 1$.
Due to the transversality of ${\mathcal{S}}_k^-$ with the stable cone, $W \cap {\mathcal{N}}_{\varepsilon}({\mathcal{S}}_k^-)$ comprises at most a finite number $N_k$ of curves, depending only on ${\mathcal{S}}_k^-$ and $\delta_0$, and not on $W$, each having length at most $C{\varepsilon}$. Thus, $$\int_W 1_{k, {\varepsilon}} \, \psi \, \nu_0 = \sum_i \int_{W_i} \psi \, \nu_0 \le \sum_i \| \nu_0 \|_{{{\mathcal B}}} |W_i|^{1/p} |\psi|_{{\mathcal{C}}^\alpha(W_i)} \le C N_k {\varepsilon}^{1/p} \, ,$$ and taking the supremum over $\psi$ and $W$ proves the second inequality in . Next, it follows from and Lemma \[lem:embed\] that $$\label{eq:weak dual} |{\tilde\nu}_0(f)| \le C |f|_w, \qquad \mbox{for all $f \in {{\mathcal B}}_w$,}$$ so that in fact ${\tilde\nu}_0 \in {{\mathcal B}}_w^* \subset {{\mathcal B}}^*$. Thus for each $k \ge 1$, by , $$\mu_*({\mathcal{N}}_{\varepsilon}({\mathcal{S}}_k^-)) = {\tilde\nu}_0(1_{k,{\varepsilon}} \nu_0) \le C |1_{k,{\varepsilon}} \nu_0|_w \le C C_k {\varepsilon}^{1/p} \, .$$ To prove the bound for ${\mathcal{S}}_k^+$, we use the invariance of $\mu_*$ together with the fact that $T^{-k}{\mathcal{S}}_k^- = {\mathcal{S}}_k^+$. Moreover, we have $T^k({\mathcal{N}}_{\varepsilon}({\mathcal{S}}_k^+)) \subset {\mathcal{N}}_{C \kappa_+^k {\varepsilon}}({\mathcal{S}}_k^-)$, where $\kappa_+$ is the maximum rate of expansion in the unstable cone. Finally, to prove , we fix $\gamma > p$ and estimate for each $k \in \mathbb{N}$, $$\sum_{n \ge 1} \mu_*({\mathcal{N}}_{n^{-\gamma}}({\mathcal{S}}_k^{\pm})) \le C_k \sum_{n \ge 1} n^{-\gamma/p} < \infty \, .$$ Thus by the Borel-Cantelli Lemma, $\mu_*$-a.e. $x \in M$ visits ${\mathcal{N}}_{n^{-\gamma}}({\mathcal{S}}_k^{\pm})$ only finitely many times along its orbit, completing the proof of the lemma. Lemma \[lem:control\] immediately implies the following corollary. \[cor:atomic\] The following items establish the hyperbolicity of the measure $\mu_*$.
- For any ${\mathcal{C}}^1$ curve $V$ uniformly transverse to the stable cone, there exists $C>0$ such that $\nu_0({\mathcal{N}}_{{\varepsilon}}(V)) \le C{\varepsilon}$ for all ${\varepsilon}>0$.

- The measures $\nu_0$ and $\mu_*$ have no atoms, and $\mu_*(W) = 0$ for all local stable and unstable manifolds, $W$.

- $\int_M |\log d(x, {\mathcal{S}}_1^{\pm})| \, d\mu_* < \infty$.

- $\mu_*$-a.e. $x \in M$ has a stable and an unstable manifold of positive length.

The proof follows directly from the control established on the measures of the neighborhoods of the singularity sets in Lemma \[lem:control\]. The argument follows exactly as in [@max Corollary 7.4]. With the control established in Lemma \[lem:control\], we may follow the same arguments as in [@max Section 7.3] to establish the ergodicity of the measure $\mu_*$. Indeed, our control is stronger than the bounds $\mu_*({\mathcal{N}}_{\varepsilon}({\mathcal{S}}_k^{\pm})) \le C_k |\log {\varepsilon}|^{-\gamma}$ for some $\gamma>1$ available in [@max], and the Hölder continuity of our strong norm $\| \cdot \|_u$ is stronger than the logarithmic modulus of continuity available in [@max]. The key result is establishing the absolute continuity of the unstable foliation with respect to $\mu_*$. Given a locally maximal Cantor rectangle $R$, let ${\mathcal{W}}^{s/u}(R)$ be the set of stable/unstable manifolds that cross $D(R)$ completely (see Section \[sec:lower\]). \[prop:cont\] Let $R$ be a locally maximal Cantor rectangle with $\mu_*(R)>0$. Fix $W^0 \in {\mathcal{W}}^s(R)$, and for $W \in {\mathcal{W}}^s(R)$, let $\Theta_W : W^0 \cap R \to W \cap R$ denote the holonomy map sliding along unstable manifolds in ${\mathcal{W}}^u(R)$. Then $\Theta_W$ is absolutely continuous with respect to $\mu_*$. This is [@max Corollary 7.9]. Its proof relies on the analogous property of absolute continuity for $\nu_0$, which in turn follows from the control established by the strong norm, and Lemma \[lem:control\].
The final step in the proof is to show that on each $W \in {\mathcal{W}}^s(R)$, the conditional measure $\mu_*^W$ of $\mu_*$ is equivalent to the leafwise measure $\nu_0$ restricted to $W$, i.e. there exists $C_W >0$ such that $$\label{eq:equivalence} C_W \mu_*^W \le \nu_0|_W \le C_W^{-1} \mu_*^W \, .$$ \[cor:ergodic\] The absolute continuity of the unstable holonomy with respect to $\mu_*$ implies the following additional properties.

- $(T^n, \mu_*)$ is ergodic for all $n \ge 1$.

- For any open set $O \subset M$, we have $\mu_*(O) > 0$.

\(a) Using absolute continuity, one establishes that each Cantor rectangle belongs to a single ergodic component following the usual Hopf argument [@max Lemma 7.15]. Then the ergodicity of $T^n$ follows from the assumption that $T$ is topologically mixing [@max Proposition 7.16].

\(b) The proof is identical to the proof of [@max Proposition 7.11].

Entropy of $\mu_*$ {#sec:entropy}
------------------

In this section, we prove that the measure-theoretic entropy of $\mu_*$ is $h_*$, by estimating the measure of dynamically defined Bowen balls for $T^{-1}$. Recall the metric $\bar d$ defined in . For $n \ge 0$ and ${\varepsilon}> 0$ and $x \in M$, define $$B_n(x, {\varepsilon}) = \{ y \in M : \bar d(T^{-j}y, T^{-j}x) \le {\varepsilon}, \, \forall \, 0 \le j \le n \} \, .$$ \[lem:bowen\] There exists $C>0$ such that for all ${\varepsilon}>0$ sufficiently small and all $n \ge 0$, we have $$\mu_*(B_n(x,{\varepsilon})) \le C e^{-n h_*} \, .$$ Fix $x \in M$, ${\varepsilon}>0$ and $n \ge 0$, and let $1_{n, {\varepsilon}}^B$ denote the indicator function of the Bowen ball $B_n(x,{\varepsilon})$. We shall prove $$\label{eq:goal} \mu_*(B_n(x,{\varepsilon})) = {\tilde\nu}_0(1_{n, {\varepsilon}}^B \nu_0) \le C |1_{n, {\varepsilon}}^B \nu_0|_w \le C' e^{-n h_*} \, ,$$ where $C'>0$ can be chosen independent of ${\varepsilon}$. The first inequality follows from , once we show that $1_{n,{\varepsilon}}^B \nu_0 \in {{\mathcal B}}_w$.
To see this, write $$1_{n ,{\varepsilon}}^B = \prod_{j=0}^n 1_{{\mathcal{N}}_{\varepsilon}(T^{-j}x)} \circ T^{-j} = \prod_{j=0}^n {\mathcal{L}}_{\mbox{\tiny SRB}}^j(1_{{\mathcal{N}}_{{\varepsilon}}(T^{-j}x)}) \, ,$$ where ${\mathcal{L}}_{\mbox{\tiny SRB}}$ denotes the transfer operator with respect to ${\mu_{\tiny{\mbox{SRB}}}}$. Since ${\mathcal{L}}_{\mbox{\tiny SRB}}$ preserves ${{\mathcal B}}_w$ (and also ${{\mathcal B}}$) by [@demers; @liv], the claim follows since $1_{{\mathcal{N}}_{\varepsilon}(T^{-j}x)}$ satisfies the assumptions of Lemma \[lem:piece\]: $\partial {\mathcal{N}}_{\varepsilon}(T^{-j}x)$ consists of a single circular arc, together with possibly part of $\partial M$, both of which satisfy the weak transversality condition of that lemma for ${\varepsilon}$ sufficiently small. Applying Lemma \[lem:piece\](b) inductively in $j$ completes the proof of the claim, and of the first inequality in . Next, since $\nu_0$ is a non-negative leafwise measure by Lemma \[lem:leafwise\], we have $\int_W \psi \, \nu_0 \ge 0$ for all $W \in {\mathcal{W}}^s$ and $\psi \ge 0$. Then since $|\int_W \psi \, \nu_0| \le \int_W |\psi| \, \nu_0$, we can achieve the supremum in the weak norm of $\nu_0$ by restricting to test functions $\psi \ge 0$. Now take $W \in {\mathcal{W}}^s$, $\psi \in {\mathcal{C}}^1(W)$ with $\psi \ge 0$ and $|\psi|_{{\mathcal{C}}^1(W)} \le 1$, and suppose that $W \cap B_n(x,{\varepsilon}) \neq \emptyset$. 
Then using that $\nu_0$ is an eigenfunction of ${\mathcal{L}}$, $$\int_W \psi \, 1_{n,{\varepsilon}}^B \, \nu_0 = \int_W \psi \, 1_{n,{\varepsilon}}^B \, e^{-n h_*} {\mathcal{L}}^n \nu_0 = e^{-n h_*} \sum_{W_i \in {\mathcal{G}}_n(W)} \int_{W_i} \psi \circ T^n \, 1_{n,{\varepsilon}}^B \circ T^n \, \nu_0 \, .$$ Observe that $1_{n , {\varepsilon}}^B \circ T^n = 1_{T^{-n}(B_n(x,{\varepsilon}))}$, and that $$T^{-n}(B_n(x,{\varepsilon})) = \{ y \in M : \bar d(T^jx, T^jy) \le {\varepsilon}, \, \forall \, 0 \le j \le n \} \, .$$ Thus recalling , if ${\varepsilon}< 10 \, {\mbox{diam}}(M)$, and $\bar d(T^jx, T^jy) \le {\varepsilon}$, then $T^jx, T^jy$ belong to the same set $\overline M_{i_j}^+$ for each $j$, and so $T^{-n}(B_n(x,{\varepsilon}))$ belongs to a single element of ${\mathcal{M}}_0^n$. Thus for ${\varepsilon}< \delta_0$, there are at most two $W_i \in {\mathcal{G}}_n(W)$ with $W_i \cap T^{-n}(B_n(x, {\varepsilon})) \neq \emptyset$. On such $W_i$, the positivity of $\nu_0$ implies, $$\int_{W_i} \psi \circ T^n \, 1_{n,{\varepsilon}}^B \circ T^n \, \nu_0 \le \nu_0(W_i) \le |\nu_0|_w \, .$$ Thus, $$\int_W \psi \, 1_{n,{\varepsilon}}^B \, \nu_0 \le e^{-n h_*} 2 |\nu_0|_w \, ,$$ and taking the supremum over $\psi$ and $W$ yields the final inequality in . \[prop:entropy\] For $\mu_*$ defined as in Definition \[def:mu\_\*\], we have $h_{\mu_*}(T) = h_*$. Recall that $\int_M |\log d(x, {\mathcal{S}}_1^{\pm})| \, d\mu_* < \infty$ by Corollary \[cor:atomic\](c), and that $\mu_*$ is ergodic by Corollary \[cor:ergodic\]. Thus applying [@DWY Proposition 3.1],[^10] we conclude that for $\mu_*$-a.e. 
$x \in M$, $$\lim_{{\varepsilon}\to 0} \liminf_{n \to \infty} - \frac 1n \log \mu_*(B_n(x,{\varepsilon})) = \lim_{{\varepsilon}\to 0} \limsup_{n \to \infty} - \frac 1n \log \mu_*(B_n(x,{\varepsilon})) = h_{\mu_*}(T^{-1}) = h_{\mu_*}(T) \, .$$ On the other hand, Lemma \[lem:bowen\] implies that for all ${\varepsilon}>0$ sufficiently small, $$\liminf_{n \to \infty} - \frac 1n \log \mu_*(B_n(x,{\varepsilon})) \ge h_* \, .$$ Thus $h_{\mu_*}(T) \ge h_*$. But $h_{\mu_*}(T) \le h_*$ by Theorem \[thm:initial\](d), so equality follows. Uniqueness of $\mu_*$ {#sec:unique} --------------------- In this section we prove that $\mu_*$ is the unique invariant probability measure with $h_{\mu_*}(T) = h_*$. The proof of uniqueness follows very closely that in [@max Section 7.7]. We include the proof to point out several differences in the initial estimates on elements of ${\mathcal{M}}_{-n}^0$, and for completeness. The idea of the proof is to adapt Bowen’s proof of the uniqueness of equilibrium states to the setting of maps with discontinuities. The key estimates will be to show that while not all elements of ${\mathcal{M}}_{-n}^0$ satisfy good lower bounds on their measure, most elements (in the sense of Lemma \[lem:most good\]) have satisfied good lower bounds at some point in the recent past (in the sense of Lemma \[lem:long good\]). Recall that ${\mathcal{M}}_0^n$ denotes the set of maximal, open connected components on which $T^n$ is smooth, while ${\mathcal{M}}_{-n}^0$ denotes the analogous set for $T^{-n}$. Choose $\delta_2>0$ sufficiently small that for all $n, k \in \mathbb{N}$, if $A \in {\mathcal{M}}_{-k}^n$ is such that ${\mbox{diam}}^u(A) \le \delta_2$ and ${\mbox{diam}}^s(A) \le \delta_2$, then $A \setminus {\mathcal{S}}^{\pm}$ consists of no more than $K_1$ connected components. Such a choice of $\delta_2$ is possible by property (P1) and Convention \[convention: n\_0=1\]. 
For $n \ge 1$, define $$B_{-2n}^0 = \{ A \in {\mathcal{M}}_{-2n}^0 : \forall j, \, 0 \le j \le n/2, \, T^{-j}A \subset E \in {\mathcal{M}}_{-n+j}^0 \mbox{ such that } {\mbox{diam}}^u(E) < \delta_2 \} \, .$$ Define $B_0^{2n} \subset {\mathcal{M}}_0^{2n}$ analogously with ${\mbox{diam}}^u(E)$ replaced by ${\mbox{diam}}^s(E)$. Next, let $$\label{eq:B2n} B_{2n} := \{ A \in {\mathcal{M}}_{-2n}^0 : \mbox{ either $A \in B_{-2n}^0$ or $T^{-2n}A \in B_0^{2n}$ } \} \, ,$$ and $G_{2n} = {\mathcal{M}}_{-2n}^0 \setminus B_{2n}$. We think of $B_{2n}$ as the set of ‘bad’ elements and $G_{2n}$ as the set of ‘good’ elements. Note that for any $n \ge 1$, each $A \in {\mathcal{M}}_{-n}^0$ satisfies ${\mbox{diam}}^s(A) \le C \Lambda^{-n}$. We choose $\bar n \in \mathbb{N}$ such that $C\Lambda^{-\bar n} \le \delta_2$. Our first lemma shows that the cardinality of $B_{2n}$ is small relative to $e^{n h_*}$ for large $n$. \[lem:most good\] There exists $C>0$ such that for all $n \ge \bar n$, $$\# B_{2n} \le C e^{3n h_*/2} K_1^{n/2} \le C \rho^{n/2} e^{2n h_*} \, .$$ For $n \ge \bar n$, suppose $A \in B_{-2n}^0 \subset {\mathcal{M}}_{-2n}^0$. For simplicity assume $n$ is even; otherwise, we may use $\lfloor n/2 \rfloor$ in place of $n/2$. For $0 \le j \le n/2$, let $A_j$ denote the element of ${\mathcal{M}}_{-3n/2 - j}^0$ containing $T^{- (n/2-j)}A \in {\mathcal{M}}^{n/2-j}_{-3n/2-j}$. Since $A \in B_{-2n}^0$ and by choice of $\bar n$, it follows that $\max \{ {\mbox{diam}}^s(A_j), {\mbox{diam}}^u(A_j) \} \le \delta_2$ for each $0 \le j \le n/2$. By choice of $\delta_2$, the number of connected components of ${\mathcal{M}}^1_{-3n/2-j}$ in each $A_j$ is at most $K_1$. Fixing $A_0 \in {\mathcal{M}}_{-3n/2}^0$ and applying this estimate inductively in $j$, we conclude that $\# \{ A' \in B_{-2n}^0 : T^{-n/2}A' \subset A_0 \} \le K_1^{n/2}$. 
Summing over the possible $A_0 \in {\mathcal{M}}_{-3n/2}^0$ yields, $$\# B_{-2n}^0 \le \# {\mathcal{M}}_{-3n/2}^0 K_1^{n/2} \le C e^{3n h_*/2} \rho^{n/2} \Lambda^{n/2} \le C \rho^{n/2} e^{2n h_*} \, ,$$ where we have used Proposition \[prop:M0n\] and Convention \[convention: n\_0=1\] for the second inequality, and Lemma \[lem:growth\](d) for the third. Next, if $A \in {\mathcal{M}}_0^n$, then ${\mbox{diam}}^u(A) \le C\Lambda^{-n}$ as well, so the same choice of $\bar n$ permits the analogous estimate to hold for $\# B_0^{2n}$ for $n \ge \bar n$. Finally, since there is a one-to-one correspondence between elements of ${\mathcal{M}}_0^n$ and ${\mathcal{M}}_{-n}^0$, we have $\# B_{2n} \le \# B_{-2n}^0 + \# B_0^{2n}$, completing the proof of the lemma. Our next lemma shows that long elements of ${\mathcal{M}}_{-j}^0$ enjoy good lower bounds on their $\mu_*$-measure. These lower bounds will eventually be linked to elements of $G_{2n}$. \[lem:long good\] There exists a constant $C_{\delta_2} > 0$ such that for all $j \ge 1$ and $A \in {\mathcal{M}}_{-j}^0$ such that $\min \{ {\mbox{diam}}^u(A), {\mbox{diam}}^s(T^{-j}A) \} \ge \delta_2$, it follows that, $$\mu_*(A) \ge C_{\delta_2} e^{-j h_*} \, .$$ As in the proof of Lemma \[lem:lower\], we choose a finite set ${{\mathcal R}}_{\delta_2} = \{ R_1, \ldots, R_\ell \}$ of locally maximal Cantor rectangles with $\mu_*(R_i) >0$, such that every stable curve of length $\delta_2$ properly crosses at least one $R_i$ in the stable direction, and every unstable curve of length $\delta_2$ properly crosses at least one $R_i$ in the unstable direction. Now let $j \ge 1$ and $A \in {\mathcal{M}}_{-j}^0$ be as in the statement of the lemma. By choice of ${{\mathcal R}}_{\delta_2}$, an unstable curve in $A$ properly crosses at least one $R_i \in {{\mathcal R}}_{\delta_2}$. Since $\partial A \subset {\mathcal{S}}_n^-$, $\partial A$ cannot intersect any unstable manifolds in $R_i$ since unstable manifolds cannot be cut under $T^{-n}$. 
Thus $A$ must fully cross $R_i$ in the unstable direction. Similarly, $T^{-j}A \in {\mathcal{M}}_0^j$ must fully cross at least one rectangle $R_k \in {{\mathcal R}}_{\delta_2}$ in the stable direction. Let $\Xi_i$ denote the index set of the family of stable manifolds comprising $R_i$. If $\xi \in \Xi_i$, set $W_{\xi, A} = W_\xi \cap A$. Since $T^{-j}$ is smooth on $A$ and $T^{-j}A$ fully crosses $R_k$ in the stable direction, it must be that $T^{-j}(W_{\xi,A})$ is a single curve that properly crosses $R_k$, and so contains a stable manifold in the family corresponding to $R_k$. Let $s >0$ denote the length of the shortest stable manifold in the rectangles belonging to ${{\mathcal R}}_{\delta_2}$. Applying , we estimate for $\xi \in \Xi_i$, $$\int_{W_{\xi, A}} \nu_0 = e^{-j h_*} \int_{W_{\xi,A}} {\mathcal{L}}^j \nu_0 = e^{-j h_*} \int_{T^{-j}(W_{\xi,A})} \nu_0 \ge e^{-j h_*} C' s^{h_* \bar C_2} \, .$$ Next, we let $D(R_i)$ denote the smallest solid rectangle containing $R_i$, and disintegrate $\mu_*$ on $\{ W_\xi \}_{\xi \in \Xi_i}$ into conditional measures $\mu_*^\xi$ and a factor measure $\hat \mu_*$ on $\Xi_i$. Then using the equivalence of the conditional measure $\mu_*^\xi$ with $\nu_0$ on $\mu_*$-a.e. $\xi \in \Xi_i$ from , we have $$\begin{split} \mu_*(A) & \ge \mu_*(A \cap D(R_i)) \ge \int_{\Xi_i} \mu_*^\xi(A) \, d\hat \mu_*(\xi) \\ & \ge \int_{\Xi_i} C_\xi^{-1} \nu_0(W_{\xi,A}) \, d\hat \mu_*(\xi) \ge C' s^{h_* \bar C} e^{-j h_*} \int_{\Xi_i} C_\xi^{-1} \, d\hat \mu_*(\xi) \, , \end{split}$$ which completes the proof of the lemma due to the finiteness of ${{\mathcal R}}_{\delta_2}$. Our main proposition of the section is the following. \[prop:max\] The measure $\mu_*$ is the unique measure of maximal entropy. Since $\mu_*$ is ergodic, it suffices to prove that if $\mu$ is an invariant probability measure that is singular with respect to $\mu_*$, then $h_\mu(T) < h_{\mu_*}(T)$. 
Note first that with respect to the metric $\bar d$ defined in , both $T$ and $T^{-1}$ are expansive: if ${\varepsilon}_0 < {\mbox{diam}}(M)$ and $\bar d(T^jx, T^jy) < {\varepsilon}_0$ for all $j \in \mathbb{Z}$, then $x=y$. By definition of $\bar d$, the condition $\bar d(T^jx, T^jy) < {\varepsilon}_0$ for all $j$ implies that $T^jx, T^jy$ belong to the same domain $\overline M_{i_j}^+$ for all $j$, which implies $x=y$ since the sequence of partitions ${\mathcal{P}}_k$ from Section \[sec:ent def\] is generating. For $n \ge 1$, define ${\mathcal{Q}}_n$ to be the partition of maximal, connected components of $M$ (with boundary points doubled according to Convention \[con:pointwise\]) on which $T^{-n}$ is continuous. By the discussion of Section \[sec:ent def\], ${\mathcal{Q}}_n$ consists of elements with non-empty interior which correspond to elements of ${\mathcal{M}}_{-n}^0$, plus isolated points. Since the entropy of an atomic measure is 0, we may assume that $\mu$ gives 0 mass to the isolated points, and it follows from Lemma \[lem:control\] that $\mu_*$ does as well. Thus the only elements of ${\mathcal{Q}}_n$ with positive measure correspond to elements of ${\mathcal{M}}_{-n}^0 = B_n \cup G_n$. Accordingly, we throw out the atoms in ${\mathcal{Q}}_n$ and continue to call this collection of sets by the same name. Since $\mu$ is singular with respect to $\mu_*$, there exists a Borel set $F \subset M$ with $T^{-1}F = F$, $\mu_*(F) = 0$, and $\mu(F)=1$. Our first step is to approximate $F$ by elements of ${\mathcal{Q}}_n$. \[sub:diff\] For each $n \ge \bar n$, there exists a finite union ${\mathcal{C}}_n$ of elements of ${\mathcal{Q}}_n$ such that $$\lim_{n \to \infty} (\mu + \mu_*)((T^{-n/2} {\mathcal{C}}_n) \bigtriangleup F) = 0 \, .$$ This is [@max Sublemma 7.24], and its proof relies on the fact that the diameters of elements of $T^{-n/2}({\mathcal{Q}}_n)$ tend to 0 as $n$ increases due to the uniform hyperbolicity of $T$. 
The invariance of $F$ implies in addition that $$\lim_{n \to \infty} (\mu + \mu_*)({\mathcal{C}}_n \bigtriangleup F) = \lim_{n \to \infty} (\mu + \mu_*)((T^{n/2}{\mathcal{C}}_n) \bigtriangleup F) = 0 \, .$$ By the proof of [@max Sublemma 7.24], for each $n$, there exists a compact set ${\mathcal{K}}(n)$ that defines the approximating collection $\tilde {\mathcal{C}}_n = T^{-n/2} {\mathcal{C}}_n \subset {\mathcal{M}}_{-n/2}^{n/2}$ and satisfies ${\mathcal{K}}(n) \nearrow F$ as $n \to \infty$. To exploit this approximation, we group elements $Q \in {\mathcal{Q}}_{2n}$ according to whether $T^{-n}Q \subset \cup \tilde {\mathcal{C}}_n$ or $T^{-n} Q \cap (\cup \tilde {\mathcal{C}}_n) = \emptyset$, where $\cup \tilde {\mathcal{C}}_n$ denotes the union of elements of $\tilde {\mathcal{C}}_n$ in $M$. Since we have eliminated isolated points, if $T^{-n}Q \cap (\cup \tilde {\mathcal{C}}_n) \neq \emptyset$, then $T^{-n}Q \in {\mathcal{M}}_{-n}^n$ is contained in an element of ${\mathcal{M}}_{-n/2}^{n/2}$ that intersects ${\mathcal{K}}(n)$. Thus $Q \subset \cup T^n \tilde {\mathcal{C}}_n = \cup T^{n/2} {\mathcal{C}}_n$. As noted above, the diameters of $T^{-n}{\mathcal{Q}}_{2n}$ tend to 0 as $n \to \infty$, so by the expansive property of $T$, since the image under $T^{2n}$ of each element of ${\mathcal{Q}}_{2n}$ is connected, ${\mathcal{Q}}_{2n}$ is a generating partition for $T^{2n}$ for $n$ large enough. 
Thus, $$h_\mu(T^{2n}) = h_\mu(T^{2n}, {\mathcal{Q}}_{2n}) \le H_\mu({\mathcal{Q}}_{2n}) = - \sum_{Q \in {\mathcal{Q}}_{2n}} \mu(Q) \log \mu(Q) \, .$$ And so, $$\begin{split} 2n h_\mu(T) & = h_\mu(T^{2n}) \le - \sum_{Q \in {\mathcal{Q}}_{2n}} \mu(Q) \log \mu(Q) \\ & \le - \sum_{Q \subset \cup T^n \tilde {\mathcal{C}}_n} \mu(Q) \log \mu(Q) - \sum_{Q \cap (\cup T^n \tilde {\mathcal{C}}_n) = \emptyset} \mu(Q) \log \mu(Q) \\ & \le \frac 2e + \mu( \cup T^n \tilde {\mathcal{C}}_n) \log \# ({\mathcal{Q}}_{2n} \cap T^n\tilde {\mathcal{C}}_n) + \mu(M \setminus (\cup T^n \tilde {\mathcal{C}}_n)) \log \#({\mathcal{Q}}_{2n} \setminus (T^n \tilde {\mathcal{C}}_n)) \, , \end{split}$$ where in the last line we have used that for $p_j>0$ with $\sum_{j=1}^N p_j \le 1$, it holds that $$- \sum_{j=1}^N p_j \log p_j \le \frac 1e + (\log N) \sum_{j=1}^N p_j \, ,$$ see for example [@KH eq. (20.3.5)]. We have applied this fact with $p_j = \mu(Q)$ to both sums separately. Next, since $- h_{\mu_*}(T) = \left( \mu(\cup T^n\tilde {\mathcal{C}}_n) + \mu(M \setminus (\cup T^n \tilde {\mathcal{C}}_n)) \right) \log e^{-h_*}$, we estimate for $n \ge \bar n$, $$\label{eq:splitting C_n} \begin{split} 2n&(h_\mu(T) - h_{\mu_*}(T)) - \frac 2e \\ & \le \mu(\cup T^n \tilde {\mathcal{C}}_n) \log \sum_{Q \subset \cup T^n \tilde {\mathcal{C}}_n} e^{-2n h_*} + \mu(M \setminus (\cup T^n \tilde {\mathcal{C}}_n)) \log \sum_{Q \in {\mathcal{Q}}_{2n}\setminus (T^n \tilde {\mathcal{C}}_n)} e^{-2n h_*} \\ & \le \mu(\cup {\mathcal{C}}_n) \log \left( \sum_{Q \in G_{2n} \cap T^n \tilde {\mathcal{C}}_n} e^{-2n h_*} + \sum_{Q \in B_{2n} \cap T^n \tilde {\mathcal{C}}_n} e^{-2n h_*} \right) \\ & \qquad + \mu(M \setminus (\cup {\mathcal{C}}_n)) \log \left( \sum_{Q \in G_{2n} \setminus T^n \tilde {\mathcal{C}}_n} e^{-2n h_*} + \sum_{Q \in B_{2n} \setminus T^n \tilde {\mathcal{C}}_n} e^{-2n h_*} \right) \, , \end{split}$$ where for the last inequality, we have used the invariance of $\mu$. 
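The counting inequality cited from [@KH] admits a one-line proof via Jensen's inequality; we sketch it here for the reader's convenience, writing $S := \sum_{j=1}^N p_j \le 1$:

```latex
% Concavity of \log with weights p_j/S gives
-\sum_{j=1}^N p_j \log p_j
  = S \sum_{j=1}^N \frac{p_j}{S} \log \frac{1}{p_j}
  \le S \log\Big( \sum_{j=1}^N \frac{p_j}{S} \cdot \frac{1}{p_j} \Big)
  = S \log \frac{N}{S}
  = (\log N)\, S + S \log \frac{1}{S}
  \le (\log N) \sum_{j=1}^N p_j + \frac{1}{e} \, ,
% using x \log(1/x) \le 1/e for all x \in (0,1] in the last step.
```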
By Lemma \[lem:most good\], the sums over the two subsets of $B_{2n}$ are bounded by $C\rho^{n/2}$. We focus on estimating the sums over the two subsets of $G_{2n}$. The following is proved in [@max Section 7.7]: For each $Q \in G_{2n} \subset {\mathcal{M}}_{-2n}^0$, there exist $j, k \in \mathbb{N}$, $0 \le j,k \le n/2$ and $\bar E \in {\mathcal{M}}_{-2n+j+k}^0$ such that $T^{-j}Q \subset \bar E$ and $\min \{ {\mbox{diam}}^u(\bar E), {\mbox{diam}}^s(T^{-2n+j+k}\bar E) \} \ge \delta_2$. We call such a triple $(\bar E, j, k)$ an [*admissible triple*]{} for $Q \in G_{2n}$, and note that by Lemma \[lem:long good\], $$\label{eq:bar E} \mu_*(\bar E) \ge C_{\delta_2} e^{(-2n+j+k)h_*} \, .$$ There may be many admissible triples for a fixed $Q \in G_{2n}$. Define the unique [*maximal triple*]{} for $Q$ by taking first the maximum $j$, then the maximum $k$ over all admissible triples for $Q$. Denote by ${\mathcal{E}}_{2n}$ the set of maximal triples corresponding to elements of $G_{2n}$, and for $(\bar E,j,k) \in {\mathcal{E}}_{2n}$, set $${\mathcal{A}}_M(\bar E, j,k) = \{ Q \in G_{2n} : (\bar E, j,k) \mbox{ is the maximal triple for $Q$} \} \, .$$ Since $\bar E \in {\mathcal{M}}_{-2n+j+k}^0$ and $G_{2n} \subset {\mathcal{M}}_{-2n}^0$, it follows from Proposition \[prop:M0n\] that $\# {\mathcal{A}}_M(\bar E, j, k) \le C e^{(j+k)h_*}$ for some $C$ independent of $(\bar E, j,k)$ and $n$. The following sublemma is [@max Sublemma 7.25], which implies that if we organize our counting according to maximal triples, we avoid unwanted redundancies. \[sub:disjoint\] If $(\bar E_1, j_1, k_1), (\bar E_2, j_2, k_2)$ are distinct elements of ${\mathcal{E}}_{2n}$ with $j_2 \ge j_1$, then $T^{-(j_2-j_1)}\bar E_1 \cap \bar E_2 = \emptyset$. If $Q \in T^n\tilde {\mathcal{C}}_n \cap {\mathcal{A}}_M(\bar E,j,k)$, then by definition of the maximal triple, $T^{-n+j} \bar E \in {\mathcal{M}}_{-n+k}^{n-j}$ contains $T^{-n}Q$. 
Since $j, k \le n/2$, $T^{-n+j}\bar E$ is contained in an element of ${\mathcal{M}}_{-n/2}^{n/2}$ that also contains $T^{-n}Q$ and intersects ${\mathcal{K}}(n)$. Thus $T^{-n+j} \bar E \subset \cup \tilde {\mathcal{C}}_n$ whenever $T^n\tilde {\mathcal{C}}_n \cap {\mathcal{A}}_M(\bar E, j ,k) \neq \emptyset$, and so ${\mathcal{A}}_M(\bar E, j,k) \subset T^n \tilde {\mathcal{C}}_n$ whenever $T^n\tilde {\mathcal{C}}_n \cap {\mathcal{A}}_M(\bar E, j ,k) \neq \emptyset$. Using these observations together with , we estimate $$\begin{split} & \sum_{Q \in G_{2n} \cap T^n \tilde {\mathcal{C}}_n} e^{-2n h_*} \le \sum_{(\bar E, j,k) \in {\mathcal{E}}_{2n} : \bar E \subset T^{n-j}\tilde {\mathcal{C}}_n} \sum_{Q \in {\mathcal{A}}_M(\bar E,j,k)} e^{-2n h_*} \\ & \le \sum_{(\bar E, j,k) \in {\mathcal{E}}_{2n} : \bar E \subset T^{n-j}\tilde {\mathcal{C}}_n} C e^{(-2n +j+k)h_*} \le \sum_{(\bar E, j,k) \in {\mathcal{E}}_{2n} : \bar E \subset T^{n-j}\tilde {\mathcal{C}}_n} C' \mu_*(\bar E) \\ & \le \sum_{(\bar E, j,k) \in {\mathcal{E}}_{2n} : \bar E \subset T^{n-j}\tilde {\mathcal{C}}_n} C' \mu_*(T^{-n+j}\bar E) \le C' \mu_*(\cup \tilde {\mathcal{C}}_n) = C' \mu_*(\cup {\mathcal{C}}_n) \, , \end{split}$$ where we have used the invariance of $\mu_*$ and the constant $C'$ is independent of $n$. In the last line we have used Sublemma \[sub:disjoint\] in order to sum over the elements of ${\mathcal{E}}_{2n}$ without double counting. Similarly, since $T^{-n+j}\bar E \subset M \setminus (\cup \tilde {\mathcal{C}}_n)$ whenever $T^n \tilde {\mathcal{C}}_n \cap {\mathcal{A}}_M(\bar E , j,k) = \emptyset$, the sum over $Q \in G_{2n} \setminus T^n \tilde {\mathcal{C}}_n$ is bounded by $C' \mu_*(M \setminus (\cup \tilde {\mathcal{C}}_n))$. 
Putting these estimates together with allows us to conclude the argument, $$\begin{split} 2n(h_\mu(T) - h_{\mu_*}(T)) - \frac 2e & \le \mu(\cup {\mathcal{C}}_n) \log \left(C' \mu_*(\cup {\mathcal{C}}_n) + C \rho^{n/2} \right) \\ & \quad + \mu(M \setminus (\cup {\mathcal{C}}_n)) \log \left(C' \mu_*(M \setminus (\cup {\mathcal{C}}_n)) + C \rho^{n/2}\right) \, . \end{split}$$ Then since $\mu(\cup {\mathcal{C}}_n) \to 1$ and $\mu_*(\cup {\mathcal{C}}_n) \to 0$ as $n \to \infty$, the quantity on the right side of the inequality tends to $-\infty$. This forces $h_\mu(T) < h_{\mu_*}(T)$ to permit the left side to tend to $-\infty$ as well. [BCFT]{} V. Baladi, M.F. Demers, and C. Liverani, *Exponential decay of correlations for finite horizon Sinai billiard flows,* Invent. Math. [**211**]{} (2018), 39–177. V. Baladi and M.F. Demers, *On the measure of maximal entropy for finite horizon Sinai billiard maps*, to appear in Journal Amer. Math. Soc. R. Bowen, *Periodic points and measures for Axiom A diffeomorphisms,* Trans. Amer. Math. Soc. **154** (1971) 377–397 R. Bowen, *Topological entropy for non-compact sets,* Trans. Amer. Math. Soc. **49** (1973) 125–136 R. Bowen, *Maximizing entropy for a hyperbolic flow,* Math. Systems Theory **7** (1974) 300–303 R. Bowen, *Some systems with unique equilibrium states,* Math. Systems Theory **8** (1974/75) 193–202 R. Bowen and D. Ruelle, *The ergodic theory of Axiom A flows*, Inventiones Math. [**29**]{} (1975), 181–202. M. Brin and A. Katok, *On local entropy,* Geometric Dynamics (Rio de Janeiro, 1981) Lecture Notes in Mathematics [**1007,**]{} Springer: Berlin (1983) 30–38 K. Burns, V. Climenhaga, T. Fisher, and D.J. Thompson, *Unique equilibrium states for geodesic flows in nonpositive curvature,* Geom. Funct. Anal. **28** (2018) 1209–1259 J. Buzzi, *The degree of Bowen factors and injective codings of diffeomorphisms,* arXiv:1807.04017, v3 (December 2019). J. Buzzi, S. Crovisier and O. 
Sarig, *Measures of maximal entropy for surface diffeomorphisms*, arXiv:1811.02240, v2 (January 2019). N.I. Chernov and R. Markarian, *Chaotic Billiards,* Math. Surveys and Monographs [**127**]{}, Amer. Math. Soc. (2006) N.I. Chernov and H.-K. Zhang, *On statistical properties of hyperbolic systems with singularities*, J. Stat. Phys. [**136**]{} (2009), 615–642. V. Climenhaga, T. Fisher, and D.J. Thompson, *Unique equilibrium states for Bonatti-Viana diffeomorphisms*, Nonlinearity [**31**]{}:6 (2018), 2532–2577. V. Climenhaga, G. Knieper, and K. War, *Uniqueness of the measure of maximal entropy for geodesic flows on certain manifolds without conjugate points,* arXiv:1903.09831, v1 (March 2019). V. Climenhaga, Ya. Pesin and A. Zelerowicz, *Equilibrium measures for some partially hyperbolic systems*, arXiv:1810.08663, v3 (July 2019). M.F. Demers and C. Liverani, *Stability of statistical properties in two-dimensional piecewise hyperbolic maps*, Trans. Amer. Math. Soc. [**360**]{}:9 (2008), 4777-4814. M.F. Demers, P. Wright, and L.-S. Young, *Entropy, Lyapunov exponents and escape rates in open systems,* Ergod. Th. Dynam. Sys. [**32**]{} (2012) 1270–1301 M.F. Demers and H.-K. Zhang, *Spectral analysis for the transfer operator for the Lorentz gas,* J. Mod. Dyn. [**5**]{} (2011) 665–709 M.F. Demers and H.-K. Zhang, *A functional analytic approach to perturbations of the Lorentz gas*, Comm. Math. Phys. [**324**]{} (2013) 767–830 M.F. Demers and H.-K. Zhang, *Spectral analysis of hyperbolic systems with singularities,* Nonlinearity [**27**]{} (2014) 379–433 D. Dolgopyat, *On decay of correlations in Anosov flows,* Ann. of Math. [**147**]{} (1998), 357–390. S. Gou[ë]{}zel and C. Liverani, *Compact locally maximal hyperbolic sets for smooth maps: fine statistical properties,* J. Diff. Geom. **79** (2008), 433–477. H. Hennion, *Sur un théorème spectral et son application aux noyaux Lipchitziens,* Proc. Amer. Math. Soc. [**118**]{}:2 (1993), 627–634. A. Katok and B. 
Hasselblatt, [*Introduction to the Modern Theory of Dynamical Systems,*]{} Cambridge University Press (1995). Y. Lima and C. Matheus, *Symbolic dynamics for non-uniformly hyperbolic surface maps with discontinuities*, Ann. Sci. Éc. Norm. Supér. [**51**]{}:1 (2018), 1–38. C. Liverani, [*Decay of correlations,*]{} Ann. of Math. [**142**]{} (1995), 239–301. C. Liverani, *On contact Anosov flows*, Ann. of Math. [**159**]{}:3 (2004), 1275–1312. C. Liverani and M.P. Wojtkowski, *Ergodicity in Hamiltonian systems*, Dynamics Reported [**4**]{} (1995), 130–202. R. Mañé, *A proof of Pesin’s formula,* Ergodic Th. Dynam. Sys. [**1**]{} (1981), 95–102. G.A. Margulis, *Certain applications of ergodic theory to the investigation of manifolds of negative curvature* (Russian) Funkcional. Anal. i Pril. **3** (1969) 89–90 G.A. Margulis, *On some Aspects of the Theory of Anosov systems,* with a survey by R. Sharp: *Periodic orbits of hyperbolic flows,* Springer: Berlin (2004) W. Parry and M. Pollicott, *An analogue of the prime number theorem for closed orbits of Axiom A flows,* Ann. of Math. **118** (1983), 573–591. Ya.B. Pesin, *Dynamical systems with generalized hyperbolic attractors: hyperbolic, ergodic and topological properties*, Ergod. Th. and Dynam. Sys. [**12**]{}:1 (1992), 123–151. M. Pollicott, and R. Sharp, *Exponential error terms for growth functions on negatively curved surfaces,* Amer. J. Math. **120** (1998) 1019–1042 D. Ruelle, *Thermodynamic Formalism: The Mathematical Structures of Classical Equilibrium Statistical Mechanics*, Addison-Wesley, 1978, 183 pp. D. Ruelle, *Locating resonances for Axiom A dynamical systems*, J. Stat. Phys. [**44**]{} (1986), 281–292. O. Sarig, *Bernoulli equilibrium states for surface diffeomorphisms,* J. Mod. Dyn. [**5**]{} (2011) 593–608 O. Sarig, *Symbolic dynamics for surface diffeomorphisms with positive entropy,* J. Amer. Math. Soc. [**26**]{} (2013) 341–426 L. 
Schwartz, *Théorie des distributions,* Publications de l’Institut de Mathématique de l’Université de Strasbourg, Hermann: Paris (1966) Ya. Sinai, *Gibbs measures in ergodic theory*, Russian Math. Surveys [**27**]{}:4 (1972), 21–69. P. Walters, *An Introduction to Ergodic Theory,* Graduate Texts in Math. [**79**]{} Springer: New York (1982). L.-S. Young, [*Statistical properties of dynamical systems with some hyperbolicity,*]{} Ann. of Math. [**147**]{} (1998), 585–650. [^1]: MD was partly supported by NSF grant DMS 1800321. [^2]: Recall that an SRB measure for a hyperbolic system is an invariant probability measure whose conditional measures on local unstable manifolds are absolutely continuous with respect to the Riemannian volume. [^3]: This implies in particular that $\| DT \|$ is bounded on each $M_i^+$, so that this class of maps does not include dispersing billiards. [^4]: Contrast this with [@max Lemma 3.1], where the analogous construction yields connected elements due to the property of continuation of singularities enjoyed by dispersing billiards. [^5]: Neither of these distances will satisfy the triangle inequality, but that is irrelevant for our purposes. [^6]: This space is strictly smaller than the set of ${\mathcal{C}}^\alpha$ functions, yet contains ${\mathcal{C}}^{\alpha'}$ for each $\alpha' > \alpha$. We adopt this usage in order that the embedding of our strong space in our weak space is injective (Lemma \[lem:include\]). [^7]: Both [@Pesin1] and [@chernov; @pw] give this formula for the conditional measures of ${\mu_{\tiny{\mbox{SRB}}}}$ on unstable manifolds. Yet, due to our assumption (P2), ${\mu_{\tiny{\mbox{SRB}}}}$ is an SRB measure for $T^{-1}$ as well, and so enjoys the analogous properties on stable manifolds of $T$. [^8]: Note $J_{W_\xi}T^n(z) = \prod_{j=0}^{n-1} J_{T^jW_\xi}T(T^jz)$ and for brevity let $g_n = J_{W_\xi}T^n$. The limit of $g_n(x)/g_n(y)$ exists if the limit of $\log (g_n(x)/g_n(y))$ exists. 
Now for $n, k \ge 1$, we may estimate using and , $$\left| \log \frac{g_n(x)}{g_n(y)} - \log \frac{g_{n+k}(x)}{g_{n+k}(y)} \right| = \log \frac{J_{T^nW_\xi}T^k(T^nx)}{J_{T^nW_\xi}T^k(T^ny)} \le C_d d_{T^nW_\xi}(T^nx, T^ny) \le C_d C_e^{-1} \Lambda^{-n} d_{W_\xi}(x,y) \, ,$$ so that the sequence $\log (g_n(x)/g_n(y))$ is Cauchy and therefore converges. Thus the limit defining $g_\xi$ exists. A similar estimate shows that $g_n$ is log-Lipschitz with Lipschitz constant at most $C_d$, bounded independently of $n$, and so this bound carries over to $g_\xi$. [^9]: Once a Cantor rectangle of some size is constructed around ${\mu_{\tiny{\mbox{SRB}}}}$-almost-every $x \in M$, the existence of such a finite family for any fixed length scale $\delta_1$ follows from the compactness of the set of stable (and also unstable) curves of length $\ge \delta_1/3$ in the Hausdorff metric, as in [@chernov; @book Lemma 7.87]. [^10]: Which is a slight modification of the Brin-Katok local entropy theorem [@brin], applying [@mane Lemma 2]. See also [@max Corollary 7.17].
--- author: - 'David Emmanuel-Costa,' - 'Edison T. Franco' - and Ricardo González Felipe title: '$\mathsf{SU(5)\times SU(5)}$ unification revisited' --- Introduction {#sec:Introduction} ============ On the quest for the theory beyond the Standard Model (SM), supersymmetric grand unified theories (SUSY GUTs) have revealed many attractive features which can address some of the questions left unanswered by the SM. This idea is supported by the unification of the gauge couplings that occurs, through renormalization group evolution, at a scale around $10^{16}$ GeV in the minimal supersymmetric standard model (MSSM). In the latter case, the SUSY threshold is set in the TeV region. Since the appearance of the simplest GUT models proposed in 1974 by Georgi and Glashow, based on the gauge group $\mathsf{SU(5)}$ [@Georgi:1974sy], the search for gauge groups compatible with a unification scheme has been actively pursued in the literature [@Georgi:1975qb; @Langacker:1980js; @Raby:2008gh]. Yet the unification and breaking patterns are far from being established. The low-energy supersymmetric $\mathsf{SU(5)}$ version [@Sakai:1981gr] has been quoted as an excellent unification theory, since in this model gauge couplings unify very precisely at one-loop level without the need of new particles. Moreover, at two-loop [@Barger:1992ac] and three-loop [@Martens:2010nm] levels, gauge unification can also be achieved if threshold effects are taken into account. Besides being successful in unifying gauge couplings, GUTs should also address other theoretical challenges. The proton should live long enough [@Buras:1977yy; @Langacker:1980js; @Sakai:1981pk; @Hayato:1999az; @Nath:2006ut]. This requirement usually leads to the well-known doublet-triplet splitting problem, i.e. the $\mathsf{SU(2)}_{L}$ doublet and the $\mathsf{SU(3)}_{C}$ colour triplet belonging to the same multiplet must have a strong mass hierarchy. 
In other words, the parameters in the Higgs potential responsible for the doublet and triplet masses must be highly fine-tuned. Going beyond the simplest $\mathsf{SU(5)}$ unification, it is also conceivable that the unification group has a semi-simple structure, as in the original left-right symmetric Pati-Salam model [@Pati:1973rp; @Pati:1974yy]. In this direction, the SUSY left-right $\mathsf{SU(5)}\times \mathsf{SU(5)}$ model [@Davidson:1987mi; @Dine:2002se] has many attractive features that are absent in minimal realizations of the $\mathsf{SU(5)}$ theory. Indeed, R-parity can be automatically conserved, proton decay is suppressed because heavy and light fermions do not mix, the doublet-triplet splitting problem is alleviated [@Barr:1996kp; @Dine:2002se], a generalized seesaw mechanism for fermion masses can be easily incorporated, and nonvanishing neutrino masses are naturally explained. Furthermore, $\mathsf{SU(5)}\times \mathsf{SU(5)}$ theories can be easily embedded in superstring constructions [@Mohapatra:1996fu; @Mohapatra:1996iy] which aim at unifying gravity with electroweak and strong forces. As far as unification is concerned, it is worth noticing that the same discrete permutation symmetry that guarantees the left-right nature of $\mathsf{SU(5)}\times \mathsf{SU(5)}$ (i.e. the one-to-one correspondence between left and right matter field representations) also leads to the unification of gauge couplings into a single constant. If one assumes that the $\mathsf{SU(5)}\times \mathsf{SU(5)}$ group breaks directly to the SM gauge group $\mathsf{SU(3)}_C \times \mathsf{SU(2)}_L \times \mathsf{U(1)}_Y$ at the unification scale $\Lambda$, then the three SM gauge couplings $g_a \, (a=s,w,y)$ merge into a single value, $$\label{unifcond} \alpha_U=k_3\,\alpha_s =k_2\,\alpha_w =k_1\,\alpha_y\,,$$ where $\alpha_a = g_a^2/(4\pi)$. 
The coefficients $k_i$ are group factors, $k_i = (\text{Tr}\, T^2_i) / (\text{Tr}\, T^2)$, $(i=1,2,3)$, where $T$ and $T_i$ are generators of the GUT group properly normalized over the full group and its SM subgroup $G_i$, respectively. For $\mathsf{SU(5)}\times \mathsf{SU(5)}$ one obtains the non-canonical values $k_1=13/3, k_2 =1$ and $k_3=2$. The corresponding weak mixing angle at the unification scale is given by $$\sin ^{2}\theta _{W} = \frac{\alpha_y}{\alpha_y+\alpha_w}=\frac{1}{1+k_{1}/k_{2}}=\frac{3}{16}\,.$$ It is commonly believed that this value cannot be reconciled with measurements at the electroweak scale, since it is rather small and, in general, $\sin ^{2}\theta _{W}$ decreases from high to low energies [@Cho:1993jb; @Mohapatra:1996fu; @Mohapatra:1996iy]. Yet, if some appropriate representations are taken into account in the renormalization group evolution of the gauge couplings, this may not be the case. In particular, we shall show that the inclusion of the $(\overline{15},1)+(1,15)$ and their conjugate $(15,1)+(1,\overline{15})$ representations is sufficient to drive $\sin ^{2}\theta _{W}$ to the correct value. This is due to the fact that the $\mathsf{SU(2)}_{L}$ triplets contained in the $15$ and $\overline{15}$ representations of $\mathsf{SU(5)}_{L}$ strongly adjust the $\alpha _{w}$ coupling constant. It is also remarkable that the above representations play a crucial role in implementing the seesaw mechanism for neutrino masses. In this work we revive the idea of grand unification in the supersymmetric version of the left-right $\mathsf{SU(5)\times SU(5)}$ gauge group. Our aim is to demonstrate that, with the addition of a minimal particle content, it is possible not only to unify the SM gauge coupling constants into a single GUT value, but also to bring the theory into agreement with the electroweak observational data. The paper is organized as follows. In Sec.
\[sec:model\] we introduce the particle content of the model and discuss possible breaking patterns to the SM gauge group. We also briefly address the question of fermion masses in the context of the generalized seesaw. The unification of gauge couplings at one-loop and two-loop levels is studied in  Sec. \[sec:gut\] and a general numerical analysis is presented in Sec. \[sec:num\]. Finally, our concluding remarks are given in Sec. \[sec:Conclusions\]. The model {#sec:model} ========= The supersymmetric left-right $\mathsf{SU(5)\times SU(5)}$ gauge group contains two copies per generation of the usual SUSY $\mathsf{SU(5)}$ theory. In the left-handed picture, the $(\overline{5}+10,1)$ fermion representations, denoted by $\psi$ and $\chi$, are given by $$\label{eq:ferm} \psi =\left[ \begin{array}{c} D_{1}^{c} \\ D_{2}^{c} \\ D_{3}^{c} \\ e \\ -\nu \end{array} \right] \sim (\overline{5},1),\quad \chi =\frac{1}{\sqrt{2}}\left[ \begin{array}{ccccc} 0 & U_{3}^{c} & -U_{2}^{c} & -u_{1} & -d_{1} \\ -U_{3}^{c} & 0 & U_{1}^{c} & -u_{2} & -d_{2} \\ U_{2}^{c} & -U_{1}^{c} & 0 & -u_{3} & -d_{3} \\ u_{1} & u_{2} & u_{3} & 0 & -E^{c} \\ d_{1} & d_{2} & d_{3} & E^{c} & 0 \end{array} \right] \sim (10,1),$$ while the $(1,5+\overline{10})$ fields, represented by $\psi^{c}$ and $\chi^{c}$, are $$\label{eq:fermconj} \psi ^{c}=\left[ \begin{array}{c} D_{1} \\ D_{2} \\ D_{3} \\ e^{c} \\ -\nu ^{c} \end{array} \right] \sim (1,5),\quad \chi ^{c}=\frac{1}{\sqrt{2}}\left[ \begin{array}{ccccc} 0 & U_{3} & -U_{2} & -u_{1}^{c} & -d_{1}^{c} \\ -U_{3} & 0 & U_{1} & -u_{2}^{c} & -d_{2}^{c} \\ U_{2} & -U_{1} & 0 & -u_{3}^{c} & -d_{3}^{c} \\ u_{1}^{c} & u_{2}^{c} & u_{3}^{c} & 0 & -E \\ d_{1}^{c} & d_{2}^{c} & d_{3}^{c} & E & 0 \end{array} \right] \sim (1,\overline{10}).$$ The multiplets of Eqs. (\[eq:ferm\]) and (\[eq:fermconj\]) have extra fermions beyond those present in the SM: the vector-like fermions ($U$,$U^c$,$D$,$D^c$,$E$,$E^c$) and the well-motivated right-handed neutrino, $\nu^{c}$. 
There is no vector-like analog of the neutrino. To discuss the breaking scheme to the SM gauge group, one needs to specify the Higgs content. Among the different possibilities, here we consider the following pattern: $$\label{eq:pattern} \begin{array}{c} \mathsf{SU(5)}_{L}\times \mathsf{SU(5)}_{R} \\ \downarrow \Lambda \\ \mathsf{SU(3)}_{L}\times \mathsf{SU(2)}_{L}\times \mathsf{U(1)}_{L}\times \mathsf{SU(3)}_{R} \times \mathsf{SU(2)}_{R}\times \mathsf{U(1)}_{R} \\ \downarrow \Lambda_{LR} \\ \mathsf{SU(3)}_{C} \times \mathsf{SU(2)}_{L}\times \mathsf{SU(2)}_{R}\times \mathsf{U(1)}_{B-L}\\ \downarrow v_{R} \\ \mathsf{SU(3)}_{C}\times \mathsf{SU(2)}_{L}\times \mathsf{U(1)}_{Y}\\ \downarrow v_{L} \\ \mathsf{SU(3)}_{C}\times \mathsf{U(1)}_{em}\,. \end{array}$$ We identify $\mathsf{SU(3)}_{C}$ with the $\mathsf{SU(3)}_{L+R}$ diagonal subgroup and $\mathsf{U(1)}_{B-L}$ with $\mathsf{U(1)}_{L+R}\,$. The breaking energy scales $\Lambda, \Lambda_{LR}$ and $v_{R}$ are determined by the Higgs content of the model. In this implementation, we need the adjoint representations of both $\mathsf{SU(5)}$ subgroups. We introduce $\Phi _{L}\sim (24,1)$ and $\Phi _{R}\sim (1,24)$, which accomplish the first breaking of $\mathsf{SU(5)_{L}\times SU(5)_{R}}$ at the scale $\Lambda$ but preserve the discrete left-right symmetry. To achieve the left-right symmetry breaking at the scale $\Lambda_{LR}$, the Higgs fields $\omega\sim (5,\overline{5})$, $\overline{\omega}\sim (\overline{5},5)$, $\Omega \sim (10,\overline{10})$ and $\overline{\Omega}\sim (\overline{10},10)$ are introduced[^1]. The last two steps in the pattern  are driven by the additional Higgs fields $\phi_R\sim (1,\bar{5})$, $\phi_R^c\sim (1,5)$ and $\phi_L\sim (5,1)$, $\phi_L^c \sim (\bar{5},1)$, respectively. 
Finally, as mentioned in the Introduction, the representations $T_{L} \sim (15,1)$, $T_{L}^c \sim (\overline{15},1)$, $T_{R} \sim (1,\overline{15})$ and $T_{R}^c \sim (1,15)$ turn out to be crucial for unification and are responsible for the Majorana masses of neutrinos. One of the attractive features of the $\mathsf{SU(5)\times SU(5)}$ theory is the possibility of a generalized seesaw mechanism to give masses to all SM fermions through the heavy vector-like fermions [@Cho:1993jb]. The Yukawa contribution to the superpotential reads as $$\begin{aligned} W_Y=\,\psi^{c} Y_{1}\omega\psi + \chi^{c} Y_{2}\Omega\chi + \sqrt{2}\psi Y_{3}\chi \phi_{L}^c + \sqrt{2}\psi^{c}Y_{3}\chi^{c}\phi_{R}^c + \frac{1}{4}\chi Y_{4}\chi \phi_{L}+ \frac{1}{4}\chi^{c}Y_{4}\chi^{c}\phi_{R}\,,\end{aligned}$$ where $Y_i$ denote the Yukawa coupling matrices. We choose the breaking directions as $\left\langle \omega \right\rangle_{k}^{k}=\left\langle \Omega \right\rangle _{12}^{12}=\left\langle \Omega \right\rangle_{23}^{23}=\left\langle \Omega \right\rangle_{31}^{31}=\left\langle \Omega \right\rangle_{45}^{45}=\Lambda_{LR}$, $k=1,2,3$ and $\left\langle \phi_{L,R}\right\rangle =(0,0,0,0,v_{u\,L,R})^{T}$, $\left\langle \phi_{L,R}^c\right\rangle =(0,0,0,0,v_{d\,L,R})^{T}$, with $v_{L,R}^2 = v_{u\,L,R}^2+v_{d\,L,R}^2\,$. The final mass contribution to all charged fermions can then be written as $$\begin{aligned} \label{eq:genseesaw} \begin{split} -\mathcal{L}_{m}=& \begin{pmatrix} u & U \end{pmatrix} \begin{pmatrix} 0 & Y_{4}v_{u\,L} \\ Y_{4}v_{u\,R} & \,-Y_{2}\Lambda_{LR} \end{pmatrix} \begin{pmatrix} u^{c} \\ U^{c} \end{pmatrix}+ \begin{pmatrix} d & D \end{pmatrix} \begin{pmatrix} 0 & Y_{3}v_{d\,L} \\ Y_{3}^{\mathsf{T}}v_{d\,R} & \,-Y_{1}\Lambda_{LR} \end{pmatrix} \begin{pmatrix} d^{c} \\ D^{c} \end{pmatrix}\,+\\ &\begin{pmatrix} e & E \end{pmatrix} \begin{pmatrix} 0 & Y_{3}^{\mathsf{T}}v_{d\,L} \\ Y_{3}v_{d\,R} & \,-Y_{2}\Lambda_{LR} \end{pmatrix} \begin{pmatrix} e^{c} \\ E^{c} \end{pmatrix}. 
\end{split}\end{aligned}$$ By means of the above procedure a generalized type-I seesaw mechanism can be implemented for all light quarks and charged leptons, provided that the vector-like fermion masses, which are proportional to the $\Lambda_{LR}$ scale, are heavy enough and $v_Lv_R\ll\Lambda_{LR}^2$. As it turns out, heavy vector-like fermion masses are also required for a successful unification of gauge couplings. For the sake of simplicity, we shall assume that the breaking pattern  to the SM gauge group occurs at a unique energy scale, *i.e.* $v_R\approx\Lambda_{LR}\approx\Lambda$. In this case the fermion mass spectrum has the approximate seesaw form $m_f=\mathcal{O}(y_fv_L)$ and $M_V=\mathcal{O}(y_V\Lambda_{LR})$, for light and heavy fermions, respectively. The precise realization of this generalized seesaw for fermions is beyond the scope of this work. It is our aim, instead, to discuss in detail how gauge couplings unify in this theory. For the neutrino sector, the relevant terms in the superpotential are $$W_{N}=\sqrt{2}\,Y_{5}\,(\psi \psi T_{L}+\psi^{c}\psi ^{c}T_{R}),$$ if one assumes R-parity conservation. Then, introducing two additional supermultiplets, $(5,\overline{5})$ and $(\overline{5},5)$, with vacuum alignment in the lepton doublet direction, light neutrinos would acquire masses through the conventional (type-I and/or type-II) seesaw mechanisms. It is worth noticing that, in the absence of the Higgs multiplets $\phi_R$, $\phi_R^c$, $\phi_L$ and $\phi_L^c$, R-parity is automatically conserved[^2] [@Mohapatra:1996iy]. In the latter case, quark and charged lepton masses would arise from higher dimension operators instead of the generalized seesaw Lagrangian terms given in Eq. . 
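The seesaw suppression encoded in the mass matrices above can be checked numerically on a one-flavour toy version. The sketch below uses purely illustrative Yukawa couplings and scales (chosen well below the GUT scale so that the numerics stay well conditioned), not values fitted in this work; it verifies that the light singular value of a matrix of the form $\begin{pmatrix}0 & y v_L\\ y v_R & -y_2\Lambda_{LR}\end{pmatrix}$ agrees with the seesaw estimate $y^2 v_L v_R/(y_2\Lambda_{LR})$ when $v_L v_R \ll \Lambda_{LR}^2$:

```python
import numpy as np

# Toy one-flavour generalized-seesaw mass matrix; all numbers are
# illustrative assumptions, not parameters of the model in the text.
y, y2 = 1.0, 1.0
v_L = 174.0          # electroweak-scale VEV (GeV)
v_R = 1.0e8          # right-handed breaking VEV (GeV), toy value
Lambda_LR = 1.0e11   # vector-like fermion mass scale (GeV), toy value

M = np.array([[0.0,      y * v_L],
              [y * v_R, -y2 * Lambda_LR]])

# Physical masses are the singular values (descending order)
heavy, light = np.linalg.svd(M, compute_uv=False)
seesaw_estimate = y**2 * v_L * v_R / (y2 * Lambda_LR)
```

Here `light` reproduces `seesaw_estimate` to well below a percent, while `heavy` sits at the vector-like scale $\Lambda_{LR}$. In the regime actually assumed in the text, $v_R\approx\Lambda_{LR}$, the same matrix instead gives a light mass of order $y\,v_L$, consistent with $m_f=\mathcal{O}(y_f v_L)$.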
Gauge coupling unification {#sec:gut} ========================== The two-loop renormalization group equations (RGE) for the gauge coupling constants $\alpha_i \, (i=1,2,3)$ can be written in the form $$\label{eq:rge} \frac{d}{dt}\alpha^{-1}_i=-\frac{b_i}{2\pi k_i}- \frac{1}{8\pi^2} \sum_{j} \frac{b_{ij}\, \alpha_j}{k_i k_j}\,-\,\frac{1}{32\pi^3 k_i}\sum_{f=u,d,e}C_{if}\, \text{Tr} \left(Y_f^{\dagger}Y_f\right)\,,$$ where $\alpha_1 = k_1\,\alpha_y,\, \alpha_2 = k_2\,\alpha_w$ and $\alpha_3 = k_3\,\alpha_s$; $b_i$ are the usual one-loop beta coefficients; $b_{ij}$ and $C_{if}$ are the two-loop beta coefficients (see Appendix \[a1:beta\]). The quantities $Y_f$ denote the quark and lepton Yukawa coupling matrices. At the unification scale $\Lambda$, the gauge couplings $\alpha_i$ obey the relation $\alpha_U = \alpha_1 = \alpha_2= \alpha_3$ (cf. Eq. ). To get some insight into the unification in the one-loop approximation, let us define the effective beta coefficients $B_i$ [@Giveon:1991zm], $$B_i\equiv\frac{1}{k_i}\left(b_i+\sum_I b_i^I\,r_I\right),$$ where $$r_I = \frac{\ln\left(\Lambda/M_I\right)}{\ln\left(\Lambda/M_Z\right)}\,.$$ In the above expression, $M_I$ denotes an intermediate energy scale between the electroweak scale $M_Z$ and the GUT scale $\Lambda$, and the coefficients $b_{i}^I$ account for the new contribution to the one-loop beta functions $b_{i}$ above the threshold $M_I$. 
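At one loop the RGE above integrates analytically, $\alpha_i^{-1}(\Lambda)=\alpha_i^{-1}(M_Z)-\frac{b_i}{2\pi k_i}\ln(\Lambda/M_Z)$. As a baseline check of this machinery, the sketch below runs the pure MSSM coefficients from $M_Z$ upward (no SUSY threshold, no two-loop terms; $M_Z=91.19$ GeV is an assumed input) and reproduces the well-known near-meeting of the three couplings at $\Lambda\approx 2\times10^{16}$ GeV for the canonical normalization $k_i=(5/3,1,1)$:

```python
import math

# Electroweak-scale inputs quoted in the text (PDG central values)
alpha_inv, sin2_tw, alpha_s = 127.916, 0.23116, 0.1184
MZ, Lam = 91.19, 2.0e16   # GeV; the MZ value is an assumption here

# alpha_1 = k1*alpha_y, alpha_2 = k2*alpha_w, alpha_3 = k3*alpha_s,
# with canonical k = (5/3, 1, 1) and MSSM one-loop b = (11, 1, -3)
k = (5.0 / 3.0, 1.0, 1.0)
b = (11.0, 1.0, -3.0)
inv_mz = [alpha_inv * (1.0 - sin2_tw) / k[0],   # alpha_1^{-1}(MZ)
          alpha_inv * sin2_tw / k[1],           # alpha_2^{-1}(MZ)
          (1.0 / alpha_s) / k[2]]               # alpha_3^{-1}(MZ)

t = math.log(Lam / MZ)
inv_gut = [inv_mz[i] - b[i] * t / (2.0 * math.pi * k[i]) for i in range(3)]
print(inv_gut)   # roughly 24.2-24.3 each
```

The three values agree to within a fraction of a percent, which is the statement that the $B$-test below is (approximately) passed by the MSSM.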
It is also convenient to introduce the differences $B_{ij}\equiv B_i-B_j$, such that $$B_{ij}= B^{\text{SM}}_{ij}+\sum_I\Delta^I_{ij}r_I\,,$$ where $B^{\text{SM}}_{ij}$ corresponds to the SM particle contribution and $$\Delta^I_{ij}= \frac{b^I_i}{k_i}-\frac{b^I_j}{k_j}\,.$$ The following $B$-test is then obtained, $$\label{eq:Btest} B\equiv\frac{B_{23}}{B_{12}}=\frac{\sin^2\theta_W-\dfrac{k_2}{k_3}\dfrac{\alpha} {\alpha_s}} {\dfrac{k_2}{k_1}-\left(1+\dfrac{k_2}{k_1}\right)\sin^2\theta_W}\,,$$ together with the GUT scale relation $$\label{eq:Ltest} B_{12}\, \ln \left(\frac{\Lambda}{M_Z}\right)= \frac{2\pi}{\alpha}\left[\frac{ 1}{k_1}-\left(\frac{1}{k_1}+\frac{1}{k_2} \right)\sin^2\theta_W\right ].$$ Notice that the right-hand sides of Eqs.  and  depend only on low-energy electroweak data and the group factors $k_i$. Adopting the following experimental values at $M_Z$ [@Nakamura:2010zzi] $$\begin{aligned} \alpha^{-1}&=127.916\pm0.015\,, \\ \sin^2\theta_W&=0.23116\pm0.00013\,, \\ \alpha_s&=0.1184\pm0.0007\,,\end{aligned}$$ the above relations read as $$\begin{aligned} \label{eq:Btestexp} \begin{split} B&=0.718\pm0.003\,, \\ B_{12}\,\ln\left(\frac{\Lambda}{M_Z}\right)&=185.0\pm0.2\,, \end{split}\end{aligned}$$ in the canonical GUT models with $k_i=(5/3,1,1)$, *e.g.* in $\mathsf{SU(5)}$ and $\mathsf{SO(10)}$. On the other hand, for the $\mathsf{SU(5)\times SU(5)}$ model where $k_i=(13/3,1,2)$ one obtains $$\begin{aligned} \label{eq:Btestexp2} B&=-3.687\pm0.012\,, \\ B_{12}\,\ln\left(\frac{\Lambda}{M_Z}\right)&=-43.19\pm0.13\,. \end{aligned}$$ The coefficients $B_{ij}$ that appear in the left-hand sides of Eqs.  and  strongly depend on the particle content of the theory. 
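The right-hand sides of the $B$-test and the GUT-scale relation can be evaluated directly from the quoted low-energy data; nothing model-specific enters beyond the group factors $k_i$. The following check reproduces the numerical values given above for both the canonical and the $\mathsf{SU(5)\times SU(5)}$ normalizations:

```python
import math

# PDG inputs at M_Z, as quoted in the text
alpha_inv, sin2_tw, alpha_s = 127.916, 0.23116, 0.1184

def b_test(k1, k2, k3):
    """B and B_12*ln(Lambda/M_Z) from low-energy data and group factors."""
    alpha = 1.0 / alpha_inv
    B = (sin2_tw - (k2 / k3) * (alpha / alpha_s)) / \
        (k2 / k1 - (1.0 + k2 / k1) * sin2_tw)
    B12_lnL = (2.0 * math.pi / alpha) * \
        (1.0 / k1 - (1.0 / k1 + 1.0 / k2) * sin2_tw)
    return B, B12_lnL

B_can, L_can = b_test(5.0 / 3.0, 1.0, 1.0)   # canonical SU(5), SO(10)
B_lr,  L_lr  = b_test(13.0 / 3.0, 1.0, 2.0)  # SU(5) x SU(5)
print(B_can, L_can)   # ~0.718 and ~185.0
print(B_lr, L_lr)     # ~-3.687 and ~-43.19
```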
For instance, considering the SM particles with $n_H$ light Higgs doublets, one has $b_1=20/3+n_H/6$, $b_2=-10/3+n_H/6$ and $b_3=-7$, so that these coefficients are given by $$\label{eq:beffSM} B_{12}=\frac{22}{3}-\frac{n_H}{15}\,,\quad B_{23}=\frac{11}{3}+ \frac{n_H}{6}\,.$$ In the supersymmetric case they become $$\label{eq:beffMSSM} B_{12}=\frac{22}{3}-\frac{n_H}{15}-\left(\frac43+\frac{2\,n_H}{15}\right)r_S\,, \quad B_{23}=\frac{11}{3}+ \frac{n_H}{6}+\left(-\frac23+\frac{n_H}{3}\right)r_S\,,$$ with the “running weight” $r_S\simeq0.93$, for a low SUSY threshold $M_S \simeq 1$ TeV and a unification scale $\Lambda \simeq 10^{16}$ GeV. It is interesting to notice that Eqs.  and  together with the constraint allow one to determine the number of light Higgs doublets that would be required for unification in the canonical GUT models, $$n_H=110\left(\frac{2B-1}{2B+5}\right)\approx7\quad\text{(SM)}\,,$$ $$\label{eq:BtMSSM} n_{H}=10\left(\frac{11-2r_{s}}{1+2r_{s}}\right)\left(\frac{2B-1}{2B+5} \right) \approx2\quad\text { (MSSM)}\,.$$ Clearly, the $B$-test fails badly in the SM case, which possesses only one Higgs doublet, while Eq.  just corroborates the fact that the gauge couplings in the MSSM seemingly unify at one-loop level. Were only the MSSM particle content taken into account, the $B$-test would also fail badly in the SUSY $\mathsf{SU(5)\times SU(5)}$ case. Indeed, in such a case $B\approx1.625$, which is far above the required value given in Eq. ; hence the need for extra particles with suitable $B_{ij}$ coefficients. In Table \[tab1\] we present the relevant contributions $\Delta_{ij}$ to the $B_{ij}$ coefficients of the SUSY $\mathsf{SU(5)\times SU(5)}$ model which include, besides the MSSM threshold, the triplet $\Sigma_3$ and octet $\Sigma_8$ belonging to the $(24,1)$ representation, the triplets $T,T^c$ in the $(15,1)+(\overline{15},1)$ representation as well as the exotic vector-like chiral multiplets $U,U^c$, $D,D^c$ and $E,E^c$.
MSSM $\Sigma_3$ $\Sigma_8$ $T$ $U$ $D$ $E$ --------------- --------- ------------ ------------ -------- ------- ------ ------- $\Delta_{12}$ -125/39 -2 0 -34/13 24/13 6/13 18/13 $\Delta_{23}$ 13/6 2 -3/2 4 -3/2 -3/2 0 : \[tab1\] The $\Delta_{ij}$ contributions to the $B_{ij}$ coefficients in the $\mathsf{SU(5)\times SU(5)}$ case. The SM contributions to the coefficients are $B^{\text{SM}}_{12}=185/39$ and $B^{\text{SM}}_{23}=1/3$. ![\[fig1\] The gauge coupling running at one-loop level for the canonical $\mathsf{SU(5)}$ MSSM (dashed lines) and the $\mathsf{SU(5)\times SU(5)}$ theory (solid lines), assuming the same unification scale, $\Lambda\simeq 2\times 10^{16}$ GeV. The SUSY scale is fixed at $M_S=1$ TeV. Notice that for the non-canonical case one needs $\Sigma_3$ and $\Sigma_8$ close to $M_{\Sigma}=10$ TeV and the triplets $T,T^c$ at a higher scale near $M_T=10^9$ GeV.](fig1.eps){width="12cm"} ![\[fig2\] The evolution of $\sin^2\theta_W$ at one-loop level for the canonical $\mathsf{SU(5)}$ MSSM (dashed line) and the $\mathsf{SU(5)\times SU(5)}$ theory (solid line), assuming $\Lambda\simeq 2 \times 10^{16}$ GeV, $M_S=1$ TeV, $M_{\Sigma}=10$ TeV and $M_T=10^9$ GeV.](fig2.eps){width="11cm"} Since Eqs.  require $B_{12}<0$ and $B_{23}>0$, it becomes clear from Table \[tab1\] that $\Sigma_3$ and $T$ improve unification, while $U$, $D$ and $E$ act in the opposite manner and, therefore, should be heavy enough. For illustration, in Fig. \[fig1\] we plot the one-loop running of the gauge couplings for the SUSY $\mathsf{SU(5)}$ and $\mathsf{SU(5)\times SU(5)}$ theories, assuming a common unification scale, $\Lambda= 2 \times 10^{16}$ GeV. The SUSY threshold $M_S$ is chosen in both cases at $1$ TeV. For the $\mathsf{SU(5)\times SU(5)}$ case, we assume a common mass scale $M_{\Sigma}$ for $\Sigma_3$ and $\Sigma_8$, and for the vector-like particles $U,D,E$ we set their mass scale $M_V=\Lambda$.
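The qualitative picture of Fig. \[fig1\] can be sketched with a piecewise one-loop integration. The snippet below is an illustration only (no two-loop terms or threshold matching), built from the one-loop $b_i$ collected in Appendix \[a1:beta\] and the scales quoted for Fig. \[fig1\]: $M_S=1$ TeV, $M_\Sigma=10$ TeV, $M_T=10^9$ GeV and $M_V=\Lambda$, with an assumed $M_Z=91.19$ GeV:

```python
import math

# Low-energy inputs and the non-canonical normalization k = (13/3, 1, 2)
alpha_inv, sin2_tw, alpha_s, MZ = 127.916, 0.23116, 0.1184, 91.19
k = (13.0 / 3.0, 1.0, 2.0)
inv = [alpha_inv * (1.0 - sin2_tw) / k[0],   # alpha_1^{-1}(MZ)
       alpha_inv * sin2_tw / k[1],           # alpha_2^{-1}(MZ)
       (1.0 / alpha_s) / k[2]]               # alpha_3^{-1}(MZ)

# One-loop (b_1, b_2, b_3) per segment: SM below M_S, MSSM above,
# plus Sigma_3 + Sigma_8 above M_Sigma, plus T, T^c above M_T;
# the vector-like fermions U, D, E are kept at M_V = Lambda.
segments = [
    ((41/6, -19/6, -7.0), MZ,    1.0e3),    # SM
    ((11.0,  1.0,  -3.0), 1.0e3, 1.0e4),    # MSSM
    ((11.0,  3.0,   0.0), 1.0e4, 1.0e9),    # + Sigma_3, Sigma_8
    ((17.0,  7.0,   0.0), 1.0e9, 2.0e16),   # + T, T^c
]

for b, mu_lo, mu_hi in segments:
    t = math.log(mu_hi / mu_lo)
    for i in range(3):
        inv[i] -= b[i] * t / (2.0 * math.pi * k[i])

print(inv)   # all three close to 6.0-6.2
```

With these thresholds the three couplings draw together near $\Lambda=2\times10^{16}$ GeV at $\alpha_U^{-1}\approx6$, i.e. well below the value $\alpha_U^{-1}\geq4\pi$ that the heterotic string constraint discussed in Sec. \[sec:num\] would require.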
The one-loop unification then demands $M_{\Sigma}\simeq10$ TeV and the triplets $T,T^c$ to have a mass $M_T\simeq10^9$ GeV. The evolution of $\sin^2\theta_W$ at one-loop level is given in Fig. \[fig2\]. As anticipated in the Introduction, adding the appropriate $\mathsf{SU(5)\times SU(5)}$ representations is essential for driving the running of $\sin^2\theta_W$ from the low value $3/16$ at the GUT scale to its correct value at the electroweak scale. One may wonder whether two-loop effects significantly modify the above picture. The example presented in Fig. \[fig3\] shows that, although the values of the gauge couplings as a function of the energy scale $\mu$ are essentially unchanged, the two-loop effects tend to increase both the $M_\Sigma$ and $M_T$ scales. In the next section we shall perform a two-loop numerical analysis in order to determine the full range of the relevant intermediate mass scales. ![\[fig3\] Comparison of the $\mathsf{SU(5)\times SU(5)}$ running of gauge couplings at one-loop level (dashed lines) and two-loop level (solid lines). For a fixed $M_S=1$ TeV and the same unification scale $\Lambda \simeq 2 \times 10^{16}$ GeV, two-loop effects increase the intermediate scales $M_\Sigma$ and $M_T$.](fig3.eps){width="12cm"} Numerical analysis {#sec:num} ================== In this section we present a general numerical analysis of the two-loop gauge coupling unification of the $\mathsf{SU(5)\times SU(5)}$ model sketched in Sec. \[sec:model\]. We adopt the $\overline{\rm{DR}}$ scheme, which is appropriate for the two-loop renormalization group evolution in supersymmetric models.
The measure of unification used here is given by the quantity $$\label{epsilon} \epsilon =\sqrt{(\alpha _{1\Lambda}^{-1}-\alpha _{2\Lambda}^{-1})^{2}+(\alpha _{1\Lambda}^{-1}-\alpha _{3\Lambda}^{-1})^{2}+(\alpha _{2\Lambda}^{-1}-\alpha _{3\Lambda}^{-1})^{2}}\,,$$ which measures the “distance” between the couplings $\alpha _{i\Lambda}^{-1}\equiv\alpha _{i}^{-1}(\Lambda)$ at the unification scale $\Lambda$. Alternatively, one could use the quantity [@Auto:2003ys] $$\label{Rquantity} R=\frac{\max(\alpha_{1\Lambda},\, \alpha_{2\Lambda},\, \alpha_{3\Lambda})}{\min(\alpha_{1\Lambda},\, \alpha_{2\Lambda},\, \alpha_{3\Lambda})}\,,$$ which measures the amount of non-unification between the largest and the smallest gauge coupling value at the scale $\Lambda$. We have verified that both quantities lead to similar unification constraints. In particular, requiring $\epsilon\lesssim 0.1$ would correspond to $R-1\lesssim0.07$. Solving for the one-loop RGE of gauge couplings in the MSSM, and assuming a SUSY threshold $M_{S}=1$ TeV, the measure $\epsilon$ attains its minimum value, $\epsilon \simeq 0.50$, for $\Lambda \simeq 1.44 \times 10^{16}$ GeV. On the other hand, at two-loop level, its minimum is $\epsilon\simeq 0.18$ for $\Lambda\simeq 1.38 \times 10^{16}$ GeV, so that two-loop effects significantly improve unification. Inspired by the MSSM results, in our study we choose values of $\epsilon \leq 0.1$ as the criterion for unification. One can then expect that threshold effects would be sufficient to yield a perfect unification. ![\[fig4\]Intermediate mass scales $M_V$, $M_T$ and $M_\Sigma$ as functions of the unification scale $\Lambda$ in the $\mathsf{SU(5)\times SU(5)}$ model. The delimited color regions correspond to solutions $\alpha _{i \Lambda}^{-1}\, (i=1,2,3)$ with a unification measure $\epsilon \leq 0.1$ at two loops.](fig4.eps){width="12cm"} We proceed to integrate numerically the two-loop RGEs in Eqs.  
from the electroweak scale $M_Z$ to a randomly chosen unification scale $\Lambda\gtrsim10^{14}$ GeV. The intermediate vector-like fermion mass scale $M_V$, and that of the triplet scalar, $M_T$, as well as the common scale $M_{\Sigma}$ for $\Sigma_3$ and $\Sigma_8$, are also randomly taken. The SUSY threshold scale is fixed at $M_{S}=1$ TeV. At two-loop level, the parameter space for the three relevant quantities, $M_{V}$, $M_{T}$, and $M_{\Sigma}$, is given as a function of the unification scale $\Lambda $ in Fig. \[fig4\]. We notice that every point corresponds to a different solution which has passed the criterion $\epsilon \leq 0.1$. As can be easily seen from the figure, the triplet mass scale $M_T$ can be close to the SUSY breaking mass scale $M_S$ for a low unification scale $\Lambda \simeq 10^{14}$ GeV. As $\Lambda$ increases, the value of $M_T$ also increases. We find $4.5\times10^3\,\text{GeV}\lesssim M_T\lesssim\,1.2\times10^{13}\,\text{GeV}$ for $10^{14}\,\text{GeV}\lesssim \Lambda\lesssim10^{18}\,\text{GeV}$. In contrast, the common mass scale $M_{\Sigma}$ decreases smoothly as $\Lambda$ increases and, for $\Lambda \sim 10^{18}$ GeV, can be as low as 1 TeV. The allowed mass range is $1.2\times10^3\,\text{GeV} \lesssim M_{\Sigma}\lesssim\,2.7\times10^{7}\,\text{GeV}$. We also note that, when $\Lambda \simeq 10^{14-15}$ GeV, both mass scales, $M_T$ and $M_{\Sigma}$, can be of the same order of magnitude. When compared to other intermediate states, vector-like fermions require a much higher mass scale. For $\Lambda \simeq 10^{14}$ GeV, we find the lower bound $M_V\gtrsim 3.2 \times 10^{10}$ GeV, while for $\Lambda \simeq 10^{17}$ GeV this bound gets more restrictive, $M_V\gtrsim10^{15}$ GeV. We have also verified how sensitive the results are with respect to the variation of the SUSY breaking mass scale. 
In fact, no significant changes occur, and varying the SUSY mass scale in the interval $M_S=1-100$ TeV leads only to a slight dispersion of $M_{\Sigma}$ towards lower values. Nor is any relevant modification observed for the parameters in Fig. \[fig4\] if one considers a splitting between the masses of the triplet $\Sigma_3$ and the octet $\Sigma_8$. Motivated by the rich scalar structure, we have also looked for solutions when two additional Higgs doublets are randomly inserted at some new threshold, $M_{H}$. The effects of the latter on the mass scales $M_{V}$, $M_{T}$ and $M_{\Sigma }$, given as a function of the unification scale $\Lambda$, are shown in Fig. \[fig5\]. While the inclusion of the two additional Higgs doublets does not significantly affect the parameter regions of $M_V$ and $M_{\Sigma}$, it is clear from Fig. \[fig5\] that the triplet mass scale $M_T$ is shifted to much higher values, bringing $M_T$ close to the vector-like fermion mass scale for $\Lambda\gtrsim10^{16}$ GeV. The allowed ranges for the relevant scales are now given by $1.4\times10^{10}\,\text{GeV} \lesssim M_{V}\lesssim\,9.7\times10^{17}\,\text{GeV}$, $9.5\times 10^5\,\text{GeV} \lesssim M_{T}\lesssim\,4.4\times10^{16}\,\text{GeV}$, and $3.9\times10^3\,\text{GeV} \lesssim M_{\Sigma}\lesssim\,1.4\times10^{8}\,\text{GeV}$, with $M_H$ varying in the interval $10^3 \,\text{GeV} \lesssim M_{H}\lesssim\,9.4\times10^{17}\,\text{GeV}$. ![\[fig5\] As in Fig. \[fig4\], but for the $\mathsf{SU(5)\times SU(5)}$ model with two additional Higgs doublets at an intermediate scale. Notice that the triplet mass scale $M_T$ is significantly larger, and reaches values of the order of the exotic fermion mass scale $M_V$ at a high unification scale.](fig5.eps){width="12cm"} From the above results it becomes clear that the unification scale $\Lambda$ can reach and even exceed the perturbative string scale, $\Lambda_{s} \simeq 5.27\times 10^{17}$ GeV [@Kaplunovsky:1987rp; @Kaplunovsky:1992vs].
It is well known that $\mathsf{SU(5)\times SU(5)}$ theories can be embedded in the heterotic string context [@Gross:1984dd; @Gross:1985fr; @Gross:1985rr; @Barbieri:1994jq; @Maslikov:1996gn; @Kakushadze:1997ne]. Furthermore, in a minimal string-scale unification setup with vector-like fermions, it is conceivable to have unification of gauge couplings and gravity at the weakly coupled heterotic string scale [@EmmanuelCosta:2005nh]. We may ask ourselves whether it is possible to achieve such a unification in the $\mathsf{SU(5)\times SU(5)}$ framework under consideration. In the heterotic string scenario, an additional constraint on the gauge couplings must be verified at the string scale $\Lambda_s$, $$\begin{aligned} \label{eq:astring} \alpha_U=\alpha_\text{string}=\frac1{4\pi} \left(\frac{\Lambda}{\Lambda_{s}}\right)^{2}.\end{aligned}$$ Requiring $\Lambda\leq\Lambda_s$ in order to be in the perturbative regime, the constraint in Eq.  clearly implies a lower bound on the unified gauge coupling, namely, $\alpha_{U}^{-1}\geq 4\pi$. In Fig. \[fig6\], we present the upper values of $\alpha_{U}^{-1} \simeq \alpha_{2 \Lambda}^{-1}$ as a function of the unification scale $\Lambda$, together with the corresponding values of $\alpha_\text{string}^{-1}$. We conclude that string unification cannot be achieved, since $\alpha_{U}^{-1}$ is very small compared to the required value of $\alpha_\text{string}^{-1}$. This conclusion also remains valid when two additional Higgs doublets are included at an intermediate energy scale. ![\[fig6\] Upper values of $\alpha_{U}^{-1}$ at two-loop level in the $\mathsf{SU(5)\times SU(5)}$ model. 
The solid line corresponds to the $\mathsf{SU(5)\times SU(5)}$ model, assuming only two light Higgs doublets, while the dashed line corresponds to the case where two extra Higgs doublets are introduced at some intermediate scale.](fig6.eps){width="10cm"} Conclusions {#sec:Conclusions} =========== We have investigated the possibility of achieving unification of the SM gauge couplings in the context of a SUSY $\mathsf{SU(5)}_{L}\times \mathsf{SU(5)}_{R}$ GUT. For a successful gauge coupling unification, the inclusion of $(\overline{15},1)+(1,15)$ and their conjugates $(15,1)+(1,\overline{15})$ at an intermediate scale $M_T$ was essential to drive $\sin^2\theta_W$ to the correct value at the electroweak scale. From the two-loop numerical analysis, we have found that the intermediate mass scales $M_T$, $M_{\Sigma}$ for $\Sigma_3$, $\Sigma_8$ and $M_V$ for the vector-like fermions must be properly chosen to guarantee unification at the required level. As can be clearly seen from Figs. \[fig4\] and \[fig5\], there is a wide region allowed for these mass scales. Models based on $\mathsf{SU(5)}_{L}\times \mathsf{SU(5)}_{R}$ unification encompass many attractive features. Compared with the standard $\mathsf{SU(5)}$ GUT, proton decay via dimension-six operators through heavy lepto-quark gauge bosons is suppressed, since at tree level the latter do not mediate transitions involving only light fermions. On the other hand, the presence of the color Higgs triplets $H_C^L$ and $H_C^R$, contained in the chiral super-quintets $\phi_L$, $\phi_L^c$, $\phi_R$ and $\phi_R^c$, may induce proton decay through dimension-five operators. Indeed, proton decay arises at lowest order from the operators $\chi\chi\chi\psi$ and $\chi^c\chi^c\chi^c\psi^c$, which lead to the effective operators $QQQL$ with coefficients proportional to $Y_3Y_4/M_{H_C}$, for both left and right light matter fields.
This requires the mass scales of the left and right color Higgs triplets to be heavy enough, thus constraining the unification scale [@Nath:2006ut]. In the absence of the fields $\phi_{L,R}$ and $\phi_{L,R}^c$, not only is the proton stable at the renormalizable level, but R-parity is also automatically conserved [@Mohapatra:1996iy]. R-parity invariance is an appealing feature in SUSY theories, since the lightest supersymmetric particle is absolutely stable, thus providing a natural cold dark matter candidate. Finally, we have shown that, in the minimal $\mathsf{SU(5)}_{L}\times \mathsf{SU(5)}_{R}$ setup considered, it is not possible to achieve the unification of the gauge couplings with the gravitational coupling at the perturbative heterotic string scale. It would be interesting to investigate whether the inclusion of additional representations could help bring the four couplings into agreement. Acknowledgements {#acknowledgements .unnumbered} ================ This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT, Portugal) through the Grant No. CERN/FP/109305/2009 and the project CFTP-FCT UNIT 777, which are partially funded through POCTI (FEDER). The work of D.E.C. was also supported by Accion Complementaria Luso-Espanhola FCT and MICINN with project number 20NSML3700. E.T.F. was partially supported by CNPq through the Grant No. 150416/2011-3 and by the EU project MRTN-CT-2006-035505. One-loop and two-loop beta coefficients {#a1:beta} ======================================= In this appendix we collect the $\beta$-function coefficients for the relevant particle content of the $\mathsf{SU(5)}\times\mathsf{SU(5)}$ theory.
Below the SUSY threshold $M_S$, the $\beta$-function coefficients are those of the SM: $$\begin{aligned} b_i=\begin{pmatrix} 41/6 & -19/6 & -7\end{pmatrix}\,,\quad b_{ij}=\begin{pmatrix} {199}/{18} & 9/2 & {44}/{3} \\ 3/2 & 35/6 & 12 \\ {11}/{6} & 9/2 & -26 \end{pmatrix}.\end{aligned}$$ Above $M_S$, the coefficients are the usual MSSM ones: $$\begin{aligned} b_i=\begin{pmatrix} 11 & 1 & -3\end{pmatrix}, \quad b_{ij}=\begin{pmatrix} {199}/{9} & 9 & {88}/{3} \\ 3 & 25 & 24 \\ {11}/{3} & 9 & 14 \end{pmatrix}.\end{aligned}$$ The two-loop coefficients $C_{if}$ that account for the Yukawa contributions are $$\begin{aligned} C_{if}= \begin{pmatrix} {26}/{3} & {14}/{3} & 6 \\ 6 & 6 & 2 \\ 4 & 4 & 0 \end{pmatrix}.\end{aligned}$$ We have also the following coefficients for the triplet $\Sigma_3$, the octet $\Sigma_8$, the triplet $T$, and the vector-like fermions $U$, $D$ and $E$: $$\begin{aligned} b_i^{\Sigma_3}&=\begin{pmatrix} 0 & 2 & 0\end{pmatrix}\,,\quad b_{ij}^{\Sigma_3}=\begin{pmatrix} 0 & 0 & 0\\ 0 & 24 & 0\\ 0 & 0 & 0 \end{pmatrix}, \\ b_i^{\Sigma_8}&=\begin{pmatrix} 0 & 0 & 3\end{pmatrix}\,, \quad b_{ij}^{\Sigma_8}=\begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 54 \end{pmatrix}, \\ b_i^{T}&=\begin{pmatrix} 6 & 4 & 0\end{pmatrix} \,,\quad b_{ij}^{T}=\begin{pmatrix} 24 & 48 & 0\\ 16 & 48 & 0\\ 0 & 0 & 0 \end{pmatrix}, \\ b_i^{U}&=\begin{pmatrix} 8 & 0 & 3\end{pmatrix} \,,\quad b_{ij}^{U}=\begin{pmatrix} 128/9 & 0 & 128/3\\ 0 & 0 & 0\\ 16/3 & 0 & 34 \end{pmatrix}, \\ b_i^{D}&=\begin{pmatrix} 2 & 0 & 3\end{pmatrix} \,,\quad b_{ij}^{D}=\begin{pmatrix} 8/9 & 0 & 32/3\\ 0 & 0 & 0\\ 4/3 & 0 & 34 \end{pmatrix}, \\ b_i^{E}&=\begin{pmatrix} 6 & 0 & 0\end{pmatrix} \,,\quad b_{ij}^{E}=\begin{pmatrix} 24 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix},\end{aligned}$$ which are introduced at the appropriate intermediate scales.

[^1]: Alternatively, one could break directly the left-right symmetry at the scale $\Lambda_{LR}=\Lambda$ without the need of the adjoint Higgs fields in the $(24,1)$ and $(1,24)$ representations.

[^2]: Terms in the superpotential such as $\psi\phi_L$, $\chi\phi^{c}_L\phi^{c}_L$, $T_L\psi\phi^{c}_L$ and $T_L\chi\phi_L$ violate R-parity.
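At one loop the gauge couplings evolve according to $\mathrm{d}\alpha_i^{-1}/\mathrm{d}\ln\mu=-b_i/(2\pi)$. As a rough numerical illustration of what the one-loop coefficients $b_i$ listed above do, here is a minimal sketch (the electroweak-scale inputs are illustrative values in the non-GUT hypercharge normalization used here; the two-loop $b_{ij}$ and Yukawa $C_{if}$ contributions and all intermediate thresholds are omitted):

```python
import math

def alpha_inv(alpha_inv_mz, b, mu, m_z=91.19):
    """One-loop running: alpha^{-1}(mu) = alpha^{-1}(M_Z) - b/(2*pi)*ln(mu/M_Z)."""
    return alpha_inv_mz - b / (2 * math.pi) * math.log(mu / m_z)

b_sm = (41 / 6, -19 / 6, -7)   # SM one-loop coefficients b_i from above
a_inv_mz = (98.4, 29.6, 8.5)   # illustrative alpha_i^{-1}(M_Z) inputs

# Evolve up to a putative high scale of 10^16 GeV.
a_inv_high = [alpha_inv(a, b, 1e16) for a, b in zip(a_inv_mz, b_sm)]

# b_3 = -7 < 0: asymptotic freedom, so alpha_3^{-1} grows with the scale,
# while the abelian coupling (b_1 > 0) becomes stronger at high energies.
```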
--- abstract: 'We evaluate the first three moments of central values of a family of quadratic Hecke $L$-functions in the Gaussian field with power saving error terms. In particular, we obtain asymptotic formulas for the first two moments with error terms of size $O(X^{1/2+\varepsilon})$. We also study the first and second mollified moments of the same family of $L$-functions to show that at least $87.5\%$ of the members of this family have non-vanishing central values.' address: 'School of Mathematical Sciences, Beihang University, Beijing 100191, China' author: - Peng Gao bibliography: - 'biblio.bib' title: 'Moments and Non-vanishing of central values of Quadratic Hecke $L$-functions in the Gaussian Field' --- \[subsection\][Theorem]{} \[subsection\][Proposition]{} \[subsection\][Lemma]{} \[subsection\][Corollary]{} \[subsection\][Conjecture]{} \[subsection\][Remark]{} [**Mathematics Subject Classification (2010)**]{}: 11M06, 11M41, 11N37 [**Keywords**]{}: central values, Hecke $L$-functions, mean values, quadratic Hecke characters Introduction {#sec 1} ============ The study of moments of quadratic twists of $L$-functions at central values has important applications to problems such as class numbers of imaginary quadratic fields, ranks of elliptic curves and the existence of Landau-Siegel zeros. For the central values of the family of quadratic Dirichlet $L$-functions, M. Jutila evaluated the first two moments in [@Jutila] to show that there are infinitely many $L$-functions in this family with non-vanishing central values. This approach was further advanced by K. Soundararajan in [@sound1], who computed the first and second mollified moments of the family of primitive quadratic Dirichlet $L$-functions to show that at least $87.5\%$ of such $L$-functions have non-vanishing central values.
In the same paper [@sound1], Soundararajan also obtained the third moment of the family of primitive quadratic Dirichlet $L$-functions. Under the assumption of the Generalized Riemann Hypothesis (GRH), Q. Shen obtained an asymptotic formula in [@Shen] for the fourth moment of the same family. In analogy with quadratic twists by Dirichlet characters, there is also an intensive study of moments of quadratic twists of various families of modular $L$-functions. Results on the first moments can be found in [@Munshi1; @Petrow1]. Assuming GRH, the second moment of quadratic twists of modular $L$-functions was computed by K. Soundararajan and M. P. Young in . Besides obtaining the main terms of the moments of families of $L$-functions, much attention has been devoted to improving the error terms. For the first moment of the family of quadratic Dirichlet $L$-functions, the error term in Jutila’s result is of size $O(X^{3/4+\varepsilon})$ (with the main term being about $X \log X$). An error term of the same size was obtained by A. I. Vinogradov and L. A. Takhtadzhyan [@ViTa] and was improved to $O(X^{19/32 + \varepsilon})$ by D. Goldfeld and J. Hoffstein in [@DoHo]. In fact, an error term of size $O(X^{1/2 + \varepsilon})$ is essentially implicit in [@DoHo] (see the remarks in the paragraph below Theorem 1.1 of [@Young1]). The result of Goldfeld and Hoffstein is obtained via Eisenstein series of metaplectic type. Using a different approach, involving more classical tools from analytic number theory, M. P. Young [@Young1] was able to establish the same bound on the error term for a smoothed first moment. Young’s approach builds on the previous work of Soundararajan, who developed a type of Poisson summation formula for smoothed quadratic Dirichlet character sums.
In the same work, Young also introduced novel techniques, such as using a recursive relation to successively reduce the error term, as well as an intricate analysis of certain subsidiary terms whose sizes are difficult to control individually. These techniques were later successfully applied by Young in [@Young2] to improve the error term in the smoothed third moment of the family of primitive quadratic Dirichlet $L$-functions, and by K. Sono [@Sono] for the smoothed second moment of the same family. Inspired by the work of Soundararajan and Young, we aim to apply the methods in [@sound1; @Young1; @Young2] to study moments of other quadratic twists of $L$-functions. In this paper, we focus on the moments of a family of quadratic Hecke $L$-functions in the Gaussian field. Thus, we denote $K={\ensuremath{\mathbb Q}}(i)$ for the Gaussian field throughout the paper, and we denote $\mathcal{O}_K={\ensuremath{\mathbb Z}}[i]$ for its ring of integers and $U_K=\{ \pm 1, \pm i \}$ for the group of units in $\mathcal{O}_K$. Recall that every ideal in $\mathcal{O}_K$ co-prime to $2$ has a unique generator congruent to $1$ modulo $(1+i)^3$ (see the definition above Lemma 8.2.1 in [@BEW]). These generators are called primary. We shall denote $\varpi$ for a prime element in $\mathcal{O}_K$, by which we mean that the ideal $(\varpi)$ generated by $\varpi$ is a prime ideal. We write $N(n)$ for the norm of any $n \in K$. We further denote $\chi$ for a Hecke character of $K$ and we say that $\chi$ is of trivial infinite type if its component at the infinite place of $K$ is trivial. We write $L(s,\chi)$ for the $L$-function associated to $\chi$ and we denote $\zeta_{K}(s)$ for the Dedekind zeta function of $K$. For any element $n \in \mathcal{O}_K$, we say $n$ is odd if $(n,2)=1$ and we say $n$ is square-free if the ideal $(n)$ is not divisible by the square of any prime ideal.
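Primariness is easy to test mechanically: among the four associates $un$, $u \in U_K$, of an odd $n$, exactly one is congruent to $1$ modulo $(1+i)^3 = -2+2i$. A small sketch, representing Gaussian integers as integer pairs (the helper names are ours):

```python
def divides(d, a):
    """True if d | a in Z[i], with integer pairs d = (dx, dy), a = (ax, ay)."""
    (dx, dy), (ax, ay) = d, a
    n = dx * dx + dy * dy                 # N(d)
    # a/d = a * conj(d) / N(d); check that both coordinates are integral.
    return (ax * dx + ay * dy) % n == 0 and (ay * dx - ax * dy) % n == 0

def is_primary(a):
    """a is odd (norm coprime to 2) and a == 1 mod (1+i)^3 = -2+2i."""
    x, y = a
    return (x * x + y * y) % 2 == 1 and divides((-2, 2), (x - 1, y))

def primary_associate(a):
    """The unique primary generator among a, i*a, -a, -i*a."""
    x, y = a
    candidates = [(x, y), (-y, x), (-x, -y), (y, -x)]
    primary = [z for z in candidates if is_primary(z)]
    assert len(primary) == 1              # uniqueness of the primary generator
    return primary[0]
```

For instance, $3+2i$ is already primary, while the primary associate of $1+2i$ is $-1-2i$.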
We further denote $\chi_c={\left(\frac{c}{\cdot}\right)}$, where ${\left(\frac{\cdot}{\cdot}\right)}$ is the quadratic residue symbol defined in Section \[sec2.4\]. Similarly to the arguments in Section 2.1 of , the symbol $\chi_{(1+i)^5d}$ defines a primitive quadratic Hecke character modulo $(1+i)^5d$ of trivial infinite type when $d \in \mathcal{O}_K$ is odd and square-free. We can thus consider the moments of the family of quadratic Hecke $L$-functions $L({\tfrac12}, \chi_{(1+i)^5d})$ with $d$ varying over odd and square-free elements in $\mathcal{O}_K$. The aim here is not only to obtain valid asymptotic formulas, but also to obtain error terms as good as those given by Young and Sono for the classical case. Our first result is the following \[theo:mainthm\] Let $\Phi:{\ensuremath{\mathbb R}}^{+} \rightarrow {\ensuremath{\mathbb R}}$ be a smooth function of compact support. Then for $1 \leq j \leq 3$ and any $\varepsilon>0$, we have $$\label{eq:mainthm} {\sideset{}{^*}\sum}_{(d,2)=1} L({\tfrac12}, \chi_{(1+i)^5d})^j \Phi{\left(\frac{N(d)}{X}\right)} = X P_j(\log{X}) + O(X^{\theta_j + \varepsilon}),$$ for some polynomials $P_j$ of degree $j(j+1)/2$ (depending on $\Phi$) and $\theta_j=1/2$ for $j=1,2$, $\theta_3=3/4$. Here the “$*$” on the sum over $d$ means that the sum is restricted to square-free elements $d$ in $\mathcal{O}_K$. In order to establish Theorem \[theo:mainthm\], we shall use recursive arguments to obtain the desired error terms in , starting from much larger error terms.
This process actually requires us to consider a more general situation, namely the following “twisted” moments for primary $l \in \mathcal{O}_K$: $$\begin{aligned} \begin{split} \label{Malphal} M_{\alpha}(l) =& {\sideset{}{^*}\sum}_{(d,2)=1} L({\tfrac12}+ \alpha, \chi_{(1+i)^5d})\chi_{(1+i)^5d}(l) \Phi\left(\frac{N(d)}{X}\right), \\ M_{\alpha, \beta}(l) =& {\sideset{}{^*}\sum}_{(d,2)=1} L({\tfrac12}+ \alpha, \chi_{(1+i)^5d}) L({\tfrac12}+ \beta, \chi_{(1+i)^5d})\chi_{(1+i)^5d}(l)\Phi\left(\frac{N(d)}{X}\right), \\ M_{\alpha, \beta, \gamma}(l) =&{\sideset{}{^*}\sum}_{(d,2)=1} L({\tfrac12}+ \alpha, \chi_{(1+i)^5d}) L({\tfrac12}+ \beta, \chi_{(1+i)^5d}) L({\tfrac12}+ \gamma, \chi_{(1+i)^5d}) \chi_{(1+i)^5d}(l) \Phi\left(\frac{N(d)}{X}\right). \end{split}\end{aligned}$$ It is certainly expected that the case $j=1$ of is the easiest to study compared to the higher moments. In fact, we shall only need to evaluate $M_{\alpha}(l)$ for $l$ square-free, while for the higher moments we need to evaluate $M_{\alpha, \beta}(l)$ and $M_{\alpha, \beta, \gamma}(l)$ for a general $l$. In order to state our results concerning $M_{\alpha}(l), M_{\alpha, \beta}(l)$ and $M_{\alpha, \beta, \gamma}(l)$, we first need to introduce some notation.
For $\Phi$ given as in the statement of Theorem \[theo:mainthm\], we shall set $$\begin{aligned} F(x)=\Phi(\frac {x}{X}).\end{aligned}$$ We recall that the Mellin transform $\hat{f}$ for any function $f$ is defined to be $$\begin{aligned} \widehat{f}(s) =\int\limits^{\infty}_0f(t)t^s\frac {{\mathrm{d}}t}{t}.\end{aligned}$$ It follows from this that we have $$\begin{aligned} \widehat{F}(s)=X^s\widehat{\Phi}(s).\end{aligned}$$ For a sequence of complex numbers $\alpha_1, \cdots, \alpha_j$ and a primary $n \in \mathcal{O}_K$, we define $$\begin{aligned} \label{eq:sigma} \sigma_{\alpha_1, \cdots, \alpha_j}(n) = \sum_{\substack{ a_1\cdots a_j=n \\ a_i \equiv 1 \bmod {(1+i)^3}, 1 \leq i \leq j } } \prod^j_{i=1}N(a_i)^{-\alpha_i}.\end{aligned}$$ For any primary $l \in \mathcal{O}_K$, we shall use $l_1, l_2$ for the unique primary elements in $\mathcal{O}_K$ such that $l=l_1l_2$ with $l_1$ being square-free and $l_2$ a square. We shall use the notation $l^*$ for $l_1$ as well. Using this notation, we define for each $l$, $$\begin{aligned} A_{\alpha_1, \cdots, \alpha_j}(l) = \sum_{\substack{n \equiv 1 \bmod {(1+i)^3} \\(n,2)=1}} \frac{\sigma_{\alpha_1, \cdots, \alpha_j}(l^* n^2)}{N(n)} \prod_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi | n l}} (1 + N(\varpi)^{-1})^{-1}.\end{aligned}$$ We further define $B_{\alpha}(l), B_{\alpha,\beta}(l)$ and $B_{\alpha, \beta, \gamma}(l)$ such that $$\begin{aligned} \label{B} \begin{split} A_{\alpha}(l) = & \zeta_{K,2}(1 + 2\alpha)B_{\alpha}(l), \\ A_{\alpha, \beta}(l) = & \zeta_{K,2}(1 + 2\alpha) \zeta_{K,2}(1 + 2\beta) \zeta_{K,2}(1 + \alpha + \beta)B_{\alpha,\beta}(l), \\ A_{\alpha, \beta, \gamma}(l) = & \zeta_{K,2}(1 + 2\alpha) \zeta_{K,2}(1 + 2\beta) \zeta_{K,2}(1 + 2\gamma) \zeta_{K,2}(1 + \alpha + \beta) \zeta_{K,2}(1 + \alpha + \gamma) \zeta_{K,2}(1 + \beta + \gamma) B_{\alpha,\beta,\gamma}(l), \end{split}\end{aligned}$$ where we define the function $\zeta_{K,l}(s)$ for $l \in \mathcal{O}_K$ by removing the Euler factors 
from $\zeta_{K}(s)$ at prime ideals $(\varpi)$ with $\varpi | l$. We similarly define $L_{l}(s, \chi)$ for any Hecke character $\chi$ of $K$, so that $$\begin{aligned} L_l(s, \chi)=L(s,\chi)\prod_{\substack{ (\varpi) \\ \varpi | l}}\Big(1-\frac {\chi(\varpi)}{N(\varpi)^s} \Big ).\end{aligned}$$ We note here (see also the discussions below Lemma \[lemma:A\]) that $B_{\alpha}(l), B_{\alpha,\beta}(l)$ and $B_{\alpha, \beta, \gamma}(l)$ have absolutely convergent Euler products for the parameters $\alpha, \beta, \gamma$ in a neighborhood of the origin. For example, we have $$\begin{aligned} B_{\alpha}(l) = N(l^*)^{-\alpha/2}\prod_{\substack{ \varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi | l}} (1 + N(\varpi)^{-1})^{-1} \prod_{\substack{ \varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi \nmid 2l}} {\left(}1 - N(\varpi)^{-2-2\alpha} (1+N(\varpi)^{-1})^{-1} {\right)}.\end{aligned}$$ We further define $$\begin{aligned} \label{gamma} \Gamma_{\alpha} =& \left(\frac{32}{\pi^2} \right)^{-\alpha} \frac{\Gamma\left({\tfrac12}- \alpha \right)}{\Gamma\left({\tfrac12}+ \alpha \right)}.\end{aligned}$$ Now, we are ready to state our recursive results concerning the error terms in the asymptotic expressions for $M_{\alpha}(l)$, $M_{\alpha, \beta}(l)$ and $M_{\alpha, \beta, \gamma}(l)$. \[theo:recursive2\] For any primary $l \in \mathcal{O}_K$, write $l=l_1l_2$, where $l_1, l_2$ are primary, $l_1$ is square-free and $l_2$ is a square.
If we have uniformly for $\alpha, \beta, \gamma$ lying in the rectangle $|\Re(s)| \leq \frac{\varepsilon}{\log{X}}$, $|\Im(s)| \leq X^{\varepsilon}$ that $$\begin{aligned} \label{eq:Malphal} M_{\alpha}(l) =& \pi \sum_{\epsilon_1 \in \{\pm 1 \}} A_{\epsilon_1 \alpha}(l) \Gamma_{\alpha}^{\delta_1} \frac{ \widehat{F}(1- \delta_1 \alpha)}{2 \zeta_{K,2}(2) \sqrt{N(l)}} + O(X^{f} \sqrt{N(l)} (N(l) X)^{\varepsilon}) \quad \text{for $l$ square-free}, \\ \label{eq:Malpha2} M_{\alpha, \beta}(l) =&\pi \sum_{\epsilon_1, \epsilon_2 \in \{\pm 1 \}} A_{\epsilon_1 \alpha, \epsilon_2 \beta }(l) \Gamma_{\alpha, \beta}^{\delta_1, \delta_2} \frac{ \widehat{F}(1- \delta_1 \alpha - \delta_2 \beta)}{2 \zeta_{K,2}(2) \sqrt{N(l^*)}} + O(X^{f} \sqrt{N(l)} (N(l) X)^{\varepsilon}), \\ \label{eq:Malpha3} M_{\alpha, \beta, \gamma}(l) =&\pi \sum_{\epsilon_1, \epsilon_2, \epsilon_3 \in \{\pm 1 \}} A_{\epsilon_1 \alpha, \epsilon_2 \beta,\epsilon_3 \gamma}(l) \Gamma_{\alpha, \beta, \gamma}^{\delta_1, \delta_2, \delta_3} \frac{ \widehat{F}(1- \delta_1 \alpha - \delta_2 \beta - \delta_3 \gamma)}{2 \zeta_{K,2}(2) \sqrt{N(l^*)}} + O(X^{f} \sqrt{N(l)} (N(l) X)^{\varepsilon}),\end{aligned}$$ for $f>1/2$ in , and for $f>3/4$ in , then the expression holds for $f$ replaced by $\frac 14+\frac{f}{2}$, the expression holds for $f$ replaced by $1-\frac{1}{4f}$ and the expression holds for $f$ replaced by $\frac34 + \frac{f-\frac34}{2f}$. Here, we define $\Gamma_{\alpha,\beta,\gamma}^{\delta_1, \delta_2, \delta_3}$ to be $\Gamma_{\alpha}^{\delta_1} \Gamma_{\beta}^{\delta_2} \Gamma_{\gamma}^{\delta_3}$, where $\delta_i = 0$ if $\epsilon_i = +1$, and $\delta_i = 1$ if $\epsilon_i = -1$. Similar definitions apply to $\Gamma_{\alpha}^{\delta_1}$ and $\Gamma_{\alpha, \beta}^{\delta_1, \delta_2}$. We note here that our condition in Theorem \[theo:recursive2\] for $M_{\alpha}(l)$ is slightly different from those for $M_{\alpha, \beta}(l)$ and $M_{\alpha, \beta, \gamma}(l)$.
This is because we only need $l$ to be square-free in the proof for the case of $M_{\alpha}(l)$, while for the other cases a general $l$ is involved. We also note that in [@CFKRS], J. B. Conrey, D. Farmer, J. Keating, M. Rubinstein and N. Snaith produced a recipe that allows one to conjecture the asymptotics for the integral moments of families of $L$-functions. Modifying their recipe, one may conjecturally obtain the main terms for $M_{\alpha}(l), M_{\alpha, \beta}(l)$ and $M_{\alpha, \beta, \gamma}(l)$ given in -, as Young and Sono did in [@Young1; @Young2; @Sono] for the case of Dirichlet $L$-functions. We can also obtain the same main terms here by going directly through the arguments in the proof of Theorem \[theo:recursive2\] in this paper. Applying the convexity bound (see [@iwakow Exercise 3, p. 100]) that for $\Re(s) =\varepsilon$, $$\begin{aligned} L(1/2 + s, \chi_{(1+i)^5d}) \ll ((1+|s|)^2N(d))^{1/4+\varepsilon},\end{aligned}$$ we deduce that expressions - are valid for $f=1+j/4$ as an initial estimate. Arguing as in the proof of Conjecture 3.3 in [@Young1], we see that this leads to a valid expression of and for $f={\tfrac{1}{2}}$, as well as a valid expression of for $f=\frac 34$. We summarize this in the following result. \[thm: Malphal1\] Let $M_{\alpha}(l), M_{\alpha, \beta}(l)$ and $M_{\alpha, \beta, \gamma}(l)$ be as given in -. For any primary $l \in \mathcal{O}_K$ and any complex numbers $\alpha, \beta, \gamma$ lying in the rectangle $|\Re(s)| \leq \frac{\varepsilon}{\log{X}}$, $|\Im(s)| \leq X^{\varepsilon}$, the expression holds with an error of size $N(l)^{1/2 + \varepsilon} X^{\frac{1}{2} + \varepsilon}$ for any $\varepsilon>0$, when $l$ is square-free. For general $l$, the expression holds with an error of size $N(l)^{1/2 + \varepsilon} X^{\frac{1}{2} + \varepsilon}$ for any $\varepsilon>0$ and the expression holds with an error of size $N(l)^{3/4 + \varepsilon} X^{\frac{1}{2} + \varepsilon}$ for any $\varepsilon>0$.
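The way the exponents $1/2$, $1/2$ and $3/4$ emerge can be visualized numerically: starting from the convexity-bound value $f=1+j/4$ and iterating the three maps $f \mapsto \frac14+\frac{f}{2}$, $f \mapsto 1-\frac{1}{4f}$ and $f \mapsto \frac34+\frac{f-3/4}{2f}$ of Theorem \[theo:recursive2\] drives $f$ down towards the respective fixed points. A quick sketch:

```python
def iterate(step, f, n):
    """Apply the exponent-improving map `step` to f, n times."""
    for _ in range(n):
        f = step(f)
    return f

# First moment: f -> 1/4 + f/2 contracts geometrically to the fixed point 1/2.
f1 = iterate(lambda f: 0.25 + f / 2, 1 + 1 / 4, 30)
# Second moment: f -> 1 - 1/(4f) has a parabolic fixed point at 1/2,
# so the approach is much slower (error roughly of size 1/n).
f2 = iterate(lambda f: 1 - 1 / (4 * f), 1 + 2 / 4, 5000)
# Third moment: f -> 3/4 + (f - 3/4)/(2f) contracts to the fixed point 3/4.
f3 = iterate(lambda f: 0.75 + (f - 0.75) / (2 * f), 1 + 3 / 4, 80)
# Each iterate stays strictly above its limit, matching the fact that the
# recursion only yields every exponent f > 1/2 (resp. f > 3/4).
```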
In Section \[sec: pfrecursive\], we shall prove Theorem \[theo:recursive2\] by assuming that each $\alpha, \beta, \gamma$ lies in a punctured rectangle of the form $|\Re(s)| \leq c_1/\log{X}$, $|\Im(s)| \leq c_2 X^{\varepsilon}$ minus $|\Re(s)| \leq c_1/(2\log{X})$, $|\Im(s)| \leq (c_2/2) X^{\varepsilon}$, for suitable $c_i$ depending on $\alpha, \beta, \gamma$, such that the distances between the parameters are $\gg 1/X^{\varepsilon}$. One then deduces the result for the other cases following the arguments made in the paragraph above Section 3.3 in [@Young1] and the two paragraphs below Lemma 3.6 in [@Young2]. By considering the limiting case $\alpha, \beta, \gamma \rightarrow 0$, $l=1$ in Theorem \[thm: Malphal1\], we recover the statement of Theorem \[theo:mainthm\]. Note that we do not run into singularities here; see Lemma 2.3 in [@Sono] and the paragraph above it for an explanation. Note that the “twisted” moments $M_{\alpha}(l), M_{\alpha, \beta}(l)$ and $M_{\alpha, \beta, \gamma}(l)$ appear naturally when mollifying central values. Thus, our result in Theorem \[thm: Malphal1\] also paves the way for us to consider the mollified moments of the same family of $L$-functions. We shall in fact evaluate the first and second mollified moments of this family in Section \[sect: nonvanishing\] to establish the following non-vanishing result on central values. \[thm: nonvanishing\] We have, for all large $x$ and any fixed $\varepsilon > 0$, $$\begin{aligned} & \sum_{\substack{N(d) \leq x \\ (d, 2)=1 \\ L({\tfrac{1}{2}}, \chi_{(1+i)^5d}) \neq 0}}\mu_{[i]}(d)^2 \geq {\left(}\frac 78-\varepsilon {\right)}\sum_{\substack{N(d) \leq x \\ (d, 2)=1 }}\mu_{[i]}(d)^2.\end{aligned}$$ Thus, for at least $87.5\%$ of the odd square-free elements $d \in \mathcal{O}_{K}$, $L({\tfrac{1}{2}},\chi_{(1+i)^5d})\neq 0$.
Our proof of Theorem \[theo:recursive2\] follows largely the line of treatment of Young in [@Young1; @Young2], as well as the approach of Sono in [@Sono] for the evaluation of $M_{\alpha, \beta}(l)$. We shall apply the approximate functional equation for $L(s, \chi_{(1+i)^5d})$ obtained in Section \[sect: apprfcneqn\] to express products involving $L({\tfrac{1}{2}}+\alpha, \chi_{(1+i)^5d})$ as two smoothed sums. Then we apply a two dimensional Poisson summation to convert the sum over $d$ into a dual sum. Shifting the contours of integration leads to contributions from poles, which in turn give us two types of main terms, with the second type being contributed by the non-zero squares in the dual sum. On the new line of integration, we apply the recursive argument to obtain “tails" of these main terms, so that some of them combine naturally. This leads to the main terms given in - with the desired, smaller error terms. The most intricate part of the above approach involves representing the second type of main terms so that they can be combined with certain terms coming from the recursive process. This requires a careful analysis of the Archimedean parts of the functional equations of the corresponding $L$-functions, as well as of the two dimensional Fourier transforms of the weight functions involved. On the other hand, our proof of Theorem \[thm: nonvanishing\] owes much to the work of Soundararajan in [@sound1]. In fact, the error term in the asymptotic expression for $M_{\alpha, \beta}(l)$ given in Theorem \[thm: Malphal1\] is not strong enough in the $l$ aspect for us to choose a mollifier that is long enough to derive our result. We thus need to follow the original treatment of Soundararajan in [@sound1] to handle the second mollified moment. The proof of Theorem \[thm: nonvanishing\] is then made much easier, thanks to the existing approach available in [@sound1].
Preliminaries {#sec 2} ============= In this section, we include some auxiliary results needed in the proofs of our theorems. Quadratic residue symbol, Gauss sum and Poisson Summation {#sec2.4} --------------------------------------------------------- Recall that $K={\ensuremath{\mathbb Q}}(i)$ and that it is well-known that $K$ has class number one. We denote $D_K$ for the discriminant of $K$ and recall that $D_{K}=-4$. For $n \in \mathcal{O}_{K}, (n,2)=1$, we denote the symbol ${\left(\frac{\cdot}{n}\right)}$ for the quadratic residue symbol $\pmod n$ in $K$. For a prime $\varpi \in {\ensuremath{\mathbb Z}}[i]$ with $N(\varpi) \neq 2$, the quadratic symbol is defined for $a \in \mathcal{O}_{K}$, $(a, \varpi)=1$ by ${\left(\frac{a}{\varpi}\right)} \equiv a^{(N(\varpi)-1)/2} \pmod{\varpi}$, with ${\left(\frac{a}{\varpi}\right)} \in \{ \pm 1 \}$. When $\varpi | a$, we define ${\left(\frac{a}{\varpi}\right)} =0$. The quadratic symbol is then extended multiplicatively to any composite $n$ with $(N(n), 2)=1$. We further define ${\left(\frac{\cdot}{c}\right)}=1$ when $c \in U_K$. For any $n, r\in \mathcal{O}_K$, $(n,2)=1$, we define the quadratic Gauss sum $g(r, n)$ by $$\begin{aligned} \label{g2} g(r,n) = \sum_{x \bmod{n}} {\left(\frac{x}{n}\right)} \widetilde{e}{\left(\frac{rx}{n}\right)},\end{aligned}$$ where $$\begin{aligned} \widetilde{e}(z) =\exp \left( 2\pi i \left( \frac {z}{2i} - \frac {\bar{z}}{2i} \right) \right) .\end{aligned}$$ Let $\varphi_{[i]}(n)$ denote the number of elements in the reduced residue class group of $\mathcal{O}_K/(n)$. We now recall some explicit evaluations of $g(r,n)$ for primary $n$. \[Gausssum\] (i) We have $$\begin{aligned} g(rs,n) & = \overline{{\left(\frac{s}{n}\right)}} g(r,n), \qquad (s,n)=1, \\ g(k,mn) & = g(k,m)g(k,n), \qquad m,n \text{ primary and } (m , n)=1 .\end{aligned}$$ (ii) Let $\varpi$ be a primary prime in $\mathcal{O}_K$. Suppose $\varpi^{h}$ is the largest power of $\varpi$ dividing $k$.
(If $k = 0$ then set $h = \infty$.) Then for $l \geq 1$, $$\begin{aligned} g(k, \varpi^l)& =\begin{cases} 0 \qquad & \text{if} \qquad l \leq h \qquad \text{is odd},\\ \varphi_{[i]}(\varpi^l) \qquad & \text{if} \qquad l \leq h \qquad \text{is even},\\ -N(\varpi)^{l-1} & \text{if} \qquad l= h+1 \qquad \text{is even},\\ {\left(\frac{ik\varpi^{-h}}{\varpi}\right)}N(\varpi)^{l-1/2} \qquad & \text{if} \qquad l= h+1 \qquad \text{is odd},\\ 0, \qquad & \text{if} \qquad l \geq h+2. \end{cases}\end{aligned}$$ We quote the following Poisson summation formula from . \[Poissonsumformodd\] Let $n \in {\ensuremath{\mathbb Z}}[i], n \equiv 1 \pmod {(1+i)^3}$ and ${\left(\frac{\cdot}{n}\right)}$ be the quadratic residue symbol $\pmod {n}$. For any Schwartz class function $W$, we have $$\begin{aligned} \sum_{\substack {m \in {\ensuremath{\mathbb Z}}[i] \\ (m,1+i)=1}}{\left(\frac{m}{n}\right)} W\left(\frac {N(m)}{X}\right)=\frac {X}{2N(n)}{\left(\frac{1+i}{n}\right)}\sum_{k \in {\ensuremath{\mathbb Z}}[i]}(-1)^{N(k)} g(k,n)\widetilde{W}\left(\sqrt{\frac {N(k)X}{2N(n)}}\right),\end{aligned}$$ where $$\begin{aligned} \widetilde{W}(t) &=\int\limits^{\infty}_{-\infty}\int\limits^{\infty}_{-\infty}W(N(x+yi))\widetilde{e}\left(- t(x+yi)\right){\mathrm{d}}x {\mathrm{d}}y, \quad t \geq 0.\end{aligned}$$ Evaluation of certain integrals {#sect: Wiestm} ------------------------------- We will require an evaluation on $\widetilde{W}(t)$ for special choices of $W(t)$. 
First note that $\widetilde{W}(t) \in {\ensuremath{\mathbb R}}$ in general for any $t \geq 0$, since we have $$\begin{aligned} \label{Wt} \widetilde{W}(t) =\int\limits_{{\ensuremath{\mathbb R}}^2}\cos (2\pi t y)W(x^2+y^2) \ {\mathrm{d}}x {\mathrm{d}}y.\end{aligned}$$ We evaluate the above integral in polar coordinates to get $$\begin{aligned} \widetilde{W}(t) =& 4\int\limits^{\pi/2}_0\int\limits^{\infty}_0\cos (2\pi t r\sin \theta)W(r^2) \ r {\mathrm{d}}r {\mathrm{d}}\theta = 2\int\limits^{\pi/2}_0 \int\limits^{\infty}_0\cos (2\pi t r^{1/2}\sin \theta)W(r) \ {\mathrm{d}}r {\mathrm{d}}\theta.\end{aligned}$$ We now take $\Phi(t)$ as given in Theorem \[theo:mainthm\]. Fix a positive integer $m$ and let $G_j(s), 1 \leq j \leq m$ be entire, even functions, bounded in any strip $-A \leq \Re(s) \leq A$ for some $A>2$, such that $G_j(0)=1, 1\leq j \leq m$. We further let $\alpha_j, 1 \leq j \leq m$ be complex numbers and denote $(\alpha_j)$ for the sequence $(\alpha_1, \cdots, \alpha_j)$. We further define, for $t>0$, $$\begin{aligned} \label{eq:Vdef} V_{(\alpha_j)}(t) = \frac{1}{2 \pi i} \int\limits_{(2)} \frac{G_j(s)}{s} g_{(\alpha_j)}(s) t^{-s} ds,\end{aligned}$$ where $g_{(\alpha_j)}(s) =\prod^j_{i=1} g_{\alpha_i}(s)$ with $$ g_{\alpha}(s) = \left(\frac{2^{5/2}}{\pi}\right)^{s} \frac {\Gamma(\frac{1}{2}+\alpha+s)}{\Gamma(\frac{1}{2}+\alpha)}.$$ The functions $V_{(\alpha_j)}(t)$ appear naturally in the approximate functional equations involving products of $L(1/2+\alpha_j, \chi_{(1+i)^5d})$ (see Section \[sect: apprfcneqn\]).
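For orientation, $V_{(\alpha_j)}$ behaves like a smoothed cut-off: it is close to $1$ for small $t$ (the pole of the integrand at $s=0$ contributes the residue $1$) and decays rapidly once $t$ is large, which is what truncates the sums in the approximate functional equation. A numerical sketch for $j=1$, $\alpha=0$ and $G_1 \equiv 1$ (a Lanczos approximation stands in for the complex Gamma function; all helper names are ours):

```python
import cmath
import math

def log_gamma(z):
    """Lanczos approximation of log Gamma(z), valid for Re(z) > 0."""
    g = [76.18009172947146, -86.50532032941677, 24.01409824083091,
         -1.231739572450155, 0.1208650973866179e-2, -0.5395239384953e-5]
    t = z + 5.5 - (z + 0.5) * cmath.log(z + 5.5)
    s = 1.000000000190015 + sum(g[k] / (z + 1 + k) for k in range(6))
    return -t + cmath.log(2.5066282746310005 * s / z)

def v_weight(t, y_max=40.0, dy=0.01):
    """V_{(0)}(t) = (1/2 pi i) * int_(2) (2^{5/2}/pi)^s Gamma(1/2+s)/Gamma(1/2)
    * t^{-s} ds/s, computed by the trapezoidal rule on the line Re(s) = 2."""
    n = int(2 * y_max / dy)
    total = 0j
    for k in range(n + 1):
        s = complex(2.0, -y_max + k * dy)
        f = ((2 ** 2.5 / math.pi) ** s * cmath.exp(log_gamma(0.5 + s))
             / math.gamma(0.5) * t ** (-s) / s)
        total += (0.5 if k in (0, n) else 1.0) * f * dy
    return (total / (2 * math.pi)).real

# v_weight(0.01) is close to 1, while v_weight(100.0) is essentially 0.
```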
In our process, we need to evaluate $\widetilde{F_{n,j}}\left(t\right)$ for $1 \leq j \leq 3$ for a primary $n$, where $$\begin{aligned} \label{Fn} F_{n,j}(t)=\Phi(t)V_{(\alpha_j)}\left( \frac {N(n)}{(X t)^{j/2}} \right ).\end{aligned}$$ To do so, we first note that for any real number $c_u$, we have $$\begin{aligned} \Phi \left(t \right)=\frac 1{2\pi i}\int\limits_{(c_u)}\widehat{\Phi}(u)t^{-u}du.\end{aligned}$$ Applying this together with , we see that for $t>0$, $$\begin{aligned} \widetilde{F_{n,j}}\left(t\right)= & 2\int\limits^{\pi/2}_0\int\limits^{\infty}_0\cos (2\pi t r^{1/2}\sin \theta)\frac 1{(2\pi i)^2}\int\limits_{(c_u)} \int\limits_{(c_s)}\widehat{\Phi}(1+u)r^{-u} g_{(\alpha_j)}(s) \left( \frac {N(n)}{ (X r)^{j/2}} \right )^{-s} \frac {G_j(s)\,{\mathrm{d}}s}{s}\, {\mathrm{d}}u \,\frac {{\mathrm{d}}r}{r}\, {\mathrm{d}}\theta.\end{aligned}$$ We reverse the order of the three inner integrations above to arrive, after some changes of variables (first $r^{1/2} \to r$, then $2 \pi tr \sin \theta \to r$), at $$\begin{aligned} \begin{split} \widetilde{F_{n,j}}\left(t\right) =& 4\int\limits^{\pi/2}_0\frac 1{(2\pi i)^2} \int\limits_{(c_s)}\int\limits_{(c_u)}\widehat{\Phi}(1+u) g_{(\alpha_j)}(s) \left( \frac {N(n)}{X^{j/2}} \right )^{-s} \int\limits^{\infty}_0\cos (r)\left(\frac r{2\pi t \sin \theta}\right )^{js-2u}\frac {{\mathrm{d}}r}{r}\, {\mathrm{d}}u\, \frac {G_j(s)\,{\mathrm{d}}s}{s}\, {\mathrm{d}}\theta \\ =& \frac {4}{(2\pi i)^2} \int\limits_{(c_s)}\int\limits_{(c_u)}\widehat{\Phi}(1+u) g_{(\alpha_j)}(s) \left( \frac {N(n)}{X^{j/2}} \right )^{-s}(2\pi t)^{-(js-2u)}\int\limits^{\pi/2}_0 (\sin \theta )^{-(js-2u)} {\mathrm{d}}\theta \int\limits^{\infty}_0\cos (r)r^{js-2u}\frac {{\mathrm{d}}r}{r}\, {\mathrm{d}}u\, \frac {G_j(s)\,{\mathrm{d}}s}{s}.
\end{split}\end{aligned}$$ We note that for $\Re(s)<1$, we have (see [@GR Formula 2, Section 8.380]) $$\begin{aligned} \label{sinint} \int\limits^{\pi/2}_0 (\sin \theta )^{-s} {\mathrm{d}}\theta=\frac 12 B(\frac {1-s}{2}, \frac 12)=\frac {\sqrt{\pi}}{2}\frac {\Gamma(\frac {1-s}{2})}{\Gamma(\frac {2-s}{2})},\end{aligned}$$ where $B(x,y)$ is the Beta function, which satisfies, for $\Re(x), \Re(y)>0$ (see [@GR Formula 2, Section 8.384]), $$\begin{aligned} B(x, y)=\frac {\Gamma(x)\Gamma(y)}{\Gamma(x+y)},\end{aligned}$$ and that (see [@GR Formula 2, Section 8.338]) $\Gamma(\frac {1}{2})=\sqrt{\pi}$. We also note that (see [@GR Formula 9, Section 3.761]) for any $0<\Re(s)<1$: $$\begin{aligned} \label{cosMellin} \int\limits^{\infty}_0\cos (r)r^{s}\frac {{\mathrm{d}}r}{r} =\Gamma (s) \cos \left( \frac {\pi s}{2} \right).\end{aligned}$$ We now combine , and the following relation (see chapter 10 of [@Da]) $$\pi^{-{\tfrac12}} 2^{1-s} \cos(\tfrac{\pi}{2} s) \Gamma(s) = \frac{\Gamma {\left(\frac{s}{2}\right)}}{\Gamma{\left(\frac{1-s}{2}\right)}}$$ to see that $$\begin{aligned} \int\limits^{\pi/2}_0 (\sin \theta )^{-u} {\mathrm{d}}\theta \int\limits^{\infty}_0\cos (r)r^{u}\frac {{\mathrm{d}}r}{r}=\frac {\pi}{2}2^{u-1}\frac{\Gamma {\left(\frac{u}{2}\right)}}{\Gamma{\left(\frac{2-u}{2}\right)}}.\end{aligned}$$ This implies that $$\begin{aligned} \int\limits^{\pi/2}_0 (\sin \theta )^{-(js-2u)} {\mathrm{d}}\theta \int\limits^{\infty}_0\cos (r)r^{js-2u}\frac {{\mathrm{d}}r}{r}=\frac {\pi}{2}2^{js-2u-1}\frac{\Gamma {\left(\frac{js-2u}{2}\right)}}{\Gamma{\left(\frac{2-js+2u}{2}\right)}}.\end{aligned}$$ We then conclude that $$\begin{aligned} \label{Fnexprssion} \begin{split} \widetilde{F_{n,j}}\left(t\right) =& \frac {\pi}{(2\pi i)^2} \int\limits_{(c_s)}\int\limits_{(c_u)}\widehat{\Phi}(1+u) g_{(\alpha_j)}(s) \left( \frac {N(n)}{X^{j/2}} \right )^{-s}(\pi t)^{-(js-2u)}\frac{\Gamma {\left(\frac{js-2u}{2}\right)}}{\Gamma{\left(\frac{2-js+2u}{2}\right)}}\, {\mathrm{d}}u\, \frac {G_j(s)\,{\mathrm{d}}s}{s}.
\end{split}\end{aligned}$$ Lastly, for any primary $l$ and $n$, we evaluate $\widetilde{F_{n^2l,j }}\left(0\right)$ by applying directly to see that $$\begin{aligned} \label{F0} \begin{split} \widetilde{F_{n^2l,j}}\left(0\right) =& \int\limits^{\infty}_{-\infty}\int\limits^{\infty}_{-\infty}\Phi \left(N(x+yi) \right) V_{(\alpha_j)} \left( \frac {N(ln^2)}{(X N(x+yi))^{j/2}} \right ) {\mathrm{d}}x {\mathrm{d}}y \\ =& \frac {1}{2\pi i} \int\limits\limits_{(2)}g_{(\alpha_j)}(s)\left( \frac {X^{j/2} } {N(ln^2)}\right )^{s} \left (\int\limits^{\infty}_{-\infty}\int\limits^{\infty}_{-\infty}\Phi \left(N(x+yi) \right) N(x+yi)^{js/2} {\mathrm{d}}x {\mathrm{d}}y \right ) \frac { G_j(s) ds}{s} \\ =& \frac {\pi }{2\pi i} \int\limits\limits_{(2)} g_{(\alpha_j)}(s)\left( \frac {X^{j/2} } {N(ln^2)}\right )^{s} \widehat{\Phi}(1+\frac {js}2) \frac {G_j(s)ds}{s}, \end{split}\end{aligned}$$ since we have $$\begin{aligned} \int\limits^{\infty}_{-\infty}\int\limits^{\infty}_{-\infty}\Phi \left(N(x+yi) \right) N(x+yi)^{js/2} {\mathrm{d}}x {\mathrm{d}}y =\int^{2\pi}_0\int^{\infty}_0\Phi (r^2)r^{js}rdrd\theta =\pi \widehat{\Phi}(1+\frac {js}2).\end{aligned}$$ The approximate functional equation {#sect: apprfcneqn} ----------------------------------- Let $\chi$ be a primitive quadratic Hecke character $\pmod {m}$ of trivial infinite type defined on $\mathcal{O}_K$. As shown by E. 
Hecke, $L(s, \chi)$ admits analytic continuation to an entire function and satisfies the functional equation ([@iwakow Theorem 3.8]) $$\begin{aligned} \label{fneqn} \Lambda(s, \chi) = W(\chi)(N(m))^{-1/2}\Lambda(1-s, \overline{\chi}),\end{aligned}$$ where $|W(\chi)|=(N(m))^{1/2}$ and $$\begin{aligned} \Lambda(s, \chi) = (|D_K|N(m))^{s/2}(2\pi)^{-s}\Gamma(s)L(s, \chi).\end{aligned}$$ As $\chi$ is quadratic, we have $\chi=\overline{\chi}$ so that by setting $s=1/2$ in , we deduce that $$\begin{aligned} W(\chi)=N(m)^{1/2}.\end{aligned}$$ Thus, the functional equation in this case becomes $$\begin{aligned} \label{fneqnquad} \Lambda(s, \chi) = \Lambda(1-s, \chi).\end{aligned}$$ Let $s_j, 1 \leq j \leq n$ be complex numbers for some positive integer $n$. Write ${\bf s}=(s_1, \cdots, s_n)$ and $1-{\bf s}=(1-s_1, \cdots, 1-s_n)$. Let $G_n(s)$ be an entire, even function, bounded in any strip $-A \leq \Re(s) \leq A$ for some $A>2$, such that $G_n(0)=1$. For some $c >1$, consider the integral $$\begin{aligned} I({\bf s}, \chi)=\frac 1{2 \pi i}\int\limits_{(c)}\prod^n_{j=1}\Lambda(s_j+u, \chi)G_n(u) \frac {{\mathrm{d}}u}{u}.\end{aligned}$$ Moving the contour of integration to $\Re(u)=-c$, we see that $$\prod^n_{j=1}\Lambda(s_j, \chi)=I({\bf s}, \chi)- \frac 1{2 \pi i}\int\limits_{(-c)}\prod^n_{j=1}\Lambda(s_j+u, \chi)G_n(u) \frac {{\mathrm{d}}u}{u}.$$ We now apply the functional equation to obtain $$\begin{aligned} \label{Lambda} \begin{split} \prod^n_{j=1}\Lambda(s_j, \chi)= & I({\bf s}, \chi) -\frac 1{2 \pi i}\int\limits_{(-c)}\prod^n_{j=1}\Lambda(1-s_j-u, \chi)G_n(u) \frac {{\mathrm{d}}u}{u} \\ =& I({\bf s}, \chi)+ \frac 1{2 \pi i}\int\limits_{(c)}\prod^n_{j=1}\Lambda(1-s_j+u, \chi)G_n(u) \frac {{\mathrm{d}}u}{u} \\ =& I({\bf s}, \chi)+I(1-{\bf s}, \chi), \end{split}\end{aligned}$$ where the second equality follows from the change of variable $u \rightarrow -u$ in the first integral above, together with the evenness of $G_n$. 
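The contour-shift step above, which produces the two pieces $I({\bf s}, \chi)$ and $I(1-{\bf s}, \chi)$, can be illustrated numerically in a toy setting. The sketch below (plain Python; the integrand is an arbitrary illustrative stand-in for $\prod_j \Lambda(s_j+u, \chi)G_n(u)/u$, not a quantity from the paper) checks that the difference of the two vertical-line integrals equals the residue of the integrand at $u=0$.

```python
import cmath
import math

def line_integral(f, c, T=12.0, n=6000):
    """(1/(2*pi*i)) * integral of f over the vertical line Re(u) = c,
    truncated to |Im(u)| <= T and evaluated by the composite Simpson rule.
    Since du = i dt on u = c + i*t, this equals (1/(2*pi)) * int f(c+it) dt."""
    h = 2.0 * T / n
    total = 0.0 + 0.0j
    for k in range(n + 1):
        u = complex(c, -T + k * h)
        weight = 1 if k in (0, n) else (4 if k % 2 == 1 else 2)
        total += weight * f(u)
    return total * h / 3.0 / (2.0 * math.pi)

# Toy integrand: exp(u^2) decays rapidly on vertical lines (like G_n(u)),
# 2^(-u) stands in for the Dirichlet-series part, and 1/u supplies the
# single pole between Re(u) = -1 and Re(u) = 1, with residue 1 at u = 0.
f = lambda u: 2.0 ** (-u) * cmath.exp(u * u) / u

# Shifting the contour from (c) to (-c) picks up exactly this residue.
shift = line_integral(f, 1.0) - line_integral(f, -1.0)
```

The rapid Gaussian decay of the test factor makes the truncation to $|\Im(u)| \leq 12$ harmless, mirroring the role of $G_n$ in the actual argument.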
Upon expanding $\Lambda(s_j+u), 1 \leq j \leq n$ into convergent Dirichlet series, we have $$\begin{aligned} I({\bf s}, \chi)=&\frac 1{2 \pi i}\int\limits_{(c)} \left ( \sum_{0 \neq \mathcal{A}_1, \cdots, \mathcal{A}_n \subset \mathcal{O}_K} \prod^n_{j=1} \frac{\chi(\mathcal{A}_j)}{N(\mathcal{A}_j)^{s_j+u}}\frac{(|D_K|N(m))^{(u+s_j)/2}\Gamma(s_j+u)}{(2\pi)^{u+s_j}} \right ) G_n(u) \frac {{\mathrm{d}}u}{u}, \\ I(1-{\bf s}, \chi)=& \frac 1{2 \pi i}\int\limits_{(c)} \left ( \sum_{0 \neq \mathcal{A}_1, \cdots, \mathcal{A}_n \subset \mathcal{O}_K} \prod^n_{j=1} \frac{\chi(\mathcal{A}_j)}{N(\mathcal{A}_j)^{1-s_j+u}}\frac{(|D_K|N(m))^{(1-s_j+u)/2}\Gamma(1-s_j+u)}{(2\pi)^{(1-s_j+u)}} \right ) G_n(u) \frac {{\mathrm{d}}u}{u}.\end{aligned}$$ Applying these expressions and dividing through by $\prod^n_{j=1}(|D_K|N(m))^{s_j/2}(2\pi)^{-s_j}\Gamma(s_j)$ on both sides of , we obtain $$\begin{aligned} \prod^n_{j=1}L(s_j,\chi)=& \frac 1{2 \pi i}\int\limits_{(c)} \left ( \sum_{0 \neq \mathcal{A}_1, \cdots, \mathcal{A}_n \subset \mathcal{O}_K} \prod^n_{j=1} \frac{\chi(\mathcal{A}_j)}{N(\mathcal{A}_j)^{s_j+u}}\frac{(|D_K|N(m))^{u/2}\Gamma(s_j+u)}{(2\pi)^{u}\Gamma(s_j)} \right ) G_n(u) \frac {{\mathrm{d}}u}{u} \\ & + \frac 1{2 \pi i}\int\limits_{(c)} \left ( \sum_{0 \neq \mathcal{A}_1, \cdots, \mathcal{A}_n \subset \mathcal{O}_K} \prod^n_{j=1} \frac{\chi(\mathcal{A}_j)}{N(\mathcal{A}_j)^{1-s_j+u}}\frac{(|D_K|N(m))^{(1-2s_j+u)/2}\Gamma(1-s_j+u)}{(2\pi)^{(1-2s_j+u)}\Gamma(s_j)} \right ) G_n(u) \frac {{\mathrm{d}}u}{u}.\end{aligned}$$ Recalling that $D_K=-4$, we then deduce from the above by setting $s_j=\frac 12+\alpha_j$, $\chi=\chi_{(1+i)^5d}$ for $d$ odd and square-free that $$\begin{aligned} & \prod^n_{j=1}L({\tfrac{1}{2}}+\alpha_j,\chi) \\ =& \sum_{0 \neq \mathcal{A} \subset \mathcal{O}_K} \frac{\chi(\mathcal{A}) \sigma_{(\alpha_n)}(\mathcal{A})}{N(\mathcal{A})^{1/2}}V_{(\alpha_n)} \left( \frac {N(\mathcal{A})}{N(d)^{n/2}} \right)+N(d)^{- 
\sum^n_{j=1}\alpha_j}\prod^n_{j=1}\Gamma_{\alpha_j}\sum_{0 \neq \mathcal{A} \subset \mathcal{O}_K} \frac{\chi(\mathcal{A}) \sigma_{-(\alpha_n)}(\mathcal{A})}{N(\mathcal{A})^{1/2}}V_{-(\alpha_n)} \left( \frac {N(\mathcal{A})}{N(d)^{n/2}} \right),\end{aligned}$$ where $\Gamma_{\alpha}$ is defined in , $V_{(\alpha_n)}$ is defined in and $$\begin{aligned} \sigma_{(\alpha_n)}(\mathcal{A})= \sum_{ \prod^n_{j=1}\mathcal{A}_j=\mathcal{A} }\prod^n_{j=1}N(\mathcal{A}_j)^{-\alpha_j}.\end{aligned}$$ Note that $\chi_{(1+i)^5d}(\mathcal{A}) \neq 0$ only when $(\mathcal{A}, 2)=1$, in which case we may replace $\mathcal{A}$ by its primary generator. We thus deduce from the above discussions the following approximate functional equation for products of quadratic Hecke $L$-functions. \[lem:AFE\] Let $G_j(s), 1 \leq j \leq 3$ be entire, even functions with rapid decay in the strip $|\Re(s)| \leq 10$ such that $G_j(0)=1, 1 \leq j \leq 3$. For $\chi_{(1+i)^5d}$ as above, we have $$\begin{aligned} \label{fcneqnL} \begin{split} \prod^j_{i=1}L({\tfrac12}+ \alpha_i, \chi_{(1+i)^5d}) = & \sum_{\substack{n \equiv 1 \bmod {(1+i)^3}}} \frac{\chi_{(1+i)^5d}(n)\sigma_{\alpha_1, \cdots, \alpha_j}(n)}{N(n)^{\frac{1}{2}}} V_{(\alpha_j)} \left(\frac{ N(n)}{N(d)^{j/2}} \right) \\ & + \displaystyle N(d)^{-\sum^j_{i=1}\alpha_i}\Gamma_{\alpha_1, \cdots, \alpha_j} \sum_{\substack{n \equiv 1 \bmod {(1+i)^3}}} \frac{\chi_{(1+i)^5d}(n)\sigma_{-\alpha_1, \cdots, -\alpha_j}(n)}{N(n)^{\frac{1}{2}}} V_{-(\alpha_j)} \left(\frac{ N(n)}{N(d)^{j/2}} \right), \end{split}\end{aligned}$$ where $\Gamma_{\alpha_1, \cdots, \alpha_j}= \displaystyle \prod^j_{i=1}\Gamma_{\alpha_i}$ and $\sigma_{\alpha_1, \cdots, \alpha_j}(n)$ is defined in . Analytical behaviors of certain Dirichlet series {#sect: alybehv} ------------------------------------------------ In this section, we discuss the analytical behaviors of certain Dirichlet series that are needed in our proofs. 
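One family of coefficients appearing throughout is the generalized divisor sum $\sigma_{\alpha_1, \cdots, \alpha_j}(n)$ from the approximate functional equation. For intuition, a minimal sketch of its rational-integer analogue (replacing ideals of $\mathcal{O}_K$ by positive integers, purely for illustration) is the following; with all shifts $\alpha_i = 0$ it reduces to the $j$-fold divisor function $d_j(n)$.

```python
def divisors(n):
    """All positive divisors of n (naive trial division)."""
    return [d for d in range(1, n + 1) if n % d == 0]

def sigma(alphas, n):
    """Rational-integer analogue of sigma_{alpha_1,...,alpha_j}(n):
    the sum of prod_i a_i^(-alpha_i) over ordered factorizations
    n = a_1 * ... * a_j (the paper sums over ideals of O_K instead)."""
    if len(alphas) == 1:
        return n ** (-alphas[0])
    return sum(a ** (-alphas[0]) * sigma(alphas[1:], n // a)
               for a in divisors(n))
```

Like its ideal-theoretic counterpart, this function is multiplicative in $n$, which is what drives the Euler-product factorizations used below.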
The first result concerns the analytical behaviors of $A_{\alpha, \beta}(l)$ and $A_{\alpha, \beta, \gamma}(l)$ given in Theorem \[theo:recursive2\]. \[lemma:A\] Let $l = l_1 l_2$ and let $A_{\alpha, \beta}(l)$, $A_{\alpha, \beta, \gamma}(l)$ be as in Theorem \[theo:recursive2\]. Then both $A_{\alpha, \beta}(l)$ and $A_{\alpha, \beta, \gamma}(l)$ have meromorphic continuations to $\Re(\alpha), \Re(\beta), \Re(\gamma) > -{\tfrac{1}{2}}$. In fact, for any positive integer $M \geq 2$ there exist integers $d_{a,b}, d_{a,b,c}$ (possibly negative or zero) such that $$\begin{aligned} \label{eq:Aproduct} \begin{split} A_{\alpha,\beta}(l) =& C_{\alpha,\beta}(l)\prod_{\substack{1 \leq a + b\leq M-1 \\ a, b \geq 0}} \zeta_K(a+b+ 2a\alpha + 2b\beta)^{d_{a,b}}, \\ A_{\alpha,\beta,\gamma}(l) =& C_{\alpha,\beta,\gamma}(l)\prod_{\substack{1 \leq a + b + c \leq M-1 \\ a, b, c \geq 0}} \zeta_K(a+b+c + 2a\alpha + 2b\beta + 2c\gamma)^{d_{a,b,c}}, \end{split}\end{aligned}$$ where for any $\delta > 0$, $C_{\alpha,\beta}(l), C_{\alpha,\beta,\gamma}(l)$ are given by absolutely convergent Euler products in the region $\Re(\alpha), \Re(\beta), \Re(\gamma) > \frac{1-M}{2M} + \delta$. Moreover, in this region $C_{\alpha,\beta}(l), C_{\alpha, \beta, \gamma}(l)$ satisfy the bound $$C_{\alpha,\beta}(l), \quad C_{\alpha,\beta,\gamma}(l) \ll \sqrt{N(l_1)} N(l)^{\varepsilon}.$$ The proof of the above lemma is similar to that of [@Young1 Lemma 4.1] and [@Sono Lemma 4.1], so we shall omit it here. We only note here that when $a+b=1$ or $a + b + c =1$ we have $d_{a,b}=d_{a,b,c} = 1$, and this readily implies the analytical behaviors of $B_{\alpha,\beta}(l)$ and $B_{\alpha,\beta,\gamma}(l)$ defined in . To facilitate our treatments in the proof of Theorem \[theo:recursive2\], we shall make the following remark similar to [@Young1 Remark 2.2] and [@Young2 Remark 2.2]. \[remark:zero\] We choose $G_1$ so that $G_1(\pm \alpha) = G_1({\tfrac12}\pm \alpha) = 0$. 
We choose $G_j(s), j=2,3$ to vanish at the poles of all the $\zeta_K$’s which occur in as numerators (i.e., with $d_{a,b}>0$ or $d_{a,b,c} > 0$) in the corresponding factorization of $A_{\alpha + s, \beta+s}$ or $A_{\alpha + s, \beta+s, \gamma+s}$, and also to be divisible by all the $\zeta_K$’s which occur in as denominators (i.e., with $d_{a,b}<0$ or $d_{a,b,c} < 0$) in the corresponding factorization of $A_{\alpha + s, \beta+s}$ or $A_{\alpha + s, \beta+s, \gamma+s}$, with $M$ large enough so that $A_{\alpha + s, \beta+s}, A_{\alpha + s, \beta+s, \gamma+s}$ have meromorphic continuations to $\Re(s) > -{\tfrac{1}{2}}+ \varepsilon$ for a given $\varepsilon>0$. We also assume that $G_j(s)$ is symmetric under any permutation of $\{\alpha, \beta, \gamma\}$, under switching any of $\alpha, \beta, \gamma$ with its negative, and under switching $s$ with $-s$. Let $g(k,n)$ be defined as in . We now fix a generator for every prime ideal $(\varpi) \subset \mathcal{O}_K$, take $1$ as the generator of $\mathcal{O}_K={\ensuremath{\mathbb Z}}[i]$ itself, and extend this choice multiplicatively to all ideals of $\mathcal{O}_K$. We denote the set of such generators by $G$. Let $k_1 \in \mathcal{O}_K$ be square-free and let $l \in \mathcal{O}_K$ be primary with $(l, a)=1$. For a fixed integer $j \geq 1$ and complex numbers $\alpha_i, 1 \leq i \leq j$, we define $J_{k_1,j}(v,w;l, a)$ as $$\begin{aligned} \label{eq:J} J_{k_1,j}(v,w;l, a) &=\sum_{\substack{n \equiv 1 \bmod {(1+i)^3} \\ (n,a)=1}} \sum_{\substack {k_2 \neq 0 \\ k_2 \in \mathcal{O}_K}} \frac {\sigma_{\alpha_1, \cdots, \alpha_j}(n)}{N(n)^{w}N(k_2)^{v}} \frac {g(k_1k^2_2,ln)}{N(ln)},\end{aligned}$$ where we use the convention throughout the paper that all sums over $k_2$ are restricted to $k_2 \in G$. Our next lemma gives the analytic properties of $J_{k_1,j}(v,w;l, a)$. \[lemma:Jprop\] Suppose that $l$ is primary such that $(l, 2a) =1$, $k_1$ is square-free, and $J_{k_1,j}(v,w;l, a)$ is given by for $\Re(v) > 2$ and $\Re(w) > 2$. 
Then $J_{k_1,j}(v,w;l, a)$ has a meromorphic continuation to $\Re(v) \geq 2$ and $\Re(w) > \delta$ for any $\delta > 0$, provided that $\alpha_i, 1 \leq i \leq j$ are small enough compared to $\delta$. Moreover, in this region we have $$J_{k_1,j}(v,w;l, a) =\prod^j_{i=1} L_{2al}({\tfrac12}+ w + \alpha_i, \chi_{ik_1}) I_{k_1,j}(v, w),$$ where $I_{k_1,j }(v, w)$ is analytic in this region and satisfies the bound $$I_{k_1,j}(v, w) \ll_{\delta, \varepsilon} N(l)^{-{\tfrac{1}{2}}+ \varepsilon} .$$ The proof of the above lemma is similar to that of [@Young1 Lemma 5.1], [@Young2 Lemma 5.2] and [@Sono Lemma 4.3], so we omit it here. Proof of Theorem \[theo:recursive2\] {#sec: pfrecursive} ==================================== Initial Treatment ----------------- We fix $\alpha_1=\alpha, \alpha_2=\beta, \alpha_3=\gamma$ throughout, and we identify $M_{(\alpha_j)}(l)$ with $M_{\alpha}(l), M_{\alpha, \beta}(l)$ and $M_{\alpha, \beta, \gamma}(l)$ for $j=1,2,3$, respectively. This applies to similar notations such as $g_{(\alpha_j)}, \sigma_{(\alpha_j)}, V_{(\alpha_j)}$ as well. We apply the approximate functional equation for a fixed $1 \leq j \leq 3$ to write $M_{(\alpha_j)}(l) = M_1((\alpha_j),l) + M_{-1}((\alpha_j),l)$, where $$\begin{aligned} M_1((\alpha_j),l) =& {\sideset{}{^*}\sum}_{(d,2) = 1}F(N(d)) \sum_{\substack{ n \equiv 1 \bmod {(1+i)^3}}} \frac{\chi_{(1+i)^5d}(nl) \sigma_{(\alpha_j)}(n)}{N(n)^{\frac{1}{2}}}V_{(\alpha_j)} \left(\frac{ N(n)}{N(d)^{j/2}}\right), \\ M_{-1}((\alpha_j),l) =& \Gamma_{(\alpha_j)} {\sideset{}{^*}\sum}_{(d,2) = 1} N(d)^{-\sum^j_{i=1}\alpha_i} F(N(d)) \sum_{\substack{ n \equiv 1 \bmod {(1+i)^3}}} \frac{\chi_{(1+i)^5d}(nl) \sigma_{-(\alpha_j)}(n)}{N(n)^{{\tfrac12}}} V_{-(\alpha_j)} \left(\frac{N(n)}{N(d)^{j/2}}\right).\end{aligned}$$ We shall make the convention that we may often drop the dependence on $(\alpha_j)$ and $l$ to simply write $M_1$, $M_{-1}$ and other expressions when there is no risk of confusion. 
We shall also mainly focus on evaluating $M_1$, as the evaluation of $M_{-1}$ can be done by noticing the following remark: \[remark:M1toM2\] To derive an expression for $M_{-1}$ via a corresponding term from $M_1$ involves swapping $\alpha_i$ and $-\alpha_i$, $1 \leq i \leq 3$, replacing $F(x)$ by $F_{-(\alpha_j)}(x) = x^{-\sum^j_{i=1}\alpha_i} F(x)$, and multiplying by $\Gamma_{(\alpha_j)}$, in that order. We now apply Möbius inversion to remove the square-free condition over $d$ in $M_1$ and $M_{-1}$. Let $\mu_{[i]}$ denote the Möbius function on $\mathcal{O}_{K}$; then we have $$ M_1 = \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ (a,2l) = 1}} \mu_{[i]}(a) \sum_{(d,2)=1} F(N(d a^2)) \sum_{\substack{ n \equiv 1 \bmod {(1+i)^3}\\ (n,2a)=1}} \frac{\chi_{(1+i)^5d}(nl)\sigma_{(\alpha_j)}(n)}{N(n)^{\frac{1}{2}}} V_{(\alpha_j)} \left(\frac{ N(n)}{N(a^2d)^{j/2}}\right).$$ Now we separate the terms with $N(a) \leq Y$ and with $N(a) > Y$ ($Y$ a parameter to be chosen later), writing $M_1 = M_N + M_R$, respectively. We similarly write $M_{-1} = M_{-N} + M_{-R}$. 
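The Möbius-inversion step that removes the square-free condition on $d$ rests on the classical identity $\sum_{a^2 \mid m} \mu(a) = \mathbf{1}_{m\ \text{square-free}}$. A quick numerical sanity check of this sieve over the rational integers (an illustrative stand-in for the sum over odd $d \in \mathcal{O}_K$; the cutoff $X$ and test function below are arbitrary) is:

```python
import math

def mobius(n):
    """Mobius function mu(n) by trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # p^2 | n, so n is not square-free
            result = -result
        p += 1
    if n > 1:                     # one leftover prime factor
        result = -result
    return result

X = 500
f = lambda d: 1.0 / d             # arbitrary test function

# Direct sum over square-free d <= X ...
direct = sum(f(d) for d in range(1, X + 1) if mobius(d) != 0)

# ... equals the sieved double sum, by sum_{a^2 | m} mu(a) = [m square-free].
sieved = sum(mobius(a) * sum(f(a * a * d) for d in range(1, X // (a * a) + 1))
             for a in range(1, math.isqrt(X) + 1))
```

In the paper the same regrouping is performed with $d$ replaced by $a^2 d$, after which the terms with $N(a) \leq Y$ and $N(a) > Y$ are handled separately.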
Estimating $M_R$: applying the recursion {#section:MR} ---------------------------------------- We now make a change of variable by letting $d \rightarrow b^2 d$ with the new $d$ being square-free to see that $$M_R = \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ (a,2l) = 1\\ N(a) > Y}}\mu_{[i]}(a) \sum_{\substack{ b \equiv 1 \bmod {(1+i)^3} \\ (b,2l)=1}} {\sideset{}{^*}\sum}_{(d,2)=1} F(N(d (ab)^2)) \sum_{\substack{ n \equiv 1 \bmod {(1+i)^3} \\ (n,2ab)=1}} \frac{\chi_{(1+i)^5d}(nl)\sigma_{(\alpha_j)}(n)}{N(n)^{\frac{1}{2}}} V_{(\alpha_j)} \left(\frac{ N(n)}{N((ab)^2d)^{j/2}}\right).$$ We further let $c=ab$ to obtain $$\begin{aligned} M_R = \sum_{\substack{ c \equiv 1 \bmod {(1+i)^3} \\ (c,2l)=1}} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ a |c \\ N(a) > Y}} \mu_{[i]}(a) {\sideset{}{^*}\sum}_{(d,2)=1} F\left(N(d c^2) \right) \sum_{\substack{ n \equiv 1 \bmod {(1+i)^3} \\ (n,2c)=1}} \frac{\chi_{(1+i)^5d}(nl)\sigma_{(\alpha_j)}(n)}{N(n)^{\frac{1}{2}}} V_{(\alpha_j)} \left(\frac{ N(n)}{N(c^2d)^{j/2}}\right).\end{aligned}$$ Using the definition of $V_{(\alpha_j)} $ as an integral representation given in , we see that the inner sum over $n$ above is $$ \sum_{\substack{ n \equiv 1 \bmod {(1+i)^3} \\ (n,2ab)=1}} \frac{\chi_{(1+i)^5d}(nl)\sigma_{(\alpha_j)}(n)}{N(n)^{\frac{1}{2}}} \frac{1}{2\pi i} \int\limits\limits_{(2)} \frac{G_j(s)}{s} g_{(\alpha_j)}(s) \frac{N(c^2 d)^{js/2}}{N(n)^s} ds.$$ We move the sum over $n$ inside the integral to get $$\begin{aligned} M_R =& \sum_{\substack{ c \equiv 1 \bmod {(1+i)^3} \\ (c,2l)=1}} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ a |c \\ N(a) > Y}} \mu_{[i]}(a) {\sideset{}{^*}\sum}_{(d,2)=1} \chi_{(1+i)^5d}(l) F\left( N(d c^2) \right) \\ & \times \frac{1}{2 \pi i} \int\limits\limits_{({\tfrac12}+ \varepsilon)} (N(c^2d))^{js/2} \prod^j_{i=1}L({\tfrac12}+ \alpha_i + s, \chi_{(1+i)^5d}) \prod^j_{i=1} \prod_{\substack{ \varpi_j \equiv 1 \bmod {(1+i)^3} \\ \varpi_j | c}}\Big(1-\frac 
{\chi_{(1+i)^5d}(\varpi_j)}{N(\varpi_j)^{(1/2+\alpha_i+s)}} \Big ) \frac{G_j(s)}{s} g_{(\alpha_j)}(s) ds.\end{aligned}$$ We now move the line of integration to $\varepsilon$ without crossing any poles in this process by Remark \[remark:zero\]. Then expanding $\displaystyle \prod^j_{i=1}\prod_{\substack{ \varpi_j \equiv 1 \bmod {(1+i)^3} \\ \varpi_j | c}}\Big(1-\frac {\chi_{(1+i)^5d}(\varpi_j)}{N(\varpi_j)^{(1/2+\alpha_i+s)}} \Big )$, we obtain that $$\begin{aligned} \label{eq:prerecursion} \begin{split} M_R =& \sum_{\substack{ c \equiv 1 \bmod {(1+i)^3} \\ (c,2l)=1}} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ a |c \\ N(a) > Y}} \mu_{[i]}(a) \sum_{\substack{ r_i \equiv 1 \bmod {(1+i)^3} \\ 1 \leq i \leq 3 \\ r_i | c}} \prod^j_{i=1}\frac{\mu_{[i]}(r_i)}{N(r_i)^{{\tfrac{1}{2}}+ \alpha_i}} \\ & \times \frac{1}{2 \pi i} \int\limits\limits_{(\varepsilon)} {\sideset{}{^*}\sum}_{(d,2)=1} \chi_{(1+i)^5d}(l\prod^j_{i=1}r_i) F_{\frac{js}{2};N(c^2)}(N(d))\Big(N(\prod^j_{i=1}r_i)\Big )^{-s} \prod^j_{i=1}L({\tfrac12}+ \alpha_i + s, \chi_{(1+i)^5d})\frac{G_j(s)}{s} g_{(\alpha_j)}(s) ds, \end{split}\end{aligned}$$ where $F_{\nu;y}(x) = (xy)^{\nu} F(xy)$ and $\varepsilon \asymp (\log{X})^{-1}$. Note that the inner sum over $d$ above is of the form $M_{(\alpha_j+s)}(l\prod^j_{i=1}r_i)$, but with a new weight function with smaller support ($N(d) \asymp X/N(c)^2$). Now we truncate the integral in so that $|\Im(s)| \leq (\log(X/N(c^2)))^2$. When $N(c)^2 \leq X^{1-\varepsilon}$, the exponential decay of the integrand implies that the error introduced by this truncation is negligible. When $N(c)^2 \geq X^{1-\varepsilon}$, the sum over $d$ contains only $O(X^{\varepsilon})$ terms, so that the convexity bound $L(1/2 + \alpha + s, \chi_{(1+i)^5d}) \ll ((1+|s|)^2N(d))^{1/4+\varepsilon}$ implies that the error introduced is of size $O(X^{{\tfrac{1}{2}}+ \varepsilon})$. 
We can then apply Theorem \[theo:recursive2\] to the truncated integral, and the same argument as above allows us to extend the integral back to the whole vertical line, without introducing a new error. In this way, we can express $M_R$ as the sum of $2^j$ main terms plus an error of size $$\begin{aligned} \ll & X^{\varepsilon} \sum_{\substack{ c \equiv 1 \bmod {(1+i)^3} \\ (c,2l)=1}} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ a |c \\ N(a) > Y}}|\mu_{[i]}(a)| \sum_{\substack{ r_i \equiv 1 \bmod {(1+i)^3} \\ 1 \leq i \leq 3 \\ r_i | c}} \prod^j_{i=1}\frac{|\mu_{[i]}(r_i)|}{N(r_i)^{{\tfrac{1}{2}}}} N(l \prod^j_{i=1}r_i) ^{1/2 + \varepsilon} {\left(\frac{X}{N(c)^2}\right)}^{f + \varepsilon} \\ \ll & \frac{X^{f + \varepsilon}}{Y^{2f - 1}} N(l)^{1/2 + \varepsilon}.\end{aligned}$$ For the main terms, by a direct application of Theorem \[theo:recursive2\], we see that $$\begin{aligned} & M_R(\epsilon_1, \cdots, \epsilon_j) \\ =& \pi \sum_{\substack{ c \equiv 1 \bmod {(1+i)^3} \\ (c,2l)=1}} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ a |c \\ N(a) > Y}} \mu_{[i]}(a) \sum_{\substack{ r_i \equiv 1 \bmod {(1+i)^3} \\ 1 \leq i \leq 3 \\ r_i | c}} \prod^j_{i=1}\frac{\mu_{[i]}(r_i)}{N(r_i)^{{\tfrac{1}{2}}+ \alpha_i}} \frac{1}{\sqrt{N((l\prod^j_{i=1}r_i)^*)}} \frac{1}{2 \zeta_{K,2}(2)} \\ & \times \frac{1}{2\pi i} \int\limits\limits_{(\varepsilon)} A_{\epsilon_1(\alpha_1+ s), \cdots, \epsilon_j(\alpha_j+ s)}(l\prod^j_{i=1}r_i) \Gamma_{\alpha_1+ s, \cdots, \alpha_j + s}^{\delta_1, \cdots, \delta_j}\widehat{F}_{js/2;N(c^2)}(w) \frac{1}{N(\prod^j_{i=1}r_i)^{s}} \frac{G_j(s)}{s} g_{(\alpha_j)}(s) ds,\end{aligned}$$ where we denote $w = 1 - \delta_1(\alpha_1+s) - \cdots- \delta_j(\alpha_j+s)$. 
Now we apply the relation $$\widehat{F}_{js/2;N(c^2)}(u) = \int_0^{\infty} (xN(c^2))^{\frac{js}{2}} F(N(c^2)x) x^{u} \frac{dx}{x} = N(c)^{-2u} \widehat{F}(\tfrac{js}{2} + u)$$ to see that $$\begin{aligned} \label{eq:MRmainterms} \begin{split} M_R(\epsilon_1, \cdots, \epsilon_j) =& \frac{\pi }{2 \zeta_{K,2}(2)}\sum_{\substack{ c \equiv 1 \bmod {(1+i)^3} \\ (c,2l)=1}}\frac{1}{N(c)^{2w}} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ a |c \\ N(a) > Y}}\mu_{[i]}(a) \frac{1}{2\pi i} \int\limits\limits_{(\varepsilon)} \Gamma_{\alpha_1+ s, \cdots, \alpha_j + s}^{\delta_1, \cdots, \delta_j} \frac{G_j(s)}{s} g_{(\alpha_j)}(s) \widehat{F}(\tfrac{js}{2} + w) \\ & \times \sum_{\substack{ r_i \equiv 1 \bmod {(1+i)^3} \\ 1 \leq i \leq 3 \\ r_i | c}} \prod^j_{i=1}\frac{\mu_{[i]}(r_i)}{N(r_i)^{{\tfrac{1}{2}}+ \alpha_i+s}}\frac{1}{\sqrt{N((l\prod^j_{i=1}r_i)^*)}} A_{\epsilon_1(\alpha_1+ s), \cdots, \epsilon_j(\alpha_j+ s)}(l\prod^j_{i=1}r_i) ds. \end{split}\end{aligned}$$ We summarize our discussions above in the following result. \[lemma:MRresult\] If Theorem \[theo:recursive2\] holds with a parameter $f > 1/2$ when $j=1,2$ and $f > 3/4$ when $j=3$, then $$\begin{aligned} M_R = \sum_{\epsilon_1, \cdots \epsilon_j \in \{\pm 1\}} M_R(\epsilon_1, \cdots, \epsilon_j) + O(\sqrt{N(l)} \frac{X^{f + \varepsilon}}{Y^{2f-1}})+O(X^{1/2+\varepsilon}),\end{aligned}$$ where $M_R(\epsilon_1, \cdots, \epsilon_j)$ is defined in . 
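The relation $\widehat{F}_{js/2;N(c^2)}(u) = N(c)^{-2u}\,\widehat{F}(\tfrac{js}{2}+u)$ used above is an instance of the general Mellin-transform identity $\int_0^{\infty} (xy)^{\nu} F(xy)\, x^{u-1}\, dx = y^{-u}\widehat{F}(\nu+u)$. A numerical check with the illustrative test weight $F(x) = e^{-x}$ (so $\widehat{F} = \Gamma$; the paper's $F$ is a smooth compactly supported weight instead, and the parameter values below are arbitrary) is:

```python
import math

def mellin_shifted(nu, y, u, n=100000, xmax=40.0):
    """Midpoint-rule approximation to int_0^inf (x*y)^nu * F(x*y) * x^(u-1) dx
    for the test weight F(x) = exp(-x)."""
    h = xmax / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += (x * y) ** nu * math.exp(-x * y) * x ** (u - 1.0)
    return total * h

nu, y, u = 1.5, 3.0, 2.0
lhs = mellin_shifted(nu, y, u)
rhs = y ** (-u) * math.gamma(nu + u)   # since F-hat(w) = Gamma(w) for F = exp(-x)
```

The substitution $t = xy$ in the integral is exactly the step that extracts the factor $N(c)^{-2u}$ in the displayed relation.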
Estimating $M_R$: further simplifications {#section:proofofMRbound} ----------------------------------------- In this section, we show that some of the main terms appearing in Lemma \[lemma:MRresult\] can also be treated as error terms, by establishing \[lemma:MRbound\] If at least two of the $\epsilon_i$’s are $-1$, then for a special choice of $G_j(s)$ described in Remark \[remark:zero\], we have $$\label{eq:twonegeps} M_R(\epsilon_1, \epsilon_2) \ll Y X^{1/2} (N(l) X)^{\varepsilon}, \quad M_R(\epsilon_1, \epsilon_2, \epsilon_3) \ll Y X^{3/4} (N(l) X)^{\varepsilon},$$ and furthermore, $$\label{eq:threenegeps} M_R(-1, -1, -1) \ll X^{3/4} (N(l) X)^{\varepsilon}.$$ Since the proofs are similar, we give the proof only for $M_R(-1, -1, 1)$ here. We extend the sum over $a$ to all primary integers in $K$, and subtract the contribution from $N(a) \leq Y$, writing $M_R(-1,-1,1) = M'(-1,-1,1) - M''(-1,-1,1)$ accordingly. To treat $M'(-1,-1,1)$, we note that the sum over $a$ becomes $\displaystyle \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ a | c}} \mu_{[i]}(a)$, which vanishes unless $c=1$. This implies that $c=r_1 = r_2 =r_3 =1$ so that $$\begin{aligned} M'(-1, -1, 1) = \frac{\pi}{2 \zeta_{K,2}(2)} \frac{1}{2\pi i} \int\limits_{(\varepsilon)} \Gamma_{\alpha + s} \Gamma_{\beta + s} \frac{G_3(s)}{s} g_{\alpha,\beta,\gamma}(s) \widehat{F}(1-\alpha-\beta-\tfrac{s}{2}) \frac{1}{\sqrt{N(l_1)}} A_{-\alpha - s, -\beta - s, \gamma + s}(l) ds.\end{aligned}$$ In view of Lemma \[lemma:A\] and our choice of $G_3(s)$ described in Remark \[remark:zero\], we can move the contour of integration to $\Re(s) = {\tfrac{1}{2}}- \delta$. 
Using the bound $$\widehat{F}(1-\alpha-\beta- \tfrac{s}{2}) \ll X^{1 - \tfrac{\sigma}{2}},$$ where $\sigma = \Re(s)$, we see that $$M'(-1,-1,1) \ll N(l_1)^{-{\tfrac{1}{2}}} N(l_1)^{{\tfrac{1}{2}}+\varepsilon} N(l)^{\varepsilon} X^{1-1/4 + \varepsilon}.$$ As for $M''(-1,-1,1)$, we write $c=ab$ and note that the condition $r_1, r_2, r_3| ab$ is equivalent to $[r_1,r_2,r_3]/(a, [r_1,r_2,r_3]) |b$. We can then write $b=k[r_1,r_2,r_3]/(a, [r_1,r_2,r_3])$ with $k$ being primary and $(k, 2l)=1$. On summing over $k$ first, we obtain that $$\begin{aligned} \begin{split} & M''(-1, -1, 1) \\ =& \frac{\pi}{2 \zeta_{K,2}(2)} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ (a,2l) = 1\\ N(a) \leq Y}}\frac{\mu_{[i]}(a)}{N(a)^{2-2\alpha-2\beta-4s}} \frac{1}{2\pi i} \int\limits_{(\varepsilon)} \Gamma_{\alpha + s} \Gamma_{\beta + s} \frac{G_3(s)}{s} g_{\alpha,\beta,\gamma}(s) \widehat{F}(1-\alpha-\beta-\tfrac{s}{2}) \\ & \times \sum_{\substack{ r_1, r_2, r_3 \equiv 1 \bmod {(1+i)^3} \\ (r_1 r_2 r_3, 2l)=1}} \frac{\mu_{[i]}(r_1) \mu_{[i]}(r_2) \mu_{[i]}(r_3) }{N(r_1)^{{\tfrac{1}{2}}+ \alpha+s} N(r_2)^{{\tfrac{1}{2}}+ \beta+s} N(r_3)^{{\tfrac{1}{2}}+ \gamma+s}} \frac{1}{\sqrt{N((lr_1 r_2 r_3)^*)}} {\left(\frac{N((a,[r_1, r_2, r_3]))}{N([r_1, r_2, r_3])}\right)}^{2-2\alpha-2\beta-4s} \\ & \times \zeta_{K, 2l}(2-2\alpha-2\beta - 4s) A_{-\alpha - s, -\beta - s, \gamma + s}(lr_1 r_2 r_3) ds. 
\end{split}\end{aligned}$$ Again we move the contour of integration to $\Re(s) = {\tfrac{1}{2}}- \delta$ and bound everything trivially to see that $$\begin{aligned} & M''(-1,-1,1) \\ \ll & X^{3/4 + \delta/2}\sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ (a,2l) = 1\\ N(a) \leq Y}} N(a)^{4\delta} \sum_{\substack{ r_1, r_2, r_3 \equiv 1 \bmod {(1+i)^3} \\ (r_1 r_2 r_3, 2l)=1}} \frac{N(lr_1 r_2 r_3)^{\varepsilon} |\mu_{[i]}(r_1) \mu_{[i]}(r_2) \mu_{[i]}(r_3)|}{N(r_1 r_2 r_3)^{1- \delta}} {\left(\frac{N((a,[r_1, r_2, r_3]))}{N([r_1, r_2, r_3])}\right)}^{4\delta}.\end{aligned}$$ We apply the bound $N((a, [r_1, r_2, r_3])) \leq N(a)$ and note that the sums over $r_1, r_2, r_3$ converge absolutely for any $\delta > 0$ by taking $\varepsilon$ small enough compared to $\delta$ to see that $$M''(-1,-1,1) \ll Y X^{3/4} (N(l) X)^{\varepsilon}.$$ Lastly, we bound $M_R(-1, -1, -1)$ by writing $c=ab$ again to see that $$\begin{aligned} & M_R(-1, -1, -1) \\ =& \frac{1}{2 \zeta_{K,2}(2)} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ (a,2l) = 1\\ N(a) > Y}} \mu_{[i]}(a) \frac{1}{2\pi i} \int\limits\limits_{(\varepsilon)} \Gamma_{\alpha + s, \beta + s, \gamma + s} \frac{G_3(s)}{s} g_{\alpha,\beta,\gamma}(s) \widehat{F}(1- \alpha-\beta-\gamma-\tfrac{3s}{2}) \\ & \times \sum_{\substack{ b \equiv 1 \bmod {(1+i)^3} \\ (b,2l)=1}}\frac{1}{N(ab)^{2(1-\alpha-\beta-\gamma-3s)}} \sum_{\substack{ r_1, r_2, r_3 \equiv 1 \bmod {(1+i)^3} \\ r_1, r_2, r_3 | ab}} \frac{\mu_{[i]}(r_1) \mu_{[i]}(r_2) \mu_{[i]}(r_3) }{N(r_1)^{{\tfrac{1}{2}}+ \alpha+s} N(r_2)^{{\tfrac{1}{2}}+ \beta+s} N(r_3)^{{\tfrac{1}{2}}+ \gamma+s}} \\ & \times \frac{A_{-\alpha - s, -\beta - s, -\gamma - s}(lr_1 r_2 r_3)}{\sqrt{N((lr_1 r_2 r_3)^*)}} ds.\end{aligned}$$ We now move the contour of integration to $\Re(s) = \frac16 - \delta$. This leads to the desired bound given in by noting that the sums over $a$ and $b$ converge absolutely. 
Combining Lemma \[lemma:MRresult\] and Lemma \[lemma:MRbound\], we deduce that \[lemma:MRresult1\] If Theorem \[theo:recursive2\] holds with a parameter $f > 1/2$ when $j=1,2$ and $f > 3/4$ when $j=3$, then $$\begin{aligned} \begin{split} M_R(\alpha, l) =& M_R(1) + M_R(-1) + O(\sqrt{N(l)} \frac{X^{f + \varepsilon}}{Y^{2f-1}})+O(X^{1/2+\varepsilon}), \\ M_R(\alpha, \beta, l) =& M_R(1,1) + M_R(1,-1) + M_R(-1,1)+O(\sqrt{N(l)} \frac{X^{f + \varepsilon}}{Y^{2f-1}})+O(Y X^{1/2} (N(l) X)^{\varepsilon}), \\ M_R(\alpha, \beta, \gamma, l) =& M_R(1,1,1) + M_R(1,1,-1) + M_R(1,-1, 1)+M_R(-1,1,1) +O(\sqrt{N(l)} \frac{X^{f + \varepsilon}}{Y^{2f-1}})+O(Y X^{3/4} (N(l) X)^{\varepsilon}). \end{split}\end{aligned}$$ Computing $M_N$: applying Poisson summation {#section:MN} ------------------------------------------- We recall that $$M_N = \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\(a,2l)=1 \\ N(a) \leq Y}} \mu_{[i]}(a) \sum_{\substack{ n \equiv 1 \bmod {(1+i)^3} \\(n,2a)=1 }} \frac{{\left(\frac{(1+i)^5}{nl}\right)}\sigma_{(\alpha_j)}(n) }{N(n)^{\frac{1}{2}}} \sum_{(d,2)=1} {\left(\frac{d}{nl}\right)} \Phi\left(\frac{N(d a^2)}{X}\right) V_{(\alpha_j)} \left(\frac{N(n)}{N(a^2d)^{j/2}}\right).$$ We now apply the Poisson summation formula given in Lemma \[Poissonsumformodd\] to see that $$\begin{aligned} & \sum_{(d,2)=1} {\left(\frac{d}{nl}\right)} \Phi\left(\frac{N(d a^2)}{X}\right) V_{(\alpha_j)} \left(\frac{N(n)}{N(a^2d)^{j/2}}\right) = \frac {X}{2N(a^2nl)} {\left(\frac{1+i}{nl}\right)} \sum_{k \in {\ensuremath{\mathbb Z}}[i]}(-1)^{N(k)}g(k, nl)\widetilde{F_{n,j}}\left(\sqrt{\frac {N(k)X}{2N(a^2nl)}}\right),\end{aligned}$$ where $F_{n,j}(t)$ is given in . 
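The Poisson summation step over $\mathcal{O}_K = \mathbb{Z}[i]$ is an application of two-dimensional Poisson summation. Its classical prototype, the theta identity $\sum_{k \in \mathbb{Z}[i]} e^{-\pi N(k)t} = t^{-1}\sum_{k \in \mathbb{Z}[i]} e^{-\pi N(k)/t}$ (the Gaussian being its own Fourier transform in two dimensions), can be checked numerically; this is only a sanity check of the summation formula itself, not of the character-twisted version of Lemma \[Poissonsumformodd\]:

```python
import math

def theta(t, M=25):
    """Truncation of sum over k = a + b*i in Z[i] of exp(-pi * N(k) * t),
    where N(a + b*i) = a^2 + b^2; terms with |a|, |b| > M are negligible
    for the values of t used below."""
    return sum(math.exp(-math.pi * (a * a + b * b) * t)
               for a in range(-M, M + 1) for b in range(-M, M + 1))

t = 0.7
lhs = theta(t)
rhs = theta(1.0 / t) / t    # two-dimensional Poisson summation
```

The factor $X/(2N(a^2nl))$ and the transform $\widetilde{F_{n,j}}$ in the display above play the role of the $t^{-1}$ and the dual Gaussian here.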
We then deduce that $$\begin{aligned} M_N = \frac{X}{2} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\(a,2l)=1 \\ N(a) \leq Y}} \frac{\mu_{[i]}(a)}{N(a)^2} \sum_{\substack{ n \equiv 1 \bmod {(1+i)^3} \\ (n,2a)=1}} \frac{\sigma_{(\alpha_j)}(n)}{N(n)^{{\tfrac12}}} \sum_{k \in {\ensuremath{\mathbb Z}}[i]}(-1)^{N(k)} \frac{g(k, ln)}{N(ln)} \widetilde{F_{n,j}}\left(\sqrt{\frac {N(k)X}{2N(a^2nl)}}\right).\end{aligned}$$ Now we write $M_N = M_N(k=0) + M_N(k \neq 0)$, where $M_N(k=0)$ corresponds to the term with $k=0$. Computing $M_N$: the term $M_N(k=0)$ ------------------------------------ Note that by Lemma \[Gausssum\] we have $g(0,ln)=\varphi_{[i]}(ln)$ if $ln = \square$ (i.e. $n = l_1 \square$), and $g(0,ln)=0$ otherwise. Thus we get $$\begin{aligned} M_N(k=0)=& \frac {X}{2} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\(a,2l)=1 \\ N(a) \leq Y}} \frac {\mu_{[i]}(a)}{N(a)^2} \sum_{\substack{ n \equiv 1 \bmod {(1+i)^3} \\ (n,2a)=1}} \frac{\sigma_{(\alpha_j)}(l_1n^2)}{N(l_1n^2)^{\frac 12}}\frac{\varphi_{[i]}(ln)}{N(ln)}\widetilde{F_{n^2l_1,j}}\left(0\right).\end{aligned}$$ We then apply to deduce that $$\begin{aligned} M_N(k=0)=& \frac {\pi}{2} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\(a,2l)=1 \\ N(a) \leq Y}} \frac {\mu_{[i]}(a)}{N(a)^2} \times \frac {1}{2\pi i} \int\limits\limits_{(2)} g_{(\alpha_j)}(s) \widehat{F}(1+\frac {js}2)D_N(k=0;s) \frac {G_j(s)ds}{s},\end{aligned}$$ where $$D_N(k=0;s) = \sum_{\substack{ n \equiv 1 \bmod {(1+i)^3} \\ (n,a)=1}} \frac{\sigma_{(\alpha_j)}(l_1n^2)}{N(l_1n^2)^{\frac 12 +s}}\frac{\varphi_{[i]}(ln)}{N(ln)}.$$ \[lemma:MNk0\] For special choices of $G_j(s), 1 \leq j \leq 3$ described in Remark \[remark:zero\], we have for $j=1$, $l$ primary and square-free, $$\begin{aligned} \label{eq:right} M_{N}(k=0) + M_R(1) = \frac {\pi }{2\zeta_{K,2}(2)\sqrt{N(l)}} \frac{1}{2\pi i} \int_{(\varepsilon)} \widehat{F}(1 + \tfrac{s}{2})\frac{G_1(s)}{s} g_{\alpha}(s) A_{\alpha+s}(l) ds.\end{aligned}$$ For $j=2,3$ and a general primary $l$, 
$$\label{eq:MNk0andMR111} M_N(k=0) + M_R(1,\cdots, 1) = \pi A_{\alpha_1, \cdots, \alpha_j}(l) \frac{\widehat{F}(1)}{2 \zeta_{K,2}(2) \sqrt{N(l_1)}} + O(X^{(1-\frac j{4}) + \varepsilon} N(l)^{\varepsilon}).$$ The expression given in can be established by proceeding similarly to the treatment in Section 6.2 of [@Young1]. To prove , we use the expression by writing $c=ab$ there to see that $$M_R(1,\cdots,1) = \frac {\pi}{2} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\(a,2l)=1 \\ N(a) > Y}} \frac {\mu_{[i]}(a)}{N(a)^2} \times \frac {1}{2\pi i} \int\limits\limits_{(2)} g_{\alpha_1, \cdots, \alpha_j}(s) \widehat{F}(1+\frac {js}2)D_R(1, \cdots, 1;s) \frac {G_j(s)ds}{s},$$ where $$\begin{aligned} D_R(1,\cdots, 1;s) = & \frac{1}{\zeta_{K,2}(2)} \sum_{\substack{ b \equiv 1 \bmod {(1+i)^3} \\ (b,2l)=1}}\frac{1}{N(b)^2}\sum_{\substack{ r_i \equiv 1 \bmod {(1+i)^3} \\ 1 \leq i \leq 3 \\ r_i | c}} \prod^j_{i=1}\frac{\mu_{[i]}(r_i)}{N(r_i)^{{\tfrac{1}{2}}+ \alpha_i+s}} \\ & \times \sum_{\substack{ n\equiv 1 \bmod {(1+i)^3} \\ (n,2) = 1}} \frac{\sigma_{\alpha_1, \cdots, \alpha_j}((l\prod^j_{i=1}r_i)^* n^2)}{N((l \prod^j_{i=1}r_i)^* n^2)^{\frac12 + s}} \prod_{\substack{ \varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi | n l \prod^j_{i=1}r_i }} (1+N(\varpi)^{-1})^{-1}.\end{aligned}$$ Now the arguments given in [@Young1 Section 6.2], [@Young2 Section 6.1] and the proof of [@Sono Lemma 4.4] carry over to our case with simple modifications to show that we have $$ D_N(k=0;s) = D_R(1, \cdots, 1;s).$$ We then conclude that $$\begin{aligned} M_N(k=0) + M_R(1,\cdots,1) = \frac {\pi}{2} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\(a,2l)=1}} \frac {\mu_{[i]}(a)}{N(a)^2} \frac {1}{2\pi i} \int\limits\limits_{(2)} g_{\alpha_1, \cdots, \alpha_j}(s) \widehat{F}(1+\frac {js}2)D_N(k=0;s) \frac {G_j(s)ds}{s}.\end{aligned}$$ It follows from Lemma \[lemma:A\] and Remark \[remark:zero\] that we can move the contour of integration to $-1/2 + \varepsilon$, crossing only a pole at $s=0$ in the process. 
The residue at $s=0$ gives the desired main term, and the error term is easily estimated to be of the desired size. Computing $M_N$: the term $M_N(k \neq 0)$ ----------------------------------------- Using the expression given in for $\widetilde{F_{n,j}}$, we see that $$\begin{aligned} M_N(k \neq 0) =& \frac{X}{2} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\(a,2l)=1 \\ N(a) \leq Y}} \frac{\mu_{[i]}(a)}{N(a)^2} \sum_{\substack{ n \equiv 1 \bmod {(1+i)^3} \\ (n,2a)=1}} \frac{\sigma_{\alpha_1,\cdots,\alpha_j}(n)}{N(n)^{{\tfrac12}}} \sum_{\substack {k \in {\ensuremath{\mathbb Z}}[i] \\ k \neq 0}}(-1)^{N(k)} \frac{g(k, ln)}{N(ln)} \widetilde{F_{n,j}}\left(\sqrt{\frac {N(k)X}{2N(a^2nl)}}\right) \\ =& \frac{\pi}{2} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\(a,2l)=1 \\ N(a) \leq Y}} \frac{\mu_{[i]}(a)}{N(a)^2} \frac {1}{(2\pi i)^2}\int\limits\limits_{(c_u)} \int\limits\limits_{(c_s)}\widehat{F}(1+u) g_{\alpha_1, \cdots, \alpha_j}(s) \left(\pi \left(\sqrt{\frac {1}{2N(a^2l)}}\right) \right )^{-(js-2u)}\frac{\Gamma {\left(\frac{js-2u}{2}\right)}}{\Gamma{\left(\frac{2-js+2u}{2}\right)}} \\ & \times \sum_{\substack{ n \equiv 1 \bmod {(1+i)^3} \\ (n,a)=1}} \sum_{\substack {k \in {\ensuremath{\mathbb Z}}[i] \\ k \neq 0}}(-1)^{N(k)} \frac{\sigma_{\alpha_1,\cdots,\alpha_j}(n)}{N(n)^{{\tfrac12}+(1-j/2)s+u}}\frac{1}{N(k)^{js/2-u}} \frac{g(k, ln)}{N(ln)}du \frac {G_j(s)ds}{s},\end{aligned}$$ where we take $c_u>3$ and $c_s$ large enough that $jc_s-2c_u>2$. Now, we let $f(k)=g(k,n)/N(k)^s$ and we write $k = k_1k^2_2$ with $k_1$ square-free and $k_2 \in G$, where we recall here that $G$ is the set of generators of all ideals in $\mathcal{O}_K$ defined in Section \[sect: alybehv\]. 
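The decomposition $k = k_1k_2^2$ with $k_1$ square-free is the square-free-kernel factorization, which is unique once generators are fixed. A minimal sketch of its rational-integer analogue (illustrative only; in the paper $k_1, k_2$ range over $\mathbb{Z}[i]$ with $k_2 \in G$) is:

```python
def squarefree_square_split(n):
    """Write a positive integer n = k1 * k2**2 with k1 square-free,
    by repeatedly pulling out square prime factors p**2 via trial division."""
    k1, k2 = n, 1
    p = 2
    while p * p <= k1:
        while k1 % (p * p) == 0:
            k1 //= p * p
            k2 *= p
        p += 1
    return k1, k2

# e.g. 12 = 3 * 2**2 and 8 = 2 * 2**2
```

Sorting the sum over $k$ by the square-free part $k_1$ is what isolates, below, the two residue contributions coming from $k_1 = \pm i$.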
We break the sum over $k_1$ into two sums, depending on $(k_1, 1+i)=1$ or not, to get $$\begin{aligned} \sum_{\substack {k \in {\ensuremath{\mathbb Z}}[i] \\ k \neq 0}}(-1)^{N(k)} f(k)= & {\sideset{}{^*}\sum}_{\substack{k_1 \\ (k_1, 1+i) \neq 1}}\sum_{k_2}f(k_1k^2_2)+{\sideset{}{^*}\sum}_{\substack{k_1 \\ (k_1, 1+i) = 1}}\sum_{k_2}(-1)^{N(k_2)}f(k_1k^2_2) \\ = & {\sideset{}{^*}\sum}_{\substack{k_1 \\ (k_1, 1+i) \neq 1}}\sum_{k_2}f(k_1k^2_2)+{\sideset{}{^*}\sum}_{\substack{k_1 \\ (k_1, 1+i) = 1}} \left (2\sum_{k_2}f(2k_1k^2_2)-\sum_{k_2}f(k_1k^2_2) \right ),\end{aligned}$$ where we note that $(1+i)$ is the only prime ideal in $\mathcal{O}_K$ that lies above the integral ideal $(2) \in {\ensuremath{\mathbb Z}}$. Note that when $(n, 1+i)=1$, $g(k,n)=g(2k,n)$ by Lemma \[Gausssum\]. It follows that we have $f(2k_1k^2_2)=4^{-s}f(k_1k^2_2)$ so that $$\begin{aligned} \sum_{\substack {k \in {\ensuremath{\mathbb Z}}[i] \\ k \neq 0}}(-1)^{N(k)} f(k)= (2^{1-2s}-1){\sideset{}{^*}\sum}_{\substack{k_1 \\ (k_1, 1+i) = 1}} \sum_{k_2}f(k_1k^2_2) +{\sideset{}{^*}\sum}_{\substack{k_1 \\ (k_1, 1+i) \neq 1}}\sum_{k_2}f(k_1k^2_2).\end{aligned}$$ We apply the above expression to recast $M_N(k \neq 0)$ as $$\begin{aligned} M_N(k \neq 0) &=\frac {\pi}{2} \left ( \ {\sideset{}{^*}\sum}_{\substack{k_1 \\ (k_1, 1+i) = 1}} \frac {1}{N(k_1)^{js/2-u}} \mathcal{M}_{1}(s,u,k_1,l)+ {\sideset{}{^*}\sum}_{\substack{k_1 \\ (k_1, 1+i) \neq 1}}\frac {1}{N(k_1)^{js/2-u}}\mathcal{M}_{2}(s,u, k_1,l)\right ),\end{aligned}$$ where $$\begin{aligned} \label{eq:MNpreDirichlet} \begin{split} \mathcal{M}_{1}(s,u, k_1,l) =& \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\(a,2l)=1 \\ N(a) \leq Y}} \frac{\mu_{[i]}(a)}{N(a)^2} \frac {1}{(2\pi i)^2}\int\limits\limits_{(c_u)} \int\limits\limits_{(c_s)}\widehat{F}(1+u) g_{\alpha_1, \cdots, \alpha_j}(s) \left( \pi \left(\sqrt{\frac {1}{2N(a^2l)}}\right) \right )^{-(js-2u)}\\ &\times (2^{1-2(js/2-u)}-1) \frac{\Gamma 
{\left(\frac{js-2u}{2}\right)}}{\Gamma{\left(\frac{2-js+2u}{2}\right)}} J_{k_1,j}(js-2u, \frac 12+(1-\frac j2)s+u; l,a)du \frac {G_j(s)ds}{s}, \end{split}\end{aligned}$$ and $J_{k_1,j}(v,w;l, a)$ is defined in . The formula for $\mathcal{M}_{2}(s,u,k_1,l)$ is identical to except that the factor $2^{1-2(js/2-u)} - 1$ is omitted. We move the contours to $c_s = {\tfrac{1}{2}}+ \varepsilon$ and $c_u =\frac {j}{4}-1$, retaining the relation $jc_s-2c_u>2$. In view of Lemma \[lemma:Jprop\], $J_{k_1,j}$ remains analytic in the process. Again it follows from Lemma \[lemma:Jprop\] and Remark \[remark:zero\] that we cross poles of the Hecke $L$-functions at $u=-(1-\frac{j}{2})s - \alpha_i, 1 \leq i \leq j$ for $k_1 =\pm i$ only. For each $\alpha_i, 1 \leq i \leq j$, we write $M_N(k_1 = \pm i, \alpha_i)$ for the contribution to $M_N(k \neq 0)$ from the sum of the two residues corresponding to $k_1=\pm i$. Note further that by Lemma \[Gausssum\] we have $J_{i,j}(v,w;l, a)=J_{-i,j}(v,w;l, a)$, so that we shall write $J_{j}(v,w;l, a)$ for $J_{i,j}(v,w;l, a)$ or $J_{-i,j}(v,w;l, a)$ from now on. Using this notation, we have $$\begin{aligned} \label{eq:MNksquarealpha} \begin{split} M_N(k_1=\pm i, \alpha_i) = & \pi \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\(a,2l)=1 \\ N(a) \leq Y}} \frac{\mu_{[i]}(a)}{N(a)^2} \frac {1}{2\pi i} \int\limits_{(c_s)}\widehat{F}(1 -(1-\frac{j}{2})s - \alpha_i) g_{\alpha_1, \cdots, \alpha_j}(s) \left( \pi \left(\sqrt{\frac {1}{2N(a^2l)}}\right) \right )^{-2(s + \alpha_i)} \\ & \times (2^{1-2(s + \alpha_i)}-1)\frac{\Gamma (s+\alpha_i)}{\Gamma (1-s-\alpha_i)} \text{Res}_{w=\frac{1}{2} - \alpha_i} J_j(2s + 2\alpha_i, w;l,a) \frac {G_j(s)ds}{s} .
\end{split}\end{aligned}$$ On the new lines of integration, we argue as in Section 5.3 of [@Young2], using the following analogous estimate for the second moment, valid when $|\Re(\alpha)| \ll (\log{X})^{-1}$: $$\begin{aligned} {\sideset{}{^*}\sum}_{\substack{(d,2)=1 \\ N(d) \leq X }} \left| L_{2al}({\tfrac12}+\alpha, \chi_{(1+i)^5d}) \right|^2 \ll N(al)^{\varepsilon}\left( X(1+|\Im(\alpha)|) \right)^{1+\varepsilon},\end{aligned}$$ to see that the sum over $k_1$ converges absolutely on these lines of integration and that, with our choices of $c_u$ and $c_s$, the contribution to $M_N$ from these error terms is $$\ll \sum_{N(a) \leq Y} N(a)^{-2} N(l a^2)^{1 + \varepsilon} N(l)^{-{\tfrac{1}{2}}+ \varepsilon} X^{j/4 + \varepsilon} \ll N(l)^{1/2 + \varepsilon} Y X^{j/4 + \varepsilon}.$$ We then conclude from the above that $$\begin{aligned} \label{MNknot0} M_N(k \neq 0) = \sum^j_{i=1}M_N(k_1=\pm i, \alpha_i) +O(X^{j/4 + \varepsilon} Y N(l)^{1/2 + \varepsilon}).\end{aligned}$$ Computing $M_N$: gathering terms -------------------------------- In this section we show that for any fixed $i$, the term $M_{\pm N}(k_1=\pm i, \alpha_i)$ combines naturally with the term $M_{\pm R}(\epsilon_1, \cdots, \epsilon_j)$, where we have $\epsilon_i=-1$ and $\epsilon_k=1$ for all $k \neq i$. In preparation, we first establish an Archimedean-type identity. \[lemma:archcalc\] Let $u$ be a complex number. Then $$\begin{aligned} (2^{1-u} -1)\zeta_K(u) \frac {\Gamma(\frac {u}2)}{\Gamma(1-\frac {u}2)} = \frac {4}{\pi} \left ( \frac {\pi^2}{2} \right )^{u/2} \Gamma_{u/2}\zeta_{K,2}(1-u),\end{aligned}$$ where $\Gamma_u$ is defined by .
We use the functional equation for $\zeta_K(u)$ to see that $$\begin{aligned} \pi^{-u}\zeta_K(u) = \pi^{-(1-u)} \frac {\Gamma(1 -u)}{\Gamma(u)} \zeta_K(1-u).\end{aligned}$$ Next, we apply the formula (see [@GR Formula 3, Section 8.335]): $$\begin{aligned} \Gamma(\frac {u}2)\Gamma (\frac {1+u}{2})=\frac {\sqrt{\pi}}{2^{u-1}}\Gamma(u)\end{aligned}$$ to see that $$\begin{aligned} \frac {\Gamma(1 -u)}{\Gamma(u)}\frac {\Gamma(\frac {u}2)}{\Gamma(1-\frac {u}2)} =\frac {2^{1-u}}{2^{u}}\frac {\Gamma(\frac {1-u}{2})}{\Gamma(\frac {1+u}{2})}.\end{aligned}$$ Note also that $$\begin{aligned} (2^{1-u} -1) \zeta_K(1-u) = 2^{1-u} \zeta_{K,2}(1-u).\end{aligned}$$ From this we obtain that $$\begin{aligned} (2^{1-u} -1)\zeta_K(u) \frac {\Gamma(\frac {u}2)}{\Gamma(1-\frac {u}2)} =& \pi^{-(1-2u)}\frac {2^{2(1-u)}}{2^u} \frac {\Gamma(\frac {1-u}{2})}{\Gamma(\frac {1+u}{2})}\zeta_{K,2}(1-u) = \frac {4}{\pi} \left ( \frac {\pi^2}{2} \right )^{u/2} \Gamma_{u/2}\zeta_{K,2}(1-u),\end{aligned}$$ as desired. Now we are ready to prove the next result. \[lemma:MNk0andMR111\] For special choices of $G_j(s), 1 \leq j \leq 3$ described in Remark \[remark:zero\], we have $$\begin{aligned} \label{eq:MT1} & M_N(k=0) + M_{-N}(k_1=\pm i, \alpha) + M_{R}(-1) + M_{-R}(1) = \frac{\pi \widehat{F}(1)}{2 \zeta_{K,2}(2)} N(l)^{-1/2}A_{\alpha}(l), \\ \label{eq:MNksquareandMR-11} & M_N(k_1=\pm i, \alpha) + M_R(-1,1)+ M_{-N}(k_1=\pm i, \beta) + M_{-R}(-1,1)= \pi A_{- \alpha, \beta}(l) \Gamma_{\alpha} \frac{ \widehat{F}(1- \alpha )}{2 \zeta_{K,2}(2) \sqrt{N(l_1)}}, \\ \label{eq:MNksquareandMR-111} & M_N(k_1=\pm i, \alpha) + M_R(-1,1,1) = \pi A_{- \alpha, \beta,\gamma}(l) \Gamma_{\alpha} \frac{ \widehat{F}(1- \alpha )}{2 \zeta_{K,2}(2) \sqrt{N(l_1)}} + O(X^{3/4 + \varepsilon} N(l)^{\varepsilon}).\end{aligned}$$ The relations given in and are valid similarly if one replaces $\alpha$ by $\beta$ or by $\gamma$ (when $j=3$). 
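The proof of Lemma \[lemma:archcalc\] above rests on the quoted duplication formula $\Gamma(\frac u2)\Gamma(\frac {1+u}{2})=\frac {\sqrt{\pi}}{2^{u-1}}\Gamma(u)$. A quick numerical check of this classical identity (illustrative only; the sample points are arbitrary positive reals):

```python
import math

# Numerical check of the duplication formula used in the proof of
# Lemma [lemma:archcalc]:
#   Gamma(u/2) * Gamma((1+u)/2) = sqrt(pi) / 2^(u-1) * Gamma(u).
def dup_gap(u):
    lhs = math.gamma(u / 2) * math.gamma((1 + u) / 2)
    rhs = math.sqrt(math.pi) / 2 ** (u - 1) * math.gamma(u)
    return abs(lhs - rhs) / abs(rhs)  # relative discrepancy

gaps = [dup_gap(u) for u in (0.3, 1.0, 2.5, 7.0)]
print(gaps)  # all of order machine epsilon
```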
We begin our proof in general by applying Lemma \[lemma:archcalc\] with $u=2(s + \alpha_i)$ to , thus obtaining $$\begin{aligned} \begin{split} M_N(k_1=\pm i, \alpha_i) = & \frac {\pi}{2} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\(a,2l)=1 \\ N(a) \leq Y}} \frac{\mu_{[i]}(a)}{N(a)^2} \frac {1}{2\pi i} \int\limits_{(c_s)}\widehat{F}(1 -(1-\frac{j}{2})s - \alpha_i) g_{\alpha_1, \cdots, \alpha_j}(s) \left( \frac {\pi}{\sqrt{2}} \right )^{-2(s + \alpha_i)} \\ & \times 2N(l a^2)^{s + \alpha_i}\Gamma_{s + \alpha_i}\zeta_{K,2}(1-2\alpha_i - 2s) \text{Res}_{w={\tfrac{1}{2}}- \alpha_i} \frac {4J_j(2s + 2\alpha_i, w;l,a)} {\pi\zeta_K(2s + 2\alpha_i)} \frac {G_j(s)ds}{s} . \end{split}\end{aligned}$$ Recall that $c_s = {\tfrac{1}{2}}+ \varepsilon$ and the residue of $\zeta_K(s)$ at $s=1$ equals $\pi/4$. We replace the residue of $\frac 4{\pi}J_j(2s + 2\alpha_i, w)$ at $w = {\tfrac{1}{2}}- \alpha_i$ by the value of $J_j(2s + 2 \alpha_i, w)/\zeta_K({\tfrac{1}{2}}+ w + \alpha_i)$ at $w = {\tfrac{1}{2}}- \alpha_i$ to see that $$\begin{aligned} M_N(k_1=\pm i, \alpha_i) =& \frac {\pi}2\sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\(a,2l)=1 \\ N(a) \leq Y}} \frac{\mu_{[i]}(a)}{N(a)^2} \frac {1}{2\pi i} \int\limits_{(c_s)}\widehat{F}(1 -(1-\frac{j}{2})s - \alpha_i) g_{\alpha_1, \cdots, \alpha_j}(s) \frac {G_j(s)}{s} \\ & \times \Gamma_{s + \alpha_i} N(a)^{2 \alpha_i + 2s} D_N(k_1=\pm i, \alpha_i; s) ds,\end{aligned}$$ where $$\begin{aligned} D_N(k_1=\pm i, \alpha_i; s) = 2 N(l)^{s + \alpha_i} \zeta_{K,2}(1-2\alpha_i - 2s) \frac{J_j(2s + 2\alpha_i, w)}{ \zeta_K(2s + 2\alpha_i) \zeta_K({\tfrac{1}{2}}+ w + \alpha_i)} \Big|_{w={\tfrac{1}{2}}- \alpha_i}.\end{aligned}$$ Next, by writing $c=ab$ and setting $\epsilon_i=-1$ and $\epsilon_k=1$ for all $k \neq i$ in , we deduce that $$\begin{aligned} M_R(\epsilon_1, \cdots, \epsilon_j) =& \frac{\pi }{2} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ (a,2l) = 1\\ N(a) > Y}}\frac{\mu_{[i]}(a)}{N(a)^2} \frac{1}{2\pi i}
\int\limits_{(\varepsilon)} \frac{G_j(s)}{s} g_{\alpha_1, \cdots,\alpha_j}(s)\widehat{F}(1 -(1-\frac{j}{2})s - \alpha_i) \\ & \times \Gamma_{\alpha_i + s} N(a)^{2 \alpha_i + 2s} D_R(\epsilon_1, \cdots, \epsilon_j;s) ds,\end{aligned}$$ where $$\begin{aligned} D_R(\epsilon_1, \cdots, \epsilon_j;s) =& \frac{1}{ \zeta_{K,2}(2)} \sum_{\substack{ b \equiv 1 \bmod {(1+i)^3} \\ (b,2l)=1}} \frac{1}{N(b)^{2(1-\alpha_i-s)}} \\ & \times \sum_{\substack{ r_i \equiv 1 \bmod {(1+i)^3} \\ 1 \leq i \leq j \\ r_i | ab}} \prod^j_{i=1}\frac{\mu_{[i]}(r_i)}{N(r_i)^{{\tfrac{1}{2}}+ \alpha_i+s}}\frac{1}{\sqrt{N((l\prod^j_{i=1}r_i)^*)}} A_{\epsilon_1(\alpha_1+ s), \cdots, \epsilon_j(\alpha_j+ s)}(l\prod^j_{i=1}r_i) .\end{aligned}$$ Using arguments similar to those used in the proof of [@Young2 Lemma 6.2], we see that $$ D_N(k_1=\pm i, \alpha_i; s) = D_R(\epsilon_1, \cdots, \epsilon_j;s).$$ It follows from this that we have $$\begin{aligned} M_N(k_1=\pm i, \alpha_i) + M_R(\epsilon_1, \cdots, \epsilon_j) =& \frac{\pi }{2} \sum_{\substack{ a \equiv 1 \bmod {(1+i)^3} \\ (a,2l) = 1}}\frac{\mu_{[i]}(a)}{N(a)^2} \frac{1}{2\pi i} \int\limits_{(\varepsilon)} \frac{G_j(s)}{s} g_{\alpha_1, \cdots,\alpha_j}(s)\widehat{F}(1 -(1-\frac{j}{2})s - \alpha_i) \\ & \times \Gamma_{\alpha_i + s} N(a)^{2 \alpha_i + 2s} D_R(\epsilon_1, \cdots, \epsilon_j;s) ds.\end{aligned}$$ Grouping $ab$ into a single variable and applying Möbius inversion shows that only the term with $ab=1$ survives, which implies that $r_1 = \cdots = r_j = 1$. Thus $$\begin{aligned} \label{MN} \begin{split} &M_N(k_1=\pm i, \alpha_i) + M_R(\epsilon_1, \cdots, \epsilon_j) \\ = & \frac{\pi}{2 \zeta_{K,2}(2) \sqrt{N(l_1)}} \frac{1}{2\pi i} \int\limits_{(\varepsilon)} \frac{G_j(s)}{s} g_{\alpha_1, \cdots,\alpha_j}(s)\widehat{F}(1 -(1-\frac{j}{2})s - \alpha_i)\Gamma_{\alpha_i + s} A_{\epsilon_1(\alpha_1+ s), \cdots, \epsilon_j(\alpha_j+ s)}(l) ds.
\end{split}\end{aligned}$$ When $j=3$, we can move the contour of integration to $-1/2 + \varepsilon$, crossing a pole at $s=0$ only, in view of Lemma \[lemma:A\] and Remark \[remark:zero\]. The residue at $s=0$ gives the main term in , and the error term is easily seen to be of the desired size. For $j=1,2$, we further obtain an expression for $M_{-N}(k_1=\pm i, \alpha_i) + M_{-R}(\epsilon_1, \cdots, \epsilon_j)$ from the above expression using Remark \[remark:M1toM2\], where $\epsilon_1, \cdots, \epsilon_j$ are the same as those in . Using the relation that $\widehat{F_{-(\alpha_j)}}(w)=\widehat{F}(w-\sum^j_{i=1}\alpha_i)$, we see that $$\begin{aligned} & M_{-N}(k_1=\pm i, \alpha_i) + M_{-R}(\epsilon_1, \cdots, \epsilon_j) \\ = & \frac{\pi}{2 \zeta_{K,2}(2) \sqrt{N(l_1)}} \frac{1}{2\pi i} \int\limits\limits_{(\varepsilon)} \frac{G_j(s)}{s} g_{-\alpha_1, \cdots,-\alpha_j}(s)\widehat{F}(1 -(1-\frac{j}{2})s +\alpha_i-\sum^j_{i=1}\alpha_i)\Gamma_{-\alpha_i + s} \Gamma_{(\alpha_j)} A_{\epsilon_1(-\alpha_1+ s), \cdots, \epsilon_j(-\alpha_j+ s)}(l) ds.\end{aligned}$$ We apply a change of variable $s \rightarrow -s$ to recast the above as $$\begin{aligned} \label{MNminus} \begin{split} & M_{-N}(k_1=\pm i, \alpha_i) + M_{-R}(\epsilon_1, \cdots, \epsilon_j) \\ =&-\frac{\pi}{2 \zeta_{K,2}(2) \sqrt{N(l_1)}} \frac{1}{2\pi i} \int\limits\limits_{(-\varepsilon)} \frac{G_j(s)}{s} g_{-\alpha_1, \cdots,-\alpha_j}(-s)\widehat{F}(1 +(1-\frac{j}{2})s +\alpha_i-\sum^j_{i=1}\alpha_i)\Gamma_{-\alpha_i - s}\Gamma_{(\alpha_j)} A_{\epsilon_1(-\alpha_1- s), \cdots, \epsilon_j(-\alpha_j- s)}(l) ds. 
\end{split}\end{aligned}$$ We now deduce from the identity $$ g_{-\alpha}(-s)\Gamma_{-\alpha-s} \Gamma_{\alpha} = g_{\alpha}(s)$$ and the identity $\Gamma_{\alpha}=\Gamma^{-1}_{-\alpha}$ that $$g_{-\alpha_1, \cdots,-\alpha_j}(-s)\Gamma_{-\alpha_i - s}\Gamma_{(\alpha_j)} =g_{\alpha_1, \cdots,\alpha_j}(s)\frac {\Gamma_{(\alpha_j+s)}}{\Gamma_{\alpha_i + s}}.$$ When $j=2$, the above allows us to see that the two integrands on the right-hand sides of and (with $\alpha_i$ replaced by $\alpha_{j-i+1}$) are negatives of each other; hence the sum of the two integrals equals the residue at $s=0$ of the integrand in , thus proving . Applying the above discussion similarly to the case $j=1$, taking note of , allows us to establish as well. Completion of the proof ----------------------- We are now able to complete the proof of Theorem \[theo:recursive2\]. We first consider the case $j=1$. In this case, we note that it follows from Remark \[remark:M1toM2\] that we also have $$\label{eq:MT2} M_{-N}(k=0) + M_{N}(k_1=\pm i, \alpha) + M_{R}(-1) + M_{R}(1) = \Gamma_{\alpha}\frac{\pi \widehat{F}(1-\alpha)}{2 \zeta_{K,2}(2)\sqrt{N(l)}}A_{-\alpha}(l).$$ Combining Lemma \[lemma:MRresult1\], and taking note of Remark \[remark:M1toM2\], we get $$\begin{aligned} \label{thm:combine} \begin{split} M_{\alpha}(l) =& M_N(k=0) + M_{-N}(k = 0) + M_N(k_1 = \pm i, \alpha) + M_{-N}(k_1 = \pm i, \alpha)\\ & + M_{R}(1) + M_{R}(-1) + M_{-R}(1) + M_{-R}(-1) + O{\left(}\frac{X^{f + \varepsilon}}{Y^{2f-1}} N(l)^{1/2 + \varepsilon} + X^{1/4 + \varepsilon} Y N(l)^{1/2 + \varepsilon} +X^{1/2 + \varepsilon} {\right)}. \end{split}\end{aligned}$$ Now applying and in the above expression and setting $Y = X^{\frac{1}{4}}$ in allows us to see that the statement of Theorem \[theo:recursive2\] is valid for $j=1$.
For the case $j=2$, we combine Lemma \[lemma:MRresult1\] and Remark \[remark:M1toM2\] to obtain $$\begin{aligned} \label{thm:combine2} \begin{split} M_{\alpha, \beta}(l) =& M_N(k=0) + M_R(1, 1)+ M_{-N}(k=0) + M_{-R}(1, 1) \\ & +M_N(k_1=\pm i, \alpha) + M_R(-1,1)+ M_{-N}(k_1=\pm i, \beta) + M_{-R}(-1,1)\\ & +M_N(k_1=\pm i, \beta) + M_R(1,-1)+ M_{-N}(k_1=\pm i, \alpha) + M_{-R}(1,-1) \\ & + O{\left(}\frac{X^{f + \varepsilon}}{Y^{2f-1}} N(l)^{1/2 + \varepsilon} + X^{1/2 + \varepsilon} Y N(l)^{1/2 + \varepsilon} {\right)}. \end{split}\end{aligned}$$ Now applying and in the above expression as well as Remark \[remark:M1toM2\] and setting $Y = X^{\frac{2f-1}{4f}}$ in allows us to see that the statement of Theorem \[theo:recursive2\] is valid for $j=2$. For the case $j=3$, we combine Lemma \[lemma:MRresult1\] and to see that $$\begin{aligned} \begin{split} M_1 =& M_N(k=0) + M_N(k_1=\pm i, \alpha) + M_N(k_1=\pm i, \beta) + M_N(k_1=\pm i, \gamma) \\ & +M_R(1,1,1) + M_R(1,1,-1) + M_R(1,-1, 1)+M_R(-1,1,1) \\ &+ O{\left(}\frac{X^{f + \varepsilon}}{Y^{2f-1}} N(l)^{1/2 + \varepsilon} + X^{3/4 + \varepsilon} Y N(l)^{1/2 + \varepsilon} {\right)}. \end{split}\end{aligned}$$ We now apply and to recast the above as $$\begin{aligned} \label{thm:M1} \begin{split} M_1 =& \pi A_{\alpha, \beta, \gamma}(l) \frac{\widehat{F}(1)}{2 \zeta_{K,2}(2) \sqrt{N(l_1)}} \\ & + \pi A_{- \alpha, \beta,\gamma}(l) \Gamma_{\alpha} \frac{ \widehat{F}(1- \alpha )}{2 \zeta_{K,2}(2) \sqrt{N(l_1)}} + \pi A_{\alpha, -\beta,\gamma}(l) \Gamma_{\beta} \frac{ \widehat{F}(1- \beta )}{2 \zeta_{K,2}(2) \sqrt{N(l_1)}}+ \pi A_{ \alpha, \beta,-\gamma}(l) \Gamma_{\gamma} \frac{ \widehat{F}(1- \gamma )}{2 \zeta_{K,2}(2) \sqrt{N(l_1)}}\\ &+ O{\left(}\frac{X^{f + \varepsilon}}{Y^{2f-1}} N(l)^{1/2 + \varepsilon} + X^{3/4 + \varepsilon} Y N(l)^{1/2 + \varepsilon} {\right)}.
\end{split}\end{aligned}$$ We then obtain an asymptotic for $M_{-1}$ using Remark \[remark:M1toM2\], which gives the remaining four main terms in plus the same error as given in . We now readily deduce the assertion of Theorem \[theo:recursive2\] for $j=3$ by setting $Y = X^{\frac{f-\frac34}{2f}}$. This completes the proof of Theorem \[theo:recursive2\]. Proof of Theorem \[thm: nonvanishing\] {#sect: nonvanishing} ====================================== We consider the following mollifier $$\begin{aligned} \label{Md} M(d) =\sum_{\substack{ l \equiv 1 \bmod {(1+i)^3} \\ N(l) \leq M}} \lambda(l)\sqrt{N(l)}\chi_{(1+i)^5d}(l).\end{aligned}$$ Our goal is to choose $\lambda(l)$ optimally such that the following mollified first and second moments (corresponding to $j=1,2$, respectively) are comparable: $$\begin{aligned} S(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^jM(d)^j; \Phi)=\frac 1X \sum_{\substack{ d \in \mathcal{O}_K \\ (d, 2)=1}}\mu^2_{[i]}(d)L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^jM(d)^j\Phi(\frac {N(d)}{X}).\end{aligned}$$ Here we set $M = (\sqrt{X})^{\theta}$ for some $\theta < 1 - \varepsilon$ and $\Phi$ is given in Theorem \[theo:mainthm\] such that we take $\Phi$ to be an approximation to the characteristic function of $(1, 2)$ so that $\widehat{\Phi}(1) \sim 1$. To specify $\lambda(l)$, we first make a linear change of variables to define for primary $\gamma$, $$\begin{aligned} \xi(\gamma)=\sum_{\substack{a \equiv 1 \bmod {(1+i)^3}}}\frac {\lambda(a\gamma)}{h(a)}\frac {N(a) d_{[i]}(a)}{\sigma_{[i]}(a)}.\end{aligned}$$ Note here that we can recover $\lambda$ from $\xi$ by the following relation: $$\begin{aligned} \label{eq:lambda} \lambda(l)=\sum_{\substack{a \equiv 1 \bmod {(1+i)^3}}}\frac {\mu_{[i]}(a)}{h(a)}\frac {N(a) d_{[i]}(a)}{\sigma_{[i]}(a)}\xi(la).\end{aligned}$$ Thus, in order to determine $\lambda(l)$, it suffices to define $\xi(\gamma)$. We shall assume that $\xi(\gamma)$ is supported on primary square-free elements $\gamma $ satisfying $N(\gamma) \leq M$. 
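The linear change of variables defining $\xi$ and its Möbius-type inverse can be illustrated in a toy setting over the rational integers, with a completely multiplicative weight $c(a)$ standing in for the arithmetic factor $N(a)d_{[i]}(a)/(h(a)\sigma_{[i]}(a))$; for such $c$ the inversion follows from $\sum_{ab=m}\mu(a)=0$ for $m>1$. A sketch under these simplifying assumptions (the weight $c(a)=1/a$ and the support bound $M$ are hypothetical choices):

```python
# Toy model over the rational integers of the xi <-> lambda inversion:
#   xi(g)  = sum_a c(a) * lam(a*g),
#   lam(l) = sum_a mu(a) * c(a) * xi(l*a),
# valid for completely multiplicative c.  lam is an arbitrary function
# supported on square-free l <= M (hypothetical data, not from the text).
M = 30

def mu(n):
    # Moebius function by trial factorization.
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            res = -res
        d += 1
    if n > 1:
        res = -res
    return res

def c(a):
    return 1.0 / a  # completely multiplicative stand-in weight

lam = {l: 1.0 / l for l in range(1, M + 1) if mu(l) != 0}

def xi(g):
    return sum(c(a) * lam[a * g] for a in range(1, M // g + 1) if a * g in lam)

def lam_recovered(l):
    return sum(mu(a) * c(a) * xi(l * a) for a in range(1, M // l + 1))

errs = [abs(lam_recovered(l) - lam[l]) for l in lam]
print(max(errs))  # recovery is exact up to rounding
```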
We then note that implies that $\lambda(l)$ is also supported on primary square-free elements $l$ satisfying $N(l) \leq M$. We shall further require that $$\begin{aligned} \label{eq:xibound} |\xi(\gamma)|=\frac 1{N(\gamma)\log^2 M}\prod_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi | \gamma}}{\left(}1+O{\left(}\frac {1}{N(\varpi)} {\right)}{\right)}.\end{aligned}$$ It is then easy to deduce from this and that $\lambda(l) \ll N(l)^{-1+\varepsilon}$. First mollified moment ---------------------- Our evaluation of the first mollified moment requires us to evaluate $M_0(l)$ explicitly, where $M_0(l)$ is defined in . This can be done directly from Theorem \[thm: Malphal1\] by considering the limit as $\alpha \rightarrow 0$ of the asymptotic expression given in for $M_{\alpha}(l)$ (with $f=1/2$ there). In this way, we obtain the following result, analogous to [@sound1 Proposition 1.2]: \[theo:1stmoment\] Let $\Phi$ be given in Theorem \[theo:mainthm\]. For any primary square-free $l \in \mathcal{O}_K$ and any $\varepsilon>0$, we have $$\begin{aligned} {\sideset{}{^*}\sum}_{(d,2)=1} L({\tfrac12}, \chi_{(1+i)^5d}) \Phi{\left(\frac{N(d)}{X}\right)}\chi_{(1+i)^5d}(l) =& \frac {\pi^2}{4} \frac{ \widehat{\Phi}(1)X}{\zeta_{K}(2)\sqrt{N(l)}}\frac {C}{g(l)}\left (\log\frac {\sqrt{X}}{N(l)}+C_2+\sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi | l}} \frac {C_2(\varpi)\log N(\varpi)}{N(\varpi)} \right ) \\ &+O(N(l)^{1/2 + \varepsilon} X^{\frac 12 + \varepsilon}),\end{aligned}$$ where $$\begin{aligned} C=\frac 1{3}\prod_{\substack{\varpi \equiv 1 \bmod {(1+i)^3}}}\left (1-\frac {1}{N(\varpi)(N(\varpi)+1)} \right ), \quad g(l)=\prod_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi | l}}{\left(}\frac {N(\varpi)+1}{N(\varpi)}{\right)}\left (1-\frac {1}{N(\varpi)(N(\varpi)+1)} \right ).\end{aligned}$$ Moreover, $C_2$ is a constant depending only on $\Phi$ and $C_2(\varpi) \ll 1$ for all $\varpi$.
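The Euler product defining $C$ above converges quickly and can be evaluated numerically. The sketch below assumes the standard description of primary primes of $\mathbb{Z}[i]$: each split rational prime $p \equiv 1 \bmod 4$ contributes two primary primes of norm $p$, each inert $p \equiv 3 \bmod 4$ contributes one primary prime of norm $p^2$, and the ramified prime $1+i$ is excluded. Illustrative only:

```python
def rational_primes(limit):
    # Sieve of Eratosthenes.
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, limit + 1, p):
                sieve[q] = False
    return [p for p in range(2, limit + 1) if sieve[p]]

def C_constant(limit):
    # C = (1/3) * prod over primary Gaussian primes pi of
    #     (1 - 1/(N(pi) * (N(pi) + 1))).
    # Assumed splitting: p = 1 mod 4 gives two primary primes of norm p;
    # p = 3 mod 4 gives one primary prime of norm p^2; p = 2 is excluded.
    prod = 1.0
    for p in rational_primes(limit):
        if p % 4 == 1:
            prod *= (1.0 - 1.0 / (p * (p + 1))) ** 2
        elif p % 4 == 3:
            q = p * p
            prod *= 1.0 - 1.0 / (q * (q + 1))
    return prod / 3.0

print(C_constant(10 ** 4), C_constant(10 ** 5))
```

The two cutoffs agree to several decimal places, reflecting the rapid convergence of the product.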
We then apply and Theorem \[theo:1stmoment\] to see that $$\begin{aligned} S(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})M(d); \Phi) =& \frac {\pi^2}{4} \frac{ C\widehat{\Phi}(1)}{\zeta_{K}(2)}\sum_{\substack{l \equiv 1 \bmod {(1+i)^3} \\ N(l) \leq M}}\frac {\lambda(l)}{g(l)}\left (\log\frac {\sqrt{X}}{N(l)}+C_2+\sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi | l}} \frac {C_2(\varpi)}{N(\varpi)\log N(\varpi)} \right ) \\ &+O(X^{-\varepsilon}).\end{aligned}$$ We now define a multiplicative function $g_1(\gamma)$ on primary, square-free $\gamma$ such that for any primary prime $\varpi$, we have $$\begin{aligned} g_1(\varpi) =\frac 1{g(\varpi)}-\frac {2N(\varpi)}{h(\varpi)(N(\varpi)+1)}.\end{aligned}$$ We note that $g_1(\varpi)=-1+O(1/N(\varpi))$. Using to write $\lambda$ in terms of $\xi$, we derive that $$\begin{aligned} \sum_{\substack{l \equiv 1 \bmod {(1+i)^3} \\ N(l) \leq M}}\frac {\lambda(l)}{g(l)}\log\frac {\sqrt{X}}{N(l)}=& \sum_{\substack{\gamma \equiv 1 \bmod {(1+i)^3}}}g_1(\gamma)\left (\log (\sqrt{X}N(\gamma))+O(\sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi | \gamma}} \frac {\log N(\varpi)}{N(\varpi)}) \right ) \\ =& \sum_{\substack{\gamma \equiv 1 \bmod {(1+i)^3}}}g_1(\gamma)\left (\log (\sqrt{X}N(\gamma))\right )+O\left(\frac {1}{\log X} \right ),\end{aligned}$$ where the last estimation above follows from . 
Similar arguments imply that $$\begin{aligned} \sum_{\substack{l \equiv 1 \bmod {(1+i)^3} \\ N(l) \leq M}}\frac {\lambda(l)}{g(l)} \left ( C_2+\sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi | l}} \frac {C_2(\varpi)}{N(\varpi)\log N(\varpi)} \right ) \ll \frac {1}{\log X}.\end{aligned}$$ We then conclude from the above discussions that the first mollified moment is $$\begin{aligned} \label{1stmollifiedmoment} S(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})M(d); \Phi)= \frac {\pi^2}{4} \frac{ C\widehat{\Phi}(1)}{\zeta_{K}(2)}\sum_{\substack{\gamma \equiv 1 \bmod {(1+i)^3}}}g_1(\gamma)\left (\log \left(\sqrt{X}N(\gamma) \right )\right )+O\left(\frac {1}{\log X} \right ).\end{aligned}$$ Second mollified moment ----------------------- To evaluate the second mollified moment, we shall not apply an approach similar to our treatment for the first mollified moment since the error term in the asymptotic expression for $M_{\alpha, \beta}(l)$ given in Theorem \[thm: Malphal1\] is too large in the $l$ aspect (of size $N(l)^{1/2 + \varepsilon}$). This would not allow us to take $\theta$ to be close to $1$. Rather, we follow the approach of Soundararajan in [@sound1] here. 
Let $Y$ be a parameter and write $\mu_{[i]}^2(d)=M_Y(d)+R_Y(d)$, where $$M_Y(d)=\sum_{\substack {l^2|d \\ N(l) \leq Y}}\mu_{[i]}(l) \; \quad \mbox{and} \; \quad R_Y(d)=\sum_{\substack {l^2|d \\ N(l) > Y}}\mu_{[i]}(l).$$ We then have $$\begin{aligned} S(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2; \Phi)=S_M(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2; \Phi)+S_R(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2; \Phi),\end{aligned}$$ where $$\begin{aligned} S_M(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2; \Phi)=& \frac 1X \sum_{\substack{ d \in \mathcal{O}_K \\ (d, 2)=1}}M_Y(d)L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2\Phi(\frac {N(d)}{X}), \\ S_R(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2; \Phi)=& \frac 1X \sum_{\substack{ d \in \mathcal{O}_K \\ (d, 2)=1}}R_Y(d)L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2\Phi(\frac {N(d)}{X}).\end{aligned}$$ Similarly to [@sound1 Proposition 1.1], we can show that when $\lambda(l)\ll N(l)^{-1+\varepsilon}$, $$\begin{aligned} \label{SL} S_R(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2; \Phi) \ll \frac {X^{\varepsilon}}{Y}+\frac {M}{X^{1/2-\varepsilon}}.\end{aligned}$$ To evaluate $S_M(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2; \Phi)$, we now introduce two pieces of notation. First, we denote for any integer $j \geq 0$, $$\begin{aligned} \Phi_{(j)}=\max_{0 \leq i \leq j}\int\limits_{{\ensuremath{\mathbb R}}}|\Phi^{(i)}(t)|dt.\end{aligned}$$ Secondly, for all integers $j > 0$, we define $\Lambda_j(n)$ to be the function defined on integral ideals of $K$ which equals the coefficient of $N(n)^{-s}$ in the Dirichlet series expansion of $(-1)^{j}\zeta^{(j)}_K(s)/\zeta_K(s)$. In particular, $\Lambda_1(n)$ is the usual von Mangoldt function $\Lambda(n)$ on $K$. We note that $\Lambda_j(n)$ is supported on elements $n$ in $\mathcal{O}_{K}$ such that $((n))$ has at most $j$ distinct prime ideal factors, and $\Lambda_j (n) \ll_j (\log N(n))^j$. Now, we are ready to state our result on $S_M(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2; \Phi)$.
We omit its proof here since it is similar to that of [@sound1 Proposition 1.2]. We only point out here that the triple pole of $\zeta_K(1+2s)^3$ at $s=0$ contributes a factor of $(\pi/4)^3$. One can also derive the main term given in below from $M_{\alpha, \beta}(l)$ defined in using Lemma 2.3 in [@Sono]. \[theo:2ndmoment\] Let $\Phi$ be given in Theorem \[theo:mainthm\]. For any primary $l \in \mathcal{O}_K$ with $l=l_1l^2_2$, where $l_1$ is primary and square-free, we have for any $\varepsilon>0$, $$\begin{aligned} \label{eq:2ndmoment} \begin{split} & S_M(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2; \Phi) \\ =& \frac {\pi^4}{4^3} \frac{D \widehat{\Phi}(1)}{36\zeta_{K}(2)}\frac {d_{[i]}(l_1)}{\sqrt{N(l)}}\frac {N(l_1)}{\sigma_{[i]}(l_1)h(l)}\Big (\log^3\frac {X}{N(l_1)}-3\sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi | l}} \log^2 N(\varpi)\log \frac {X}{N(l_1)}+O(l) \Big ) \\ &+O \left(\Phi_{(2)}\Phi^{\epsilon}_{(3)}\frac {N(l)^{{\tfrac{1}{2}}+\varepsilon} Y^{1+\varepsilon}}{X^{{\tfrac{1}{2}}+\varepsilon}}+ \frac {N(l)^{\varepsilon} X^{\varepsilon}}{\sqrt{N(l_1)}Y}+\frac {N(l)^{\varepsilon} X^{\varepsilon}}{(N(l_1)X)^{1/4}} \right ), \end{split}\end{aligned}$$ where $h$ is the multiplicative function defined on primary prime powers by $$\begin{aligned} h(\varpi^k)=1+\frac 1{N(\varpi)}+\frac 1{N(\varpi)^2}-\frac 4{N(\varpi)(N(\varpi)+1)}, \quad (k \geq 1)\end{aligned}$$ and $$\begin{aligned} D=\frac 18\prod_{\substack{\varpi \equiv 1 \bmod {(1+i)^3}}}\left (1-\frac 1{N(\varpi)} \right )h(\varpi).\end{aligned}$$ Also, $$\begin{aligned} O(l)=& \sum^3_{j,k=0}\sum_{\substack{m \equiv 1 \bmod {(1+i)^3}\\ m |l_1}}\sum_{\substack{n \equiv 1 \bmod {(1+i)^3}\\ n |l_1}} \frac {\Lambda_j(m)}{N(m)}\frac {\Lambda_k(n)}{N(n)}D(m,n)Q_{j,k}(\log \frac {X}{N(l_1)}) \\ & -3(A+B\frac {\widehat{\Phi}'(1)}{\widehat{\Phi}(1)})\sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi | l}} \log^2 N(\varpi),\end{aligned}$$ where $A$ and $B$ are absolute constants and $D(m, n)
\ll 1$ uniformly for all $m$ and $n$. The $Q_{j,k}$ are polynomials of degree $\leq 2$ whose coefficients involve only absolute constants and linear combinations of $\frac {\widehat{\Phi}^{(j)}(1)}{\widehat{\Phi}(1)}$ for $1 \leq j \leq 3$. Combining the above and setting $Y=X^{\varepsilon}$, we see that $$\begin{aligned} & S(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2; \Phi) \\ = & \frac {\pi^4}{4^3} \frac{ D\widehat{\Phi}(1)}{36\zeta_{K}(2)}\sum_{\substack{l \equiv 1 \bmod {(1+i)^3}}}\left ( \sum_{\substack{r,s \equiv 1 \bmod {(1+i)^3} \\ rs=l}} \lambda(r)\lambda(s)\right )\frac {\sqrt{N(l)}}{h(l)}\frac {d_{[i]}(l_1)}{\sqrt{N(l_1)}} \frac {N(l_1)}{\sigma_{[i]}(l_1)} \\ & \times \left ( \log^3\frac {X}{N(l_1)}-3\sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi |l_1}}\log^2 N(\varpi)\log \frac {X}{N(l_1)}+O(l) \right )+O(X^{-\varepsilon}).\end{aligned}$$ We write $r = a\alpha$ and $s = b\alpha$ where $a$ and $b$ are co-prime primary elements. As $\lambda$ is assumed to be supported on square-free elements, we deduce that $\alpha = l_2$ and $l_1 = ab$.
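Over the rational integers, the step above amounts to factoring out $\alpha=\gcd(r,s)$: for square-free $r,s$ one automatically gets $(a,b)=1$, $ab$ square-free, and $rs=(ab)\alpha^2$. A toy check of this decomposition (illustrative only; the text of course works with primary elements of $\mathcal{O}_K$):

```python
from math import gcd

# For square-free r, s over Z, write r = a*alpha, s = b*alpha with
# alpha = gcd(r, s); then (a, b) = 1 and r*s = l1 * l2^2 with
# l1 = a*b square-free and l2 = alpha.
def squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

sf = [n for n in range(1, 60) if squarefree(n)]
checks = []
for r in sf:
    for s in sf:
        alpha = gcd(r, s)
        a, b = r // alpha, s // alpha
        checks.append(
            gcd(a, b) == 1
            and squarefree(a * b)
            and r * s == (a * b) * alpha ** 2
        )
print(all(checks))  # True
```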
Thus we obtain from the above that $$\begin{aligned} & S(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2; \Phi) \\ = & \frac {\pi^4}{4^3} \frac{ D\widehat{\Phi}(1)}{36\zeta_{K}(2)}\sum_{\substack{\alpha \equiv 1 \bmod {(1+i)^3}}} \frac {N(\alpha)}{h(\alpha)}\sum_{\substack{a,b \equiv 1 \bmod {(1+i)^3} \\ (a,b)=1}}\frac {\lambda(a\alpha)}{h(a)}\frac {\lambda(b\alpha)}{h(b)}\frac {ad_{[i]}(a)}{\sigma_{[i]}(a)} \frac {b d_{[i]}(b)}{\sigma_{[i]}(b)} \\ & \times \left ( \log^3\frac {X}{N(ab)}-3\sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi |ab}}\log^2 N(\varpi)\log \frac {X}{N(ab)}+O(\alpha^2 ab) \right )+O(X^{-\varepsilon}).\end{aligned}$$ Using the Möbius function to remove the condition that $(a,b)=1$, we see that $$\begin{aligned} \label{secmoment0} \begin{split} & S(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2; \Phi) \\ = & \frac {\pi^4}{4^3} \frac{ D\widehat{\Phi}(1)}{36\zeta_{K}(2)}\sum_{\substack{\alpha \equiv 1 \bmod {(1+i)^3}}} \frac {N(\alpha)}{h(\alpha)}\sum_{\substack{\beta \equiv 1 \bmod {(1+i)^3}}}\frac {\mu_{[i]}(\beta)}{h(\beta)^2} \frac {\beta^2 d_{[i]}(\beta)^2}{\sigma_{[i]}(\beta)^2}\sum_{\substack{a,b \equiv 1 \bmod {(1+i)^3} }}\frac {\lambda(a\alpha\beta)}{h(a)}\frac {\lambda(b\alpha\beta)}{h(b)}\frac {ad_{[i]}(a)}{\sigma_{[i]}(a)} \frac {b d_{[i]}(b)}{\sigma_{[i]}(b)} \\ & \times \left ( \log^3\frac {X}{N(ab\beta^2)}-3\sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi |ab\beta}}\log^2 N(\varpi)\log \frac {X}{N(ab\beta^2)}+O(\alpha^2\beta^2 ab) \right )+O(X^{-\varepsilon}).
\end{split}\end{aligned}$$ We further define a multiplicative function $H(n)$ on primary, square-free $n$ such that for any primary prime $\varpi$, $$\begin{aligned} H(\varpi)=1-\frac {4N(\varpi)}{h(\varpi)(N(\varpi)+1)^2}=1+O(\frac 1{N(\varpi)}).\end{aligned}$$ By setting $\gamma=\alpha\beta$ in and proceeding similarly to the arguments in Section 6.2 of [@sound1], we deduce that the second mollified moment is $$\begin{aligned} \label{secmoment} \begin{split} & S(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M(d)^2; \Phi) \\ = & \frac {\pi^4}{4^3} \frac{ D\widehat{\Phi}(1)}{36\zeta_{K}(2)}\sum_{\substack{\gamma \equiv 1 \bmod {(1+i)^3} }}\frac {N(\gamma)H(\gamma)}{h(\gamma)}\sum_{\substack{a,b \equiv 1 \bmod {(1+i)^3} \\ (a,b)=1}}\frac {\lambda(a\gamma)}{h(a)}\frac {\lambda(b\gamma)}{h(b)}\frac {ad_{[i]}(a)}{\sigma_{[i]}(a)} \frac {b d_{[i]}(b)}{\sigma_{[i]}(b)} \\ & \times \Big ( \log^3\frac {X}{N(ab)}-3 \log \frac {X}{N(ab)}\Big ( \sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi |a}}\log^2 N(\varpi)+ \sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi |b}}\log^2 N(\varpi) \Big )\Big )+O(\frac 1{\log X}).
\end{split}\end{aligned}$$ Optimizing the mollified moments -------------------------------- It follows from that the second mollified moment looks like $$\begin{aligned} \label{secondmollifiedmoment0} & \frac {\pi^4}{4^3} \frac{ D\widehat{\Phi}(1)}{36\zeta_{K}(2)}\log^3 X \sum_{\substack{\gamma \equiv 1 \bmod {(1+i)^3} }}\frac {N(\gamma)H(\gamma)}{h(\gamma)}\xi(\gamma)^2.\end{aligned}$$ As the above is a diagonal quadratic form in $\xi(\gamma)$, we see that in order to choose a mollifier to minimize for fixed , we need to choose $\xi(\gamma)$ so that it is proportional to $$\begin{aligned} & \frac {h(\gamma)g_1(\gamma)}{N(\gamma)H(\gamma)}\log (\sqrt{X}N(\gamma)).\end{aligned}$$ We shall here follow the choice made in [@sound1 (6.8)] to choose, for primary square-free $\gamma$ with $N(\gamma) \leq M$, $$\begin{aligned} \xi(\gamma)=\frac {C}{D\log^3 M}\frac {h(\gamma)g_1(\gamma)}{N(\gamma)H(\gamma)}\log (\sqrt{X}N(\gamma)).\end{aligned}$$ We notice that the above choice of $\xi$ does satisfy the condition . Similarly to [@sound1 (6.8)], we have (keeping in mind that the residue of $\zeta_K(s)$ at $s=1$ is $\pi/4$) $$\begin{aligned} \label{elemargm} \begin{split} & \frac {C^2}{D \log^3 M}\sum_{\substack{\gamma \equiv 1 \bmod {(1+i)^3} \\ N(\gamma) \leq x }}\mu^2_{[i]}((1+i)\gamma)\frac {h(\gamma)g_1(\gamma)^2}{N(\gamma)H(\gamma)} \\ =& \frac \pi{4} \frac {C^2}{2D}\prod_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ }}\left (1-\frac 1{N(\varpi)} \right )\left (1+ \frac {h(\varpi)g_1(\varpi)^2}{N(\varpi)H(\varpi)}\right )(\log x +O(1))\\ =& \frac \pi{4}\frac 49(\log x +O(1)).
\end{split}\end{aligned}$$ We apply to via partial summation to see that the first mollified moment is $$\begin{aligned} \label{1stmollifiedmom} \begin{split} S(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})M(d); \Phi)\sim & \frac {\pi^2}{4} \frac {C^2\widehat{\Phi}(1)}{D\zeta_{K}(2)\log^3 M}\sum_{\substack{\gamma \equiv 1 \bmod {(1+i)^3} \\ N(\gamma) \leq M }}\mu^2_{[i]}((1+i)\gamma)\frac {h(\gamma)g_1(\gamma)^2}{N(\gamma)H(\gamma)}\log^2 (\sqrt{X}\gamma) \\ \sim & \left ( \frac \pi{4} \right )^2 \frac 29 \left ( \left (1+\frac 1{\theta} \right )^3-\frac 1{\theta^3} \right ) \frac {2\pi\widehat{\Phi}(1)}{3\zeta_{K}(2)}. \end{split}\end{aligned}$$ Now, we proceed to evaluate the second mollified moment for the chosen $\xi$. For this, we define for rational integers $j \geq 0$, $$\begin{aligned} & \xi_j(\gamma)= \sum_{\substack{a \equiv 1 \bmod {(1+i)^3}}}\frac {\lambda(a\gamma)}{h(a)}\frac {d_{[i]}(a)}{\sigma_{[i]}(a)} (\log N(a))^j.\end{aligned}$$ Similar to [@sound1 (6.11a)-(6.11c)], we see that for primary square-free element $\gamma$ satisfying $N(\gamma) \leq M$, we have $$\begin{aligned} \begin{split} \xi_1(\gamma)=& -\frac {C}{D \log^3 M}\frac {h(\gamma)g_1(\gamma)}{N(\gamma)H(\gamma)}\Big ( 2\log \frac {M}{N(\gamma)}\log (\sqrt{X}N(\gamma))+\log^2 \frac {M}{N(\gamma)}+O\Big(\log M(1+ \sum_{\substack{q \equiv 1 \bmod {(1+i)^3} \\ q | \gamma}}\frac {\log N(q)}{N(q)}\Big ) \Big ), \\ \xi_2(\gamma)=& \frac {C}{D \log^3 M}\frac {h(\gamma)g_1(\gamma)}{N(\gamma)H(\gamma)}\Big ( \log^2 \frac {M}{N(\gamma)}\log (\sqrt{X}N(\gamma))+\frac 2{3}\log^3 \frac {M}{N(\gamma)}+O\Big ( \log^2 M(1+ \sum_{\substack{q \equiv 1 \bmod {(1+i)^3} \\ q | \gamma}}\frac {\log N(q)}{N(q)}\Big ) \Big ), \\ \xi_3(\gamma) \ll & \frac {|h(\gamma)g_1(\gamma)|}{N(\gamma)H(\gamma)}{\left(}1+ \sum_{\substack{q \equiv 1 \bmod {(1+i)^3} \\ q | \gamma}}\frac {\log N(q)}{N(q)} {\right)}. 
\end{split}\end{aligned}$$ We now expand $\log^3 (X/N(ab))$ in terms of $\log X, \log N(a)$ and $\log N(b)$ to recast $$\begin{aligned} \label{secm:1stterm} & \frac {\pi^4}{4^3} \frac{ D\widehat{\Phi}(1)}{36\zeta_{K}(2)}\sum_{\substack{\gamma \equiv 1 \bmod {(1+i)^3} }}\frac {N(\gamma)H(\gamma)}{h(\gamma)}\sum_{\substack{a,b \equiv 1 \bmod {(1+i)^3} \\ (a,b)=1}}\frac {\lambda(a\gamma)}{h(a)}\frac {\lambda(b\gamma)}{h(b)}\frac {ad_{[i]}(a)}{\sigma_{[i]}(a)} \frac {b d_{[i]}(b)}{\sigma_{[i]}(b)} \log^3\frac {X}{N(ab)}\end{aligned}$$ as a linear combination of terms $$\begin{aligned} & \frac {\pi^4}{4^3} \frac{ D\widehat{\Phi}(1)}{36\zeta_{K}(2)}\sum_{\substack{\gamma \equiv 1 \bmod {(1+i)^3} }}\frac {N(\gamma)H(\gamma)}{h(\gamma)}\xi_j(\gamma)\xi_k(\gamma)\log^l X,\end{aligned}$$ where $j+k+l=3$. We can evaluate these terms using the expressions for $\xi_i(\gamma), 1 \leq i \leq 3$. Then applying and partial summation, we see that $$\begin{aligned} \label{1stermest} \eqref{secm:1stterm} \sim \left ( \frac \pi{4} \right )^4 \left (\frac 2{81}+\frac {28}{135\theta}+\frac {11}{18\theta^2}+\frac {70}{81\theta^3}+\frac {16}{27\theta^4}+\frac {4}{27\theta^5} \right ) \frac{ 2\pi \widehat{\Phi}(1)}{3\zeta_{K}(2)}.\end{aligned}$$ This treats one of the terms given in . To treat the other terms, we proceed similarly to the treatments done on [@sound1 p.
485] to see that for primary, square-free $\gamma$ such that $N(\gamma) \leq M$, $$\begin{aligned} & \sum_{\substack{a \equiv 1 \bmod {(1+i)^3} }}\frac {\lambda(a\gamma)}{h(a)}\frac {ad_{[i]}(a)}{\sigma_{[i]}(a)} \Big ( \sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi |a}}\log^2 N(\varpi) \Big ) \\ =& - \frac {C}{D \log^3 M}\frac {h(\gamma)g_1(\gamma)}{N(\gamma)H(\gamma)}{\left(}\log^2 \frac {M}{N(\gamma)}\log (\sqrt{X}N(\gamma))+\frac 23\log^3\frac {M}{N(\gamma)}+O(\log^2 X) {\right)},\end{aligned}$$ and that $$\begin{aligned} & \sum_{\substack{a \equiv 1 \bmod {(1+i)^3} }}\frac {\lambda(a\gamma)}{h(a)}\frac {ad_{[i]}(a)}{\sigma_{[i]}(a)} \log N(a) \Big ( \sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi |a}}\log^2 N(\varpi) \Big ) \ll \frac {|h(\gamma)g_1(\gamma)|}{N(\gamma)H(\gamma)}\Big ( 1+ \sum_{\substack{q \equiv 1 \bmod {(1+i)^3} \\ q | \gamma}}\frac {\log N(q)}{N(q)} \Big ).\end{aligned}$$ As a consequence, we see that $$\begin{aligned} & -\left ( \frac \pi{4} \right )^4 \frac{ D\widehat{\Phi}(1)}{36\zeta_{K}(2)}\sum_{\substack{\gamma \equiv 1 \bmod {(1+i)^3} }}\frac {N(\gamma)H(\gamma)}{h(\gamma)}\sum_{\substack{a,b \equiv 1 \bmod {(1+i)^3} \\ (a,b)=1}}\frac {\lambda(a\gamma)}{h(a)}\frac {\lambda(b\gamma)}{h(b)}\frac {ad_{[i]}(a)}{\sigma_{[i]}(a)} \frac {b d_{[i]}(b)}{\sigma_{[i]}(b)} \\ & \times \log \frac {X}{N(ab)}\Big ( \sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi |a}}\log^2 N(\varpi)+ \sum_{\substack{\varpi \equiv 1 \bmod {(1+i)^3} \\ \varpi |b}}\log^2 N(\varpi) \Big ) \\ \sim & \left ( \frac \pi{4} \right )^4 \left (\frac 2{81}+\frac {4}{45\theta}+\frac {7}{54\theta^2}+\frac {2}{27\theta^3} \right ) \frac{ 2\pi \widehat{\Phi}(1)}{3\zeta_{K}(2)}.\end{aligned}$$ Combining the above with , we find that the second mollified moment is $$\begin{aligned} \label{secondmoment} \sim \left ( \frac \pi{4} \right )^4 \left (\frac 4{81}+\frac {8}{27\theta}+\frac {20}{27\theta^2}+\frac {76}{81\theta^3}+\frac {16}{27\theta^4}+\frac
{4}{27\theta^5} \right ) \frac{ 2\pi \widehat{\Phi}(1)}{3\zeta_{K}(2)}.\end{aligned}$$ Applying the Cauchy-Schwarz inequality together with the first mollified moment and the second mollified moment , we have $$\begin{aligned} \label{comparison} \begin{split} & \sum_{\substack{X \leq N(d) \leq 2X \\ (d, 2)=1 \\ L({\tfrac{1}{2}}, \chi_{(1+i)^5d}) \neq 0}}\mu_{[i]}(d)^2 \geq \sum_{\substack{(d, 2)=1 \\ L({\tfrac{1}{2}}, \chi_{(1+i)^5d}) \neq 0}}\mu_{[i]}(d)^2\Phi(\frac {N(d)}{X}) \geq X \frac {S(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})M(d); \Phi)^2}{S(L({\tfrac{1}{2}}, \chi_{(1+i)^5d})^2M^2(d); \Phi)} \\ \geq & \left (1-\frac 1{(\theta+1)^3} \right )\frac {2\pi}{3\zeta_K(2)}X={\left(}\frac 78+o(1) {\right)}\sum_{\substack{X \leq N(d) \leq 2X \\ (d, 2)=1 }}\mu_{[i]}(d)^2, \end{split}\end{aligned}$$ since we have that (see ) $$\begin{aligned} & \sum_{\substack{X \leq N(d) \leq 2X \\ (d, 2)=1 }}\mu_{[i]}(d)^2 \sim \frac {2\pi}{3\zeta_K(2)}X.\end{aligned}$$ We now set $\theta=1-\varepsilon$ in to see that the assertion of Theorem \[thm: nonvanishing\] follows by summing over $X=x/2^j$ for $j \geq 1$; this completes the proof. [**Acknowledgments.**]{} P. G. is supported in part by NSFC grant 11871082.
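As an arithmetic cross-check of the constants above, one can verify with exact rational arithmetic that, after cancelling the common prefactors, the ratio of the squared first mollified moment to the second mollified moment equals $1-1/(\theta+1)^3$, which gives the proportion $7/8$ at $\theta \to 1$. The sketch below is purely illustrative (the function names are ours) and uses only the Python standard library.

```python
from fractions import Fraction as F

def first_moment_factor(theta):
    # theta-dependent factor of the first mollified moment, up to common constants
    return (1 + 1/theta)**3 - 1/theta**3

def second_moment_factor(theta):
    # theta-dependent factor of the second mollified moment (same common constants)
    return (F(4, 81) + F(8, 27)/theta + F(20, 27)/theta**2
            + F(76, 81)/theta**3 + F(16, 27)/theta**4 + F(4, 27)/theta**5)

# (first moment)^2 / (second moment) should equal 1 - 1/(theta+1)^3 exactly.
for theta in (F(1), F(2), F(1, 2), F(3, 4)):
    B, D = first_moment_factor(theta), second_moment_factor(theta)
    assert F(4, 81) * B**2 / D == 1 - 1/(theta + 1)**3

# theta -> 1 recovers the proportion 7/8 stated in the comparison inequality.
assert F(4, 81) * first_moment_factor(F(1))**2 / second_moment_factor(F(1)) == F(7, 8)
```

The exact-rational check confirms that the two asymptotic constants are algebraically consistent with the stated non-vanishing proportion.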
--- abstract: 'Learning to automatically perceive smell is becoming increasingly important, with applications in monitoring the quality of food and drinks for healthy living. In this age of proliferating Internet of Things devices, the deployment of electronic noses, otherwise known as smell sensors, is on the increase for a variety of olfaction applications with the aid of machine learning models. These models are trained to classify food and drink quality into several categories depending on the granularity of interest. However, models trained to smell in one domain rarely perform adequately when used in another domain. In this work, we consider a problem where only a few samples are available in the target domain, and we are faced with the task of leveraging knowledge from another domain with relatively abundant data to make reliable inference in the target domain. We propose a weakly supervised domain adaptation framework in which we demonstrate that, by building multiple models in a mixture of supervised and unsupervised frameworks, we can generalise effectively from one domain to another. We evaluate our approach on several datasets of beef cuts and quality collected across different conditions and environments. We empirically show via several experiments that our approach performs competitively compared to a variety of baselines.' author: - | Kehinde Owoeye\ Department of Computer Science\ University College London\ London, WC1E 6EA\ `ucabowo@ucl.ac.uk`\ bibliography: - 'references\_.bib' title: Learning to smell for wellness --- Introduction ============ Safeguarding the health and well-being of millions of people, most especially in the developing regions of the world, remains one of the seventeen key goals of the United Nations 2030 Agenda for Sustainable Development [@UN]. To achieve this, the quality of food and drink products must be monitored by appropriate authorities to ensure they are safe and healthy for everyone.
Due to the proliferation of Internet of Things devices, gas sensors in the form of electronic noses are becoming increasingly available and important for smelling and tasting chemicals, food and wines [@rodriguez2016electronic; @dataset2] for the purpose of assessing their quality. The data obtained from these devices can be used to build machine learning models for predicting the quality of food and drink products at different levels of granularity. However, like most machine learning models, these models on their own do not scale when used in different but similar domains where the features differ due to covariate shift, or where there is a mismatch in the distribution of the labels between the respective domains. Existing methods proposed to tackle this problem for time series data have, however, been designed for settings where the data were collected in well-controlled environments [@purushotham2016variational] or for simple binary classification problems [@purushotham2016variational], or have considered the problems of domain adaptation and semi-supervised learning with few data points separately and in the same domain [@zhu2018novel]. These methods face problems in the contexts where they have been used. While weakly supervised learning methods do not scale to other domains, recent work [@saito2019semi] has shown that conventional unsupervised domain adaptation methods designed to produce domain-invariant features still perform poorly even when a few samples are available in the target domain, and fail to adequately address classification problems that exist around class boundaries in the target domain. In addition, while work on domain adaptation has largely focused on generating domain-invariant features, in practice there is a mismatch in the label space, coupled with the noisy nature of sensor data.
In this paper, we address the problem of domain adaptation with only a few samples (four per class) [@xu2007word], also known as semi-supervised, weakly supervised, or few-shot domain adaptation. Furthermore, we consider a situation where the classification is more fine-grained, with the potential for a naive classifier to misclassify. We propose an approach that leverages a hierarchical model to find sub-groups in the source domain in an unsupervised manner using clustering. These sub-groups are then trained separately in a supervised learning framework with the aid of a recurrent neural network. A classifier is further trained on four samples per class in the target domain to map these data to the source domain clusters or models where the probability of classifying them accurately is maximised, in addition to training the source domain data in each cluster together with the few labeled target domain data. We evaluate our approach on datasets of beef meat quality of different cuts collected across different spatiotemporal domains. Results on a variety of experiments with these datasets show that our approach performs competitively compared to competing baselines. Related Work ============ We discuss previous works relative to ours under three broad themes of transfer learning, semi-supervised learning and domain adaptation. **Transfer Learning:** Using all 85 datasets in the UCR archive [@chen2015ucr], a convolutional neural network (CNN) was proposed to classify time-series [@fawaz2018transfer]. The authors concluded that source data with some similarity to the target results in positive transfer, and in negative transfer if there is no similarity. In our case, we are only interested in using datasets with similarities, in this case different beef cuts collected across different conditions, but with varying differences in the distribution of the input features and labels.
**Semi-Supervised Learning:** A lot of work has been carried out in this space with respect to time series [@wei2006semi; @guan2007activity]. One key difference between our approach and these works is that these methods are only designed for the domain where the source data is collected, and perform poorly outside of this domain when the input and label distributions change. In addition, we consider a much more difficult few-shot learning scenario where there are no more than four samples per class in the target domain. **Domain adaptation:** Building on the method proposed by [@ganin2016domain], a variational recurrent adversarial domain adaptation [@purushotham2016variational] was proposed to generate domain-invariant features for healthcare time series data. Compared to the binary classification problem considered in that work, we focus on the even more challenging task of classifying noisy time-series data into four groups, where there are non-trivial differences between the input and label distributions of the source and target domains. In addition, while they assumed access to all the input features of the target domain, we assume access to just four samples per class with their associated labels. Problem formulation & Training objectives ========================================= Problem Formulation ------------------- Consider two time series distributions $S(x_{t},y_{t})_{t = 1}^{N_{S}}$ and $T(x_{t},y_{t})_{t = 1}^{N_{T}}$, where the former represents the source domain and the latter the target domain, while $x_{t}$ and $y_{t}$ represent the input features and the labels at each time-step $t$, respectively. $N_{S}$ and $N_{T}$, which may or may not be equal, denote the respective lengths of the two distributions; the two distributions are different but similar in some respects.
We assume that during training we have access to all of the data from the source domain and only 4 samples per class from the target domain, $\{S(x_{t},y_{t})_{t = 1}^{N_{S}}, T(x_{t},y_{t})_{t = 1}^{4n} \}$, where $n$ represents the number of unique labels in the target domain distribution. It has been shown that human categorization often asymptotes after just three or four examples [@xu2007word]. Our goal is to build a classifier with almost human-level capability to predict the remaining labels $T(y_{t})_{t = 1}^{N_{T}-4n}$ in the target domain given $T(x_{t})_{t = 1}^{N_{T}-4n}$. Training objectives ------------------- There are three classifiers in the proposed model, each with its own training objective (see the supplementary material for more details). The overall training objective is given by: $$E(\theta_{1},\theta_{2},\theta_{3}) = \sum_{i=1....N+4n} L\Big(y_{i},f(X_{i};\theta_{3})\,\Big|\, \sum_{i=1....4n} L\big(y_{i},f(X_{i};\theta_{2})\,\big|\, \sum_{i=1....N} L(y_{i},f(X_{i};\theta_{1}))\big)\Big) \label{eqn:4}$$ Dataset ======= We gathered datasets of beef meat classified broadly into four groups of excellent, good, acceptable and spoiled, with all the datasets skewed towards the spoiled meat. These data have been collected with the aid of electronic nose gas sensors and other sensors measuring variables such as humidity, temperature and TVC (a continuous label of microbial population). Each data point in the datasets was recorded per minute in a sequential manner. **Dataset 1** is made up of time series data of beef quality collected across five different instances across two years. Each data instance is 2160 in length [@dataset1]. **Dataset 2** consists of extra-lean fresh beef monitored for about 75 minutes under fluctuating conditions of humidity and temperature [@dataset2; @wijaya2017information; @wijaya2016sensor; @wijaya2017development]. **Dataset 3** contains 12 files of different beef meat cuts such as Inside - Outside, Round, Top Sirloin among others.
Eleven gas sensors were used to collect the data [@DVN/XNFVTS_2018]. Model, Procedures, Experiments & Baselines ========================================== We consider 154 experiments in all across the datasets, aiming to investigate the performance of our model in a variety of contexts, in particular when there are significant differences in the distribution of the input features and labels across the source and target domains. Model architecture ------------------ We use four recurrent neural networks (LSTMs [@hochreiter1997long]) overall, two for training the two clusters of the source data alone and another two for training after adding the few target data, with four cells each, plus one logistic regression classifier. We use logistic regression to train on the few labeled target data with the cluster labels, as the input size here is too small for a neural network. The LSTMs, with four cells each, all employ a many-to-one classifier with a time-step of two. We implement the model using Keras and scikit-learn. Training procedure ------------------ To train the classifier, we use the cluster label for which the probability of correctly predicting the label of the data is maximized (two clusters are constructed from the source data with the aid of a Gaussian mixture model). In situations where none of the clusters can correctly predict any of the training target labels, we use the cluster in which the label occurs more frequently. We run the model ten times, settling for the iteration that performed well on the few target data in their new local domain. This model is run a further five times on the unseen target data to find the average classification accuracy.
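The cluster-and-route step described above can be sketched as follows. This is a minimal illustration on synthetic stand-in data, with logistic regression standing in for the per-cluster LSTMs purely to keep the example short; the toy data and variable names are ours, not from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for the e-nose data: a source domain with two latent
# sub-groups (9 sensor channels), and 4 labelled target samples per class.
X_src = np.vstack([rng.normal(0.0, 0.3, (200, 9)), rng.normal(2.0, 0.3, (200, 9))])
y_src = rng.integers(0, 4, 400)

# Step 1: find sub-groups in the source domain with a Gaussian mixture model.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_src)
c_src = gmm.predict(X_src)

# Step 2: train one supervised model per cluster (logistic regression here,
# per-cluster LSTMs in the paper).
models = {k: LogisticRegression(max_iter=1000).fit(X_src[c_src == k], y_src[c_src == k])
          for k in range(2)}

# Step 3: route each few-shot target sample to the cluster whose model assigns
# the highest probability to its true label.
X_tgt, y_tgt = rng.normal(0.0, 0.3, (16, 9)), np.tile(np.arange(4), 4)

def best_cluster(x, y):
    probs = {k: m.predict_proba(x[None, :])[0][list(m.classes_).index(y)]
             for k, m in models.items()}
    return max(probs, key=probs.get)

routes = [best_cluster(x, y) for x, y in zip(X_tgt, y_tgt)]
```

In the full method, each per-cluster model is then retrained with its routed target samples added, and a separate classifier is fit on the few target samples against the routed cluster labels so that unseen target data can be dispatched to the right model.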
Baselines --------- We use logistic regression (LR), Ada-boost (AB) with a hundred estimators, a support vector machine (SVM), semi-supervised learning (SS) [@wei2006semi], a deep neural network (DNN) with two layers of 256 neurons each, a long short-term memory network (LSTM) with one layer and 4 cells, and a recurrent domain adaptation neural network (R-DANN) [@ganin2016domain]. Results ======= Results (Table 1) show that our approach outperforms all baselines most of the time and overall across all experiments. Due to the limited quantity of data, most of the deep learning models perform poorly.

| Source-Target | LR | AB | SVM | SS | DNN | LSTM | R-DANN | Ours |
|---|---|---|---|---|---|---|---|---|
| $1_{1-5}-2$ | 19.59 | 69.43 | 11.26 | 36.24 | 5.95 | 46.02 | 47.18 | **79.85** |
| $1_{1}-3_{1-12}$ | 46.58 | 22.00 | 33.60 | 14.65 | 5.30 | 37.55 | 44.86 | **65.07** |
| $1_{2}-3_{1-12}$ | 33.97 | 54.73 | 25.89 | 13.59 | 7.97 | 41.21 | 37.61 | **64.39** |
| $1_{3}-3_{1-12}$ | 36.65 | 54.73 | 26.37 | 13.34 | 3.54 | 60.62 | 33.79 | **57.73** |
| $1_{4}-3_{1-12}$ | 39.41 | 31.37 | 28.42 | 13.63 | 10.53 | 13.31 | 32.62 | **66.77** |
| $1_{5}-3_{1-12}$ | 58.99 | 64.53 | 30.02 | 69.32 | 12.24 | 23.74 | 35.43 | **69.90** |
| $2-1_{1-5}$ | 52.06 | 59.44 | 49.03 | 38.81 | 11.69 | 14.08 | 42.77 | **67.14** |
| $2-3_{1-12}$ | 75.41 | **83.82** | 73.51 | 27.11 | 7.21 | 61.60 | 63.51 | 78.59 |
| $3_{1}-1_{1-5}$ | 54.14 | 46.66 | 45.31 | **64.46** | 8.34 | 49.06 | 38.91 | 52.19 |
| $3_{2}-1_{1-5}$ | 59.34 | 57.83 | 45.09 | **59.45** | 19.44 | 44.27 | 38.91 | 54.85 |
| $3_{3}-1_{1-5}$ | 58.08 | 62.22 | 44.99 | **71.04** | 19.44 | 51.30 | 42.26 | 52.84 |
| $3_{4}-1_{1-5}$ | **63.36** | 62.22 | 47.75 | 47.66 | 14.27 | 43.67 | 39.37 | 62.98 |
| $3_{5}-1_{1-5}$ | 61.99 | **62.22** | 43.88 | 41.04 | 15.58 | 50.44 | 34.33 | 59.89 |
| $3_{6}-1_{1-5}$ | 54.97 | **62.22** | 44.56 | 34.50 | 14.38 | 45.37 | 37.59 | 60.23 |
| $3_{7}-1_{1-5}$ | 55.84 | 62.22 | 48.28 | 20.72 | 14.26 | 39.43 | 41.79 | **65.33** |
| $3_{8}-1_{1-5}$ | 48.93 | 62.22 | 47.35 | 59.46 | 22.07 | 37.93 | 45.19 | **62.39** |
| $3_{9}-1_{1-5}$ | 53.40 | 62.22 | 43.48 | 18.96 | 20.17 | 33.31 | 47.12 | **64.46** |
| $3_{10}-1_{1-5}$ | 58.65 | 62.22 | 47.80 | **68.15** | 15.52 | 47.32 | 30.66 | 61.37 |
| $3_{11}-1_{1-5}$ | 55.69 | 62.22 | 43.67 | **74.14** | 13.31 | 48.44 | 35.92 | 56.77 |
| $3_{12}-1_{1-5}$ | 74.21 | 51.11 | 44.01 | **100** | 14.69 | 34.20 | 36.69 | 63.46 |
| $3_{1-12}-2$ | 51.56 | 88.08 | 60.28 | 91.33 | 20.52 | 33.60 | 78.74 | **92.18** |
| Avg | 52.99 | 59.23 | 42.12 | 46.55 | 13.16 | 40.78 | 43.59 | **66.59** |

Conclusion ========== In this paper we have introduced a new approach for transferring knowledge from one time-series domain to another using only a few samples, for the purpose of assessing beef quality. Our approach leverages the construction of unsupervised classification tasks to improve the actual beef quality classification task. We evaluate our approach on time series data of beef quality cuts collected across different conditions. Results across a variety of experiments show that our approach performed competitively compared to competing baselines, most especially when the distribution of the target domain labels differs significantly from that of the source domain. Our work is not without its limitations: owing to the number of experiments carried out and the total number of neural networks deployed, we used the same hyper-parameters across all experiments. Careful tuning of the networks or a change of architecture could generate better results in the future. In addition, just like any other hierarchical model, this approach incurs additional computational cost. Furthermore, we envisage that distributions with more classes can benefit from deep hierarchical clustering [@heller2005bayesian] compared to the flat clustering we have used. Future work could investigate a combination of some of the techniques used here together with adversarial domain adaptation methods. Supplementary Material {#supplementary-material .unnumbered} ====================== Approach: Additional Information ================================ The proposed approach leverages the construction of auxiliary tasks to improve the performance of a downstream supervised learning task.
The essence of constructing auxiliary tasks is to aid the efficiency of learning a similar or related task. To construct auxiliary tasks for our approach, we aim to find clusters in the source domain data in which the probability of classifying each of the few labeled target data is maximized. The task is therefore defined as: given the cluster label $C_{k}$ where the probability of classifying the few target labels is maximized, find $ \underset{ \hat{y}}{\mathrm{argmax}} \ p(\hat{y}|\hat{X},C_{k})$. The choice of the number of clusters is an open question, but it is essential to find a balance between the difficulty of finding $p(C_{k}| \hat{X}_{i=1...N_{t}})$ and that of $ \underset{ \hat{y}}{\mathrm{argmax}} \ p(\hat{y}|\hat{X}_{i=1...N_{t}},C_{k})$. There are four benefits of our approach with respect to domain adaptation. First, by finding clusters in the source distribution features, we are able to reduce the mismatch in the distribution of labels. Second, by allocating the target data to the source model or cluster where its probability of being predicted is maximised, we reduce the mismatch in the feature distribution between the source and target domains. Third, since sensor data are extremely noisy, our approach has the potential to ensure that extremely noisy inputs are represented in clusters where they appear as outliers, enabling the efficient learning of the model parameters. Lastly, by only using labels for the classes that are far apart in the feature space, it is not always necessary to obtain sample labels of the target data for all the classes, as similar input features will be found in the same cluster attached to the same model. Algorithm ========= **Input:** Source data: $ S(x_{t},y_{t})_{t = 1}^{N_{S}} $, Target data: $T(\hat{x}_{t}, \hat{y}_{t} )_{t=1}^{4n}$ **Output:** Target domain class labels, $\hat{y}^{1},\hat{y}^{2},....,\hat{y}^{N}$ Find clusters $C_{k = 1....k_{n}}$ in the input dataset.
Train each cluster $C_{k}$ with an RNN model $M_{k}$. Find the cluster $C_{k}$ / model $M_{k}$ where $ \underset{ \hat{y}}{\mathrm{argmax}} \ p(\hat{y}|\hat{X},C_{k})$ is attained. Retrain each RNN model $M_{k}$ with the old source data in $C_{k}$ combined with the new data from $T(\hat{x}_{t}, \hat{y}_{t} )_{t=1}^{4n}$. Train a classifier to assign target data to the right cluster / model using $T(\hat{x}_{t})_{t=1}^{4n}$ and labels from $C_{k}$. **for** each datapoint in test data **do**: Assign data to model $ M_{k} $ from step **6** using the classifier from step **7**. Run the RNN model $M_{k}$ attached to the assigned cluster $C_{k}$ from step **6**. **end** **return** $ \hat{y}_{1},\hat{y}_{2},....,\hat{y}_{N} $ Training objectives: Additional Information =========================================== There are three classifiers in the proposed model, each with its own training objective. The training objective for classifying source domain labels alone is given by: $$\underset{\theta_{1}}{\mathrm{argmin}} \ E(\theta_{1}) = \dfrac{1}{N} \sum_{i=1....N} L(y_{i},f(X_{i};\theta_{1}))\\ \label{eqn:1}$$ The training objective for the local domain classification (training the few labeled target data with cluster labels) is given by: $$\underset{\theta_{2}}{\mathrm{argmin}} \ E(\theta_{2}) = \dfrac{1}{4n} \sum_{i=1....4n} L(y_{i},f(X_{i};\theta_{2}))\\ \label{eqn:2}$$ where $n$ is the number of unique classes in the target domain. The loss for classifying labels from all the local domains after adding the few labeled data from the target domain (both source and target labels) is given by: $$\underset{\theta_{3}}{\mathrm{argmin}} \ E(\theta_{3}) = \dfrac{1}{N+4n}\sum_{i=1....N+4n} L(y_{i},f(X_{i};\theta_{3}))\\ \label{eqn:3}$$ The overall training objective is given by training objective (equation \[eqn:3\]) conditioned on training objective (equation \[eqn:2\]), which is conditioned on training objective (equation \[eqn:1\]).
$$E(\theta_{1},\theta_{2},\theta_{3}) = \sum_{i=1....N+4n} L\Big(y_{i},f(X_{i};\theta_{3})\,\Big|\, \sum_{i=1....4n} L\big(y_{i},f(X_{i};\theta_{2})\,\big|\, \sum_{i=1....N} L(y_{i},f(X_{i};\theta_{1}))\big)\Big) \label{eqn:4}$$ Model Architecture ================== ![image](poster-pdf){width="12cm" height="5cm"} Baselines: Additional Information ================================= We compare our approach with several baselines, with and without domain adaptation, described below. While the domain adaptation baselines are fully unsupervised, with the advantage of access to all target features during training, we still compare these methods with our approach to see how these constraints influence performance. **Logistic regression (LR)**: We use a multinomial variant with an lbfgs solver.\ **Adaboost (AB)**: With 100 estimators.\ **Support Vector Machine (SVM):** One versus one. We add the few labeled data from the target distribution to the training data here, and likewise for LR and AB above.\ **Semi-Supervised (SS):** Uses a one-nearest-neighbour classifier [@wei2006semi]. We use a variant of this approach in which we extend the original method, proposed for binary classification, to a four-way classifier. We build a dedicated classifier (assumed to be perfect) for each of the classes containing data from the source domain. We classify each test data point by assigning it to the class whose one nearest neighbour has the minimum distance across the four classifiers. Test data are added to the training data if the distance to the nearest neighbour is smaller than the minimum distance between samples of the same class in the training dataset.
We use only the training data here to assess the ability of this approach to generalise when used on datasets from another domain, while test data are added to the training data during testing as discussed above.\ **Deep Neural Network (DNN)**: With two layers of 256 neurons each, trained over 100 epochs with dropout = 0.2, a softmax layer and the Adam optimizer [@kingma2014adam].\ **Recurrent Neural Network (LSTM)**: A long short-term memory (LSTM) network with 1 layer, timestep = 2 and four cells, trained over 100 epochs with dropout = 0.2, a softmax layer and the Adam optimizer [@kingma2014adam]. We use both the training data and the few labeled target data for training here, and also for the DNN.\ **Recurrent Domain Adaptation Neural Network (R-DANN)**: This is the domain adaptation approach of [@ganin2016domain], but with an LSTM in the feature extractor as in [@purushotham2016variational]. Two-layer feed-forward networks with 128 neurons each are further added to the feature extractor as well as to the source and domain classifiers. ReLU activation is used throughout the feature extraction network and tanh for the LSTM, with a softmax layer for classification. Data & Preprocessing: Additional Information ============================================ We provide more information on the datasets we have used here. We gathered datasets of beef meat classified broadly into four groups of excellent, good, acceptable and spoiled, with all the datasets skewed towards the spoiled meat. These data have been collected with the aid of electronic nose gas sensors and other sensors measuring variables such as humidity, temperature and TVC (a continuous label of microbial population). Each data point in the datasets was recorded per minute in a sequential manner. **Dataset 1:** This consists of time series data of beef quality collected across five different instances across two years. Each data instance is 2160 in length.
Nine gas sensors (MQ135, MQ136, MQ2, MQ3, MQ4, MQ5, MQ6, MQ8, MQ9) were used to collect the data, along with humidity and temperature sensors [@dataset1]. **Dataset 2:** This contains extra-lean fresh beef monitored for about 75 minutes under fluctuating conditions of humidity and temperature. Ten gas sensors (MQ135, MQ136, MQ2, MQ3, MQ4, MQ5, MQ6, MQ7, MQ8, MQ9) were used to collect the data, as well as humidity and temperature sensors [@dataset2; @wijaya2017information; @wijaya2016sensor; @wijaya2017development]. **Dataset 3:** Contains 12 files of different beef meat cuts such as Inside - Outside, Round, Top Sirloin among others. Eleven gas sensors (MQ135, MQ136, MQ137, MQ138, MQ2, MQ3, MQ4, MQ5, MQ6, MQ8, MQ9) were used to collect the data [@DVN/XNFVTS_2018]. To ensure the input features are uniform across all datasets collected, we remove the features corresponding to the humidity and temperature variables, as well as those corresponding to the sensors MQ7, MQ138 and MQ137. Evaluation: Additional Information ================================== All results are reported using the source data and the few target training data, except for SS and R-DANN, to demonstrate their inherent limitations in the absence of target domain data. We evaluate all methods on the average classification accuracy from one dataset to another. For example, $2-1_{1-5}$ means a model trained on dataset 2 (containing just one file) is tested on dataset 1, which comprises five files. The average classification accuracy is thus based on the average of the accuracies of the model trained on dataset 2 and tested on the five datasets in dataset 1.
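The feature harmonisation just described amounts to keeping only the sensors common to all three datasets. A minimal sketch (the sensor lists are taken from the descriptions above; the variable names are ours):

```python
# MQ-series gas sensors present in each dataset, as listed above.
sensors = {
    "dataset1": {"MQ135", "MQ136", "MQ2", "MQ3", "MQ4", "MQ5", "MQ6", "MQ8", "MQ9"},
    "dataset2": {"MQ135", "MQ136", "MQ2", "MQ3", "MQ4", "MQ5", "MQ6", "MQ7", "MQ8", "MQ9"},
    "dataset3": {"MQ135", "MQ136", "MQ137", "MQ138", "MQ2", "MQ3", "MQ4", "MQ5",
                 "MQ6", "MQ8", "MQ9"},
}

# Keep only the sensors shared by every dataset so that all models see the same
# input features; the humidity and temperature channels are likewise dropped.
common = set.intersection(*sensors.values())
dropped = set.union(*sensors.values()) - common

print(sorted(dropped))  # the sensors removed: MQ137, MQ138, MQ7
```

The intersection is exactly the nine-sensor complement of Dataset 1, and the dropped set matches the sensors named in the preprocessing step.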
| Dataset | Data length | Beef cut | Excellent | Good | Acceptable | Spoiled |
|---|---|---|---|---|---|---|
| Dataset1 | 2160 | - | 0.111 | 0.306 | 0.139 | 0.444 |
| | 2160 | - | 0.167 | 0.250 | 0.277 | 0.306 |
| | 2160 | - | 0.168 | 0.250 | 0.167 | 0.417 |
| | 2160 | - | 0.168 | 0.250 | 0.194 | 0.389 |
| | 2160 | - | 0.168 | 0.250 | 0.194 | 0.389 |
| Dataset2 | 4453 | Extra-lean | 0.063 | 0.046 | 0.040 | 0.851 |
| Dataset3 | 2220 | Inside Outside | 0.0008 | 0.0005 | 0.0005 | 0.998 |
| | 2220 | Round | 0.0008 | 0.0005 | 0.0005 | 0.998 |
| | 2220 | Top Sirloin | 0.135 | 0.162 | 0.108 | 0.595 |
| | 2220 | Tenderloin | 0.135 | 0.162 | 0.108 | 0.595 |
| | 2220 | Flap meat | 0.135 | 0.162 | 0.108 | 0.595 |
| | 2220 | Striploin | 0.135 | 0.162 | 0.108 | 0.595 |
| | 2220 | Rib eye | 0.135 | 0.162 | 0.108 | 0.595 |
| | 2220 | Skirt meat | 0.135 | 0.162 | 0.108 | 0.595 |
| | 2220 | Brisket | 0.135 | 0.162 | 0.108 | 0.595 |
| | 2220 | Clod Chuck | 0.135 | 0.162 | 0.108 | 0.595 |
| | 2220 | Shin | 0.135 | 0.162 | 0.108 | 0.595 |
| | 2220 | Fat | 0.108 | 0.135 | 0.162 | 0.595 |

: Distribution of all datasets showing the length, beef cut and the distribution of the different classes of beef quality across three datasets. It can be seen that the distribution of the classes is skewed towards the spoiled meat.[]{data-label="tab:1"}
--- abstract: 'We measure the remnant polarization of ferroelectric domains in BiFeO$_\mathrm{3}$ films down to 3.6 nm using low energy electron and photoelectron emission microscopy. The measured polarization decays strongly below a critical thickness of 5-7 nm predicted by continuous medium theory whereas the tetragonal distortion does not change. We resolve this apparent contradiction using first-principles-based effective Hamiltonian calculations. In ultra thin films the energetics of near open circuit electrical boundary conditions, i.e. unscreened depolarizing field, drive the system through a phase transition from single out-of-plane polarization to a nanoscale stripe domains, giving rise to an average remnant polarization close to zero as measured by the electron microscopy whilst maintaining the relatively large tetragonal distortion imposed by the non-zero polarization state of each individual domain.' author: - 'J. Rault' - 'W. Ren' - 'S. Prosandeev' - 'S. Lisenkov' - 'D. Sando' - 'S. Fusil' - 'M. Bibes' - 'A. Barthélémy' - 'L. Bellaiche' - 'N. Barrett' bibliography: - './biblio\_BFO.bib' title: 'Thickness-dependent polarization of strained BiFeO$_\mathrm{3}$ films with constant tetragonality' --- A major issue for prospective nanoscale, strain-engineered ferroelectric applications [@choi_enhancement_2004] is the decrease of the remnant polarization P$_\mathrm{r}$ of ultra-thin films. The depolarizing field arising from uncompensated surface charges reduces or even suppresses ferroelectricity below a critical thickness [@gerra_ionic_2006; @junquera_critical_2003]. Ferroelectric capacitors for example may exhibit a critical thickness [@kim_polarization_2005; @petraru_wedgelike_2008]. 
Lichtensteiger *et al.* [@lichtensteiger_ferroelectricity_2005] have shown that the decrease in P$_\mathrm{r}$ in PbTiO$_\mathrm{3}$ (PTO) thin films between 20 and 2.4 nm on Nb-doped SrTiO$_\mathrm{3}$ (STO) substrates is concomitant with that of the tetragonality (the ratio c/a of the out-of-plane to in-plane lattice parameter). On La$_\mathrm{0.67}$Sr$_\mathrm{0.33}$MnO$_\mathrm{3}$ (LSMO), PTO formed polydomains below 10 nm while retaining high tetragonality [@lichtensteiger_monodomain_2007]. The formation of a polydomain state has been suggested for SrRuO$_\mathrm{3}$/Pb(Zr,Ti)O$_\mathrm{3}$/SrRuO$_\mathrm{3}$ capacitors with Pb(Zr,Ti)O$_\mathrm{3}$ thicknesses below 15 nm [@nagarajan_scaling_2006]. Pertsev and Kohlstedt showed the importance of misfit strain for the critical thickness of monodomain-polydomain stability in PTO and BTO [@pertsev_elastic_2007]. Using piezo-response force microscopy (PFM), BiFeO$_\mathrm{3}$ (BFO) films have been shown to remain ferroelectric down to a few unit cells [@bea_ferroelectricity_2006; @chu_ferroelectric_2007; @maksymovych_ultrathin_2012], with both the remnant polarization and the slope of the piezoresponse hysteresis loop scaling with tetragonality. However, PFM is very local and can only provide indirect, semi-quantitative estimates of the polarization. Imperfect tip-surface contact can contribute to polarization suppression via the depolarizing field. Direct electrical measurements of the polarization-field (P(E)) loop in ultrathin ferroelectric films are a challenge because of leakage current for thicknesses below a few tens of nm [@bea_ferroelectricity_2006; @kim_effect_2008]. They become impossible in the tunneling regime of ultrathin films (5 nm or less), a thickness which, furthermore, is of the same order as the critical thickness, h$_\mathrm{eff}$, estimated from Landau-Ginzburg-Devonshire (LGD) elastic theory for polarization stability [@maksymovych_ultrathin_2012; @bratkovsky_abrupt_2000].
BFO can accommodate in-plane compressive strain via out-of-plane extension and through oxygen octahedron rotation about `<`111`>` [@infante_bridging_2010], a degree of freedom not available in P4mm PTO films. This interplay between strain, tetragonality and octahedra rotations leads to an unexpected decrease of T$_\mathrm{C}$ with strain, at odds with the variation of the c/a ratio. Thus the relationship between structural parameters and the remnant out-of-plane polarization in very thin films remains an open question. In this Letter we have studied the polarization of BFO films from 70 to 3.6 nm thick using a combination of X-Ray Diffraction (XRD), Mirror Electron Microscopy (MEM) and PhotoElectron Emission Microscopy (PEEM). The electron microscopy techniques provide full-field imaging of the electrostatic potential above the surface and of the work function, whereas the tetragonality is measured by XRD. The results are interpreted in the light of a three-dimensional (3D) generalization of a previously developed dead-layer model for thin films within the framework of continuous medium theory, which predicts a fast decrease of the polarization with decreasing thickness. Interestingly, the extremely low polarization below h$_\mathrm{eff}$ does not scale with the tetragonality and is explained using first-principles-based effective Hamiltonian calculations, which show that, as a function of screening, the films undergo a phase transition from single to nanoscale stripe domains with an average polarization close to zero.\ Bilayers of BFO/LSMO were epitaxially grown on (001)-oriented STO substrates by pulsed laser deposition using a frequency tripled (h$\nu$ = 355 nm) Nd-doped Yttrium Aluminium Garnet (Nd:YAG) laser at a frequency of 2.5 Hz [@bea_ferroelectricity_2006]. The 20 nm thick LSMO layer is metallic and serves as a bottom electrode for ferroelectric characterization.
X-ray diffraction measurements on 70 to 3.6 nm-thick films were performed to track the out-of-plane parameter and the c/a ratio (Fig. \[fig:XRD\_exp\]). The c/a ratio increases slightly from 1.050 for the thickest film (70 nm) to 1.053 for 7 nm, then remains constant down to 3.6 nm. This contrasts dramatically with the behavior of PTO reported in [@lichtensteiger_ferroelectricity_2005], where c/a decreases with thickness. The chemistry of the films was measured by X-ray Photoelectron Spectroscopy (XPS). Figure \[fig:XPS\] shows spectra from the Bi 4f core levels for the thickest (70 nm) and thinnest (3.6 nm) films. The spectra are virtually identical for both films (and for intermediate thicknesses, see [^1]), showing that the chemical state and stoichiometry do not change. The Bi 4f spectra have a second component shifted by 0.6 eV to higher binding energy (HBE), similar to that observed on single-crystal BFO, which was associated with the presence of a surface phase or “skin layer” [@marti_skin_2011]. However, the proportion of the HBE component is thickness independent, suggesting that our strained thin films do not exhibit the several nanometer-thick skin observed on single crystals. C 1s spectra show that contamination of the BFO surface is similar for every thickness. Thus, although we cannot exclude a contribution from extrinsic screening to the reduction in polarization, it is expected to be similar for all films [@Note1]. For the thickest BFO film (70 nm), the ferroelectric properties were investigated by standard polarization versus electric field P(E) loops (Fig. \[fig:PEloop\]). The piezo-response hysteresis loops are shown in Fig. \[fig:piezoLoop\]. They are position independent and exhibit coercive values similar to those of the non-local P(E) loops, attesting to sample homogeneity. In a BFO(001) film the P$^\mathrm{+}$ and P$^\mathrm{-}$ states are the projections of `<`111`>` polarization along \[001\]. Poling of micron-sized domains was performed by applying a d.c.
voltage higher than the coercive voltage (inferred from the piezoresponse loops) on the tip while the bottom electrode was grounded. PFM imaging was carried out at an excitation frequency of 4-7 kHz and an a.c. voltage of 1 V. No morphology change occurred during poling, as checked by AFM. A Low Energy Electron Microscope (LEEM, Elmitec GmbH) was used to measure the electron kinetic energy of the MEM (reflected electrons)-LEEM (backscattered electrons) transition with a spatial resolution of 30 nm. The transition energy (E) is a measure of the electrostatic potential just above the sample surface [@cherifi_imaging_2010] and depends on the polarization and the screening of polarization-induced surface charge [@krug_extrinsic_2010]. It therefore allows a non-contact estimation of the remnant polarization for tunneling films, otherwise inaccessible to standard electrical methods. All experiments were done at least two days after domain writing to ensure that the observed contrast is not due to residual injected charges. Figure \[fig:LEEM\_image\] shows a typical MEM-LEEM image with a field of view (FoV) of 33 $\mathrm{\mu m}$ for an incident electron energy (E$_{\mathrm{inc}}$) of 1.40 eV. The observed contrast reproduces well the PFM image of Fig. \[fig:PFM\]. A full image series across the MEM-LEEM transition (E) was acquired by varying E$_{\mathrm{inc}}$ from -2.0 to 3.0 eV. Figure \[fig:MEM\_LEEM\] displays the electron reflectivity curves showing the MEM (high reflectivity) to LEEM (low reflectivity) transition for the P$^\mathrm{+}$ (brown upwards triangles, E = 0.75 eV) and P$^\mathrm{-}$ (green downwards triangles, E = 1.20 eV) domains. Using complementary error function (erfc) fits, we obtain MEM-LEEM transition maps showing clear contrast in the electrostatic potential just above the surface between the P$^\mathrm{+}$, P$^\mathrm{-}$ and unwritten regions (Fig. \[fig:SV\_MAP\]). The energy filtered PEEM experiments used a NanoESCA X-PEEM (Omicron Nanotechnology GmbH).
PEEM of the photoemission threshold gives a direct, accurate ($\approx$ 20 meV) and nondestructive map of the work function [@mathieu_microscopic_2011] and of its variations due, for example, to domain polarization [@barrett_influence_2010]. Image series were acquired over the photoemission threshold region with mercury lamp excitation ($h\nu$ = 4.9 eV); the lateral resolution was estimated to be 200 nm and the energy resolution 200 meV. Figure \[fig:PEEM\_image\] shows a typical PEEM image of the pre-poled P$^\mathrm{+}$ and P$^\mathrm{-}$ regions for the 70 nm BFO film. The energy contrast between oppositely polarized domains fits the PFM image except at the domain boundary, where the lateral electric field induced by a P$^\mathrm{+}$/P$^\mathrm{-}$ domain wall deflects electrons [@nepijko_peculiarities_2001]. Further interpretation of this phenomenon, which contains potentially valuable information on the electrical properties of domain walls, is beyond the scope of the present Letter. For every pixel, we extract the work function relative to the Fermi level of the sample environment from the threshold spectrum, using an erfc to model the rising edge of the photoemission, see Fig. \[fig:Thresh\]. Figure \[fig:WF\_MAP\] maps the work function in the P$^\mathrm{+}$, P$^\mathrm{-}$ and as-grown regions. The difference in the MEM-LEEM transition of the P$^\mathrm{+}$ and P$^\mathrm{-}$ regions, $\Delta$E, varies from 450 meV for the 70 nm film to 25 meV for the 3.6 nm film and is plotted in Fig. \[fig:SV\_WF\] (black circles, left axis). The mean work function difference between P$^\mathrm{+}$ and P$^\mathrm{-}$ domains, $\Delta\Phi_{F} = \Phi_{F}(P^+) - \Phi_{F}(P^-)$, is plotted as a function of thickness in Fig. \[fig:SV\_WF\] (right axis). While $\Delta\Phi_{F}$ is 300 meV between 70 nm and 7 nm, between 7 and 5 nm it drops to 20 meV.
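The per-pixel erfc fit used for the transition and work function maps can be sketched as follows. This is a minimal sketch on a synthetic spectrum: the model function, its parameter names and all numerical values are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

# Illustrative edge model: the rising edge of the photoemission threshold is an
# erfc step centered at the local work function phi, with broadening sigma.
def threshold_model(E, amp, phi, sigma, bg):
    return bg + 0.5 * amp * erfc((phi - E) / (np.sqrt(2.0) * sigma))

# Synthetic threshold spectrum (hypothetical phi = 4.30 eV, 200 meV broadening).
E = np.linspace(3.5, 5.5, 200)
I = threshold_model(E, 1.0, 4.30, 0.20, 0.05)

popt, _ = curve_fit(threshold_model, E, I, p0=(0.8, 4.0, 0.1, 0.0))
print(f"fitted work function: {popt[1]:.3f} eV")
```

In the experiment this fit is repeated for every pixel of the image series, yielding the full transition-energy and work-function maps.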
For selected samples, the photoemission threshold was also imaged using X-rays rather than UV light to ensure that the contribution from direct transitions to threshold does not affect significantly the work function measurements [@Note1]. The polarization charges at the BFO surface are screened over a so-called dead layer, leading to an inward ($P^+$) or outward ($P^-$) surface dipole. By measuring the work function (or surface potential) *difference* between two opposite domains, our method allows a direct measurement of the polarization-induced dipoles, since any averaged non-ferroelectric contribution is canceled. The surface dipole difference, hence the work function difference, is proportional to the difference in polarization charges when going from the $P^+$ to the $P^-$ domains: $$\label{eq:1} \Delta\Phi_{F} \propto \frac{e}{\epsilon_0} \left( P^+ d^+ - P^- d^- \right) \approx 2\frac{e}{\epsilon_0} P_r d$$ where $P^{+/-}$ and $d^{+/-}$ are the polarization and dead layer thickness for the $P^{+/-}$ domains, $P_r$ is the average magnitude of the remnant polarization in the two poled domains and $d$ is the average dead layer thickness. For the sake of generality, one can take into account electronic screening (intrinsic or extrinsic relaxations) via a high-frequency dielectric permittivity, but this would still leave a linear relation between the remnant polarization and $\Delta\Phi_{F}$, $\Delta$E. P$_\mathrm{z}$/P$_\mathrm{max}$, where P$_\mathrm{z}$ is the measured out-of-plane polarization and P$_\mathrm{max}$ the value for the 70 nm film, is plotted as a function of film thickness in Fig. \[fig:P\_PMAX\]. As can be seen by comparison with Fig. \[fig:XRD\_exp\], the drop of polarization between 7 and 5 nm does not result from a decrease in the c/a ratio, contrary to PTO thin films [@lichtensteiger_ferroelectricity_2005]. Here the c/a ratio increases for thinner films and is constant at 1.054 below 5 nm. If there were no polarization, it would be about 1.03.
However, PTO is almost fully relaxed whereas BFO is compressively strained. Secondly, in BFO the polarization deviates appreciably from the \[001\] direction and is the macroscopic average of four `<`111`>`-type distortions. We have therefore generalized the 1D dead layer LGD model of Bratkovsky and Levanyuk [@bratkovsky_abrupt_2000] to the 3D polarization case. It gives the following relation for the thickness dependence of the polarization [@Note1]: $$\label{eq:2} \frac{P_z}{P_{max}} = A \sqrt{B + \sqrt{1 - \frac{h_{eff}}{h}}}$$ where h$_\mathrm{eff}$ is the effective thickness below which the (macroscopic) P$_\mathrm{z}$ goes to zero, and A and B are fitting parameters. A good fit to the data is obtained with h$_\mathrm{eff}$ = 5.6 nm (see Fig. \[fig:P\_PMAX\], red curve), compared with 2.4 nm for PTO. To understand why the polarization suddenly drops with decreasing thickness in ultrathin strained (001) BFO films, while the axial ratio remains very large at low thickness, we have conducted first-principles-based effective Hamiltonian calculations [@prosandeev_kittel_2010; @albrecht_ferromagnetism_2010; @kornev_finite-temperature_2007] that take into account free surfaces, as in Ref. [@prosandeev_kittel_2010]. We used the lattice parameter of the STO substrate for the pseudo-cubic in-plane lattice constant of BFO, leading to a misfit strain of -1.8%, in agreement with the experimental value. The calculation includes the local electric dipoles, the strain tensor and the tilting of the oxygen octahedra. The electrical boundary conditions are governed by a coefficient denoted $\beta$, as described in Ref. . Practically, $\beta$ can vary between 0 (ideal open-circuit, maximal depolarizing field) and 1 (ideal short-circuit, fully screened depolarizing field). Realistic systems lie between these two extremes.
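Fitting Eq. (2) to measured P$_\mathrm{z}$/P$_\mathrm{max}$ ratios is a standard nonlinear least-squares problem. The sketch below uses synthetic data generated from the paper's quoted values (h$_\mathrm{eff}$ = 5.6 nm, B = 0.16) purely to illustrate the fitting step; the thickness grid, starting values and bounds are assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Eq. (2): P_z/P_max = A * sqrt(B + sqrt(1 - h_eff/h)), valid for h >= h_eff.
def pz_ratio(h, A, B, h_eff):
    return A * np.sqrt(B + np.sqrt(1.0 - h_eff / h))

# Synthetic thickness series (nm) standing in for the measurements, generated
# from the model itself with h_eff = 5.6 nm and B = 0.16.
h = np.array([7.0, 10.0, 20.0, 35.0, 70.0])
ratio = pz_ratio(h, 1.0 / np.sqrt(1.16), 0.16, 5.6)

# Bounds keep h_eff below the smallest measured thickness (domain of the sqrt).
popt, _ = curve_fit(pz_ratio, h, ratio, p0=(1.0, 0.2, 5.0),
                    bounds=([0.0, 0.0, 0.0], [2.0, 1.0, 6.9]))
A_fit, B_fit, h_eff_fit = popt
print(f"h_eff = {h_eff_fit:.2f} nm, B = {B_fit:.3f}")
```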
To determine $\beta$ for each of our grown films we first extract the P$_\mathrm{z}$/P$_\mathrm{max}$ values from a B-spline interpolation of the experimental data (Fig. \[fig:P\_PMAX\], blue diamonds) and then vary $\beta$ in the calculations until the predicted P$_\mathrm{z}$/P$_\mathrm{max}$ perfectly agrees with the experimentally-extracted one. Figure \[fig:beta\_t\] shows the resulting $\beta$ values. One can first note that $\beta$ decreases with thickness, indicating that the observed decrease of polarization is related to imperfect screening of the depolarizing field. Another important observation to be made here is that the vanishing of the overall z-component of the polarization (which occurs experimentally for thicknesses lower than 5.6 nm, see Fig. \[fig:P\_PMAX\]) is associated with values of $\beta$ lower than 0.4 (see Fig. \[fig:beta\_t\]). To understand what happens for these kinds of $\beta$ values, we performed additional first-principles-based effective Hamiltonian calculations on a (20$\times$20$\times$20) supercell allowing $\beta$ to vary. This supercell was chosen because around 8 nm the polarization is very sensitive to the thickness (see Fig. \[fig:P\_PMAX\]). The results are shown in Fig. \[fig:theory\]. At a critical value of $\beta$ around 0.35 the BFO supercell goes from a phase with a uniform out-of-plane polarization to a stripe domain phase with a vanishing overall out-of plane polarization. Fig. \[fig:E\_beta\] displays the energy of these two phases as a function of $\beta$ (the monodomain phase is more stable than the stripe nanodomains for $\beta$ above 0.35 and less stable for smaller $\beta$ values). The predicted evolution of the c/a ratio, and of the overall P$_\mathrm{z}$/P$_\mathrm{max}$, with $\beta$ for single and stripe domain phases are shown in Figs. \[fig:E\_beta\] and \[fig:P\_beta\], respectively. 
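The B-spline interpolation used to read P$_\mathrm{z}$/P$_\mathrm{max}$ off the measured curve at an arbitrary thickness can be sketched as below; the thickness/ratio pairs are illustrative placeholders, not the measured values of Fig. \[fig:P\_PMAX\].

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Illustrative thickness / polarization-ratio pairs (placeholders for the data).
h = np.array([3.6, 5.0, 7.0, 10.0, 20.0, 35.0, 70.0])        # nm
ratio = np.array([0.05, 0.10, 0.62, 0.80, 0.92, 0.97, 1.00])  # P_z / P_max

spline = make_interp_spline(h, ratio, k=3)   # cubic interpolating B-spline

# Read off the ratio near the supercell-relevant thickness (~8 nm, see text).
print(float(spline(8.0)))
```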
Interestingly, a continuous ferroelectric to paraelectric transition would lead to a large monotonic decrease of tetragonality (Fig. \[fig:E\_beta\], green triangles), which we do not measure below h$_\mathrm{eff}$. Rather, the transformation from ferroelectric monodomains to nanostripe domains leads to a (large) c/a similar to the one associated with short-circuit-like conditions (i.e. for which $\beta$ is close to 1). Such results are consistent with our experimental finding that c/a does not vary between 70 nm and 3.6 nm, and suggest that this insensitivity is likely due to the formation of nanostripe domains. The single to stripe domain transition also explains the loss of LEEM and PEEM contrast observed between 7 and 5 nm, because these stripes do not possess any overall z-component of the polarization. The stripes have a typical dimension of a few nanometers, which is below the lateral resolution of our experiments (the top left inset of Fig. \[fig:P\_beta\] shows the morphology of these domains). However, stripe domains in BFO thin films close to the h$_\mathrm{eff}$ value have been observed by PFM [@catalan_fractal_2008]. For such thin films one might also ask to what extent the screening at the LSMO/BFO interface affects the measured remnant polarization. Transmission electron microscopy of the interface between LSMO and a 3.2 nm BFO film suggests that the first three BFO unit cells are screened by the interface charge [@chang_atomically_2011]. This also fits nicely with our experimental observation of an abrupt decrease in polarization starting at 7 nm, 1.4 nm above the calculated h$_\mathrm{eff}$. In summary, we have measured the remnant polarization in ultrathin strained BFO(001) films using PEEM and LEEM. The polarization drops abruptly below a critical thickness h$_\mathrm{eff}$ whereas the tetragonality retains a high, constant value.
A first-principles-based effective Hamiltonian approach suggests that BFO exhibits a first order phase transition to stripe domains at h$_\mathrm{eff}$ = 5.6 nm, corresponding to a screening factor, $\beta$, of 0.35. This model fits the experimental measurements of the remnant polarization and the c/a ratio very well. J.R. is funded by a CEA Ph.D. grant CFR. This work was supported by the ANR projects “Meloïc” and “Nomilops”. L.B. thanks the financial support of DoE, Office of Basic Energy Sciences, under contract ER-46612 and ONR Grants N00014-11-1-0384 and N00014-08-1-0915. We also acknowledge NSF grants DMR-1066158 and DMR-0701558, and ARO Grant W911NF-12-1-0085 for discussions with scientists sponsored by these grants. Some computations were also made possible thanks to the MRI grant 0722625 from NSF, the ONR grant N00014-07-1-0825 (DURIP) and a Challenge grant from the DoD. We thank E. Jacquet, C. Carrétéro and H. Béa for assistance in sample preparation, K. Winkler, B. Krömker (Omicron Nanotechnology), C. Mathieu, D. Martinotti for help with the PEEM and LEEM experiments and P. Jégou for the XPS measurements.

Supplementary Materials {#supplementary-materials .unnumbered}
=======================

X-Ray PhotoEmission Spectroscopy for every film thickness
---------------------------------------------------------

X-ray PhotoEmission Spectroscopy (XPS) was carried out using a Kratos Ultra DLD with monochromatic Al K$\alpha$ (1486.7 eV). The analyzer pass energy of 20 eV gave an overall energy resolution (photons and spectrometer) of 0.35 eV. The sample was at floating potential and a charge compensation system was used. The binding energy scale was calibrated using a clean gold surface and the Au 4f$_{\mathrm{7/2}}$ line at 84.0 eV as a reference. A take-off angle of 90$^{\circ}$, i.e., normal emission, was used for all spectra presented.
The XPS spectra show that the chemical environment is identical within 1% (see \[fig:Bi4f\] and \[fig:Fe2p\]) for all films. Krug et al. [@krug_extrinsic_2010] pointed out the importance of adsorbates for LEEM and PEEM measurements. Figure \[fig:C1s\] shows that surface contamination is similar for the 3.6 nm (low contrast) and 70 nm (high contrast) thin films, strongly indicating that the disappearance of the ferroelectric contrast is not due to differential contamination. Moreover, the 5 nm film has the lowest carbonate concentration and still shows weak ferroelectric contrast in the LEEM/PEEM experiments (see Table \[tab:C1s\]).

  Thickness (nm)   $\frac{I_{C1s}}{\sigma_{C1s}} / \frac{I_{Bi4f}}{\sigma_{Bi4f}}$
  ---------------- -----------------------------------------------------------------
  3.6              2.22
  5.0              1.49
  7.0              3.30
  20               1.40
  70               2.20

  : C 1s to Bi 4f ratio calculated from XPS spectra[]{data-label="tab:C1s"}

Threshold spectra using X-ray source
------------------------------------

Photoemission at threshold shows a cut-off energy below which electrons cannot escape the surface. It is often assumed that only secondary electrons contribute to the threshold spectra and that a complementary error function (erfc) is the correct function for deducing the work function from the rising edge of the photoemission threshold. However, the emission spectrum of the Hg light source is peaked at only 4.9 eV. With such a low photon energy, direct transitions might occur between p levels of the valence band and unoccupied s,d levels in the conduction band, provided of course that accessible final states lie above the vacuum level. They may give rise to intensity variations in the threshold spectra above the cut-off energy, and the shape of the rising edge of the photoemission threshold may be modified. In such a case the erfc parameters will no longer correctly describe the threshold and inaccurate work function values may result.
To check the effect of direct transitions on our work function values we took complementary image series using higher photon energies (Helium lamp h$\nu$ = 21.2 eV and X-ray source Al-K$\alpha$ h$\nu$ = 1486.70 eV) for three of the BFO films: the 20, 7 and 5 nm films, i.e. the thicknesses around the single to stripe domain transition. The higher the photon energy, the more the photoemission threshold is dominated by true secondary electrons. The results are similar within our energy resolution (see Fig. \[fig:WF\_SV\_complete\]) for the 20 nm and 5 nm thin films. Notably, the threshold widths for both types of sources agree within 2%, showing a weak influence of the direct transitions on the threshold spectra. In fact, it seems that p to s,d transitions in our BFO samples leave the measured position of the low energy cut-off in the spectra largely unchanged. Therefore, the influence of direct transitions on the work function can be neglected here. ![Thickness dependence of $\Delta\Phi_{F}$ (red squares for Hg lamp, green diamonds for X-ray source and HeI lamp) and $\Delta$E (black circles).[]{data-label="fig:WF_SV_complete"}](fig_WF_SV_supp.pdf)

3D generalization of Landau-Ginzburg-Devonshire to BiFeO$_{\mathrm{3}}$ thin films
----------------------------------------------------------------------------------

We start from the Ginzburg-Landau free energy, expressed as an expansion with respect to the polarization P: $$F(P) = \frac{1}{2} \alpha_{\perp} \left( P_x^2 + P_y^2 \right) + \frac{1}{2} \alpha_{z} P_z^2 + \frac{1}{4} \beta_{1} \left( P_x^4 + P_y^4 \right) + \frac{1}{4} \beta_{2} P_z^4 + \frac{1}{2} \beta_{3} P_x^2 P_y^2 + \frac{1}{2} \beta_{4} \left( P_x^2 + P_y^2 \right) P_z^2 + \frac{1}{6} \gamma P_z^6 - \left( E_z + E_d \right)P_z \label{eq:S1}$$ If $P_x = P_y = P_{\perp}$ then the equilibrium conditions result in the following equations: $$\alpha_z P_z + \beta_2 P_z^3 + 2 \beta_4 P_z P_{\perp}^2 + \gamma P_z^5 = E_z + E_d \label{eq:S2a}$$ $$\alpha_{\perp}
P_{\perp} + \beta_1 P_{\perp}^3 + \beta_3 P_z^2 P_{\perp} + \beta_4 P_{\perp} P_z^2 = 0 \label{eq:S2b}$$ where: $$E_d = \frac{U \epsilon_0 \epsilon_d - P_z d}{\epsilon_0 \left( \epsilon_d h' + \epsilon_b d \right)} - \frac{U}{h} \label{eq:S3}$$ Here, $h'$ is the width of the polarized region. The total thickness of the film is $h = h' + d$, where $d$ is the dead layer width. We will assume that $d \ll h$. $\epsilon_0$ is the vacuum permittivity, $\epsilon_d$ is the dielectric constant of the dead layer and $\epsilon_b$ is the so-called background dielectric constant (which is independent of the film thickness) [@bratkovsky_abrupt_2000]. $U$ is the voltage between the contacts. Furthermore, $E_d$ is the depolarizing field [@bratkovsky_abrupt_2000; @maksymovych_ultrathin_2012], and $E_z = U / h$. From (\[eq:S2b\]), either $P_{\perp} = 0$ or: $$P_{\perp}^2 = - \frac{1}{\beta_1 + \beta_3} \left( \alpha_{\perp} + \beta_4 P_z^2 \right) \label{eq:S4}$$ This latter equality reveals that the z-component of the polarization influences the in-plane component, and *vice versa*. Now we substitute Eq. (\[eq:S4\]) into Eq.
(\[eq:S2a\]) and get: $$\alpha^P P_z + \beta^P P_z^3 + \gamma P_z^5 = \frac{U \epsilon_d}{\epsilon_d h + \epsilon_b d} \label{eq:S5}$$ where: $$\begin{aligned} \alpha^P & = & \alpha_z + \frac{d}{\epsilon_0 \left( \epsilon_d h' + \epsilon_b d \right)} - 2 \beta_4 \frac{\alpha_{\perp}}{\beta_1 + \beta_3}\\ & = & \alpha^L + \frac{d}{\epsilon_0 \left( \epsilon_d h' + \epsilon_b d \right)}\\ & \approx & \alpha^L + \frac{d}{\epsilon_0 \epsilon_d h}\\ \beta^p & = & \beta_2 - \frac{2 \beta_4^2}{\beta_3 + \beta_1}\\ \alpha^L & = & \alpha_z - 2 \beta_4 \frac{\alpha_{\perp}}{\beta_1 + \beta_3} \label{eq:S6}\end{aligned}$$ Notice that $\alpha^P$ is modified with respect to $\alpha_z$, and can even change sign, because of the depolarizing field and the correction due to the coupling of the in-plane component of the polarization with its out-of-plane component. Furthermore, $\beta^p$ is smaller than $\beta_2$ when all the $\beta$’s are positive. This modification can even result in a negative $\beta^p$ and therefore change the second-order phase transition to a first-order one. In the case U = 0, Equation (\[eq:S5\]) has two stable solutions. One is $P_z = 0$, while the other is: $$\label{eq:S7} P_z^2 = \frac{-\beta^p + \sqrt{\left( \beta^p \right)^2 - 4 \alpha^p \gamma}}{2 \gamma}$$ One can show that this latter solution can be rewritten as: $$\label{eq:S11} \frac{P_z}{P_{max}} = A \sqrt{B + \sqrt{1 - \frac{h_{eff}}{h}}}$$ where $$\begin{aligned} D & = & \sqrt{\left( \beta^p \right)^2 - 4 \alpha^L \gamma}\\ A & = & \frac{1}{\sqrt{B + 1}}\label{AfuncB}\\ B & = & \frac{- \beta^p}{D}\\ h_{eff} & = & \frac{4 d \gamma}{\epsilon_0 \epsilon_d D^2} \label{eq:S12}\end{aligned}$$ Equation (\[eq:S11\]) is the one that has been used in the manuscript to fit the data of Fig. 5b.
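The equivalence between the direct root of (S7), with the thickness-dependent coefficient $\alpha^p \approx \alpha^L + d/(\epsilon_0 \epsilon_d h)$, and the closed form (S11)-(S12) can be checked numerically. The Landau coefficients below are arbitrary dimensionless illustrations, chosen only so that h$_\mathrm{eff}$ comes out at 5.6; they are not the BFO values.

```python
import numpy as np

# Arbitrary dimensionless coefficients: alpha_L < 0 so a polar solution exists,
# gamma > 0 for stability, beta_p < 0 (first-order case, B > 0).
alpha_L, beta_p, gamma = -1.0, -0.4, 1.0

D = np.sqrt(beta_p**2 - 4.0 * alpha_L * gamma)
B = -beta_p / D
A = 1.0 / np.sqrt(B + 1.0)
kappa = 5.6 * D**2 / 4.0              # kappa = d/(eps0*eps_d), so that h_eff = 5.6
h_eff = 4.0 * gamma * kappa / D**2

P_max = np.sqrt((-beta_p + D) / (2.0 * gamma))   # h -> infinity limit of (S7)

for h in [6.0, 10.0, 20.0, 70.0]:
    alpha_p = alpha_L + kappa / h     # thickness-dependent coefficient
    # direct root of (S7) ...
    Pz = np.sqrt((-beta_p + np.sqrt(beta_p**2 - 4.0 * alpha_p * gamma))
                 / (2.0 * gamma))
    # ... and the closed form (S11)
    Pz_closed = P_max * A * np.sqrt(B + np.sqrt(1.0 - h_eff / h))
    print(h, Pz, Pz_closed)
```

The two expressions agree to machine precision, which is just the algebraic identity $(\beta^p)^2 - 4\alpha^p\gamma = D^2(1 - h_{eff}/h)$ evaluated numerically.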
Note that in this fitting, we allowed $A$ to take an arbitrary value, because P$_{\mathrm{max}}$ is not very well defined in experiment (one cannot consider very thick films since they become too insulating for photoemission microscopy). However, the resulting $A$ was numerically found to be very close to its ideal value provided in (\[AfuncB\]). Specifically, the ratio between the actual and ideal values of $A$ was found to be about 1.05. Note that Equation (\[eq:S11\]) is, of course, valid provided that $$\begin{aligned} 1 - \frac{h_{eff}}{h} & \ge & 0\\ B + \sqrt{1 - \frac{h_{eff}}{h}} & \ge & 0 \\ \label{eq:S13}\end{aligned}$$ These two conditions were met in the fit of the data of Fig. 5b for films thicker than 5.6 nm, since we numerically found that $h_{eff}$ = 5.6 nm and B = 0.16. It is also interesting to realize that the solution of Equation (\[eq:S7\]) can adopt a simpler form than Equation (\[eq:S11\]) in some particular cases. For instance, if $\alpha^p < 0$, $\beta^p > 0$ and $\gamma = 0$ then: $$\label{eq:S8} P_z^2 = \frac{- \alpha^p}{\beta^p} = P^2_{max} \left( 1 - \frac{g_{eff}}{h} \right)$$ where $$\begin{aligned} g_{eff} & = & \frac{d}{\epsilon_0 \epsilon_d \alpha^L}\\ P^2_{max} & = & - \frac{\alpha^L}{\beta^p} \label{eq:S9}\end{aligned}$$ Equation (\[eq:S8\]) has the same analytical form as the formula provided by Maksymovych *et al* [@maksymovych_ultrathin_2012]. However, the physical meaning of the parameters entering Equation (\[eq:S8\]) is different from that given in Ref. [@maksymovych_ultrathin_2012], because, here, the polarization has three non-zero Cartesian components (rather than a single one). [^1]: See Supplemental Material for additional details on the calculations and XPS spectra for every thickness
--- abstract: | Recently, there has been an upsurge in the number of articles on spatio-temporal modeling in statistical journals. Many of them focus on building good nonstationary spatio-temporal models. In this article, we introduce a state space based nonparametric nonstationary model for the analysis of spatio-temporal data. We consider that there are some fixed spatial locations (generally called the monitoring sites) and that the data have been observed at those locations over a period of time. To model the data we assume that the data generating process is driven by some latent spatio-temporal process, which itself is evolving with time in some unknown way. We model this evolutionary transformation via compositions of a Gaussian process and also model the unknown functional dependence between the data generating process and the latent spatio-temporal process (observational transformation) by another Gaussian process. We investigate this model in detail, explore the covariance structure and formulate a fully Bayesian method for inference and prediction. Finally, we apply our nonparametric model to two simulated data sets and a real data set and establish its effectiveness.\ [*Keywords:*]{} [Observational equation; Evolutionary equation; Gaussian process; State-space model; MCMC; Gibbs sampler; Posterior predictive distribution.]{}\ [*AMS 2000 Subject Classification:*]{} [Primary 62M20, 62M30; Secondary 60G15.]{} author: - 'Suman Guha[^1]  and Sourabh Bhattacharya[^2]' title: ' **Nonparametric Nonstationary Modeling of Spatio-Temporal Data Through State Space Approach** ' --- Introduction {#sec:intro} ============ Spatio-temporal modeling has received much attention in recent years.
In particular, with the rise in global temperature being a major environmental concern, scientists are now taking a keen interest in the study of the dynamics of such climatic spatio-temporal processes [@Furrer:Sain; @Jun:Knutti; @Sain:Furrer:Cressie; @Sang:Jun; @Smith:Tebaldi:Nychka; @Tebaldi:Sanso]. Another closely related class of spatio-temporal processes that are also of much importance to climatologists are daily rainfall and precipitation (like mist, snowfall, etc.) across a region. Apart from climatology, many important spatio-temporal processes are also associated with different subfields of environmental and ecological science. To mention a few: studies on ground-level concentrations of ozone, $\mathrm{SO_{2}}$, $\mathrm{NO_{2}}$ and PM, species distribution over a region, change in land usage patterns over time, etc. Other than these areas, spatio-temporal models are also useful in geostatistics, hydrology, astrophysics, social science, archaeology, systems biology, and many more. So, knowledge of spatio-temporal modeling is becoming increasingly necessary for a better understanding of a broad range of subjects. Hence, it is no wonder that practitioners from distant fields are working together to develop new and effective spatio-temporal models. In what follows, we review some such existing models, discuss some issues related to them and then propose a novel nonparametric model. We then devote the rest of the article to the development and exploration of the proposed model. Finally, some simulated and real data analysis results are presented which show that the proposed model is particularly useful for the analysis of nonstationary spatial time series data. The proofs of all our mathematical results are deferred to the Appendix.

Existing approaches for spatio-temporal data {#sec:existing_approaches}
============================================

The term spatio-temporal data covers many different types of data.
But here we consider only one specific type of spatio-temporal data. We consider that there are some arbitrary spatial locations $\bold{s}_{1}, \bold{s}_{2}, \bold{s}_{3},\cdots, \bold{s}_{n}$ and that the data have been observed at each of those spatial locations at times $t=1, 2, 3,\cdots, T$. Such data sets are very common in the context of environmental science. One important example is ground level ozone data. An early approach to the analysis of this type of data was based on stationary Gaussian processes. But this approach to modeling did not turn out to be adequate. Since time differs intrinsically from space in that time moves only forward, while there may not be any preferred direction in space, time cannot be treated simply as an additional co-ordinate attached to the spatial co-ordinates. Another drawback of this approach is stationarity, which is seldom satisfied by real, physical processes. So, alternative approaches to modeling spatio-temporal data were clearly necessary. Briggs (1968) [@Briggs] proposed a simple method for constructing spatio-temporal models, based on a purely spatial model. Later, Cox and Isham (1988) [@Cox:Isham] adopted a similar approach for modeling rainfall patterns. Roughly, their idea was to start with a purely spatial stationary covariance kernel $C_{s}(\bold{h})$, where $\bold{h}\in \mathbb R^2$, and construct a spatio-temporal stationary covariance kernel $C(\bold{h},u)=\mathbb EC_{s}(\bold{h}-u\bold{v})$, where $\bold{v}\in \mathbb R^2$ is a random velocity vector and $u$ represents the time lag. When the velocity vector is nonrandom, this model reduces to the classical frozen field model (see page 428 of [@Gelfand:Diggle:Fuentes:Guttorp]), which is useful for modeling environmental processes that are under the influence of prevailing winds or ocean currents. Again, this model suffers from the oversimplified assumption of stationarity.
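The Briggs/Cox-Isham construction $C(\bold{h},u)=\mathbb EC_{s}(\bold{h}-u\bold{v})$ is easy to sketch by Monte Carlo. The squared-exponential choice for $C_s$ and the Gaussian velocity distribution below are illustrative assumptions, not the kernels those authors used.

```python
import numpy as np

rng = np.random.default_rng(0)

def C_s(h):
    """Illustrative stationary spatial kernel: squared exponential."""
    h = np.atleast_2d(h)
    return np.exp(-0.5 * np.sum(h**2, axis=-1))

def C_st(h, u, v_mean=(1.0, 0.0), v_cov=0.1 * np.eye(2), n_mc=20000):
    """C(h, u) = E[ C_s(h - u v) ] with a random velocity v ~ N(v_mean, v_cov)."""
    v = rng.multivariate_normal(v_mean, v_cov, size=n_mc)
    return C_s(np.asarray(h) - u * v).mean()

h = np.array([0.5, 0.0])
print(C_st(h, 0.0))   # u = 0: the velocity drops out, recovering C_s(h)
print(C_st(h, 0.5))   # lag u = 0.5: covariance shifted by the mean drift
# A degenerate v_cov (zero matrix) gives the frozen field model C_s(h - u v).
```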
Sampson and Guttorp (1992) [@Sampson:Guttorp] proposed a nonparametric nonstationary model for analysing spatial data, based on spatial deformation, and later extended it to the spatio-temporal setup [@Bruno:Guttorp:Sampson:Cocchi; @Guttorp:Meiring:Sampson]. Schmidt and O’Hagan (2003) [@Schmidt:O'Hagan] gave a Bayesian formulation of the model in the spatial setup. One basic problem with this approach is the requirement of replicates of the data, which are rarely available in practice. Only very recently, Anderes and Stein (2008) [@Anderes:Stein] proposed a way to estimate such deformation models from a single realization. Still, they considered only the spatial setup, and implementing the model in real data problems requires solving a large and quite difficult optimization problem (see [@Meiring:Monestiez]). A completely different approach to modeling spatio-temporal data is the convolution approach proposed by Higdon et al. (1999) [@Higdon:Swall:Kern]. Initially, they proposed it for purely spatial processes and later extended it to the spatio-temporal setup [@Higdon1; @Higdon2]. Several extensions and modifications of the convolution approach are adopted in [@Majumdar:Gelfand; @Majumdar:Paul:Bautista; @Zhu:Wu]. The basic idea is to start with a white noise process in space (space-time) and create a nonstationary spatial (spatio-temporal) process by convolving it with a spatially varying kernel. A different type of kernel convolution based nonstationary spatial process was proposed by Fuentes and Smith (2001) [@Fuentes:Smith], who convolved a family of spatially varying locally stationary processes with a spatial kernel to build a nonstationary process. A more physically motivated approach to modeling the kind of spatio-temporal data considered here is based on the dynamic linear model (DLM) introduced by West and Harrison (1997) [@West:Harrison]. Such models are called dynamic spatio-temporal models (DSTMs). Stroud et al.
(2001) [@Stroud:Muller:Sanso] proposed a dynamic spatio-temporal model for the analysis of tropical rainfall and Atlantic ocean temperature. Huerta et al. (2004) [@Huerta:Sanso:Stroud] proposed a seasonal dynamic spatio-temporal model for the analysis of ozone levels. A more flexible dynamic spatio-temporal model with a spatially varying state vector is proposed in Banerjee et al. (2005) [@Banerjee:Gamerman:Gelfand]. Several other authors have studied these models in the context of dimension reduction [@Lopes:Salazar:Gamerman]. Although DSTMs are used to incorporate covariate information in spatio-temporal modeling, these models can also be used in the absence of covariates. The basic idea behind the dynamic approach is to model the observed spatio-temporal process as a linear state space model, where the coefficients associated with the different covariates constitute the state vector. Direct construction of nonstationary covariance functions is yet another approach: Paciorek and Schervish (2006) [@Paciorek:Schervish] directly construct a nonstationary covariance function in the spatial setup, while Fuentes (2001, 2002) [@Fuentes1; @Fuentes2] proposed a spectral method. All the above mentioned approaches focus mainly on achieving nonstationarity, but nonseparability in space-time is another important aspect of space-time modeling. Roughly, nonseparability means that the space and time effects interact in the formation of the process; that is, the space effect depends on the time effect and vice versa. Although this is a highly realistic assumption, most of the spatio-temporal models existing in the literature assume separability. Some works on nonseparable models can be found in [@Cressie:Huang; @Gneiting; @Fuentes:Chen:Davis; @Fuentes:Chen:Davis:Lackmann].
Among them, Cressie and Huang (1999) [@Cressie:Huang] and Gneiting (2002) [@Gneiting] considered nonseparability in the stationary setup, while the others have developed spatio-temporal processes that are both nonseparable and nonstationary. Apart from the above mentioned approaches there are many more works in the context of spatio-temporal modeling; we have attempted to provide only a broad overview, which is by no means exhaustive.

Our proposed spatio-temporal process {#sec:our_proposal}
====================================

First recall that there are some arbitrary spatial locations $\bold{s}_{1}, \bold{s}_{2}, \bold{s}_{3},\cdots, \bold{s}_{n}$ and that the data $y(\bold{s}_{i},t)$ have been observed at those spatial locations at times $t=1, 2, 3,\cdots, T$. We assume that the observed spatio-temporal process $Y(\bold{s},t)$ is driven by an unobserved spatio-temporal process $X(\bold{s},t)$, which itself evolves in time in some unknown way. So, we model them via the state space approach, but instead of assuming any known form of the observational and evolutionary equations we allow them to be unknown. We propose a nonparametric state space based model, which can even capture nonlinear evolution flexibly. We refer to it as a nonparametric state space based spatio-temporal model. Our model has the following form: $$\begin{aligned} Y(\bold{s},t)&=f(X(\bold{s},t))+\epsilon(\bold{s},t), \label{eqn:npr1}\\ X(\bold{s},t)&=g(X(\bold{s},t-1))+\eta(\bold{s},t); \label{eqn:npr2}\end{aligned}$$ where $ \bold{s}\in \mathbb{R}^2 $ and $ t\in \{1,2,3,\ldots\}$; $X(\cdot,0)$ is a spatial Gaussian process with appropriate parameters; $\epsilon(\cdot,t)$ and $\eta(\cdot,t)$ are temporally independent and identically distributed spatial Gaussian processes; and $g(\cdot)$ and $f(\cdot)$ are Gaussian processes on $\mathbb{R}$. They are all independent of each other. To elaborate, let us consider a hierarchical break up of our model.
Suppose that the observed process depends on the unobserved process through some unknown continuous function $f(\cdot)$. For the purpose of Bayesian inference we must put some prior on $f(\cdot)$; hence, we put a Gaussian process prior on $f(\cdot)$, and similarly a Gaussian process prior on $g(\cdot)$. This approach is able to capture arbitrary functional dependence between the observed and the latent variables, hence the term nonparametric. Finally, to complete the model specification, we need to describe the parameters associated with the Gaussian processes. We assume that the Gaussian process $f(x)$ has mean function of the form $\beta_{0f}+\beta_{1f}x$ (where $\beta_{0f}, \beta_{1f}$ are suitable parameters) and covariance kernel of the form $c_{f}(x_{1},x_{2})=\gamma(\|x_1-x_2\|)$, where $\gamma$ is a positive definite function involving parameters that determine the smoothness of the sample paths of $f(\cdot)$. Moreover, $\gamma$ is such that the centered Gaussian process associated with it has continuous sample paths. Thus, we consider isotropic covariance functions with suitable smoothness properties. Typical examples of $\gamma$ are the exponential, powered exponential, Gaussian, Matérn, etc. (see, for example, Table 2.1 of [@Banerjee04] for other examples of such covariance kernels). Similarly, parameters $\beta_{0g}, \beta_{1g}$ and $c_{g}(x_{1},x_{2})$ are associated with the Gaussian process $g(x)$. The zero mean Gaussian processes associated with the noise variables have covariance kernels $c_{\epsilon}(\bold{s},\bold{s}^{\prime})$ and $c_{\eta}(\bold{s},\bold{s}^{\prime})$, which are also of the form discussed above. Regarding the Gaussian process associated with $X(\cdot,0)$, we assume a mean process of the form $\mu_{0}(\cdot)$ and an isotropic covariance kernel $c_{0}(\cdot,\cdot)$.
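To make the hierarchical specification concrete, the following Python sketch simulates data from (\[eqn:npr1\]) and (\[eqn:npr2\]) with squared exponential kernels, using for illustration the parameter values of our later simulation study; taking a zero mean for $X(\cdot,0)$ is a simplifying assumption, and the lazy, conditioned evaluation of $f$ and $g$ ensures that one single random function is used at every time step:

```python
import numpy as np

def sq_exp(a, b, sig2, lam):
    """Squared exponential kernel c(u, v) = sig2 * exp(-lam * ||u - v||^2)."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return sig2 * np.exp(-lam * d2)

def mvn_draw(mean, cov, rng):
    """Robust multivariate normal draw; clips tiny negative eigenvalues from round-off."""
    w, V = np.linalg.eigh((cov + cov.T) / 2.0)
    return mean + V @ (np.sqrt(np.clip(w, 0.0, None)) * rng.standard_normal(len(w)))

class GPFunction:
    """One random function drawn lazily from a GP with mean b0 + b1*x.
    Past evaluations are stored, so every call conditions on them and the
    SAME function is reused across time steps."""
    def __init__(self, b0, b1, sig2, lam, rng):
        self.b0, self.b1, self.sig2, self.lam, self.rng = b0, b1, sig2, lam, rng
        self.X, self.F = np.empty((0, 1)), np.empty(0)
    def __call__(self, x):
        x = np.asarray(x, float).reshape(-1, 1)
        mean = self.b0 + self.b1 * x.ravel()
        cov = sq_exp(x, x, self.sig2, self.lam)
        if self.F.size:  # condition on earlier evaluations of the same draw
            Kxo = sq_exp(x, self.X, self.sig2, self.lam)
            Koo = sq_exp(self.X, self.X, self.sig2, self.lam) + 1e-6 * np.eye(self.F.size)
            sol = np.linalg.solve(Koo, Kxo.T)
            mean += sol.T @ (self.F - self.b0 - self.b1 * self.X.ravel())
            cov -= Kxo @ sol
        fx = mvn_draw(mean, cov, self.rng)
        self.X = np.vstack([self.X, x]); self.F = np.concatenate([self.F, fx])
        return fx

rng = np.random.default_rng(0)
n, T = 15, 20
S = rng.uniform(0.0, 2.0, size=(n, 2))      # monitoring sites in a 2 x 2 square
f = GPFunction(-4.1, 0.51, 1.0, 4.3, rng)   # observation function f ~ GP
g = GPFunction(5.1, 0.64, 1.0, 2.4, rng)    # evolution function g ~ GP
Sig_eta = sq_exp(S, S, 4.9 ** 2, 6.25)      # covariance of eta(., t)
Sig_eps = sq_exp(S, S, 4.0 ** 2, 6.25)      # covariance of epsilon(., t)
x = mvn_draw(np.zeros(n), sq_exp(S, S, 5.8 ** 2, 4.0), rng)   # X(., 0)
Y = np.empty((T, n))
for t in range(T):
    x = g(x) + mvn_draw(np.zeros(n), Sig_eta, rng)       # state equation
    Y[t] = f(x) + mvn_draw(np.zeros(n), Sig_eps, rng)    # observation equation
```

The resulting array `Y` holds one simulated spatial time series per row, exactly the data layout assumed in the rest of the article.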
For convenience we introduce separate symbols for the mean vector and the covariance matrix associated with $(X(\bold{s}_{1},0),X(\bold{s}_{2},0),\cdots, X(\bold{s}_{n},0))$, where $\bold{s}_{1}, \bold{s}_{2}, \bold{s}_{3},\cdots, \bold{s}_{n}$ are the spatial locations at which the data are observed; we denote them by ${\boldsymbol{\mu}}_{0}$ and ${\boldsymbol{\Sigma}}_{0}$, respectively. From (\[eqn:npr1\]) and (\[eqn:npr2\]) it is not difficult to see that our model boils down to a simple DSTM with no covariates if the process means associated with the Gaussian processes $f(\cdot)$ and $g(\cdot)$ are given by $\beta_{0f}=\beta_{0g}=0$, $\beta_{1f}=\beta_{1g}=1$, and the process variances (denoted by $\sigma_{f}^{2}$ and $\sigma_{g}^{2}$) become 0. The model equations (\[eqn:npr1\]) and (\[eqn:npr2\]) then reduce to the following forms $$\begin{aligned} Y(\bold{s},t) &=X(\bold{s},t)+\epsilon(\bold{s},t), \label{eqn:DLM1}\\ X(\bold{s},t) &=X(\bold{s},t-1)+\eta(\bold{s},t); \label{eqn:DLM2}\end{aligned}$$ where $ \bold{s}\in \mathbb{R}^2 $ and $ t\in\{1,2,3,\ldots\}$. Although we develop our Gaussian process based spatio-temporal model for equi-spaced time points, a simple modification can handle non-equi-spaced time points as well. However, for the sake of simplicity and brevity, in this article we confine ourselves to the framework of equi-spaced time points.

Some measurability and existential issues
-----------------------------------------

Before we proceed to explore the properties of our model, we need to ensure that a family of valid spatio-temporal stochastic processes is induced by the proposed model; only then can physical processes be modeled by it. In general such issues are almost always trivially satisfied and so are never discussed in detail.
But in this case we need to show that $f(X(\bold{s}_{1},t)),f(X(\bold{s}_{2},t)),\cdots,f(X(\bold{s}_{n},t))$ are jointly measurable for any $n$ and any set of spatial locations $\bold{s}_{1},\bold{s}_{2},\cdots,\bold{s}_{n}$, and this is not a trivial problem. The difficulty is that when $f$ and $X$ are both random, $f(X)$ need not be a measurable, valid random variable. It is the sample path continuity of $f(\cdot)$ that ensures measurability of $f(X(\bold{s},t))$. The following theorem establishes this mathematically. \[thm:measurable\] The model defines a family of valid (measurable) spatio-temporal processes on $\mathbb R^{2}\times\mathbb Z^+$. Once it is established that the proposed model induces a family of valid spatio-temporal processes, we look into some important properties, such as the joint distributions of the state and observed variables and the covariance structure of the observed process. An important point is that although we develop our model with $\mathbb{R}^2$ as the spatial domain, all the results go through for $\mathbb{R}^d$ $(d>2)$ as well.

Joint distribution of the variables
-----------------------------------

It is of importance to derive the joint distribution of the observed variables. As our model is based on an implicit hierarchical structure, the joint distribution of the observed variables is non-Gaussian. But before going into the details, we need to derive the joint distribution of the state variables, which will also be required for our MCMC based posterior inference. \[thm:state\] Suppose that the spatio-temporal process is observed at locations $\bold{s}_{1}, \bold{s}_{2}, \bold{s}_{3},\cdots, \bold{s}_{n}$ for times $t=1, 2, 3,\cdots, T$.
Then the joint distribution of the state variables is non-Gaussian and has the pdf
$$\frac{1}{(2\pi)^{\frac{n}{2}}|{\mathbf{\Sigma}}_{0}|^{\frac{1}{2}}}
\exp\left[-\frac{1}{2}{\begin{pmatrix}x(\bold{s}_{1},0)-\mu_{01}\\x(\bold{s}_{2},0)-\mu_{02}\\ \vdots\\x(\bold{s}_{n},0)-\mu_{0n}\end{pmatrix}}^{\prime}{{\mathbf{\Sigma}}_{0}}^{-1}{\begin{pmatrix}x(\bold{s}_{1},0)-\mu_{01}\\x(\bold{s}_{2},0)-\mu_{02}\\ \vdots\\x(\bold{s}_{n},0)-\mu_{0n}\end{pmatrix}}\right]\times$$
$$\frac{1}{(2\pi)^{\frac{nT}{2}}|\tilde{\mathbf{\Sigma}}|^{\frac{1}{2}}}
\exp\left[-\frac{1}{2}{\begin{pmatrix}x(\bold{s}_{1},1)-\beta_{0g}-\beta_{1g} x(\bold{s}_{1},0)\\x(\bold{s}_{2},1)-\beta_{0g}-\beta_{1g} x(\bold{s}_{2},0)\\ \vdots\\x(\bold{s}_{n},T)-\beta_{0g}-\beta_{1g} x(\bold{s}_{n},T-1)\end{pmatrix}}^{\prime}{\tilde{\mathbf{\Sigma}}}^{-1}{\begin{pmatrix}x(\bold{s}_{1},1)-\beta_{0g}-\beta_{1g} x(\bold{s}_{1},0)\\x(\bold{s}_{2},1)-\beta_{0g}-\beta_{1g} x(\bold{s}_{2},0)\\ \vdots\\x(\bold{s}_{n},T)-\beta_{0g}-\beta_{1g} x(\bold{s}_{n},T-1)\end{pmatrix}}\right],$$
where ${\boldsymbol{\mu}}_{0}=(\mu_{01},\mu_{02},\cdots,\mu_{0n})^\prime$ and ${\mathbf{\Sigma}}_{0}$ are, as defined above, the mean vector and the covariance matrix of $(X(\bold{s}_1, 0), X(\bold{s}_2, 0),\ldots, X(\bold{s}_n , 0))$, and
$$\tilde{\mathbf{\Sigma}}={\mathbf{I}}_{T\times T}\bigotimes{\mathbf{\Sigma}}_{\eta}+\mathbf{\Sigma},$$
where the elements of ${\mathbf{\Sigma}}_{\eta}$ are obtained from the purely spatial covariance function $c_{\eta}$ and the elements of $\mathbf{\Sigma}$ are obtained from the covariance function $c_{g}$ as follows: the $(i,j)$-th entry of ${\mathbf{\Sigma}}_{\eta}$ is $c_{\eta}(\bold{s}_{i},\bold{s}_{j})$ and the $((t_{1}-1)n+i,(t_{2}-1)n+j)$-th entry of $\mathbf{\Sigma}$ is $c_{g}(x(\bold{s}_{i},t_{1}-1),x(\bold{s}_{j},t_{2}-1))$, where $1 \leq t_{1},t_{2} \leq T$ and
$1\leq i,j \leq n$. Although the above density superficially resembles a Gaussian density, the involvement of $x(\bold{s}_{i},t)$ in $\tilde{\mathbf{\Sigma}}$ renders it non-Gaussian. In the extreme case when the process variance $c_{g}(0,0)$ ($=\sigma_{g}^2$) of the Gaussian process $g(\cdot)$ is 0, $\tilde{\mathbf{\Sigma}}$ becomes a block diagonal matrix with identical blocks and the joint density becomes Gaussian. In the formation of $\tilde{\mathbf{\Sigma}}$, the component ${\mathbf{I}}_{T\times T}\bigotimes\mathbf{\Sigma_{\eta}}$ corresponds to linear evolution and Gaussianity, whereas the component $\mathbf{\Sigma}$ corresponds to nonlinear evolution and non-Gaussianity. Moreover, it is clear from the form of the density that the temporal aspect is imposed through both the location function and the scale function associated with the latent process, making the model more flexible. The interesting property that the joint density of the observed variables is also non-Gaussian follows from the non-Gaussianity of the joint distribution of the latent states. We have the following theorem in this regard. \[thm:observe\] Suppose that the spatio-temporal process is observed at locations $\bold{s}_{1}, \bold{s}_{2}, \bold{s}_{3},\cdots, \bold{s}_{n}$ for times $t=1, 2, 3,\cdots, T$.
Then the following hold true:\
(a) The joint distribution of the observed variables is a Gaussian mixture with density
$$\int_{\mathbb R^{nT}}\frac{1}{(2\pi)^{\frac{nT}{2}}|{\mathbf\Sigma}_{f,\epsilon}|^{\frac{1}{2}}}
\exp\left[-\frac{1}{2}{\begin{pmatrix}y(\bold{s}_{1},1)-\beta_{0f}-\beta_{1f}x(\bold{s}_{1},1)\\y(\bold{s}_{2},1)-\beta_{0f}-\beta_{1f}x(\bold{s}_{2},1)\\ \vdots\\y(\bold{s}_{n},T)-\beta_{0f}-\beta_{1f}x(\bold{s}_{n},T)\end{pmatrix}}^{\prime}{{\mathbf\Sigma}_{f,\epsilon}}^{-1}{\begin{pmatrix}y(\bold{s}_{1},1)-\beta_{0f}-\beta_{1f}x(\bold{s}_{1},1)\\y(\bold{s}_{2},1)-\beta_{0f}-\beta_{1f}x(\bold{s}_{2},1)\\ \vdots\\y(\bold{s}_{n},T)-\beta_{0f}-\beta_{1f}x(\bold{s}_{n},T)\end{pmatrix}}\right] h(\mathbf{x})\,d\mathbf{x},$$
where the mixing density $h(\mathbf{x})$ is obtained by marginalizing the pdf derived in Theorem \[thm:state\] with respect to the variables $x(\bold{s}_{1},0),x(\bold{s}_{2},0),\cdots,x(\bold{s}_{n},0)$, and the $((t_{1}-1)n+i,(t_{2}-1)n+j)$-th entry of $\mathbf{\Sigma}_{f,\epsilon}$ is given by $c_{f}(x(\bold{s}_i,t_1),x(\bold{s}_j,t_2))+c_{\epsilon}(\bold{s}_i,\bold{s}_j)\delta(t_1-t_2)$, where $1\leq t_{1},t_{2}\leq T$ and $1\leq i,j\leq n$.\
(b) In the extreme case when the process variances $c_{g}(0,0)$ ($=\sigma_{g}^2$) and $c_{f}(0,0)$ ($=\sigma_{f}^2$) of the Gaussian processes $g(\cdot)$ and $f(\cdot)$ are both 0, the joint distribution becomes Gaussian. So, our method is flexible in the sense that it can yield both Gaussian and non-Gaussian models for the observed data. Non-Gaussianity itself is of independent interest in the context of spatial modeling, and some works in this direction are [@Kim:Mallick; @Oliveira:Kedem:Short; @Fonseca:Steel; @Palacios:Steel].
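For implementation purposes, the matrix $\tilde{\mathbf{\Sigma}}$ of Theorem \[thm:state\] can be assembled directly from its definition. The Python sketch below does this for the squared exponential kernel adopted later in Section 4; the function names are ours, and the plain quadruple loop is for clarity rather than speed:

```python
import numpy as np

def sq_exp(u, v, sig2, lam):
    """Squared exponential kernel c(u, v) = sig2 * exp(-lam * ||u - v||^2)."""
    return sig2 * np.exp(-lam * np.sum((np.asarray(u) - np.asarray(v)) ** 2))

def build_sigma_tilde(S, X, sig2_eta, lam_eta, sig2_g, lam_g):
    """Assemble Sigma_tilde = I_T (x) Sigma_eta + Sigma, where Sigma_eta holds
    c_eta(s_i, s_j) and block (t1, t2) of Sigma holds
    c_g(x(s_i, t1 - 1), x(s_j, t2 - 1))."""
    n, T = len(S), X.shape[0] - 1          # X has rows for t = 0, ..., T
    Sig_eta = np.array([[sq_exp(S[i], S[j], sig2_eta, lam_eta)
                         for j in range(n)] for i in range(n)])
    Sigma = np.empty((n * T, n * T))
    for t1 in range(1, T + 1):
        for t2 in range(1, T + 1):
            for i in range(n):
                for j in range(n):
                    Sigma[(t1 - 1) * n + i, (t2 - 1) * n + j] = sq_exp(
                        X[t1 - 1, i], X[t2 - 1, j], sig2_g, lam_g)
    return np.kron(np.eye(T), Sig_eta) + Sigma

# small illustration: n = 2 sites, T = 2 time steps
rng = np.random.default_rng(0)
S_demo = rng.uniform(0.0, 2.0, size=(2, 2))
X_demo = rng.normal(size=(3, 2))           # states for t = 0, 1, 2
M = build_sigma_tilde(S_demo, X_demo, 1.0, 1.0, 1.0, 1.0)
```

Because the kernels are symmetric, the resulting $nT\times nT$ matrix is symmetric, which is a convenient correctness check.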
Covariance exploration
----------------------

Before delving into a deeper study of the covariance function and related issues such as nonstationarity and nonseparability, a more basic question is whether the process is light tailed, that is, whether or not all the coordinate variables have finite variance. From Theorem \[thm:observe\] we see that the observed variables are distributed as a Gaussian mixture, and Gaussian mixtures can sometimes give rise to heavy tailed distributions; so an answer to the above question is not immediately available. In what follows, we first establish mathematically that the process is light tailed, and then we show that the covariance function is nonstationary and nonseparable. \[thm:covariance\] (a) The observed spatio-temporal process $Y(\bold{s},t)$ is light tailed in the sense that all the coordinate variables have finite variance.\
(b) The covariance function $c_{y}((\bold{s},t),(\bold{s^*},t^*))$ of the observed spatio-temporal process is nonstationary and nonseparable for any $\bold{s},\bold{s}^*,t,t^*$. Although we are able to show that our method yields a nonstationary and nonseparable covariance function, it has the obvious drawback that no closed form expression for the covariance function is available. This problem, however, exists in many other approaches, including the convolution and DSTM models, where a closed form covariance function is available only in very few special cases. Indeed, the direct construction of nonstationary covariance functions is the only approach that yields a closed form covariance function. Moreover, from the prediction point of view this is not a serious problem, since what we need are the posterior predictive distributions at un-monitored locations at arbitrary points of time, which do not require closed form expressions of covariance functions.
However, we have still attempted to derive a closed form expression, and are partially successful in the sense that when our model is approximately linear (in a suitable sense described below), the covariance function is approximately a geometric function of the time lag.

### Approximate form of the covariance function

Here, in Theorem \[thm:closed\_covariance\], we show that if the process variances $\sigma_{g}^2$ and $\sigma_{f}^2$ are small, then under some additional assumptions the covariance function is approximately a geometric function of the time lag. This result is mainly of theoretical interest and has no bearing on the application of our model. \[thm:closed\_covariance\] Assume that $|\beta_{1g}|<1$. Then for given $\epsilon^{\prime\prime}>0$ arbitrarily small, $\exists~\delta>0$ such that for $0<\sigma_{g}^{2},\sigma_{f}^{2}<\delta$ the covariance between $Y(\bold{s},t)$ and $Y(\bold{s^*},t^*)$, denoted by $c_{y}((\bold{s},t),(\bold{s^*},t^*))$, is of the following form $$\begin{aligned} \beta_{1g}^{|t-t^*|}\left[c_{0}(\bold{s},\bold{s}^*)+\left[\frac{1-\beta_{1g}^{2(t^*+1)}}{1-\beta_{1g}^2}\right]c_{\eta}(\bold{s},\bold{s}^*)\right]-\epsilon^{\prime\prime} &\leq c_{y}((\bold{s},t),(\bold{s^*},t^*))\\ &\leq \beta_{1g}^{|t-t^*|}\left[c_{0}(\bold{s},\bold{s}^*)+\left[\frac{1-\beta_{1g}^{2(t^*+1)}}{1-\beta_{1g}^2}\right]c_{\eta}(\bold{s},\bold{s}^*)\right]+\epsilon^{\prime\prime}.\end{aligned}$$

Sample path properties
----------------------

Until now we have explored the finite dimensional distributions and properties of the spatio-temporal process $Y(\bold{s},t)$. But finite dimensional properties alone are not sufficient to characterize an arbitrary stochastic process: two stochastic processes with completely different sample path behaviour may have identical finite dimensional distributions and properties.
We demonstrate this through the following simple example (see also [@Adler81]). Consider two spatio-temporal processes $Y(\bold{s},t)(\omega)$ and $Y^{*}(\bold{s},t)(\omega)$ defined on the same probability space $\Omega=[0,1]^2$ as follows: $Y(\bold{s},t)(\omega)=0$ for all $\bold{s},t,\omega$, while $Y^{*}(\bold{s},t)(\omega)=1$ if $\bold{s}=\omega$ and $0$ otherwise. Then one can show that for any fixed $t$, $Y(\bold{s},t)(\omega)$ has continuous sample paths with probability $1$, whereas $Y^{*}(\bold{s},t)(\omega)$ has discontinuous sample paths with probability $1$. Now, we study the path properties of the proposed spatio-temporal process. The first part of the following theorem states that the observed spatio-temporal process has continuous sample paths almost surely, and the second part states that under additional smoothness assumptions on the covariance functions, the observed spatio-temporal process has smooth sample paths almost surely. \[thm:sample\_path\] (a) The spatio-temporal process $Y(\bold{s},t)$ has continuous sample paths with probability $1$.\
(b) Assume that the covariance functions $c_{f}(\cdot,\cdot),c_{g}(\cdot,\cdot), c_{\epsilon}(\cdot,\cdot),c_{\eta}(\cdot,\cdot),c_{0}(\cdot,\cdot)$ satisfy the additional smoothness assumption that centered Gaussian processes with these covariance functions have almost surely $k$ times differentiable sample paths. Then the spatio-temporal process $Y(\bold{s},t)$ will also have almost surely $k$ times differentiable sample paths. The first part of the above theorem states that the spatial surface induced by our model at any time point $t$ is almost always continuous. So, unless the spatial surface is extremely irregular, any spatio-temporal data can be modeled reasonably adequately by our proposed spatio-temporal process. A stronger statement, however, is made in the second part of the theorem.
It says that if the covariance functions $c_{f}(\cdot,\cdot),c_{g}(\cdot,\cdot),c_{\epsilon}(\cdot,\cdot), c_{\eta}(\cdot,\cdot),c_{0}(\cdot,\cdot)$ are sufficiently regular (smooth), then the sample paths of the process $Y(\bold{s},t)$ are almost always regular (smooth), and their degree of smoothness depends on the degree of smoothness of the covariance functions. As an illustration, consider the situation when all the covariance functions belong to the Matérn family with smoothness parameter $\nu$. Then the sample paths of the centered Gaussian processes with the respective covariance functions are almost surely $\lceil \nu-1 \rceil$ times differentiable, where, for any $x$, $\lceil x\rceil$ denotes the smallest integer greater than or equal to $x$. So, the spatio-temporal process $Y(\bold{s},t)$ will also have almost surely $\lceil \nu-1 \rceil$ times differentiable sample paths. Hence, when we have clear evidence from the data or prior knowledge regarding the degree of smoothness of the spatio-temporal process, we should choose all the covariance functions $c_{f}(\cdot,\cdot),c_{g}(\cdot,\cdot),c_{\epsilon}(\cdot,\cdot),c_{\eta}(\cdot,\cdot), c_{0}(\cdot,\cdot)$ from the Matérn family with the specific value of $\nu$.

Model fitting and prediction
============================

We now illustrate how our model can be used to make predictions at new spatio-temporal coordinates. First, let us specify the prior structure. We consider bivariate Gaussian priors for each of $(\beta_{0g},\beta_{1g})$ and $(\beta_{0f},\beta_{1f})$. Although any reasonable isotropic covariance kernel that satisfies the mild regularity conditions mentioned in Section \[sec:our\_proposal\] can be used in our model, for the sake of simplicity we consider the squared exponential covariance kernel with the representation $c(\bold{u},\bold{v})=\sigma^2 e^{-\lambda||\bold{u}-\bold{v}||^2}$.
Since we have five covariance kernels $c_{f}(\cdot,\cdot),c_{g}(\cdot,\cdot),c_{\epsilon}(\cdot,\cdot),c_{\eta}(\cdot,\cdot), c_{0}(\cdot,\cdot)$, we need to put priors on five scale parameters $\sigma_{f}^2,\sigma_{g}^2, \sigma_{\epsilon}^2,\sigma_{\eta}^2,\sigma_{0}^2$ and five smoothness parameters $\lambda_{f},\lambda_{g}, \lambda_{\epsilon},\lambda_{\eta},\lambda_{0}$. We consider lognormal priors for all of them. We also take an $N(\boldsymbol{0},\boldsymbol{I})$ prior for the vector parameter ${\boldsymbol{\mu}}_{0}$. All the priors considered above are mutually independent. With the prior specification as above and the conditional densities $[\mathbf{y}|\mathbf{x},\tilde{\theta}]$ and $[\mathbf{x}|\tilde{\theta}]$ explicitly available ($\tilde{\theta}$ denotes the vector of all the parameters), we design a Gibbs sampler with Gaussian full conditionals for ${\boldsymbol{\mu}}_{0}$, $(\beta_{0g},\beta_{1g})$ and $(\beta_{0f},\beta_{1f})$, and Metropolis steps for the scale parameters $\sigma_{f}^2,\sigma_{g}^2,\sigma_{\epsilon}^2,\sigma_{\eta}^2, \sigma_{0}^2$ and the smoothness parameters $\lambda_{f},\lambda_{g},\lambda_{\epsilon},\lambda_{\eta},\lambda_{0}$. We update the latent state vector $\mathbf{x}_{nT}$ using Transformation based Markov Chain Monte Carlo (TMCMC), introduced by Dutta and Bhattacharya (2014) [@Dutta:Bhattacharya]. In particular, we use the additive transformation, which has been shown by Dutta and Bhattacharya (2014) [@Dutta:Bhattacharya] to require a smaller number of “move types” compared to other valid transformations. The idea of TMCMC is simple yet powerful. Here we briefly illustrate the idea of additive TMCMC by contrasting it with the traditional Random Walk Metropolis (RWM) approach, assuming that we wish to update all the variables simultaneously. Suppose that we want to simulate from the conditional distribution $[\mathbf{x}_{nT}|\tilde{\theta}]$ using the RWM approach.
Then we have to simulate $nT$ independent Gaussian random variables $\mathbf{E}_{nT}=(\epsilon_{1},\epsilon_{2},\cdots,\epsilon_{nT})'$; assuming that the current state of the Markov chain is $\mathbf{x}_{nT}^{(i)}$, we accept the new state $\mathbf{x}_{nT}^{(i)}+\mathbf{E}_{nT}$ with probability $\min\left\{1,\frac{[\mathbf{x}_{nT}^{(i)}+ \mathbf{E}_{nT}|\tilde{\theta}]}{[\mathbf{x}_{nT}^{(i)}|\tilde{\theta}]}\right\}$. However, if $nT$ is large then this acceptance probability tends to be extremely small; the RWM chain then sticks to a particular state for a very long time, and convergence to the posterior $[\mathbf{x}_{nT}|\tilde{\theta}]$ is very slow. Additive TMCMC instead simulates only one $\epsilon>0$ from some arbitrary distribution left-truncated at zero, and then forms the $nT$ dimensional vector $\mathbf{E}_{nT}$ by setting the $k$-th element independently to $-\epsilon$ with probability $p_k$ and to $+\epsilon$ with probability $1-p_k$. For our applications we set $p_k=1/2$ $\forall$ $k=1,2,\ldots,nT$. We then accept the new state $\mathbf{x}_{nT}^{(i)}+\mathbf{E}_{nT}$ with acceptance probability $\min\left\{1,\frac{[\mathbf{x}_{nT}^{(i)}+\mathbf{E}_{nT}|\tilde{\theta}]}{[\mathbf{x}_{nT}^{(i)}|\tilde{\theta}]}\right\}$. Dutta and Bhattacharya (2014) [@Dutta:Bhattacharya], Dey and Bhattacharya (2014a) [@Dey:Bhattacharya2014a] and Dey and Bhattacharya (2014b) [@Dey:Bhattacharya2014b] provide details of the many advantages of TMCMC (in particular, additive TMCMC) over traditional MCMC (in particular, RWM). In our setup, however, we have not used TMCMC directly. Instead, we simulate $T$ independent $\epsilon_{t}$, each used to update the $n$ latent observations corresponding to time $t$. This may be called block TMCMC; it improves mixing significantly over ordinary TMCMC. In our model the dimension of the state vector is large, and updating it using the usual RWM would be very inefficient; TMCMC saves us from this pitfall.
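The additive TMCMC scheme just described can be written in a few lines. The following self-contained Python sketch targets a toy posterior (a standard Gaussian in 50 dimensions); the function names and the half-normal proposal for $\epsilon$ are our own illustrative choices, and since the additive transformation has unit Jacobian the acceptance ratio is simply the target ratio:

```python
import numpy as np

def additive_tmcmc(logpost, x0, n_iter, scale=1.0, rng=None):
    """Additive TMCMC: one scalar epsilon > 0 per iteration, attached with
    independent random signs to every coordinate, accepted with the MH ratio."""
    rng = rng or np.random.default_rng()
    x = np.array(x0, float)
    lp = logpost(x)
    chain = np.empty((n_iter, len(x)))
    for i in range(n_iter):
        eps = abs(rng.normal(0.0, scale))             # one positive epsilon
        signs = rng.choice([-1.0, 1.0], size=len(x))  # p_k = 1/2 for each coordinate
        prop = x + signs * eps
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # accept with min{1, ratio}
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# toy target: independent standard normals in 50 dimensions
d = 50
chain = additive_tmcmc(lambda z: -0.5 * np.sum(z ** 2), np.zeros(d),
                       n_iter=5000, scale=0.3, rng=np.random.default_rng(1))
```

Note that only a single scalar is proposed per iteration, regardless of the dimension, which is exactly what makes the scheme attractive for the high-dimensional state vector $\mathbf{x}_{nT}$.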
Using the above sampling-based approach we study the posterior distribution of the unknown quantities and make inferences regarding the parameters. But our main goal is to predict $y(\bold{s}^*,t^*)$ at some new spatio-temporal coordinate $(\bold{s}^*,t^*)$. So, we augment $x(\bold{s}^*,t^*)$ with $\{x(\bold{s}_{i},t)\}$ where $ i=1,\cdots,n $ and $ t=1,\cdots,T $. The conditional distribution of $[\mathbf{y}|\mathbf{x},x(\bold{s}^*,t^*),\tilde{\theta}]$ remains the same as $[\mathbf{y}|\mathbf{x},\tilde{\theta}]$ and posterior simulation is done exactly the same way as in the case of model-fitting. Once the post burn-in posterior samples $(x^{(B)}(\bold{s}^*,t^*),\tilde{\theta}^{(B)}),(x^{(B+1)}(\bold{s}^*,t^*),\tilde{\theta}^{(B+1)}),\cdots$ from $[x(\bold{s}^*,t^*),\tilde{\theta}|\mathbf{y}]$ are available, we simulate $y^{(B)}(\bold{s}^*,t^*),y^{(B+1)}(\bold{s}^*,t^*),\cdots$ from\ $N\left(\beta_{0f}^{(j)}+\beta_{1f}^{(j)}x^{(j)}(\bold{s}^*,t^*)+{\boldsymbol{\Sigma}}^{(j)}_{12}\left({\boldsymbol{\Sigma}}^{(j)}_{22}\right)^{-1}\boldsymbol{V}^{(j)},(\sigma_{f}^{(j)})^2+(\sigma_{\epsilon}^{(j)})^{2}-{\boldsymbol{\Sigma}}^{(j)}_{12}\left({\boldsymbol{\Sigma}}^{(j)}_{22}\right)^{-1}{\boldsymbol{\Sigma}}^{(j)}_{21}\right)$ where $j=B,B+1,\cdots$ and $\{\beta_{0f}^{(j)},\beta_{1f}^{(j)},\sigma_{f}^{(j)},\sigma_{\epsilon}^{(j)}\}$ are post burn-in posterior samples for the respective parameters. Here ${\boldsymbol{\Sigma}}^{(j)}_{12}$ is the covariance between $x^{(j)}(\bold{s}^*,t^*)$ and the vector $\{x^{(j)}(\bold{s}_{i},t)\}$ and ${\boldsymbol{\Sigma}}^{(j)}_{22}$ is the covariance matrix of the vector $\{x^{(j)}(\bold{s}_{i},t)\}$. The vector $\boldsymbol{V}^{(j)}$ consists of $ y(\bold{s}_{i},t)-\beta_{0f}^{(j)}-\beta_{1f}^{(j)}x^{(j)}(\bold{s}_{i},t)$ where $ i=1,\cdots,n $ and $ t=1,\cdots,T $. 
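The conditional Gaussian draw above can be sketched in code. The following is a simplified illustration only: we take the squared exponential kernel for $c_{f}$ and, as a simplifying assumption, white observation noise $\sigma_{\epsilon}^2\mathbf{I}$ in place of a spatially correlated $c_{\epsilon}$; all variable names are ours, and the example inputs are hypothetical:

```python
import numpy as np

def predictive_draw(y, x, x_star, b0f, b1f, sig2_f, lam_f, sig2_eps, rng):
    """One draw of y(s*, t*) given a posterior sample of the latent states:
    a conditional Gaussian with mean b0f + b1f*x* + S12 S22^{-1} V and
    variance sig2_f + sig2_eps - S12 S22^{-1} S21."""
    S12 = sig2_f * np.exp(-lam_f * (x_star - x) ** 2)     # cov(f(x*), f(x_i))
    S22 = sig2_f * np.exp(-lam_f * (x[:, None] - x[None, :]) ** 2) \
          + sig2_eps * np.eye(len(x))                     # cov of observed residuals
    V = y - b0f - b1f * x                                 # residual vector
    mean = b0f + b1f * x_star + S12 @ np.linalg.solve(S22, V)
    var = sig2_f + sig2_eps - S12 @ np.linalg.solve(S22, S12)
    return mean + np.sqrt(max(var, 0.0)) * rng.standard_normal()

# hypothetical posterior sample of latent states and matching observations
rng = np.random.default_rng(0)
x_states = rng.normal(size=12)
y_obs = 1.0 + 0.5 * x_states + 0.1 * rng.standard_normal(12)
draw = predictive_draw(y_obs, x_states, 0.3, 1.0, 0.5, 0.2, 4.0, 0.01, rng)
```

Repeating this draw once per post burn-in posterior sample yields the collection used for prediction below.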
These $y^{(B)}(\bold{s}^*,t^*),y^{(B+1)}(\bold{s}^*,t^*),\cdots$ are samples from the posterior predictive distribution $[y(\bold{s}^*,t^*)|\mathbf{y}]$, which is used for prediction at the new spatio-temporal coordinate $(\bold{s}^*,t^*)$.

Simulation and real data study
==============================

Illustration via a simulation example
-------------------------------------

We randomly sample $15$ points from a square of side length $2$ and generate spatial time series of length $20$ at those points using our model. The true parameter values for the simulation are $\beta_{0f}=-4.1,\beta_{1f}=0.51,\beta_{0g}=5.1,\beta_{1g}=0.64$ and $\sigma_{f}=\sigma_{g}=1.0, \sigma_{\epsilon}=4.0,\sigma_{\eta}=4.9,\sigma_{0}=5.8$. The true values of the smoothness parameters are $\lambda_{f}=4.3,\lambda_{g}=2.4,\lambda_{\epsilon}=\lambda_{\eta}=6.25,\lambda_{0}=4.0$. We simulate the vector ${\boldsymbol{\mu}}_{0}$ from $N(\boldsymbol{0},\boldsymbol{I})$. We consider the prior structure mentioned in Section 4. We use independent diffuse normal priors with mean $0$ and variance $1000$ for each of $\beta_{0g},\beta_{1g},\beta_{0f},\beta_{1f}$. For the vector ${\boldsymbol{\mu}}_{0}$ we consider an $N(\boldsymbol{0},\boldsymbol{I})$ prior. For the scale parameters $\sigma_{f}^2,\sigma_{g}^2$ we use lognormal priors with location parameter $0$ and scale parameter $0.7$. For the scale parameters $\sigma_{\epsilon}^2,\sigma_{\eta}^2,\sigma_{0}^2$, associated respectively with the error processes and the spatial process at time $0$, we use lognormal priors with location parameter $3$ and scale parameter $0.1$. The choice of the hyperparameters associated with $\sigma_{f}^2,\sigma_{\epsilon}^2$ is made based on the variance of the data. We also use lognormal priors for the smoothness parameters; however, these lognormal priors are concentrated near $0$.
This somewhat concentrated prior provides a safeguard against ill-conditioning of the relevant covariance matrices during the MCMC iterations; indeed, free movement of the smoothness parameters in their respective parameter spaces can make the covariance matrices $\tilde{\mathbf{\Sigma}}$ and $\mathbf{\Sigma}_{f,\epsilon}$ drastically ill-conditioned. In making the above selection of hyperparameters we have extensively used pilot MCMC runs based on a smaller subset of the entire dataset. Since the pilot runs were much faster due to the smaller data sizes, we were able to experiment with many such runs under different combinations of the hyperparameters. We chose the combination with the smallest total absolute deviation of the posterior predictive median surface from the true data surface. We then performed our final MCMC computation on the entire dataset using the selected values of the hyperparameters. On a standard laptop we ran $200,000$ iterations, at a rate of almost $70,000$ iterations per day. We treat the first $100,000$ iterations as burn-in, and the post burn-in $100,000$ iterations are used for posterior inference. Convergence of the MCMC chains is checked using different starting points; visual monitoring suggests that the convergence is satisfactory. Apart from the above set of priors we also experimented with some noninformative priors, but the results turned out to be only negligibly different, indicating some degree of robustness with respect to the prior choices. We obtain the leave-one-out posterior predictive distribution for the observed data at each of the 300 space-time coordinates. Out of the 300 data points, at 292 space-time coordinates the observed data fell within the $95\%$ Bayesian prediction intervals of their respective posterior predictive distributions.
The average length of the $95\%$ Bayesian prediction intervals is 20.25, which indicates that the results are encouraging. Although we have used different sets of parameter values to conduct this simulation experiment, we report only one study due to space constraints. The results are shown in Figures 1(a)-1(f). \ \ \ A brief description of Figure \[fig:subfigures1\] follows. Panel (a) displays the leave-one-out 95% Bayesian prediction intervals for all of the $15$ spatial locations at time point $t=7$. The upper and lower quantiles (which form the intervals) are interpolated to form two surfaces. The middle surface is obtained by interpolating the leave-one-out point predictions (the median in this case) at the monitoring sites $\bold{s}_{1}, \bold{s}_{2},\cdots, \bold{s}_{15}$ for time point $t=7$. Panel (b) again displays the leave-one-out 95% Bayesian prediction intervals for all of the $15$ spatial locations, but this time the middle surface is interpolated through the truly observed data (the truly observed data being represented by black dots). Panels (c) and (d) display plots similar to (a) and (b) but for time point $t=14$. Panels (e) and (f) show the leave-one-out 95% Bayesian prediction intervals for the time series at spatial locations $\bold{s}_{6}$ and $\bold{s}_{9}$ respectively. The starred curve represents the curve interpolated through the truly observed data (the observed data itself being represented by stars) and the plain curves represent respectively the upper, middle and lower quantiles associated with the posterior predictive distributions. In other words, Figures 1(a)-1(f) demonstrate the leave-one-out posterior predictive performance of our model. While Figures 1(b) and 1(d) show that our model performs satisfactorily in terms of interval prediction at the purely spatial level, the similarity of the median surface in Figure 1(a) with the true data surface in Figure 1(b) indicates that our model performs well also in terms of point prediction.
The same conclusion can be drawn from Figures 1(c) and 1(d). Figures 1(e) and 1(f) confirm that our model is also successful in terms of prediction at the temporal level. \ While Figures 1(a)-1(f) display the performance of our model in terms of prediction, Figures 2(a)-2(f) focus on the issue of capturing the true, underlying correlation structure. The figures display the posterior distribution of the correlation function $\rho((\bold{s}_{i},t_{j}),(\bold{s}_{i^{\prime}},t_{j^{\prime}}))$ for different choices of $\bold{s}_{i},t_{j},\bold{s}_{i^{\prime}},t_{j^{\prime}}$. The true value and the $95\%$ Bayesian credible interval are also depicted in the figures. As is evident from the diagrams, the true values lie well within the respective $95\%$ Bayesian credible intervals, indicating reasonable performance of our model in terms of capturing the true correlation structure. Hence, in summary, this simulation study demonstrates the effectiveness of our model for the purpose of prediction as well as for learning about the true correlation structure.

Simulation from a nonlinear non-Gaussian spatial state space model
------------------------------------------------------------------

In the previous section we described how our model performs when the data are simulated from our own model. Now we describe a simulation study where the data are generated from a nonlinear non-Gaussian state space model which is completely different from our model. Let us first describe our data generation method. We consider a square of length $2$ and generate $20$ random locations inside it.
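Both simulation studies draw temporally independent spatial noise from Gaussian processes with squared exponential covariance kernels. The location-sampling and noise-generation steps just described can be sketched as follows; the particular kernel parameterization $\sigma^2\exp(-\lambda\,\|\bold{s}-\bold{s}^{\prime}\|^{2})$ and the use of a Cholesky factor are our assumptions for illustration (the scale and smoothness values are those of the noise process $\eta$ in the study below):

```python
import numpy as np

rng = np.random.default_rng(0)

def sq_exp_cov(locs, sigma2, lam):
    """Squared exponential covariance matrix:
    C[i, j] = sigma2 * exp(-lam * ||s_i - s_j||^2)."""
    d2 = ((locs[:, None, :] - locs[None, :, :]) ** 2).sum(axis=-1)
    return sigma2 * np.exp(-lam * d2)

# 20 random monitoring sites inside a square of length 2
locs = rng.uniform(0.0, 2.0, size=(20, 2))

# one temporally independent spatial noise draw, e.g. eta(., t)
C = sq_exp_cov(locs, sigma2=3.9 ** 2, lam=6.25)
L = np.linalg.cholesky(C + 1e-8 * np.eye(20))  # jitter for numerical stability
eta = L @ rng.standard_normal(20)
```

Stacking $T$ such independent draws gives a noise sequence $\eta(\cdot,1),\ldots,\eta(\cdot,T)$ of the kind used in either simulation study.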
Then we simulate a spatial time series of length $20$ at those points using the following nonlinear state space model: $$\begin{aligned} Y(\bold{s},t) =&-4.1+0.7X(\bold{s},t)+\epsilon(\bold{s},t), \label{eqn:NLDLM1}\\ X(\bold{s},t) =&-1.1+0.5X(\bold{s},t-1)+3\sin(\frac{\pi}{4}X(\bold{s},t-1))-5\sin(\frac{\pi}{5}X(\bold{s},t-1))+\eta(\bold{s},t), \label{eqn:NLDLM2}\\ (X(\bold{s}_{1},0)&,X(\bold{s}_{2},0),\cdots,X(\bold{s}_{20},0))\sim N({\boldsymbol{0}},{\boldsymbol{\Sigma}}_{0}); \label{eqn:NLDLM3}\end{aligned}$$ where $ \bold{s}\in \mathbb{R}^2 $ and $ t\in\{1,2,3,\ldots\}$. $\epsilon(\cdot,t)$ and $\eta(\cdot,t)$ are temporally independent and identically distributed spatial Gaussian processes with squared exponential covariance kernels. The corresponding scale parameters are $\sigma_{\epsilon}=3.0,\sigma_{\eta}=3.9$ and the smoothness parameters are $\lambda_{\epsilon}=6.25,\lambda_{\eta}=6.25$. The covariance matrix ${\boldsymbol{\Sigma}}_{0}$ is also generated by a squared exponential covariance kernel, with scale and smoothness parameters $3.8$ and $4.0$ respectively. \ As is evident from the above figures, the evolutionary transformation $-1.1+0.5x+3\sin(\frac{\pi}{4}x)-5\sin(\frac{\pi}{5}x)$ is highly nonlinear over the range of $X(\bold{s}_{i},t)$, which serves our purpose. Our prior specification is very similar to that of the previous simulation study: we have taken diffuse normal priors for the location parameters and lognormal priors for the scale and smoothness parameters. We have taken a sufficiently large burn-in, and convergence of the chain is confirmed via visual monitoring. The computation is a bit time-consuming, with almost 60,000 iterations per day on a standard laptop. This is, however, natural for dynamic models, as they are so high-dimensional. In the following set of figures we describe how our model performs in terms of prediction. \ \ \ A brief description of Figure \[fig:subfigures4\] follows.
Panel (a) displays the leave-one-out 95% Bayesian prediction intervals for all of the $20$ spatial locations at time point $t=3$. The upper and lower quantiles (which form the intervals) are interpolated to form two surfaces. The middle surface is obtained by interpolating the leave-one-out point predictions (the median in this case) at the monitoring sites $\bold{s}_{1}, \bold{s}_{2},\cdots, \bold{s}_{20}$ for time point $t=3$. Panel (b) again displays the leave-one-out 95% Bayesian prediction intervals for all of the $20$ spatial locations, but this time the middle surface is interpolated through the truly observed data (the truly observed data being represented by black dots). Panels (c) and (d) display plots similar to (a) and (b) but for time point $t=11$. Panels (e) and (f) depict the leave-one-out 95% Bayesian prediction intervals for the time series at spatial locations $\bold{s}_{2}$ and $\bold{s}_{8}$, respectively. The starred curve represents the curve interpolated through the truly observed data (the observed data itself being represented by stars) and the plain curves represent respectively the upper, middle and lower quantiles associated with the posterior predictive distributions. From the above set of figures it is evident that our model performs very well even though the data arose from a completely different model.

$SO_{2}$ precipitation over Great Britain and France
----------------------------------------------------

Air pollution over large geographical regions is a topic of a wide range of studies involving statistics and other disciplines. Among these, statistical modeling of pollution caused by $SO_{2}$ draws considerable attention. Here we consider an $SO_{2}$ precipitation dataset over Great Britain and France. The dataset consists of monthly measurements of sulphur dioxide precipitation taken at $16$ monitoring stations spread over Great Britain and France, from April 1999 to January 2001.
This dataset is a part of the data collected through the ‘European monitoring and evaluation programme’ (EMEP), which co-ordinates the monitoring of airborne pollution over Europe. Further information is available at , the website from which we have obtained the data set. First we consider some exploratory analysis based on the dataset. It is important to note that instead of direct measurements, what we have are measurements of the natural logarithm of $SO_{2}$ precipitation for these stations. \ We plot the station-wise time series in a single figure, and we see a clear seasonal component and a very mild decreasing trend in most of the time series plots shown in Panel (b) of Figure \[fig:subfigures5\]. Then we estimate the trend and seasonal component of each of the time series separately using simple descriptive techniques, and detrend and deseasonalize them. Finally, we model the residual spatio-temporal data using our method. The prior structure used in the real data analysis is similar to that of the simulation study. We used diffuse bivariate normal priors for $(\beta_{0g},\beta_{1g})$ and $(\beta_{0f},\beta_{1f})$ and lognormal priors for all the other parameters. Since the monitoring stations are spread over a large geographical region and distance stretches horizontally as latitude increases, the use of simple longitude and latitude as spatial coordinates would not be appropriate. The Lambert (or Schmidt) projection addresses this problem by preserving area. This projection is defined by the transformation of longitude and latitude, expressed in radians as $\psi $ and $\phi$, to the new co-ordinate system $\mathbf{s}=(2\sin(\frac{\pi}{4}-\frac{\phi}{2})\sin(\psi),-2\sin(\frac{\pi}{4}-\frac{\phi}{2})\cos(\psi))$. With respect to the temporal coordinate, however, we simply take one month as one unit of time. We implemented an MCMC chain with sufficiently large burn-in; visual inspection suggests satisfactory convergence.
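The Lambert projection above is straightforward to compute; a minimal sketch follows (the station coordinates in the example are purely illustrative, not taken from the EMEP data):

```python
import math

def lambert_projection(lon_deg, lat_deg):
    """Area-preserving (Lambert) projection:
    s = (2 sin(pi/4 - phi/2) sin(psi), -2 sin(pi/4 - phi/2) cos(psi)),
    with psi = longitude and phi = latitude, both in radians."""
    psi = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    r = 2.0 * math.sin(math.pi / 4.0 - phi / 2.0)
    return (r * math.sin(psi), -r * math.cos(psi))

# an illustrative station location (longitude 0.1 W, latitude 51.5 N)
x, y = lambert_projection(-0.1, 51.5)
```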
Based on the MCMC samples we calculated leave-one-out posterior predictive distributions of $\ln{SO_{2}}$ at each of the monitoring stations for all the months from April 1999 to January 2001. We calculated the $95\%$ Bayesian prediction interval associated with the leave-one-out posterior predictive distribution for each of the $352$ space-time coordinates. More than $95\%$ of the data fell within the respective prediction intervals. In Figures 6(a)-6(c) we provide the 95% Bayesian prediction intervals associated with the leave-one-out posterior predictive distributions for the time series of length $22$ at different spatial locations. These three plots show that our model performs satisfactorily also in terms of prediction at the purely temporal level. Similarly, in Figures 7(a)-7(i) we display the contour plots of the leave-one-out posterior point prediction surfaces, along with the contour plots of the true data surfaces for different months. The contour plots clearly show that our model performs very well in terms of point prediction. We also display the interval prediction through the combined plot of the interpolating surfaces of the upper quantiles, the true data and the lower quantiles, for the corresponding months. Overall, this shows that our model performs well in terms of both point prediction and interval prediction at the purely spatial level. \ \ \ One point to be noted here is that we have fitted our model to the residual space-time dataset (after detrending and deseasonalizing the original dataset), not to the original $\ln{SO_{2}}$ measurements. So, while doing posterior prediction (or leave-one-out posterior prediction) at a space-time location, we needed to add back the trend and seasonal components to obtain the correct predictions. Only then can we compare the posterior prediction and the true value at any particular space-time location.

Possible extensions
===================

A very important aspect of such space-time data is covariate information.
Although in this article we only consider a model without any covariate information, such information can be easily incorporated in our setup. In that case our model will be of the following form: $$\begin{aligned} Y(\bold{s},t)&=f(X(\bold{s},t),Z(\bold{s},t))+\epsilon(\bold{s},t), \label{eqn:npr3}\\ X(\bold{s},t)&=g(X(\bold{s},t-1))+\eta(\bold{s},t), \label{eqn:npr4}\end{aligned}$$ where $Z(\bold{s},t)$ is the covariate process and $f(\cdot,\cdot)$ is now a Gaussian process on $\mathbb{R}^2$. The rest of the theory remains the same as before. In fact, we can have $k$ different covariate processes $Z_{1}(\bold{s},t),Z_{2}(\bold{s},t),\cdots,Z_{k}(\bold{s},t)$, in which case $f(\cdot,\cdot,\cdots,\cdot)$ will be a Gaussian process on $\mathbb{R}^{k+1}$. Another direction for extension is a time-varying nonparametric state space model. In that case we let the evolutionary and observational transformations vary with time, so that our model assumes the form: $$\begin{aligned} Y(\bold{s},t)&=f_{t}(X(\bold{s},t))+\epsilon(\bold{s},t), \label{eqn:npr5}\\ X(\bold{s},t)&=g_{t}(X(\bold{s},t-1))+\eta(\bold{s},t); \label{eqn:npr6}\end{aligned}$$ Such a modification is particularly useful when the time interval over which we observe the data is very wide, so that it is unlikely that the functions $f$ and $g$ remain invariant with respect to time. In the context of purely temporal nonparametric state space models, [@Ghosh14] consider time-varying functions $f_t(\cdot)$ and $g_t(\cdot)$, which they re-write as $f(\cdot,t)$ and $g(\cdot,t)$, respectively. In other words, they consider the time component as an argument of their random functions $f$ and $g$, which are modeled by Gaussian processes in the usual manner. Such ideas can be easily adopted in our spatio-temporal model. Currently, we are working on these extensions.

Discussion and concluding remarks
=================================

One common problem in the context of space-time data is missing observations.
Malfunctioning monitoring devices, inexperienced handling, etc., may lead to such problems. So, any good spatio-temporal model should take care of missing data. In our case this is very simple. We augment the latent variable (say $x(\bold{s}^*,t^*)$) corresponding to the missing observation with $\{x(\bold{s}_{i},t)\}$, $ i=1,\cdots,n $ and $ t=1,\cdots,T $. Then we can predict the missing observation $y(\bold{s}^*,t^*)$ in the same way we make predictions at a new spatio-temporal coordinate (see Section $4$). Thus, prediction at a new spatio-temporal coordinate and missing data are taken care of simultaneously in our model. Although we have developed a new nonstationary model, we have not done any comparative study with respect to other existing nonstationary models (mentioned in Section $2$). Unfortunately, these nonstationary models are built on entirely different philosophies and hence are not comparable. For example, the nonstationary model developed in [@Higdon2] depends heavily on the choice of kernel and may outperform the nonstationary model developed in [@Guttorp:Meiring:Sampson] depending on whether some particular kernel is being used. In fact, Dou and Zidek (2010) [@Dou:Le:Zidek], while comparing two different methods for analyzing hourly ozone data, expressed a similar view (see page 1209). Similarly, our model, which is built on the idea of state space models, is not at all comparable with the nonstationary models that are based on the ideas of kernel convolution or deformation. In fact, our model is not even comparable with the DSTM, another model based on the state space idea. The reason is that although our model reduces to the no-covariate DSTM model if $\beta_{0f}=\beta_{0g}=0$, $\beta_{1f}=\beta_{1g}=1$, $\sigma_{f}^{2}=\sigma_{g}^{2}=0$, in spirit it is entirely different from DSTM models, which are generally used to utilize covariate information in improving the prediction.
Indeed, our objective is to utilize unobserved information to enhance prediction. Both of these models fall in the category of state space models, but the motivations behind them are quite different. DSTM models are much more like spatio-temporal regression models in which the linear regression structure varies with time (varying-coefficient linear regression).

Acknowledgments {#acknowledgments .unnumbered}
===============

The research of the first author is fully funded by a CSIR SPM Fellowship, Govt. of India. The authors thank the ‘European monitoring and evaluation programme’ (EMEP) for the $SO_{2}$ dataset. They also thank Moumita Das for many fruitful discussions on this article.

Appendix {#appendix .unnumbered}
========

Before proving the theorems let us make a notational clarification. The notations $[\bold{X}|\bold{Y}],[\bold{x}|\bold{y}]$ and $[\bold{X}=\bold{x}|\bold{Y}=\bold{y}]$ are equivalent, and throughout this section they will denote the value of the conditional pdf of $\bold{X}$ given $\bold{Y}=\bold{y}$ at $\bold{X}=\bold{x}$. For this proof we assume that there exist continuous modifications of the Gaussian processes that we consider, that is, there exist processes with sample paths that are continuous everywhere, not just almost everywhere, and that our Gaussian processes equal such processes with probability one (see, for example, [@Adler81] for details). Existence of such continuous modifications is guaranteed under the correlation structure that we consider for our Gaussian processes. Let us first notice that there exists a probability space $(\Omega,\mathcal{F},P)$ such that $$(g(x),X(\bold{s}_{1},0),\cdots,X(\bold{s}_{n},0)): (\Omega,\mathcal{F})\rightarrow (C(\mathbb{R}),\mathcal{A})\bigotimes (\mathbb{R}^n,\mathcal{B}(\mathbb{R}^n))$$ where $C(\mathbb{R})$ is the space of all real-valued continuous functions on $\mathbb{R}$ and $\mathcal{A}$ is the Borel sigma field obtained from the topology of compact convergence on the space $C(\mathbb{R})$.
Such a joint measurability result need not hold unless $g(x)$ and $(X(\bold{s}_{1},0),\cdots,X(\bold{s}_{n},0))$ are independent. Now we will show that $g(X(\bold{s}_{1},0))$ is a measurable real-valued function, that is, a proper random variable. To show this, first see that $g(X(\bold{s}_{1},0)):\Omega\rightarrow\mathbb{R}$ can be written as $T(g(\cdot),X(\bold{s}_{1},0))$, where $T:C(\mathbb{R})\bigotimes\mathbb{R}\rightarrow\mathbb{R}$ is the transformation such that $T(g,x)=g(x)$, $g$ being a real-valued continuous function on $\mathbb{R}$ and $x$ a real number. $T:C(\mathbb{R})\bigotimes\mathbb{R}\rightarrow\mathbb{R}$ is a continuous transformation, where the topology associated with $C(\mathbb{R})$ is the topology of compact convergence and the topology associated with $\mathbb{R}$ is the usual Euclidean topology on the real numbers. Let us consider the metric $d(g,g^\prime)=\mathlarger{\sum_{i=1}^{\infty}\frac{1}{2^i} \frac{\sup\limits_{x\in [-i,i]}{|g(x)-g^\prime(x)|}}{1+\sup\limits_{x\in[-i,i]}{|g(x)-g^\prime(x)|}}}$. This metric induces the topology of compact convergence on the space $C(\mathbb{R})$. To prove continuity of $T$ one needs to show that $d(g_{n},g)\rightarrow 0$ and ${|x_{n}-x|}\rightarrow 0 \Rightarrow {|T(g_{n},x_{n})-T(g,x)|}\rightarrow 0$.\ Let us assume that $d(g_{n},g)\rightarrow 0$ and ${|x_{n}-x|}\rightarrow 0$.
So, $\exists \ N_{0}$ and $j_{0}$ such that $\forall \ n\geq N_{0}$, $x_{n}\in [-j_{0},j_{0}]$.\ Now, $\mathlarger{\frac{1}{2^{j_{0}}}}\sup\limits_{x\in [-j_{0},j_{0}]}{|g_{n}(x)-g(x)|}\leq d(g_{n},g) \Rightarrow \sup\limits_{x\in [-j_{0},j_{0}]}{|g_{n}(x)-g(x)|}\rightarrow 0$\ and ${|g(x_{n})-g(x)|}\rightarrow 0$ because $g$ is continuous.\ But, $$\begin{aligned} {|g_{n}(x_{n})-g(x)|} \leq {|g_{n}(x_{n})-g(x_{n})|} + {|g(x_{n})-g(x)|}\\ \text{So,}\ \forall \ n\geq N_{0},\ {|g_{n}(x_{n})-g(x)|} \leq \sup\limits_{x\in [-j_{0},j_{0}]}{|g_{n}(x)-g(x)|} + {|g(x_{n})-g(x)|}\\\end{aligned}$$ The RHS goes to $0$ as $n\rightarrow\infty$. Hence, ${|T(g_{n},x_{n})-T(g,x)|}={|g_{n}(x_{n})-g(x)|}\rightarrow0$. Once continuity of $T$ is proved, note that $T^{-1}(U)$, for any open set $U \subseteq \mathbb{R}$, is an open set in the product topology on the space $C(\mathbb{R})\bigotimes\mathbb{R}$. Hence, $T^{-1}(U)$ belongs to the Borel sigma field generated by this product topology, which in this case is equivalent to the product sigma field $\mathcal{A}\bigotimes\mathcal{B}(\mathbb{R})$ associated with $(C(\mathbb{R}),\mathcal{A})\bigotimes (\mathbb{R},\mathcal{B}(\mathbb{R}))$. This equivalence holds because both of the spaces $C(\mathbb{R})$ and $\mathbb{R}$ are separable. But $(g(x),X(\bold{s}_{1},0))$ is measurable with respect to $(\Omega,\mathcal{F})$ and $(C(\mathbb{R}),\mathcal{A}) \bigotimes (\mathbb{R},\mathcal{B}(\mathbb{R}))$. Hence, the inverse image of $T^{-1}(U)$ with respect to $(g(x),X(\bold{s}_{1},0))$ is in $\mathcal{F}$. So, the inverse image of any open set $U \subseteq \mathbb{R}$ with respect to $g(X(\bold{s}_{1},0))$ is in $\mathcal{F}$. This proves the measurability of $g(X(\bold{s}_{1},0))$. Following exactly the same argument as above, we can further prove that $g(X(\bold{s}_{2},0)),\cdots,g(X(\bold{s}_{n},0))$ are jointly measurable.
Now, as $\eta(\bold{s},t)$ is independent of $g(x)$ and $(X(\bold{s}_{1},0),\cdots,X(\bold{s}_{n},0))$, we have the joint measurability of $(X(\bold{s}_{1},1),\cdots,X(\bold{s}_{n},1))$ (see \[eqn:npr2\]). In fact, we can prove that $(g(x),X(\bold{s}_{1},1),\cdots,X(\bold{s}_{n},1))$ are jointly measurable. To do this we consider $T^{\prime}:C(\mathbb{R})\bigotimes\mathbb{R}\rightarrow C(\mathbb{R})\bigotimes\mathbb{R}$ such that $T^{\prime}(g,x)=(g,g(x))$, where $g$ is a real-valued continuous function on $\mathbb{R}$ and $x$ is a real number. Then, similarly to the case of $T$, we can prove that $T^{\prime}$ is also a continuous map, which immediately implies that $(g,g(X(\bold{s}_{1},0)))$ are jointly measurable. Then $\eta(\bold{s},t)$ being independent of $g(x)$ and $(X(\bold{s}_{1},0),\cdots,X(\bold{s}_{n},0))$ implies the joint measurability of $(g(x),X(\bold{s}_{1},1),\cdots,X(\bold{s}_{n},1))$. Hence, starting with the joint measurability of $(g(x),X(\bold{s}_{1},0),\cdots,X(\bold{s}_{n},0))$ we prove the joint measurability of $(X(\bold{s}_{1},1),\cdots,X(\bold{s}_{n},1))$ and $(g(x),X(\bold{s}_{1},1),\cdots,X(\bold{s}_{n},1))$. Similarly, if we start with the joint measurability of $(g(x),X(\bold{s}_{1},1),\cdots,X(\bold{s}_{n},1))$ we can prove the joint measurability of $(X(\bold{s}_{1},2),\cdots,X(\bold{s}_{n},2))$. Thus, the joint measurability of the whole collection of state variables $\{X(\bold{s}_{i},t)\}$ $\forall\ i=1,2,\cdots,n;t=0,1,\cdots,T $ is mathematically established. Now, to prove joint measurability of the collection of observed variables $\{Y(\bold{s}_{i},t)\}$ $\forall\ i=1,2,\cdots,n;t=1,\cdots,T $, recall the observational equation (1).
Since $f(x)$ takes values in $(C(\mathbb{R}),\mathcal{A})$ just as $g(x)$ does, and since $\epsilon(\bold{s},t)$ is independent of $f(x)$ just as $\eta(\bold{s},t)$ is independent of $g(x)$, all the previous arguments go through in this case, and joint measurability of $\{Y(\bold{s}_{i},t)\}$ $\forall\ i=1,2,\cdots,n;t=1,\cdots,T $ is established. Finally, it remains to show that a valid spatio-temporal process is induced by this model. But this is immediate from an application of the Kolmogorov consistency theorem. The consistency conditions of the theorem are trivially satisfied by our construction and hence the result follows. In the previous proof we needed to assume the continuous modification of the underlying Gaussian process. Here we consider a proof where the underlying Gaussian process need not be continuous. In fact, the alternative proof that we now provide is valid even if the underlying process admits at most a countable number of discontinuities. Note that it is possible to represent any stochastic process $\{Z(\bold{s});\bold{s}\in T\}$, for fixed $\bold{s}$, as a random variable $\omega\mapsto Z(\bold{s},\omega)$, where $\omega\in\Omega$; $\Omega$ being the set of all functions from $T$ into $\mathbb R$. Also, fixing $\omega\in\Omega$, the function $\bold{s}\mapsto Z(\bold{s},\omega);~\bold{s}\in T$, represents a path of $Z(\bold{s});\bold{s}\in T$. Indeed, we can identify $\omega$ with the function $\bold{s}\mapsto Z(\bold{s},\omega)$ from $T$ to $\mathbb R$; see, for example, [@Oksendal00], for a lucid discussion. This latter identification will be convenient for our purpose, and we adopt it for proving our result on measurability. Note that the $\sigma$-algebra $\mathcal F$ associated with $Z$ is generated by sets of the form $$\left\{\omega:\omega(\bold{s}_1)\in B_1,\omega(\bold{s}_2)\in B_2,\ldots,\omega(\bold{s}_k)\in B_k\right\},$$ where $B_i\subset\mathbb R;i=1,\ldots,k$, are Borel sets in $\mathbb R$.
In our case, the Gaussian process $g(\cdot)$ can be identified with $g(x)(\omega_1)=\omega_1(x)$, for any fixed $x\in\mathbb R$ and $\omega_1\in\Omega_1$, where $\Omega_1$ is the set of all functions from $\mathbb R$ to $\mathbb R$. The [*initial*]{} Gaussian process $X(\cdot,0)$ can be identified with $X(\bold{s},0)(\omega_2)=\omega_2(\bold{s})$, where $\bold{s}\in\mathbb R^d$ $(d\geq 2)$ and $\omega_2\in\Omega_2$. Here $\Omega_2$ is the set of all functions from $\mathbb R^d$ to $\mathbb R$. Let $\mathcal F_1$ and $\mathcal F_2$ be the Borel $\sigma$-fields associated with $\Omega_1$ and $\Omega_2$, respectively. We first show that the composition of $g(\cdot)$ with $X(\cdot,0)$, given by $g(X(\bold{s},0))$, is a measurable random variable for any $\bold{s}$. Since $g$ and $X(\cdot,0)$ are independent, we need to consider the product space $\Omega_1\otimes\Omega_2$, and, noting that $g(X(\bold{s},0)(\omega_2))(\omega_1)=\omega_1(\omega_2(\bold{s}))$, where $(\omega_1,\omega_2)\in\Omega_1\otimes\Omega_2$, we need to show that sets of the form $$A(\bold{s}_1,\ldots,\bold{s}_k)=\left\{(\omega_1,\omega_2):\omega_1(\omega_2(\bold{s}_1))\in B_1,\omega_1(\omega_2(\bold{s}_2))\in B_2,\ldots, \omega_1(\omega_2(\bold{s}_k))\in B_k\right\},$$ where $B_i\subset\mathbb R;i=1,\ldots,k$, are Borel sets in $\mathbb R$, are in $\mathcal F_1\otimes\mathcal F_2$, the product Borel $\sigma$-field associated with $\Omega_1\otimes\Omega_2$. For our purpose, we let $B_i$ be of the form $[a_i,b_i]$ for real values $a_i<b_i$. Now, suppose that $(\omega_1,\omega_2)\in A(\bold{s}_1,\ldots,\bold{s}_k)$. Then $\omega_1(\omega_2(\bold{s}_i))\in [a_i,b_i]$, which implies that $\omega_2(\bold{s}_i)$ lies in at most a countable union of sets of the form $[a^{(i)}_j,b^{(i)}_j];~j\in\mathcal D_i$, where $\mathcal D_i$ is a countable set of indices.
Also, it holds that $\omega_1(x^*)\in [a_i,b_i];~\forall~x^*\in\mathbb Q\cap\left\{\underset{j\in\mathcal D_i} \cup [a^{(i)}_j,b^{(i)}_j]\right\}$. Here $\mathbb Q$ is the countable set of rationals in $\mathbb R$. If necessary, we can envisage a countable set $\mathcal D^*$ consisting of the points of discontinuity of $\omega_1$. If $\xi$ is a point of discontinuity, then $\omega_1(\xi)$ may be only the left limit of a particular sequence $\{\omega_1(\xi_{1,m});m=1,2,\ldots\}$ or only the right limit of a particular sequence $\{\omega_1(\xi_{2,m});m=1,2,\ldots\}$, or $\omega_1(\xi)$ may be an isolated point, not reachable by sequences of the above forms. It follows that $(\omega_1,\omega_2)$ must lie in $$A^*(\bold{s}_1,\ldots,\bold{s}_k)=\cap_{i=1}^k\left\{(\omega_1,\omega_2):\omega_1(x)\in [a_i,b_i]~\forall~x\in \left(\mathbb Q\cap\left\{\underset{j\in\mathcal D_i}\cup [a^{(i)}_j,b^{(i)}_j]\right\}\right)\cup\mathcal D^*, \omega_2(\bold{s}_i)\in\underset{j\in\mathcal D_i}\cup [a^{(i)}_j,b^{(i)}_j]\right\}.$$ Now, if $(\omega_1,\omega_2)\in A^*(\bold{s}_1,\ldots,\bold{s}_k)$, then, noting that for any continuity point $x\in \underset{j\in\mathcal D_i}\cup [a^{(i)}_j,b^{(i)}_j]$ of $\omega_1$, $\omega_1(x)=\underset{m\rightarrow\infty}\lim \omega_1(\xi_m)$, where $\{\xi_m;m=1,2,\ldots\}\in \mathbb Q\cap\left\{\underset{j\in\mathcal D_i}\cup [a^{(i)}_j,b^{(i)}_j]\right\}$, it is easily seen that $(\omega_1,\omega_2)\in A(\bold{s}_1,\ldots,\bold{s}_k)$. Hence, $A(\bold{s}_1,\ldots,\bold{s}_k)=A^*(\bold{s}_1,\ldots,\bold{s}_k)$. Now observe that $A^*(\bold{s}_1,\ldots,\bold{s}_k)$ is a finite intersection of countable unions of measurable sets; hence, $A^*(\bold{s}_1,\ldots,\bold{s}_k)$ is itself a measurable set. In other words, we have proved that $g(X(\cdot,0))$ is measurable. Now, as $\eta(\cdot,t)$ is independent of $g(\cdot)$ and $X(\cdot,0)$, it follows from (\[eqn:npr2\]) that $X(\cdot,1)$ is measurable.
To prove measurability of $X(\cdot,2)$, note that $$\begin{aligned} X(\bold{s},2)&=g(X(\bold{s},1))+\eta(\bold{s},2)\notag\\ &=g(g(X(\bold{s},0))+\eta(\bold{s},1))+\eta(\bold{s},2). \label{eq:2nd}\end{aligned}$$ The process $\eta(\cdot,1)$ requires the introduction of an extra sample space $\Omega_3$, so that we can identify $\eta(\bold{s},1)(\omega_3)$ as $\omega_3(\bold{s})$. With this, we can represent $g(g(X(\bold{s},0))+\eta(\bold{s},1))$ of (\[eq:2nd\]) as $\omega_1(\omega_1(\omega_2(\bold{s}))+\omega_3(\bold{s}))$. Now, $\omega_1(\omega_1(\omega_2(\bold{s}))+\omega_3(\bold{s}))\in [a_i,b_i]$ implies that $\omega_1(\omega_2(\bold{s}))+\omega_3(\bold{s})\in\underset{j\in\mathcal D_i}\cup[a^{(i)}_j,b^{(i)}_j]$. If $\omega_1(\omega_2(\bold{s}))+\omega_3(\bold{s})\in [a^{(i)}_k,b^{(i)}_k]$ for some $k\in\mathcal D_i$, then the set of solutions is $$\underset{r\in\mathbb R}\cup\left\{\omega_1(\omega_2(\bold{s}))\in [a^{(i)}_k-r,b^{(i)}_k-r],\omega_3(\bold{s})=r\right\}, \label{eq:uncountable}$$ where $\omega_1(\omega_2(\bold{s}))\in [a^{(i)}_k-r,b^{(i)}_k-r]$ implies, as before, that $\omega_2(\bold{s})$ belongs to a countable union of measurable sets in $\mathbb R$. Although the set (\[eq:uncountable\]) is an uncountable union, following the technique used for proving measurability of $g(X(\cdot,0))$, we will intersect the set with $\mathbb Q$, the (countable) set of rationals in $\mathbb R$; this will render the intersection a countable set. The proof of measurability then follows similarly as before. Proceeding likewise, we can prove that $X(\cdot,t)$ is measurable for $t=2,3,\ldots$. Proceeding exactly in the same way, we can also prove that $Y(\cdot,t);~t=1,2,\ldots,T$ are measurable.
Moreover, it can be easily seen that the same methods employed for proving the above results on measurability can be extended in a straightforward (albeit notationally cumbersome) manner to prove that the sets of the forms $$\left\{X(\bold{s}_i,t_i)\in [a_i,b_i];i=1,\ldots,k\right\}\ \ \mbox{and} \ \ \left\{Y(\bold{s}_i,t_i)\in [a_i,b_i];i=1,\ldots,k\right\}$$ are also measurable. Furthermore, it can be easily verified that $X$ and $Y$ satisfy Kolmogorov’s consistency criteria. In other words, $X$ and $Y$ are well-defined stochastic processes in both space and time. Let us first observe that conditional on $g(x)$ our latent process satisfies the Markov property. That is, $$\begin{aligned} &[(x(\bold{s}_{1},t),\cdots,x(\bold{s}_{n},t))\mid (g(x(\bold{s}_{1},t-1)),\cdots,g(x(\bold{s}_{n},t-1))),(x(\bold{s}_{1},t-1),\notag\\ &\quad\quad\cdots,x(\bold{s}_{n},t-1)),(x(\bold{s}_{1},t-2),\cdots,x(\bold{s}_{n},t-2)),\cdots,(x(\bold{s}_{1},0),\cdots,x(\bold{s}_{n},0))]\notag\\ &= [(x(\bold{s}_{1},t),\cdots,x(\bold{s}_{n},t))\mid (g(x(\bold{s}_{1},t-1)),\cdots,g(x(\bold{s}_{n},t-1))),(x(\bold{s}_{1},t-1), \cdots,x(\bold{s}_{n},t-1))]\notag\\ &\sim \mathlarger{\frac{1}{|\mathbf{\Sigma_{\eta}}|^\frac{1}{2}}} \exp\left[-\frac{1}{2}{\begin{pmatrix}x(\bold{s}_{1},t)-g(x(\bold{s}_{1},t-1))\\ x(\bold{s}_{2},t)-g(x(\bold{s}_{2},t-1))\\ \vdots\\ x(\bold{s}_{n},t)-g(x(\bold{s}_{n},t-1))\end{pmatrix}}^{\prime}{{\mathbf{\Sigma}}_{\eta}}^{-1} {\begin{pmatrix}x(\bold{s}_{1},t)-g(x(\bold{s}_{1},t-1))\\ x(\bold{s}_{2},t)-g(x(\bold{s}_{2},t-1))\\ \vdots\\ x(\bold{s}_{n},t)-g(x(\bold{s}_{n},t-1))\end{pmatrix}}\right],\notag\end{aligned}$$ where $[x\mid y]$ denotes the conditional density of $X$ at $x$ given $Y=y$. Now, let us represent $g(x(\bold{s}_{i},t-1))$ by $u(i,t)$ for all $i=1,\cdots,n$ and $t=1,2,\cdots,T$. 
Then, repeatedly using the Markov property, we have the following: $$\begin{aligned} [x(\bold{s}_{1},T),\cdots,x(\bold{s}_{n},T),\cdots,x(\bold{s}_{1},0),\cdots,x(\bold{s}_{n},0) & \mid g(x(\bold{s}_{1},T-1)),\cdots\\ \cdots,g(x(\bold{s}_{n},T-1)),\cdots,g(x(\bold{s}_{1},0))& ,\cdots,g(x(\bold{s}_{n},0))]\\\end{aligned}$$ $$\begin{aligned} \sim [x(\bold{s}_{1},T)& ,\cdots,x(\bold{s}_{n},T)\mid g(x(\bold{s}_{1},T-1)),\cdots,g(x(\bold{s}_{n},T-1)), x(\bold{s}_{1},T-1),\cdots,x(\bold{s}_{n},T-1)]\times\\ \dotsm \times &[x(\bold{s}_{1},1),\cdots,x(\bold{s}_{n},1)\mid g(x(\bold{s}_{1},0)),\cdots,g(x(\bold{s}_{n},0)), x(\bold{s}_{1},0),\cdots,x(\bold{s}_{n},0)]\\ & \times [x(\bold{s}_{1},0),\cdots,x(\bold{s}_{n},0)]\\ \vspace{3mm} \sim \mathlarger{\frac{1}{(2\pi)^\frac{nT}{2}}}&\mathlarger{\frac{1}{|\mathbf{\Sigma_{\eta}}|^\frac{T}{2}}}\prod_{t=1}^{T} \exp\left[-\frac{1}{2}{\begin{pmatrix}x(\bold{s}_{1},t)-u(1,t)\\x(\bold{s}_{2},t)-u(2,t)\\ \vdots\\ x(\bold{s}_{n},t)-u(n,t)\end{pmatrix}}^{\prime}{{\mathbf{\Sigma}}_{\eta}}^{-1} {\begin{pmatrix}x(\bold{s}_{1},t)-u(1,t)\\x(\bold{s}_{2},t)-u(2,t)\\ \vdots\\x(\bold{s}_{n},t)-u(n,t)\end{pmatrix}}\right]\\ \vspace{4mm} \times\mathlarger{\frac{1}{(2\pi)^\frac{n}{2}}}&\mathlarger{\frac{1}{|\mathbf{\Sigma}_{0}|^\frac{1}{2}}} \exp\left[-\frac{1}{2}{\begin{pmatrix}x(\bold{s}_{1},0)-\mu_{01}\\ x(\bold{s}_{2},0)-\mu_{02}\\ \vdots\\ x(\bold{s}_{n},0)-\mu_{0n}\end{pmatrix}}^{\prime}{{\mathbf{\Sigma}}_{0}}^{-1} {\begin{pmatrix}x(\bold{s}_{1},0)-\mu_{01}\\x(\bold{s}_{2},0)-\mu_{02}\\ \vdots\\x(\bold{s}_{n},0)-\mu_{0n}\end{pmatrix}}\right]\end{aligned}$$ But this is the joint density of the state variables conditional on $g(x)$. To obtain the joint density of the state variables one needs to marginalize it with respect to the Gaussian process $g(\cdot)$.
After marginalization, the joint density takes the following form: $$\begin{aligned} \mathlarger{\frac{1}{(2\pi)^\frac{n(T+1)}{2}}}\mathlarger{\frac{1}{|\mathbf{\Sigma}_{0}|^\frac{1}{2}}} \mathlarger{\frac{1}{|\mathbf{\Sigma_{\eta}}|^\frac{T}{2}}} \exp\left[-\frac{1}{2}{\begin{pmatrix}x(\bold{s}_{1},0)-\mu_{01}\\x(\bold{s}_{2},0)-\mu_{02}\\ \vdots\\ x(\bold{s}_{n},0)-\mu_{0n}\end{pmatrix}}^{\prime}{{\mathbf{\Sigma}}_{0}}^{-1} {\begin{pmatrix}x(\bold{s}_{1},0)-\mu_{01}\\x(\bold{s}_{2},0)-\mu_{02}\\ \vdots\\x(\bold{s}_{n},0)-\mu_{0n}\end{pmatrix}}\right]\\\end{aligned}$$ $$\begin{aligned} \times \mathlarger{\mathlarger{\int_{\mathbb{R}^{nT}}}}\prod_{t=1}^{T} \exp\left[-\frac{1}{2}{\begin{pmatrix}x(\bold{s}_{1},t)-u(1,t)\\ x(\bold{s}_{2},t)-u(2,t)\\ \vdots\\ x(\bold{s}_{n},t)-u(n,t)\end{pmatrix}}^{\prime} {{\mathbf{\Sigma}}_{\eta}}^{-1} {\begin{pmatrix}x(\bold{s}_{1},t)-u(1,t)\\x(\bold{s}_{2},t)-u(2,t)\\ \vdots\\x(\bold{s}_{n},t)-u(n,t)\end{pmatrix}}\right] \mathlarger{\frac{1}{(2\pi)^\frac{nT}{2}}}\mathlarger{\frac{1}{|\mathbf{\Sigma}|^\frac{1}{2}}}\\ \exp\left[-\frac{1}{2}{\begin{pmatrix}u(1,1)-\beta_{0g}-\beta_{1g}x(\bold{s}_{1},0)\\u(2,1)-\beta_{0g}-\beta_{1g}x(\bold{s}_{2},0)\\ \vdots\\u(n,T)-\beta_{0g}-\beta_{1g}x(\bold{s}_{n},T-1)\end{pmatrix}}^{\prime}{{\mathbf{\Sigma}}}^{-1} {\begin{pmatrix}u(1,1)-\beta_{0g}-\beta_{1g}x(\bold{s}_{1},0)\\u(2,1)-\beta_{0g}-\beta_{1g}x(\bold{s}_{2},0)\\ \vdots\\u(n,T)-\beta_{0g}-\beta_{1g}x(\bold{s}_{n},T-1)\end{pmatrix}}\right]d\mathbf{u}\\\end{aligned}$$ where $\mathbf{\Sigma}$ is as in (3.2). 
This is nothing but a convolution of two $\mathbb{R}^{nT}$ dimensional Gaussian densities, one with mean vector $\mathbf{0}$ and covariance matrix ${\mathbf{I}}_{T\times T}\bigotimes\mathbf{\Sigma_{\eta}}$ and the other one with mean vector\ $(\beta_{0g}+\beta_{1g}x(\bold{s}_{1},0),\cdots,\beta_{0g}+\beta_{1g}x(\bold{s}_{n},T-1))^\prime $ and covariance matrix $\mathbf{\Sigma}$.\ Hence, the integral boils down to $$\begin{aligned} \mathlarger{\frac{1}{(2\pi)^\frac{n}{2}}}\mathlarger{\frac{1}{|\mathbf{\Sigma}_{0}|^\frac{1}{2}}} \exp\left[-\frac{1}{2}{\begin{pmatrix}x(\bold{s}_{1},0)-\mu_{01}\\x(\bold{s}_{2},0)-\mu_{02}\\ \vdots\\x(\bold{s}_{n},0)-\mu_{0n}\end{pmatrix}}^{\prime}{{\mathbf{\Sigma}}_{0}}^{-1} {\begin{pmatrix}x(\bold{s}_{1},0)-\mu_{01}\\x(\bold{s}_{2},0)-\mu_{02}\\ \vdots\\x(\bold{s}_{n},0)-\mu_{0n}\end{pmatrix}}\right] \mathlarger{\mathlarger{\frac{1}{(2\pi)^\frac{nT}{2}}\frac{1}{|\tilde{\mathbf{\Sigma}}|^\frac{1}{2}}}}\\ \times\exp\left[-\frac{1}{2}{\begin{pmatrix}x(\bold{s}_{1},1)-\beta_{0g}-\beta_{1g} x(\bold{s}_{1},0)\\x(\bold{s}_{2},1)-\beta_{0g}-\beta_{1g}x(\bold{s}_{2},0)\\ \vdots\\x(\bold{s}_{n},T)-\beta_{0g}-\beta_{1g} x(\bold{s}_{n},T-1)\end{pmatrix}}^{\prime}{\tilde{\mathbf{\Sigma}}}^{-1}{\begin{pmatrix}x(\bold{s}_{1},1)-\beta_{0g}-\beta_{1g} x(\bold{s}_{1},0)\\x(\bold{s}_{2},1)-\beta_{0g}-\beta_{1g} x(\bold{s}_{2},0)\\ \vdots\\x(\bold{s}_{n},T)-\beta_{0g}-\beta_{1g} x(\bold{s}_{n},T-1)\end{pmatrix}}\right],\end{aligned}$$ where $\tilde{\mathbf{\Sigma}}$ is as in (3.2). First, see that for fixed $x(\bold{s}_{i},t_{1})$, $Y(\bold{s}_{i},t_{1})$ is distributed as a Gaussian with mean $\beta_{0f}+\beta_{1f}x(\bold{s}_{i},t_{1})$ and variance ${\sigma_{f}}^2+{\sigma_{\epsilon}}^2$, where ${\sigma_{f}}^2$ and ${\sigma_{\epsilon}}^2$ are the process variances associated with the isotropic Gaussian processes $f(x)$ and $\epsilon(\cdot,t)$, respectively (see (1) and (3)).
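As a quick numerical aside (not part of the proof), the Gaussian-convolution step used above is easy to check by simulation: if $X\sim N(\mathbf{m},\mathbf{A})$ and $E\sim N(\mathbf{0},\mathbf{B})$ are independent, then $X+E$ has mean $\mathbf{m}$ and covariance $\mathbf{A}+\mathbf{B}$, which is exactly the statement that the convolution of the two densities is $N(\mathbf{m},\mathbf{A}+\mathbf{B})$. In the Python sketch below all dimensions and parameter values are arbitrary illustrative choices.

```python
import numpy as np

# Sanity check (illustration only): the convolution of N(m, A) and N(0, B)
# is N(m, A + B); equivalently, for independent X ~ N(m, A), E ~ N(0, B),
# the sum X + E has mean m and covariance A + B.
rng = np.random.default_rng(0)
d = 3
m = np.array([1.0, -2.0, 0.5])
M1 = rng.normal(size=(d, d))
A = M1 @ M1.T + d * np.eye(d)           # random symmetric positive-definite
M2 = rng.normal(size=(d, d))
B = M2 @ M2.T + d * np.eye(d)

n = 200_000
X = rng.multivariate_normal(m, A, size=n)
E = rng.multivariate_normal(np.zeros(d), B, size=n)
S = X + E

emp_mean = S.mean(axis=0)
emp_cov = np.cov(S, rowvar=False)
print(np.abs(emp_mean - m).max())       # small (Monte Carlo error only)
print(np.abs(emp_cov - (A + B)).max())  # small (Monte Carlo error only)
```

Both printed deviations shrink at the usual $O(n^{-1/2})$ Monte Carlo rate, consistent with the identity.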
Now, see that for fixed $x(\bold{s}_{i},t_{1})$ and $x(\bold{s}_{j},t_{2})$, $f(x(\bold{s}_{i},t_{1}))$ and $f(x(\bold{s}_{j},t_{2}))$ have covariance $c_{f}(x(\bold{s}_{i},t_{1}),x(\bold{s}_{j},t_{2}))$. Also, $\epsilon(\cdot,t_{1})$ and $\epsilon(\cdot,t_{2})$ are mutually independent spatial Gaussian processes for $t_{1}\neq t_{2}$. Hence, conditional on the state variables, the covariance between $Y(\bold{s}_{i},t_{1})$ and $Y(\bold{s}_{j},t_{2})$ is $c_{f}(x(\bold{s}_i,t_1),x(\bold{s}_j,t_2))+c_{\epsilon}(\bold{s}_i,\bold{s}_j)\delta(t_1-t_2)$. Here $\delta(\cdot)$ is the Kronecker delta function, i.e., $\delta(t)=1$ for $t=0$ and $\delta(t)=0$ otherwise. So, the joint density of the observed variables, which is denoted by $[y(\bold{s}_{1},1),y(\bold{s}_{2},1),\cdots,y(\bold{s}_{n},T)]$, is given by $$\begin{aligned} &[y(\bold{s}_{1},1),y(\bold{s}_{2},1),\cdots,y(\bold{s}_{n},T)]=\\ &\mathlarger{\int_{\mathbb{R}^{nT}}}[y(\bold{s}_{1},1),y(\bold{s}_{2},1),\cdots,y(\bold{s}_{n},T)\mid x(\bold{s}_{1},1), x(\bold{s}_{2},1),\cdots,x(\bold{s}_{n},T)][x(\bold{s}_{1},1),x(\bold{s}_{2},1),\cdots,x(\bold{s}_{n},T)]d\mathbf{x}\end{aligned}$$ Hence, part $(a)$ follows. For part $(b)$ note that if $\sigma_{f}^{2}=0$, the conditional density\ $[y(\bold{s}_{1},1),y(\bold{s}_{2},1),\cdots,y(\bold{s}_{n},T)\mid x(\bold{s}_{1},1),x(\bold{s}_{2},1),\cdots,x(\bold{s}_{n},T)]$ is Gaussian with block diagonal covariance matrix ${\mathbf{I}}_{T\times T}\bigotimes\mathbf{\Sigma_{\epsilon}}$. On the other hand, we have already noted that if $\sigma_{g}^{2}=0$, the joint density of the state variables boils down to a Gaussian (see the discussion following Theorem \[thm:state\]). Let us consider only the state variables from time $t=1$ onwards. They jointly follow an $nT$ dimensional Gaussian distribution.
It is not difficult to see that the mean vector and the covariance matrix of the $nT$ dimensional Gaussian distribution are of the following forms: $$\begin{aligned} \text{the $((t-1)n+i)$th entry of the mean vector is} \ \beta_{1g}^{t}\mu_{0i}+\beta_{0g}\frac{\beta_{1g}^{t}-1}{\beta_{1g}-1},\\ \text{where $1\leq t\leq T$,}\\ \text{and the $(((t_{1}-1)n+i),((t_{2}-1)n+j))$th entry of the covariance matrix is}\\ \beta_{1g}^{t_{1}+t_{2}}\sigma_{i,j}^{0}+(\beta_{1g}^{t_{1}+t_{2}-2}+\beta_{1g}^{t_{1}+t_{2}-4}+\cdots+\beta_{1g}^{{|t_{1}-t_{2}|}}) c_{\eta}(s_{i},s_{j}),\ \text{where $1\leq t_{1},t_{2}\leq T$ and $1\leq i,j\leq n$;}\\ \sigma_{i,j}^{0} \text{ is the $(i,j)$th entry of the covariance matrix } \mathbf{\Sigma_{0}}.\end{aligned}$$ Now, using part $(a)$ we see that the joint distribution of $Y(\bold{s}_{1},1),Y(\bold{s}_{2},1),\cdots,Y(\bold{s}_{n},T)$ is nothing but a convolution of two $\mathbb{R}^{nT}$-dimensional Gaussian densities. Hence, it is a Gaussian distribution whose mean vector has $((t-1)n+i)$th entry $\beta_{0f}+\beta_{1f}\left(\beta_{1g}^{t}\mu_{0i} +\beta_{0g}\frac{\beta_{1g}^{t}-1}{\beta_{1g}-1}\right)$, where $1\leq t\leq T$, and whose covariance matrix has $(((t_{1}-1)n+i),((t_{2}-1)n+j))$th entry\ $\beta_{1f}^{2}\left(\beta_{1g}^{t_{1}+t_{2}}\sigma_{i,j}^{0}+(\beta_{1g}^{t_{1}+t_{2}-2}+\beta_{1g}^{t_{1}+t_{2}-4} +\cdots+\beta_{1g}^{{|t_{1}-t_{2}|}})c_{\eta}(s_{i},s_{j})\right)+c_{\epsilon}(\bold{s}_{i},\bold{s}_{j})\delta(t_{1}-t_{2})$,\ where $1\leq t_{1},t_{2}\leq T$ and $1\leq i,j\leq n$. So, part $(b)$ is proved. Part $(a)$: We first show that $Var(X(\bold{s},t))$ is finite.
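As an aside, the closed-form moments displayed above can be sanity-checked by simulation in the linear special case. The Python sketch below treats the scalar case $n=1$ with $g$ linear, so the state recursion reduces to $x_t=\beta_{0g}+\beta_{1g}x_{t-1}+\eta_t$; all numerical parameter values are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

# Illustration only: Monte Carlo check of the stated mean and covariance of
# the state variables in the linear, scalar case (n = 1):
#   x_t = b0 + b1 * x_{t-1} + eta_t,  x_0 ~ N(mu0, s0^2),  eta_t ~ N(0, se^2).
rng = np.random.default_rng(1)
b0, b1, mu0, s0, se = 0.4, 0.6, 1.0, 0.8, 0.5
T, nrep = 6, 400_000

x = rng.normal(mu0, s0, size=nrep)
paths = [x]
for t in range(1, T + 1):
    x = b0 + b1 * x + rng.normal(0.0, se, size=nrep)
    paths.append(x)
paths = np.array(paths)                 # shape (T+1, nrep)

t1, t2 = 5, 3                           # t1 >= t2 >= 1
# Theoretical mean at time t1 and covariance between times t1 and t2,
# as given by the displayed formulas with n = 1.
mean_th = b1**t1 * mu0 + b0 * (b1**t1 - 1) / (b1 - 1)
cov_th = (b1**(t1 + t2) * s0**2
          + se**2 * sum(b1**(t1 + t2 - 2 * k) for k in range(1, t2 + 1)))
mean_mc = paths[t1].mean()
cov_mc = np.cov(paths[t1], paths[t2])[0, 1]
print(abs(mean_mc - mean_th), abs(cov_mc - cov_th))  # both small
```

The geometric sum in `cov_th` is exactly the displayed term $\beta_{1g}^{t_{1}+t_{2}-2}+\cdots+\beta_{1g}^{|t_{1}-t_{2}|}$.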
Then, by using the formula $$\begin{aligned} Var(Y(\bold{s},t))&=Var\left(E(Y(\bold{s},t)|X(\bold{s},t))\right)+E\left(Var(Y(\bold{s},t)|X(\bold{s},t))\right)\\ &=Var(\beta_{0f}+\beta_{1f}X(\bold{s},t))+E(\sigma_{f}^2+\sigma_{\epsilon}^2)\\ &=\beta_{1f}^2Var(X(\bold{s},t))+\sigma_{f}^2+\sigma_{\epsilon}^2\end{aligned}$$ we establish that $Var(Y(\bold{s},t))$ is finite. To show that $Var(X(\bold{s},t))$ is finite we use the principle of mathematical induction: we first show that $Var(X(\bold{s},0))$ is finite, and then we show that if\ $Var(X(\bold{s},0)),Var(X(\bold{s},1)),\cdots,Var(X(\bold{s},t-1))$ are finite, then $Var(X(\bold{s},t))$ is finite. These two steps together imply that $Var(X(\bold{s},t))$ is finite for every $t$. The first step is trivial, since $X(\bold{s},0)$ is a Gaussian random variable. Now we show the induction step, that is, we show that if\ $Var(X(\bold{s},0)),Var(X(\bold{s},1)),\cdots,Var(X(\bold{s},t-1))$ are finite, then $Var(X(\bold{s},t))$ is finite. Now consider the following: $$\begin{aligned} &Var(X(\bold{s},t)|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0)=x_{0})\\ &=Var(g(X(\bold{s},t-1))+\eta(\bold{s},t)|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0)=x_{0})\\ &=Var(g(X(\bold{s},t-1))|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0)=x_{0})+\sigma_{\eta}^2\\ &=Var(g(x_{t-1})|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0)=x_{0})+\sigma_{\eta}^2\\ &=Var(g(x_{t-1})|g(x_{t-2})+\eta(\bold{s},t-1)=x_{t-1},\cdots,g(x_{0})+\eta(\bold{s},1)=x_{1},X(\bold{s},0)=x_{0})+\sigma_{\eta}^2\\ &=\sigma_{g}^2-\bold{\Sigma}_{g12}^{'}(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})^{-1}\bold{\Sigma}_{g12}+\sigma_{\eta}^2\ \ \text{[see page 16 of \cite{Rasmussen:Williams}]}\end{aligned}$$ where $\bold{\Sigma}_{g12}^{'}$ is the row vector $(c_{g}(x_{t-1},x_{0})\ c_{g}(x_{t-1},x_{1})\cdots\ c_{g}(x_{t-1},x_{t-2}))$ and $\bold{\Sigma}_{g22}$ is the variance-covariance matrix
$\begin{pmatrix}c_{g}(x_{0},x_{0})\ c_{g}(x_{0},x_{1})\cdots c_{g}(x_{0},x_{t-2}) \\ c_{g}(x_{1},x_{0})\ c_{g}(x_{1},x_{1})\cdots c_{g}(x_{1},x_{t-2})\\ \vdots\\c_{g}(x_{t-2},x_{0})\ c_{g}(x_{t-2},x_{1})\cdots c_{g}(x_{t-2},x_{t-2})\end{pmatrix}$ induced by the covariance function $c_{g}(\cdot,\cdot)$. Now, we consider $E(Var(X(\bold{s},t)|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0)=x_{0}))$ and show that this quantity is finite. The difficulty is that we have to deal with the inverse of the random matrix $(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})$. Fortunately, $\bold{\Sigma}_{g22}$ is non-negative definite (nnd), so that $(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})$ is positive definite, and the quadratic form $\bold{\Sigma}_{g12}^{'}(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})^{-1}\bold{\Sigma}_{g12}$ is nonnegative. Hence, $\sigma_{g}^2-\bold{\Sigma}_{g12}^{'}(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})^{-1}\bold{\Sigma}_{g12} +\sigma_{\eta}^2\leq \sigma_{g}^2+\sigma_{\eta}^2$. On the other hand, this quantity, being a conditional variance, is always nonnegative. So, the following inequality holds:\ $$0\leq \sigma_{g}^2-\bold{\Sigma}_{g12}^{'}(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})^{-1}\bold{\Sigma}_{g12} +\sigma_{\eta}^2\leq \sigma_{g}^2+\sigma_{\eta}^2.$$ Hence, it follows that $$0\leq E(\sigma_{g}^2-\bold{\Sigma}_{g12}^{'}(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})^{-1}\bold{\Sigma}_{g12} +\sigma_{\eta}^2)\leq \sigma_{g}^2+\sigma_{\eta}^2.$$ So, the quantity $E(Var(X(\bold{s},t)|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2) =x_{t-2},\cdots,X(\bold{s},0)=x_{0}))$, being equal to $E(\sigma_{g}^2-\bold{\Sigma}_{g12}^{'}(\bold{\Sigma}_{g22} +\sigma_{\eta}^2\bold{I})^{-1}\bold{\Sigma}_{g12}+\sigma_{\eta}^2)$, is finite.\ Now we consider the term $E(X(\bold{s},t)|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0)=x_{0})$.
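As a brief numerical aside, the two-sided bound above can be illustrated directly. The sketch below builds $\bold{\Sigma}_{g22}$ and $\bold{\Sigma}_{g12}$ from a squared-exponential kernel; the kernel choice and all parameter values are assumptions made purely for illustration, since the text does not fix a particular kernel.

```python
import numpy as np

# Illustration only: check the two-sided bound
#   0 <= s_g^2 - S12' (S22 + s_eta^2 I)^{-1} S12 + s_eta^2 <= s_g^2 + s_eta^2
# with S22, S12 generated from a valid covariance kernel.
rng = np.random.default_rng(2)
sg2, seta2 = 1.5, 0.3

def c_g(a, b):
    """Squared-exponential covariance kernel with variance sg2 (assumed)."""
    return sg2 * np.exp(-0.5 * (a - b) ** 2)

for _ in range(100):
    x = rng.normal(size=8)                   # x_0, ..., x_{t-1}
    past, new = x[:-1], x[-1]
    S22 = c_g(past[:, None], past[None, :])  # kernel matrix of past points
    S12 = c_g(new, past)                     # cross-covariance vector
    q = S12 @ np.linalg.solve(S22 + seta2 * np.eye(len(past)), S12)
    cond_var = sg2 - q + seta2               # the conditional variance above
    assert 0.0 <= cond_var <= sg2 + seta2

print("two-sided bound holds on all random draws")
```

Note that adding $\sigma_{\eta}^{2}\bold{I}$ makes the system well-conditioned regardless of how close together the sampled points fall, which mirrors why the inverse exists in the argument above.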
$$\begin{aligned} &E(X(\bold{s},t)|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0)=x_{0})\\ &=E(g(X(\bold{s},t-1))+\eta(\bold{s},t)|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0)=x_{0})\\ &=E(g(X(\bold{s},t-1))|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0)=x_{0})+0\\ &=E(g(x_{t-1})|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0)=x_{0})\\ &=E(g(x_{t-1})|g(x_{t-2})+\eta(\bold{s},t-1)=x_{t-1},\cdots,g(x_{0})+\eta(\bold{s},1)=x_{1},X(\bold{s},0)=x_{0})\\ &=\beta_{0g}+\beta_{1g}x_{t-1}+\bold{\Sigma}_{g12}^{'}(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})^{-1}\bold{Z(\bold{s})}\ \ \text{[see page 16 of \cite{Rasmussen:Williams}]}\end{aligned}$$ where $\bold{Z(\bold{s})}^{\prime}$ is the row vector $(x_{1}-\beta_{0g}-\beta_{1g}x_{0}\ \ x_{2}-\beta_{0g}-\beta_{1g}x_{1}\ \cdots \ x_{t-1}-\beta_{0g}-\beta_{1g}x_{t-2})$. We want to show that $Var(E(X(\bold{s},t)|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0)=x_{0}))$ is finite. Equivalently, we want to show that $Var(\beta_{0g}+\beta_{1g}x_{t-1}+\bold{\Sigma}_{g12}^{'}(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})^{-1}\bold{Z(\bold{s})})$ is finite. For that it is enough to show that $Var(\bold{\Sigma}_{g12}^{'}(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})^{-1}\bold{Z(\bold{s})})$ is finite, since our induction hypothesis already assumes that $Var(X(\bold{s},t-1))$ is finite. Now we show that $Var(\bold{\Sigma}_{g12}^{'}(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})^{-1}\bold{Z(\bold{s})})$ is finite. First note that $\bold{\Sigma}_{g12}^{'}(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})^{-1}\bold{Z(\bold{s})}$ can be expressed as a linear combination of the elements of $\bold{Z(\bold{s})}$ as $w_{1}z_{1}(\bold{s})+w_{2}z_{2}(\bold{s})+\cdots+w_{t-1}z_{t-1}(\bold{s})$. If the $w_{i}(\bold{s})$ were fixed numbers, it would be easy to see that $w_{1}z_{1}(\bold{s})+w_{2}z_{2}(\bold{s})+\cdots+w_{t-1}z_{t-1}(\bold{s})$ has finite variance.
Unfortunately, the $w_{i}(\bold{s})$ are random. However, we will show that they are bounded random variables, and then, using a lemma, we will prove that $Var(w_{1}z_{1}(\bold{s})+w_{2}z_{2}(\bold{s})+\cdots+w_{t-1}z_{t-1}(\bold{s}))$ is finite. First we show that the $w_{i}(\bold{s})$ are bounded random variables. Consider the spectral decomposition of the real symmetric (nnd) matrix $\bold{\Sigma}_{g22}$. Let us assume that $\bold{\Sigma}_{g22}=\bold{U}\bold{D}\bold{U}^{\prime}$, where $\bold{U}$ is an orthogonal matrix and $\bold{D}$ is the diagonal matrix whose diagonal elements are the eigenvalues of $\bold{\Sigma}_{g22}$. Then $$\begin{aligned} \bold{\Sigma}_{g12}^{'}(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})^{-1}\bold{Z(\bold{s})} &= \bold{\Sigma}_{g12}^{'}(\bold{U}\bold{D}\bold{U}^{\prime}+\sigma_{\eta}^2\bold{I})^{-1}\bold{Z(\bold{s})}\\ &=\bold{\Sigma}_{g12}^{'}(\bold{U}\bold{D}\bold{U}^{\prime}+\sigma_{\eta}^2\bold{U}\bold{U}^{\prime})^{-1}\bold{Z(\bold{s})}\\ &=\bold{\Sigma}_{g12}^{'}(\bold{U}(\bold{D}+\sigma_{\eta}^2\bold{I})\bold{U}^{\prime})^{-1}\bold{Z(\bold{s})}\\ &=\bold{\Sigma}_{g12}^{'}{\bold{U}^{\prime}}^{-1}(\bold{D}+\sigma_{\eta}^2\bold{I})^{-1}\bold{U}^{-1}\bold{Z(\bold{s})}\\ &=\bold{\Sigma}_{g12}^{'}\bold{U}(\bold{D}+\sigma_{\eta}^2\bold{I})^{-1}{\bold{U}^{\prime}}\bold{Z(\bold{s})} \ \ \text{[since $\bold{U}$ is an orthogonal matrix]}\end{aligned}$$ Since $\bold{U}$ is a (random) orthogonal matrix, its elements are bounded random variables lying between $-1$ and $1$. The (random) elements of the row vector $\bold{\Sigma}_{g12}^{'}$ are covariances induced by the isotropic covariance kernel $c_{g}(\cdot,\cdot)$; hence, they are bounded random variables lying between $-\sigma_{g}^{2}$ and $\sigma_{g}^{2}$. Finally, the (random) elements of $(\bold{D}+\sigma_{\eta}^2\bold{I})^{-1}$ are bounded random variables lying between $0$ and $\frac{1}{\sigma_{\eta}^{2}}$.
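As a quick aside, the matrix identity $(\bold{\Sigma}_{g22}+\sigma_{\eta}^{2}\bold{I})^{-1}=\bold{U}(\bold{D}+\sigma_{\eta}^{2}\bold{I})^{-1}\bold{U}^{\prime}$ used in the display above is easy to verify numerically (an illustrative sketch; the matrix size and parameter values are arbitrary):

```python
import numpy as np

# Illustration only: if S = U D U' is the spectral decomposition of a
# symmetric nnd matrix S, then (S + s2*I)^{-1} = U (D + s2*I)^{-1} U'.
rng = np.random.default_rng(3)
s2 = 0.7
M = rng.normal(size=(5, 5))
S = M @ M.T                              # symmetric non-negative definite
evals, U = np.linalg.eigh(S)             # S = U diag(evals) U'
lhs = np.linalg.inv(S + s2 * np.eye(5))
rhs = U @ np.diag(1.0 / (evals + s2)) @ U.T
print(np.abs(lhs - rhs).max())           # agreement to machine precision
```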
Hence, the (random) row vector $\bold{\Sigma}_{g12}^{'}(\bold{\Sigma}_{g22}+\sigma_{\eta}^2\bold{I})^{-1}$, being a product of random matrices whose elements are bounded random variables, is itself composed of bounded random variables. So, its elements $w_{i}(\bold{s})$, although random, are bounded. Now, we state a crucial lemma. Let us assume that $X_{1},X_{2},\cdots,X_{n}$ are random variables with finite variances and $W_{1},W_{2},\cdots,W_{n}$ are bounded random variables, all defined on the same probability space. Then the random variable $Y=W_{1}X_{1}+W_{2}X_{2}+\cdots+W_{n}X_{n}$ also has finite variance. Let us assume that $W_{1},W_{2},\cdots,W_{n}$ lie in $[-M,M]$ and that $E(X_{i}^2)\leq K$ for $i=1,2,\cdots,n$. Now, $Var(W_{i}X_{i})= E(Var(W_{i}X_{i}|X_{i}))+Var(E(W_{i}X_{i}|X_{i}))=E(X_{i}^2Var(W_{i}|X_{i}))+Var(X_{i}E(W_{i}|X_{i}))$. But $Var(W_{i}|X_{i})\leq E(W_{i}^2|X_{i})\leq M^2$. So, $E(X_{i}^2Var(W_{i}|X_{i}))\leq M^2E(X_{i}^2)$. Similarly, $E(W_{i}|X_{i})$ lies in $[-M,M]$. So, $Var(X_{i}E(W_{i}|X_{i}))\leq E(X_{i}^2(E(W_{i}|X_{i}))^2)\leq M^2E(X_{i}^2)$. Hence, $Var(W_{i}X_{i})\leq 2M^2E(X_{i}^2)$. So, $$\begin{aligned} |Var(Y)|&=|\sum_{i=1}^{n}Var(W_{i}X_{i})+2\sum_{1\leq i<j\leq n}Cov(W_{i}X_{i},W_{j}X_{j})|\\ &\leq \sum_{i=1}^{n}Var(W_{i}X_{i})+2\sum_{1\leq i<j\leq n}|Cov(W_{i}X_{i},W_{j}X_{j})|\\ &\leq \sum_{i=1}^{n}Var(W_{i}X_{i})+2\sum_{1\leq i<j\leq n}Var^{\frac{1}{2}}(W_{i}X_{i})Var^{\frac{1}{2}}(W_{j}X_{j})\\ &\leq 2nM^2K+n(n-1)2M^2K\end{aligned}$$ So, $Y$ has finite variance. Once we apply the lemma to $w_{1}z_{1}(\bold{s})+w_{2}z_{2}(\bold{s})+\cdots+w_{t-1}z_{t-1}(\bold{s})$, the finiteness of $Var(E(X(\bold{s},t)|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0)=x_{0}))$ is immediate.
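The key estimate in the lemma, $Var(W_{i}X_{i})\leq 2M^{2}E(X_{i}^{2})$, can be illustrated by Monte Carlo even when the bounded weight depends on $X_{i}$; the particular construction of $W$ below (a scaled `tanh` of a noisy copy of $X$) is an arbitrary choice made for illustration.

```python
import numpy as np

# Illustration only: Monte Carlo check of Var(W X) <= 2 M^2 E(X^2)
# when |W| <= M, allowing W to be dependent on X.
rng = np.random.default_rng(4)
M, n = 2.0, 500_000
X = 1.3 * rng.normal(size=n) + 0.4
W = M * np.tanh(X + rng.normal(size=n))  # bounded in (-M, M), depends on X
lhs_var = (W * X).var()
rhs_bound = 2 * M**2 * (X**2).mean()
print(lhs_var, rhs_bound)                # lhs_var stays below rhs_bound
```

In fact the crude bound $Var(WX)\leq E((WX)^{2})\leq M^{2}E(X^{2})$ already holds; the factor $2$ in the lemma comes from splitting $Var(WX)$ into the two conditional pieces and bounding each by $M^{2}E(X^{2})$.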
Then, by the formula $Var(X(\bold{s},t))=E(Var(X(\bold{s},t)|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0) =x_{0}))+Var(E(X(\bold{s},t)|X(\bold{s},t-1)=x_{t-1},X(\bold{s},t-2)=x_{t-2},\cdots,X(\bold{s},0)=x_{0}))$, we get that $Var(X(\bold{s},t))$ is finite. Part $(b)$: Since we have already proved in part $(a)$ that the coordinate variables of the observed spatio-temporal process have finite variances, we can now consider the covariance function associated with the process and study its properties. Let us denote the covariance between $Y(\bold{s},t)$ and $Y(\bold{s}^*,t^*)$ by $c_{y}((\bold{s},t),(\bold{s}^*,t^*))$. Then $$\begin{aligned} c_{y}((\bold{s},t),(\bold{s}^*,t^*))=&E[Cov(Y(\bold{s},t),Y(\bold{s}^*,t^*)\mid x(\bold{s},t),x(\bold{s}^*,t^*))]\notag\\ &+Cov[E(Y(\bold{s},t)\mid x(\bold{s},t)), E(Y(\bold{s}^*,t^*)\mid x(\bold{s}^*,t^*))]\notag\\ =E[c_{f}(X(\bold{s},t),X(\bold{s}^*,t^*))]+&c_{\epsilon}(\bold{s},\bold{s}^*)\delta(t-t^*) +\beta_{1f}^{2}Cov[X(\bold{s},t),X(\bold{s}^*,t^*)].\notag\end{aligned}$$ Now, the term $E[c_{f}(X(\bold{s},t),X(\bold{s}^*,t^*))]$ will be nonstationary, and hence $E[c_{f}(X(\bold{s}+\bold{h},t+k),X(\bold{s}^*+\bold{h},t^*+k))] \neq E[c_{f}(X(\bold{s},t),X(\bold{s}^*,t^*))]$. In fact, $| X(\bold{s}+\bold{h}, t+k)-X(\bold{s}^*+\bold{h},t^*+k)|\neq| X(\bold{s},t)-X(\bold{s}^*,t^*)|$ with probability 1, because $X(\bold{s},t)$ has a density with respect to Lebesgue measure, and this heuristically justifies our argument. So, the covariance function $c_{y}(\cdot,\cdot)$ is nonstationary in both space and time. To prove nonseparability, first see that $c_{f}(x(\bold{s},t),x(\bold{s}^*,t^*))$ is nonseparable in space and time, because both space and time are involved in it through $x(\bold{s},t)$. Hence, $E[c_{f}(X(\bold{s},t),X(\bold{s}^*,t^*))]$ is nonseparable and therefore $c_{y}(\cdot,\cdot)$ is nonseparable in space and time.
\[sec:proof5\] First consider $Cov(X(\bold{s},t),X(\bold{s}^{*},t^{*}))$, where WLOG we assume $t>t^*$. Also, assume that $g^{*}(\cdot)$ is the centered Gaussian process obtained from $g(\cdot)$. Then $$\begin{aligned} &Cov(X(\bold{s},t),X(\bold{s}^*,t^*))=Cov(g(X(\bold{s},t-1))+\eta(\bold{s},t),X(\bold{s}^*,t^*))\\ &=Cov(\beta_{0g}+\beta_{1g} X(\bold{s},t-1)+g^{*}(X(\bold{s},t-1))+\eta(\bold{s},t),X(\bold{s}^*,t^*))\\ &=\beta_{1g}Cov(X(\bold{s},t-1),X(\bold{s}^*,t^*))+Cov(g^{*}(X(\bold{s},t-1)),X(\bold{s}^*,t^*))\\\end{aligned}$$ Repeatedly expanding the term in the same way, we get $$\begin{aligned} &=\beta_{1g}^{t-t^*}Cov(X(\bold{s},t^*),X(\bold{s}^*,t^*))+\beta_{1g}^{t-t^*-1} Cov(g^{*}(X(\bold{s},t^*)),X(\bold{s}^*,t^*))+\label{cov}\\ &\cdots+Cov(g^{*}(X(\bold{s},t-1)),X(\bold{s}^*,t^*))\notag\end{aligned}$$ Just as in the previous paragraph, we can further see that $$\begin{aligned} &Cov(X(\bold{s},t^*),X(\bold{s}^*,t^*))\notag\\ &=Cov(\beta_{0g}+\beta_{1g} X(\bold{s},t^*-1)+g^{*}(X(\bold{s},t^*-1))+\eta(\bold{s},t^*),\beta_{0g}\notag\\ &+\beta_{1g}X(\bold{s}^*,t^*-1)+g^{*}(X(\bold{s}^*,t^*-1))+\eta(\bold{s}^*,t^*))\notag\\ &=\beta_{1g}^2Cov(X(\bold{s},t^*-1),X(\bold{s}^*,t^*-1))+\beta_{1g}Cov(X(\bold{s},t^*-1),g^{*}(X(\bold{s}^*,t^*-1)))+\notag\\ &\beta_{1g}Cov(X(\bold{s}^*,t^*-1),g^{*}(X(\bold{s},t^*-1)))+Cov(g^{*}(X(\bold{s},t^*-1)),g^{*}(X(\bold{s}^*,t^*-1)))+c_{\eta}(\bold{s},\bold{s}^*)\end{aligned}$$ Now we plan to show that terms of the types $Cov(g^{*}(X(\bold{s}^*,t^*-1)),X(\bold{s},t^*-1))$ and $Cov(g^{*}(X(\bold{s},t^*-1)),g^{*}(X(\bold{s}^*,t^*-1)))$ are negligible if $\sigma_{g}^2$ is small enough. Our next lemma proves this rigorously. \[lemma:small\_cov\] For arbitrarily small $\epsilon>0$, $\exists\ \delta>0$ such that $Cov(g^{*}(X(\bold{s},t-1)),X(\bold{s}^*,t^*))<\epsilon$ for $0<\sigma_{g}^2<\delta$. See that it is enough to prove that $Var(g^{*}(X(\bold{s},t-1)))$ is arbitrarily small $\forall \bold{s},t$.
Then the Cauchy-Schwarz inequality implies that $Cov^{2}(g^{*}(X(\bold{s},t-1)),g^{*}(X(\bold{s}^*,t^*))) \leq Var(g^{*}(X(\bold{s},t-1)))Var(g^{*}(X(\bold{s}^*,t^*)))$ is arbitrarily small. Similarly, the Cauchy-Schwarz inequality implies that $Cov^{2}(g^{*}(X(\bold{s},t-1)),\eta(\bold{s}^*,t^*)) \leq Var(g^{*}(X(\bold{s},t-1)))Var(\eta(\bold{s}^*,t^*))=Var(g^{*}(X(\bold{s},t-1)))\sigma_{\eta}^2$ is arbitrarily small. Then we are done by the expansion $$\begin{aligned} Cov(g^{*}(x(\bold{s},t)),x(\bold{s^*},t^*))&=Cov(g^{*}(x(\bold{s},t-1)),g^{*}(x(\bold{s}^*,t^*-1)))+\cdots+\beta_{1g}^{t^*}Cov(g^{*}(x(\bold{s},t-1)),g^{*}(x(\bold{s}^*,0)))\\ &+Cov(g^{*}(x(\bold{s},t-1)),\eta(\bold{s}^*,t^*-1))+\cdots+\beta_{1g}^{t^*}Cov(g^{*}(x(\bold{s},t-1)),\eta(\bold{s}^*,0))\end{aligned}$$ Before proceeding towards the proof we mention two results from Gaussian process theory (see [@Adler07] for details) that will be used subsequently. \[result:Borell-TIS\] Let us assume that $g$ is an almost surely bounded centered Gaussian process on an index set $T\subseteq\mathbb{R}$. Define $\sigma_{T}^{2}=\sup\limits_{t\in T}E(g_{t}^2)$.\ Then $P(\|g\|>s)\leq \exp(-\frac{(s-E\|g\|)^2}{2\sigma_{T}^2})$ for $s>E(\|g\|)$, where $\|g\|=\sup\limits_{t}g_{t}$. \[result:Dudley\] Under the assumptions of the Borell-TIS inequality,\ $E\|g\|\leq K\mathlarger{\int_{0}^{\mbox{diam}(T)}}\sqrt{H(\epsilon)}d\epsilon$,\ where $\mbox{diam}(T)=\underset{\bold{s}_1,\bold{s}_2\in T}\sup d(\bold{s}_1,\bold{s}_2)$ is the diameter of the index set $T$ with respect to the canonical pseudo-metric $d$ associated with the Gaussian process $g$, given by $d(\bold{s}_1,\bold{s}_2)=\sqrt{E(g(\bold{s}_1)-g(\bold{s}_2))^2}$, and $H(\epsilon)=\ln{N(\epsilon)}$, where $N(\epsilon)$ is the minimum number of $\epsilon$-balls required to cover the index set $T$ with respect to the canonical pseudo-metric $d$; $K$ is a universal constant. With the above two results, we are ready to prove Lemma \[lemma:small\_cov\]. Consider $Var((g^{*}(X(\bold{s},t-1))))$.
Observe that $$\begin{aligned} Var((g^{*}(X(\bold{s},t-1))))&\leq E\left((g^{*}(X(\bold{s},t-1)))^{2}\right)\\ & \leq E(\sup\limits_{x}|g^{*}(x)|^{2})\ \ \ \\ & =\mathlarger{\int_{0}^{\infty}}P(\sup\limits_{x}|g^{*}(x)|^{2}>u)du\ \ \ (\text{by the tail sum formula})\\ &\leq 2\mathlarger{\int_{0}^{\infty}}P(\sup\limits_{x} g^{*}(x)>\sqrt{u})du\\ &= 2\mathlarger{\int_{0}^{L^{2}}}P(\sup\limits_{x} g^{*}(x)>\sqrt{u})du+2\mathlarger{\int_{L^{2}}^{\infty}} P(\sup\limits_{x} g^{*}(x)>\sqrt{u})du\\ &(\text{where $L=\max{(E(\sup\limits_{x}g^{*}(x)),0)}$})\\ &\leq 2L^{2}+2\mathlarger{\int_{L^{2}}^{\infty}}e^{-\frac{(\sqrt{u}-L)^2}{2\sigma_{g}^{2}}}du\ \ \ (\text{using Result \ref{result:Borell-TIS}})\end{aligned}$$ Now, using the change of variable $\sqrt{u}=z+L$, the integral $\mathlarger{\int_{L^{2}}^{\infty}}e^{-\frac{(\sqrt{u}-L)^2}{2\sigma_{g}^{2}}}du$ can be reduced to the form $$\begin{aligned} &\mathlarger{\int_{0}^{\infty}}e^{-\frac{z^2}{2\sigma_{g}^{2}}}2zdz+2L\mathlarger{\int_{0}^{\infty}}e^{-\frac{z^2}{2\sigma_{g}^{2}}}dz\\ &=2\sigma_{g}^{2}+L\sigma_{g}(\sqrt{2\pi})\end{aligned}$$ Hence, $Var((g^{*}(X(\bold{s},t-1))))\leq 2L^{2}+4\sigma_{g}^{2}+2L\sigma_{g}(\sqrt{2\pi})$.\ But, $0\leq L\leq K\mathlarger{\int_{0}^{diam(T)}}\sqrt{H(\epsilon)}d\epsilon$ by Result \[result:Dudley\], and it is not difficult to see that $H(\epsilon)$ is a decreasing function of $\sigma_{g}^2$. The same is true of $diam(T)$ viewed as a function of $\sigma_{g}^2$.
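As an arithmetic aside, the closed form $2\sigma_{g}^{2}+L\sigma_{g}\sqrt{2\pi}$ obtained for $\int_{L^{2}}^{\infty}e^{-(\sqrt{u}-L)^{2}/(2\sigma_{g}^{2})}\,du$ by the change of variable $\sqrt{u}=z+L$ (so that $du=2(z+L)\,dz$) can be confirmed by numerical quadrature; the parameter values below are arbitrary.

```python
import numpy as np

# Illustration only: quadrature check of the change of variable sqrt(u) = z + L,
# under which
#   int_{L^2}^inf exp(-(sqrt(u)-L)^2 / (2 s^2)) du = 2 s^2 + L s sqrt(2 pi).
s, L = 0.9, 1.7
z = np.linspace(0.0, 40.0 * s, 2_000_001)             # truncated z-grid
integrand = np.exp(-z**2 / (2 * s**2)) * 2 * (z + L)  # du = 2 (z + L) dz
numeric = float(np.sum((integrand[:-1] + integrand[1:]) * np.diff(z)) / 2)
closed_form = 2 * s**2 + L * s * np.sqrt(2 * np.pi)
print(numeric, closed_form)              # agree to quadrature accuracy
```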
These two facts together permit applicability of the monotone convergence theorem to yield $$\begin{aligned} 0\leq \lim_{\sigma_{g}^{2}\rightarrow 0^+}L\leq \lim_{\sigma_{g}^{2}\rightarrow 0^+}K\mathlarger{\int_{0}^{diam(T)}}\sqrt{H(\epsilon)}d\epsilon & \leq \lim_{\sigma_{g}^{2}\rightarrow 0^+}K\mathlarger{\int_{0}^{\infty}}\sqrt{H(\epsilon)}\mathbb{I}(\epsilon\leq diam(T))d\epsilon\\ &\leq K\mathlarger{\int_{0}^{\infty}}\lim_{\sigma_{g}^{2}\rightarrow 0^+}\sqrt{H(\epsilon)}\mathbb{I}(\epsilon\leq diam(T))d\epsilon=0.\end{aligned}$$ So, $\lim_{\sigma_{g}^{2}\rightarrow 0^+}L=0$ which in turn implies $Var((g^{*}(X(\bold{s},t-1))))$ can be made arbitrarily small by making $\sigma_{g}^{2}$ small. This proves Lemma \[lemma:small\_cov\]. Arguing similarly one can also show that for arbitrarily small $\epsilon>0$, $\exists\ \delta>0$ such that $Cov(g^{*}(X(\bold{s},t^*-1)),g^{*}(X(\bold{s}^*,t^*-1)))<\epsilon$ for $0<\sigma_{g}^2<\delta$. Moreover, see that the bound is uniform in $\bold{s}$ and $t$. 
Since $|\beta_{1g}|<1$, using the bound repeatedly in (15), we obtain $$\begin{aligned} &|Cov(X(\bold{s},t),X(\bold{s}^*,t^*))-\beta_{1g}^{t-t^*}Cov(X(\bold{s},t^*),X(\bold{s}^*,t^*))|\\ &\leq \frac{\epsilon}{1-|\beta_{1g}|}\end{aligned}$$ Similarly, using the bound repeatedly in (16), we obtain $$\begin{aligned} &|Cov(X(\bold{s},t^*),X(\bold{s}^*,t^*))-Cov(X(\bold{s},0),X(\bold{s}^*,0))-[\frac{1-\beta_{1g}^{2(t^*+1)}}{1-\beta_{1g}^2}]c_{\eta}(\bold{s},\bold{s}^*)|\\ &\leq \left[\frac{\epsilon}{1-|\beta_{1g}|}+\frac{\epsilon}{1-|\beta_{1g}|}+\frac{\epsilon}{1-|\beta_{1g}|}\right].\end{aligned}$$ Combining them we get $$\begin{aligned} &|Cov(X(\bold{s},t),X(\bold{s}^*,t^*))-\beta_{1g}^{t-t^*}Cov(X(\bold{s},0),X(\bold{s}^*,0))-\beta_{1g}^{t-t^*}[\frac{1-\beta_{1g}^{2(t^*+1)}}{1-\beta_{1g}^2}]c_{\eta}(\bold{s},\bold{s}^*)|\\ &\leq \frac{\epsilon}{1-|\beta_{1g}|}\left[1+3|\beta_{1g}|^{t-t^*}\right] \leq \frac{4\epsilon}{1-|\beta_{1g}|}\end{aligned}$$ Now, plugging this approximation into the expression for $c_{y}((\bold{s},t),(\bold{s}^*,t^*))$, we get the desired result. So, Theorem 3.5 is finally proved. Part $(a)$: From the condition of the theorem it is clear that $\exists$ a probability space $(\Omega,\mathcal{F},P)$ and a set $A\in\mathcal{F}$ such that $P(A)=1$ and, for $\omega\in A$, $X(\bold{s},0)(\omega),\eta(\bold{s},t)(\omega), \epsilon(\bold{s},t)(\omega)$ are continuous functions in $\bold{s}$, where $t=1,2,3,\cdots$, and $g(x)(\omega),f(x)(\omega)$ are continuous functions in $x$. Then, since the composition of continuous functions is continuous, $X(\bold{s},1)(\omega) =g(X(\bold{s},0)(\omega))(\omega)+\eta(\bold{s},1)(\omega)$ is a continuous function in $\bold{s}$. Proceeding recursively, one can prove that $X(\bold{s},t)(\omega)$ is a continuous function in $\bold{s}$ for any $t$.
Once we have shown that $X(\bold{s},t)(\omega)$ is a continuous function, it follows in the same way that $Y(\bold{s},t)(\omega)=f(X(\bold{s},t)(\omega))(\omega)+\epsilon(\bold{s},t)(\omega)$ is a continuous function in $\bold{s}$. So, part $(a)$ is proved. Part $(b)$: The proof of part $(b)$ follows along lines similar to that of part $(a)$. First, we state a simple lemma. Let us consider two real valued functions $u(z)$ and $v(x,y)$ such that both of them are $k$ times differentiable. Then the composition function $u(v(x,y))$ is also $k$ times differentiable. The proof of this lemma is basically a generalization of the chain rule for multivariate functions and can be found in advanced multivariate calculus textbooks. We give a brief sketch of the proof. First we clarify the term $k$ times differentiable for the function $v(x,y)$: it means that all mixed partial derivatives of $v(x,y)$ of order $k$ exist. We prove the lemma using mathematical induction: first, we show that the lemma is true for $k=1$, and then we show that if the lemma is true for $k-1$ then it must be true for $k$ as well. That the lemma is true for $k=1$ easily follows from the chain rule for multivariate functions. Now we prove the second step. By the induction hypothesis the lemma is true for the $k-1$ case, and $u(z)$ and $v(x,y)$ are $k$ times differentiable. We want to show that $u(v(x,y))$ is also $k$ times differentiable. Without loss of generality, we consider the mixed partial derivative $\frac{\partial^{k}}{\partial x^{k_{1}}\partial y^{k_{2}}}\left(u(v(x,y))\right)$, where $k_{1}+k_{2}=k$, and show that it exists. Observe that this partial derivative is equivalent to $\frac{\partial^{k-1}}{\partial x^{k_{1}}\partial y^{k_{2}-1}} \left(u^{\prime}(v(x,y))(\frac{\partial}{\partial y}v(x,y))\right)$, provided the latter exists. Since, by the induction hypothesis, the lemma is true for the $k-1$ case and $u^{\prime}(z)$ and $v(x,y)$ are $k-1$ times differentiable, their composition $u^{\prime}(v(x,y))$ is also $k-1$ times differentiable.
On the other hand, $\frac{\partial}{\partial y}v(x,y)$ is also $k-1$ times differentiable. So, their product $u^{\prime}(v(x,y))(\frac{\partial}{\partial y}v(x,y))$ is also $k-1$ times differentiable. Hence the partial derivative $\frac{\partial^{k-1}}{\partial x^{k_{1}}\partial y^{k_{2}-1}}\left(u^{\prime} (v(x,y))(\frac{\partial}{\partial y}v(x,y))\right)$ exists. Equivalently, $\frac{\partial^{k}}{\partial x^{k_{1}}\partial y^{k_{2}}} \left(u(v(x,y))\right)$ exists. Similarly one can prove the existence of the other mixed partial derivatives of $u(v(x,y))$ of order $k$. Hence, by induction, the proof follows. Part $(b)$: From the condition of the theorem it is clear that $\exists$ a probability space $(\Omega,\mathcal{F},P)$ and a set $A\in\mathcal{F}$ such that $P(A)=1$ and, for $\omega\in A$, $X(\bold{s},0)(\omega),\eta(\bold{s},t)(\omega),\epsilon(\bold{s},t)(\omega)$ are $k$ times differentiable functions in $\bold{s}$, where $t=1,2,3,\cdots$, and $g(x)(\omega),f(x)(\omega)$ are $k$ times differentiable functions in $x$. Then by the above lemma $X(\bold{s},1)(\omega)=g(X(\bold{s},0)(\omega))(\omega)+\eta(\bold{s},1)(\omega)$ is a $k$ times differentiable function in $\bold{s}$. The rest of the proof is exactly as in part $(a)$. ADLER, R. J. and TAYLOR, J. E. (2007). *Random Fields and Geometry*. Springer. ISBN-13: 978-0-387-48112-8. ANDERES, E. B. and STEIN, M. L. (2008). Estimating deformations of isotropic Gaussian random fields on the plane. *Ann. Statist.*, 36(2): 719-741, 2008. ISSN 0090-5364. doi: 10.1214/009053607000000893. URL <http://www.jstor.org/stable/25464644>. BANERJEE, S., CARLIN, B. P. and GELFAND, A. (2004). *Hierarchical Modeling and Analysis for Spatial Data*. Chapman & Hall/CRC. ISBN 1-58488-410-X. BANERJEE, S., GAMERMAN, D. and GELFAND, A. (2005). Spatial process modelling for univariate and multivariate dynamic spatial data. *Environmetrics*, 16(5): 465-479, 2005.
[^1]: Suman Guha is a PhD student in the Bayesian and Interdisciplinary Research Unit, Indian Statistical Institute, 203, B. T. Road, Kolkata 700108. His research is supported by the CSIR SPM Fellowship, Govt. of India. *Corresponding e-mail address:* sumanguha\_r@isical.ac.in. [^2]: Sourabh Bhattacharya is an Assistant Professor in the Bayesian and Interdisciplinary Research Unit, Indian Statistical Institute, 203, B. T. Road, Kolkata 700108. *Corresponding e-mail address:* sourabh@isical.ac.in.
--- abstract: 'In our previous paper I (del Valle–Turbiner, 2019) we developed a formalism to study the general $D$-dimensional radial anharmonic oscillator with potential $V(r)= \frac{1}{g^2}\,\hat{V}(gr)$. It was based on Perturbation Theory (PT) in powers of $g$ (weak coupling regime) and in inverse, fractional powers of $g$ (strong coupling regime), formulated in $r$-space and in $(gr)$-space, respectively. As a result, the [*Approximant*]{} was introduced: a locally-accurate, uniform, compact approximation of a wave function. Taken as a trial function in variational calculations, it led to variational energies of unprecedented accuracy for the cubic anharmonic oscillator. In this paper the formalism is applied to the quartic and sextic spherically-symmetric radial anharmonic oscillators with two-term potentials $V(r)= r^2 + g^{2(m-1)}\, r^{2m}, m=2,3$, respectively. It is shown that a two-parametric Approximant for the quartic oscillator and a five-parametric one for the sextic oscillator, used to calculate the variational energy for the first four eigenstates, are accurate to 8-12 figures for any $D=1,2,3,\ldots$ and $g \geq 0$, while the relative deviation of the Approximant from the exact eigenfunction is less than $10^{-6}$ for any $r \geq 0$.' author: - 'J.C. del Valle' - 'A.V. Turbiner' bibliography: - 'references2.bib' title: | Radial Anharmonic Oscillator: Perturbation Theory, New Semiclassical Expansion, Approximating Eigenfunctions.\ II. Quartic and Sextic Anharmonicity Cases --- Introduction {#introduction .unnumbered} ============ In our previous paper [@DelValle1], henceforth denoted by I, we studied the energy and wave function of the radial anharmonic potential $$\label{potential} V(r)\ =\ \frac{1}{g^2}\,\hat{V}(g\,r)\ =\ \frac{1}{g^2}\,\sum_{k=2}^{m}a_k\,g^k\,r^k\ ,\ r \in [0, \infty)\ .$$ Here $r$ is the hyperradius in $D$-dimensional space, $g \geq 0$ is a coupling constant and $a_k$, $k=2,3,...,m$, are parameters.
It was assumed that $a_2$ and $a_m$ are both positive, that the potential is positive, $V(r) > 0$ for $r>0$, and that its possible minima are non-degenerate, with the minimum at $r=0$: $V(0)=0$. Hence, the minimum at the origin is global. The corresponding radial Schrödinger operator takes the form $$\label{radialop} \hat{h}_r\ =\ -\frac{\hbar^2}{2M}\left({\partial}_r^2\ +\ \frac{D-1}{r}{\partial}_r\ -\ \frac{l(l+D-2)}{r^2}\right)\ +\ \frac{1}{g^2}\hat{V}(gr)\quad ,\quad {\partial}_r\equiv\frac{d}{dr}\ ,$$ where $\hbar$ is the Planck constant and $M$ is the mass of the system. It has an infinite discrete spectrum and contains no terms non-analytic in $g$. Needless to say, at $m=3,4,\ldots$ and $g \neq 0$ the spectral problem - the radial Schrödinger equation $\hat{h}_r\Psi=E\Psi$ - is not exactly solvable. Hence, energies and radial wave functions can be found only approximately. The general formalism to study (\[radialop\]) was developed in I, where it was successfully applied to the particular case of the cubic radial anharmonic oscillator. In order to make this paper self-contained, some results and relevant equations obtained in I will be briefly repeated here. Our ultimate goal is to construct a *locally accurate* uniform approximation of the wave function $\Psi$ for some of the low-lying states via the variational method. It is worth mentioning that in $D=1$ any eigenfunction is labeled by a single quantum number, $\Psi_n$ with $n=0,1,\ldots$, which counts the number of nodes, i.e. the points where the eigenfunction vanishes. In $D>1$ two quantum numbers are needed to identify the state, $\Psi_{n_r,\ell}$, where $n_r,\ell=0,1,\ldots$ are the radial quantum number and the angular momentum, respectively. We focus on the ground state function, $n_r=\ell=0$; thus, we drop the labels and denote it $\Psi(r)$, written in the exponential representation $$\label{Phi} \Psi(r)\ =\ e^{-\frac{1}{\hbar}\,\Phi(r)}\ ,$$ where the function $\Phi$ is the *phase* of the wave function.
This representation allows us to transform the radial Schrödinger equation into a Riccati one, $$\label{riccati} \hbar\,{\partial}_r\, y\ -\ y\,\left(y \ -\ \frac{\hbar\,(D-1)}{r}\right)\ =\ 2 M\, \left(E \ -\ V \right)\quad ,\quad\ y = {\partial}_r\,\Phi(r)\ .$$ There are two ways to remove the explicit appearance of the Planck constant $\hbar$ and the mass $M$ in this equation: ${\bf (i)}$ by introducing in (\[riccati\]) the new $\hbar$-dependent variable $$\label{change-v} v\ =\ \bigg(\frac{2M}{\hbar^2}\bigg)^{\frac{1}{4}} \,r \ ,$$ which we call the [*quantum*]{} coordinate, and then changing phase and energy $$\label{change-to-Y} y\ =\ (2M \hbar^2)^{\frac{1}{4}}\ \mathcal{Y}\quad , \quad E\ =\ \frac{\hbar}{(2M)^{\frac{1}{2}}}\,{\varepsilon}\ .$$ After that the equation (\[riccati\]) becomes the *Riccati-Bloch* (RB) equation, see I, $$\label{riccati-bloch} {\partial}_v\mathcal{Y}\ -\ \mathcal{Y}\left(\mathcal{Y} - \frac{D-1}{v}\right)\ =\ {\varepsilon}\left({\lambda}\right)\ -\ \frac{1}{{\lambda}^2}\,\hat{V}\left({\lambda}v\right)\quad ,\quad {\partial}_v \equiv \frac{d}{dv}\ ,$$ where the *effective* coupling constant is $$\label{effective} {\lambda}\ =\ \left(\frac{\hbar^2}{2M}\right)^{\frac{1}{4}}\, g\ ,$$ while ${\varepsilon}$ plays the role of energy. The RB equation governs the dynamics in $v(r)$-space. ${\bf (ii)}$ one can introduce in (\[riccati\]) the *classical*, $\hbar$-independent coordinate $$\label{change-to-u} u\ =\ g\,r\ ,$$ and define a new unknown function $$\label{change-to-Z} \mathcal{Z}\ =\ \frac{g}{(2M)^{1/2}}\, y\ .$$ It is easy to check that $\mathcal{Z}(u)$ obeys a non-linear differential equation $${\lambda}^2\,{\partial}_u\mathcal{Z}\ -\ \mathcal{Z}\left(\mathcal{Z} - \frac{{\lambda}^2(D-1)}{u}\right)\ =\ {\lambda}^2\,{\varepsilon}({\lambda})\ -\ \hat{V}(u) \quad , \quad {\partial}_u\equiv\frac{d}{du}\ , \label{Bloch}$$ which was called the (radial) *Generalized Bloch* (GB) equation in I. 
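As a consistency check (not part of I), the passage from the radial Schrödinger equation to the Riccati equation (\[riccati\]) can be verified symbolically; a minimal sympy sketch for the $\ell=0$ sector, with a generic potential $V(r)$:

```python
# Symbolic check: substituting Psi = exp(-Phi/hbar) into (h_r - E) Psi = 0
# reproduces the Riccati equation  hbar y' - y (y - hbar (D-1)/r) = 2M (E - V),
# with y = Phi'(r).  Here l = 0 and V(r) is left generic.
import sympy as sp

r, hbar, M, D, E = sp.symbols('r hbar M D E', positive=True)
Phi = sp.Function('Phi')
V = sp.Function('V')

psi = sp.exp(-Phi(r)/hbar)
# radial Schroedinger operator acting on psi (l = 0)
h_psi = -hbar**2/(2*M)*(sp.diff(psi, r, 2) + (D - 1)/r*sp.diff(psi, r)) + V(r)*psi

# multiply (h_r - E) psi = 0 by 2M/psi: the exponentials cancel term by term
lhs = sp.expand(2*M*(h_psi - E*psi)/psi)

y = sp.diff(Phi(r), r)
riccati = hbar*sp.diff(y, r) - y*(y - hbar*(D - 1)/r) - 2*M*(E - V(r))

assert sp.simplify(sp.expand(lhs - riccati)) == 0
print("Riccati form recovered")
```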
The definitions of ${\varepsilon}$ and ${\lambda}$ remain the same as in the RB case, see (\[change-to-Y\]) and (\[effective\]). The GB equation governs the dynamics in $(gr)$-space. Let us note that for $D=1$ the equation (\[Bloch\]) was called in [@ESCOBARI; @ESCOBARII; @Shuryak] the *(one-dimensional) GB Equation*. The classical coordinate $u$ is related to the quantum one $v$ by the remarkably simple relation $$\label{u vs v} u\ =\ {\lambda}\,v\ ,$$ see (\[change-v\]), (\[effective\]) and (\[change-to-u\]). For both equations (\[riccati-bloch\]) and (\[Bloch\]) the Perturbation Theory (PT) in powers of the effective coupling constant ${\lambda}$ can be developed; it generates the weak coupling expansion for ${\varepsilon}$ $${\varepsilon}({\lambda})\ =\ \sum_{n=0}^{\infty} {\varepsilon}_n {\lambda}^{n}\ \label{eps-in-la}$$ and for the functions $\mathcal{Y}(v)$ and $\mathcal{Z}(u)$, $$\mathcal{Y}(v)\ =\ \sum_{n=0}^{\infty}\mathcal{Y}_n(v){\lambda}^n\ , \label{Y-in-la}$$ $$\mathcal{Z}(u)\ =\ \sum_{n=0}^{\infty}\mathcal{Z}_n(u){\lambda}^n\ . \label{Z-in-la}$$ Since the ${\varepsilon}_n$ depend on the parameters $a_k$, see (\[potential\]), the PT in powers of ${\lambda}\sim (g\,\hbar^{1/2})$ (\[eps-in-la\]) can be considered both as an expansion in powers of the coupling constant $g$ and as a semiclassical expansion in powers of $\hbar^{1/2}$. Similarly, since $\mathcal{Z}_n(u)$ (\[Z-in-la\]) does not depend on $\hbar$, the PT for $\mathcal{Z}(u)$ in powers of ${\lambda}\sim (g\,\hbar^{1/2})$ can be considered as a semiclassical expansion in powers of $\hbar^{1/2}$ as well as an expansion in powers of $g$. When PT in powers of ${\lambda}$ is developed for the RB equation, it leads to the weak coupling expansion of $\mathcal{Y}(v)$ via the so-called Non-Linearization Procedure [@TURBINER:1984]. In turn, PT for the GB equation leads to a *new version* of the semiclassical expansion.
A non-trivial connection between the two expansions (\[Y-in-la\]) and (\[Z-in-la\]) has already been established in I. Expansion (\[Z-in-la\]) can be transformed into an expansion of the phase, $$\label{phase} \Phi(r;{\lambda})\ =\ \sum_{n=0}^{\infty}{\lambda}^n G_n(r)\quad ,\quad G_1(r)\ =\ 0\ ,$$ see (\[Phi\]), where $$\label{gndef} G_n(r)\ =\ \left(\frac{2M}{g^2}\right)^{1/2}\int^{r}Z_n(gr)\,dr\ .$$ Keeping $g$ fixed, the expansion (\[phase\]) can be regarded as a semiclassical expansion in powers of $\hbar^{1/2}$ of the phase (\[Phi\]) in the non-classical domain at large $r$ (beyond a turning point). The RB and GB equations can also be used to study the strong coupling regime, the domain of large $g$. In this case, a perturbative approach implemented in the RB equation leads to the strong coupling expansion of the energy, $$E\ =\ \left(\frac{\hbar^2}{2M}\right)^{\frac{1}{m+2}} g^{2\frac{(m-2)}{m+2}}\sum_{n=0}^{\infty}\,\tilde{{\varepsilon}}_n\tilde{{\lambda}}^{-n}\quad ,\quad \tilde{{\lambda}}\ =\ \left(\frac{\hbar^2\,g^4}{2M}\right)^{\frac{1}{m+2}}\ =\ {\lambda}^{\frac{4}{m+2}}\ , \label{Scoupling}$$ where $\tilde{{\varepsilon}}_n$, $n=1,2,...$, are coefficients. A similar expression holds for the energy of any excited state. There is an interesting connection between the behavior of the wave function at small distances and the strong coupling regime, see I. The analytical information on the phase (\[Phi\]), collected from the weak and strong coupling regimes, was used to design the *Approximant*: an approximation of the exact ground state $(0,0)$ wave function, valid at any $D > 0$, of the form $$\label{approximant} \Psi_{(0,0)}^{(t)}\ =\ e^{-\frac{1}{\hbar}\,\Phi_t}\ .$$ A straightforward modification of $\Psi^{(t)}_{(0,0)}$, via multiplication by a suitable polynomial in $r$ with real roots, allows one to construct Approximants for excited states; therefore, the Approximant for the ground state serves as the building block.
The approximate phase $\Phi_t$, called the [*Phase Approximant*]{}, is constructed in such a way that it interpolates between the expansions at small and large $r$ and between the weak and strong coupling regimes. The main result obtained in I is a simple formula for the phase $\Phi_t$, applicable to the general anharmonic oscillator potential: $$\frac{1}{\hbar}\Phi_t\ =\ \frac{\tilde{a}_0\ +\ \tilde{a}_1\,g\,r\ +\ \frac{1}{g^2}{\hat V}(r\,;\ \tilde{a}_2, \dots , \tilde{a}_{m})}{\sqrt{\frac{1}{g^2\,r^{2}}\,{\hat V}(r\,;\ \tilde{b}_2, \ldots ,\tilde{b}_{m})}}\ +\ \text{Logarithmic Terms\,($r\,;\, \{\tilde{c}\}$)} \ , \label{generalrecipe}$$ cf. [@Turbiner2005] for $D=1$ at $m=4$, where one can set ${\tilde b}_2 = 1$ for normalization. Here ${\hat V} (r; \{\tilde{a}\})$ and ${\hat V} (r; \{\tilde{b}\})$ are modified versions of the original potential (\[potential\]): instead of the parameters $\{a\}$, the parameters $\{\tilde{a}\}$ and $\{\tilde{b}\}$ are taken, respectively. The insertion of logarithmic terms into the Phase Approximant $\Phi_t$ (with dependence on some extra parameters $\{\tilde{c}\}$) mimics the logarithmic terms in the exact wave function. In order to fix the values of the free parameters in $\Phi_t$ (\[generalrecipe\]), the function $\Psi_{0,0}^{(t)}$ (\[approximant\]) is used as a trial function in variational calculations: we compute the parameter-dependent variational energy $$\label{evar} E_{var}\ =\ \frac{\int_0^{\infty}\Psi^{(t)}_{0,0}\,(\hat{h}_r\,\Psi^{(t)}_{0,0})\,r^{D-1}\,dr} {\int_0^{\infty}(\Psi^{(t)}_{0,0})^2 \ r^{D-1}\,dr}\ ,$$ and then minimize it with respect to the parameters $\{\tilde{a},\tilde{b},\tilde{c}\}$ to obtain an upper bound on the exact energy. Since $E_{var}$ corresponds to the first two terms in PT, namely, $$E_{var}\ \equiv\ E_{0}^{(1)}\ =\ E_0\ +\ E_1\ , \label{firsta}$$ the Non-Linearization Procedure can be used to estimate its accuracy by calculating the higher-order corrections $E_2$, $E_3$, ..., to $E_{var}$.
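For orientation only, the variational recipe (\[evar\]) can be illustrated numerically with a far cruder trial function than the Approximant — a simple Gaussian $e^{-\alpha r^2/2}$, a hypothetical stand-in for $\Psi^{(t)}_{0,0}$ — for the quartic potential $r^2+g^2r^4$ in units $\hbar=2M=1$:

```python
# Numerical sketch of the variational recipe (eq. "evar") for V = r^2 + g^2 r^4
# in units hbar = 2M = 1, with an illustrative Gaussian trial function
# psi_t(r) = exp(-alpha r^2 / 2)  (NOT the paper's Approximant).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def e_var(alpha, g=0.1, D=3):
    # h_r psi_t = [D*alpha + (1 - alpha^2) r^2 + g^2 r^4] psi_t (exact for a Gaussian)
    psi2 = lambda r: np.exp(-alpha*r**2) * r**(D - 1)
    num = quad(lambda r: (D*alpha + (1 - alpha**2)*r**2 + g**2*r**4)*psi2(r), 0, np.inf)[0]
    den = quad(psi2, 0, np.inf)[0]
    return num/den

res = minimize_scalar(lambda a: e_var(a), bounds=(0.5, 3.0), method='bounded')
print(round(res.fun, 5))   # upper bound on the ground-state energy, D=3, g=0.1
```

At $g=0$ the bound is saturated at $\alpha=1$, reproducing the harmonic value $E=D$; for $g>0$ the minimization gives an upper bound slightly above it, which the Approximant then improves by many orders of magnitude.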
Therefore, various partial sums define different approximations to the exact energy. For instance, the partial sum $$E_0^{(2)}\ =\ E_0\ +\ E_1\ +\ E_2\ , \label{seconda}$$ corresponds to the second order approximation, while the variational energy itself (\[firsta\]) is the first order one. In general, the partial sum $$\label{nth} E^{(n)}_0\ =\ E_0\ +\ E_1\ +\ \ldots\ +\ E_n\ ,$$ defines the $n$th approximation. In our previous work I, we presented results for the cubic anharmonic oscillator with the Approximant (\[approximant\]) used as a trial function in variational calculations for several low-lying states. It was explicitly checked that the relative deviation of the Approximant (\[approximant\]) from the exact eigenfunction is less than $10^{-4}$ for all $r \in[0,\infty)$. Therefore the Approximant (\[approximant\]) represents a locally accurate, uniform approximation of the exact wave function. Simultaneously, the absolute accuracy in energy reaches the extremely high value $\sim 10^{-7}$ at any dimension $D$ and coupling constant $g \in[0, \infty)$! In the present paper it will be shown that, within the same formalism, the Approximant leads to highly accurate results for the quartic and sextic radial anharmonic potentials. We follow the same program as in I: as a first step we focus on the ground state, investigating the structure of PT in powers of ${\lambda}$ and in inverse powers of ${\lambda}$ in order to construct the Approximant for the phase of the ground state wave function. Then a “functionally similar” Approximant is used for the phase of the trial functions of the excited states. With this knowledge we perform variational calculations for some low-lying states, imposing orthogonality conditions with respect to the ground state and between excited states.
The accuracy of the variational energy and the quality of the Approximant are evaluated in two different ways: $(i)$ by using PT and calculating corrections to the variational calculations in the framework of the Non-Linearization Procedure; $(ii)$ by using one of the most accurate numerical methods for solving the Schrödinger equation: the Lagrange Mesh Method (LMM) in the formulation proposed by D. Baye, see [@BAYE]. Finally, we use the Approximant to calculate the first two dominant terms of the strong coupling expansion for the ground state. The paper is divided into two large parts: Section I is dedicated to the quartic two-term radial anharmonic oscillator and Section II is devoted to the sextic two-term radial anharmonic oscillator. In the Conclusions it will be shown that choosing the parameters in the Phase Approximant so as to reproduce exactly [*all*]{} growing terms at large $r$ and to remove the term linear in $r$ in the small-$r$ expansion leads to the striking fact that the relative deviation of the Phase Approximant from the exact phase is bounded and does not exceed $\sim 10^{-2}$. This reduces the number of free parameters to one, two and five for the cubic, quartic and sextic radial anharmonic oscillators, respectively, while the accuracy of the variational energy is reduced to five-six figures for any coupling constant, which is still an unprecedented result. Quartic Anharmonic Oscillator ============================= The simplest, formally even, $V(-r)=V(r)$, radial anharmonic oscillator potential is characterized by quartic anharmonicity, $$V(r)\ =\ r^2\ +\ g^2\,r^4\ , \label{potquartic}$$ cf. (\[potential\]) at $m=4$ with $a_3=0$ and $a_4=1$. It is worth mentioning that many properties that the quartic radial anharmonic oscillator exhibits are typical of any (formally) even anharmonic potential, $V(r)=V(-r)$.
In particular, the polynomial nature of the corrections ${\varepsilon}_n$ and $\mathcal{Y}_n(v)$, see (\[eps-in-la\]) and (\[Y-in-la\]), is one such common property. The results of the forthcoming Section are obtained in a similar way to those for the cubic anharmonic potential; therefore we omit some details already presented in I. PT in the Weak Coupling Regime ------------------------------ For the quartic anharmonic oscillator the perturbative expansions of ${\varepsilon}$ and $\mathcal{Y}(v)$, derived from the RB equation (\[riccati-bloch\]), $$\label{riccati-bloch-4} {\partial}_v\mathcal{Y}\ -\ \mathcal{Y}\left(\mathcal{Y} - \frac{D-1}{v}\right)\ =\ {\varepsilon}\left({\lambda}\right)\ -\ v^2\ -\ {\lambda}^2\,v^4 \quad , \quad {\partial}_v \equiv \frac{d}{dv}\ ,$$ where $v$ and ${\lambda}$ are defined in (\[change-v\]) and (\[effective\]), are of the form $$\label{eps-in-la-4} {\varepsilon}\ =\ {\varepsilon}_0\ +\ {\varepsilon}_2\,{\lambda}^2\ +\ {\varepsilon}_4\,{\lambda}^4\ +\ \ldots \quad ,\quad {\varepsilon}_0\ =\ D\ ,$$ and $$\label{Y-in-la-4} \mathcal{Y}(v)\ =\ \mathcal{Y}_0(v)\ +\ \mathcal{Y}_2(v)\,{\lambda}^2\ +\ \mathcal{Y}_4(v)\,{\lambda}^4\ +\ \ldots\ \quad ,\quad \mathcal{Y}_0(v)\ =\ v \ ,$$ respectively. All odd terms in ${\lambda}$ vanish in both expansions, see (\[eps-in-la\]) and (\[Y-in-la\]). In general, a finite number of corrections can be calculated by linear-algebra means; in particular, the first non-vanishing corrections are $$\label{correction2-4} {\varepsilon}_2\ =\ \frac{1}{4}\,D\,(D+2)\quad ,\quad \mathcal{Y}_2(v)\ =\ \frac{1}{2}\,v^3\ +\ \frac{1}{4}\,(D+2)\, v\ ,$$ while the next two corrections ${\varepsilon}_{4,6}$ and $\mathcal{Y}_{4,6}(v)$ are presented in Appendix A. In principle, the algebraic procedure for finding PT corrections holds for all even anharmonic potentials, $V(r)=V(-r)$; however, the occurrence of a single odd monomial term in the potential is enough to break this property.
In this situation, the calculation of the correction ${\varepsilon}_n$ becomes a numerical procedure, as happens in the cubic case, see paper I. Furthermore, in contrast to the cubic case, it can be shown that for all even potentials any correction ${\varepsilon}_{2n}$ is a polynomial in $D$. In general, all corrections $\mathcal{Y}_{2n}(v)$ are odd-degree polynomials in $v$ of the form $$\mathcal{Y}_{2n}(v)\ =\ v\,\sum_{k=0}^{n}c_{2k}^{(2n)}v^{2(n-k)}\ , \label{Y2n}$$ where any coefficient $c_{2k}^{(2n)}$ is a polynomial in $D$ of degree $k$, $$c_{2k}^{(2n)}\ =\ P_{k}^{(2n)} (D)\ \quad ,\quad c_{2n}^{(2n)}\ =\ \frac{{\varepsilon}_{2n}}{D}\ . \label{Y2n-c}$$ Due to the invariance $v {\rightarrow}-v$ of the original equation (\[riccati-bloch-4\]), it is convenient to simplify it by introducing a new unknown function and a new variable, $$\mathcal{Y}\ =\ v\, \mathcal{\tilde Y}\quad \mbox{and}\quad {\rm v}\ =\ v^2\ .$$ As a result, (\[riccati-bloch-4\]) is reduced to $$\label{riccati-bloch-4-tilde} 2 {\rm v} {\partial}_{\rm v} \mathcal{\tilde Y}\ -\ \mathcal{\tilde Y}\left({\rm v} \mathcal{\tilde Y} - D\right)\ =\ {\varepsilon}\left({\lambda}\right)\ -\ {\rm v}\ -\ {\lambda}^2\,{\rm v}^2 \quad , \quad {\partial}_{\rm v} \equiv \frac{d}{d{\rm v}}\ .$$ This is a convenient form of the RB equation for carrying out the PT analysis. In particular, the first correction (\[correction2-4\]) to $\mathcal{\tilde Y}$ becomes a linear function, $$\mathcal{\tilde Y}_2\ =\ \frac{1}{4}\,\left[2{\rm v}\ +\ (D+2)\right]\ ,$$ and, in general, $\mathcal{\tilde Y}_{2n}$ is a polynomial in ${\rm v}$ of degree $n$, see (\[Y2n\]). The corrections $\mathcal{\tilde Y}_{4,6}$ are presented in Appendix A. The energy corrections ${\varepsilon}_{2n}$ are of the form [@DOLGOVPOPOV1978] $${\varepsilon}_{2n}(D)\ =\ D\,(D+2)\,R_{n-1}(D)\ , \label{factorizationq}$$ where $R_{n-1}(D)$ is a polynomial of degree $(n-1)$ in $D$; in particular, $R_0=\frac{1}{4}$.
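The low-order corrections (\[correction2-4\]) can be verified by inserting the truncated expansions into the RB equation (\[riccati-bloch-4\]) and collecting powers of ${\lambda}$; a short sympy check:

```python
# Check eps_2 = D(D+2)/4 and Y_2 = v^3/2 + (D+2)v/4 by substituting the
# truncated expansions into the Riccati-Bloch equation for the quartic case:
#   Y' - Y (Y - (D-1)/v) = eps - v^2 - lam^2 v^4
import sympy as sp

v, lam, D = sp.symbols('v lam D')
Y = v + lam**2*(v**3/2 + (D + 2)*v/4)     # Y_0 + lam^2 Y_2
eps = D + lam**2*D*(D + 2)/4              # eps_0 + lam^2 eps_2

residual = sp.expand(sp.diff(Y, v) - Y*(Y - (D - 1)/v) - (eps - v**2 - lam**2*v**4))

# orders lam^0 and lam^2 must cancel exactly; lam^4 is the neglected remainder
poly = sp.Poly(residual, lam)
assert poly.coeff_monomial(1) == 0
assert poly.coeff_monomial(lam**2) == 0
print("orders lam^0 and lam^2 verified")
```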
From (\[factorizationq\]) one can see that any energy correction ${\varepsilon}_{2n}$ vanishes when $D=0$, a property which holds for any anharmonic oscillator, see I. Consequently, their formal sum results in ${\varepsilon}(D=0)=0$ and ultimately in $E(D=0)=0$. Thus, at $D=0$ the radial Schrödinger equation is reduced to $$-\frac{\hbar^2}{2M}\left(\frac{d^2\Psi(r)}{dr^2}\ -\ \frac{1}{r}\,\frac{d\Psi(r)}{dr}\right)\ +\ (r^2+g^2\,r^4)\,\Psi(r)\ =\ 0\ . \label{D=0-q}$$ Needless to say, this equation defines the zero mode of the Schrödinger operator. It can be solved exactly in terms of Airy functions [@DOLGOVPOPOV1979], $$\label{D=0-q-psi} \Psi\ =\ C_1\,\text{Ai}\left(\frac{1+({\lambda}v)^2}{{\lambda}^{4/3}}\right)\ +\ C_2\,\text{Bi}\left(\frac{1+({\lambda}v)^2}{{\lambda}^{4/3}}\right)\ ,$$ for the definitions of $v$ and ${\lambda}$ see (\[change-v\]) and (\[effective\]), respectively. However, this linear combination cannot be made normalizable at $D=0$ by any choice of the constants $C_1$ and $C_2$. Hence, the original assumption $E=0$ is incorrect; this opens the possibility of non-perturbative contributions at $D=0$ leading to $E \neq 0$. Interestingly, at the non-physical dimension $D=-2$, all corrections ${\varepsilon}_{2n}$ with $n \geq 1$ also vanish; thus, the formal sum of corrections results in ${\varepsilon}=-2$, see (\[factorizationq\]). In this case, no exact solution of the corresponding radial Schrödinger equation is found. It is not clear whether the Schrödinger equation has a solution in the Hilbert space at ${\varepsilon}=-2$. Generating Functions -------------------- As mentioned above, one can determine the coefficients in the polynomial correction $\mathcal{Y}_{2n}(v)$, i.e. $c_{2k}^{(2n)}, k=0,1,\ldots, n$, see (\[Y2n\]), by algebraic means. However, as discussed in I, a more efficient procedure for calculating them is to construct their generating functions in $(u=gr)$-space.
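Before turning to the generating functions, note that the Airy-type zero mode discussed above can be checked directly. In units $\hbar^2/2M=1$, an ansatz $\Psi=\text{Ai}(a+br^2)$ solves (\[D=0-q\]) once $a$ and $b$ are fixed by matching powers of $r$, which gives $4ab^2=1$ and $4b^3=g^2$; the argument obtained this way may differ from that of (\[D=0-q-psi\]) by a convention-dependent rescaling. A numerical sketch:

```python
# Direct check of the D=0 zero-mode equation  psi'' - psi'/r = (r^2 + g^2 r^4) psi
# (units hbar^2/2M = 1).  An Airy function of argument t = a + b r^2 solves it
# once the constants satisfy 4 a b^2 = 1 and 4 b^3 = g^2 (matching powers of r).
import numpy as np
from scipy.special import airy

g = 1.3
b = (g**2/4)**(1/3)
a = 1/(4*b**2)          # from 4 a b^2 = 1

for r in [0.3, 0.8, 1.5, 2.5]:
    t = a + b*r**2
    Ai, Aip, _, _ = airy(t)
    psi   = Ai
    dpsi  = 2*b*r*Aip                     # chain rule
    d2psi = 4*b**2*r**2*t*Ai + 2*b*Aip    # uses Ai''(t) = t Ai(t)
    residual = d2psi - dpsi/r - (r**2 + g**2*r**4)*psi
    assert abs(residual) < 1e-12
print("Airy zero mode verified")
```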
It was shown that the correction $\mathcal{Z}_{2n}(u)$ is, in fact, a generating function of the coefficients $c_{2k}^{(2n)}, k=0,1,2,\ldots$, see below. For quartic anharmonic oscillator the function $\mathcal{Z}(u)$, derived from GB equation $${\lambda}^2\,{\partial}_u\mathcal{Z}\ -\ \mathcal{Z}\left(\mathcal{Z} - \frac{{\lambda}^2(D-1)}{u}\right)\ =\ {\lambda}^2\,{\varepsilon}({\lambda})\ -\ u^2 \ -\ u^4 \quad , \quad {\partial}_u\equiv\frac{d}{du}\ , \label{Bloch-4}$$ cf. (\[Bloch\]), can be written as an expansion in terms of generating functions, namely, $$\label{expansionZq} \mathcal{Z}(u)\ =\ \mathcal{Z}_0(u)\ +\ \mathcal{Z}_2(u)\,{\lambda}^2\ +\ \mathcal{Z}_4(u)\,{\lambda}^4\ +\ \ldots\ ,$$ where each coefficient $$\mathcal{Z}_{2k}(u)\ =\ u\, \sum_{n=k}^{\infty}c_{2k}^{(2n)}u^{2(n-k)} \quad ,\quad k\ =\ 0,1,\ldots\ ,$$ is given by infinite series. Here the expansion of ${\varepsilon}$ in powers of ${\lambda}$ is given by (\[eps-in-la-4\]). Note that all generating functions $\mathcal{Z}_{2k+1}(u)$, $k=1,2,...$ of odd order ${\lambda}^{2k+1}$ are absent in expansion (\[Z-in-la\]). Due to invariance $u {\rightarrow}-u$ it is convenient to simplify (\[Bloch-4\]) by introducing $$\mathcal{Z}\ =\ u \mathcal{\tilde Z}\quad \mbox{and}\quad {\rm u}\ =\ u^2\ .$$ Finally, (\[Bloch-4\]) is reduced to $$2 {\lambda}^2\,{\rm u}\,{\partial}_{\rm u} \mathcal{\tilde Z}\ -\ \mathcal{\tilde Z}\left({\rm u} \mathcal{\tilde Z} - {{\lambda}^2\,D}\right)\ =\ {\lambda}^2\,{\varepsilon}({\lambda})\ -\ {\rm u} \ -\ {\rm u}^2 \quad , \quad {\partial}_u\equiv\frac{d}{du}\ . 
\label{Bloch-4-tilde}$$ It is easy to find the first two terms of the expansion (\[expansionZq\]) explicitly, $$\mathcal{\tilde Z}_0({\rm u})\ =\ \sqrt{1+{\rm u}}\ , \label{Z0quartic}$$ $$\label{Z2quartic} \mathcal{\tilde Z}_2({\rm u})\ =\ \frac{{\rm u}+D\left(1+{\rm u}-\sqrt{1+{\rm u}}\right)}{2{\rm u}(1+{\rm u})}\ .$$ Interestingly, from the polynomial form of the coefficient $c_{2k}^{(2n)}$ in $D$, see (\[factorizationq\]), one can deduce the structure of the generating function in $D$, $$\mathcal{\tilde Z}_{2k}({\rm u})\ =\ \sum_{n=0}^{k}f^{(k)}_n({\rm u})\,D^n\ ,$$ where all $f^{(k)}_n({\rm u}), \ n=0,1,\ldots, k$ are real functions. In general, $\mathcal{\tilde Z}_{2k}({\rm u})$ is a polynomial in $D$ of degree $k$. The asymptotic behavior of the generating functions $\mathcal{Z}_{2k}(u),\ k=0,1,2,\ldots$ in the expansion (\[expansionZq\]) at large $u$ is related to the asymptotic behavior of the function $y$ at large $r$ in a quite interesting manner. It can easily be found that for fixed (effective) coupling constant $g\,({\lambda})$, the asymptotic expansion of $y$ at large $r$, rewritten in the variable $v$, see (\[change-v\]), has the form $$y\ =\ (2M\hbar^2)^{\frac{1}{4}}\left({\lambda}v^2\ +\ \frac{1}{2{\lambda}}\ +\ \frac{D+1}{2}v^{-1}\ -\ \frac{4{\lambda}^2{\varepsilon}+1}{8{\lambda}^3}v^{-2}\ +\ \ldots\right)\ ,\quad v{\rightarrow}\infty\ . \label{qexpansion}$$ Note that the first three terms of the expansion are ${\varepsilon}$- and $D$-independent. On the other hand, the first three terms in the expansion of the lowest generating function $(\frac{2M}{g^2})^{1/2}\mathcal{Z}_0(u)$ at large $u$ are $$\label{Z0quartic_exp} \left(\frac{2M}{g^2}\right)^{1/2}\mathcal{Z}_0(u)\ =\ \left(\frac{2M}{g^2}\right)^{1/2}\left(u^2\ +\ \frac{1}{2}\ -\ \frac{1}{8}u^{-2}\ +\ \ldots\right)\ , \quad u{\rightarrow}\infty \ ,$$ see (\[change-to-Z\]), and are likewise ${\varepsilon}$- and $D$-independent. 
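The explicit forms (\[Z0quartic\]) and (\[Z2quartic\]) can be cross-checked by substituting the expansion (\[expansionZq\]) into (\[Bloch-4-tilde\]) and collecting powers of ${\lambda}^2$, with ${\varepsilon}_0=D$. A minimal symbolic sketch in Python/sympy (the variable names are ours, not the paper's):

```python
# Verify Z0 = sqrt(1+u) and Z2 of (Z0quartic), (Z2quartic) against the
# reduced GB equation  2*l^2*u*Z' - Z*(u*Z - l^2*D) = l^2*eps(l) - u - u^2,
# order by order in l^2, with eps = D + eps_2*l^2 + ...
import sympy as sp

u, D = sp.symbols('u D', positive=True)
Z0 = sp.sqrt(1 + u)
Z2 = (u + D*(1 + u - sp.sqrt(1 + u)))/(2*u*(1 + u))

# order l^0:  -u*Z0**2 + u + u**2 = 0
order0 = sp.simplify(-u*Z0**2 + u + u**2)
assert order0 == 0

# order l^2:  2*u*Z0' - 2*u*Z0*Z2 + D*Z0 - D = 0   (eps_0 = D)
order2 = sp.simplify(2*u*sp.diff(Z0, u) - 2*u*Z0*Z2 + D*Z0 - D)
assert order2 == 0
print("Z0 and Z2 satisfy the GB equation through order lambda^2")
```

Both orders cancel identically for arbitrary $D$, confirming the two leading generating functions.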
To compare the expansions (\[qexpansion\]) and (\[Z0quartic\_exp\]), let us replace the classical coordinate $u$ by the quantum one $v$ (\[u vs v\]), $$u\ =\ {\lambda}\,v\ .$$ (Evidently, large $v$ implies large $u$ and vice versa, as long as ${\lambda}$ is fixed.) Then the expansion (\[Z0quartic\_exp\]) becomes $$\left(\frac{2M}{g^2}\right)^{1/2}\mathcal{Z}_0({\lambda}v)\ =\ (2M\hbar^2)^{\frac{1}{4}}\left({\lambda}v^2\ +\ \frac{1}{2{\lambda}}\ -\ \frac{1}{8{\lambda}^3}v^{-2}\ +\ \ldots\right)\ ,\quad v {\rightarrow}\infty\ .$$ It reproduces exactly the first two terms in (\[qexpansion\]) but fails to reproduce the term $O(v^{-1})$, which is absent in this expansion. However, the next generating function $(\frac{2M}{g^2})^{1/2}{\lambda}^2\mathcal{Z}_2({\lambda}v)$ at large $v$ reproduces the term $O(v^{-1})$ of the original expansion (\[qexpansion\]) exactly, $$\left(\frac{2M}{g^2}\right)^{1/2}{\lambda}^2\mathcal{Z}_2(\lambda v)\ =\ (2M\hbar^2)^{\frac{1}{4}}\left(\frac{D+1}{2}v^{-1}\ -\ \frac{D}{2{\lambda}}v^{-2}\ +\ \ldots\right)\ , \quad\quad v{\rightarrow}\infty\ .$$ In turn, it fails to reproduce correctly the term $O(v^{-2})$. Thus, the expansion of the sum $(\frac{2M}{g^2})^{1/2}(\mathcal{Z}_0({\lambda}v)+{\lambda}^2\mathcal{Z}_2({\lambda}v))$ at large $v$ reproduces exactly the first three, ${\varepsilon}$-independent terms of the expansion (\[qexpansion\]). These three terms are responsible for the normalizability of the wavefunction at large $v$. All higher generating functions $\mathcal{Z}_4({\lambda}v)$, $\mathcal{Z}_6({\lambda}v),\ldots$ contribute at large $v$ to the same term $O(v^{-2})$ as follows $$\left(\frac{2M}{g^2}\right)^{1/2}{\lambda}^{2n}\mathcal{Z}_{2n}({\lambda}v)\ =\ (2M\hbar^2)^{\frac{1}{4}} \left(-\frac{{\varepsilon}_{2n-2}\,{\lambda}^{2n-3} }{2} \, v^{-2}\ +\ \ldots\right)\ ,\quad v{\rightarrow}\infty\ ,\ n\,>\,2\ ,$$ where ${\varepsilon}_{2n-2}$ is the energy PT correction of order $(2n-2)$. 
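The matching of the first three, ${\varepsilon}$- and $D$-independent terms can be verified directly. A small sympy sketch, assuming the natural units used later in the text ($\hbar=1$, $M=1/2$, so ${\lambda}=g$, $v=r$ and the overall prefactors reduce to $1/g$ and $1$):

```python
# Check that Z0(g v)/g + g*Z2(g v) reproduces the first three terms
#   g*v^2 + 1/(2g) + (D+1)/(2v)
# of the large-v expansion (qexpansion), in units hbar=1, M=1/2.
import sympy as sp

v, g, D = sp.symbols('v g D', positive=True)
u = g*v
Z0 = u*sp.sqrt(1 + u**2)                                         # Z0(u)
Z2 = (u**2 + D*(1 + u**2 - sp.sqrt(1 + u**2)))/(2*u*(1 + u**2))  # Z2(u)
total = Z0/g + g*Z2

c2 = sp.limit(total/v**2, v, sp.oo)                  # coefficient of v^2
c0 = sp.limit(total - c2*v**2, v, sp.oo)             # constant term
cm1 = sp.limit((total - c2*v**2 - c0)*v, v, sp.oo)   # coefficient of 1/v
assert sp.simplify(c2 - g) == 0
assert sp.simplify(c0 - 1/(2*g)) == 0
assert sp.simplify(cm1 - (D + 1)/2) == 0
print("large-v terms g*v^2, 1/(2g), (D+1)/(2v) reproduced")
```

The coefficient of $v^{-2}$, by contrast, is not reproduced by this truncation, in agreement with the text.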
As a consequence, no matter how many generating functions we take into account in the expansion $(\frac{2M}{g^2})^{1/2}(\mathcal{Z}_0({\lambda}v)+{\lambda}^2\mathcal{Z}_2({\lambda}v)+\ldots)$, the term of order $O(v^{-2})$ in (\[qexpansion\]) cannot be reproduced exactly. The Approximant and Variational Calculations --------------------------------------------- The expansion of $\mathcal{Z}(u)$, see (\[expansionZq\]), can easily be transformed into an expansion of the phase by integration in $u$: any $\mathcal{Z}_{2n}(u)$ becomes the generating function $G_{2n}(u)$. Eventually, the expansion of the phase in generating functions becomes $$\Phi(u;{\lambda})\ =\ G_0(u)\ +\ {\lambda}^2\, G_2(u)\ +\ {\lambda}^4\, G_4(u)\ +\ \ldots\ , \label{genexpphi-u}$$ using (\[gndef\]). Keeping $g$ fixed, (\[genexpphi-u\]) can be regarded as a semiclassical expansion of the phase. For the present case of the quartic anharmonic oscillator, this expansion proceeds in integer powers of the Planck constant $\hbar$, see (\[effective\]). Without loss of generality we set $\hbar=1$ and $M = 1/2$; thus $v = r$, ${\varepsilon}= E$, ${\lambda}= g$ and $\mathcal{Y} = y$, see (\[change-v\]), (\[change-to-Y\]) and (\[effective\]), while $u=g\,r$ remains. The expansion of the phase (\[genexpphi-u\]) reduces to $$\Phi(r;\,g)\ =\ G_0(r;\,g)\ +\ g^2\, G_2(r;\,g)\ +\ g^4\, G_4(r;\,g)\ +\ \ldots\ , \label{genexpphi}$$ Note that any generating function $G_{2k}(r;\,g),\ k=0,1,\ldots$, can be written in closed analytic form in terms of elementary functions. For example, $$\begin{aligned} G_0(r;\,g) &\ =\ \frac{1}{3 g^2}\left(1+g^2r^2\right)^{3/2} , \label{firstr4}\\ g^2\,G_2(r;\,g) &\ =\ \frac{1}{4}\log[1+g^2r^2]\ +\ \frac{D}{2}\log\left[1+\sqrt{1+g^2r^2}\right]\ , \label{secondr4}\end{aligned}$$ while the next generating functions $G_{4,6}(r;\,g)$ are presented in Appendix A. 
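The statement that the phase generating functions are obtained from the $\mathcal{Z}_{2n}$ by integration can be checked symbolically: in these units ($\hbar=1$, $M=1/2$, $u=g\,r$) one expects $G_0' = \mathcal{Z}_0(gr)/g$ and $(g^2 G_2)' = g\,\mathcal{Z}_2(gr)$. A minimal sympy sketch (our notation):

```python
# Verify that G0 of (firstr4) and g^2*G2 of (secondr4) are the
# r-integrals of the generating functions Z0, Z2 (units hbar=1, M=1/2):
#   d/dr G0 = Z0(g r)/g ,   d/dr [g^2 G2] = g * Z2(g r)
import sympy as sp

r, g, D = sp.symbols('r g D', positive=True)
s = sp.sqrt(1 + g**2*r**2)

G0 = (1 + g**2*r**2)**sp.Rational(3, 2)/(3*g**2)
g2G2 = sp.log(1 + g**2*r**2)/4 + D*sp.log(1 + s)/2

u = g*r
Z0 = u*sp.sqrt(1 + u**2)
Z2 = (u**2 + D*(1 + u**2 - sp.sqrt(1 + u**2)))/(2*u*(1 + u**2))

assert sp.simplify(sp.diff(G0, r) - Z0/g) == 0
assert sp.simplify(sp.diff(g2G2, r) - g*Z2) == 0
print("G0' = Z0/g and (g^2 G2)' = g*Z2 verified")
```

Both identities hold for arbitrary $D$ and $g$.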
The explicit expressions for the generating functions $G_0(r;\,g)$ and $G_2(r;\,g)$ allow us to construct the Approximant $\Psi_{0,0}^{(t)}=e^{-\Phi_t}$. Following (\[generalrecipe\]) the (phase) Approximant has the form $$\Phi_t\ =\ \dfrac{\tilde{a}_0\ +\ \tilde{a}_2\, r^2\ +\ \tilde{a}_4\, g^2\, r^4}{\sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2}}\ +\ \dfrac{1}{4}\log\left[1\ +\ \tilde{b}_4\,g^2\, r^2\right]\ +\ \dfrac{D}{2}\log\left[1\ +\ \sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2}\right] \ , \label{quartictrialg}$$ where $\tilde{a}_{0,2,4}, \tilde b_4$ are free parameters. The logarithmic terms added in (\[quartictrialg\]) generate a prefactor to the exponential function; in fact, they are just a certain *minimal* modification of the ones which occur in the second generating function $G_2(r;\,g)$ (\[secondr4\]). As a result, the Approximant of the ground state function for arbitrary $D=1,2,3,\ldots$ is given by $$\label{ApproximantQuartic} \Psi_{(0,0)}^{(t)}\ =\ \frac{1}{\left(1\ +\ \tilde{b}_4\,g^2\,r^2\right)^{1/4} \left(1\ +\ \sqrt{1\ + \tilde{b}_4\, g^2\, r^2}\right)^{D/2}}\, \exp\left(-\ \dfrac{\tilde{a}_0\ +\ \tilde{a}_2\,r^2\ +\ \tilde{a}_4\,g^2\,r^4} {\sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2}}\right)\ .$$ This is the central formula of this Section. Following the derivation, we expect it to provide a highly accurate uniform approximation of the exact ground state eigenfunction, which will be checked and confirmed below. It must be emphasized that at $D=1$ the exponent (\[quartictrialg\]) in formula (\[ApproximantQuartic\]) coincides with the exponent found in [@Turbiner2005], [@Turbiner2010] but differs slightly in the logarithmic terms, hence in the form of the pre-factors in (\[ApproximantQuartic\]), with the same asymptotic behavior at $r {\rightarrow}\infty$. This is a consequence of the fact that at the time when [@Turbiner2005], [@Turbiner2010] were written, the GB equation, and thus the expansion in generating functions (\[genexpphi\]), was unknown. 
This difference leads to a nonessential increase in the accuracy of the variational energy based on (\[ApproximantQuartic\]) with respect to the ones used in [@Turbiner2005], [@Turbiner2010], while the local deviation from the exact function remains almost the same. It is easy to check that setting the constraint $$\label{a4} \tilde{b}_4\ =\ 9\,\tilde{a}_4^2\ ,$$ allows us to reproduce the dominant term of the expansion (\[qexpansion\]) exactly, hence the asymptotic behavior of the phase at large distances. It is worth mentioning that relaxing this constraint, i.e. keeping the parameters $\tilde{a}_4$ and $\tilde{b}_4$ free, demonstrates that the constraint is fulfilled with high accuracy. This justifies imposing the constraint (\[a4\]) on the variational parameters. Note that by choosing $$\tilde{a}_0\ =\ \frac{1}{3 g^2}\quad ,\quad \tilde{a}_2\ =\ \frac{2}{3}\quad ,\quad \tilde{a}_4\ =\ \frac{1}{3}\ , \label{reproduction1}$$ the (phase) Approximant $\Phi_t$ reproduces exactly the first two terms of the expansion in generating functions (\[genexpphi\]). This already leads to highly accurate variational energies, see Table I below. However, all three parameters $\tilde{a}_0, \tilde{a}_2, \tilde{a}_4$ in (\[reproduction1\]) are far from optimal from the viewpoint of the variational calculations. Minimizing the energy with respect to these parameters, one can see that they appear as smooth functions of $g^2$, at the same time being slowly varying in $D$ for fixed $g^2$. Plots of the parameters ${\tilde a}_{0,2,4}$ [*vs*]{} $g^2$ for $D=1,2,3,6$ are shown in Fig. \[fig:varpar\], while ${\tilde a}_{0,2,4}$ [*vs*]{} $D$ for fixed $g^2$ are shown in Fig. \[fig:varparfixed\]. It is worth mentioning that at small $g \lesssim 0.1$ the parameters ${\tilde a}_{0,2,4}$ are $D$-independent. 
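The claim that the choice (\[reproduction1\]), together with the constraint (\[a4\]), makes $\Phi_t$ coincide exactly with $G_0 + g^2 G_2$ can be verified symbolically; a minimal sympy sketch:

```python
# With a0=1/(3g^2), a2=2/3, a4=1/3 and b4=9*a4^2=1 (constraint (a4)),
# the trial phase Phi_t of (quartictrialg) equals G0 + g^2*G2.
import sympy as sp

r, g, D = sp.symbols('r g D', positive=True)
a0, a2, a4 = 1/(3*g**2), sp.Rational(2, 3), sp.Rational(1, 3)
b4 = 9*a4**2                      # constraint (a4); here b4 = 1
s = sp.sqrt(1 + b4*g**2*r**2)

Phi_t = (a0 + a2*r**2 + a4*g**2*r**4)/s \
        + sp.log(1 + b4*g**2*r**2)/4 + D*sp.log(1 + s)/2

G0 = (1 + g**2*r**2)**sp.Rational(3, 2)/(3*g**2)
g2G2 = sp.log(1 + g**2*r**2)/4 + D*sp.log(1 + sp.sqrt(1 + g**2*r**2))/2

assert sp.simplify(Phi_t - (G0 + g2G2)) == 0
print("Phi_t with parameters (reproduction1) equals G0 + g^2*G2")
```

The identity holds for all $r$, $g$ and $D$, i.e. the first two generating functions are reproduced exactly, not just asymptotically.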
Analyzing the parameters ${\tilde a}_{0,2,4}$ [*vs*]{} $g^2$ for different $D$, one can see the appearance of another, $D$-independent constraint on the parameters, $$\label{a2} \tilde{a}_2\ \approx \ \frac{1 + 27\,\tilde{a}_4^2}{18 \tilde{a}_4}\ .$$ It corresponds to the fact that the coefficient in front of $r$, another term growing at $r {\rightarrow}\infty$ in the trial phase (\[quartictrialg\]), is reproduced [*almost exactly*]{} in accordance with (\[qexpansion\]). Thus it can be concluded that the trial phase (\[quartictrialg\]) at large $r$ reproduces (almost) exactly all three terms growing with $r$: $r^3$, $r$ and $\log r$. Eventually, if we require all those terms to be reproduced exactly, the Approximant in its final form contains only two free parameters, $\{\tilde{a}_0, \tilde{a}_4\}$: the parameters $\{\tilde{a}_2, \tilde{b}_4\}$ obey the constraints (\[a2\]), (\[a4\]), respectively. [0.47]{} ![Ground state $(0,0)$: Variational parameters ${\tilde a}_0$ (a), ${\tilde a}_2$ (b) and ${\tilde a}_4$ (c) [*vs*]{} the coupling constant $g$ for $D=1,2,3,6$. Parameters (\[reproduction1\]), which allow one to reproduce the first two terms $G_0, G_2$ in expansion (\[genexpphi\]) (see text), are shown by a solid (black) line, which is horizontal for ${\tilde a}_2$ (b) and ${\tilde a}_4$ (c).[]{data-label="fig:varpar"}](a0_q.eps "fig:"){width="\linewidth"} [0.47]{} ![](a2_q.eps "fig:"){width="\linewidth"} [0.5]{} ![](a4_q.eps "fig:"){width="\linewidth"} [0.47]{} ![](a0_gfixed_q.eps "fig:"){width="\linewidth"} [0.47]{} ![](a2_gfixed_q.eps "fig:"){width="\linewidth"} [0.5]{} ![Ground state $(0,0)$: Variational parameters ${\tilde a}_0$ (a), ${\tilde a}_2$ (b), ${\tilde a}_4$ (c) [*vs*]{} $D$ for fixed $g^2=0.1, 1, 10$. The $D$-independent parameters (\[reproduction1\]), which allow one to reproduce the first two terms $G_0, G_2$ in expansion (\[genexpphi\]) (see text), are shown by solid (black) horizontal lines. In (a) the horizontal lines correspond to ${\tilde a}_0=\frac{1}{3g^2}$ at $g^2=0.1\, (i), 1\, (ii), 10\, (iii)$. 
[]{data-label="fig:varparfixed"}](a4_gfixed_q.eps "fig:"){width="\linewidth"} As was indicated in I, the Approximant of the ground state function $\Psi_{(0,0)}^{(t)}$ is a building block for constructing the Approximants of excited states. In particular, for $D=1$, in the case of the $n_r$-th excited state (at $r \geq 0$, see below) the Approximant has the form $$\label{approximantcuartic1} \Psi_{(n_r,p)}^{(t)}\ =\ \frac{r^p P_{n_r}(r^2)}{\left(1\ +\ \tilde{b}_4\,g^2\,r^2\right)^{1/4}\left(1\ +\ \sqrt{1\ +\ \tilde{b}_4\, g^2\,r^2}\right)^{1/2}}\, \exp \left(-\ \dfrac{\tilde{a}_0\ +\ \tilde{a}_2\,r^2\ +\ \tilde{a}_4\,g^2\,r^4} {\sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2}}\right)\ ,$$ where $P_{n_r}(r^2)$ is a polynomial of degree $n_r$ with real coefficients and all real roots, normalized by $P_{n_r}(0)=1$; here $p=0,1$ and $(-)^p=\pm$ has the meaning of parity w.r.t. the reflection $r {\rightarrow}-r$. At $D=1$ there are two possible domains for the Schrödinger operator: $r \in [0, \infty)$ (i) and $r \in (-\infty, +\infty)$ (ii). In the first domain (i) only the states of positive parity $p=0$ exist; we denote them $(n_r, 0)$, and the negative nodes at $r < 0$ in (\[approximantcuartic1\]) are ignored. Hence, the state $(n_r, 0)$ is the $n_r$-th excited state. As for the second domain (ii), states of both positive and negative parity exist; we denote such a state by $(n_r, p)$. The state $(n_r, p)$ corresponds to the $(2n_r+p)$-th excited state. It is evident that the energy of the state $(n_r, 0)$ in the first domain (i) coincides with the energy of the state $(n_r, 0)$ in the second domain (ii). It is easily demonstrated that the energies $E_{(n_r, p)}$ obey the inequality $$E_{(n_r, 0)} < E_{(n_r, 1)} < E_{(n_r+1, 0)} \ ,$$ for any coupling constant. 
For fixed $n_r$, the $(n_r-1)$ free parameters of $P_{n_r}(r^2)$ are found by imposing the orthogonality constraints $$\label{constraint1} (\Psi_{(n_r,p)}^{(t)},\Psi_{(k_r,p)}^{(t)})\ =\ 0\quad ,\qquad k_r=0,\ldots,(n_r-1)\ .$$ For higher dimensions, $D>1$, the Approximant for the state $(n_r,\ell)$ takes the form $$\label{approximantcuarticD} \Psi_{(n_r,\ell)}^{(t)}\ =\ \frac{r^{\ell}\,P_{n_r}(r^2)}{\left(1\ +\ \tilde{b}_4\,g^2\,r^2\right)^{1/4} \left(1\ +\ \sqrt{1\ +\ \tilde{b}_4\, g^2\,r^2}\right)^{D/2}}\, \exp\left(-\ \dfrac{\tilde{a}_0\ +\ \tilde{a}_2\,r^2\ +\ \tilde{a}_4\,g^2\,r^4} {\sqrt{1\,+\,\tilde{b}_4\,g^2r^2}}\right)\ .$$ Here $P_{n_r}(r^2)$ is a polynomial of degree $n_r$ with $n_r$ real [*positive*]{} roots. Similarly to the one-dimensional case, for fixed angular momentum $\ell$ the $(n_r-1)$ free parameters of $P_{n_r}(r^2)$ are found by imposing the orthogonality constraints, $$\label{constraint} (\Psi_{(n_r,\ell)}^{(t)},\Psi_{(k_r,\ell)}^{(t)})\ =\ 0\quad ,\quad k_r=0,\ldots,(n_r-1)\ .$$ Finally, in order to fix the remaining three free parameters $\tilde{a}_{0,2,4}$ in the exponential, we use the Approximant, either $\Psi_{(n_r,p)}^{(t)}$ or $\Psi_{(n_r,\ell)}^{(t)}$, as an entry in variational calculations. A description of the computational code we used can be found in I. The variational energy calculations for four low-lying states with quantum numbers $(0,0)$, $(0,1)$, $(0,2)$, $(1,0)$ for different values of $D \geq 1$ and $g^2$ are presented in Tables \[Quartic\] - \[quartic3\]. For some of these states, the variational energy $E_{var} = E_0^{(1)}$, the first correction $E_2$ to it, as well as the corrected value of the variational energy $E_0^{(2)} = E_{var} + E_2$ are shown, see (\[firsta\]) and (\[seconda\]). Systematically, the variational energy $E_0^{(1)}$ is found with extremely high absolute accuracy, $10^{-8} - 10^{-14}$, which is established by calculating the correction $E_2$. 
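The orthogonality constraint can be implemented numerically in a few lines. The sketch below estimates the radial node of a $(1,0)$-type trial function, $P_1(r^2)=1-(r/r_0)^2$, from orthogonality to the ground state Approximant. For illustration only, it uses the non-optimal parameters (\[reproduction1\]) for both states at $D=3$, $g^2=1$; the actual zero-order node estimates in the text use the variationally optimized parameters, so the number produced here is merely indicative:

```python
# Estimate of the radial node r0 of a (1,0)-type state from the
# orthogonality constraint (constraint): with P_1(r^2) = 1 - (r/r0)^2,
# solve  int_0^inf Psi_(1,0) Psi_(0,0) r^(D-1) dr = 0  for r0.
# Illustrative (non-optimized) parameters (reproduction1) are assumed.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

D, g = 3, 1.0
a0, a2, a4 = 1/(3*g**2), 2/3, 1/3      # parameters (reproduction1)
b4 = 9*a4**2                            # constraint (a4)

def psi00(r):
    """Ground state Approximant (ApproximantQuartic)."""
    s = np.sqrt(1 + b4*g**2*r**2)
    pre = (1 + b4*g**2*r**2)**(-0.25) * (1 + s)**(-D/2)
    return pre*np.exp(-(a0 + a2*r**2 + a4*g**2*r**4)/s)

def overlap(r0):
    # <Psi_(1,0)|Psi_(0,0)> with the radial measure r^(D-1)
    f = lambda r: (1 - (r/r0)**2)*psi00(r)**2 * r**(D - 1)
    return quad(f, 0, np.inf)[0]

r0 = brentq(overlap, 0.1, 5.0)          # overlap changes sign in [0.1, 5]
print(f"zero-order node estimate r0 = {r0:.6f}")
```

With optimized parameters the same one-dimensional root-finding yields the zero-order node estimates $r_0^{(0)}$ discussed below.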
The variational results are compared with numerical ones obtained via the LMM, see I for technical details. The LMM results are obtained taking 50, 100, 200 mesh points for $g^2=0.1, 1., 10.$, respectively. This allows us to reach at least 12 d.d. in the energies of the $(0,0)$, $(0,1)$, $(0,2)$ states at $D=1,2,3,6$, see Tables \[Quartic\] - \[quartic2\], denoted as $E_0^{(2)}$. Analysis of the numerical results suggests that, when $E_2$ is evaluated, $E_0^{(2)}$ provides at least 12 - 13 correct d.d. It implies that, once $E_2$ is taken into account, all digits of $E_0^{(2)}$ printed in Tables \[Quartic\] - \[quartic3\] are exact. These highly accurate results are confirmed independently by calculating the second correction to the variational energy, $E_3$, which is always $\leq 10^{-12}$ for all $D$ and $g^2$ that we have studied. It indicates a very fast rate of convergence of the Non-Linearization Procedure when the trial functions (\[approximantcuartic1\]), (\[approximantcuarticD\]) are taken as the zeroth approximation. We must mention the hierarchy of eigenstates which holds for any fixed integer $D>1$ and $g^2$: $(0,0)$, $(0,1)$, $(0,2)$, $(1,0)$. Interestingly, it coincides with the hierarchy of the first four eigenstates for the cubic potential established in I. There is a considerable number of calculations in the literature devoted to estimating the energies of the first low-lying states in domain (ii). Our results, see Tables \[Quartic\] - \[quartic3\], are in complete agreement with [@Turbiner2005], [@Turbiner2010] for $D=1$ and are considerably superior to those obtained for $D>1$ and different $g^2$, see e.g. [@Taseli2dr4], [@WENIGER] and [@WITWIT2dr4]. The deviation of $\Psi_{(n_r,\ell)}^{(t)}$ from the exact (unknown) eigenfunction $\Psi_{(n_r,\ell)}$ can be estimated via the Non-Linearization Procedure. 
It can be shown that for the ground state function this deviation is extremely small and bounded, $$\left|\frac{\Psi_{(0,0)}(r)-\Psi_{(0,0)}^{(t)}(r)}{\Psi_{(0,0)}^{(t)}(r)}\right|\lesssim 10^{-6}\ ,$$ in the whole range $r \in [0,\infty)$ at any dimension $D$ and any $g^2$ that we considered. Therefore we can say that our Approximant $\Psi_{(0,0)}^{(t)}$ is a locally accurate approximation of the exact wave function $\Psi_{(0,0)}$ once the optimal parameters are chosen. A similar situation occurs for the Approximants for excited states at different $D$ and $g^2$. [max width=]{} [|c|ccc|ccc|]{} & &\ &$\quad\quad\quad E_0^{(1)}\quad\quad\quad$ &$\quad\quad-E_2\quad\quad$ &$\quad\quad\quad E_0^{(2)}\quad\quad\quad$ &$\quad\quad\quad E_0^{(1)}\quad\quad\quad$ &$\quad\quad-E_2\quad\quad$ &$\quad\quad\quad E_0^{(2)}\quad\quad\quad$\ ------------------------------------------------------------------------ 0.1 & 1.065285509544 & $3.00\times10^{-14}$ & 1.065285509544 & 2.168597211269 & $5.28\times10^{-14}$ & 2.168597211269\ 1.0 & 1.392351641563 & $3.37\times 10^{-11}$ & 1.392351641530 & 2.952050091995 & $3.17\times10^{-11}$ & 2.952050091962\ 10.0 & 2.449174072588 & $4.69\times 10^{-10}$ & 2.449174072118 & 5.349352819751 & $3.44\times10^{-10}$ & 5.349352819462\ & &\ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$\ ------------------------------------------------------------------------ 0.1 & 3.306872013152 & $2.20\times10^{-13}$ & 3.306872013152 & 6.908332111232 & $9.80\times10^{-14}$ & 6.908332111232\ 1.0 & 4.648812704237 & $2.69\times10^{-11}$ & 4.648812704210 & 10.390627295514 & $9.68\times10^{-12}$ & 10.390627295504\ 10.0 & 8.599003455030 & $2.22\times10^{-10}$ & 8.599003454807 & 19.936900374076 & $6.48\times10^{-11}$ & 19.936900374011\ \[Quartic\] \[quartic1\] [max width=]{} [|c|ccc|ccc|]{} & &\ & $\qquad\qquad E_0^{(1)}\qquad\qquad$ & $\qquad\quad -E_2 \qquad\quad$ &$\quad\quad\quad E_0^{(2)}\quad\quad\quad $ & $\quad\quad\quad 
E_0^{(1)}\quad\quad\quad$ &$\quad\quad-E_2\quad\quad$ &$\quad\quad\quad E_0^{(2)}\quad\quad\quad$\ ------------------------------------------------------------------------ 0.1 & 3.306872013236 & $8.33\times10^{-11}$ & 3.306872013153 & 4.477600360878 & $1.10 \times 10^{-10}$ & 4.477600360768\ 1.0 & 4.648812707206 & $2.99 \times 10^{-9}$ & 4.648812704212 & 6.462906003251 & $3.39 \times 10^{-9}$ & 6.462905999864\ 10.0 & 8.599003467556 & $1.27 \times 10^{-8}$ & 8.599003454810 & 12.138224752729 & $1.38 \times 10^{-8}$ & 12.138224738901\ & &\ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$\ ------------------------------------------------------------------------ 0.1 & 5.678682663377 & $1.33 \times 10^{-10}$ & 5.678682663243 & 9.447358518278 & $1.80 \times 10^{-10}$ & 9.447358518099\ 1.0 & 8.380342533658 & $3.56 \times 10^{-9}$ & 8.380342530101 & 14.658513816952 & $3.39 \times 10^{-9}$ & 14.658513813563\ 10.0 & 15.927096988667 & $1.40 \times 10^{-8}$ & 15.927096974709 & 28.536810849436 & $1.21 \times 10^{-8}$ & 28.536810837360\ [max width=]{} [|c|ccc|ccc|]{} & &\ & &$\qquad\qquad E_0^{(1)}\qquad\qquad$ &$\quad\quad-E_2\quad\quad$ &$\qquad\qquad E_0^{(2)}\qquad\qquad$\ ------------------------------------------------------------------------ 0.1 & & 6.908332112167 & $9.35 \times 10^{-10}$ & 6.908332111232\ 1.0 & & 10.390627321799 & $2.63 \times 10^{-8}$ & 10.390627295506\ 10 & & 19.936900479247 & $1.05 \times 10^{-7}$ & 19.936900374040\ & &\ & $E_0^{(1)}$ & $\qquad\qquad -E_2 \qquad\qquad$ & $E_0^{(2)}$ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$\ ------------------------------------------------------------------------ 0.1 & 8.165006438494 & $1.00 \times 10^{-9}$ & 8.165006437493 & 12.084471853886 & $1.11 \times 10^{-9}$ & 12.084471852776\ 1.0 & 12.485556075670 & $2.47 \times 10^{-8}$ & 12.485556051000 & 19.217523515555 & $1.97 \times 10^{-8}$ & 19.217523495879\ 10.0 & 24.145857689623 & $9.48 \times 10^{-8}$ & 24.145857594824 & 37.811402320699 & $6.90 
\times 10^{-8}$ & 37.811402251702\ In all cases the first order correction $y_1$ to the logarithmic derivative of the ground state is a bounded function for different $D$ and $g^2$. For example, for $g^2=1$ the first correction $y_1$ has the upper bound $$\label{cases-4} |y_1|_{max} \sim \begin{cases} 0.0106\ ,\qquad D=1 \\ 0.0092\ ,\qquad D=2 \\ 0.0086\ ,\qquad D=3 \\ 0.0072\ ,\qquad D=6 \\ \end{cases}$$ This is a consequence of the fact that by construction the derivative of $\Phi_t$ reproduces exactly all growing terms at large $r$ in the expansion (\[qexpansion\]). The boundedness of $y_1$ and the smallness of its maximum imply that we deal with a smartly designed zeroth-order approximation $\Psi_{0,0}^{(t)}$ which leads, in the framework of the Non-Linearization Procedure, to rapidly convergent series for the energy and the wave function. In Figs. \[fig:D=1q\] - \[fig:D=3q\], $y_0$ and $y_1$ [*vs*]{} $r$ are presented for $g^2=1$ in the physical dimensions $D=1,2,3$. Let us emphasize that all curves in these figures are slowly varying [*vs*]{} $D$. Therefore, it is not a surprise that similar plots appear for $D=6$ (not shown) as well as for other values of $g$. An analysis of these plots indicates that $|y_1|$ is an extremely small function in comparison with $|y_0|$ in the domain $0 \leq r \lesssim 1.7$, i.e. in the domain which provides the dominant contribution to the variational integrals. This is a consequence of the minimization of the energy functional, see (\[evar\]). It is the real reason why the energy correction $E_2$ is extremely small, being of order $\sim 10^{-8}$, or sometimes even smaller, $\sim 10^{-10}$. A similar situation occurs for the phase (and its derivative) of the Approximants for the excited states. The Approximant $\Psi_{(n_r,\ell)}^{(t)}$ (\[approximantcuarticD\]) also allows us to get an accurate estimate of the position of the radial nodes of the exact wave function. 
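As an elementary independent cross-check of the tabulated energies, one can diagonalize the Schrödinger operator directly. In the units of this Section ($\hbar=1$, $M=1/2$) the $D=1$, $g^2=1$ ground state solves $-\psi'' + (x^2+x^4)\psi = E\psi$; a simple finite-difference sketch (not the LMM used in the text):

```python
# Finite-difference check of the D=1, g^2=1 ground state energy of
# -psi'' + (x^2 + x^4) psi = E psi  on [-L, L] with Dirichlet ends.
import numpy as np
from scipy.linalg import eigh_tridiagonal

L, n = 10.0, 4000
x = np.linspace(-L, L, n)
h = x[1] - x[0]
V = x**2 + x**4

# tridiagonal Hamiltonian from the 3-point second-derivative stencil
main = 2.0/h**2 + V
off = -np.ones(n - 1)/h**2
E = eigh_tridiagonal(main, off, select='i', select_range=(0, 0))[0][0]
print(f"E0 = {E:.6f}")   # close to the tabulated 1.392351641530
assert abs(E - 1.392351641530) < 1e-3
```

The $O(h^2)$ discretization error of this crude scheme is already far below the last digits quoted in the Tables would require, but it confirms the leading digits independently.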
For example, in the state $(1,0)$, where there is a single positive node $r_0 > 0$, the trial function (\[approximantcuarticD\]) provides the zeroth-order estimate of the radial node, $r_0^{(0)}$, coming directly from the orthogonality constraint (\[constraint\]). The results are presented in Table \[quartic3\]. A comparison of these numerical results with those coming from the LMM indicates that the Approximant $\Psi_{1,0}^{(t)}$ defines the node to not less than 5 d.d. From Table \[quartic3\] it can be noted that the radial node is an increasing function of $D$ at fixed $g^2$, but a decreasing function of $g^2$ at fixed $D$. ![Quartic oscillator at $D=1$: function $y_0=(\Phi_t)'$ (on left) and its first correction $y_1$ (on right) [*vs*]{} $r$ for $g^2=1$. []{data-label="fig:D=1q"}](D1g1q.eps){width="99.00000%"} ![Quartic oscillator at $D=2$: function $y_0=(\Phi_t)'$ (on left) and its first correction $y_1$ (on right) [*vs*]{} $r$ for $g^2=1$.[]{data-label="fig:D=2q"}](D2g1q.eps){width="99.00000%"} ![Quartic oscillator at $D=3$: function $y_0=(\Phi_t)'$ (on left) and its first correction $y_1$ (on right) [*vs*]{} $r$ for $g^2=1$.[]{data-label="fig:D=3q"}](D3g1q.eps){width="99.00000%"} The Strong Coupling Expansion ----------------------------- In this Section, assuming $2M=\hbar=1$, we focus on finding the first two terms of the strong coupling expansion (\[Scoupling\]) of the ground state energy of the quartic anharmonic oscillator (\[potquartic\]), $$\label{stq} E\ \equiv \ g^{2/3}\, {\tilde {\varepsilon}}\ =\ g^{2/3}(\tilde{{\varepsilon}}_0\ +\ \tilde{{\varepsilon}}_2g^{-4/3}\ +\ \tilde{{\varepsilon}}_4g^{-8/3}\ +\ \ldots) \ .$$ In contrast to the weak coupling expansion, see (\[eps-in-la-4\]) at ${\lambda}=g$, the expansion (\[stq\]) has a finite radius of convergence in $1/g$. 
This expansion corresponds to PT in powers of $\hat{{\lambda}}$ for the potential $$\label{qST} V(w)\ =\ w^4\ +\ \hat{{\lambda}}^2\,w^2\ ,\quad \hat{{\lambda}}^2\ =\ g^{-4/3} \ ,$$ in the Schrödinger equation defined on $w \in [0,\infty)$. The transformed RB equation suitable for developing such a PT has the form, see Eq.(III.8) in I and (\[riccati-bloch-4-tilde\]), $$\label{riccatiST} 2{\rm w}{\partial}_{\rm w}{\mathcal{Y}}({\rm w})\ -\ {\mathcal{Y}}({\rm w}) \left({\rm w}{\mathcal{Y}}({\rm w})-{D}\right)\ =\ \tilde{{\varepsilon}}(\hat{{\lambda}})\ -\ {\hat{{\lambda}}^2}\,{\rm w} - {\rm w}^2 \quad ,\quad {\partial}_{\rm w}\equiv\frac{d}{d{\rm w}}\ ,\ {\rm w}\equiv w^2\ ,$$ cf. (\[riccati-bloch\]) with a different r.h.s., where $\hat{{\lambda}}$ plays the role of an effective coupling constant and $\tilde{{\varepsilon}}(\hat{{\lambda}})$ plays the role of the energy. In order to calculate the first two terms $\tilde{{\varepsilon}}_0$ and $\tilde{{\varepsilon}}_2$ of the strong coupling expansion (\[stq\]) we use the Approximant (\[ApproximantQuartic\]). In Table \[table:stcuartic1\], for different $D$, the leading coefficient $\tilde{{\varepsilon}}_0$ and the second perturbative correction $\hat{{\varepsilon}}_2$, as well as $\tilde{{\varepsilon}}_0^{(2)}=\tilde{{\varepsilon}}_0^{(1)}+\hat{{\varepsilon}}_2$, calculated via the Non-Linearization Procedure, are presented. Numerical results for $\tilde{{\varepsilon}}_0$, based on the LMM and obtained with 12 d.d., indicate that $\tilde{{\varepsilon}}_0^{(2)}$, found in the Non-Linearization Procedure with the Approximant (\[ApproximantQuartic\]), reproduces not less than 10 d.d. This accuracy is verified independently by calculating the next correction $\hat{{\varepsilon}}_3$, which turns out to be of order $\hat{{\varepsilon}}_3 \sim 10^{-2}\,\hat{{\varepsilon}}_2$. In turn, Table \[table:stcuartic2\] contains the results of the first two approximations for the coefficient $\tilde{{\varepsilon}}_2$ in (\[stq\]). 
It should be mentioned that our final results for the coefficient $\tilde{{\varepsilon}}_0$ reproduce and sometimes exceed the best results available in literature so far for $D=1$, see e.g. [@TurbinerST], [@StrongFernandez], [@WENIGST]. [max width=]{} ---------------------------------------------------------------- -------------------------------------------------------- ---------------------------------------------------------------- ---------------------------------------------------------------- -------------------------------------------------------- ---------------------------------------------------------------- $\quad\quad\quad \tilde{{\varepsilon}}_0^{(1)}\quad\quad\quad$ $\quad\quad\quad-\hat{{\varepsilon}}_2\quad\quad\quad$ $\quad\quad\quad \tilde{{\varepsilon}}_0^{(2)}\quad\quad\quad$ $\quad\quad\quad \tilde{{\varepsilon}}_0^{(1)}\quad\quad\quad$ $\quad\quad\quad-\hat{{\varepsilon}}_2\quad\quad\quad$ $\quad\quad\quad \tilde{{\varepsilon}}_0^{(2)}\quad\quad\quad$ \[4pt\] 1.060362090491 $7.02 \times 10^{-12}$ 1.060362090484 2.344829072753 $9.27 \times 10^{-12}$ 2.344829072744 \[4pt\] $\tilde{{\varepsilon}}_0^{(1)}$ $-\hat{{\varepsilon}}_2$ $\tilde{{\varepsilon}}_0^{(2)}$ $\tilde{{\varepsilon}}_0^{(1)}$ $-\hat{{\varepsilon}}_2$ $\tilde{{\varepsilon}}_0^{(2)}$ 3.799673029810 $9.27 \times 10^{-12}$ 3.799673029801 8.928082199890 $4.07 \times 10^{-11}$ 8.928082199850 \[4pt\] ---------------------------------------------------------------- -------------------------------------------------------- ---------------------------------------------------------------- ---------------------------------------------------------------- -------------------------------------------------------- ---------------------------------------------------------------- : Ground state $(0,0)$ energy $\tilde{{\varepsilon}}_0$ for the potential $W=r^4$ (see (\[qST\])) for $D=1,2,3,6$ found in PT based on the Approximant $\Psi_{(0,0)}^{(t)}$: $\tilde{{\varepsilon}}_0^{(1)}$ corresponds to 
the variational energy, $\hat{{\varepsilon}}_2$ is the second PT correction, $\tilde{{\varepsilon}}_0^{(2)}=\tilde{{\varepsilon}}_0^{(1)}+\hat{{\varepsilon}}_2$ is the corrected variational energy. 10 d.d. in $\tilde{{\varepsilon}}_0^{(2)}$ confirmed independently in LMM.[]{data-label="table:stcuartic1"} [max width=]{} ---------------------------------------------------------------- -------------------------------------------------------------- ---------------------------------------------------------------- ----------------------------------------------------------------- -------------------------------------------------------------- ---------------------------------------------------------------- $\quad\quad\quad \tilde{{\varepsilon}}_2^{(1)}\quad\quad\quad$ $\quad\quad\quad \tilde{{\varepsilon}}_{2,1}\quad\quad\quad$ $\quad\quad\quad \tilde{{\varepsilon}}_2^{(2)}\quad\quad\quad$ $\quad\quad\quad \tilde{{\varepsilon}}_2^{(1)}\quad\quad\quad $ $\quad\quad\quad \tilde{{\varepsilon}}_{2,1}\quad\quad\quad$ $\quad\quad\quad \tilde{{\varepsilon}}_2^{(2)}\quad\quad\quad$ 0.362022648388 $3.96 \times 10^{-10}$ 0.362022648784 0.651477773845 $4.38 \times 10^{-10}$ 0.651477774283 \[4pt\] $\tilde{{\varepsilon}}_2^{(1)}$ $\tilde{{\varepsilon}}_{2,1}$ $\tilde{{\varepsilon}}_2^{(2)}$ $\tilde{{\varepsilon}}_2^{(1)}$ $\tilde{{\varepsilon}}_{2,1}$ $\tilde{{\varepsilon}}_2^{(2)}$ 0.901605894682 $2.03 \times 10^{-9}$ 0.901605896709 1.526804282772 $-3.06 \times 10^{-8}$ 1.526804252175 \[4pt\] ---------------------------------------------------------------- -------------------------------------------------------------- ---------------------------------------------------------------- ----------------------------------------------------------------- -------------------------------------------------------------- ---------------------------------------------------------------- : Subdominant coefficient $\tilde{{\varepsilon}}_2$ in the strong coupling expansion (\[stq\]) for the ground 
state $(0,0)$ energy for the quartic radial anharmonic potential (\[qST\]) for different $D=1,2,3,6$. The first order correction $\tilde{{\varepsilon}}_{2,1}$ in PT, see text, is included. 10 d.d. in $\tilde{{\varepsilon}}_2^{(2)}$ confirmed independently in LMM.[]{data-label="table:stcuartic2"}

Quartic Radial Anharmonic Oscillator: conclusions
-------------------------------------------------

It is shown that the 2-parametric Approximants (\[ApproximantQuartic\]), (\[approximantcuartic1\]), (\[approximantcuarticD\]), taken as variational trial functions for the first four states $(0,0), (0,1), (0,2), (1,0)$ of the quartic radial $D$-dimensional anharmonic oscillator with the potential (\[potquartic\]), provide extremely high relative accuracy in the energy, ranging from $\sim 10^{-14}$ to $\sim 10^{-8}$ for different coupling constants $g$ and dimensions $D$. The variational parameters depend on $g$ and $D$ in a smooth manner and can be easily interpolated. For $D=1$ the Approximant (\[approximantcuartic1\]) appears as a slight generalization of the trial functions proposed in [@Turbiner2005; @Turbiner2010]: they differ in the form of the pre-exponential factors. If the variationally optimized Approximants are taken as the zero approximation in the Non-Linearization (iteration) procedure, they lead to a rapidly convergent scheme with rate of convergence $\sim 10^{-4}$. For the ground state, the relative deviation of the logarithmic derivative of the variationally optimized Approximant from the exact one was calculated [*vs*]{} the radial coordinate $r$ for different $g$ and $D$; it was always smaller than $\sim 10^{-6}$. This implies that the Approximants with parameters ${\tilde a}_{4,0}$ interpolated [*vs*]{} $g$ and $D$ provide a highly accurate uniform approximation of the eigenfunctions of the Quartic Radial Anharmonic Oscillator, while the respective eigenvalues are given by the ratio of two integrals with integrands proportional to the Approximants.
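The leading strong-coupling coefficient $\tilde{{\varepsilon}}_0$ reported in the tables admits a simple independent cross-check: for $2M=\hbar=1$ it is an eigenvalue of $-d^2/dw^2+w^4$ — the even ground state for $D=1$ and, since the $s$-wave reduction at $D=3$ imposes a node at the origin, the lowest odd state for $D=3$. A minimal finite-difference sketch (the grid half-width and size below are illustrative choices, not taken from the text):

```python
import numpy as np

# -d^2/dw^2 + w^4 on [-L, L] with Dirichlet walls; for L large enough the
# two lowest eigenvalues approximate the even (D=1) and lowest odd
# (D=3, s-wave) states of the pure quartic oscillator (hbar = 2M = 1).
L, N = 6.0, 1500                 # illustrative half-width and grid size
w = np.linspace(-L, L, N)
h = w[1] - w[0]
H = (np.diag(2.0 / h**2 + w**4)
     + np.diag(-np.ones(N - 1) / h**2, k=1)
     + np.diag(-np.ones(N - 1) / h**2, k=-1))
E0, E1 = np.linalg.eigvalsh(H)[:2]
print(E0, E1)   # approx 1.0604 and 3.7997, up to the O(h^2) discretization error
```

The two lowest eigenvalues reproduce the tabulated $\tilde{{\varepsilon}}_0^{(2)}$ for $D=1$ and $D=3$ to the accuracy of the $O(h^2)$ discretization.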
Sextic Anharmonic Oscillator
============================

In this Section we consider the sextic radial anharmonic oscillator with the two-term potential $$\label{potsextic}
V(r)\ = \ r^2\ +\ g^4\,r^6\ ,$$ see (\[potential\]) at $m=6$ and $a_3=a_4=a_5=0, a_6=1$, cf. (\[potquartic\]).

PT in the Weak Coupling Regime
------------------------------

In the Weak Coupling Regime the perturbative expansions for ${\varepsilon}$ and $\mathcal{Y}(v)$, developed in the RB equation (\[riccati-bloch\]), $$\label{riccati-bloch-6}
{\partial}_v\mathcal{Y}\ -\ \mathcal{Y}\left(\mathcal{Y} - \frac{D-1}{v}\right)\ =\ {\varepsilon}\left({\lambda}\right)\ -\ v^2\ \ -\ {\lambda}^4\,v^6 \quad , \quad {\partial}_v \equiv \frac{d}{dv}\ ,$$ where $v$ and ${\lambda}$ are defined in (\[change-v\]) and (\[effective\]), respectively, are of the form $$\label{encorrection-6}
{\varepsilon}\ =\ {\varepsilon}_0\ +\ {\varepsilon}_4\,{\lambda}^4\ +\ {\varepsilon}_8\,{\lambda}^8\ +\ \ldots\quad , \qquad {\varepsilon}_0\ =\ D\ ,$$ and $$\label{Yncorrection-6}
\mathcal{Y}(v)\ =\ \mathcal{Y}_0\ +\ \mathcal{Y}_4\,{\lambda}^4\ +\ \mathcal{Y}_8\,{\lambda}^8\ +\ \ldots \quad , \quad \mathcal{Y}_0\ =\ v \ ,$$ respectively. All coefficients in front of the terms ${\lambda}^{4n+1}$, ${\lambda}^{4n+2}$, ${\lambda}^{4n+3}$, $n=0,1,2,\ldots$ are equal to zero. Since the potential (\[potsextic\]) is even, PT can be constructed by algebraic means. The first non-vanishing corrections are $$\label{correction2-6}
{\varepsilon}_4\ =\ \frac{1}{8}\,D\,(D+2)\,(D+4)\quad ,\quad \mathcal{Y}_4(v)\ =\ \frac{1}{2}\,v^5\ +\ \frac{1}{4}(D+4)\,v^3\ +\ \frac{1}{8}\,(D+2)\,(D+4)\,v\ ,$$ while the next two corrections ${\varepsilon}_{8,12}$ and $\mathcal{Y}_{8,12}(v)$ are presented in Appendix B.
It can be shown that the correction $\mathcal{Y}_{4n}(v)$ has the form of an odd polynomial in $v$, $$\mathcal{Y}_{4n}(v)\ =\ v\,\sum_{k=0}^{2n}c_{2k}^{(4n)}v^{2(2n-k)}\ , \label{Yncorrection-sex}$$ with the coefficients $c_{2k}^{(4n)}$ being polynomials in $D$ of degree $k$, $$c^{(4n)}_{2k}\ =\ P^{(4n)}_{k}(D)\quad ,\qquad c_{4n}^{(4n)}\ =\ \frac{{\varepsilon}_{4n}}{D}\ , \label{propertiessextic}$$ cf. (\[Y2n\]), (\[Y2n-c\]). The correction ${\varepsilon}_{4n}$ has the factorization property $${\varepsilon}_{4n}(D)\ =\ D\,(D+2)\,(D+4)\,R_{2n-2}(D)\ , \label{factorizations}$$ where $R_{2n-2}(D)$ is a polynomial in $D$ of degree $(2n-2)$, cf. (\[factorizationq\]); in particular, $R_0=\frac{1}{8}$. Due to the invariance $v {\rightarrow}-v$ of the original equation (\[riccati-bloch-6\]) it is convenient to simplify it by introducing a new unknown function and changing the $v$-variable to its square, $$\mathcal{Y}\ =\ v\, \mathcal{\tilde Y}\quad \mbox{and}\quad {\rm v}\ =\ v^2\ .$$ As a result, (\[riccati-bloch-6\]) becomes $$\label{riccati-bloch-6-tilde}
2 {\rm v} {\partial}_{\rm v} \mathcal{\tilde Y}\ -\ \mathcal{\tilde Y}\left({\rm v} \mathcal{\tilde Y} - D\right)\ =\ {\varepsilon}\left({\lambda}\right)\ -\ {\rm v}\ \ -\ {\lambda}^4\,{\rm v}^3 \quad , \quad {\partial}_{\rm v} \equiv \frac{d}{d{\rm v}}\ .$$ This is a convenient form of the RB equation to carry out the PT consideration. In particular, the first correction (\[correction2-6\]) in the expansion of $\mathcal{\tilde Y}$ becomes a second degree polynomial in ${\rm v}$, $$\mathcal{\tilde Y}_4({\rm v})\ =\ \frac{1}{2}\,{\rm v}^2\ +\ \frac{D+4}{4}\,{\rm v}\ +\ \frac{(D+2)\,(D+4)}{8}\,\ ,$$ and, in general, $\mathcal{\tilde Y}_{4n}({\rm v})$ is a polynomial in ${\rm v}$ of degree $(2n)$, see (\[Y2n\]). Corrections $\mathcal{\tilde Y}_{4,6}$ are presented in Appendix B.
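The first correction (\[correction2-6\]) can be checked directly by inserting the truncated expansions (\[encorrection-6\]), (\[Yncorrection-6\]) into the RB equation (\[riccati-bloch-6\]) and collecting powers of ${\lambda}^4$; a short sympy verification (a sketch, with ad hoc symbol names):

```python
import sympy as sp

v, D, lam, e4 = sp.symbols('v D lam e4')
Y = v + lam**4 * (v**5/2 + (D + 4)*v**3/4 + (D + 2)*(D + 4)*v/8)  # Y_0 + lam^4 Y_4
eps = D + e4*lam**4                                               # eps_0 = D
# residual of the RB equation, expanded in powers of lam
res = sp.expand(sp.diff(Y, v) - Y*(Y - (D - 1)/v) - (eps - v**2 - lam**4*v**6))
assert res.coeff(lam, 0) == 0                 # order lam^0 is satisfied exactly
sol = sp.solve(res.coeff(lam, 4), e4)[0]      # order lam^4 fixes eps_4
assert sp.simplify(sol - D*(D + 2)*(D + 4)/8) == 0
```

The $v$-dependent terms cancel identically at order ${\lambda}^4$, leaving exactly ${\varepsilon}_4 = \frac{1}{8}D(D+2)(D+4)$.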
From (\[propertiessextic\]) one can see that all corrections ${\varepsilon}_{4n}$ vanish at $D=0, -2, -4$; hence, their formal sum results in ${\varepsilon}=0, -2, -4$, respectively. In the case $D=0$ the radial Schrödinger equation takes the form $$\label{D=0-s}
-\frac{\hbar^2}{2M}\left(\frac{d^2\Psi(r)}{dr^2}\ -\ \frac{1}{r}\frac{d\Psi(r)}{dr}\right)\ +\ (r^2\ +\ g^4\,r^6)\,\Psi(r)\ =\ 0\ ,$$ cf. (\[D=0-q\]). Its formal solution, cf. [@DOLGOVPOPOV1979], is given in terms of the parabolic cylinder functions [@abramowitz+stegun] (also known as Weber functions); it reads $$\label{D=0-s-psi}
\Psi\ =\ C_1\,D_{\nu_{-}}({\lambda}v^2)\ +\ C_2\,D_{\nu_{+}}\left(i\,{\lambda}v^2\right)\ ,\quad\quad\quad \nu_{\pm}\ =\ -\frac{1}{2}\ \pm\ \frac{1}{4{\lambda}^2}\ ,$$ if written in $v$ and ${\lambda}$, see (\[change-v\]) and (\[effective\]), respectively, cf. (\[D=0-q-psi\]). It has the meaning of the zero mode of the Schrödinger operator at $D=0$. The function (\[D=0-s-psi\]) cannot be made normalizable by any choice of the constants $C_1$ and $C_2$. Hence, the Schrödinger operator at $D=0$ for the sextic potential (\[potsextic\]) has no zero mode in the Hilbert space. It complements the similar statement made for the quartic potential (\[potquartic\]). One can conjecture that a zero mode in the Hilbert space is absent for the Schrödinger operator with anharmonicity $r^{2m}$ at $D=0$. Like in the quartic oscillator case, the assumption that $E(D=0)=0$ is incorrect: a non-perturbative contribution in ${\lambda}$ (or $g$) to the energy should be present at $D=0$. Note that even though at $D=-2$ and $D=-4$ all corrections ${\varepsilon}_{4n}$ vanish and we formally have ${\varepsilon}=-2$ and ${\varepsilon}=-4$, respectively, no exact solutions have been found for the corresponding radial Schrödinger equation. In Appendix \[appendix:C\] some general results are presented for the two-term potentials.
Generating Functions
--------------------

For the sextic anharmonic oscillator, using the GB equation (\[Bloch\]), $${\lambda}^2\,{\partial}_u\mathcal{Z}\ -\ \mathcal{Z}\left(\mathcal{Z} - \frac{{\lambda}^2(D-1)}{u}\right)\ =\ {\lambda}^2\,{\varepsilon}({\lambda})\ -\ u^2 - u^6 \quad , \quad {\partial}_u\equiv\frac{d}{du}\ , \label{Bloch-6}$$ the expansion of $\mathcal{Z}(u)$ in generating functions, $$\label{expansionZs}
\mathcal{Z}(u)\ =\ \mathcal{Z}_0(u)\ +\ \mathcal{Z}_2(u)\,{\lambda}^2\ +\ \mathcal{Z}_4(u)\,{\lambda}^4\ +\ \ldots\ ,$$ can be constructed, where the reduced energy expansion is given by (\[encorrection-6\]), $${\varepsilon}\ =\ {\varepsilon}_0\ +\ {\varepsilon}_4\,{\lambda}^4\ +\ {\varepsilon}_8\,{\lambda}^8\ +\ \ldots\quad , \qquad {\varepsilon}_0\ =\ D\ .$$ Interestingly, (\[expansionZs\]) has the same structure as the expansion for the quartic anharmonic case: all generating functions $\mathcal{Z}_{2k+1}(u)$, $k=1,2,...$ of odd orders ${\lambda}^{2k+1}$ are absent in the expansion, cf. (\[expansionZq\]). It contrasts with the expansion of $\mathcal{Y}(v; {\lambda})$ and ${\varepsilon}({\lambda})$, in which only the powers ${\lambda}^{4n}, n=0,1,2, \ldots$ are present. In fact, for any even radial anharmonic potential, $V(r)=V(-r)$, $\mathcal{Z}(u)$ is written in terms of generating functions as an expansion in powers of ${\lambda}^2$. As a result, there are two different families of generating functions, $$\mathcal{Z}_{4k}(u)\ =\ u\,\sum_{n=k}^{\infty}c_{4k}^{(4n)}u^{4(n-k)}\ ,$$ and $$\mathcal{Z}_{4k+2}(u)\ =\ u\,\sum_{n=k+1}^{\infty}c_{4k+2}^{(4n)}u^{4(n-k)-2}\ ,$$ which occur in correspondence with ${\varepsilon}_{4k} \neq 0$ and ${\varepsilon}_{4k+2}=0$, respectively.
Following (\[Yncorrection-sex\]) and (\[propertiessextic\]) it is easy to see that for both families the generating function $\mathcal{Z}_{2p}(u)$ is a polynomial in $D$ of degree $p$, $$\mathcal{Z}_{2p}(u)\ =\ u\sum_{n=0}^{p}f^{(p)}_n(u^2)\,D^n\ ,$$ where $f_n^{(p)}(u^2)$, $n=0,1,\ldots,p$ are some real functions. The first two terms in the expansion (\[expansionZs\]) are $$\begin{aligned}
\label{1sextic}
\mathcal{Z}_0(u)&\ =\ u\,\sqrt{1+u^4}\ ,\\
\mathcal{Z}_2(u)&\ =\ \frac{2 u^4+D\left( 1+u^4- \sqrt{1+u^4}\right)}{2 u \left(1+u^4\right)}\ . \label{2sextic}\end{aligned}$$ Due to the invariance $u {\rightarrow}-u$ one can simplify the non-linear equation (\[Bloch-6\]) by introducing $$\mathcal{Z}\ =\ u \mathcal{\tilde Z}\quad \mbox{and}\quad {\rm u}\ =\ u^2\ .$$ Finally, (\[Bloch-6\]) is reduced to $$2 {\lambda}^2\,{\rm u}\,{\partial}_{\rm u} \mathcal{\tilde Z}\ -\ \mathcal{\tilde Z}\left({\rm u} \mathcal{\tilde Z} - {{\lambda}^2\,D}\right)\ =\ {\lambda}^2\,{\varepsilon}({\lambda})\ -\ {\rm u} \ -\ {\rm u}^3 \quad , \quad {\partial}_{\rm u}\equiv\frac{d}{d{\rm u}}\ . \label{Bloch-6-tilde}$$ In this case the first two terms in the expansion (\[expansionZs\]) simplify, $$\begin{aligned}
\label{1sextic-tilde}
\mathcal{\tilde Z}_0({\rm u})&\ =\ \sqrt{1+{\rm u}^2}\ ,\\
\mathcal{\tilde Z}_2({\rm u})&\ =\ \frac{2 {\rm u}^2+D\left( 1+{\rm u}^2- \sqrt{1+{\rm u}^2}\right)}{2 {\rm u} \left(1+{\rm u}^2\right)}\ , \label{2sextic-tilde}\end{aligned}$$ cf. (\[1sextic\]), (\[2sextic\]). The asymptotic behavior at large $u$ of $\mathcal{Z}_{2p}(u)$ is related to the expansion of $y$ at large $r$.
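The expressions (\[1sextic\]) and (\[2sextic\]) can be verified order by order in ${\lambda}^2$: at order ${\lambda}^0$ the GB equation (\[Bloch-6\]) gives $\mathcal{Z}_0^2 = u^2 + u^6$, while collecting order ${\lambda}^2$ determines $\mathcal{Z}_2$ algebraically. A sympy check (a sketch, not part of the derivation):

```python
import sympy as sp

u, D = sp.symbols('u D', positive=True)
Z0 = u*sp.sqrt(1 + u**4)
assert sp.simplify(Z0**2 - (u**2 + u**6)) == 0          # order lam^0
# order lam^2:  Z0' - 2 Z0 Z2 + (D-1) Z0/u = eps_0 = D  =>  solve for Z2
Z2 = sp.simplify((sp.diff(Z0, u) + (D - 1)*Z0/u - D)/(2*Z0))
Z2_paper = (2*u**4 + D*(1 + u**4 - sp.sqrt(1 + u**4)))/(2*u*(1 + u**4))
assert sp.simplify(Z2 - Z2_paper) == 0
```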
For the sextic anharmonic oscillator, the expansion of $y$ at large $r$ and fixed (effective) coupling constant $g(\lambda)$ is rewritten conveniently in variable $v$, see (\[change-v\]), $$\label{asymptoticsextic} y\ =\ (2M\hbar^2)^{\frac{1}{4}}\left({\lambda}^2 v^3\ +\ \frac{(D+2){\lambda}^2+1}{2{\lambda}^2}v^{-1} -\ \frac{{\varepsilon}}{2{\lambda}^2}v^{-3}\ +\ \ldots\right)\ ,\quad\quad v{\rightarrow}\infty\ .$$ The first two terms of this expansion are ${\varepsilon}$-independent, while the first term is also $D$-independent. Following an analogous procedure to that used for the quartic oscillator, we can transform the expansion of $\mathcal{Z}_{2p}(u)$ at large $u$ into an expansion at large $v$ via the connection between the classical and quantum coordinate (\[u vs v\]). The first two terms in (\[expansionZs\]) expanded at large $v$ are $$\left(\frac{2M}{g^2}\right)^{1/2} \mathcal{Z}_0({\lambda}v)\ =\ (2M\hbar^2)^{\frac{1}{4}}\left({\lambda}^2 v^3\ +\ \frac{1}{2{\lambda}^2}v^{-1}\ -\ \frac{1}{8{\lambda}^6}v^{-5}\ +\ \ldots \right)\ , \quad\quad v{\rightarrow}\infty\ ,$$ and $$\left(\frac{2M}{g^2}\right)^{1/2} {\lambda}^2 \mathcal{Z}_2({\lambda}v)\ =\ (2M\hbar^2)^{\frac{1}{4}}\left(\frac{D+2}{2}v^{-1}\ -\ \frac{D}{2{\lambda}^2}v^{-3}\ -\ \frac{1}{{\lambda}^{4}}v^{-5}\ +\ \ldots \right)\ ,\quad\quad v{\rightarrow}\infty\ ,$$ see (\[change-to-Z\]). We explicitly observe that the sum $(\frac{2M}{g^2})^{1/2}(\mathcal{Z}_0({\lambda}v)+{\lambda}^2 \mathcal{Z}_2({\lambda}v))$ at large $v$ reproduces exactly the first two terms in the expansion (\[asymptoticsextic\]). 
However, all higher generating functions, $\mathcal{Z}_4({\lambda}v)$, $\mathcal{Z}_6({\lambda}v), \ldots$ contribute to the same order $O(v^{-3})$ at large $v$, $$\left(\frac{2M}{g^2}\right)^{1/2} {\lambda}^{2p} \mathcal{Z}_{2p}({\lambda}v)\ =\ (2M\hbar^2)^{\frac{1}{4}}\left(-\frac{{\varepsilon}_{4p-4}{\lambda}^{4p-6}}{2}v^{-3}\ +\ \ldots\right)\ , \quad v {\rightarrow}\infty \ ,$$ where ${\varepsilon}_{4p-4}$ is the energy correction of order ${\lambda}^{4p-4}$. Therefore, no matter how many generating functions are considered in the expansion $(\frac{2M}{g^2})^{1/2}(\mathcal{Z}_0({\lambda}v)+{\lambda}^2 \mathcal{Z}_2({\lambda}v)+...)$ at large $v$, the ${\varepsilon}$-dependent coefficient in front of the term of order $O(v^{-3})$ is never reproduced exactly. A similar situation occurred in the quartic case.

The Approximant and Variational Calculations
--------------------------------------------

From now on we set again $\hbar=1$ and $M=1/2$; consequently, $v=r$ and ${\varepsilon}=E$. Following formulas (\[1sextic\]) and (\[2sextic\]), the first two terms of the expansion of the phase (\[phase\]) can be calculated, $$\begin{aligned}
G_0(r;g) & \ =\ \frac{r^2}{4}\, \sqrt{1+g^4r^4}\ +\ \frac{1}{4g^2}\log\left[g^2r^2+\sqrt{1+g^4r^4}\right]\ , \label{firstr6}\end{aligned}$$ and $$\begin{aligned}
g^2\,G_2(r;g)&\ =\ \frac{1}{4}\log\left[1+g^4r^4\right]\ +\ \frac{D}{4}\log\left[1+\sqrt{1+g^4r^4}\right]\ , \label{secondr6}\end{aligned}$$ cf. (\[firstr4\]). The next two generating functions $G_4(r;g)$ and $G_6(r;g)$ can also be calculated, see Appendix \[appendix:C\]. Keeping the expressions for $G_0(r;g)$ and $G_2(r;g)$ in mind, we can proceed to construct the Approximant $\Psi_{(0,0)}^{(t)}$.
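As a consistency check, with $\hbar=1$, $M=1/2$ (so that ${\lambda}=g$ and $v=r$) the derivative of $g^2 G_2$ in (\[secondr6\]) should reproduce the generating-function contribution $(2M/g^2)^{1/2}{\lambda}^2\mathcal{Z}_2({\lambda}v) = g\,\mathcal{Z}_2(gr)$ of the previous subsection; sympy confirms the identity:

```python
import sympy as sp

r, g, D, u = sp.symbols('r g D u', positive=True)
S = sp.sqrt(1 + g**4*r**4)
G2g2 = sp.log(1 + g**4*r**4)/4 + D*sp.log(1 + S)/4               # g^2 G_2(r;g)
Z2 = (2*u**4 + D*(1 + u**4 - sp.sqrt(1 + u**4)))/(2*u*(1 + u**4))  # (2sextic)
residual = sp.simplify(sp.diff(G2g2, r) - g*Z2.subs(u, g*r))
assert residual == 0
```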
Following the prescription (\[generalrecipe\]), the exponential phase of the Approximant (the Phase Approximant) should have the form $$\Phi_t\ =\ \dfrac{\tilde{a}_0\ +\ \tilde{a}_2\,r^2\ +\ \tilde{a}_4\,g^2\,r^4\ +\ \tilde{a}_6\,g^4\,r^6} {\sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4\,r^4}}\ +$$ $$\frac{1}{4g^2}\,\log\left[\tilde{c}_2\, g^2\,r^2\ +\ \sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4\,r^4}\right]\ +$$ $$\dfrac{1}{4}\log\left[1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4 r^4\right]\ +\ \dfrac{D}{4}\log\left[1+\sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6g^4r^4}\right]\ , \label{sextictrial}$$ where $\tilde{a}_{0,2,4,6}$, $\tilde{b}_{4,6},\ {\tilde c}_2$ are seven free parameters. All three logarithmic terms inserted in (\[sextictrial\]) appear as a *minimal* modification of those that come from generating functions $G_{0,2}(r;g)$. For arbitrary $D>1$ the Phase Approximant (\[sextictrial\]) can be transformed to the Approximant of the ground state function, $$\Psi_{(0,0)}^{(t)}\ =\ \frac{1}{\left(1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4r^4\right)^{1/4} \left(1\ +\ \sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6 g^4\,r^4}\right)^{D/4}}\ \times$$ $$\frac{1}{ \left(\tilde{c}_2 g^2r^2\ +\ \sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4\,r^4}\right)^{1/{4g^2}}}\ \times$$ $$\label{approximantsextic} \exp \left(-\dfrac{\tilde{a}_0\ +\ \tilde{a}_2\,r^2\ +\ \tilde{a}_4\,g^2\,r^4\ +\ \tilde{a}_6\,g^4\,r^6} {\sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4\,r^4}}\right)\ .$$ This is the central formula for this Section. Later it will be shown that it leads to a highly accurate uniform approximation of the exact ground state eigenfunction and also to a highly accurate variational energy for the ground state. Following the general prescription, the constraint $$\label{a6} \tilde{b}_6\ =\ 16\,\tilde{a}_6^{2}\ ,$$ cf. 
(\[a4\]), guarantees that the approximate phase $\Phi_{t}$ (\[sextictrial\]) reproduces exactly the dominant term $\sim r^4$ in the expansion (\[asymptoticsextic\]). It is worth mentioning that relaxing this constraint, keeping $\tilde{a}_6$ and $\tilde{b}_6$ free, demonstrates that as a result of the minimization this constraint is restored with very high accuracy. Furthermore, there exists another constraint, $$\label{a4-6}
\tilde{b}_4\ =\ 32\,\tilde{a}_6\,\tilde{a}_4\ ,$$ cf. (\[a2\]), which corresponds to the absence of the subdominant term $\sim r^2$ in the expansion of the trial phase $\Phi_{t}$ (\[sextictrial\]) at $r {\rightarrow}\infty$, see (\[asymptoticsextic\]). Again, relaxing the condition (\[a4-6\]), keeping the parameters $\{\tilde{a}_6,\tilde{a}_4, \tilde{b}_4\}$ free and minimizing the energy functional, one can see that the resulting parameters obey the condition (\[a4-6\]) with very high accuracy. Thus, the Approximant (\[approximantsextic\]) finally contains 5 free parameters, $\{\tilde{a}_0,\tilde{a}_2,\tilde{a}_4,\tilde{a}_6,\tilde{c}_2\}$. We must emphasize that the choice of parameters $$\label{reproduction2}
\tilde{a}_0\ =\ 0\ ,\qquad\tilde{a}_2\ =\ \frac{1}{4}\ , \qquad\tilde{a}_4\ =\ 0\ ,\qquad\tilde{b}_4\ =0\ ,\qquad \tilde{a}_6\ =\ \frac{1}{4}\ ,\qquad \tilde{c}_2\ =\ 1\ ,$$ in the (phase) Approximant $\Phi_t$ (\[sextictrial\]) allows us to reproduce exactly the first two terms in the expansion (\[expansionZs\]). Note that, contrary to the quartic radial anharmonic oscillator case (\[potquartic\]), no parameter in (\[reproduction2\]) depends explicitly on the coupling constant $g$, see (\[reproduction1\]). However, this choice of parameters is not optimal from the point of view of the variational calculation of the energy.
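The effect of the two constraints can be made explicit: with $\tilde{b}_6=16\,\tilde{a}_6^2$ and $\tilde{b}_4=32\,\tilde{a}_6\tilde{a}_4$ imposed, the rational part of the phase (\[sextictrial\]) behaves as $g^2r^4/4 + O(1)$ at large $r$ — the dominant $r^4$ term acquires the coefficient $g^2/4$ dictated by (\[asymptoticsextic\]) (for $\hbar=1$, $M=1/2$) independently of $\tilde{a}_6$, while the subdominant $r^2$ term cancels identically. A sympy sketch:

```python
import sympy as sp

r, g, a0, a2, a4, a6 = sp.symbols('r g a0 a2 a4 a6', positive=True)
b6 = 16*a6**2          # constraint fixing the dominant r^4 term
b4 = 32*a6*a4          # constraint removing the subdominant r^2 term
phase = (a0 + a2*r**2 + a4*g**2*r**4 + a6*g**4*r**6) / \
        sp.sqrt(1 + b4*g**2*r**2 + b6*g**4*r**4)
assert sp.simplify(sp.limit(phase/r**4, r, sp.oo) - g**2/4) == 0   # dominant term
assert sp.limit((phase - g**2*r**4/4)/r**2, r, sp.oo) == 0         # r^2 cancels
```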
Minimizing the expectation value of the radial Schrödinger operator (\[radialop\]), in other words the variational energy, for different values of $g^4$ and $D$, one can see that all five parameters $\{\tilde{a}_0,\tilde{a}_2,\tilde{a}_4,\tilde{a}_6,\tilde{c}_2\}$ are smooth, slowly-changing functions of $g^4$ and $D$. Plots of the variational parameters for the ground state, as functions of $g^4$ for fixed $D$, are shown in Fig. \[fig:varparfixeds\]. In turn, Fig. \[fig:varparfixedsD\] presents the plots of the parameters as functions of $D$ for fixed $g^4$. A similar behavior of the parameters occurs for the excited states $(0,1), (0,2), (1,0)$. [0.47]{} ![Ground state $(0,0)$: Variational parameters ${\tilde a}_0\ (a)$, ${\tilde a}_2\ (b)$, ${\tilde a}_4\ (c)$, ${\tilde b}_4\ (d)$, ${\tilde b}_6\ (e)$, ${\tilde c}_2\ (f)$ [*vs*]{} the coupling constant $g^4$ in domain $g^4 \in [0, 10]$ for $D=1, 2, 3,6$ []{data-label="fig:varparfixeds"}](a0_s.eps "fig:"){width="\linewidth"} [0.47]{} ![Ground state $(0,0)$: Variational parameters ${\tilde a}_0\ (a)$, ${\tilde a}_2\ (b)$, ${\tilde a}_4\ (c)$, ${\tilde b}_4\ (d)$, ${\tilde b}_6\ (e)$, ${\tilde c}_2\ (f)$ [*vs*]{} the coupling constant $g^4$ in domain $g^4 \in [0, 10]$ for $D=1, 2, 3,6$ []{data-label="fig:varparfixeds"}](a2_s.eps "fig:"){width="\linewidth"} [0.47]{} ![Ground state $(0,0)$: Variational parameters ${\tilde a}_0\ (a)$, ${\tilde a}_2\ (b)$, ${\tilde a}_4\ (c)$, ${\tilde b}_4\ (d)$, ${\tilde b}_6\ (e)$, ${\tilde c}_2\ (f)$ [*vs*]{} the coupling constant $g^4$ in domain $g^4 \in [0, 10]$ for $D=1, 2, 3,6$ []{data-label="fig:varparfixeds"}](a4_s.eps "fig:"){width="\linewidth"} [0.47]{} ![Ground state $(0,0)$: Variational parameters ${\tilde a}_0\ (a)$, ${\tilde a}_2\ (b)$, ${\tilde a}_4\ (c)$, ${\tilde b}_4\ (d)$, ${\tilde b}_6\ (e)$, ${\tilde c}_2\ (f)$ [*vs*]{} the coupling constant $g^4$ in domain $g^4 \in [0, 10]$ for $D=1, 2, 3,6$ []{data-label="fig:varparfixeds"}](b4_s.eps
"fig:"){width="\linewidth"} [0.47]{} ![Ground state $(0,0)$: Variational parameters ${\tilde a}_0\ (a)$, ${\tilde a}_2\ (b)$, ${\tilde a}_4\ (c)$, ${\tilde b}_4\ (d)$, ${\tilde b}_6\ (e)$, ${\tilde c}_2\ (f)$ [*vs*]{} the coupling constant $g^4$ in domain $g^4 \in [0, 10]$ for $D=1, 2, 3,6$ []{data-label="fig:varparfixeds"}](b6_s.eps "fig:"){width="\linewidth"} [0.47]{} ![Ground state $(0,0)$: Variational parameters ${\tilde a}_0\ (a)$, ${\tilde a}_2\ (b)$, ${\tilde a}_4\ (c)$, ${\tilde b}_4\ (d)$, ${\tilde b}_6\ (e)$, ${\tilde c}_2\ (f)$ [*vs*]{} the coupling constant $g^4$ in domain $g^4 \in [0, 10]$ for $D=1, 2, 3,6$ []{data-label="fig:varparfixeds"}](c2_s.eps "fig:"){width="\linewidth"} [0.47]{} ![Ground state $(0,0)$: Variational parameters ${\tilde a}_0\ (a)$, ${\tilde a}_2\ (b)$, ${\tilde a}_4\ (c)$, ${\tilde b}_4\ (d)$, ${\tilde b}_6\ (e)$, ${\tilde c}_2\ (f)$ [*vs*]{} dimension $D$ for $g^4=0.1, 1.0, 10.0$ []{data-label="fig:varparfixedsD"}](a0_gfixed_s.eps "fig:"){width="\linewidth"} [0.47]{} ![Ground state $(0,0)$: Variational parameters ${\tilde a}_0\ (a)$, ${\tilde a}_2\ (b)$, ${\tilde a}_4\ (c)$, ${\tilde b}_4\ (d)$, ${\tilde b}_6\ (e)$, ${\tilde c}_2\ (f)$ [*vs*]{} dimension $D$ for $g^4=0.1, 1.0, 10.0$ []{data-label="fig:varparfixedsD"}](a2_gfixed_s.eps "fig:"){width="\linewidth"} [0.47]{} ![Ground state $(0,0)$: Variational parameters ${\tilde a}_0\ (a)$, ${\tilde a}_2\ (b)$, ${\tilde a}_4\ (c)$, ${\tilde b}_4\ (d)$, ${\tilde b}_6\ (e)$, ${\tilde c}_2\ (f)$ [*vs*]{} dimension $D$ for $g^4=0.1, 1.0, 10.0$ []{data-label="fig:varparfixedsD"}](a4_gfixed_s.eps "fig:"){width="\linewidth"} [0.47]{} ![Ground state $(0,0)$: Variational parameters ${\tilde a}_0\ (a)$, ${\tilde a}_2\ (b)$, ${\tilde a}_4\ (c)$, ${\tilde b}_4\ (d)$, ${\tilde b}_6\ (e)$, ${\tilde c}_2\ (f)$ [*vs*]{} dimension $D$ for $g^4=0.1, 1.0, 10.0$ []{data-label="fig:varparfixedsD"}](b4_gfixed_s.eps "fig:"){width="\linewidth"} [0.47]{} ![Ground state $(0,0)$: Variational parameters 
${\tilde a}_0\ (a)$, ${\tilde a}_2\ (b)$, ${\tilde a}_4\ (c)$, ${\tilde b}_4\ (d)$, ${\tilde b}_6\ (e)$, ${\tilde c}_2\ (f)$ [*vs*]{} dimension $D$ for $g^4=0.1, 1.0, 10.0$ []{data-label="fig:varparfixedsD"}](b6_gfixed_s.eps "fig:"){width="\linewidth"} [0.47]{} ![Ground state $(0,0)$: Variational parameters ${\tilde a}_0\ (a)$, ${\tilde a}_2\ (b)$, ${\tilde a}_4\ (c)$, ${\tilde b}_4\ (d)$, ${\tilde b}_6\ (e)$, ${\tilde c}_2\ (f)$ [*vs*]{} dimension $D$ for $g^4=0.1, 1.0, 10.0$ []{data-label="fig:varparfixedsD"}](c2_gfixed_s.eps "fig:"){width="\linewidth"} As was indicated in I, the Approximant of the ground state function $\Psi_{(0,0)}^{(t)}$ is a building block for constructing the Approximants of the excited states. In particular, for $D=1$ in the case of the $n_r$-th excited state (at $r \geq 0$, see below), the Approximant has the form $$\Psi_{(n_r, p)}^{(t)}\ =\ \frac{r^p\,P_{n_r}(r^2)}{\left(1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4r^4\right)^{1/4} \left(1\ +\ \sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6 g^4\,r^4}\right)^{D/4}}\ \times$$ $$\frac{1}{ \left(\tilde{c}_2 g^2r^2\ +\ \sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4\,r^4}\right)^{1/{4g^2}}}\ \times$$ $$\label{1dsextic}
\exp \left(-\dfrac{\tilde{a}_0\ +\ \tilde{a}_2\,r^2\ +\ \tilde{a}_4\,g^2\,r^4\ +\ \tilde{a}_6\,g^4\,r^6} {\sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4\,r^4}}\right)\ ,$$ where $P_{n_r}(r^2)$ is a polynomial of degree $n_r$ in $r^2$ with real coefficients and all real roots, with $P_{n_r}(0)=1$ chosen for normalization; here $p=0,1$ and $(-)^p=\pm$ has the meaning of parity w.r.t. the reflection $(r {\rightarrow}-r)$. Similarly to the quartic case, at $D=1$ there are two possible domains for the Schrödinger operator: $r \in [0, \infty)$ (i) and $r \in (-\infty, +\infty)$ (ii). In the first domain (i) only the states of positive parity $p=0$ exist; we denote them $(n_r, 0)$, and thus the nodes of (\[1dsextic\]) at $r < 0$ are ignored.
Hence the state $(n_r, 0)$ is the $n_r$-th excited state. In the second domain (ii) there exist states of both positive and negative parity; we denote them $(n_r, p)$. The state $(n_r, p)$ corresponds to the $(2n_r+p)$-th excited state. It is evident that the energy of the state $(n_r, 0)$ for the first domain (i) coincides with the energy of the state $(n_r, 0)$ for the second domain (ii). Like in the quartic radial anharmonic case, it is easily demonstrated that the energies $E_{(n_r, p)}$ obey the following inequality $$E_{(n_r, 0)} < E_{(n_r, 1)} < E_{(n_r+1, 0)} \ ,$$ for any coupling constant. For fixed $n_r$, the $n_r$ free parameters of $P_{n_r}(r^2)$ are found by imposing the orthogonality constraints $$\label{constraint6}
(\Psi_{(n_r,p)}^{(t)},\Psi_{(k_r,p)}^{(t)})\ =\ 0\quad ,\qquad k_r=0,\ldots,(n_r-1)\ ,$$ see Section I for details, cf. (\[constraint1\]). As for $D>1$, the Approximant of the state $(n_r,\ell)$ reads $$\Psi_{(n_r,\ell)}^{(t)}\ =\ \frac{r^{\ell}P_{n_r}(r^2)}{\left(1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4r^4\right)^{1/4} \left(1\ +\ \sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6 g^4\,r^4}\right)^{D/4}}\ \times$$ $$\frac{1}{ \left(\tilde{c}_2 g^2r^2\ +\ \sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4\,r^4}\right)^{1/{4g^2}}}\ \times$$ $$\label{appsextic}
\exp \left(-\dfrac{\tilde{a}_0\ +\ \tilde{a}_2\,r^2\ +\ \tilde{a}_4\,g^2\,r^4\ +\ \tilde{a}_6\,g^4\,r^6}{\sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4\,r^4}}\right)\ .$$ Like in the quartic anharmonic radial oscillator case, $P_{n_r}(r^2)$ is a polynomial of degree $n_r$ with $n_r$ positive roots. Needless to say, the procedure to fix the values of its coefficients is identical to that followed for the quartic case via the orthogonality constraints. Once they are imposed, any Approximant depends only on five free non-linear parameters $\{\tilde{a}_0,\tilde{a}_2,\tilde{a}_4,\tilde{a}_6,\tilde{c}_2\}$, regardless of quantum numbers or dimension.
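To illustrate the constraint (\[constraint6\]) for $n_r=1$: writing $P_1(r^2)=1-c\,r^2$, the single coefficient $c$ is fixed by one overlap integral in the radial measure $r^{D-1}dr$, and the node estimate is $r_0 \approx 1/\sqrt{c}$. A numerical sketch with mpmath, using for definiteness the non-optimal parameter values (\[reproduction2\]) at $g^4=1$, $D=3$ (so the resulting node is only indicative, not a value quoted in the tables):

```python
import mpmath as mp

D = 3
g2 = 1.0                                          # g^2; i.e. coupling g^4 = 1
a0, a2, a4, a6, c2 = 0.0, 0.25, 0.0, 0.25, 1.0    # the values (reproduction2)
b6, b4 = 16*a6**2, 32*a6*a4

def psi0(r):
    """Ground-state Approximant (approximantsextic) with the above parameters."""
    q = 1 + b4*g2*r**2 + b6*g2**2*r**4
    s = mp.sqrt(q)
    pre = q**(-0.25) * (1 + s)**(-D/4.0) * (c2*g2*r**2 + s)**(-1.0/(4*g2))
    return pre * mp.exp(-(a0 + a2*r**2 + a4*g2*r**4 + a6*g2**2*r**6)/s)

# (Psi_1, Psi_0) = 0 with Psi_1 = (1 - c r^2) Psi_0 fixes c = <1>/<r^2>:
m0 = mp.quad(lambda r: r**(D - 1)*psi0(r)**2, [0, mp.inf])
m2 = mp.quad(lambda r: r**(D + 1)*psi0(r)**2, [0, mp.inf])
c = m0/m2
r_node = mp.sqrt(1/c)        # indicative estimate of the radial node r_0
```

With these illustrative parameters the node comes out of order unity; the optimized parameters shift it towards the variational values of $r_0$.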
Their values are fixed when we take either $\Psi_{(n_r,p)}^{(t)}$ or $\Psi_{(n_r,\ell)}^{(t)}$ as the entry in variational calculations. In Tables \[Sextic1\] - \[Sextic4\] we present the calculations of the variational energy for the first four low-lying states with quantum numbers $(0,0), (1,0), (0,1)$ and $(0,2)$ for different values of $D>1$ and $g^4$. For $D=1$ the states $(0,0), (1,0), (0,1)$ are studied for the operator defined in the domain (ii). The variational energy $E_0^{(1)}$, the correction $E_2$, and the corrected value $E_0^{(2)}$ are presented for some of these states, see (\[firsta\]) and (\[seconda\]). The variational energy $E_0^{(1)}$ provides absolute accuracy of order $10^{-9} - 10^{-13}$, as confirmed by calculating the order of the correction $E_2$. Using the LMM, we independently evaluated the energies of these states. The same number of mesh points as in the quartic oscillator case was used for the sextic case, i.e. 50, 100 and 200 for $g^4=0.1, 1.0, 10.0$, respectively. For the energy, the LMM allows us to reach no fewer than 12 d.d., which coincide with $E_0^{(2)}$. In other words, all digits for $E_0^{(2)}$ printed in Tables \[Sextic1\] - \[Sextic4\] are exact. The next correction to the variational energy, $E_3$, confirms it: systematically $E_3 \lesssim 10^{-12}$ for any $D$ and $g^4$ considered. It indicates an extremely fast rate of convergence of (\[nth\]) as $n {\rightarrow}\infty$ when the trial function (\[1dsextic\]) or (\[appsextic\]) is taken as the zero approximation. The hierarchy of eigenstates for fixed integer $D > 1$ and positive $g^4$ is the same as for the cubic and quartic potentials: $(0,0)$, $(0,1)$, $(0,2)$, $(1,0)$. We conjecture that the same hierarchy should hold for any two-term potential. In contrast with the quartic oscillator case, the estimates of the energies of the low-lying states of the sextic radial anharmonic oscillator available in the literature are limited.
Most of the calculations are made for the one-dimensional case. In general, our results reproduce all known numerical ones that can be found for $D=1,2,3$, see e.g. [@WENIGER], [@Taseli2dr4], [@Meissner] and [@WITWIT2dr6]. At $D=6$ the calculations are carried out for the first time. The relative deviation of $\Psi_{(0,0)}^{(t)}$ from the exact (unknown) ground state eigenfunction $\Psi_{(0,0)}$ is estimated with the Non-Linearization procedure; it is bounded and very small, $$\label{deviationsextic}
\left|\frac{\Psi_{(0,0)}(r)-\Psi_{(0,0)}^{(t)}(r)}{\Psi_{(0,0)}^{(t)}(r)}\right| \lesssim 10^{-6}\ ,$$ in the whole range $r\in[0,\infty)$ at any integer $D$ and at any coupling constant $g^4$ which we considered. Thus, our Approximant leads to a locally accurate approximation of the exact wave function once the optimal parameters are chosen. A similar situation occurs for the excited states for different $D$ and $g^4$. For all $D$ and $g^4$, the correction $|y_1|$ to the logarithmic derivative of the ground state is a very small function in comparison with $|y_0|$ in the domain $0 \lesssim r \lesssim 1.7$, in which the dominant contribution to the integrals required by the variational method occurs. The first order correction $(-y_1)$ to the logarithmic derivative of the ground state function $y$ is a positive and bounded function in the above-mentioned domain in $r$ for all $D$ and $g^4$ we have studied. For example, for $g^4=1$ and $D=1,2,3,6$ the first correction $y_1$ has the upper bound $$\label{cases-6}
|y_1|_{max} \sim \begin{cases} 0.0078\ ,\qquad D=1 \\ 0.0065\ ,\qquad D=2 \\ 0.0048\ ,\qquad D=3 \\ 0.0031\ ,\qquad D=6 \\ \end{cases}$$ cf. (\[cases-4\]). This is a consequence of the fact that, by construction, the derivative $y_t=(\Phi_t)^{\prime}$ reproduces exactly the first two terms at large $v$ in the expansion (\[asymptoticsextic\]): $\sim v^3$ and $\sim 1/v$.
Moreover, the minimization of the variational energy leads to the approximate fulfillment of the condition (\[a4-6\]); hence, the coefficient in front of the term $\sim v$, which is absent in the expansion (\[asymptoticsextic\]), is very small. Correspondingly, $|y_1|$ grows at large $v$ at a very small rate. The boundedness of $y_1(r)$ at $r \geq 2$, which gives an essential contribution to the variational integrals, the small value of its maximum, and its slow growth at large $r$ imply that we deal with a smartly designed zero-order approximation $\Psi_{(0,0)}^{(t)}$. It leads, in the framework of the Non-Linearization Procedure, to a rapidly convergent iteration procedure for the energy and the wave function. In Figs. \[fig:D=1s\] - \[fig:D=3s\], $y_0$ and $y_1$ [*vs*]{} $r$ are presented for $g^4=1$ in the physical dimensions $D=1,2,3$. We emphasize that all curves in these figures are slow-changing [*vs*]{} $D$. Therefore, it is not a surprise that similar plots appear for $D=6$ (not shown) as well as for other values of $g > 0$. An analysis of these plots indicates that $(-y_1)^2$ is an extremely small function in comparison with $y_0$ in the domain $0 \leq r \lesssim 1.7$, that is, in the domain which provides the dominant contribution to the variational integrals. This is the real reason why the energy correction $E_2$ is extremely small, being of order $\sim 10^{-8}$, or sometimes even smaller, $\sim 10^{-10}$. A similar situation occurs for the phase (and its derivative) of the Approximants for the excited states.
[|c|ccc|ccc|]{} & &\ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$\ ------------------------------------------------------------------------ 0.1&1.109087078465&$1.20 \times 10^{-13}$&1.109087078465 & 2.307218600932&$7.04 \times 10^{-13}$&2.307218600931\ 1.0&1.435624619003&$3.22 \times 10^{-13}$&1.435624619003 & 3.121935474246&$9.81 \times 10^{-13}$&3.121935474246\ 10.0&2.205723269598&$3.22 \times 10^{-12}$&2.205723269595 & 4.936774524584&$1.72 \times 10^{-12}$&4.936774524582\ & &\ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$\ ------------------------------------------------------------------------ 0.1 & 3.596036921222 & $1.76 \times 10^{-12}$ & 3.596036921220 & 7.987905269800 & $7.11 \times 10^{-13}$ & 7.987905269799\ 1.0 & 5.033395937721 & $5.21 \times 10^{-12}$ & 5.033395937720 & 11.937202695862 & $9.62 \times 10^{-13}$ & 11.937202695862\ 10.0 & 8.114843118826 & $7.60 \times 10^{-12}$ & 8.114843118819 & 19.880256604739 & $3.12 \times 10^{-12}$ & 19.880256604736\ In a similar way as was discussed for the quartic radial anharmonic oscillator, the Approximant of the first radial excited state $\Psi_{(1,0)}^{(t)}$ provides an estimate of the position of the radial node once the orthogonality constraint with the Approximant of the ground state is imposed. For the sextic anharmonic potential the zero-order approximation $r_0^{(0)}$ provides no fewer than 5 correct d.d. The first-order correction $r_0^{(1)}$ contributes to the 6th d.d. Variational results are presented in Table \[Sextic4\]; they are compared with those obtained in the LMM with 50 mesh points for $g^4=0.1$, 100 mesh points for $g^4=1$, and 200 mesh points for $g^4=10$. It can be noted that the radial node $r_0$ grows with an increase of $D$ at fixed $g^4$, but decreases with an increase of $g^4$ at fixed $D$.
\[Sextic2\] [max width=]{} [|c|ccc|ccc|]{} & &\ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$\ ------------------------------------------------------------------------ 0.1 & 3.596036921295 & $7.50 \times 10^{-11}$ & 3.596036921220 & 4.974197493807 & $9.01 \times 10^{-11}$ & 4.974197493717\ 1.0 & 5.033395937795 & $7.52 \times 10^{-11}$ & 5.033395937720 & 7.149928601496 & $5.84 \times 10^{-11}$ & 7.149928601438\ 10.0 & 8.114843118966 & $1.48 \times 10^{-10}$ & 8.114843118818 & 11.688236034577 & $1.81 \times 10^{-10}$ & 11.688236034396\ & &\ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$\ ------------------------------------------------------------------------ 0.1 & 6.439143322388 & $6.64 \times 10^{-11}$ & 6.439143322321 & 11.324899788818 & $8.15 \times 10^{-10}$ & 11.324899788004\ 1.0 & 9.455535276950 & $1.09 \times 10^{-10}$ & 9.455535276841 & 17.387207808723 & $1.26 \times 10^{-9}$ & 17.387207807460\ 10.0 & 15.619579279334 & $5.05 \times 10^{-10}$ & 15.619579278830 & 29.302506554618 & $1.22 \times 10^{-9 }$ & 29.302506553402\ [max width=]{} [|c|ccc|ccc|]{} & &\ & & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$\ ------------------------------------------------------------------------ 0.1 & & 7.987905270111 & $3.12 \times 10^{-10}$ & 7.987905269799\ 1.0 & & 11.937202696127 & $2.66 \times 10^{-10}$ & 11.937202695862\ 10.0 & & 19.880256605756 & $1.02 \times 10^{-9 }$ & 19.880256604742\ & &\ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$ & $E_0^{(1)}$ & $-E_2$ & $E_0^{(2)}$\ ------------------------------------------------------------------------ 0.1 & 9.617462285440 & $1.50 \times 10^{-10}$ & 9.617462285290 & 14.962630328506 & $1.60 \times 10^{-10}$ & 14.962630328346\ 1.0 & 14.584132948883 & $3.15 \times 10^{-9}$ & 14.584132945729 & 23.431551835405 & $2.19 \times 10^{-9}$ & 23.431551833215\ 10.0 & 24.447468037325 & $5.42 \times 10^{-9}$ & 24.447468031906 & 39.815551142800 & $7.05 \times 10^{-9}$ & 39.815551135750\ [max width=]{} 
[|c|cc|cc|cc|]{} & & &\ & $E$ & $r_0$ & $E$ & $ r_0$ & $E$ & $r_0$\ ------------------------------------------------------------------------ 0.1 & 2 & 0052 & 21 & 7872 & 10 & 3484\ 1.0 & 3 & 1606 & 7 & 4964 & 37 & 6200\ 10.0 & 51 & 4526 & 604 & 2682 & 513 & 964\ ![Sextic oscillator at $D=1$: function $y_0=(\Phi_t)'$ (on left) and its first correction $y_1$ (on right) [*vs*]{} $r$ for $g^2=1$.[]{data-label="fig:D=1s"}](D1g1s.eps){width="99.00000%"} ![Sextic oscillator at $D=2$: function $y_0=(\Phi_t)'$ (on left) and its first correction $y_1$ (on right) [*vs*]{} $r$ for $g^2=1$.[]{data-label="fig:D=2s"}](D2g1s.eps){width="99.00000%"} ![Sextic oscillator at $D=3$: function $y_0=(\Phi_t)'$ (on left) and its first correction $y_1$ (on right) [*vs*]{} $r$ for $g^2=1$.[]{data-label="fig:D=3s"}](D3g1s.eps){width="99.00000%"} The Strong Coupling Expansion ----------------------------- In this section we calculate the first two terms in the strong coupling expansion (\[Scoupling\]). For the sextic anharmonic oscillator potential (\[potsextic\]) this (convergent) expansion has the form $$\label{sexticST} {\varepsilon}\ =\ g (\tilde{{\varepsilon}}_0\ +\ \tilde{{\varepsilon}}_6\,g^{-3}\ +\ \tilde{{\varepsilon}}_{12}\,g^{-6}\ +\ \ldots)\ ,$$ as long as $2M=\hbar=1$. Evidently, this expansion corresponds to PT in power of $\hat{{\lambda}}$ for the potential $$\label{sexticSTW} V(w)\ =\ w^6\ +\ \hat{{\lambda}}\,w^2\ ,\quad \hat{{\lambda}}\ =\ g^{-3}$$ in the Schrödinger equation defined in $w \in [0,\infty)$. We use the Approximant (\[appsextic\]) and develop the perturbation procedure already described in I and also in the previous Section to calculate $\tilde{{\varepsilon}}_0$ and $\tilde{{\varepsilon}}_6$. 
In Table \[sexticst1\] we present, for different $D$, the variational estimate for $\tilde{{\varepsilon}}_0$ (denoted by $\tilde{{\varepsilon}}_0^{(1)}$), the correction $\hat{{\varepsilon}}_2$ and the corrected value $\tilde{{\varepsilon}}_0^{(2)}$ calculated via the Non-Linearization Procedure. Using the LMM it was verified that $\tilde{{\varepsilon}}_0^{(2)}$ provides no less than 12 exact d.d. at every $D$ considered. Hence, the first 10 d.d. in the variational energy ${\varepsilon}_0^{(1)}$ printed in Table \[sexticst1\] are exact. In this case $\tilde{{\varepsilon}}_3 \sim 10^{-2}{\varepsilon}_2$, which indicates a high rate of convergence of the PT. Finally, in Table \[sexticst2\] we present the first two approximations of the coefficient in front of the subdominant term $\tilde{{\varepsilon}}_6$, see (\[sexticST\]). Comparing our results for $\tilde{{\varepsilon}}_0$ and $\tilde{{\varepsilon}}_6$ with those available in the literature, see e.g. [@Taseli2dr4] and [@Meissner], one can see that we reproduce them and often exceed them in accuracy.
[max width=]{} --------------------------------- -------------------------- --------------------------------- --------------------------------- -------------------------- --------------------------------- $\tilde{{\varepsilon}}_0^{(1)}$ $-\hat{{\varepsilon}}_2$ $\tilde{{\varepsilon}}_0^{(2)}$ $\tilde{{\varepsilon}}_0^{(1)}$ $-\hat{{\varepsilon}}_2$ $\tilde{{\varepsilon}}_0^{(2)}$ 1.144802453800 $3.21\times10^{-12}$ 1.144802453797 2.609388463259 $5.72\times10^{-12}$ 2.609388463253 \[4pt\] $\tilde{{\varepsilon}}_0^{(1)}$ $-\hat{{\varepsilon}}_2$ $\tilde{{\varepsilon}}_0^{(2)}$ $\tilde{{\varepsilon}}_0^{(1)}$ $-\hat{{\varepsilon}}_2$ $\tilde{{\varepsilon}}_0^{(2)}$ 4.338598711518 $4.73\times10^{-12}$ 4.338598711513 10.821985609895 $7.21\times10^{-12}$ 10.821985609888 \[4pt\] --------------------------------- -------------------------- --------------------------------- --------------------------------- -------------------------- --------------------------------- : Ground state $(0,0)$ energy $\tilde{{\varepsilon}}_0$ for the potential $W=r^6$ (see (\[sexticSTW\])) for $D=1,2,3,6$ found in PT, based on the Approximant $\Psi_{(0,0)}^{(t)}$ : $\tilde{{\varepsilon}}_0^{(1)}$ corresponds to the variational energy, $\hat{{\varepsilon}}_2$ is the second PT correction, $\tilde{{\varepsilon}}_0^{(2)}=\tilde{{\varepsilon}}_0^{(1)}+\hat{{\varepsilon}}_2$ is the corrected variational energy. 10 d.d. 
in $\tilde{{\varepsilon}}_0^{(2)}$ confirmed independently in LMM, see text.[]{data-label="sexticst1"} [max width=]{} --------------------------------- -------------------------------- --------------------------------- --------------------------------- -------------------------------- ----------------------------------- $\tilde{{\varepsilon}}_6^{(1)}$ $-\tilde{{\varepsilon}}_{6,1}$ $\tilde{{\varepsilon}}_6^{(2)}$ $\tilde{{\varepsilon}}_6^{(1)}$ $-\tilde{{\varepsilon}}_{6,1}$ $\tilde{{\varepsilon}}_{6}^{(2)}$ 0.307920304114 $3.83 \times 10^{-10}$ 0.307920303731 0.534591069789 $2.85 \times 10^{-10}$ 0.534591069504 \[4pt\] $\tilde{{\varepsilon}}_6^{(1)}$ $-\tilde{{\varepsilon}}_{6,1}$ $\tilde{{\varepsilon}}_6^{(2)}$ $\tilde{{\varepsilon}}_6^{(1)}$ -$\tilde{{\varepsilon}}_{6,1}$ $\tilde{{\varepsilon}}_6^{(2)}$ 0.718220134970 $1.55 \times 10^{-9 }$ 0.718220133425 1.137762108070 $2.68 \times 10^{-10}$ 1.137762107802 \[4pt\] --------------------------------- -------------------------------- --------------------------------- --------------------------------- -------------------------------- ----------------------------------- : Subdominant coefficient $\tilde{{\varepsilon}}_6$ in the strong coupling expansion (\[sexticST\]) of the ground state $(0,0)$ energy for the sextic radial anharmonic potential (\[potsextic\]) for $D=1,2,3,6$. First order correction $\tilde{{\varepsilon}}_{6,1}$ in PT, see text, included. 10 d.d. 
in $\tilde{{\varepsilon}}_6$ confirmed independently in LMM.[]{data-label="sexticst2"}

Sextic Radial Anharmonic Oscillator: Conclusions
------------------------------------------------

It is shown that the 5-parametric Approximants (\[approximantsextic\]), (\[1dsextic\]), (\[appsextic\]), taken as variational trial functions for the first four states $(0,0), (0,1), (0,2), (1,0)$ of the sextic radial $D$-dimensional anharmonic oscillator with the potential (\[potsextic\]), provide extremely high relative accuracy in the energy, ranging from $\sim 10^{-14}$ to $\sim 10^{-8}$ for different coupling constants $g$ and dimensions $D$. The variational parameters depend on $g$ and $D$ in a smooth manner and can be easily interpolated. If the variationally optimized Approximants are taken as the zeroth approximation in the Non-Linearization (iteration) procedure, they lead to a fast convergent procedure with rate of convergence $\sim 10^{-4}$. For the ground state, the relative deviation of the logarithmic derivative of the variationally optimized Approximant from the exact one was calculated [*vs*]{} the radial coordinate $r$ for different $g$ and $D$. It was always smaller than $\sim 10^{-6}$ at any $r \in [0, \infty)$. This implies that the Approximants with interpolated parameters $\{\tilde{a}_0,\tilde{a}_2,\tilde{a}_4,\tilde{a}_6,\tilde{c}_2\}$ [*vs*]{} $g$ and $D$ provide a highly accurate approximation of the eigenfunctions of the sextic radial anharmonic oscillator, while the respective eigenvalues are given by a ratio of two integrals.
Conclusions
===========

Based on the analysis of the asymptotic behavior of the phase $\Phi(r)$ of the wavefunction for the state $(n_r, \ell)$, defined in the form $$\Psi(r)\ =\ r^{\ell} Q_{n_r}(r)\, e^{-\Phi(r)}\ ,$$ at large and small $r$, and of its weak and strong coupling expansions in the $r$- and $(g\,r)$-spaces, we constructed an interpolating expression for the phase in the extremely compact form (\[generalrecipe\]), which should be valid for a general radial polynomial potential and [*any*]{} of its eigenstates. This expression is called the (Phase) Approximant and denoted $\Phi_t(r)$. Here $Q_{n_r}(r)$ is a polynomial with real, positive roots. Intuitively, it is clear that the Approximant is very close to the exact phase; thus, we make no attempt at a rigorous analysis of how close it is for the general polynomial potential. It was checked successfully for a few particular, physically important cases. The (Phase) Approximant depends on a number of free parameters which can be fixed variationally by taking the function $\Psi_t(r) = r^{\ell} Q_{n_r}(r)\, e^{-\Phi_t(r)}$ as a trial function in the energy functional and then either imposing orthogonality conditions with respect to the previous functions with $k_r < n_r$, or simply requiring the roots of $Q_{n_r}(r)$ to be positive. It turns out that the optimal variational parameters allow us to reproduce with high accuracy all coefficients of the growing terms of the semiclassical expansion at large $r$; thus, $\Psi_t(r)$ is close to the exact wavefunction $\Psi(r)$ even in the classically forbidden domain, where it is exponentially small!
This phenomenon allows us to draw two conclusions: (i) $\Psi_t(r)$ is of very high “quality”: after minimization it becomes close to the exact eigenfunction even in the domain where it is exponentially small, thus, where it gives an exponentially small contribution to the variational integrals, and (ii) without losing much accuracy we can impose constraints on the free parameters of $\Phi_t(r)$ by requiring the exact reproduction of all growing terms of the phase in the semiclassical domain $r \gg 1$. Now we proceed to discuss the three particular cases which were studied in detail. Concretely, for the three two-term radial anharmonic oscillators in $D$-dimensional space with the potential $$V_p(r)\ =\ r^2\ +\ g^p\,r^{p+2}\ ,\quad r \geq 0\ ,$$ at $p=1,2,4$ we constructed a locally highly accurate, compact, few-parametric, uniform approximation, $$\Psi(r)\ =\ r^{\ell} P_{n_r}(r^2)\, e^{-\Phi_p(r)}\ ,$$ for the eigenfunction of the state $(n_r, \ell)$ (with radial quantum number $n_r$ and angular momentum $\ell$), where $P_{n_r}(r^2)$ is a polynomial in $r^2$ of degree $n_r$ with positive roots. The phases $\Phi_p(r)$ take the form: [**(I)**]{} Cubic anharmonic oscillator, $p=1$, $$\Phi_1(r)\ =\ \frac{\tilde{a}_0\ +\ \tilde{a}_1\,gr+\tilde{a}_2\,r^2\ +\ \tilde{a}_3\,g\,r^3}{\sqrt{1\ +\ \tilde{b}_3\, g\, r}}\ +\ \frac{1}{4}\,\log[1\ + \tilde{b}_3\, g\,r] + D \log \left[1\ +\ \sqrt{1\ +\ \tilde{b}_3\, g\,r}\right]\ ,$$ see Eq.(V.13) in I. If all parameters are chosen to be optimal in the variational calculation with $\Psi_t(r)$ as trial function [^1], the variational energy is obtained with absolute accuracy $10^{-8}$ (8 s.d.) for $D=1, 2, 3, 6$ and $g=0.1, 1.0, 10.0$, see I, for the four lowest states: $(0,0), (0,1), (0,2), (1,0)$. All parameters are smooth, slowly-changing functions, see Fig.4 in paper I for the ground state $(0,0)$. Undoubtedly, the same accuracy would be reached for other dimensions and coupling constants, as well as for excited states.
The first correction $E_2$ to the variational energy $E_0^{(1)}$ is always of the order $10^{-8}$, while the rate of convergence of the energy in the Non-Linearization procedure is $\frac{E_3}{E_2} \sim 10^{-4}$. The relative deviation of $\Phi_t(r)$ from the exact phase is a bounded function; it does not exceed $\sim 10^{-4}$. Now we fix in $\Phi_1(r)$ the parameters $\tilde{b}_3, \tilde{a}_2, \tilde{a}_1, \tilde{a}_0$ to be functions of the parameter $\tilde{a}_3$, following the constraint Eq.(V.14) in paper I $$\tilde{b}_3\ =\ \frac{25}{4}\,\tilde{a}^2_3\ ,$$ and two more constraints $$\tilde{a}_2\ =\ \frac{125 \tilde{a}^2_3 + 12}{150 \tilde{a}_3}\quad ,\quad -g^2 \tilde{a}_1\ =\ \frac{9375 \tilde{a}^4_3 - 1000 \tilde{a}^2_3 + 48}{15000 \tilde{a}^3_3} \ ,$$ which allow us to reproduce [*exactly*]{} all four growing terms of the phase in the semiclassical domain $r {\rightarrow}\infty$: $r^{5/2}, r^{3/2}, r^{1/2}, \log r$; and also putting $$\tilde{a}_0 \ =\ \frac{2\,\tilde{a}_1}{\tilde{b}_3}\ +\ \frac{D+1}{2}\ ,$$ to keep $y(r=0)={\Phi^{\prime}_1}|_{r=0}=0$, so that no linear term in $r$ occurs in the expansion of the phase at $r {\rightarrow}0$. Interestingly, these four constraints hold approximately for the optimal parameters of the unconstrained variational calculation. Hence, the trial phase becomes one-parametric: it depends on the parameter $\tilde{a}_3$ only. If we now redo the minimization of the variational energy w.r.t. $\tilde{a}_3$ only, see the plots in Figs.\[(0\_0)\] - \[(1\_0)\], it allows us to reproduce no fewer than 5 s.d. correctly in the energy for $D=1,2,3,6$ and $g = 0.1, 1.0, 10.0$ for all four low-lying states: $(0,0), (0,1), (0,2), (1,0)$. This result seems unprecedented: we are not aware of a situation in which a one-parameter trial function has led to such accuracy. The rate of convergence of the energy in the Non-Linearization procedure reduces to $\frac{E_3}{E_2} \sim 10^{-2}$, in comparison with $\sim 10^{-4}$ for the unconstrained function Eq.(V.13) in paper I.
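The effect of these constraints can be checked mechanically with a computer algebra system: after substituting them, the algebraic part of $\Phi_1$ expanded at $r\to\infty$ has to reproduce the growing part of $\int \sqrt{r^2+g\,r^3}\,dr = \frac{2}{5}\sqrt{g}\,r^{5/2}+\frac{1}{3\sqrt{g}}\,r^{3/2}-\frac{1}{4}\,g^{-3/2}\,r^{1/2}+O(\log r)$. A small sympy sketch (the sample values $\tilde a_3=2$, $g=1$, $D=1$ are illustrative choices, not taken from the text):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
a3, g, D = sp.Integer(2), sp.Integer(1), sp.Integer(1)        # sample values
b3 = sp.Rational(25, 4) * a3**2                               # b3 = 25 a3^2/4
a2 = (125*a3**2 + 12) / (150*a3)                              # constraint on a2
a1 = -(9375*a3**4 - 1000*a3**2 + 48) / (15000*a3**3 * g**2)   # constraint on a1
a0 = 2*a1/b3 + (D + 1) / sp.Integer(2)                        # no linear term at r = 0

# algebraic part of Phi_1, rewritten with r = 1/t^2 (so r -> oo as t -> 0+):
# (a0 + a1 g r + a2 r^2 + a3 g r^3)/sqrt(1 + b3 g r)
phase = (a0*t**6 + a1*g*t**4 + a2*t**2 + a3*g) / (t**5 * sp.sqrt(t**2 + b3*g))
s = sp.series(phase, t, 0, 1).removeO()

# growing terms must match int sqrt(r^2 + g r^3) dr at g = 1:
#   (2/5) r^(5/2) + (1/3) r^(3/2) - (1/4) r^(1/2) + O(log r)
assert sp.simplify(s.coeff(t, -5) - sp.Rational(2, 5)) == 0   # r^(5/2)
assert sp.simplify(s.coeff(t, -3) - sp.Rational(1, 3)) == 0   # r^(3/2)
assert sp.simplify(s.coeff(t, -1) + sp.Rational(1, 4)) == 0   # r^(1/2)
```

The log terms of $\Phi_1$ contribute only $\log r$ and decaying terms, so they do not enter these power coefficients.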
![Cubic oscillator: parameter $\tilde{a}_3$ [*vs*]{} $g$ at $D=1,2,3,6$ for the ground state $(0,{0})$. The positions of the maxima of the different curves nearly coincide, $g_{max} \approx 0.56$, and for $g \gtrsim g_c \approx 2.6$ the curves become indistinguishable.[]{data-label="(0_0)"}](Fig_0_0.eps){width="70.00000%"} ![Cubic oscillator: parameter $\tilde{a}_3$ [*vs*]{} $g$ at $D=1,2,3,6$ for the 1st excited state $(0,{1})$.[]{data-label="(0_1)"}](Fig_0_1.eps){width="70.00000%"} ![Cubic oscillator: parameter $\tilde{a}_3$ [*vs*]{} $g$ at $D=1,2,3,6$ for the 2nd excited state $(0,{2})$.[]{data-label="(0_2)"}](Fig_0_2.eps){width="70.00000%"} ![Cubic oscillator: parameter $\tilde{a}_3$ [*vs*]{} $g$ at $D=1,2,3,6$ for the 3rd excited state $(1,{0})$.[]{data-label="(1_0)"}](Fig_1_0.eps){width="70.00000%"} Note that in this case the first correction $y_1$ to the derivative of the trial phase $y_0= {\Phi^{\prime}_1}$ is a bounded function with maximal deviation of the order of $10^{-2}$; for illustration see Figs.\[D=1-y01\_c\]-\[D=3-y01\_c\] at $g=1$ (cf. Figs.5-7 in paper I), $$|y_1|_{max} \sim \begin{cases} 0.0167\ ,\qquad D=1 \\ 0.0168\ ,\qquad D=2 \\ 0.0178\ ,\qquad D=3 \\ 0.0224\ ,\qquad D=6 \\ \end{cases}$$ cf. (\[cases-4\]), (\[cases-6\]). It is evident that the first correction to the phase itself, $\Phi_1$, is also a bounded function for all $r \in [0, \infty)$. ![Cubic oscillator at $D=1$: function $y_0=(\Phi_t)'$ (on left) and its first correction $y_1$ (on right, with amplification at small $r$ at sub-figure) [*vs*]{} $r$ for $g=1$.
[]{data-label="D=1-y01_c"}](D1g1_c.eps){width="99.00000%"} ![Cubic oscillator at $D=2$: function $y_0=(\Phi_t)'$ (on left) and its first correction $y_1$ (on right, with amplification at small $r$ at sub-figure) [*vs*]{} $r$ for $g=1$.[]{data-label="D=2-y01_c"}](D2g1_c.eps){width="99.00000%"} ![Cubic oscillator at $D=3$: function $y_0=(\Phi_t)'$ (on left) and its first correction $y_1$ (on right, with amplification at small $r$ at sub-figure) [*vs*]{} $r$ for $g=1$.[]{data-label="D=3-y01_c"}](D3g1_c.eps){width="99.00000%"} [**(II)**]{} Quartic anharmonic oscillator, $p=2$ $$\Phi_2(r)\ =\ \dfrac{\tilde{a}_0\ +\ \tilde{a}_2\, r^2\ +\ \tilde{a}_4\, g^2\, r^4}{\sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2}}\ +\ \dfrac{1}{4}\log\left[1\ +\ \tilde{b}_4\,g^2\, r^2\right]\ +\ \dfrac{D}{2}\log\left[1\ +\ \sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2}\right] \ ,$$ see (\[quartictrialg\]), where $\tilde{a}_0,\tilde{a}_4$ are the only two [*free*]{} non-linear parameters, see Figs.1-2 (a),(c); the parameters $\tilde{b}_4, \tilde{a}_2$ are fixed following the exact constraints (\[a4\]), (\[a2\]), $$\tilde{b}_4\ =\ 9\,\tilde{a}_4^2\quad ,\quad \tilde{a}_2\ =\ \frac{ 27\,\tilde{a}_4^2+1}{18 \tilde{a}_4}\ .$$ This allows us to reproduce [*exactly*]{} all growing terms of the phase at $r {\rightarrow}\infty$: $r^3, r, \log r$. In this case the variational energy coincides with the exact one in no fewer than 8 s.d. for the states $(0,0), (0,1), (0,2), (1,0)$ for $D=1,2,3,6$ and $g^2=0.1, 1.0, 10.0$. There is no doubt, however, that the same holds for any integer $D$ and any $g^2 > 0$, and for any excited state. The rate of convergence of the energy in the Non-Linearization procedure remains $\frac{E_3}{E_2} \sim 10^{-4}$, as for the unconstrained function (\[quartictrialg\]). Note that in this case the first correction $y_1$ to the derivative of the trial phase $y_0 = {\Phi^{\prime}_2}$ is a bounded function with maximal deviation of the order of $10^{-2}$. The same is true for the phase itself, ${\Phi_2}$.
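As in the cubic case, one can check symbolically that with these two constraints the algebraic part of $\Phi_2$ reproduces the growing part of $\int\sqrt{r^2+g^2r^4}\,dr=\frac{g}{3}\,r^3+\frac{1}{2g}\,r+O(r^{-1})$. A short sympy sketch (the sample values $\tilde a_4=1$, $g=1$ are illustrative; $\tilde a_0$ is free and does not enter the growing terms):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
a4, g = sp.Integer(1), sp.Integer(1)          # sample values
b4 = 9 * a4**2                                # constraint b4 = 9 a4^2
a2 = (27*a4**2 + 1) / (18*a4)                 # constraint on a2
a0 = sp.Integer(1)                            # free; irrelevant for growing terms

# algebraic part of Phi_2 with r = 1/t (r -> oo as t -> 0+):
# (a0 + a2 r^2 + a4 g^2 r^4)/sqrt(1 + b4 g^2 r^2)
phase = (a0*t**4 + a2*t**2 + a4*g**2) / (t**3 * sp.sqrt(t**2 + b4*g**2))
s = sp.series(phase, t, 0, 1).removeO()

# growing terms must match (g/3) r^3 + (1/(2g)) r at g = 1
assert sp.simplify(s.coeff(t, -3) - sp.Rational(1, 3)) == 0   # r^3
assert sp.simplify(s.coeff(t, -1) - sp.Rational(1, 2)) == 0   # r
```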
[**(III)**]{} Sextic anharmonic oscillator, $p=4$ $$\Phi_4(r)\ =\ \dfrac{\tilde{a}_0\ +\ \tilde{a}_2\,r^2\ +\ \tilde{a}_4\,g^2\,r^4\ +\ \tilde{a}_6\,g^4\,r^6}{\sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4\,r^4}}\ +\ \frac{1}{4g^2}\log\left[\tilde{c}_2\, g^2\,r^2\ +\ \sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4\,r^4}\right] \ +$$ $$\dfrac{1}{4}\log\left[1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6\,g^4 r^4\right]\ +\ \dfrac{D}{4}\log\left[1+\sqrt{1\ +\ \tilde{b}_4\,g^2\,r^2\ +\ \tilde{b}_6g^4r^4}\right]\ ,$$ see (\[sextictrial\]), where $\{\tilde{a}_0,\tilde{a}_2,\tilde{a}_4,\tilde{a}_6,\tilde{c}_2\}$ are the only five [*free*]{} non-linear parameters, see Figs.6-7; the parameters $\tilde{b}_4,\tilde{b}_6$ are fixed following the constraints (\[a6\]), (\[a4-6\]), $$\tilde{b}_6\ =\ 16\,\tilde{a}_6^{2}\quad ,\quad \tilde{b}_4\ =\ 32\,\tilde{a}_6\,\tilde{a}_4\ ,$$ in order to reproduce exactly all growing terms of the phase at $r {\rightarrow}\infty$: $r^4, r^2, \log r$. It is worth emphasizing that the optimal parameters of the variational calculation obey the above constraints with high accuracy. In this case the variational energy coincides with the exact one in no fewer than 8 s.d. for the states $(0,0), (0,1), (0,2), (1,0)$ for $D=1,2,3,6$ and $g^4=0.1, 1.0, 10.0$. There is no doubt, however, that the same holds for any integer $D$ and any $g^4 > 0$, and for any excited state. The rate of convergence of the energy in the Non-Linearization procedure remains $\frac{E_3}{E_2} \sim 10^{-4}$, as for the unconstrained function (\[sextictrial\]). Note that in this case the first correction $y_1$ to the derivative of the trial phase $y_0= {\Phi^{\prime}_4}$ is a bounded function with maximal deviation of the order of $10^{-2}$. The same is true for the phase itself, ${\Phi_4}$. In order to have benchmark results for the energies, we developed the LMM for all three potentials at $p=1,2,4$ for $D=1,2,3,6$ and three values of the coupling constant $g^p=0.1, 1.0, 10.0$.
For those values of the coupling constants the LMM was used with 50, 100, and 200 mesh points, respectively, independently of $D$. These calculations allowed us to find the eigenenergies to 12 figures. Concluding, we state that the constrained (phase) Approximant (\[generalrecipe\]), written for the general radial polynomial potential and an arbitrary eigenstate, which guarantees the absence of the linear in $r$ term at small distances and reproduces exactly all growing, energy-independent terms at large $r$, provides a uniform approximation of the exact phase in the whole domain $r \in [0, \infty)$. In all the particular cases studied, the absolute deviation of the derivative of the phase turned out to be of order $10^{-2}$ or less. The variational energy, found with the constrained (phase) Approximant taken as the phase of the trial function, provides an accuracy of no fewer than 5 significant digits for any integer dimension $D$ and any coupling constant $g > 0$. From the physical point of view, the constrained (phase) Approximant corresponds to a classical action for a potential which coincides with the original potential at small and large distances and differs (slightly) at intermediate distances of order one. The Bohr-Sommerfeld quantization condition for this potential will be studied elsewhere. It is straightforward to write the Riccati-Bloch (cf.(7)) and the generalized Bloch (cf.(11)) equations for the perturbed two-body Coulomb problem or, put differently, the radially perturbed hydrogen atom, $$\label{Coulomb} V_c(r)\ =\ g\,\tilde{V}(gr) = -\frac{b_0}{r} + b_1 g^2 r + b_2 g^3 r^2 + \ldots \ ,\ b_0=1\ ,\quad r \in [0,\infty)\ ,$$ where $b_i, i=1,2,\ldots$ are parameters, to perform a perturbation theory analysis in powers of $g$, and to construct the expansion in generating functions, see (13)-(15). This will be done elsewhere. In a way similar to what was done in paper I and in the present paper, the constrained (Phase) Approximant can then be constructed.
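The $D=1$ benchmark entries can be spot-checked independently, since at $D=1$ the $\ell=0$ radial problem coincides with the even-parity problem on the whole line (in the units $2M=\hbar=1$ used here). A crude second-order finite-difference diagonalization (numpy; not the LMM, and with ad hoc grid parameters) already recovers several leading digits of the tabulated values $E_0 = 1.435624619\ldots$ for $V=x^2+x^6$ ($g^4=1$) and $\tilde{\varepsilon}_0 = 1.144802453\ldots$ for $W=x^6$:

```python
import numpy as np

def ground_energy(V, L=7.0, n=1500):
    """Lowest eigenvalue of -psi'' + V(x) psi = E psi on [-L, L]
    (units 2M = hbar = 1), second-order finite differences,
    Dirichlet boundary conditions."""
    x = np.linspace(-L, L, n + 2)[1:-1]   # interior grid points
    h = x[1] - x[0]
    H = (np.diag(2.0 / h**2 + V(x))
         - np.diag(np.ones(n - 1) / h**2, k=1)
         - np.diag(np.ones(n - 1) / h**2, k=-1))
    return np.linalg.eigvalsh(H)[0]

e_full = ground_energy(lambda x: x**2 + x**6)   # D = 1, g^4 = 1
e_strong = ground_energy(lambda x: x**6)        # strong-coupling problem W = r^6
```

Refining $h$ improves this only as $O(h^2)$; by contrast, the LMM quoted above reaches 12 figures with a few hundred mesh points.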
On the intuitive level, this was realized long ago for the quarkonium (funnel-type) potential, $$V_c\ =\ -\frac{1}{r}\ +\ b_1 g^2 r \ ,$$ in [@Turbiner:1987], and, recently, for the Yukawa potential, $$V_c\ =\ -\frac{b_0}{r}\,e^{- g r} \ ,$$ see [@delValle-Nader:2018], which led to highly accurate variational energies in both cases. Acknowledgments {#acknowledgments .unnumbered} =============== The authors thank J.C. López Vieyra and H. Olivares Pilón for their interest in this work and useful remarks, and especially for help with numerical and computer calculations. J.C. delV. is supported by CONACyT PhD Grant No.570617 (Mexico). This work is partially supported by CONACyT grant A1-S-17364 and DGAPA grant IN113819 (Mexico). First PT Corrections and Generating Functions $G_{4,6}$ for the Quartic Anharmonic Oscillator {#appendix:A} =========================================================================================== We present the corrections ${\varepsilon}_{4,6}$ and $\mathcal{Y}_{4,6}$ in explicit form for the quartic anharmonic potential (\[potquartic\]) in the expansions (\[Y2n\]) and (\[Y2n-c\]), $${\varepsilon}_4 \ =\ -\frac{1}{16}\,D(D+2)(2D+5)\ ,$$ $${\varepsilon}_6 \ =\ \frac{1}{64}\,D(D+2)(8D^2+43D+60)\ ,$$ $$\mathcal{Y}_4(v)\ =\ -\frac{1}{8}v^5\ -\ \frac{1}{16}(3D+8)v^3\ +\ \frac{{\varepsilon}_4}{D}\,v\ ,$$ $$\mathcal{\tilde Y}_4({\rm v})\ =\ -\frac{1}{16}\left[2{\rm v}^2\ +\ (3D+8){\rm v}\ +\ (D+2)(2D+5)\right]\ ,$$ $$\mathcal{Y}_6(v)\ =\ \dfrac{1}{16}v^7\ +\ \frac{1}{32}(5D+16)v^5\ +\ \frac{1}{16}(3D^2+17D+25)v^3\ +\ \frac{{\varepsilon}_6}{D}\,v\ .$$ $$\mathcal{\tilde Y}_6({\rm v})\ =\ \dfrac{1}{64}\left[4{\rm v}^3\ +\ {2}(5D+16){\rm v}^2\ +\ 4 (3D^2+17D+25){\rm v}\ +\ (D+2)(8D^2+43D+60)\right]\ .$$ We also present two generating functions for the phase, see the expansions (\[phase\]) and (\[genexpphi\]), $$G_4(r;g)\ =\ \frac{5}{24 g^2 w^3}\ +\ \frac{D \left(1+w+w^2\right)}{4 g^2 (w+1)w^2}\ +\ \frac{D^2}{8 g^2 w}\ ,$$ $$G_6(r;g)\ =\ -\frac{5}{16
g^2 w^6}\ -\ \frac{D \left(15+30 w+20 w^2+16 w^3+20 w^4+30 w^5+15 w^6\right)}{48 g^2 (w+1)^2w^5}$$ $$-\frac{D^2 \left(4+8 w+8 w^2+12 w^3+18 w^4+9 w^5\right)}{32 g^2 (w+1)^2w^4}\ -\ \frac{D^3 \left(1+3 w^2\right)}{48 g^2 w^3}\ ,$$ where $w=\sqrt{1+g^2v^2}$. Two remarks are in order: $(i)$ in the variable $w$ all generating functions are rational functions, $(ii)$ the polynomial structure in $D$ of the generating functions becomes evident. First PT Corrections and Generating Functions $G_{4,6}$ for the Sextic Anharmonic Oscillator {#appendix:B} ============================================================================================= We present explicitly the first corrections ${\varepsilon}_{8,12}$ and $\mathcal{Y}_{8,12}$ for the sextic anharmonic oscillator (\[potsextic\]), see (\[Yncorrection-sex\]) and (\[factorizations\]), $${\varepsilon}_8\ =\ -\frac{1}{128}\,D\,(D+2)\,(D+4)\,(9 D^2+72 D+152)\ ,$$ $${\varepsilon}_{12}\ =\ \dfrac{1}{1024}\,D\,(D+2)\,(D+4)\,(81 D^4+1404 D^3+9624 D^2+31152 D+40384)\ .$$ $$\mathcal{Y}_8(v)\ =\ -\frac{1}{8}v^9-\frac{1}{16}(3D+16)v^7-\frac{1}{16}(3D^2+27D+64)v^5 -\frac{1}{32}(4D^3+49D^2+204D+288)v^3+\frac{{\varepsilon}_{8}}{D}\,v$$ $$\mathcal{\tilde Y}_8({\rm v})\ =\ -\frac{1}{8}{\rm v}^4-\frac{1}{16}(3D+16){\rm v}^3-\frac{1}{16}(3D^2+27D+64){\rm v}^2 -\frac{1}{32}(4D^3+49D^2+204D+288){\rm v}$$ $$-\frac{1}{128}\,(D+2)\,(D+4)\,(9 D^2+72 D+152)\ ,$$ $$\mathcal{Y}_{12}(v)\ =\ \frac{1}{16}v^{13}\ +\ \frac{1}{32} (5 D+32) v^{11}\ +\ \frac{5}{64} (3 D^2+34 D+104) v^9$$ $$+\ \frac{1}{32} (8 D^3+125 D^2+688 D+1344) v^7$$ $$+\ \frac{1}{256} (55 D^4+1038 D^3+7708 D^2+26784 D+36800) v^5$$ $$+\ \frac{1}{256} (36 D^5+783 D^4+7040 D^3+32768 D^2+78912 D+78336) v^3$$ $$+\ \frac{{\varepsilon}_{12}}{D}\,v\ .$$ $$\mathcal{\tilde Y}_{12}({\rm v})\ =\ \frac{1}{16}\,{\rm v}^{6}\ +\ \frac{1}{32}\,(5 D+32)\,{\rm v}^{5} \ +\ \frac{5}{64}\,(3 D^2+34 D+104)\,{\rm v}^4$$ $$+\ \frac{1}{32}\,(8 D^3+125 D^2+688 D+1344)\,{\rm v}^3$$ $$+\ \frac{1}{256}\,(55 D^4+1038 D^3+7708 D^2+26784 D+36800)\,{\rm v}^2$$ $$+\ \frac{1}{256}\,(36 D^5+783 D^4+7040 D^3+32768 D^2+78912 D+78336)\,{\rm v}$$ $$+\ \dfrac{1}{1024}\,(D+2)\,(D+4)\,(81 D^4+1404 D^3+9624 D^2+31152 D+40384)\ .$$ Now we present explicitly the two generating functions $G_4(r), G_6(r)$ in the expansion (\[phase\]) for the sextic oscillator, $$G_4(r)\ =\ \frac{r^2}{4w}\bigg(\frac{\left(5 + w^2\right)}{3 w^2}\ +\ \frac{D \left(1+w+w^2\right)}{w (w+1)}\ +\ \frac{D^2}{4}\bigg)\ ,$$ $$4 g^2\,G_6(r)\ =\ \frac{5-3w^2}{w^6}\ +\ \frac{D \left(15+15 w-4 w^2+2 w^3+6 w^4+6 w^5\right)} {6 (w+1)w^5}$$ $$\ +\ \frac{D^2 \left(2+2 w+w^2+3 w^3+3 w^4\right)}{4 (w+1)w^4} +\ \frac{D^3 \left(1+3 w^2\right)}{24 w^3}\ ,$$ where $w=\sqrt{1+g^4r^4}$. Two remarks are in order: $(i)$ in the variable $w$, all generating functions are rational functions; $(ii)$ the polynomial structure in $D$ of these generating functions is evident. General Aspects of the Two-Term Radial Anharmonic Oscillator {#appendix:C} ============================================================= Let us consider the general two-term radial anharmonic oscillator potential, $$\label{twoterm} V(r)\ =\ r^2\ +\ g^{m-2}r^{m}\ ,\quad m\ >\ 2\  ,$$ cf. (\[potential\]), where $a_2=a_m=1$ and $a_3,a_4,...,a_{m-1}=0$. The expansion of ${\varepsilon}$ in powers of ${\lambda}$ has the form $$\label{generalseries} {\varepsilon}({\lambda})\ =\ {\varepsilon}_0\ +\ {\lambda}^{m-2}{\varepsilon}_{m-2}\ +\ {\lambda}^{2(m-2)}{\varepsilon}_{2(m-2)}\ +\ ...\ ,$$ see (\[eps-in-la\]). Not surprisingly, for even potentials ($m=2p$, $p=1,2,...$) the PT coefficients ${\varepsilon}_{2n(p-1)}$, $n=1,2,...$, are polynomials in $D$ factorized as follows $${\varepsilon}_{2n(p-1)}(D)\ =\ D(D+2)(D+4) \ldots (D+2p-2)R_{(n-1)(p-1)}(D)$$ where $R_{(n-1)(p-1)}(D)$ is a polynomial in $D$ of degree $(n-1)(p-1)$, see [@ELETSKY]. We emphasize that, in the framework of the Non-Linearization Procedure, any PT coefficient ${\varepsilon}_{2n(p-1)}(D)$ is calculated by linear algebra means.
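At $D=1$ these factorized coefficients can be cross-checked against elementary second-order Rayleigh-Schrödinger theory: the $\ell=0$ problem reduces to the even states of $H_0=p^2+x^2$ on the line (with $E_n=2n+1$), and $x^4|0\rangle$, $x^6|0\rangle$ have finite support in the oscillator basis, so the second-order sums are finite. A small numpy check (the basis size is an arbitrary but sufficient choice):

```python
import numpy as np

def second_order(power, nbasis=24):
    """Second-order RSPT correction to the ground state of H0 = p^2 + x^2
    for the perturbation x^power, using x_{n,n+1} = sqrt((n+1)/2)."""
    x = np.zeros((nbasis, nbasis))
    for i in range(nbasis - 1):
        x[i, i + 1] = x[i + 1, i] = np.sqrt((i + 1) / 2.0)
    xp = np.linalg.matrix_power(x, power)
    n = np.arange(1, nbasis)
    return np.sum(xp[1:, 0]**2 / (-2.0 * n))   # E_0 - E_n = -2n

# quartic: eps_4(D) = -D(D+2)(2D+5)/16  ->  -21/16 at D = 1
assert abs(second_order(4) + 21/16) < 1e-12
# sextic: eps_8(D) = -D(D+2)(D+4)(9D^2+72D+152)/128  ->  -3495/128 at D = 1
assert abs(second_order(6) + 3495/128) < 1e-12
```

Both sums reproduce the $D=1$ values of the formulas listed in Appendices A and B exactly.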
Series (\[generalseries\]) is divergent, by a Dyson instability argument. It is evident that the larger $D$ and $m$ are, the larger the index of divergence is, see e.g. [@IVANOV]. For the potential (\[twoterm\]) the generating functions $G_0(r;g)$ and $G_2(r;g)$ can be written in closed form, $$\frac{G_0(r)}{(2M)^{1/2}}\ =\ \frac{2}{m+2}r^2\left\{\sqrt{1+(gr)^{m-2}}+\frac{m-2}{4}\, {}_2F_1\left(\frac{1}{2},\frac{2}{m-2};\frac{m}{m-2};-(gr)^{m-2}\right)\right\}\ ,$$ $$\frac{g^2\,G_2(r)}{(2M)^{1/2}}\ =\ \frac{1}{4}\,\log\left[1+(gr)^{m-2}\right]\ +\ \frac{D}{m+2}\,\log\left[1+\sqrt{1+(gr)^{m-2}}\right]\ ,$$ where ${}_2 F_1$ is the hypergeometric function, see I. [^1]: Note that, as a result of the minimization, the ratio of the optimal parameters $\frac{\tilde{b}_3}{\tilde{a}^2_3}$ is equal to $\frac{25}{4}$ with accuracy $\sim 10^{-5}$ for all $D$ and $g$ we studied.
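For a numerical sanity check of this closed form, note that (with $2M=1$) $G_0$ coincides with the elementary integral $\int_0^r s\,\sqrt{1+(gs)^{m-2}}\,ds$; this identification is easy to verify analytically at $m=4,6$. An mpmath comparison at the arbitrary sample values $g=0.8$, $r=1.3$:

```python
from mpmath import mp, mpf, hyp2f1, quad, sqrt

mp.dps = 30
g, r = mpf('0.8'), mpf('1.3')

errs = []
for m in (4, 5, 6):
    k = m - 2
    # closed form for G_0 (2M = 1) from the text
    closed = 2 * r**2 / (m + 2) * (sqrt(1 + (g*r)**k)
             + mpf(k)/4 * hyp2f1(mpf(1)/2, mpf(2)/k, mpf(m)/k, -(g*r)**k))
    # direct numerical quadrature of the elementary integral
    direct = quad(lambda s: s * sqrt(1 + (g*s)**k), [0, r])
    errs.append(abs(closed - direct))

assert max(errs) < mpf('1e-20')
```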
--- abstract: 'We consider isometric actions of lattices in semisimple algebraic groups on (possibly non-compact) homogeneous spaces with (possibly infinite) invariant Radon measure. We assume that the action has a dense orbit, and demonstrate two novel and non-classical dynamical phenomena that arise in this context. The first is the existence of a mean ergodic theorem even when the invariant measure is infinite, which implies the existence of an associated limiting distribution, possibly different than the invariant measure. The second is uniform quantitative equidistribution of all orbits in the space, which follows from a quantitative mean ergodic theorem for such actions. In turn, these results imply quantitative ratio ergodic theorems for isometric actions of lattices. This sheds some unexpected light on certain equidistribution problems posed by Arnol’d [@A] and also on the equidistribution conjecture for dense subgroups of isometries formulated by Kazhdan [@K]. We briefly describe the general problem regarding ergodic theorems for actions of lattices on homogeneous spaces and its solution via the duality principle [@GN2], and give a number of examples to demonstrate our results. Finally, we also prove results on quantitative equidistribution for transitive actions.' address: - | School of Mathematics\ University of Bristol\ Bristol, U.K. - 'Department of Mathematics, Technion' author: - Alexander Gorodnik - Amos Nevo title: | On Arnold’s and Kazhdan’s\ equidistribution problems --- Equidistribution beyond amenable groups ======================================= Let $G$ be a locally compact second countable (lcsc) group acting continuously on an lcsc space $X$. Assume that $X$ carries a $\sigma$-finite $G$-invariant Radon measure $\mu$ of full support. Consider the following three natural conditions that often arise in practice. 1. The action of $G$ on $X$ is transitive. 2. The action of $G$ on $X$ has a unique invariant Radon measure. 3. 
The action of $G$ on $X$ is isometric. Letting $\Gamma$ be a countable dense subgroup of $G$, consider the problem of formulating and establishing equidistribution results for the $\Gamma$-orbits in $X$. This problem has been studied in the past mainly in the case when the group $\Gamma$ is amenable and the measure $\mu$ is finite. It is a compelling challenge to generalize the theory to the case where the group is non-amenable and the measure may be infinite. In particular, one would like to establish that a limiting distribution for the $\Gamma$-orbits exists, and study what its properties might be. This challenge can be generalized further to the case where $X$ is any standard Borel space with a $\sigma$-finite measure, where now the results sought are mean ergodic theorems in $L^p$ and pointwise ergodic theorems holding almost everywhere. The possible choices of $G$, $X$ and $\Gamma$ include a large set of important examples arising naturally in various branches of dynamics. We will discuss some of these examples below. In the present paper we will concentrate on establishing equidistribution results with an effective rate of convergence for certain non-amenable groups, and in particular lattice subgroups of semisimple algebraic groups. A major ingredient in our considerations will be mean ergodic theorems for actions of these groups. We note that a mean ergodic theorem for spaces with infinite measure is a novel and distinctly non-classical phenomenon. Indeed, it is well-known (see [@Aa Ch. 2]) that for an action of a single transformation on a space with infinite measure, no formulation of such a result is possible. Likewise, an effective rate of orbit equidistribution is a phenomenon that does not arise in the ergodic theory of amenable groups, since the ergodic averages may converge arbitrarily slowly.
Historical background --------------------- The problem of extending ergodic theory to general countable groups was raised half a century ago by Arnol’d and Krylov [@AK]. They have established equidistribution of dense free subgroups of $SO_3({\mathbb R})$ acting on $S^2$, w.r.t. the word length, and have formulated the problem of establishing an ergodic theorem for balls w.r.t. the word length for actions of general countable finitely generated groups. Motivated by the problems raised in [@AK], Kazhdan [@K] has established that the orbits of certain dense $2$-generator subgroups of the isometry group of the plane satisfy a ratio ergodic theorem, namely that for every $x\in X$ and any two bounded open sets $A_1$ and $A_2$ (with nice boundary) $$\lim_{t\to \infty}\frac{{\left|{\left\{{\gamma\in B_t \,;\, \gamma^{-1}x\in A_1}\right\}}\right|}}{{\left|{\left\{{\gamma\in B_t \,;\, \gamma^{-1}x\in A_2}\right\}}\right|}}=\frac{m(A_1)}{m(A_2)}\,.$$ Here $B_t$ denotes the ball of radius $t$ w.r.t. the word length on the free semigroup, so that the counting is in effect governed by the weights given by convolution powers, and $m$ is Lebesgue measure on the plane. Kazhdan has raised in [@K] the question of extending this result to other dense subgroups of a Lie group $G$ acting on a homogeneous space $X=G/H$, particularly when the action is isometric. Motivated by both [@AK] and [@K], Guivarc’h in [@Gu1] has proved the mean ergodic theorem for actions of free groups on a probability space, generalizing von Neumann’s classical result, and in [@Gu2] has established a generalization of Kazhdan’s ratio ergodic theorem for dense subgroups of isometries of Euclidean spaces, the weights being given again by the convolution powers of a fixed probability measure on $\Gamma$. Guivarc’h has also raised the problem of establishing equidistribution results in the generality of actions with a unique invariant measure.
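This kind of counting can be made concrete in a short numerical sketch. The sketch below is our own illustration, not the construction of [@K]: we take the free semigroup on two planar isometries (an irrational rotation about the origin and a unit translation), enumerate all words of bounded length, and tally how many of the resulting orbit points land in each of two test sets. Convergence of the ratio is governed by convolution powers and is slow, so the sketch only reports the empirical counts.

```python
import math

# A word in the free semigroup on two generators acts on the plane by
# composing two isometries:
#   a = rotation by 1 radian about the origin (an irrational rotation),
#   b = translation by (1, 0).
# These generators, the base point, and the test sets are illustrative
# choices of ours, not the specific ones studied by Kazhdan.

COS1, SIN1 = math.cos(1.0), math.sin(1.0)

def rot(p):
    return (COS1 * p[0] - SIN1 * p[1], SIN1 * p[0] + COS1 * p[1])

def trans(p):
    return (p[0] + 1.0, p[1])

def orbit_points(x, max_len):
    """One orbit point per word of length <= max_len in the free
    semigroup on {rot, trans}, counted with multiplicity."""
    level, points = [x], [x]
    for _ in range(max_len):
        level = [g(p) for p in level for g in (rot, trans)]
        points.extend(level)
    return points

in_square = lambda p: 0.0 <= p[0] <= 1.0 and 0.0 <= p[1] <= 1.0   # A1 = [0,1]^2
in_disk = lambda p: p[0] ** 2 + p[1] ** 2 <= 4.0                  # A2 = disk of radius 2

pts = orbit_points((0.5, 0.5), 12)          # 2^13 - 1 words in total
n1 = sum(in_square(p) for p in pts)
n2 = sum(in_disk(p) for p in pts)
# If a ratio theorem of this type applies to these generators, the
# predicted limit of n1/n2 is m(A1)/m(A2) = 1/(4*pi); convergence is
# slow, so we only report the raw empirical counts.
print(n1, n2, n1 / n2)
```

If a ratio ergodic theorem of this type holds for these generators, the predicted limit is $m(A_1)/m(A_2)=1/(4\pi)\approx 0.08$; short words give only a rough approximation.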
Ergodic theorems on homogeneous spaces : Arnold’s problems ---------------------------------------------------------- Subsequently (see [@A] problems 1996-15 p. 115, 2002-16 p. 148) Arnol’d has revisited this topic and raised the following challenges. Consider the standard Lorentzian form $Q(x,y,z)=x^2+y^2-z^2$ on ${\mathbb R}^{2,1}$, and the identity component of the group of isometries of the form, denoted by $G=SO^0(2,1)$. Under the standard linear action of $G$, the space ${\mathbb R}^{2,1}$ decomposes into three invariant subsets of different types, as follows. 1. the light cone $\mathcal{C}$, namely the set where the form vanishes, 2. the two-sheeted hyperboloid $\mathcal{H}$ given by ${\left\{{x^2+y^2-z^2=-1}\right\}}$, each of whose components inherits a $G$-invariant Riemannian structure of constant negative curvature, isometric to the hyperbolic plane. Thus ${\mathcal{H}}$ is a homogeneous space $G/K$ with compact stability group conjugate to $K=SO_2({\mathbb R})\cong {\mathbb T}$, 3. the one-sheeted hyperboloid known as the de-Sitter space $\mathcal{S}$ given by ${\left\{{x^2+y^2-z^2=1}\right\}}$, which inherits a $G$-invariant two-dimensional Lorentzian structure, and is a homogeneous space $G/H$ with stability group conjugate to $H=SO^0(1,1)\cong {\mathbb R}$. The group $G$ has a natural action on the projectivization of the light cone, namely the usual action by fractional linear transformations of the circle. The circle forms the boundary of the hyperbolic plane and is denoted $\mathcal{B}$. Consider now any lattice subgroup $\Gamma\subset G$. Then all orbits of the lattice in hyperbolic space are discrete, and all orbits of the lattice on the boundary are dense. On the de-Sitter space, almost all $\Gamma$-orbits are dense, but not all. For example, the $\Gamma$-orbit of a point is discrete if the intersection of its stability group with the lattice is a lattice in the stability group.
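The decomposition above is easy to verify numerically: elements of $G$ preserve $Q$, hence each level set of $Q$ is invariant. The following sketch (our own illustration; the particular boost, rotation, and sample points are arbitrary choices) checks this for a product of a boost and a rotation:

```python
import math

def Q(x, y, z):
    """The Lorentzian form x^2 + y^2 - z^2 on R^{2,1}."""
    return x * x + y * y - z * z

def boost(t):
    """A boost in SO^0(2,1) mixing the x and z coordinates."""
    c, s = math.cosh(t), math.sinh(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [s, 0.0, c]]

def rotation(a):
    """A rotation of the (x, y)-plane; also lies in SO^0(2,1)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def stratum(v, eps=1e-9):
    """The invariant stratum of v: the sign of Q(v)."""
    q = Q(*v)
    return 0 if abs(q) < eps else (1 if q > 0 else -1)

# One sample point on each of the three invariant sets.
samples = [(1.0, 0.0, 1.0),   # light cone: Q = 0
           (0.0, 0.0, 1.0),   # hyperboloid Q = -1
           (1.0, 0.0, 0.0)]   # hyperboloid Q = +1

for v in samples:
    w = apply(rotation(0.3), apply(boost(0.7), v))
    assert abs(Q(*w) - Q(*v)) < 1e-9  # Q is invariant under the action
    assert stratum(w) == stratum(v)   # hence each stratum is invariant
```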
Thus two problems which arise naturally and were formulated by Arnol’d are: 1. establish equidistribution of the lattice orbits on the boundary $\mathcal{B}$. 2. establish ergodic theorems for dense lattice orbits in the de-Sitter space $\mathcal{S}$. The first problem was solved in [@G2] (see also [@gm; @go] for generalisations), and the second problem was solved by F. Maucourant (unpublished) and in greater generality with explicit rate in [@GN2]. The distribution of orbits for the action of $\Gamma$ on the light cone $\mathcal{C}$ was computed in [@l], [@n], [@G1], [@GW Sec. 12]. Ergodic theorems beyond amenable groups : some surprises -------------------------------------------------------- We now turn to explicating the results alluded to above and to describing their general context. The study of the distribution of $G$-orbits in a space $X$ proceeds by fixing a family of bounded Borel measures $\beta_t$ on $G$ for $t\in {\mathbb Z}^+$ or $t\in {\mathbb R}^+$. The measures $\beta_t$ are not necessarily probability measures. For example, one important special case is when $B_t\subset G$ is a family of bounded sets of positive Haar measure, and we then fix a choice of growth rate function $V(t)$, and define $\beta_t$ to be Haar measure on $B_t$ divided by $V(t)$. The growth function $V(t)$ may be of lower order of magnitude than $m_G(B_t)$, for example. We consider the operators defined on a compactly supported test function $f:X\to\mathbb{R}$ by $$\pi_X(\beta_t)f(x)=\int_G f(g^{-1}x)d\beta_t(g)\,.$$ Thus in the special case noted above $$\pi_X(\beta_t)f(x)=\frac{1}{V(t)}\int_{g\in B_t} f(g^{-1}x)dg\,.$$ The properties of this family of operators provide the key to analyzing the distribution of the orbit $G\cdot x$. Of course, in the classical case of amenable groups acting by measure-preserving transformations on a probability space we have for a family of sets $B_t\subset G$ : 1.
the right choice of the growth function $V(t)$ is the total measure $ m_G(B_t)$, so that the operators above become averaging (i.e. Markov) operators, 2. the limit of the time averages as $t\to \infty$ is the space average of the function, when the probability measure is ergodic. In particular, the limiting distribution is $G$-invariant. It turns out that the ergodic theory of non-amenable groups is full of surprises, and reveals several phenomena that have no analogues in classical amenable ergodic theory. 1. the operators $\pi_X(\beta_t)$ may fail to converge, even when $\beta_t$ are normalized ball averages w.r.t. a word metric and the action is an isometric action on a compact $G$-transitive space preserving Haar measure. 2. the operators $\pi_X(\beta_t)$ may converge to a limit operator, but the limit may be different than the ergodic mean, even when $\beta_t$ are normalized ball averages w.r.t. a word metric and the action is an isometric action on a compact $G$-transitive space preserving Haar measure. 3. When the invariant measure is infinite, the operator $\pi_X(\beta_t)$ associated with a family $B_t$ may converge for a choice of growth function $V(t)$ which is different than $m_G(B_t)$, with convergence for almost all points, or even for all points $x$ outside a countable set : $$\lim_{t\to \infty} \frac{1}{V(t)}\int_{g\in B_t} f(g^{-1}x)dg=\int_X fd\nu_x\,.$$ 4. The limit measure $\nu_x$ appearing in (3) may be non-invariant and depend non-trivially on the initial point $x$. Furthermore, the limit measure may be completely different if the family of sets $B_t$ which are taken as the support of the measures $\beta_t$ is changed. 5. The expression in (3) may converge for [*each and every* ]{} $x\in X$, but still the measure $\nu_x$ may be non-invariant, and it may depend on the initial point $x$ and on the family $B_t$. This may happen even when the invariant measure is unique and even when the action is isometric. 6. 
The operators $\pi_X(\beta_t)$ in (3) may converge with a [*uniform rate of convergence*]{}, valid for almost all points, or even for all points, namely $${\left| \frac{1}{V(t)}\int_{g\in B_t} f(g^{-1}x)dg-\int_X fd\nu_x\right|}\le C(x,f) V(t)^{-\delta}\,.$$ This can happen in compact spaces and also in non-compact spaces. 7. As a result, convergence of the ratios : $$\frac{{\left|{\left\{{\gamma\in B_t \,;\, \gamma^{-1}x\in A_1}\right\}}\right|}}{{\left|{\left\{{\gamma\in B_t \,;\, \gamma^{-1}x\in A_2}\right\}}\right|}}$$ may take place at a uniform rate for almost all points. As before, even for an isometric action (with infinite invariant measure) the uniform rate may apply to all points, but $\nu_x$ appearing in the limiting expression $\frac{\nu_x(A_1)}{\nu_x(A_2)}$ may depend on $x$ and $B_t$. We remark that (1) and (2) are implicit already in [@AK], [@Gu1], and that (1) has been noted explicitly in [@Bew] (see also Theorem \[th:free\] below). The phenomena described in (3) and (4) were first demonstrated by a pioneering result of Ledrappier [@l] on the distribution of orbits of lattice subgroups of $SL_2({\mathbb R})$ in the real plane, see also [@lp1] and [@lp2]. Equivalently, the result applies to the action of a lattice in $SO^0(2,1)$ on the light cone $\mathcal{C}$ above (see [@GW Th. 12.2] for more information). The phenomenon described in (5) for isometric actions was first demonstrated in [@GW Cor. 1.4(ii)] (see also Theorem \[th:inf\] below). Regarding (6), note that mean and pointwise ergodic theorems with uniform rates for semisimple Kazhdan groups acting on probability spaces have been established in [@n0; @MNS; @GN1]. The problem of whether an ergodic theorem implies equidistribution with rates for all points forms one of the main subjects of the present paper. The solution to this problem gives the phenomena described in (7) as immediate corollaries.
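Ledrappier’s setting in (3)-(4) can be explored directly by brute force: enumerate the elements of the lattice $\Gamma=SL_2({\mathbb Z})$ in a norm ball and count how many move a fixed vector with irrational coordinate ratio into a given window. The sketch below is our own illustration; the max-entry norm cutoffs, base vector, and window are arbitrary choices, and no attempt is made to compute the normalization $V(t)$ or the limit measure.

```python
import math

def sl2z_ball(N):
    """All matrices (a b; c d) in SL_2(Z) with entries bounded by N
    (the max-entry norm stands in for a matrix norm here)."""
    mats = []
    for a in range(-N, N + 1):
        for b in range(-N, N + 1):
            for c in range(-N, N + 1):
                if a != 0:
                    # ad - bc = 1  =>  d = (1 + b c) / a  when divisible
                    if (1 + b * c) % a == 0:
                        d = (1 + b * c) // a
                        if abs(d) <= N:
                            mats.append((a, b, c, d))
                elif b * c == -1:           # a = 0: need -bc = 1
                    for d in range(-N, N + 1):
                        mats.append((0, b, c, d))
    return mats

def count_in_window(N, v, window):
    """Count gamma in the norm-N ball with gamma.v inside the window."""
    (x0, x1), (y0, y1) = window
    cnt = 0
    for a, b, c, d in sl2z_ball(N):
        x, y = a * v[0] + b * v[1], c * v[0] + d * v[1]
        if x0 <= x <= x1 and y0 <= y <= y1:
            cnt += 1
    return cnt

v = (1.0, math.sqrt(2))   # irrational coordinate ratio: dense orbit
for N in (5, 10, 20):
    print(N, count_in_window(N, v, ((-1.0, 1.0), (-1.0, 1.0))))
```

The counts grow with the cutoff; identifying the correct normalization and the (non-invariant) limit measure is precisely the content of [@l].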
### The mean ergodic theorem and equidistribution in compact spaces Assume now that the space $X$ is equipped with a $G$-invariant probability measure $\mu$. We say that $\pi_X(\beta_t)$ satisfies the [*mean ergodic theorem*]{} in $L^2$ (with limit operator $\mathcal{P}$) if for every $f\in L^2(X,\mu)$, $$\label{eq:mean} \left\| \int_G f(g^{-1}x)d\beta_t(g)-\mathcal{P}f(x)\right \|_2\to 0\quad\hbox{as $t\to \infty$,}$$ where $\mathcal{P}:L^2(X,\mu)\to L^2(X,\mu)$ is a linear operator, which may be different than the orthogonal projection on the space of $G$-invariant functions, see for instance Theorem \[th:free\] below. The mean ergodic theorem is known to hold for several large classes of lcsc groups, including general amenable groups (see [@n1] for a survey) and also for semisimple $S$-algebraic groups and their lattice subgroups (see [@GN1] for a comprehensive discussion). The next obvious question regarding the distribution of orbits is that of pointwise convergence of the averages, namely whether for every $f\in L^2(X,\mu)$ and almost all $x\in X$ $$\label{eq:pointwise} {\left|\int_G f(g^{-1}x)d\beta_t(g)-\mathcal{P}f(x)\right|} \longrightarrow 0\quad\hbox{as $t\to \infty$}.$$ When the space $X$ is a compact metric space, one can consider sharpening the pointwise ergodic theorem in two material ways. The first is to establish [*pointwise everywhere*]{} convergence when $f$ is continuous, namely that (\[eq:pointwise\]) holds for every point $x\in X$ without exception, in which case we say that the orbits of $G$ in $X$ are equidistributed. It was noted in [@GN1] (based on an earlier argument due to Guivarc’h [@Gu1]) that for isometric actions on compact spaces with an invariant ergodic probability measure of full support, pointwise everywhere convergence of the averages for continuous functions [*follows*]{} from the mean ergodic theorem. This result has as a consequence the fact that such actions are in fact uniquely ergodic.
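A classical amenable instance of this mechanism is an irrational rotation of the circle: the action is isometric, the averages converge at every point, and the rotation is uniquely ergodic. The following sketch (our own illustration; the angle and test function are arbitrary choices) checks the everywhere convergence numerically:

```python
import math

def birkhoff_average(f, x, alpha, n):
    """Average of f along the first n points of the rotation orbit
    x, x + alpha, x + 2*alpha, ... on the circle R/Z."""
    return sum(f((x + k * alpha) % 1.0) for k in range(n)) / n

# Rotation by the irrational number alpha = sqrt(2) - 1: an isometry of
# the circle.  Unique ergodicity means the averages converge for *every*
# starting point x to the Lebesgue integral of f.
alpha = math.sqrt(2) - 1.0
f = lambda x: math.cos(2.0 * math.pi * x)    # integral over the circle is 0

for x in (0.0, 0.3, 0.77):
    avg = birkhoff_average(f, x, alpha, 20000)
    assert abs(avg) < 1e-3   # matches the space average, uniformly in x
```

For this particular $f$ the error is in fact $O(1/n)$, since the exponential sums telescope; general continuous $f$ may converge more slowly.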
Thus unique ergodicity can be established via spectral arguments. The second is to establish that the convergence in (\[eq:pointwise\]) proceeds at a fixed rate, uniformly for every starting point, if the function $f$ is Hölder continuous, namely $$\label{eq:quantitative} {\left|\int_G f(g^{-1}x)d\beta_t(g)-\mathcal{P}f(x)\right|} \le C(f,x)E(t)$$ where $E(t)\to 0$ as $t\to \infty$. In this case we say that the orbits have a uniform rate of equidistribution, and would like to estimate this rate. In the present paper we will establish equidistribution results with an effective uniform rate in two significant cases, namely isometric and transitive actions. To establish a quantitative version of these results for actions of general groups $G$ we will require the spectral assumption of the existence of a spectral gap. Recall that a unitary representation has spectral gap if it has no almost invariant sequence of unit vectors. We emphasize that this assumption is necessary for the conclusion to hold, and that its validity is a very common phenomenon. For example, all actions of groups with property $T$ have a spectral gap (in the orthogonal complement of the invariants). We also show that the convergence rate is uniform on the sets $$C^a(X)_1={\left\{{f\in C(X)\,;\, \sup_{x\in X} |f(x)|+ \sup_{x\neq y}\frac{{\left|f(x)-f(y)\right|}}{d(x,y)^a}\le 1}\right\}}$$ of Hölder continuous functions with Hölder norm bounded by one. Let us now describe some concrete instances of these results, when the action preserves a probability measure. Uniform rate of equidistribution : some examples ------------------------------------------------ ### Free groups Let $\mathbb{F}_r$ be a free group with $r$ generators, $r\ge 2$. We denote by $\ell(\gamma)$ the length of an element $\gamma\in\mathbb{F}_r$ w.r.t. the free generating set, and by $B_{2n}$ the ball of radius $2n$.
We denote by ${\varepsilon}_0 :\mathbb{F}_r\to{\left\{{\pm 1}\right\}}$ the sign character of the free group, taking the value $1$ on words of even length, and $-1$ on words of odd length. Given a unitary representation $\pi$ of $\mathbb{F}_r$ on a Hilbert space ${\mathcal{H}}$, let ${\mathcal{H}}^1$ denote the space of invariants, and let ${\mathcal{H}}^{{\varepsilon}_0}$ denote the space realizing the character ${\varepsilon}_0$. Any vector $f_0\in {\mathcal{H}}^{{\varepsilon}_0}$ satisfies $\pi(\gamma)f_0=(-1)^{\ell(\gamma)}f_0$. Given a decreasing family of finite index subgroups $\Gamma_i$ of ${\mathbb F}_r$, we denote by $\hat {\mathbb F}_r$ the profinite completion which is equipped with an invariant metric defined by $$d(\gamma_1,\gamma_2)=\max\{|{\mathbb F}_r:\Gamma_i|^{-1}:\, \gamma_1^{-1}\gamma_2\notin \hat \Gamma_i\}$$ for $\gamma_1,\gamma_2\in \hat {\mathbb F}_r$. \[th:free\] 1. Consider an isometric action of ${\mathbb F}_r$ on a compact manifold $X$. Let $\mu$ be an ergodic smooth probability measure on $X$ with full support such that the representation of $\mathbb{F}_r$ on $L^2_0(X,\mu)$ has a spectral gap. Then for every Hölder-continuous $f\in C^a(X)_1$ and every $x\in X$, $$\label{eq:mean_free} \frac{1}{\#B_{2n}}\sum_{\gamma\in B_{2n}} f(\gamma^{-1}x)= \mathcal{P}f(x)+O\left(e^{-\theta_a n}\right)$$ where - $\theta_a>0$ depends only on the spectral gap and $\dim (X)$, and - the operator $\mathcal{P}$ is given by $$\mathcal{P}f=\int_X f\,d\mu+\frac{r-1}{r}\left(\int_X f\bar f_0\,d\mu\right)f_0,$$ where $f_0\equiv 0$ when ${\mathcal{H}}^{{\varepsilon}_0}=0$ which is always the case when $X$ is connected. Otherwise, ${\mathcal{H}}^{{\varepsilon}_0}$ is $1$-dimensional, and $f_0$ denotes a unit vector that spans ${\mathcal{H}}^{{\varepsilon}_0}$. 2. 
Let $\Gamma_i\subset {\mathbb F}_r$ be a decreasing family of finite index subgroups such that - $|\Gamma_i:\Gamma_{i+1}|$ is uniformly bounded, - the family of representations $L_0^2({\mathbb F}_r/\Gamma_i)$ of ${\mathbb F}_r$ satisfies property $(\tau)$ (i.e., has uniform spectral gap). Then for every $f\in C^a(\hat {\mathbb F}_r)_1$ and $x\in \hat{\mathbb F}_r$, (\[eq:mean\_free\]) holds as well, with respect to the Haar probability measure $\mu$ on $\hat {\mathbb F}_r$, the pro-finite completion associated with the family $\Gamma_i$. We recall that taking $X={S}^2$ to be the unit sphere in ${\mathbb R}^3$, and $G=SO_3({\mathbb R})$, it was shown in [@LPS; @LPS2] (see also [@c; @o] for generalisations to higher-dimensional spheres) that certain dense subgroups $\Gamma\subset G$ admit a spectral gap in their representation on $L^2_0(S^2)$. The class of such subgroups was significantly enlarged recently in [@BG]. It may even be the case that [*every*]{} dense finitely generated subgroup of $G$ admits a spectral gap, without exception. In any case, whenever a spectral gap exists, [*every*]{} orbit of the dense group becomes equidistributed on the sphere at a uniform rate, depending on the size of the spectral gap (as well as the parameters $r$ and $a$, of course). ### Lattices in semisimple algebraic groups {#sec:lat} Let ${\sf G}\subset \hbox{GL}_d$ be a simply connected absolutely simple algebraic group defined over a local field $K$ of characteristic zero which is isotropic over $K$ (for example, ${\sf G}=SL_d$). Let $G={\sf G}(K)$, and let $\Gamma$ be a lattice in $G$. We fix a norm on $\hbox{Mat}_d(K)$ which is the Euclidean norm if $K$ is Archimedean and the $\max$-norm otherwise. Let $B_t$ denote the ball ${\left\{{g\in G\,;\, \log \|g\|<t}\right\}}$. \[th:lattice\] 1. Consider an isometric action of $\Gamma$ on a compact manifold $X$. Let $\mu$ be a smooth probability measure on $X$ with full support such that the action of $\Gamma$ in $L^2_0(X,\mu)$ has a spectral gap.
Then for every $f\in C^a(X)_1$ and every $x\in X$, $$\label{eq:mean_lattice} \frac{1}{\#(\Gamma \cap B_t)} \sum_{\gamma\in B_t} f(\gamma^{-1}x) = \int_X f\,d\mu+O\left(e^{-\theta_a t}\right)$$ with explicit $\theta_a>0$. 2. Let $\Gamma_i\subset \Gamma$ be a decreasing family of finite index subgroups such that - $|\Gamma_i:\Gamma_{i+1}|$ is uniformly bounded, - the family of representations $L_0^2(\Gamma/\Gamma_i)$ of $\Gamma$ satisfies property $(\tau)$. Then for every $f\in C^a(\hat \Gamma)_1$ and every $x\in \hat\Gamma$, (\[eq:mean\_lattice\]) holds as well, with respect to the Haar probability measure $\mu$ on the pro-finite completion $\hat \Gamma$ associated with the family $\Gamma_i$. One can also formulate a version of Theorem \[th:lattice\] for general semisimple $S$-arithmetic groups, but then one needs to impose the condition that the unitary representation of $G^+$ induced from the representation of $\Gamma$ on $L^2_0(X,\mu)$ has a strong spectral gap (see [@GN1] for the terminology). ### Transitive actions Another significant case where pointwise everywhere convergence holds with a uniform rate is that of transitive actions. In this case this property holds for every bounded Borel function. Let ${\sf G}\subset \hbox{GL}_d$ be a linear algebraic group defined over a local field $K$ of characteristic zero, and $G={\sf G}(K)$. We fix a norm on $\hbox{Mat}_d(K)$ which is the Euclidean norm if $K$ is Archimedean and the $\max$-norm otherwise. Let $\beta_t$ denote the Haar-uniform probability measure on $G$ supported on ${\left\{{g\in G\,;\,\log \|g\|<t}\right\}}$. \[th:trans\] Consider a transitive continuous action of $G$ on a compact space $X$ that supports an invariant Borel probability measure $\mu$.
Assume that the Haar-uniform averages $\pi_X(\beta_t)$ satisfy the mean ergodic theorem, namely for $f\in L^2(X,\mu)$, $$E(f,t):=\left\| \pi_X(\beta_{t})f(x)- \int_X f\,d\mu \right\|_2\to 0\quad\hbox{as $t\to\infty$.}$$ Then for every bounded Borel function $f$ on $X$ with $\sup |f|\le 1$ and for every $x\in X$, $$\pi_X(\beta_{t})f(x)= \int_X f\,d\mu+O\left(\left(\sup_{s\in (t-1,t+1)} E(f,s)\right)^{\theta} \right)$$ with an explicit $\theta\in (0,1)$ independent of $f$ and $x$. Proofs of Theorems \[th:free\], \[th:lattice\], and \[th:trans\] will be given in Section \[sec:last\]. Ergodic theorems : spaces with infinite measures ------------------------------------------------ Let us now turn to spaces with infinite invariant measure, and consider the problem of establishing mean and pointwise ergodic theorems and quantitative equidistribution of orbits for general group actions on such spaces. In general this basic challenge is largely unexplored and here we take up the important case of dense subgroups $\Gamma \subset G$ acting isometrically by translations, where we can establish [*pointwise everywhere*]{} convergence with a uniform rate. To that end we introduce a natural generalization of the mean ergodic theorem in this setting (see Definition \[def:mean\] below). In particular, we obtain the following equidistribution result that provides a quantitative version of [@GW Cor. 1.4]. Let ${\sf G}\subset \hbox{GL}_d$ be a semisimple simply connected algebraic group which is defined over a number field $K$ and is $K$-simple. Let $T$ and $S$ be finite sets of valuations of $K$ with $T\subset S$. For $v\in S$, we denote by $K_v$ the corresponding completion. Let $O_S$ denote the ring of $S$-integers and $\Gamma={\sf G}(O_S)$. Let $$\begin{aligned} H(g)&=\prod_{v\in S} \|g_v\|_v\quad\hbox{for $g=(g_v)\in \prod_{v\in S} {\sf G}(K_v)$},\end{aligned}$$ where $\|\cdot\|_v$ are norms on $\hbox{Mat}_d(K_v)$ as in Section \[sec:lat\].
\[th:inf\] Assume that $\Gamma$ is dense in $G=\prod_{v\in T} {\sf G}(K_v)$ with respect to the diagonal embedding. Then there exist $\alpha\in\mathbb{Q}^+$ and $\beta\in\mathbb{N}$ such that for every Hölder function $f$ on $G$ with exponent $a$ and compact support and for every $x\in G$ $$\frac{1}{t^{\beta-1}e^{\alpha t}} \sum_{\gamma\in \Gamma:\, \log H(\gamma)<t} f(\gamma x)=\int _G f(g)\frac{dm_G(g)}{H(gx)^\alpha}+O_{f,x}(e^{-\theta_a t})$$ uniformly for $x$ in compact sets, where $m_G$ is a suitably normalised Haar measure on $G$ and $\theta_a>0$. We note that the $L^2$-convergence for the operators appearing in Theorem \[th:inf\] is a special case of the results of [@GN2]. Hence, since the action of $\Gamma$ on $X$ is isometric, Theorem \[th:inf\] follows from Theorem \[th:isometric\](2) below. We refer to [@GW p. 107] for the identification of $\alpha$, and also for the fact that taking the family $B_t$ associated with the distance function given by a power of the height function will change the power of the density function appearing in the limiting distribution. Thus, as we have already mentioned above, this result demonstrates that in the infinite-measure setting the limit measure does not have to be invariant and may depend nontrivially on the initial point $x$ and the family $B_t$, even if the action is an isometric action with a spectral gap. Dense groups of isometries : Kazhdan’s conjecture ------------------------------------------------- Let $(X,d)$ be an lcsc metric space, and let $G=Isom(X)$ be its group of isometries. Assume that the action of $G$ on $X$ is transitive, and let $m_X$ be the unique isometry-invariant Radon measure on $X$. Fix two bounded open sets $A_1$ and $A_2$ with boundary of zero measure. Consider a countable dense subgroup $\Gamma\subset G$, and a family of sets $B_t\subset \Gamma$, for example, balls w.r.t. a left-invariant metric.
For each $x\in X$ the orbit $\Gamma\cdot x$ is dense in $X$ and we can form the ratios $$\frac{{\left|{\left\{{\gamma\in B_t \,;\, \gamma^{-1}x\in A_1}\right\}}\right|}}{{\left|{\left\{{\gamma\in B_t \,;\, \gamma^{-1}x\in A_2}\right\}}\right|}}\,.$$ Consider the problem of whether the ratios satisfy a ratio ergodic theorem, namely whether the limit exists as $t\to \infty$ and furthermore whether it is given by $$\label{eq:ratio} \lim_{t\to \infty} \frac{\sum_{\gamma\in B_t} \chi_{A_1}(\gamma^{-1}x)}{\sum_{\gamma\in B_t} \chi_{A_2}(\gamma^{-1}x)}=\frac{m_X(A_1)}{m_X(A_2)}.$$ This problem was raised by D. Kazhdan in [@K], where the case of certain dense $2$-generator subgroups of $Isom({\mathbb R}^2)$ acting on the plane was studied. Assuming one of the generators was an irrational rotation, a version of (\[eq:ratio\]) was established, but with $B_t$ taken as balls in the abstract free group or semigroup, and not as balls w.r.t. the word metric on $\Gamma$. This amounts to considering weighted averages on $\Gamma$, the weights being given by convolution powers. This result was generalized by Y. Guivarc’h [@Gu2], who considered weighted averages given by convolution powers on dense subgroups of $Isom({\mathbb R}^n)$ acting on ${\mathbb R}^n$ (note also that a gap in the argument in [@K] was closed in [@Gu2]). For further results in this direction see [@v1], [@v2]. Theorem \[th:inf\] has of course a direct bearing on this problem. In principle, to show that ratios converge, one does not need to establish the much stronger result that both the numerator and the denominator converge at a common rate and find an explicit expression for the rate. However, that is precisely the conclusion of Theorem \[th:inf\], so as an immediate corollary, we obtain the following. Keeping the notation and assumptions of Theorem \[th:inf\], we have: 1.
if $f_1$ and $f_2$ are continuous and $f_2\ge 0$ (and not identically zero), then for every $x_1,x_2\in X$, $$\lim_{t\to \infty}\frac{\sum_{\gamma\in \Gamma:\, \log H(\gamma)<t} f_1(\gamma x_1) }{\sum_{\gamma\in \Gamma:\, \log H(\gamma)<t} f_2(\gamma x_2) } =\frac{\int _X f_1(g)H(g x_1)^{-\alpha}dm_X(g)}{\int _X f_2(g) H(g x_2)^{-\alpha}dm_X(g)},$$ 2. if in addition $f_1$ and $f_2$ are Hölder continuous with exponent $a$, then for every $x_1,x_2\in X$, $$\frac{\sum_{\gamma\in \Gamma:\,\log H(\gamma)<t} f_1(\gamma x_1) }{\sum_{\gamma\in \Gamma:\, \log H(\gamma)<t} f_2(\gamma x_2) } =\frac{\int _X f_1(g)H(gx_1)^{-\alpha}dm_X(g)}{\int _X f_2(g) H(gx_2)^{-\alpha}dm_X(g)} +O_{f_1,f_2, x_1,x_2}(e^{-\theta_a t})$$ uniformly over $x_1,x_2$ in compact sets. Thus the ratios converge for every point, with uniform rate, but the limit is [*not*]{} the ratio of the integrals with respect to the isometry invariant measure, but with respect to a [*different*]{} measure. We also remark that if $f_1$ and $f_2$ are bounded measurable functions with bounded support, with $f_2\ge 0$ and not identically zero, then the ratios converge to the stated limit at almost every point, with uniform rate. This is a consequence of the results of [@GN2]. Proof of quantitative equidistribution results ============================================== Let $G$ be an lcsc group acting measurably on a measurable space $X$ equipped with a $\sigma$-finite quasi-invariant measure $\mu$. We fix an increasing filtration of $X$ by measurable sets $X_r$, $r\in\mathbb{N}$, of finite measure. We denote by $\|\cdot\|_{p,r}$ the $L^p$-norm with respect to the measure $\mu|_{X_r}$. We consider families $\beta_t$ of bounded Borel measures on $G$, and in particular, given a family of sets $B_t\subset G$ for $t\ge t_0$, and a positive growth function $V(t)$, we consider the operators: $$\pi_X(\beta_t)f(x)=\frac{1}{V(t)}\int_{g\in B_t}f(g^{-1}x)\, dg$$ for measurable $f:X\to \mathbb{R}$.
\[def:mean\] The operators $\pi_X(\beta_t)$ satisfy the [*mean ergodic theorem*]{} in $L^1$ for the action of $G$ on $X$ if for every $r\in \mathbb{N}$ and $f\in L^1(X,\mu|_{X_r})$, the sequence $\pi_X(\beta_t)f$ converges in $L^1(X,\mu|_{X_r})$. It is clear from the definition that for $1\le p < \infty$ there exist linear operators $$\mathcal{P}_r: L^p(X,\mu|_{X_r})\to L^p(X,\mu|_{X_r})$$ such that $$\label{eq:mean-inf} E_r(f,t):=\left\|\pi_X(\beta_t)f-\mathcal{P}_rf \right\|_{p,r}\to 0\quad\hbox{as $t\to\infty$,}$$ and since $$\mathcal{P}_{r+1}|_{L^p(X,\mu|_{X_r})}=\mathcal{P}_{r},$$ it is consistent to denote these operators by $\mathcal{P}$. We shall then say that $\pi_X(\beta_t)$ satisfy the mean ergodic theorem with limit operator $\mathcal{P}$. We note that our notion of the mean ergodic theorem depends on the choice of the filtration $X_r\subset X$ and on the normalisation $V(t)$. It is of course necessary to choose the normalisation so that the operator $\mathcal{P}$ is not trivial. Then the mean ergodic theorem yields information about the limiting distribution of the orbits. As noted already above, the fact that the foregoing formulation of the mean ergodic theorem for spaces with [*infinite*]{} measure is meaningful is an indication of a novel and distinctly non-classical phenomenon. Indeed, it is well-known (see [@Aa Ch. 2]) that for an action of a single transformation on a space with infinite measure, no normalisation $V(t)$ for which (\[eq:mean-inf\]) holds can be found. Nonetheless, it has been gradually realised that mean ergodic theorems and equidistribution results do hold for some classes of actions on infinite measure spaces (see [@l; @lp1; @lp2; @G1; @G2; @GW]). In fact, in the forthcoming paper [@GN2] we establish the mean ergodic theorem for lattices in $S$-algebraic semisimple groups acting on general algebraic homogeneous spaces. This result is part of a systematic approach to ergodic theory on homogeneous spaces via the duality principle.
Isometric actions ----------------- Let us assume now that $X$ is a metric space equipped with a Radon measure $\mu$, so that the measures of balls are finite. We fix a filtration of $X$ by balls $X_r$ of radius $r$ centered at some fixed $x_0\in X$. We denote by $D_{\varepsilon}(x)$ the closed ball in $X$ of radius ${\varepsilon}$ centered at $x$. 1. We say that the measure $\mu$ is [*uniformly of full support*]{} if for every $r\in\mathbb{N}$ and ${\varepsilon}\in (0,1]$, $$\inf_{x\in X_r} \mu(D_{\varepsilon}(x))>0.$$ 2. We say that the measure $\mu$ has [*local dimension at most $\rho$*]{} if for every $r\in\mathbb{N}$ there exists $m_r>0$ such that for all ${\varepsilon}\in (0,1]$ and $x\in X_r$, $$\mu(D_{\varepsilon}(x))\ge m_r{\varepsilon}^\rho.$$ \[r:uniform\] If the sets $X_r$ are compact, then every measure $\mu$ of full support is uniformly of full support. Moreover, if $X$ is a compact manifold and $\mu$ is a smooth measure with full support, then $\mu$ has local dimension at most $\dim (X)$. The following theorem is our main technical result concerning equidistribution for isometric actions. \[th:isometric\] Consider an isometric action of an lcsc group $G$ on the lcsc metric space $X$ equipped with a quasi-invariant Radon measure $\mu$. Assume that the mean ergodic theorem holds for the operators $\pi_X(\beta_t)$ in the action of $G$ on $X$. Then 1. If the measure $\mu$ is uniformly of full support, then for every uniformly continuous function $f$ such that ${\operatorname{supp}}(f)\subset X_r$ and $(\mathcal{P}f)|_{X_r}$ is uniformly continuous, we have $$\max_{x\in X_r} \left|\pi_X(\beta_t)f(x)-\mathcal{P}f(x)\right|=o_{r,f}(1)$$ as $t\to\infty$. 2. If the measure $\mu$ has local dimension at most $\rho$, then for every $f\in C^a(X)_1$ such that ${\operatorname{supp}}(f)\subset X_r$ and $(\mathcal{P}f)|_{X_r}\in C^a(X)_1$, we have $$\max_{x\in X_r} \left|\pi_X(\beta_t)f(x)-\mathcal{P}f(x)\right|\ll_r E_r(f,t)^{a/(a+\rho)}$$ for all sufficiently large $t$.
Let us note two other general approaches that derive equidistribution results from estimates on $L^2$-norms. The first approach [@Gu1; @CO; @GN1], which is originally due to Guivarc’h, applies only in the case of compact spaces and does not produce a rate of convergence. The second approach [@CU §8] uses the theory of elliptic operators, so that it can only be applied in the setting of Lie groups and sufficiently smooth functions. Transitive actions ------------------ As noted above, in general the behavior of the operators $\pi_X(\beta_t)$ may depend quite sensitively on the initial point. Nonetheless, when the action is transitive it is still possible to obtain a uniform result, provided that a certain regularity property of the measures $\beta_t$ holds. Let $d$ be a right invariant metric on $G$ compatible with the topology of $G$ such that the closed balls with respect to $d$ are compact (such a metric always exists, see for instance [@hp]). We denote the closed ball of radius ${\varepsilon}$ centered at $g\in G$ by $\mathcal{O}_{\varepsilon}(g)$. \[def:holder\] The family of measures $\beta_t$ is called [*coarsely monotone*]{} if there exist monotone functions $\kappa:(0,1]\to (0,\infty)$ and $\delta:(0,1]\to (1,\infty)$, written ${\varepsilon}\mapsto\kappa_{\varepsilon}$ and ${\varepsilon}\mapsto\delta_{\varepsilon}$, such that $$\delta_{\varepsilon}\to 1\;\;\hbox{and}\;\;\kappa_{\varepsilon}\to 0\quad\quad\hbox{as ${\varepsilon}\to 0^+$},$$ and for every ${\varepsilon}\in (0,1]$, $g\in \mathcal{O}_{\varepsilon}(e)$, and $t\ge t_0$, $$g\cdot\beta_t\le \delta_{\varepsilon}\, \beta_{t+\kappa_{\varepsilon}}.$$ If, in addition, we have $\delta_{\varepsilon}=1+O({\varepsilon}^{a_0})$ for some $a_0>0$, then the family of measures is called [*Hölder coarsely monotone*]{} with exponent $a_0$. Let $X$ be an lcsc space on which the group $G$ acts transitively and continuously. Since $X$ is locally compact, the topology on $X$ coincides with the topology defined on $X$ by viewing it as a factor space of $G$. We equip $X$ with a $G$-quasi-invariant Radon measure $\mu$.
The space $X$ is equipped with the natural metric (see [@HR §8]), which is defined by $$\label{eq:met} d(x_1,x_2)=\inf\{d(g_1,g_2):\, g_1,g_2\in G,\, g_1 x_0=x_1,\, g_2 x_0=x_2\},$$ where $x_0$ is a fixed element of $X$. We use the filtration on $X$ such that $X_r$ are balls of radius $r$ in $X$ centered at some fixed $x_0\in X$. \[th:transitive\] Assume that the mean ergodic theorem holds for the family of operators $\pi_X(\beta_t)$ in the transitive $G$-action on $X$. 1. If $\beta_t$ is a coarsely monotone family of measures, then for every bounded Borel function $f:X\to \mathbb{R}$ such that ${\operatorname{supp}}(f)\subset X_r$ and $\mathcal{P}f$ is uniformly continuous on $X_r$, we have $$\max_{x\in X_r} \left|\pi_X(\beta_t)f(x)-\mathcal{P}f(x)\right|=o_{r,f}(1)$$ as $t\to\infty$. 2. If in addition $\beta_t$ is Hölder coarsely monotone with exponent $a_0$ and $\mu$ has local dimension at most $\rho$, then for every bounded Borel function $f:X\to \mathbb{R}$ such that ${\operatorname{supp}}(f)\subset X_r$, and $(\mathcal{P}f)|_{X_r}\in C^a(X)_1$, we have $$\max_{x\in X_r} \left|\pi_X(\beta_t)f(x)-\mathcal{P}f(x) \right|\ll_r \left(\sup_{s\in (t-\kappa_1,t+\kappa_1)} E_r(f,s)\right)^\frac{\min(a_0,a)}{\min(a_0,a)+\rho}$$ for all sufficiently large $t$. Proof of equidistribution for isometric actions =============================================== In this section we prove Theorem \[th:isometric\]. We start the proof with the following lemma. \[l:est\] Assume that the mean ergodic theorem holds for the family of operators $\pi_X(\beta_t)$. We assume that the action of $G$ on the space $X$ is isometric and equipped with a quasi-invariant Radon measure $\mu$ which is uniformly of full support.
Then for all sufficiently large $t$, $r\in \mathbb{N}$, and $y\in X_r$, $$\beta_t\left(\left\{g\in G:\, g^{-1}y\in X_r\right\}\right)=O_r(1).$$ It follows from the mean ergodic theorem that $$\int_{X_r} \left|\beta_t\left(\left\{g\in G:\, g^{-1}x\in X_r\right\}\right)-\mathcal{P}\chi_{X_r}(x)\right|d\mu(x)=o_r(1)$$ as $t\to\infty$, and hence, $$c_r(t):=\int_{X_r} \beta_t\left(\left\{g\in G:\, g^{-1}x\in X_r\right\}\right)d\mu(x)=O_r(1).$$ Let $\delta>0$. We observe that for the set $$\Omega_r(\delta,t):=\left\{x\in X_r:\, \beta_t\left(\left\{g\in G:\, g^{-1}x\in X_r\right\}\right)>\delta\right\},$$ we have $$\mu(\Omega_r(\delta,t))\le c_r(t) /\delta.$$ Therefore, if we choose $\delta=2 c_r(t) /m_{r-1}$ where $$m_{r-1}:=\inf_{y\in X_{r-1}} \mu(D_1(y))>0,$$ then $$\mu(\Omega_r(\delta,t))<\mu(D_1(y))\quad \hbox{for all $y\in X_{r-1}$.}$$ Hence, for every $y\in X_{r-1}$ there exists $x\in D_1(y)\subset X_r$ such that $x\notin \Omega_r(\delta,t)$, i.e., $$\beta_t\left(\left\{g\in G:\, g^{-1}x\in X_r\right\}\right)\le \delta=O_r(1).$$ Since the action is isometric and $d(x,y)\le 1$, we have $d(g^{-1}x,g^{-1}y)=d(x,y)\le 1$, and therefore $d(x_0,g^{-1}x)\le d(x_0,g^{-1}y)+1$. Hence, if $g^{-1}y\in X_{r-1}$, then $g^{-1}x\in X_{r}$, so that $$\begin{aligned} \beta_t\left(\left\{g\in G:\, g^{-1}y\in X_{r-1}\right\}\right)\le \beta_t\left(\left\{g\in G:\, g^{-1}x\in X_{r}\right\}\right).\end{aligned}$$ This implies the claim. In the proof we shall use parameters ${\varepsilon}\in (0,1)$ and $\delta>0$ that will be specified later.
Let $$\label{eq:omega} \Omega_r(\delta,t)=\left\{x\in X_r:\, \left|\pi_X(\beta_t)f(x)-\mathcal{P}f(x) \right|>\delta \right\}.$$ Then $$\mu\left(\Omega_r(\delta,t)\right)\le E_r(f,t)/\delta.$$ Hence, if we assume that $$\label{eq:delta} m_{r-1}({\varepsilon}):=\inf_{y\in X_{r-1}} \mu(D_{\varepsilon}(y))>E_r(f,t)/\delta,$$ then for every $y\in X_{r-1}$ there exists $x\in D_{\varepsilon}(y)\subset X_r$ such that $x\notin \Omega_r(\delta,t)$, i.e., $$\label{eq:bbb0} \left|\pi_X(\beta_t)f(x)-\mathcal{P}f(x) \right|\le \delta.$$ Let $$\label{eq:om} \omega_{r}(f,{\varepsilon})=\sup\{|f(z)-f(w)|:\,\, z,w\in X_r,\, d(z,w)<{\varepsilon}\}.$$ Since $f$ is uniformly continuous, $\omega_{r}(f,{\varepsilon})\to 0$ as ${\varepsilon}\to 0^+$. Using that the action of $G$ on $X$ is isometric and ${\operatorname{supp}}(f)\subset X_r$, we deduce that $$\begin{aligned} &\left|\pi_X(\beta_t)f(x)-\pi_X(\beta_t)f(y)\right|\\ \le\,& \omega_{r}(f,{\varepsilon})\, \beta_t\left(\left\{g\in G:\, g^{-1}x\in X_r\;\hbox{or}\;g^{-1}y\in X_r\right\}\right) \ll_r\, \omega_{r}(f,{\varepsilon}),\end{aligned}$$ where the last inequality follows from Lemma \[l:est\]. Hence, it follows from (\[eq:bbb0\]) that for every $y\in X_{r-1}$, $$\left|\pi_X(\beta_t)f(y)-\mathcal{P}f(y) \right|\ll_r \delta + \omega_{r}(f,{\varepsilon})+\omega_{r}(\mathcal{P}f,{\varepsilon}).$$ This estimate holds provided that $\delta$ satisfies (\[eq:delta\]). Hence, it follows that for every $r\in{\mathbb N}$ $$\max_{y\in X_{r-1}} \left|\pi_X(\beta_t)f(y)-\mathcal{P}f(y)\right|\ll_r \tilde E_r(f,t)$$ where $$\tilde E_r(f,t)=\inf_{{\varepsilon}\in (0,1)}\left(E_r(f,t)/m_{r-1}({\varepsilon}) + \omega_{r}(f,{\varepsilon})+\omega_{r}(\mathcal{P}f,{\varepsilon})\right).$$ Using that $E_r(f,t)\to 0$ as $t\to \infty$ and $\omega_{r}(f,{\varepsilon}),\omega_{r}(\mathcal{P}f,{\varepsilon})\to 0$ as ${\varepsilon}\to 0^+$, we conclude that $\tilde E_r(f,t)=o_{r,f}(1)$ as well. This proves the first part of the theorem.
To prove the second part of the theorem, we observe that under the additional assumptions, $$\tilde E_r(f,t)\ll_r \inf_{{\varepsilon}\in (0,1)} \left({\varepsilon}^{-\rho} E_r(f,t) + {\varepsilon}^a\right).$$ We therefore take ${\varepsilon}=E_r(f,t)^{1/(a+\rho)}$, and note that since $E_r(f,t)\to 0$ as $t\to\infty$, we have ${\varepsilon}\in (0, 1)$ for all sufficiently large $t$. Then it follows that $$\tilde E_r(f,t)\ll_r E_r(f,t)^{a/(a+\rho)},$$ as required. Proof of equidistribution for transitive actions ================================================ Our proof of Theorem \[th:transitive\] follows the same strategy as the proof of Theorem \[th:isometric\]. We start with the following lemma establishing directly that in the transitive case the quasi-invariant measure is uniformly of full support. We use the metric $d$ on the homogeneous space $X$ defined in (\[eq:met\]). For every $r\in\mathbb{N}$ and ${\varepsilon}>0$, $$m_r({\varepsilon}):=\inf_{x\in X_r} \mu(D_{\varepsilon}(x))>0.$$ It follows from the definition of the metric on $X$ that $$D_s(x)=\mathcal{O}_s(e)\cdot x\quad\hbox{for every $s>0$ and $x\in X$. }$$ Since the balls $\mathcal{O}_r(e)$ are compact, there exists ${\varepsilon}'={\varepsilon}'({\varepsilon},r)>0$ such that $g\mathcal{O}_{{\varepsilon}'}(e)g^{-1}\subset \mathcal{O}_{\varepsilon}(e)$ for every $g\in \mathcal{O}_r(e)$. Then since the measure $\mu$ is quasi-invariant, for every $g\in \mathcal{O}_r(e)$, $$\mu(D_{\varepsilon}(gx_0))\ge \mu(\mathcal{O}_{\varepsilon}(e)gx_0)\ge \mu(g \mathcal{O}_{{\varepsilon}'}(e)x_0)\gg_r \mu(\mathcal{O}_{{\varepsilon}'}(e)x_0).$$ This proves the claim. Note that, without loss of generality, we may assume that $$E_r(-f,t)=E_r(f,t).$$ Hence, it suffices to prove the estimate for nonnegative $f$. Let ${\varepsilon}\in (0,1)$ and $\delta>0$.
As in the proof of Theorem \[th:isometric\], we introduce the set $$\Omega_r(\delta,t)=\left\{x\in X_r:\, \left|\pi_X(\beta_t)f(x)-\mathcal{P}f(x) \right|>\delta \right\}$$ and observe that $$\mu\left(\Omega_r(\delta,t)\right)\le E_r(f,t)/\delta.$$ Let $\bar E_r(f,t):=\sup_{s\in (t-\kappa_1,t+\kappa_1)} E_r(f,s)$, and assume that $$\label{eq:d} \delta>\bar E_r(f,t)/m_{r-1}({\varepsilon}).$$ Then for every $y\in X_{r-1}$ and $s\in (t-\kappa_1,t+\kappa_1)$, there exists $x_s\in D_{\varepsilon}(y)\subset X_r$ such that $x_s\notin \Omega_r(\delta,s)$, i.e., $$\label{eq:bbb} \left|\pi_X(\beta_s)f(x_s)-\mathcal{P}f(x_s) \right|\le \delta.$$ We set $x_1=x_{t-\kappa_{\varepsilon}}$ and $x_2=x_{t+\kappa_{\varepsilon}}$. Then since $y=h x_1$ for some $h\in \mathcal{O}_{\varepsilon}(e)$, it follows from the coarsely monotone property of $\beta_t$ that $$\pi_X(\beta_t)f(y)=\pi_X(h\cdot \beta_{t})f(x_1)\ge \delta_{\varepsilon}^{-1}\,\pi_X(\beta_{t-\kappa_{\varepsilon}})f(x_1),$$ and $$\begin{aligned} \left|\pi_X(\beta_t)f(y)-\mathcal{P}f(y) \right|\le & \left(\pi_X(\beta_{t})f(y)- \delta_{\varepsilon}^{-1}\,\pi_X(\beta_{t-\kappa_{\varepsilon}})f(x_1)\right)\\ &+\left|\delta_{\varepsilon}^{-1}\,\pi_X(\beta_{t-\kappa_{\varepsilon}})f(x_1)-\mathcal{P}f(y) \right|.\end{aligned}$$ Similarly, $$\pi_X(\beta_t)f(y) \le \delta_{\varepsilon}\,\pi_X(\beta_{t+\kappa_{\varepsilon}})f(x_2),$$ and hence $$\begin{aligned} \left|\pi_X(\beta_t)f(y)-\mathcal{P}f(y) \right| \le & \left(\delta_{\varepsilon}\,\pi_X(\beta_{t+\kappa_{\varepsilon}})f(x_2)- \delta_{\varepsilon}^{-1}\,\pi_X(\beta_{t-\kappa_{\varepsilon}})f(x_1)\right)\\ &+ \left|\delta_{\varepsilon}^{-1}\,\pi_X(\beta_{t-\kappa_{\varepsilon}})f(x_1)-\mathcal{P}f(y) \right|.\end{aligned}$$ Now we estimate each of the above terms separately.
It follows from (\[eq:bbb\]) and uniform continuity of $\mathcal{P}f$ on $X_r$ that $$\begin{aligned} &|\delta_{\varepsilon}\,\pi_X(\beta_{t+\kappa_{\varepsilon}})f(x_2)- \delta_{\varepsilon}^{-1}\,\pi_X(\beta_{t-\kappa_{\varepsilon}})f(x_1)|\\ \le &\, \delta_{\varepsilon}\,|\pi_X(\beta_{t+\kappa_{\varepsilon}})f(x_2)-\mathcal{P}f(x_2)| +|\delta_{\varepsilon}\mathcal{P}f(x_2)-\delta^{-1}_{\varepsilon}\mathcal{P}f(x_1)|\\ & + \delta_{\varepsilon}^{-1}\,|\pi_X(\beta_{t-\kappa_{\varepsilon}})f(x_1)-\mathcal{P}f(x_1)|\\ \le & \, ( \delta_{\varepsilon}+ \delta_{\varepsilon}^{-1})\delta+ \delta_{\varepsilon} |\mathcal{P}f(x_2)-\mathcal{P}f(x_1)| + (\delta_{\varepsilon}-\delta_{\varepsilon}^{-1})|\mathcal{P}f(x_1)|\\ \ll_r& \, \delta +\omega_r(\mathcal{P}f,2{\varepsilon})+(\delta_{\varepsilon}-\delta_{\varepsilon}^{-1}),\end{aligned}$$ where the function $\omega_r$ is defined as in (\[eq:om\]). Also, $$\begin{aligned} &\left|\delta_{\varepsilon}^{-1}\pi_X(\beta_{t-\kappa_{\varepsilon}})f(x_1)-\mathcal{P}f(y) \right|\\ \le &\, \delta_{\varepsilon}^{-1}\,|\pi_X(\beta_{t-\kappa_{\varepsilon}})f(x_1)-\mathcal{P}f(x_1)|+\left|\delta_{\varepsilon}^{-1}\,\mathcal{P}f(x_1)- \mathcal{P}f(y) \right|\\ \le &\, \delta_{\varepsilon}^{-1}\delta+\delta_{\varepsilon}^{-1}|\mathcal{P}f(x_1)-\mathcal{P}f(y)| +(1- \delta_{\varepsilon}^{-1})\left|\mathcal{P}f(y) \right|\\ \ll_r &\, \delta +\omega_r(\mathcal{P}f,{\varepsilon})+(1-\delta_{\varepsilon}^{-1}).\end{aligned}$$ Therefore, we conclude that $$\begin{aligned} \left|\pi_X(\beta_t)f(y)-\mathcal{P}f(y) \right|\ll_r \delta+ \omega_r(\mathcal{P}f,2{\varepsilon})+\delta_{\varepsilon}-2\delta_{\varepsilon}^{-1}+1 .\end{aligned}$$ Since this estimate holds for all ${\varepsilon}\in (0,1)$, $y\in X_{r-1}$ and $\delta$ satisfying (\[eq:d\]), we have $$\max_{y\in X_{r-1}} \left|\pi_X(\beta_t)f(y)-\mathcal{P}f(y)\right|\ll_r \tilde E_r(f,t)$$ where $$\tilde E_r(f,t)=\inf_{{\varepsilon}\in (0,1)} \left\{\bar E_r(f,t)/m_{r-1}({\varepsilon})+ \omega_r(\mathcal{P}f,2{\varepsilon})+\delta_{\varepsilon}-2\delta_{\varepsilon}^{-1}+1 \right\}.$$ Since $\bar E_r(f,t)\to 0$ as $t\to \infty$ and $\delta_{\varepsilon}\to 1$, $\omega_r(\mathcal{P}f,2{\varepsilon})\to 0$ as ${\varepsilon}\to 0^+$, it follows that $\tilde E_r(f,t)\to 0$ as $t\to \infty$ too. This implies the first part of the theorem. To prove the second part of the theorem, we observe that under the additional assumptions, $$\tilde E_r(f,t)\ll_r \inf_{{\varepsilon}\in (0,1)} \left({\varepsilon}^{-\rho} \bar E_r(f,t) + {\varepsilon}^{\min(a_0,a)}\right).$$ Since $E_r(f,t)\to 0$ as $t\to \infty$, it follows that $\bar E_r(f,t)\in (0,1)$ for sufficiently large $t$. Taking ${\varepsilon}=\bar E_r(f,t)^{1/(\min(a_0,a)+\rho)}$, we deduce the second claim. Completion of the proofs {#sec:last} ======================== We deduce Theorem \[th:free\] from Theorem \[th:isometric\](2). We recall that the mean ergodic theorem for the free group ${\mathbb F}_r$ was established in [@Gu1; @n00]. Moreover, under the spectral gap assumption, the method of the proof of [@n00 Th. 1] implies that $$\left\|\frac{1}{\#B_{2n}}\sum_{\gamma\in B_{2n}} f(\gamma^{-1}x)-\mathcal{P}f(x)\right\|_2 =O\left(e^{-\theta n}\|f\|_2\right)$$ for some $\theta>0$ determined by the spectral gap. Let $G$ be the closure of ${\mathbb F}_r$ in the isometry group of $X$. Then the measure $\mu$ is invariant and ergodic with respect to $G$. Since $X$ is compact, $G$ is compact, and it follows that $\mu$ is supported on a single orbit of $G$. Hence, $G$ acts transitively on $X$. Let $G_0$ be the closure in $G$ of the subgroup of ${\mathbb F}_r$ generated by the words of even length. Since $G_0$ has index at most two in $G$, the subgroup $G_0$ is open in $G$, and $X$ consists of at most two open orbits of $G_0$. This implies that $L^2(X)^{\epsilon_0}$ has dimension at most one and is trivial when $X$ is connected.
Moreover, it is clear that $f_0$ is locally constant and, in particular, $\mathcal{P}f\in C^a(X)_1$. Finally, we note that in case (1) the measure $\mu$ has local dimension at most $\dim(X)$ (cf. Remark \[r:uniform\]). In case (2), we have $$\mu(D_{\varepsilon}(x))=\mu(D_{\varepsilon}(e))={\varepsilon}$$ when ${\varepsilon}=|\Gamma:\Gamma_i|^{-1}$. Since $|\Gamma_i:\Gamma_{i+1}|$ is uniformly bounded, this implies that $\mu$ has local dimension at most one. Now Theorem \[th:free\] follows from Theorem \[th:isometric\](2). We note that $L^2$-convergence for (\[eq:mean\_lattice\]) with exponential rate follows from the results of [@GN1]. Indeed, the balls $B_t$ are Hölder admissible by [@GN1 Ch. 7]. Since in both cases we have a lower estimate on the local dimension (see the proof of Theorem \[th:free\]), Theorem \[th:lattice\] follows from Theorem \[th:isometric\](2). It follows from [@GN1 Ch. 7] that the family of measures $\beta_t$ is Hölder coarsely monotone. Hence, Theorem \[th:trans\] is a consequence of Theorem \[th:transitive\](2). J. Aaronson, An introduction to infinite ergodic theory. Mathematical Surveys and Monographs, 50. American Mathematical Society, Providence, RI, 1997. V. Arnol’d, Arnold’s problems. Springer-Verlag, Berlin, 2004. V. Arnol’d and A. Krylov, Uniform distribution of points on a sphere and certain ergodic properties of solutions of linear ordinary differential equations in a complex domain. Dokl. Akad. Nauk SSSR 148 (1963), 9–12. T. Bewley, Sur l’application des théorèmes ergodiques aux groupes libres de transformations: Un contre-exemple. C. R. Acad. Sci. Paris Sér. A-B 270 (1970), A1533–A1534. J. Bourgain and A. Gamburd, On the spectral gap for finitely-generated subgroups of $\rm SU(2)$. Invent. Math. 171 (2008), no. 1, 83–121. L. Clozel, Automorphic forms and the distribution of points on odd-dimensional spheres. Israel J. Math. 132 (2002), 175–187. L. Clozel and J.-P. Otal, Unique ergodicité des correspondances modulaires.
Essays on geometry and related topics, Vol. 1, 2, 205–216, Monogr. Enseign. Math., 38, Enseignement Math., Geneva, 2001. L. Clozel and E. Ullmo, Équidistribution des points de Hecke. Contributions to automorphic forms, geometry, and number theory, 193–254, Johns Hopkins Univ. Press, Baltimore, MD, 2004. A. Gorodnik, Lattice action on the boundary of ${\rm SL}(n,\Bbb R)$. Ergodic Theory Dynam. Systems 23 (2003), no. 6, 1817–1837. A. Gorodnik, Uniform distribution of orbits of lattices on spaces of frames. Duke Math. J. 122 (2004), no. 3, 549–589. A. Gorodnik and F. Maucourant, Proximality and equidistribution on the Furstenberg boundary. Geom. Dedicata 113 (2005), 197–213. A. Gorodnik and A. Nevo, The ergodic theory of lattice subgroups, Annals of Mathematics Studies 172, Princeton University Press, 2010. A. Gorodnik and A. Nevo, Duality principle and ergodic theorems, in preparation. A. Gorodnik and H. Oh, Orbits of discrete subgroups on a symmetric space and the Furstenberg boundary. Duke Math. J. 139 (2007), no. 3, 483–525. A. Gorodnik and B. Weiss, Distribution of lattice orbits on homogeneous varieties. Geom. Funct. Anal. 17 (2007), no. 1, 58–115. Y. Guivarc’h, Généralisation d’un théorème de von Neumann, C. R. Acad. Sci. Paris 268 (1969), 1020–1023. Y. Guivarc’h, Equirépartition dans les espaces homogènes. Théorie ergodique (Actes Journées Ergodiques, Rennes, 1973/1974), pp. 131–142. Lecture Notes in Math., Vol. 532, Springer, Berlin, 1976. U. Haagerup and A. Przybyszewska, Proper metrics on locally compact groups, and proper affine isometric actions on Banach spaces, arXiv:math/0606794v1. E. Hewitt and K. Ross, Abstract harmonic analysis. Vol. I. Second edition. Grundlehren der Mathematischen Wissenschaften 115. Springer-Verlag, Berlin-New York, 1979. D. Kazhdan, Uniform distribution on a plane. Trudy Moskov. Mat. Obshch. 14 (1965), 299–305. F. Ledrappier, Distribution des orbites des réseaux sur le plan réel. C. R. Acad. Sci. Paris Sér. I Math. 329 (1999), no.
1, 61–64. F. Ledrappier and M. Pollicott, Ergodic properties of linear actions of $(2\times 2)$-matrices. Duke Math. J. 116 (2003), no. 2, 353–388. F. Ledrappier and M. Pollicott, Distribution results for lattices in ${\rm SL}(2,\Bbb Q\sb p)$. Bull. Braz. Math. Soc. (N.S.) 36 (2005), no. 2, 143–176. A. Lubotzky, R. Phillips, P. Sarnak, Hecke operators and distributing points on the sphere. I. Frontiers of the mathematical sciences: 1985 (New York, 1985). Comm. Pure Appl. Math. 39 (1986), no. S, suppl., S149–S186. A. Lubotzky, R. Phillips, P. Sarnak, Hecke operators and distributing points on $S^2$. II. Comm. Pure Appl. Math. 40 (1987), no. 4, 401–420. G. Margulis, A. Nevo, and E. Stein, Analogs of Wiener’s ergodic theorems for semisimple Lie groups. II. Duke Math. J. 103 (2000), no. 2, 233–259. A. Nevo, Harmonic analysis and pointwise ergodic theorems for noncommuting transformations. J. Amer. Math. Soc. 7 (1994), no. 4, 875–902. A. Nevo, Spectral transfer and pointwise ergodic theorems for semi-simple Kazhdan groups. Math. Res. Lett. 5 (1998), no. 3, 305–325. A. Nevo, Pointwise ergodic theorems for actions of groups. Handbook of dynamical systems. Vol. 1B, 871–982, Elsevier B. V., Amsterdam, 2006. A. Nogueira, Orbit distribution on $\Bbb R^2$ under the natural action of ${\rm SL}(2,\Bbb Z)$. Indag. Math. (N.S.) 13 (2002), no. 1, 103–124. H. Oh, The Ruziewicz problem and distributing points on homogeneous spaces of a compact Lie group. Probability in mathematics. Israel J. Math. 149 (2005), 301–316. Y. Vorobets, On the uniform distribution of orbits of finitely generated groups and semigroups of plane isometries. Mat. Sb. 195 (2004), no. 2, 17–40; translation in Sb. Math. 195 (2004), no. 1-2, 163–186. Y. Vorobets, On the actions of finitely generated groups and semigroups on a plane by means of isometries. Mat. Zametki 75 (2004), no. 4, 523–548; translation in Math. Notes 75 (2004), no. 3-4, 489–512.
--- abstract: 'A longstanding problem for Deep Neural Networks (DNNs) is understanding their puzzling ability to generalize well. We approach this problem through the unconventional angle of *cognitive abstraction mechanisms*, drawing inspiration from recent neuroscience work, allowing us to define the Cognitive Neural Activation metric (CNA) for DNNs, which is the correlation between information complexity (entropy) of given input and the concentration of higher activation values in deeper layers of the network. The CNA is highly predictive of generalization ability, outperforming norm-and-margin-based generalization metrics on an extensive evaluation of over 100 dataset-and-network-architecture combinations, especially in cases where additive noise is present and/or training labels are corrupted. These strong empirical results show the usefulness of CNA as a generalization metric, and encourage further research on the connection between information complexity and representations in the deeper layers of networks in order to better understand the generalization capabilities of DNNs.' bibliography: - 'biblio.bib' --- Introduction ============ Deep neural networks (DNNs) have made big strides in recent years, improving substantially on state of the art results across many benchmarks and showing great generalization abilities [@lecun2015deep]. This is perhaps surprising given the large number of parameters, flexibility, and relative lack of explicit priors enforced in DNNs. Even more puzzling is their ability to memorize random datasets [@zhang2016understanding; @yun2019small]. In part due to this, there has been a surge of work aimed at understanding the crucial factors to high-performing DNNs. 
Some studies have approached explaining generalization in DNNs via optimization arguments characterizing critical points [@dauphin2014identifying; @kawaguchi2016deep; @haeffele2017global], the smoothness of loss surfaces [@nguyen2017loss; @choromanska2015loss; @li2017visualizing], and implicit priors and regularization brought on by DNNs’ learning methods [@mianjy2018implicit], including that overparameterization itself leads to better learned optima and easier optimization [@arpit2019benefits; @allen2019learning; @oymak2019towards; @allen2018convergence]. The work from [@morcos2018importance] provides insight by relating DNN generalization capability to reliance on single directions. Others have used information theory as a means of explaining the performance and inner-mechanisms of DNNs [@tishby2015deep; @shwartz2017opening]. In [@lampinen2018analytic], task structure of DNNs w.r.t. transfer learning is incorporated for tighter bounds on generalization error. Complementary to these works, we approach the problem of understanding generalization of DNNs via the unconventional angle of applying the cognitive neuroscience concept of abstraction mechanisms [@calvo2008handbook; @peters2017human; @taylor2015global; @shivhare2016cognitive; @gilead2014mind] – i.e. *how representations are formed that compress information content while retaining only information which is relevant for generalization to unseen examples*. Specifically, we seek to analyze DNNs’ representational patterns with comparison to abstraction mechanisms employed by the brain. Arguably, if we can quantify representational similarity (and dissimilarity) of these mechanisms in DNNs in comparison to the brain, then it may aid in understanding their inner-mechanisms and could allow us to leverage current and ongoing neuroscience research. Additionally, it could bring new perspectives or understanding to the optimization, regularization, and information theory works cited in the previous paragraph.
We focus on a study particularly amenable to algorithmic translation into conventional statistical learning settings (in our case, visual classification tasks) [@taylor2015global], though we review similar works in the following related works section. As a summary, in the work from [@taylor2015global], large-scale analyses of fMRI data were carried out to arrive at a precise notion of hierarchical abstraction in the brain. Roughly stated, deeper neurons, where neuron depth is defined as the distance from primary sensory cortices, show higher activation values when abstract behaviors or tasks are being performed. The opposite is true for less abstract tasks. Another way of stating the result is that the Pearson correlation between the linear regressed slopes of the activation values ordered by depth and the abstractness of the corresponding tasks is approximately 1. This neuroscience result is expounded on in section 3, and in the supplement, for clarity. We translate these results into the form of a measure applicable to DNNs, which we term the Cognitive Neural Activation metric (CNA), which is computationally tractable, can be applied to any network architecture and dataset, and is easy to implement. The organization of this paper is as follows: In section 3, we provide background on neuroscience results and introduce the precise definition of the CNA. In sections 4.1 and 4.2, we empirically relate test error and the CNA. We show three main results: 1. The loss landscape of classification error overlaps nicely with the CNA. 2. High entropy images return up to 16 times larger test error across training epochs. 3. Test error significantly correlates with the CNA across a breadth of image datasets and network architectures.
In section 4.3, we adapt the CNA to predict the *difference* between training and test error (termed the *generalization gap*), and empirically validate its efficacy on a breadth of architectures and datasets, comprising over 100 dataset-architecture combinations. The architectures include Multi-layer perceptrons (MLPs), VGG-18, ResNet-18, and ResNet-101. The datasets include ImageNet, CIFAR-10, CIFAR-100, MNIST, Fashion-MNIST, SVHN, and corrupted labels counterparts (i.e. the same datasets with varying levels of training labels shuffled). The CNA outperforms recent metrics derived from theoretical generalization error bounds, especially in non-standard settings, showing significantly more robustness to additive noise. **Contributions Summary:** We approach generalization in DNNs from an unconventional angle, where we connect abstractness mechanisms in the human brain to generalization error in DNNs in statistical learning settings. We explicitly translate a neuroscience result into a computationally tractable, differentiable mathematical expression, termed the Cognitive Neural Activation metric (CNA), that is easily implementable and can be applied to any network architecture. The CNA shows interesting connections to test error, and can be adapted such that it is predictive of generalization error in DNNs, outperforming recent generalization metrics based on theoretical generalization error bounds, and shows significantly more robustness to additive noise and label corruption. The strong empirical results encourage further work and exploration into this area of study. Related Work ============ Comparisons of DNNs to cognitive neuroscience are often qualitative in nature or serve as loose analogies for illustration purposes only, e.g. historically much of the design of MLPs and CNNs was loosely motivated by computational neuroscience.
Rigorous or empirical analyses are less common; however, important progress has been made in characterizing representational similarity between DNNs and the brain. Seminal works [@yamins2013hierarchical; @yamins2014performance] show significant similarity between the firing patterns of the visual cortex of primates and supervised models, as does the work from [@khaligh2014deep], though with the caveat that unsupervised models do not. Recently, the Brain-Score has been developed [@schrimpf2018brain; @kubilius2019brain] which uses the mean-score of neural predictivity (how well a given DNN predicts a single neuron’s response in various visual systems of the primate) and behavioral similarity (how similar rates of correct and incorrect responses for specific inputs are between a given DNN and the primate) to rank the top-1 performance of state-of-the-art networks on ImageNet. The Brain-Score achieved significant correlation with top-performing networks’ performance on ImageNet, showing neural representational similarity between the brain and DNNs can have useful predictive properties and, additionally, developed a high-performing shallow RNN based on it. The CNA mainly differs from the Brain-Score in that the CNA’s primary application is in understanding and measuring the generalization gap between train and test sets across many different datasets and tasks, as opposed to ranking top-performing networks on ImageNet. Additionally, it is a differentiable geometric property or equation that, though grounded in empirical neuroscience data, does not actually use neuroscience data in its computation, i.e. it is a more basic function of a DNN’s activation distribution and training distribution, not requiring empirical neuroscience data during inference at training or test. Thus, it is closer in nature to statistical-learning-based bounds on the generalization gap [@neyshabur2017exploring] and information-theoretic loss functions, e.g.
the mutual information objective from Deep InfoMax [@hjelm2018learning]. Other related works utilizing cognitive neural activity for practical application include [@shen2019deep], which used DNNs to reconstruct accurate, realistic-looking images from fMRI data, [@xu2018deeper], which made use of empirical neuronal firing data to develop methods for explainable interpretability of DNNs, and [@arend2018single], which show that many one-to-one mappings exist between individual neurons in DNNs and individual neurons in the brain as well as correspondences between population-level groups. In [@saxe2019mathematical], they study the learning dynamics of linear networks and show that the learned representations share phenomena observed in human semantic development. Lastly, in [@richards2019deep], they give strong arguments for shifting the research paradigm of computational neuroscience towards utilizing three essential design components of DNNs: The objective functions, the learning rules, and the architectures. Besides cognitive neuroscience work, the CNA primarily acts as a stand-in for margin-and-norm-based generalization metrics. We cover this in more detail in our main empirical results section. The Cognitive Neural Activation Metric ====================================== Due to its amenability to algorithmic implementation in statistical learning settings, we focus on a specific computational neuroscience work from [@taylor2015global]. Briefly stated, the study aimed to validate and characterize the extent to which and in what ways hierarchical representation occurs in the human brain. They approached this by first hypothesizing that concrete functions occur “earlier” in the brain (closer to sensory input) whereas more abstract functions occur deeper in the brain (further from sensory input). Their main results can be summarized as: The abstraction level of tasks highly correlates with higher concentration of neural activity “deeper” in the brain.
The computationally tractable, DNN analogue to this can be expressed as the correlation between information complexity of images and the concentration of high activation values in the deeper layers of the DNN. To quantify concentration of high activation values, we use the linearly regressed slope of the sum of activations by layer depth as a coarse measure[^1], illustrated in Figure \[fig:cna-illus\] for intuitive visualization of the principles of CNA. A high level definition and overview of the CNA can be seen in Figure \[fig:cna-equations\]. ![image](brain-taylor.jpg){width="0.65\linewidth"} **(A)** **(B)** **Defining the CNA** For a network architecture $A$ and dataset $X$ with $n$ data points, define 1. $\alpha{(x)}$ – the information complexity (computed via histogram-binning approximation of Shannon entropy) of every datapoint $x \in X$, 2. $ \beta{(x)}$ – the slope of neuronal activity of network $A$ when presented with $x \in X$, 3. $\bm{\alpha}, \bm{\beta}$ – the vectors of length $n$ comprising the complexity and slope values on the whole dataset $X$. The CNA is defined by the Pearson correlation between the information complexity and the slope: $$\rho_{\bf{\alpha,\beta}} = \frac{\text{cov}(\bm{\alpha},\boldsymbol{\beta})}{\sigma_\alpha \sigma_\beta}$$ where $\text{cov}(\bm{\alpha},\boldsymbol{\beta})$ is the sample covariance of the two vectors: $\frac{1}{n-1}\sum_i(\boldsymbol{\alpha}_i - \overline{\boldsymbol{\alpha}})(\boldsymbol{\beta}_i - \overline{\boldsymbol{\beta}})$, $\overline{\boldsymbol{\alpha}}$ and $\overline{\boldsymbol{\beta}}$ are the means, and $\sigma_\alpha$ and $\sigma_\beta$ are the sample standard deviations, given by $\sigma_\alpha^2 = \frac{1}{n-1}\sum_i(\boldsymbol{\alpha}_i - \overline{\boldsymbol{\alpha}})^2$ and $\sigma_\beta^2 = \frac{1}{n-1}\sum_i(\boldsymbol{\beta}_i - \overline{\boldsymbol{\beta}})^2$. We now proceed with a more rigorous, motivated treatment of the CNA and its neuroscience motivations.
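The definition above translates almost directly into code. The following sketch is our own illustration, not the authors' implementation: the function names, the use of NumPy, the bin count, and summing activations per layer are assumptions made for concreteness.

```python
import numpy as np

def entropy(x, bins=32):
    """alpha(x): histogram-binning approximation of Shannon entropy of a data point."""
    hist, _ = np.histogram(x.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

def activation_slope(layer_activations):
    """beta(x): linearly regressed slope of summed activations, ordered by layer depth."""
    sums = np.array([np.sum(a) for a in layer_activations])
    depths = np.arange(len(sums))
    slope, _ = np.polyfit(depths, sums, deg=1)
    return slope

def cna(data_points, activations_per_point):
    """Pearson correlation between alpha(x) and beta(x) over the dataset."""
    alpha = np.array([entropy(x) for x in data_points])
    beta = np.array([activation_slope(acts) for acts in activations_per_point])
    return np.corrcoef(alpha, beta)[0, 1]
```

Here `activations_per_point[i]` would be the list of per-layer activation arrays recorded during a forward pass on `data_points[i]`; a CNA near 1 then indicates that higher-entropy inputs concentrate activity in deeper layers.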
Neuroscience Motivations ------------------------ The work from [@taylor2015global] aimed to validate and characterize the extent to which and in what ways hierarchical representation occurs in the human brain. They approached this by first hypothesizing that concrete functions occur “earlier” in the brain (closer to sensory input) whereas more abstract functions occur deeper in the brain (further from sensory input). To assess this, they associated every region of interest in the brain with a number corresponding to its integrated distance from sensory cortices based on rsfMRI and DTI data, which they termed *network depth*. Neurons were then binned according to the defined depth measure and activations for each bin were assessed via large-scale behavioral fMRI data analysis. By ordering the bins by depth and then performing a linear regression on the aggregated activation values, a scalar-value slope can be obtained for a given behavior or task. Interestingly, these slopes were highly correlated with the abstraction of the task as obtained from Amazon Mechanical Turk (MTurk) surveys. This high correlation result is what we will focus on translating into a form that can be applied to DNNs. For a more detailed description of the neuroscience study, please refer to the supplement. We now precisely define the various components that comprise this result. 
Define the slope function $\beta$ which maps a task $T$ to its corresponding scalar-value slope as $$\beta : \text{ task } T \longrightarrow \mathbb{R}$$ Similarly, denote the abstractness measure function $\alpha$ which maps a task $T$ to its estimated measure of abstraction (estimated by MTurk in [@taylor2015global]) as $$\alpha : \text{ task } T \longrightarrow \mathbb{R}$$ For $n$ tasks, denoted $T_1, T_2, \dots, T_n$, denote $\boldsymbol{\beta}\in \mathbb{R}^n$ as the vector of slopes where $\boldsymbol{\beta}_i = \beta(T_i)$ and denote $\mathbf{\alpha}\in \mathbb{R}^n$ as the vector of abstraction measurements where $\mathbf{\alpha}_i = \alpha(T_i)$. We denote the Pearson correlation between $\mathbf{\alpha}$ and $\boldsymbol{\beta}$ for $n$ tasks as $$\rho_{\alpha,\beta} = \frac{\text{cov}(\mathbf{\alpha},\boldsymbol{\beta})}{\sigma_\alpha \sigma_\beta}$$ where $\sigma_\alpha$ and $\sigma_\beta$ denote the standard deviations of $\mathbf{\alpha}$ and $\boldsymbol{\beta}$. Finally, the neuroscience results can be restated as being equivalent to $$\rho_{\alpha,\beta} \approx 1$$ for MTurk abstractness measurements $\mathbf{\alpha}$ and the slopes $\boldsymbol{\beta}$ obtained from the corresponding cognitive behavioral tasks. Thus, if we can determine appropriate definitions of $\mathbf{\alpha}$ and $\boldsymbol{\beta}$ for DNNs, then we now have a closed-form measure $\rho_{\alpha,\beta}$ for the extent to which a DNN employs similar abstraction mechanisms, where the activation distribution patterns are more similar as $\rho_{\alpha,\beta}$ approaches 1, unrelated as $\rho_{\alpha,\beta}$ approaches 0, and show the opposite pattern to that of the brain as $\rho_{\alpha,\beta}$ approaches -1. An illustration of a network closer to the $\rho_{\alpha,\beta} \approx 1$ pattern is shown in Figure \[fig:cna-illus\]B. 
Defining the CNA for DNNs
-------------------------

Now the question remains: what are appropriate definitions for $\mathbf{\alpha}$ and $\boldsymbol{\beta}$? Defining $\boldsymbol{\beta}$ is straightforward: simply order the layers by depth (already explicitly defined for DNNs), define a method for aggregating layer-wise activation values (e.g. mean, sum, etc.), and perform a linear regression on those values to arrive at a slope value. Defining $\mathbf{\alpha}$ is less straightforward since abstractness is not well defined in statistical learning settings. We could make use of MTurk surveys, as was done in [@taylor2015global]; however, this is not desirable for many reasons, especially since DNNs often require very large numbers of datapoints in order to be trained. To arrive at a well-defined notion for the CNA that is computationally tractable, we depart from biological equivalence and focus on the notion of compressibility as an analogue to abstraction. In fact, the MTurk results from [@taylor2015global] rely on the following definition of abstractness,

> “A process of creating general concepts or representations ... often with the goal of compressing the information content ... and retaining only information which is relevant.”

which corresponds to *information compressibility*. Shannon entropy, easily implemented and calculated through histogram binning of feature values, is a lower bound for minimum description length, or Kolmogorov complexity [@grunwald2004shannon], thus conveniently connecting the notion of abstractness to statistical learning settings[^2]. We now precisely define the CNA for a given network, parameterization, and batch of datapoints: For a data point $x \in \mathbb{R}^{k}$ with $k$ features, denote the informational complexity of $x$ as the scalar $\alpha(x)$, where $\alpha(x)$ is computed via a histogram-binning Shannon entropy approximation. We next define the slope $\beta(x)$ of a given network and datapoint $x$.
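Before moving on, the complexity term $\alpha(x)$ can be sketched in a few lines of numpy. The bin count is an assumption; the text does not fix it:

```python
import numpy as np

def information_complexity(x, n_bins=32):
    """alpha(x): histogram-binning approximation of the Shannon entropy of a
    datapoint's feature values. n_bins is an assumed hyperparameter."""
    x = np.asarray(x, dtype=float).ravel()
    counts, _ = np.histogram(x, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log2(p)))
```

A constant input scores zero, while uniform noise approaches the maximum of $\log_2$(n_bins) bits.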
Then, we define the Cognitive Neural Activation metric $CNA(\mathbf{X})$ for a given network and dataset $\mathbf{X} \in \mathbb{R}^{N \times k}$ of size $N$. For a given feedforward network with $L$ layers and input vector $x$, let $z_\ell^j(x)$ be the pre-activation state of neuron $j$ in layer $\ell$ given input vector $x$, let $n_\ell$ be the number of neurons in layer $\ell$, and let $z_\ell(x) = \sum_{j=1}^{n_\ell} z_\ell^j(x)$, i.e. the sum of the pre-activation values in layer $\ell$. Let $\bm{z}(x)$ be the vector of length $L$ where $[\bm{z}(x)]_\ell = z_\ell(x)$ for $\ell = 1, \dots, L$. Performing a linear regression via least squares on the points $\{ (\ell, z_\ell(x)) \mid \ell = 1,\dots,L \}$, we obtain the slope $\beta(x)$ of the network. For a given feedforward network with $L$ layers and data matrix $\mathbf{X} \in \mathbb{R}^{N \times k}$, where $N$ corresponds to the number of samples: Let $x_i$ denote the $i^{\text{th}}$ row of $\mathbf{X}$. Let $\bm{\beta}(\mathbf{X})$ be the vector of length $N$ where $[\bm{\beta}(\mathbf{X})]_i = \beta(x_i)$ for $i = 1, \dots, N$ and let $\bm{\alpha}(\mathbf{X})$ be the vector of length $N$ where $[\bm{\alpha}(\mathbf{X})]_i = \alpha(x_i)$ for $i = 1, \dots, N$. $CNA(\mathbf{X})$ is defined as $corr(\bm{\beta}(\mathbf{X}),\bm{\alpha}(\mathbf{X}))$, where $corr$ denotes Pearson correlation. Thus, we arrive at our definition of the CNA. This can be straightforwardly extended to any CNN variant where $\mathbf{X}$ instead lies in $\mathbb{R}^{N\times c \times h \times w}$, where $c$ denotes the number of channels, $h$ the height of the input, and $w$ the width of the input. The aggregate activation $z_\ell(x)$ of an intermediate layer $\ell$ is simply computed over all activations of that layer, i.e. flattening the intermediate representation will yield the same result for $z_\ell(x)$.
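Given the per-layer activation sums $z_\ell(x)$ and the per-datapoint complexities $\alpha(x)$, the remaining pieces reduce to a least-squares fit and a Pearson correlation. A minimal numpy sketch (assuming those quantities have already been recorded, e.g. via forward hooks):

```python
import numpy as np

def activation_slope(layer_sums):
    """beta(x): least-squares slope of the summed pre-activations vs. depth."""
    depth = np.arange(1, len(layer_sums) + 1, dtype=float)
    # np.polyfit with degree 1 returns [slope, intercept]
    return float(np.polyfit(depth, np.asarray(layer_sums, float), 1)[0])

def cna(alphas, betas):
    """CNA(X): Pearson correlation between complexity and slope vectors."""
    return float(np.corrcoef(alphas, betas)[0, 1])
```

`cna` returns values in $[-1, 1]$, matching the interpretation given above: near 1 for a brain-like concentration of activity, near 0 for no relationship.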
Experimental Results
====================

In this section, we show empirical validation of the CNA as a useful measure for understanding training in DNNs and their generalization error. In sections 4.1 and 4.2, we show through empirical experiments and visualization of the loss landscape that there is a close connection between information complexity, the CNA, and proper training of DNNs. In section 4.3, we adapt the CNA to be indicative of the generalization gap, the difference between training error and test error. We compare the CNA to other norm- and margin-based generalization gap metrics, showing it outperforms them, especially on datasets where there is additive noise and/or label corruption.

How Does CNA Vary During Training?
----------------------------------

Intuitively, the CNA is a measure of how similar a given DNN’s abstraction mechanisms are to those of the human brain on a given dataset. If a DNN generalizes well and has “learned” high-level concepts, it is conceivable that its CNA value would then be high. On the other hand, if the mechanisms by which DNNs abstract show no similarity to those of the brain, then we would expect to see no relationship between performance on a task and the CNA value. As a first step to investigating this, we train a simple MLP on the MNIST dataset and track the CNA value over training time. This is seen in Figure \[fig:cnacurve\]. From the figure, the CNA clearly tracks training loss well, suggesting that the DNN is learning specific abstraction mechanisms over training time. ![A simple MLP trained on MNIST with curves showing the training loss (top) and CNA value (bottom) over training time.
It is clear that the CNA shows a high correlation with training loss, with inflection points of both curves occurring at roughly the same timestep.[]{data-label="fig:cnacurve"}](train_loss_curve "fig:"){width="0.75\linewidth"} ![A simple MLP trained on MNIST with curves showing the training loss (top) and CNA value (bottom) over training time. It is clear that the CNA shows a high correlation with training loss, with inflection points of both curves occurring at roughly the same timestep.[]{data-label="fig:cnacurve"}](cna-curve "fig:"){width="0.75\linewidth"} Another interpretation of this result is simply that the loss landscapes of the CNA and the supervised loss function (in this case, categorical cross entropy) are similar. If the gradient of the CNA function and the gradient of the supervised loss function are well-aligned throughout training time, then there is a significant chance for the CNA and the loss to be correlated. To investigate this, we record all neuronal activation values of the MLP at each minibatch update. We then perform PCA on the recorded neuronal activation values in order to visualize the optimization path of the network over training time, plotting the network state as a function of its principal components. Then, we sample from the x-y plane of principal component values and calculate the CNA value at each sample. This allows us to approximate and visualize the CNA loss landscape in the lower-dimensional space. The network converged to around 98% test accuracy, showing that the high CNA value may be indicative of high test accuracy. These results and visualization can be seen in Figure \[fig:optimization-path-2d-2\]. ![A low-dimensional visualization of the optimization path of an MLP over training time on MNIST, showing the network approximately traverses the CNA loss surface, despite being trained only on classification loss.
The network state (all neuronal activation values) was recorded during each training step and then visualized using PCA (the red curve) with “Start” and “End” denoting the start and end points of training, with the x-axis and y-axis corresponding to the principal component values. The contour map behind the red curve (i.e. the CNA loss surface) was generated via sampling from the 2D principal component space and calculating the CNA value at each point. Best viewed in color.[]{data-label="fig:optimization-path-2d-2"}](optimization-path-2d-2){width="0.9\linewidth"} The results of Figure \[fig:optimization-path-2d-2\] are perhaps surprising given that the CNA does not depend on labels, whereas the classification objective does. How, then, would the gradients of these two very different objectives be well aligned? A cursory look into the gradient expressions of the two terms gives a possible explanation, which we leave in the supplement in the interest of space. In short, the CNA and the supervised loss function could be well aligned in some cases where the error terms and information complexity terms for given datapoints significantly correlate. To give further credence to this explanation, we empirically analyze the relationship between the test error, which we denote as $\varepsilon$, and $\alpha$. We look at the mean error of the network on data points of varying $\alpha$ values to see if there is any relationship. Should the gradient argument hold true, we should expect that high-complexity datapoints will show larger error. This analysis is shown in Figure \[fig:error-complexity-curves\]. We bin datapoints by their $\alpha$ values and plot their mean test error over training time, e.g. the blue curve corresponds to the datapoints with the top 20% complexity values. ![A plot of the mean test error across training time for bins of datapoints with varying levels of information complexity. 
There is a clear monotonic relationship between test error and complexity, especially at the beginning of training time when the CNA and test errors have the largest change. Best viewed in color.[]{data-label="fig:error-complexity-curves"}](error-complexity-curves){width="0.9\linewidth"} Interestingly, the figure shows a very clear, monotonic relationship between $\alpha$ and $\varepsilon$, especially in the beginning of training time. The differences between $\varepsilon$ at different complexity levels quickly decrease, though maintain their ordering, as training time increases. This makes sense given that the network decreases its loss (and increases its CNA value) most during the beginning epochs, as was seen in Figure \[fig:cnacurve\]. In summary, these results and figures show an interesting relationship between the complexity of datapoints, the CNA, training of DNNs, and test performance. These experiments by no means warrant broad conclusions; more extensive analysis would be needed to draw firmer conclusions. Nonetheless, they raise questions as to how close the relationship is between the CNA and training in DNNs, and whether there is a tight causal relationship between abstraction mechanisms and training.

Extensive Evaluation of CNA and Test Performance
------------------------------------------------

We now carry out a far more extensive analysis of the CNA and test performance. We train four different architectures (MLP, VGG-18, ResNet-18, and ResNet-101) across six different datasets (MNIST, Fashion-MNIST, SVHN, CIFAR-10, CIFAR-100, and ImageNet), recording the network state, test error, and CNA value at every 20th epoch, comprising over 100 dataset-architecture combinations. These results are shown in Figure \[fig:bam-gen-fig\]. There is a high correlation between the CNA and test accuracy, suggesting a close causal relationship between abstraction mechanisms and generalization ability.
The result is especially convincing given the broad range of architectures and datasets tested. ![The CNA strongly correlates with test accuracy. Here we show 147 dataset-architecture combinations (i.e., each dot represents a trained network) across six different datasets (ImageNet, CIFAR-10, CIFAR-100, SVHN, MNIST, Fashion-MNIST) and four different architectures (MLP, VGG-18, ResNet-18, and ResNet-101), measured at multiple stages of training (every 20 epochs). The CNA correlates significantly with test accuracy, with a nearly linear relationship at accuracies greater than 70%, suggesting that the neural activation properties of DNNs become more similar to those of the brain as classification results improve.[]{data-label="fig:bam-gen-fig"}](bam-gen-fig6){width="0.8\linewidth"} This extensive evaluation gives more credence to the hypothesis that, as was seen in the previous subsection, the CNA, training in DNNs, and generalization in DNNs have an important relationship that warrants further study.

Extensive Evaluation of CNA and the Generalization Gap
------------------------------------------------------

Much work has been done on generalization bounds for DNNs based on norm and spectral properties of the weights [@neyshabur2017pac; @neyshabur2017exploring]. Other examples include [@arora2018stronger], which gives bounds for DNNs based on compression properties, and [@neyshabur2018towards], which gives bounds based on overparameterization of DNNs. In [@jiang2018predicting], a margin-based metric is developed and shows great success in correlating with the generalization gap, although with the drawback that a linear model needs to be fit between the generalization gap and margin parameters for each individual network and dataset. As empirical validation of the CNA, we focus on the very general setting where any network architecture, task, and dataset is allowed, and no model fitting with respect to the generalization gap is done, i.e. the developed metric must be predictive *a priori*.
To this end, we make use of the same 147 networks shown in Figure \[fig:bam-gen-fig\], and evaluate them on the CNA-Margin (a modified version of the CNA detailed in the supplement) and the competing generalization metrics. Specifically, we record the generalization gap (the difference between train and test accuracy) and each generalization metric for all networks, and calculate the Pearson correlation between each metric and the generalization gap. We additionally include a Gaussian noise dataset. Each point in this dataset is drawn from the standard normal distribution with shape $3 \times 32 \times 32$ and randomly assigned to one of 10 classes; the networks then memorize the training set. The norm-based metrics perform very poorly on this Gaussian noise dataset, whereas the CNA-Margin remains comparatively robust. These results are shown in Figure \[fig:gen-bar-gap\]A. The correlation is shown for all metrics, for each architecture, and for all architectures in aggregate (denoted “All Nets”). Lastly, we train a subset of the networks on the same datasets, except with varying degrees of shuffled labels, ranging from 10% to 50% of labels shuffled during training time. Similar to Figure \[fig:gen-bar-gap\]A, the CNA-Margin remains robust compared to the other metrics. All training details are included in the supplement, along with the chart of Figure \[fig:gen-bar-gap\]A without the Gaussian dataset included, for comparison purposes. ![We show the Pearson correlation of various generalization metrics with the train-test generalization gaps of over 177 combinations of networks. In total: six datasets (ImageNet, CIFAR-10, CIFAR-100, SVHN, MNIST, Fashion-MNIST) and four network architectures (MLP, VGG-18, ResNet-18, ResNet-101), analyzed every 20 training epochs. We show the correlation conditional on network architecture as well as for all networks in aggregate (“All Nets”).
The CNA is comparatively robust to various types of corruption, including in **(A)**, where a random Gaussian noise dataset is included (detailed in the main text and supplement), and **(B)**, where varying degrees of label corruption are present.[]{data-label="fig:gen-bar-gap"}](gen-bar-random){width="0.95\linewidth"} **(A)** ![We show the Pearson correlation of various generalization metrics with the train-test generalization gaps of over 177 combinations of networks. In total: six datasets (ImageNet, CIFAR-10, CIFAR-100, SVHN, MNIST, Fashion-MNIST) and four network architectures (MLP, VGG-18, ResNet-18, ResNet-101), analyzed every 20 training epochs. We show the correlation conditional on network architecture as well as for all networks in aggregate (“All Nets”). The CNA is comparatively robust to various types of corruption, including in **(A)**, where a random Gaussian noise dataset is included (detailed in the main text and supplement), and **(B)**, where varying degrees of label corruption are present.[]{data-label="fig:gen-bar-gap"}](gen-bar-shuffle){width="0.95\linewidth"} **(B)**

Conclusion
==========

We provide principled motivation for a generalization metric inspired by cognitive neuroscience results. Interestingly, and perhaps surprisingly, the CNA shows connections with and predictive power for task performance in DNNs across a wide range of scenarios. Our CNA formulations show a practical use case in predicting the generalization gap, outperforming margin- and norm-based metrics, especially in the presence of dataset corruption. To our knowledge, our results comprise the most extensive study of generalization gap metrics in terms of the breadth of dataset-architecture combinations considered. Through both small-scale and large-scale experiments, we show strong empirical support for the value of future work on understanding the relationship between abstraction mechanisms, information complexity, and generalization capabilities in DNNs.
[^1]: Of course, the relationship between neural activation values and depth is almost certainly nonlinear – however, the purpose of the slope is to serve as a rough measure of neuronal activity, not to model the relationship between activation and depth. [^2]: Ideally, the notional equivalence of compressibility and abstractness could be empirically vetted via human psychological experiments. Nonetheless, this analogue is useful in statistical learning settings given that the CNA has great predictive power, as shown in later sections. We focus on the statistical learning field, leaving further human psychological experiments as future work outside the scope of this paper.
---
author:
- Mikhail Kovalev
- Maria Bergemann
- 'Yuan-Sen Ting[^1]'
- 'Hans-Walter Rix'
date: 'Received 9 May, 2019; accepted xxx'
title: 'NLTE Chemical abundances in Galactic open and globular clusters[^2].'
---

[We study the effects of non-local thermodynamic equilibrium (NLTE) on the determination of stellar parameters and abundances of Fe, Mg, and Ti from the medium-resolution spectra of FGK stars.]{} [We extend *the Payne* fitting approach to draw on NLTE and LTE spectral models. These are used to analyse the spectra of the Gaia-ESO benchmark stars and the spectra of 742 stars in 13 open and globular clusters in the Milky Way: NGC 3532, NGC 5927, NGC 2243, NGC 104, NGC 1851, NGC 2808, NGC 362, M2, NGC 6752, NGC 1904, NGC 4833, NGC 4372 and M15.]{} [Our approach accurately recovers effective temperatures, surface gravities, and abundances of the benchmark stars and cluster members. The differences between NLTE and LTE are significant in the metal-poor regime, \[Fe/H\] $\lesssim -1$. The NLTE $\feh$ values are systematically higher, whereas the average NLTE \[Mg/Fe\] abundance ratios are $\sim 0.15$ dex lower, compared to LTE. Our LTE measurements of metallicities and abundances of stars in Galactic clusters are in good agreement with the literature. Yet, for most clusters, our study yields the first estimates of NLTE abundances of Fe, Mg and Ti.]{} [All clusters investigated in this work are homogeneous in Fe and Ti, with intra-cluster abundance variations of less than 0.04 dex. NGC 2808, NGC 4833, M2 and M15 show significant dispersions in \[Mg/Fe\]. Contrary to common assumptions, the NLTE analysis changes the mean abundance ratios in the clusters, but it does not influence the intra-cluster abundance dispersions.]{}

Introduction
============

Fast and reliable modelling of stellar spectra is becoming increasingly important for current stellar and Galactic astrophysics.
Large-scale spectroscopic stellar surveys, such as Gaia-ESO [@Gilmore2012; @Randich2013], APOGEE [@Majewski2015], and GALAH [@DeSilva2015], are revolutionising our understanding of the structure and evolution of the Milky Way galaxy, stellar populations, and stellar physics. The ever-increasing amount of high-quality spectra, in turn, demands rigorous, physically realistic, and efficient data analysis techniques to provide an accurate diagnostic of stellar parameters and abundances. This problem has two sides. Precise spectral fitting and analysis require powerful numerical optimisation and data-model comparison algorithms. On the other hand, the accuracy of stellar label estimates is mostly limited by the physics of the spectral models used in the model-data comparison. The fitting aspect has been the subject of extensive studies over the past years, and various methods [e.g. @matisse; @Schoenrich2014; @ness2015; @CaseyCannonRAVE; @ting2018] have been developed and applied to the analysis of large survey datasets. Major developments have also occurred in the field of stellar atmosphere physics. Non-local thermodynamic equilibrium (NLTE) radiative transfer is now routinely performed for many elements in the periodic table. This allows detailed calculations of spectral profiles that account for NLTE effects. NLTE models consistently describe the interaction of the gas particles in stellar atmospheres with the radiation field [@Auer1969a], in this respect being more realistic than LTE models. In NLTE, photons affect atomic energy level populations, whilst in LTE those are set solely by the Saha equation for ionisation and by the Boltzmann distribution for excitation. NLTE models predict more realistic absorption line profiles and hence provide more accurate stellar parameters and abundances [e.g. @ruchti2013; @zhao2016]. However, NLTE models are often incomplete in terms of atomic data, such as collisions with H atoms and electrons or photo-ionisation cross-sections.
Major efforts to improve atomic data are underway [e.g. @yakovleva2016; @bautista2017; @belyaev2017; @barklem2017; @amarsi2018; @barklem2018] and there is no doubt that many gaps in the existing atomic and molecular databases will be filled in the near-term future. Besides, strictly speaking, no single NLTE model is complete in terms of atomic data, and quantum-mechanical cross-sections are usually available only for a small part of the full atomic or molecular system [@barklem2016]. In this work, we study the effect of NLTE on the analysis of stellar parameters and chemical abundances for FGK-type stars. We combine NLTE stellar spectral models with *the Payne*[^3] code developed by @ting2018 and apply our methods to the observed stellar spectra from the 3$^{rd}$ public data release by the Gaia-ESO survey. This work is a proof-of-concept of the combined NLTE-Payne approach and is, hence, limited to the analysis of the Gaia-ESO benchmark stars and a sample of Galactic open and globular clusters, for which independent estimates of stellar labels, both stellar parameters and detailed abundances, are available from earlier studies. The paper is organised as follows. In Section \[Method\], we describe the observed sample, the physical aspects of the theoretical spectral models, and the mathematical basis of *the Payne* code. We present the LTE and NLTE results in Section \[Results\] and compare them with the literature in Section \[discussion\]. Section \[Conclusions\] summarises the conclusions and outlines future prospects arising from this work.

Methods {#Method}
=======

Observed spectra {#Observations}
----------------

We use the spectra of FGK stars observed within the Gaia-ESO spectroscopic survey [@Gilmore2012; @Randich2013]. These spectra are now publicly available as a part of the third data release (DR3.1)[^4]. The data were obtained with the Giraffe instrument [@Pasquini2002] at the ESO (European Southern Observatory) VLT (Very Large Telescope).
We use the spectra taken with the HR10 setting, which covers 280 Å from 5334 Å to 5611 Å, at a resolving power of $R=\lambda/ \Delta \lambda \sim 19\,800$. The average signal-to-noise ratio ($\snr$) of a spectrum ranges from 90 to 2800 per Å[^5], with the majority of the spectra sampling the $\snr$ in the range of 150-200 Å$^{-1}$. Our observed sample contains 916 FGK-type stars with luminosity classes from III to V, including main-sequence (MS), subgiant, and red giant branch (RGB) stars. A fraction of these are the Gaia-ESO benchmark stars (174 spectra of 19 stars), but we also include 742 stars in two open and 11 globular clusters. We exclude four benchmark stars with effective temperature $\teff<4000$ K, because this regime of stellar parameters is not covered by our model atmosphere grids. $\beta$ Ara is not a part of our calibration sample, as it is not recommended as a benchmark in @Pancino2017. These stars were previously analysed by Gaia-ESO [@Smiljanic2014a; @sanroman2015; @Pancino2017a] and are included in the Gaia-ESO DR3 catalogue. We estimate the radial velocity (RV) by cross-correlating the observed spectrum with a synthetic metal-poor spectral template ($\teff=5800$ K, $\logg=4.5$ dex, $\feh=-2$ dex)[^6], which is shifted in the RV range of $\pm 400~\kms$ with a step of $0.5~\kms$. We compute the cross-correlation function for all RV values and fit a parabola to $20$ points around the maximum of the cross-correlation function. Then we apply the Doppler shift to the observed spectrum using the velocity at the position of the peak of the parabola. Since cross-correlation can incur small errors due to the step size and template choice, we later fit for a residual shift in the range $\pm~2~\kms$.

Model atmospheres and synthetic spectra
---------------------------------------

The grids of LTE and NLTE synthetic spectra are computed using the new online spectrum synthesis tool <http://nlte.mpia.de>.
The model atmospheres are 1D plane-parallel hydrostatic LTE models taken from the MAFAGS-OS grid [@Grupp2004; @Grupp2004a]. For the NLTE grid, we first compute the NLTE atomic number densities for Mg [@Bergemann2017a], Ti [@Bergemann2011], Fe [@Bergemann2012c] and Mn [@bergemann2008] using the DETAIL statistical equilibrium (SE) code [@detail]. These are then fed into the SIU [@SIUReetz] radiative transfer (RT) and spectrum synthesis code. In total, $626$ spectral lines of Mg I, Ti I, Fe I and Mn I are modelled in NLTE for the NLTE grid, while for the LTE grid these lines are modelled with default LTE atomic level populations. Our approach is conceptually similar to @buder2018a, but we employ different SE and RT codes. We have chosen to use the MAFAGS-OS atmosphere grids because these are internally consistent with DETAIL and SIU. In particular, the latter codes adopt the atomic and molecular partial pressures and partition functions that are supplied with the MAFAGS-OS models. We compute $20\,000$ spectral models with $\teff$ uniformly distributed in the range from $4000$ to $7000$ K and $\logg$ in the range from $1.0$ to $5.0$ dex. Metallicity[^7], $\feh$, is uniformly distributed in the range from $\feh = -2.6$ to $0.5$ dex. We also allow for random variations in the ratios of magnesium, titanium, and manganese to iron: \[Mg/Fe\] and \[Ti/Fe\] from $-0.4$ to $0.8$ dex and \[Mn/Fe\] from $-0.8$ to $0.4$ dex. The abundances of other chemical elements are assumed to be solar and follow the iron abundance $\feh$. In the metal-poor regime ($\feh<-1$ dex), some elements (like the important opacity contributors C and O) can be significantly enhanced relative to the solar values. Therefore, we computed several metal-poor synthetic spectra using a $0.5$ dex enhancement of the C and O abundances and found that there is no impact on the spectral models. Micro-turbulence varies from 0.6 to 2.0 $\kms$, in line with high-resolution studies of FGK stars [e.g. @ruchti2013].
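The random training grid described above amounts to uniform draws of each label over its stated range; a sketch in numpy (the actual grid-generation code is internal to the synthesis tool, so the structure below is illustrative):

```python
import numpy as np

# Label ranges quoted in the text; collecting them in a dict is illustrative.
LABEL_RANGES = {
    "teff":  (4000.0, 7000.0),  # K
    "logg":  (1.0, 5.0),        # dex
    "feh":   (-2.6, 0.5),       # dex
    "mg_fe": (-0.4, 0.8),       # dex
    "ti_fe": (-0.4, 0.8),       # dex
    "mn_fe": (-0.8, 0.4),       # dex
    "vmic":  (0.6, 2.0),        # km/s
}

def sample_labels(n_models, seed=0):
    """Draw n_models label vectors with each label uniform over its range,
    as for the 20 000-model grid described in the text."""
    rng = np.random.default_rng(seed)
    lows = np.array([lo for lo, _ in LABEL_RANGES.values()])
    highs = np.array([hi for _, hi in LABEL_RANGES.values()])
    return rng.uniform(lows, highs, size=(n_models, len(LABEL_RANGES)))
```

Calling `sample_labels(20_000)` reproduces the size of the grid used here; each row is one set of input labels for the spectrum synthesis.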
The detailed solar abundances assumed in the MAFAGS-OS grids are reported in @Grupp2004. For the elements treated in NLTE, we adopt logA(Mg)$_\odot=7.58$ dex, logA(Ti)$_\odot=4.94$ dex, logA(Mn)$_\odot=5.53$ dex and logA(Fe)$_\odot=7.50$ dex [meteoritic values from @grevesse1998]. The widths of spectral lines in the observed spectra depend on many effects, such as the properties of the instrument, turbulence in stellar atmospheres, and stellar rotation [@gray]. However, it is not possible to separate these effects at the resolution of the Giraffe spectra. Hence, the macroturbulence, $V_{\rm mac}$, and the projected rotation velocity, $V{\rm sin~i}$, are dealt with by smoothing the model spectra with a Gaussian kernel, which corresponds to a characteristic velocity $V_{\rm broad}$ in the range from $5.0$ to $25.0$ $\kms$ that encompasses the typical values of $V_{\rm mac}$ and $V{\rm sin~i}$ reported for FGK stars [@gray; @Jofre2015]. After that, the synthetic spectra are degraded to the resolution of the HR10 setup by convolving them with an instrumental profile (Appendix \[LSF\]) and are re-sampled onto the observed spectrum wavelength grid using the sampling of $20$ wavelength points per Å. The Payne code {#payne} -------------- The data-model comparison is not performed directly. Instead, we use *the Payne* code to interpolate in the grid of synthetic spectra. The approach consists of two stages: the training (model building) and the test (data fitting) steps. In the training step, we build a *Payne* model using a set of pre-computed LTE and NLTE stellar spectra. We approximate the variation of the flux using an artificial neural network (ANN). In the test step, $\chi^2$ minimisation is employed to find the best-fit stellar parameters and abundances by comparing the model spectra to the observations. In what follows, we describe the key details of the method. For more details on the algorithm, we refer the reader to @ting2018. 
The conceptual idea of the code is simple. We employ a simple ANN that consists of several fully connected layers of neurons: an input layer, two hidden layers, and an output layer. The input data are given by a set of stellar parameters (hereafter, labels): $\teff$, $\logg$, $\Vmic$, $V_{\rm broad}$, $\feh$, $\mgfe$, $\tife$ and $\mnfe$. The output data comprise the normalised flux values tabulated on a wavelength grid, as a function of the input labels. Three hundred neurons in each hidden layer apply a weight and an offset to the output from the previous layer, and these outputs are activated using a $ReLU(z)={\rm max}(z,0)$ function for the first layer and a sigmoid function $s(z)=(1+e^{-z})^{-1}$ for the second layer. A subset of the pre-computed spectral grid (that is, $15\,000$ synthetic spectra) is used to train the ANN, whereby the weights and the offsets are adjusted to their optimal values. This subset is referred to as the *training set*. We train the neural networks by minimising the $L^2$ loss, that is, the sum of the squared Euclidean distances between the target ab-initio flux from the training set and the flux predicted by the model at each wavelength point. We use cross-validation with the remaining set of $5000$ spectra, referred to as the *cross-validation set*, to prevent over-fitting: the optimised ANN is required to decrease the $L^2$ loss also on the *cross-validation set*, which is not directly used during training. Together, the ANN layers act like a function that predicts a flux spectrum for a set of given labels. The main difference of the current implementation of *Payne* with respect to the one in @ting2018 is that we use only one ANN to represent the full stellar spectrum. In our realisation[^8], the ANN can exploit information from adjacent pixels, whereas previously each individual pixel was trained separately.
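The architecture just described, with eight input labels, two hidden layers of 300 neurons (ReLU after the first, sigmoid after the second), and a linear output over the flux pixels, has the following forward pass. This is a numpy sketch with randomly initialised weights; the trained weights and the exact initialisation are assumptions, not the published model:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def payne_forward(labels, params):
    """Emulator forward pass: 8 stellar labels -> normalised flux per pixel.
    params = (W1, b1, W2, b2, W3, b3)."""
    W1, b1, W2, b2, W3, b3 = params
    h1 = relu(W1 @ labels + b1)      # first hidden layer, ReLU activation
    h2 = sigmoid(W2 @ h1 + b2)       # second hidden layer, sigmoid activation
    return W3 @ h2 + b3              # linear read-out over the flux pixels

def init_params(n_labels=8, hidden=300, n_pixels=5600, seed=0):
    """Random initialisation; 5600 pixels = 280 A at 20 points per A."""
    rng = np.random.default_rng(seed)
    s = 0.01
    return (s * rng.standard_normal((hidden, n_labels)), np.zeros(hidden),
            s * rng.standard_normal((hidden, hidden)), np.zeros(hidden),
            s * rng.standard_normal((n_pixels, hidden)), np.zeros(n_pixels))
```

Training then amounts to adjusting `params` to minimise the summed squared flux residuals over the 15 000 training spectra.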
A synthetic spectrum is generated at arbitrary points in stellar parameter space within the domain of the training grid and is compared to the observed spectrum. A standard $\chi^2$ minimisation is used to compute the likelihood of the fit and, hence, to find the stellar parameters that best characterise the observed spectrum. We also allow for a small Doppler shift, $\pm~2~\kms$, on top of the RV from cross-correlation, to optimise the spectral fit. The continuum normalisation of the observed spectra is performed during the $\chi^2$ minimisation. We search for the coefficients of a linear combination of the first ten Chebyshev polynomials, which represents a function that fits the shape of the continuum, using the full observed spectrum. A synthetic spectrum is then multiplied with this function. In total, for each observed spectrum, we optimise 19 free parameters: one Doppler shift, eight spectral labels and ten coefficients of Chebyshev polynomials. The abundances of individual elements are derived simultaneously with the other stellar parameters via the full spectral fitting process. We also explored the classical method of fitting each spectral line separately using line masks. However, this method delivers less precise abundances, as gauged by the star-to-star scatter; hence, we do not use the line masks in the final abundance analysis. Following @Bergemann2011, who strongly recommended using only Ti II lines in abundance analysis, we masked out all Ti I lines. We note, however, that we did not include NLTE calculations for Ti II, as the NLTE effects on this ion are very small in the metallicity regime of our sample [@Bergemann2011]. Hence, the difference between our LTE and NLTE Ti abundances reflects only an indirect effect of NLTE on the stellar parameters.

Internal accuracy of the method {#cvtest}
-------------------------------

We verify the internal accuracy of the method by subjecting it to tests similar to those employed by @ting2018.
First, we compare the interpolated synthetic spectra to the original models from the cross-validation sample. In this case, we explore how well *the Payne* can generate a new spectrum. The median interpolation error of the flux across the $5000$ models is $\leq 10^{-3}$, that is, within $0.1$%. We also find that larger errors occur for cooler stars, because their spectra contain many more spectral features. This result suggests that the interpolation error is small compared to the noise in observed spectra of typical $\snr$. Second, we test how well we can recover the original labels from a model spectrum through $\chi^2$ minimisation. In this case, we apply a random Doppler shift, multiply the model spectrum by a random combination of the first ten Chebyshev polynomials, which represents the continuum level, and add noise. Such a modified model serves as a fair representation of a real observed spectrum. The tests are performed for the noiseless models and for the models degraded to a $\snr$ of 90 Å$^{-1}$ and 224 Å$^{-1}$. This range of $\snr$ brackets the typical values of the observed HR10 spectra, with the majority of the spectra sampling the $\snr$ range of 150-200 Å$^{-1}$. The typical $\snr$ of the spectra of the benchmark stars is $\sim 200$ Å$^{-1}$. Table \[tab:interr\] presents the average differences between the input and the output stellar parameters for the cross-validation sample. The scatter is represented by one standard deviation. To facilitate the analysis, we group the results into three metallicity bins. The results for the noiseless models with \[Fe/H\] in the range from $-1.6$ to $0.5$ dex suggest high internal accuracy of the method. For the lower-metallicity models, there is a small bias and a larger dispersion in the residuals, because there is less spectral information in this regime. The bias is also marginal for the high-$\snr$ spectra with $\snr = 224$ Å$^{-1}$, although the scatter in the output is increased compared to the noiseless models.
Our analysis of the noisy models, $\snr = 90$ Å$^{-1}$, yields acceptable results for the metal-rich and moderately metal-poor stars with $\feh \gtrapprox -1.6$ dex. On the other hand, the most metal-poor noisy spectra are not fitted well. Despite a modest bias in $\teff$, the dispersion of $\log g$ and the abundance ratios is very large and may require a different approach to obtain high-precision abundances in this regime. According to this test, good Mn abundances (better than $\sim 0.1$ dex) can be derived only for metal-rich stars. These tests illustrate only the internal accuracy of *the Payne* model reconstruction and, hence, set the minimum uncertainty on the parameters determined by our method, regardless of the training sample, its physical properties and completeness. The analysis of observed data may result in a larger uncertainty, as various other effects, such as the physical complexity of the model atmospheres and synthetic spectra and properties of the observed data (data reduction effects etc.), will contribute to the total uncertainties. We test this in the next section by analysing the Gaia-ESO benchmark stars. 
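The label-recovery test above perturbs a noiseless model with a Doppler shift, a Chebyshev pseudo-continuum, and noise before refitting. A minimal sketch of how such a mock observation can be constructed; the wavelength grid, the absorption-line profile, the polynomial coefficients, and the $\snr$ value are toy choices, and the Doppler shift is applied by simple linear interpolation on a uniform grid (the actual pipeline is not reproduced here):

```python
import math
import random

C_KMS = 299792.458  # speed of light in km/s

def chebyshev(x, coeffs):
    """Linear combination of Chebyshev polynomials T_0..T_{n-1} at x in [-1, 1]."""
    t_prev, t_cur = 1.0, x
    total = coeffs[0] * t_prev
    if len(coeffs) > 1:
        total += coeffs[1] * t_cur
    for c in coeffs[2:]:
        t_prev, t_cur = t_cur, 2.0 * x * t_cur - t_prev
        total += c * t_cur
    return total

def mock_observation(wave, flux, rv, cheb_coeffs, snr, seed=1):
    """Doppler-shift a model, multiply by a pseudo-continuum, add noise.

    Assumes a uniform wavelength grid; the shift is applied by linear
    interpolation of the rest-frame flux.
    """
    random.seed(seed)
    lo, hi, step = wave[0], wave[-1], wave[1] - wave[0]
    out = []
    for w in wave:
        w0 = w / (1.0 + rv / C_KMS)               # rest-frame wavelength
        i = min(max(int((w0 - lo) / step), 0), len(wave) - 2)
        f = flux[i] + (flux[i + 1] - flux[i]) * (w0 - wave[i]) / step
        x = 2.0 * (w - lo) / (hi - lo) - 1.0      # rescale to [-1, 1]
        out.append(chebyshev(x, cheb_coeffs) * f + random.gauss(0.0, 1.0 / snr))
    return out

def chi2(obs, model, sigma):
    """Chi-square statistic for a constant per-pixel uncertainty sigma."""
    return sum(((o - m) / sigma) ** 2 for o, m in zip(obs, model))

# Toy model: a single Gaussian absorption line on a uniform grid.
wave = [5350.0 + 0.05 * i for i in range(201)]
flux = [1.0 - 0.5 * math.exp(-0.5 * ((w - 5355.0) / 0.1) ** 2) for w in wave]
obs = mock_observation(wave, flux, rv=1.0, cheb_coeffs=[1.0] + [0.0] * 9, snr=200)
```

In the test, such a mock spectrum is fed back into the $\chi^2$ fit and the recovered labels are compared to the input values; the per-pixel noise level that enters `chi2` corresponds to $1/\snr$ for a normalised spectrum.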
---------- ----------- ---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------
 $\snr$     $\feh$     $\Delta\teff$    $\Delta\logg$    $\Delta\Vmic$    $\Delta\Vbrd$    $\Delta\feh$     $\Delta\mgfe$    $\Delta\tife$    $\Delta\mnfe$
 Å$^{-1}$   dex        1000 K           dex              $\kms$           10 $\kms$        dex              dex              dex              dex
 90         -2.6:-1.6  0.00$\pm$0.27    -0.05$\pm$0.56   -0.10$\pm$0.77   -0.01$\pm$0.29   -0.00$\pm$0.18   0.01$\pm$0.17    -0.02$\pm$0.34   -0.01$\pm$0.64
            -1.6:-0.6  0.01$\pm$0.12    0.00$\pm$0.21    0.01$\pm$0.26    -0.00$\pm$0.09   0.01$\pm$0.07    0.00$\pm$0.10    -0.01$\pm$0.13   -0.01$\pm$0.37
            -0.6:0.5   0.01$\pm$0.07    0.00$\pm$0.12    0.00$\pm$0.10    -0.00$\pm$0.05   0.01$\pm$0.06    -0.00$\pm$0.09   -0.00$\pm$0.07   -0.00$\pm$0.11
 224        -2.6:-1.6  -0.00$\pm$0.11   -0.04$\pm$0.25   -0.00$\pm$0.41   0.01$\pm$0.13    -0.00$\pm$0.07   0.01$\pm$0.07    -0.01$\pm$0.17   -0.03$\pm$0.55
            -1.6:-0.6  -0.00$\pm$0.05   -0.00$\pm$0.08   -0.00$\pm$0.09   -0.00$\pm$0.04   -0.00$\pm$0.03   0.00$\pm$0.05    -0.00$\pm$0.05   -0.03$\pm$0.25
            -0.6:0.5   0.00$\pm$0.03    0.00$\pm$0.06    0.00$\pm$0.05    -0.00$\pm$0.02   0.00$\pm$0.04    0.00$\pm$0.04    0.00$\pm$0.03    0.00$\pm$0.05
 no         -2.6:-1.6  -0.00$\pm$0.02   -0.01$\pm$0.06   -0.01$\pm$0.15   0.00$\pm$0.02    -0.00$\pm$0.02   -0.00$\pm$0.03   -0.00$\pm$0.04   -0.01$\pm$0.21
 noise      -1.6:-0.6  -0.00$\pm$0.01   0.00$\pm$0.03    0.00$\pm$0.03    0.00$\pm$0.01    0.00$\pm$0.01    -0.00$\pm$0.02   0.00$\pm$0.02    0.00$\pm$0.05
            -0.6:0.5   0.00$\pm$0.01    0.00$\pm$0.05    0.00$\pm$0.03    0.00$\pm$0.01    0.00$\pm$0.05    -0.00$\pm$0.04   0.00$\pm$0.01    0.00$\pm$0.03
---------- ----------- ---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------

Results {#Results}
=======

Gaia-ESO benchmark stars
------------------------

![image](GBSnew.pdf){width="\textwidth"}

\[gaiabenchmark\]

Our results for the Gaia-ESO benchmark stars are shown in Fig. \[fig:2\] and Fig. \[fig:3\]. Fig.
\[fig:2\] compares our NLTE stellar parameters with the values from @Jofre2015, @Schoenrich2014, and with the Gaia-ESO DR3 catalogue (GES) [@Smiljanic2014a]. In @Jofre2015, the $\teff$ estimates were determined from photometry and interferometry, and $\logg$ from parallaxes and asteroseismology. The $\feh$ estimates were obtained from the NLTE analysis of Fe lines in high-resolution spectra taken with the UVES, NARVAL and HARPS spectrographs [@blanco-cuaresma2014]. In order to be consistent with our reference solar $\feh$ scale, we subtracted 0.05 dex from the @Jofre2015 and GES metallicities, as they are based on the @Grevesse2007 metallicity scale (logA(Fe)$_\odot=7.45$ dex). Likewise, we subtracted 0.03 dex from the @amarsi2016 metallicities, as they employ logA(Fe)$_\odot=7.47$ dex.

![NLTE elemental abundances derived from the spectra taken at different exposure times. Abundances determined at $\snr=200$ Å$^{-1}$ are just as precise as those at $\snr>2000$ Å$^{-1}$; see Section \[gaiabenchmark\] for details.[]{data-label="fig:3"}](stabSNRa.pdf){width="\columnwidth"}

The estimates of stellar parameters in @Schoenrich2014 are derived using a full Bayesian approach by solving for the posterior in a multi-dimensional parameter space, including photometry, high-resolution spectra, parallaxes, and evolutionary constraints. The estimates of stellar parameters in the Gaia-ESO DR3 catalogue rely on the high-resolution (UVES at VLT) spectroscopy only. Fig. \[fig:2\] suggests that the agreement of our NLTE results with the literature studies is very good. The differences with respect to @Jofre2015 are of the order of -29$\pm$88 K in $\teff$, 0.09$\pm$0.16 dex in $\logg$ and 0.02$\pm$0.09 dex in $\feh$ across the full parameter space, and they also compare favourably with the results obtained by @Schoenrich2014 and reported in the Gaia-ESO DR3 catalogue. Results for individual stars are listed in Table \[tab:gbs\]. The scatter is slightly larger for the metal-poor stars.
This could be a consequence of the limited coverage of the training set. In particular, the two very metal-poor evolved stars, HD 122563 and HD 140283, are located next to the low-metallicity edge of our training grid. Since the Gaia-ESO benchmark star sample contains only three stars with $\feh< -1$, no reliable statistics can be drawn on the success of our approach in this regime of stellar parameters. The sample of RGB stars is also very small and contains only five objects with $\logg < 3$ dex. We address the performance of our method for low-gravity stars in the next section, by analysing a set of open and globular clusters that cover a large metallicity range, $-2.3 \lesssim \feh \lesssim -0.1$ dex, and provide a better sampling of the RGB. Fig. \[fig:3\] illustrates the performance of our method for the spectra taken at different exposure times. We have chosen four stars representative of our calibration sample: HD 107328, a moderately metal-poor giant ($\teff = 4384$ K, $\logg = 1.90$ dex, and $\feh_{\rm NLTE} = -0.60$ dex); $\xi~Hya$, a metal-rich subgiant ($\teff = 5045$ K, $\logg = 3.01$ dex, and $\feh_{\rm NLTE} = -0.05$ dex); $\epsilon~For$, a moderately metal-poor subgiant ($\teff = 5070$ K, $\logg = 3.28$ dex, and $\feh_{\rm NLTE} = -0.65$ dex); and $\alpha~Cen~B$, a metal-rich dwarf ($\teff = 5167$ K, $\logg = 4.33$ dex, and $\feh_{\rm NLTE} = 0.14$ dex). These stars have been observed with different exposure times, corresponding to $\snr$ ratios of $100$ to $2500$ Å$^{-1}$, which allows us to validate the differential precision of the adopted model. We do not detect any evidence of a systematic bias that depends on the data quality.
In particular, the mean difference between the abundances of Fe, Mg, and Ti obtained from the $\snr \sim 100$ Å$^{-1}$ spectra and those obtained from the highest-quality data ($\snr\sim2000$ Å$^{-1}$) is not larger than $0.02$ dex (quoted as one standard deviation) for any of these stars, and is less than $0.01$ dex for the majority. We hence conclude that our results are not very sensitive to the quality of the observed data for a wide range of $\snr$ ratios.

![image](clusters2a.pdf){width="99.00000%"}

Open and globular clusters
--------------------------

### Sample selection

Our dataset includes two open clusters and $11$ globular clusters. The cluster members are chosen using the central coordinates and the RV estimates from the SIMBAD[^9] database listed in Table \[tab:gcinfo\]. For the open clusters, we select only stars with an RV within 5 $\kms$ of the cluster median[^10]. For the globular clusters, we adopt the central RV values and the 1$\sigma$ RV dispersions from @Pancino2017. We also apply a 2$\sigma$ clipping around the median in metallicity, and employ proper motions from Gaia DR2 [@gdr2] to exclude stars outside the 2$\sigma$ range from the median proper motion of each cluster. It is common to use distances to compute astrometric gravities [e.g. @ruchti2013]. However, the majority of clusters in our sample are located at heliocentric distances $d_\odot$ of $> 2$ kpc, where parallaxes are very uncertain. In addition, poorly constrained differential extinction in some clusters limits the applicability of the standard relations used to derive $\logg$ from distances and photometric magnitudes. We hence refrain from using the GDR2 parallaxes to compute surface gravities. Instead, we compare our results with isochrones computed using our estimates of the metallicities and the ages adopted from literature studies, in particular from @kruijssen2018 for GCs and from the WEBDA database[^11] for open clusters.
For most clusters, the ages are derived from colour-magnitude diagram turn-off (TO) or horizontal branch (HB) fits. Hence, this comparison, too, can be performed only with the caveat that the TO/HB ages are not a fundamental reference, but are model-dependent and may not be fully unbiased.

![image](clustersn.pdf){width="\textwidth"}

![image](clustersl.pdf){width="\textwidth"}

### Stellar parameters and comparison with the isochrones {#isochrones}

The majority of the globular clusters are distant and are represented by RGB stars in our sample. Main-sequence stars are observed only in the nearby metal-rich open cluster NGC 3532. Hence, in what follows, the discussion will mainly focus on the RGB population across a wide range of metallicities, from $-0.5$ (NGC 5927) to $-2.3$ dex (M 15). In Fig. \[fig:4\], we compare the NLTE and LTE stellar parameters as a function of NLTE metallicity. Since most stars within a cluster are in the same evolutionary stage (lower or upper RGB), we have chosen to show only the mean NLTE-LTE differences, averaged over all stars in a given cluster. This is sufficient to illustrate the key result: the differences between the NLTE and LTE measurements of $\teff$, $\log g$ and $\feh$ vary in lockstep with metallicity. This reflects the NLTE effects in the formation of the Fe I and Ti I spectral lines, which are ubiquitous in HR10. It is furthermore important, although not unexpected, that below $\feh \sim -1$ dex the changes are nearly linear, consistent with our earlier theoretical estimates [@lind2012] and with the analysis of metal-poor field stars in the Milky Way [@ruchti2013]. The NLTE effect is most striking at $\feh \lesssim -2$, where we find differences of $\sim 300$ K in $\teff$, $\sim 0.6$ dex in $\log g$ and $\sim 0.3$ dex in $\feh$.
The \[Mg$/$Fe\] ratios tend to be lower in NLTE, which reflects the negative NLTE abundance corrections for the only Mg line in HR10 (Mg I 5528 Å) and is consistent with earlier studies [@osorio2015; @bergemann2017b]. The upturn in \[Mg$/$Fe\] at $\feh\sim -2$ dex is real and is caused by a change of the dominant NLTE effect at this metallicity. At higher \[Fe/H\], strong line scattering and photon loss, and, hence, the deviations of the source function from the Planck function, play an important role in the statistical equilibrium of the ion. However, in the metal-poor models, \[Fe/H\] $\lesssim -2$ dex, it is the over-ionisation driven by the hard UV radiation field that acts on the line opacity and thereby counteracts the NLTE effects on the source function. We have masked out all Ti I lines (see Section \[payne\]), so the differences in $\tife$ are small, $\lesssim0.06$ dex, and represent indirect NLTE effects on the other stellar parameters. The difference in $\mnfe$ is shown only for a few metal-rich clusters, and it increases towards lower $\feh$. Fig. \[fig:5\] and Fig. \[fig:6\] show our NLTE and LTE results, respectively, for the $12$ clusters in the $\teff$ - $\log g$ plane. We also overlay the PARSEC [@marigo2017] and Victoria-Regina [@VR, hereafter VR] isochrones to facilitate the analysis of the evolutionary stages probed by the stellar sample. The VR isochrones assume a He abundance of $Y=0.26$ and an $\alpha$-enhancement as given by our measurements of $\mgfe$. The PARSEC isochrones are computed using an effective metallicity (Aldo Serenelly, priv. comm.) $$Z=Z_{0}(0.659 f_{\alpha}+ 0.341),$$ where $Z_{0}=10^{\feh}$ and $f_{\alpha}=10^{\mgfe}$. The error of the spectroscopic estimates is shown in the inset; it represents the typical uncertainty of our analysis ($\Delta (\teff) = 150$ K and $\Delta (\logg) = 0.3$ dex, based on the analysis of the Gaia-ESO benchmark stars).
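For reference, the effective-metallicity relation above can be evaluated directly. A small sketch (the function name is ours; the values follow the equation as quoted):

```python
def effective_z(feh, mgfe):
    """Effective metallicity Z = Z0 (0.659 f_alpha + 0.341),
    with Z0 = 10**[Fe/H] and f_alpha = 10**[Mg/Fe], as quoted above."""
    return 10.0 ** feh * (0.659 * 10.0 ** mgfe + 0.341)

# A scaled-solar composition ([Mg/Fe] = 0) leaves the metallicity essentially
# unchanged, while alpha-enhancement increases the effective Z.
z_scaled_solar = effective_z(-2.07, 0.0)
z_alpha = effective_z(-2.07, 0.31)
```

For example, with the NLTE values for NGC 4372 from Table \[tab:newnlte\] ($\feh=-2.07$, $\mgfe=0.31$), the $\alpha$-enhancement raises the effective $Z$ by a factor of $\sim 1.7$ over the scaled-solar value.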
The star-to-star scatter in the $\teff$-$\logg$ plane is very small and, within the uncertainties, consistent with the isochrones. Surprisingly, both the NLTE and LTE spectroscopic parameters agree well with the isochrones computed for the corresponding \[Fe/H\], despite the large differences between the NLTE and LTE parameters ($\teff$, $\log g$, $\feh$, and $\mgfe$), especially at low metallicity. This would appear counter-intuitive at first glance, given the large offsets demonstrated in Fig. \[fig:4\]. However, this effect is, in fact, simply a result of the complex correlations in stellar parameters [as also extensively discussed in @ruchti2013]: NLTE effects in the over-ionisation-dominated species (such as Fe I, Ti I) significantly change the excitation and ionisation balance, such that the theoretical spectral lines tend to be weaker and a higher abundance is inferred by comparing them to the observed spectra. Consequently, larger estimates of $\teff$, $\log g$, and $\feh$ are expected from the NLTE modelling compared to LTE [see also @lind2012]. The difference between the NLTE and LTE $\feh$ estimates is exactly the offset needed to match the higher (lower) $\teff$ and higher (lower) $\log g$ to the corresponding isochrone computed for the NLTE (LTE) metallicity and $\alpha$-enhancement. This suggests that even large systematic errors in spectroscopic estimates may remain undetected in the $\teff - \log g$ plane, when spectroscopic values are gauged by comparing them with isochrones. In Fig. \[fig:fit\] we show examples of spectral fits for two stars randomly selected from our sample of clusters. Both the LTE and NLTE model spectra match the observed ones very well, with similar $\chi^2_r$, while the fit residuals mostly show noise and data reduction artefacts.

![image](fitexample.pdf){width="\textwidth"}

Our LTE and NLTE results show a slight tendency towards a hotter $\teff$ scale, which may appear more consistent with the PARSEC models.
However, it might be premature to draw more specific conclusions on this matter, as we are aware of the imperfections of the stellar atmosphere and spectral model grids, such as the approximate treatment of convection, as well as of the calibrations that are employed in the stellar evolution models [e.g. @fu2018]. At this stage, it appears to be sufficient to emphasise that our spectroscopic results are internally consistent and allow predictive statements to be made on the astrophysical significance of the similarities and/or differences of the chemical abundance patterns in the clusters.

![image](trendsall.pdf){width="\textwidth"}

### Error estimates {#systerr}

To explore the sensitivity of the abundances to the uncertainties in the stellar parameters, we use a method similar to the one employed in @Bergemann2017a. The standard errors are estimated by comparison with the independent stellar parameters for the benchmark stars (Section \[gaiabenchmark\]). These are $\pm\Delta \teff=150$ K, $\pm\Delta \logg=0.3$ dex and $\pm\Delta \feh=0.1$ dex. For $\Vmic$, we use an uncertainty of $\pm~0.2~\kms$. We perturb one parameter at a time by its standard error and re-determine the abundance of an element, while keeping that parameter fixed during the $\chi^2$ optimisation. We then compare the resulting abundance with the estimate obtained from the full solution, when all labels are solved for simultaneously. Table \[tab:sist\] presents the resulting uncertainties for five stars representative of the sample. These differences are added in quadrature and are used as a measure of the systematic error of the abundances, $\Delta X$. The systematic errors derived using this procedure are typically within $0.10$ to $0.15$ dex (Table \[tab:sist\]). The test of internal accuracy (Section \[cvtest\]) suggests that we cannot derive robust Mn abundances for much of the parameter space, because the Mn lines in the HR10 spectra are weak in the metal-poor regime.
Hence, the mean \[Mn/Fe\] ratios are only provided for the two metal-rich clusters NGC 3532 and NGC 5927.

### Abundance spreads in clusters {#trends}

Fig. \[fig:8\] shows the \[Fe/H\], \[Ti/Fe\], and \[Mg/Fe\] abundance estimates in the stars of the OCs and GCs against stellar $\teff$. The uncertainties represent the systematic errors computed as described in Section \[systerr\]. The open cluster NGC 2243 is shown separately in Fig. \[fig:9\], as it shows signatures of atomic diffusion. Of particular interest is the dip of \[Fe/H\] at the cluster TO ($\teff \sim 6400$ K), which is qualitatively consistent with the predictions of stellar evolution models that include radiative acceleration and gravitational settling [e.g. @deal2018]. We leave a detailed exploration of this effect for a future study.

![Abundances as a function of $\teff$ and the $\teff$-$\logg$ diagram for the open cluster NGC 2243. All values are our NLTE results. The isochrones were computed for the age of $3.8$ Gyr from @Anthony-Twarog2005 and $\feh_{\rm NLTE}=-0.52$ dex.[]{data-label="fig:9"}](trends2243.pdf){width="\columnwidth"}

Whereas prominent systematic biases appear to be absent for most clusters, there is some evidence for a small anti-correlation of the $\mgfe$ and/or $\tife$ values with $\teff$ for the moderately metal-poor clusters NGC 1851, NGC 362, M 2, and NGC 6752. These clusters also show a somewhat tilted distribution of stars relative to the isochrones in the $\teff - \log g$ plane (Fig. \[fig:5\], \[fig:6\]), suggesting that the origin of the trends likely lies in the spectral models/method employed in this work. Currently we have no straightforward solution for this effect. The average abundance of a cluster, $<X>$, and the internal dispersion, $\sigma_X$, are computed using a maximum likelihood (ML) approach [@Walker2006; @Piatti2018], in which we take into account the individual abundance uncertainties $\Delta X$ of each star.
We numerically maximise the logarithm of the likelihood $L$, given as: $$\label{likelihood} \ln{L}=-\frac{1}{2} \sum_i^N \ln(\Delta X_i^2+\sigma_X^2) - \frac{1}{2} \sum_i^N \frac {(X_i-<X>)^2}{\Delta X_i^2+\sigma_X^2} -\frac{N}{2}\ln{2\pi}$$ where *N* is the number of stars in a cluster and *X* refers to one of $\feh,~\mgfe,~\tife$ and $\mnfe$. The errors of the mean and of the dispersion are computed from the respective covariance matrices [@Walker2006]. We find that all clusters are homogeneous in $\feh$ and $\tife$ at an uncertainty level of $0.03$ dex. Four clusters (M 15, M 2, NGC 4833, NGC 2808) show a larger scatter in $\mgfe$, at the level of $0.07$ dex or greater. Modest internal dispersions, $\sigma_{\mgfe} \sim 0.04$ dex, are detected in NGC 1904 and NGC 6752. Spreads in light element abundances, including Mg, have already been reported for a number of clusters, including NGC 2808 [@carretta2015], M 2 [@yong2015], NGC 4833 [@Carretta2014a] and M 15 [@carretta2009]. These spreads are typically attributed to multiple episodes of star formation and self-enrichment [see the recent review by @Bastian2018 and references therein]. The estimated internal dispersions are summarised in Table \[tab:newnlte1\]. In the following, to be consistent with the literature, we will focus on the [*observed*]{} intra-cluster dispersion instead of the ML-estimated internal dispersion. We note that the two are not the same: the latter probes the [*intrinsic*]{} dispersion that is not accounted for by the measurement uncertainties, while the former includes both.
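A minimal numerical sketch of this estimator; the toy abundances and the coarse grid search stand in for the actual optimiser and data:

```python
import math

def log_likelihood(x, dx, mean, sigma):
    """ln L from the equation above, for abundances x with errors dx."""
    total = -0.5 * len(x) * math.log(2.0 * math.pi)
    for xi, dxi in zip(x, dx):
        var = dxi ** 2 + sigma ** 2
        total -= 0.5 * (math.log(var) + (xi - mean) ** 2 / var)
    return total

def ml_mean_dispersion(x, dx, n_grid=201):
    """Coarse grid search for the ML mean <X> and intrinsic dispersion sigma_X."""
    lo, hi = min(x), max(x)
    means = [lo + (hi - lo) * i / (n_grid - 1) for i in range(n_grid)]
    sigmas = [(hi - lo) * i / (n_grid - 1) for i in range(n_grid)]
    return max(((m, s) for m in means for s in sigmas),
               key=lambda p: log_likelihood(x, dx, p[0], p[1]))

# Toy cluster: ten stars scattered around -1.50 dex, each with a 0.10 dex error.
x = [-1.42, -1.55, -1.47, -1.58, -1.50, -1.44, -1.53, -1.49, -1.56, -1.46]
dx = [0.10] * len(x)
mean, sigma = ml_mean_dispersion(x, dx)
```

For these toy values, the observed star-to-star scatter is smaller than the adopted measurement errors, so the recovered intrinsic dispersion is zero; this is exactly the distinction between the observed and the intrinsic dispersion drawn above.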
---------- --------- ----------------------- ---------------------- ------------------------ ----------------------- ------------------------ -----------------------
 Cluster    \#stars  $\feh_{\rm NLTE}$ dex   $\feh_{\rm LTE}$ dex   $\mgfe_{\rm NLTE}$ dex   $\mgfe_{\rm LTE}$ dex   $\tife_{\rm NLTE}$ dex   $\tife_{\rm LTE}$ dex
            N        avg std <err>           avg std <err>          avg std <err>            avg std <err>           avg std <err>            avg std <err>
 NGC 3532   12       -0.10 0.02 0.10         -0.09 0.03 0.11        -0.09 0.01 0.09          -0.07 0.01 0.12         0.01 0.03 0.10           0.01 0.03 0.11
 NGC 5927   47       -0.48 0.05 0.16         -0.49 0.05 0.16        0.39 0.04 0.07           0.41 0.05 0.07          0.29 0.06 0.07           0.23 0.05 0.07
 NGC 2243   84       -0.52 0.06 0.08         -0.57 0.07 0.11        0.15 0.07 0.08           0.26 0.09 0.09          0.02 0.08 0.09           0.01 0.09 0.10
 NGC 104    68       -0.74 0.03 0.15         -0.75 0.03 0.17        0.38 0.05 0.08           0.42 0.04 0.08          0.30 0.07 0.08           0.26 0.07 0.08
 NGC 1851   88       -1.11 0.04 0.14         -1.15 0.04 0.15        0.22 0.08 0.10           0.36 0.08 0.08          0.28 0.09 0.11           0.24 0.07 0.10
 NGC 2808   25       -1.01 0.05 0.14         -1.03 0.05 0.15        0.11 0.14 0.08           0.22 0.15 0.06          0.33 0.04 0.07           0.30 0.04 0.07
 NGC 362    62       -1.05 0.04 0.13         -1.09 0.04 0.16        0.15 0.06 0.09           0.26 0.07 0.08          0.29 0.06 0.09           0.26 0.06 0.09
 M 2        78       -1.47 0.06 0.09         -1.54 0.06 0.12        0.17 0.11 0.09           0.34 0.13 0.08          0.23 0.07 0.10           0.25 0.06 0.09
 NGC 6752   110      -1.48 0.06 0.09         -1.56 0.07 0.12        0.20 0.09 0.09           0.35 0.11 0.09          0.17 0.07 0.09           0.23 0.07 0.10
 NGC 1904   44       -1.51 0.05 0.09         -1.60 0.07 0.12        0.16 0.09 0.09           0.31 0.11 0.09          0.21 0.08 0.10           0.24 0.09 0.10
 NGC 4833   33       -1.88 0.06 0.08         -2.08 0.08 0.11        0.18 0.17 0.08           0.36 0.20 0.10          0.22 0.06 0.08           0.24 0.07 0.10
 NGC 4372   45       -2.07 0.06 0.09         -2.34 0.08 0.13        0.31 0.07 0.09           0.51 0.09 0.09          0.20 0.06 0.08           0.22 0.07 0.10
 M 15       46       -2.28 0.06 0.08         -2.58 0.07 0.10        0.22 0.19 0.11           0.36 0.23 0.09          0.21 0.05 0.09           0.19 0.05 0.12
---------- --------- ----------------------- ---------------------- ------------------------ ----------------------- ------------------------ -----------------------

\[tab:newnlte\]

Discussion
==========

Comparison with the literature
------------------------------

In what follows, we discuss our results for the Galactic clusters in the context of their chemical properties. Many literature abundances are given in a “standard” format: mean $\pm$ intra-cluster spread, computed as a simple standard deviation using all measurements in the cluster. In some cases, when the values are not given in this format, we recompute the mean and the standard deviation using the values for individual stars from the literature. Our own results are presented in the same format, with the mean from the ML analysis and the [*observed*]{} intra-cluster spread (not the ML-estimated internal dispersion), and are given in Table \[tab:newnlte\]. We start with the two open clusters and then continue with the globular clusters, in order from the most metal-rich to the most metal-poor one.

### NGC 3532

NGC 3532 is a young nearby metal-rich cluster at a heliocentric distance of $d_\odot \sim 0.5$ kpc [@Clem2011; @Fritzewski2019]. The cluster has been extensively surveyed for white dwarfs [@Dobbie2009; @Dobbie2012], which has allowed an accurate estimate of the cluster age of $\sim 300$ Myr from the white dwarf cooling sequence. On the basis of $12$ main-sequence stars, we find a metallicity of $\feh_{\rm NLTE}=-0.10\ \pm\ 0.02$ dex and $\feh_{\rm LTE} =-0.09\ \pm\ 0.03$ dex. This estimate is consistent, within the uncertainties, with estimates based on the analysis of high-resolution spectra by @santos2012, @Conrad2014c, and @Netopil2017c. @Fritzewski2019 reported a metallicity of $\feh = -0.07\ \pm\ 0.10$ dex using lower-resolution near-IR spectra. Our NLTE abundance ratios suggest that the cluster is moderately $\alpha$-poor, with $\mgfe_{\rm NLTE} = -0.09\ \pm\ 0.01$ dex, although the \[Ti/Fe\] ratio is solar, $\tife_{\rm NLTE} =0.01\ \pm\ 0.03$ dex. The \[Mn/Fe\] ratio is sub-solar, $\mnfe_{\rm NLTE}=-0.16\ \pm\ 0.03$ dex.
### NGC 2243 {#ngc2243}

NGC 2243 is an old Galactic open cluster located below the Galactic plane, at z $=-1.1$ kpc, and at a Galactocentric distance of $10.7$ kpc [@jacobson2011]. The age of the cluster has been determined by several methods, including spectroscopy, CMD isochrone fitting [@Anthony-Twarog2005], and model age-luminosity and age-radius relations for eclipsing binaries [@Kaluzny2006], bracketing $4 \pm 1$ Gyr. The cluster has been the subject of very detailed chemical abundance analyses (see, for example, the review by @Heiter2014). @Gratton1982 and @Gratton1994 derived a spectroscopic metallicity of $\feh = -0.42\ \pm\ 0.05$, as well as detailed chemical abundances of the elements from C to Eu, for a few RGB stars in the cluster. Their estimates were confirmed by @Friel2002 and @jacobson2011, who derived Fe, Ni, Ca, Si, Ti, Cr, Al, Na, and Mg abundances in a small sample of RGB stars. According to the latter study, this is one of the most metal-poor clusters at its location at R$_{\rm GC} \sim 11$ kpc. This cluster has also been observed within the OCCAM APOGEE survey [@cuhna2016]. Their estimates of the abundances in NGC 2243 are somewhat different from those of @jacobson2011, with Mg being $0.14$ dex lower and more subtle differences for the other elements. In contrast to @jacobson2011, @cuhna2016 also find a very large spread of metallicities among the cluster members, ranging from $-0.4$ to $+0.3$ dex. @Franccois2013 reported detailed abundances for the main-sequence and subgiant stars in the cluster. Their $\feh$ of $-0.54\ \pm\ 0.10$ dex is consistent with our NLTE estimate of $\feh_{\rm NLTE}=-0.52\ \pm\ 0.06$ dex. Our estimate of $\tife_{\rm NLTE} =0.02\ \pm\ 0.08$ dex is also in agreement with the value obtained by @Franccois2013, $\tife = 0.20\ \pm\ 0.22$ dex, within the combined uncertainties of both measurements.
In fact, our lower estimate of $\tife$ corroborates the scaled-solar estimates of the other $\alpha$-elements reported by @Franccois2013, \[Ca/Fe\] $=0.00\ \pm\ 0.14$ dex and \[Si/Fe\] $=0.12\ \pm\ 0.20$ dex.

### NGC 5927

NGC 5927 is a metal-rich globular cluster located close to the Galactic plane, at an altitude of $z \sim 0.6$ kpc [@Casetti-Dinescu2007]. With an age of $12$ Gyr [@Dotter2010] and a metallicity of $\sim-0.5$ dex [@muraguzman2018], the cluster is among the oldest metal-rich clusters known in the Galaxy. High-resolution spectroscopy of the cluster revealed the presence of multiple populations, especially prominent in the anti-correlation between Na and O [@Pancino2017; @muraguzman2018]. The latter study also pointed out a similarity in the chemical properties of NGC 5927 and NGC 6440, a metal-rich GC in the Galactic bulge, which could potentially hint at a common origin of the two systems. Our NLTE estimate, $\feh_{\rm NLTE}=-0.48\ \pm\ 0.05$ dex, is in very good agreement with earlier spectroscopic studies [@muraguzman2018: $\feh = -0.47\ \pm\ 0.02$ dex]. However, the abundance ratios are somewhat different. In particular, we find both Mg and Ti to be higher, $\mgfe_{\rm NLTE}=0.39\ \pm\ 0.04$ dex and $\tife_{\rm NLTE}=0.29\ \pm\ 0.06$ dex, compared to the results of the latter study. For Ti, our higher estimate is likely the consequence of NLTE over-ionisation, as our LTE abundance, $\tife_{\rm LTE}=0.23\ \pm\ 0.05$ dex, is consistent with the estimate of $\tife=0.32\ \pm\ 0.05$ dex from @muraguzman2018. In contrast, the difference in the Mg abundance is not related to NLTE. Our LTE Mg abundance is $\mgfe_{\rm LTE}=0.41\ \pm\ 0.05$ dex, which is much higher than that of @muraguzman2018, $\mgfe = 0.27\ \pm\ 0.02$ dex. It is possible that the differences stem from differences in the atomic data and/or model atmospheres. @muraguzman2018 employ the MOOG code, Kurucz model atmospheres, and linelists from @Villanova2011 and references therein.
Our linelists have been extensively updated over the past years; in particular, for the Mg lines we used the data from @PehlivanRhodin2017. We were unable to find the atomic data used in @Villanova2011 and hence cannot provide a detailed analysis of the consistency of the models. Our average \[Mn/Fe\] abundance ratio in NGC 5927 is sub-solar, $\mnfe_{\rm NLTE}=-0.20\ \pm\ 0.03$ dex and $\mnfe_{\rm LTE}=-0.34\ \pm\ 0.03$ dex. This estimate is much lower than $\mnfe=-0.09\ \pm\ 0.08$ dex derived by @muraguzman2018, but this is mostly due to the difference of $-0.16$ dex in the adopted solar abundance (logA(Mn)$_\odot=5.37$ dex and logA(Mn)$_\odot=5.53$ dex, respectively).

### NGC 104 (47 Tuc)

NGC 104 (47 Tuc) is among the brightest and most well-studied clusters in the Milky Way. A recent estimate of the distance to the cluster is $d_\odot = 4.45$ kpc [@Chen2018], which was obtained on the basis of Gaia DR2 parallaxes. The reddening towards the system is very low, $E(B-V)=0.03\ \pm\ 0.10$ mag, allowing an accurate estimate of the cluster age of $\sim 12.5$ Gyr [@brogaard2017]. Chemical abundance patterns, in the form of Na-O anti-correlations, enrichment in He and N, and depletion of C, indicate a complex chemical evolution in the cluster [@Cordero2014; @Kuvcinskas2014; @Marino2016j]. Our NLTE estimate of the cluster metallicity, $\feh_{\rm NLTE}=-0.74\ \pm\ 0.03$ dex, is in very good agreement with previous estimates [@Koch2008f; @Cordero2014; @Dobrovolskas2014; @thygesen2014]. The latter study reports $\feh=-0.78\ \pm\ 0.07$ dex obtained by 1D LTE modelling of Fe lines. The authors also test the effect of NLTE, finding effects of the order of $+0.02$ dex on the Fe abundances. Indeed, this is fully confirmed by our LTE metallicities, which are $0.01$ dex lower than our NLTE results.
For Mg, @thygesen2014 report \[Mg/Fe\] $=0.44\ \pm\ 0.05$ dex in LTE, which is in excellent agreement with our LTE value, $\mgfe_{\rm LTE}=0.42\ \pm\ 0.04$ dex, and is only slightly higher than our NLTE result $\mgfe_{\rm NLTE}=0.38\ \pm\ 0.05$ dex. Also the Ti abundances are consistent with @thygesen2014. We obtain $\tife_{\rm NLTE}=0.30\ \pm\ 0.07$ dex and $\tife_{\rm LTE}=0.26\ \pm\ 0.07$ dex, which agree within the uncertainties with the measured value of \[Ti/Fe\]=$0.28\ \pm\ 0.08$ dex from @thygesen2014.

### NGC 1851

NGC 1851 is a moderately metal-poor globular cluster located at an $R_{\rm GC}$ of 17 kpc from the Galactic centre and $\sim 7$ kpc below the disk plane [@harris 2010 edition]. @wagnerkaiser2017 find a cluster age of $11.5$ Gyr. Some have argued for an evolutionary connection between NGC 1851 and several other clusters (NGC 1904, NGC 2808, and NGC 2298) on the basis of their spatial proximity [@Bellazzini2001], a connection that our abundances, presented below, support. An idea has been put forward that all four clusters are associated with the disrupted Canis Major dwarf galaxy [@martin2004]. Others suggest that NGC 1851 is possibly the nucleus of a disrupted dwarf galaxy [@Bekki2012; @Kuzma2018] or could have formed as a result of the merger of two globular clusters [@carretta2011]. The cluster hosts multiple stellar populations, seen in photometric data on the main sequence, subgiant branch, and RGB [@Milone2008; @turri2015; @Cummings2017]. Also the spectroscopic analysis of C and N suggests the presence of several populations [@Yong2008; @yong2015; @Simpson2017]. Our metallicities for NGC 1851 are slightly higher than those of previous studies. @Gratton2012a find a range of metallicities in the cluster from $\feh=-1.23\ \pm\ 0.06$ dex (subgiant branch) to $\feh=-1.14\ \pm\ 0.06$ dex (RGB).
Our analysis yields $\feh_{\rm NLTE}=-1.11\ \pm\ 0.04$ dex and $\feh_{\rm LTE}=-1.15\ \pm\ 0.04$ dex, whereas @yong2015 report $\feh = -1.28\ \pm\ 0.05$ dex and @Marino2014 obtain $\feh = -1.33\ \pm\ 0.09$ dex. For Mg, we find $\mgfe_{\rm NLTE} =0.22\ \pm\ 0.08$ dex, which is lower than the value reported by @Marino2014, $\mgfe = 0.44\ \pm\ 0.16$ dex. However, this difference can be almost entirely explained by NLTE. Indeed, our LTE estimates of $\mgfe$ are much higher, $\mgfe_{\rm LTE} =0.36\ \pm\ 0.05$ dex, and are also in agreement with the LTE estimates by @carretta2011, $\mgfe = 0.35\ \pm\ 0.03$ dex. For Ti, we find the opposite offset, in the sense that our NLTE values, $\tife_{\rm NLTE} =0.28\ \pm\ 0.06$ dex, are higher than the LTE results by @carretta2011, $\tife=0.17\ \pm\ 0.05$ dex. This can be explained by NLTE, as our LTE abundances of Ti are slightly lower, $\tife_{\rm LTE} =0.24\ \pm\ 0.06$ dex, consistent with the latter study within the combined uncertainties of both LTE measurements. It is interesting, in the context of the common formation scenario of NGC 1851 and NGC 2808, as proposed by @martin2004, that our chemical abundances in the two clusters are very similar. In fact, given the uncertainties of our measurements, both clusters are consistent with being formed from the same material, and having the same progenitor system.

### NGC 2808

NGC 2808, a moderately metal-poor old cluster, is among the most massive and complex systems in the Milky Way galaxy [@Simioni2016], with a mass of $7.42 \times 10^5$ M$_\odot$ [@Baumgardt2018] and multiple populations [@Piotto2007; @Milone2015]. NGC 2808 was among the first clusters for which a prominent Na-O anti-correlation was reported [@Carretta2006], along with a He spread [@DAntona2005] and a Mg-Al anti-correlation [@Carretta2006b]. Our LTE metallicity, $\feh_{\rm LTE} =-1.03\ \pm\ 0.05$ dex, is slightly higher than the recent literature values.
@carretta2015 report $\feh=-1.13\ \pm\ 0.03$ dex using the Fe I lines and $\feh=-1.14\ \pm\ 0.03$ dex using the Fe II lines. They also find a large spread in \[Mg/Fe\] abundance ratios, which is corroborated by our results. In particular, we find that the individual LTE abundance ratios of \[Mg/Fe\] range from $0.08$ to $0.45$ dex, and the average value and dispersion, $\mgfe_{\rm LTE}=0.22\ \pm\ 0.15$ dex, are consistent with $\mgfe=0.26\ \pm\ 0.16$ dex obtained by @carretta2015. For Ti, our estimate $\tife_{\rm LTE}=0.29\ \pm\ 0.04$ dex is slightly higher than $\tife=0.21\ \pm\ 0.04$ dex derived by @carretta2015. Our NLTE measurements are: $\feh_{\rm NLTE} =-1.01\ \pm\ 0.05$ dex, $\mgfe_{\rm NLTE} =0.11\ \pm\ 0.14$ dex, and $\tife_{\rm NLTE} =0.33\ \pm\ 0.04$ dex.

### NGC 362

The globular cluster NGC 362 has been extensively studied in the literature since the early work by @Menzies1967. A recent analysis of Gaia DR2 astrometric data by @Chen2018 places it at a heliocentric distance of $8.54$ kpc, relatively close to the Galactic disk plane. Photometric studies of the cluster revealed multiple sequences on the HB [@Dotter2010; @Gratton2010a; @Piotto2012]. Spectroscopic follow-up confirmed its unique nature, with discrete groups of Na/O ratios [@carretta2013], a bimodal distribution of CN [@Smith2009; @Lim2016], and a very large spread of Al abundances. Our NLTE metallicity for this cluster, $\feh_{\rm NLTE}=-1.05\ \pm\ 0.04$ dex, is somewhat higher than the results of earlier studies. Our LTE estimate is lower, $\feh_{\rm LTE} =-1.09\ \pm\ 0.04$ dex, and is consistent with the RR Lyr-based value from @Szekely2007. A very careful analysis of high-resolution spectra by @Worley2010 yielded $\feh = -1.20\ \pm\ 0.09$ dex (from the Fe II lines), which is consistent within the uncertainty with our LTE estimate. A somewhat lower value is reported by @dorazi2015. They find $\feh$ of $-1.26$ dex from the LTE analysis of RGB stars.
Perhaps the most extensive chemical study of the cluster to date is that by @carretta2013, employing UVES and Giraffe spectra of 138 RGB stars. For the UVES sample, they find a mean LTE metallicity of $\feh =-1.17\ \pm\ 0.05$ dex from the Fe I lines and $\feh =-1.21\ \pm\ 0.08$ dex from the Fe II lines, in agreement with our LTE metallicity. Their abundances of \[Ti/Fe\] ($0.22\ \pm\ 0.04$ dex based on the UVES spectra) and \[Mg/Fe\] ($0.33\ \pm\ 0.04$ dex) are also in good agreement with our LTE estimates, $\tife_{\rm LTE} =0.26\ \pm\ 0.06$ dex and $\mgfe_{\rm LTE} =0.26\ \pm\ 0.06$ dex. In contrast, our NLTE values are considerably different, $\tife_{\rm NLTE} =0.29\ \pm\ 0.06$ dex and $\mgfe_{\rm NLTE} =0.15\ \pm\ 0.06$ dex. To the best of our knowledge, this paper is the first study to provide estimates of NLTE abundances in this cluster.

### M2 (NGC 7089)

M2 is an old cluster in the Galactic halo at a distance of $\sim 7$ kpc below the Galactic plane and at a heliocentric distance of 11.5 kpc [@harris 2010 edition]. The cluster was the first system in which a CN distribution bimodality was detected [@Smith1990; @Lardo2012; @Lardo2013]. @Yong2014 argued for a trimodal metallicity distribution, which has, however, been disputed by @Lardo2016, who found a bimodal distribution using Fe II lines. @Milone2015 employed HST photometry to detect a very rich stellar environment, composed of three main populations distinguished by their metallicity and a spread in He abundance from the primordial mass fraction of $Y\sim 0.25$ to $Y\sim 0.31$. They also suggest that there are six sub-populations with unique light element abundance patterns, which could potentially hint at either an independent enrichment and star formation history of the individual components or at a unique merger formation history of the cluster.
The imaging data by @Kuzma2016 further strengthen the latter interpretation, by demonstrating a diffuse stellar envelope that could possibly indicate that the GC is a stripped dSph nucleus. We find a modest metallicity spread in the cluster, with $\feh_{\rm NLTE}=-1.47\ \pm\ 0.06$ dex. Our LTE result $\feh_{\rm LTE}=-1.54\ \pm\ 0.06$ dex is in good agreement with the previous measurements, in particular with @Lardo2016, who derive $\feh =-1.50\ \pm\ 0.05$ dex for the metal-poor component, using Fe II lines. @Yong2014 report three groups with \[Fe/H\] ranging from $-1.66\ \pm\ 0.06$ dex to $-1.02\ \pm\ 0.06$ dex, as derived from the Fe II lines. It should be noted, however, that @Lardo2016 suggest that the metal-rich component may not constitute more than 1% of the cluster population. As for abundance ratios, comparing our LTE estimates with @Yong2014, we find good agreement in Mg, with $\mgfe_{\rm LTE} =0.34\ \pm\ 0.13$ dex, which compares well with their estimate of $0.38\ \pm\ 0.08$ dex. Yet, similar to the other clusters, our NLTE abundance of Mg is lower, $\mgfe_{\rm NLTE} =0.17\ \pm\ 0.11$ dex. We obtain $\tife_{\rm NLTE} = 0.23\ \pm\ 0.07$ dex in NLTE, and $\tife_{\rm LTE} =0.25\ \pm\ 0.06$ dex in LTE, both lower than the estimate derived by @Yong2014, $\tife=0.31\ \pm\ 0.12$ dex. We note, however, that their approach leads to a significant ionisation imbalance of Ti I - Ti II in the two groups, and it is not clear which of the estimates is more reliable. Our measurement of \[Ti/Fe\] is more consistent with their estimate based on the Ti II lines.

### NGC 6752

NGC 6752 is one of the benchmark globular clusters in our Galaxy. Its proximity, $d_\odot = 4.0$ kpc [@harris 2010 edition], allows a detailed spectroscopic and photometric analysis of the cluster members. The cluster has been extensively observed with the VLT [e.g. @Carretta2007; @Gruyters2014; @Lee2018] and with HST [e.g. @Ross2013; @Gruyters2017; @Milone2019].
In particular, deep narrow-band photometric observations have been essential to probe the substructure of this system, with multiple stellar populations identified on the RGB and MS [@Dotter2015; @Lee2018; @Milone2019]. A detailed chemical analysis of the cluster members was presented in different studies. The analysis of high-resolution UVES spectra of 38 RGB stars in NGC 6752 by @Yong2005 showed a prominent $\alpha$-enhancement, $\mgfe=0.47\ \pm\ 0.06$ dex, and an iron abundance of $\feh=-1.56\ \pm\ 0.10$ dex. Both of these estimates are fully consistent with our LTE results of $\feh_{\rm LTE}=-1.56\ \pm\ 0.07$ dex and $\mgfe_{\rm LTE}=0.35\ \pm\ 0.11$ dex. Furthermore, their LTE estimate of the Ti abundance, $\tife=0.14\ \pm\ 0.14$ dex, is consistent with our LTE value, $\tife_{\rm LTE}=0.23\ \pm\ 0.07$ dex. Our sample is larger than that of @Yong2005 and comprises $110$ stars at the base of the RGB, which may account for minor differences between their results and ours. On the other hand, our somewhat larger dispersion in abundance ratios is probably not an artefact, as large intra-cluster abundance spreads have also been reported by @Yong2013 from the analysis of high-resolution spectra of RGB stars. Our NLTE estimates are slightly different, but they follow the general trends identified for other metal-poor clusters. The NLTE metallicity is slightly higher, $\feh_{\rm NLTE}=-1.48\ \pm\ 0.06$ dex, whereas the NLTE $\mgfe$ ratio is correspondingly lower, $\mgfe_{\rm NLTE}=0.20\ \pm\ 0.09$ dex.

### NGC 1904 (M79)

NGC 1904 is a metal-poor globular cluster at $d_\odot =$ 12.9 kpc and 6.3 kpc below the Galactic plane [@harris 2010 edition]. @kains2012 employed variable stars to determine an accurate distance to the cluster, 13.4$\pm$0.4 kpc. The age of the system is 14.1$\pm$2.1 Gyr [@li2018]. Similar to NGC 1851, the outskirts of NGC 1904 reveal prominent streams, signifying its possible accretion origin [@Carballo-Bello2018; @Shipp2018].
Our NLTE metallicity of the cluster is $\feh_{\rm NLTE}=-1.51\ \pm\ 0.05$ dex. This is consistent, modulo the LTE$-$NLTE difference of $-0.07$ dex, with the value reported by @carretta2009, $\feh=-1.58\ \pm\ 0.03$ dex. Also their LTE Mg abundance, $\mgfe=0.28\ \pm\ 0.06$ dex, is in good agreement with our LTE value of $\mgfe_{\rm LTE}=0.31\ \pm\ 0.11$ dex. Our NLTE estimate is $\mgfe_{\rm NLTE}=0.16\ \pm\ 0.09$ dex, which is lower than the LTE value. The cluster is also enriched in \[Ti/Fe\]. We find $\tife_{\rm NLTE}=0.21\ \pm\ 0.08$ dex and $\tife_{\rm LTE}=0.24\ \pm\ 0.09$ dex, and the latter is consistent with the LTE results obtained by @Fabbian2005, $\tife=0.31\ \pm\ 0.15$ dex.

### NGC 4833

The cluster is arguably one of the oldest systems in the Milky Way, with an age of 13.5 Gyr [@wagnerkaiser2017]. Its location at $d_\odot = 6.6$ kpc, $\sim 1$ kpc away from the disk plane [@harris 2010 edition], and its orbital eccentricity are consistent with the cluster being part of the inner halo system [@Carretta2010b]. The cluster is thought to host multiple populations [@Carretta2014a], based on chemical signatures. A detailed spectroscopic analysis of the cluster has been performed by several groups. @Carretta2014a employed UVES and Giraffe spectra of 78 stars to determine the abundances of $20$ elements from Na to Nd. They obtained relatively small dispersions for the majority of elements, including Fe. In contrast, they also found very pronounced Na-O and Mg-Na anti-correlations and a large intra-cluster variation in the abundances of light elements. Specifically, the \[Mg/Fe\] abundance ratios in the cluster range from slightly sub-solar, \[Mg/Fe\] $\sim -0.05$ dex, to highly super-solar values, \[Mg/Fe\] $\gtrapprox 0.7$ dex. Another high-resolution study of the cluster was presented by @roediger2015, who obtained high $\snr$ spectra with the MIKE spectrograph at the Magellan II telescope.
Their estimates of elemental abundances are somewhat different from @Carretta2014a. In particular, they report $\feh=-2.25\ \pm\ 0.02$ dex from the neutral Fe lines, and $\feh=-2.19\ \pm\ 0.01$ dex from the ionised Fe lines, attributing the differences with respect to @Carretta2014a to the technical aspects of the analysis, such as the linelist and the solar reference abundances. In terms of abundance inhomogeneities and correlations, their study is consistent with @Carretta2014a, with pronounced star-to-star variations in the light elements and signatures of bimodality in Na, Al, and Mg. Our LTE estimates of metallicity and abundance ratios are consistent with the literature estimates. In particular, we find $\feh_{\rm LTE}=-2.08\ \pm\ 0.08$ dex and $\mgfe_{\rm LTE}=0.36\ \pm\ 0.20$ dex, which can be compared to $\feh=-2.04\ \pm\ 0.02$ dex and $\mgfe = 0.36\ \pm\ 0.15$ dex derived by @Carretta2014a from the Giraffe spectra. We also confirm that there is negligible internal dispersion in Ti abundances, with $\tife_{\rm LTE}$ of $0.24\ \pm\ 0.07$ dex, consistent with the @Carretta2014a estimate of $\tife = 0.17\ \pm\ 0.02$ dex. On the other hand, our NLTE abundances are considerably different. For Fe, we infer $\feh_{\rm NLTE}=-1.88\ \pm\ 0.06$ dex, which is higher than $\feh_{\rm LTE}=-2.08\ \pm\ 0.08$ dex. Also, the \[Mg/Fe\] ratios are much lower, $\mgfe_{\rm NLTE}=0.18\ \pm\ 0.17$ dex, with the abundances in the individual stars ranging from $-0.03$ to $0.70$ dex. The NLTE Ti abundances are only slightly higher than the LTE estimates, $\tife_{\rm NLTE} =0.22\ \pm\ 0.06$ dex.

### NGC 4372

Remarkable for its strong chemical peculiarities, NGC 4372 is nonetheless a rather typical GC system. It is a metal-poor, nearby cluster, with an age of 12.5 Gyr [@kruijssen2018], at a distance of $5.8$ kpc and $1.0$ kpc below the Galactic plane [@harris 2010 edition].
The cluster reveals a significant dispersion in Na, Mg, Al, and O, a Na-O anti-correlation, and, possibly, an Al-Mg anti-correlation [@sanroman2015]. Our average NLTE metallicity of stars in NGC 4372 is $-2.07\ \pm\ 0.06$ dex. Our LTE metallicity is much lower, $\feh_{\rm LTE} =-2.33\ \pm\ 0.08$ dex, following the general trend for all metal-poor clusters seen in Fig. \[fig:4\]. Comparing the latter estimate with the literature, we find a satisfactory agreement with a comprehensive study by @sanroman2015, which is also based on the spectra acquired within the Gaia-ESO survey. Their estimate of $\feh$ is $-2.23\ \pm\ 0.10$ dex[^12], consistent with our results within the combined uncertainties of both estimates. Also the value from @carretta2009, $\feh = -2.19\ \pm\ 0.08$ dex, is somewhat higher than our LTE metallicity. The detailed abundance ratios of our study are also in agreement with those measured by @sanroman2015. We obtain $\mgfe_{\rm LTE} =0.51\ \pm\ 0.09$ dex and $\tife_{\rm LTE} =0.22\ \pm\ 0.07$ dex in LTE, whereas @sanroman2015 derive $\mgfe = 0.44\ \pm\ 0.07$ dex and $\tife = 0.31\ \pm\ 0.03$ dex. Our NLTE estimates are $\mgfe_{\rm NLTE} =0.31\ \pm\ 0.07$ dex and $\tife_{\rm NLTE} =0.20\ \pm\ 0.06$ dex.

### M15 (NGC 7078)

Similar to NGC 1904 and NGC 4833, M15 represents one of the oldest and most metal-poor systems in the Galactic halo, at a distance of $d_\odot = 10.4$ kpc and $4.8$ kpc below the Galactic plane [@harris 2010 edition]. Several studies report multiple stellar populations in the cluster [@Larsen2015; @Nardiello2018; @Bonatto2019]. M15 has the lowest metallicity in our sample and shows the largest NLTE effects: $\feh_{\rm NLTE}=-2.28\ \pm\ 0.06$ dex, but $\feh_{\rm LTE}=-2.58\ \pm\ 0.07$ dex. Our LTE estimate compares well with @sobeck2011, who derived $\feh = -2.62\ \pm\ 0.08$ dex[^13] from the analysis of high-resolution spectra of several RGB and RHB stars in the cluster collected with the HIRES spectrograph at the Keck telescope.
@Worley2013 report $\feh$ in the range from $-2.4$ to $-2.3$ dex with an uncertainty of $0.1$ dex, which is closer to the estimate of $\feh=-2.37$ dex derived by @letarte2006 and $\feh=-2.32$ dex by @carretta2009. Our average LTE abundance of Mg is $\mgfe_{\rm LTE}=0.36\ \pm\ 0.23$ dex, with star-to-star variations ranging from $-0.26$ to $0.66$ dex. This is consistent with @carretta2009, within the uncertainties, and also with the abundances derived by @sobeck2011, who measured \[Mg/Fe\] ratios from $-0.01$ to $0.6$ dex. In contrast, the cluster stars exhibit very tight \[Ti/Fe\] ratios, with a mean of $\tife_{\rm LTE}=0.19\ \pm\ 0.05$ dex. Our NLTE results for Mg are much lower than the LTE ones, $\mgfe_{\rm NLTE}=0.22\ \pm\ 0.19$ dex, whereas the NLTE Ti abundances are nearly consistent with LTE, $\tife_{\rm NLTE}=0.21\ \pm\ 0.05$ dex.

![Toomre diagram for clusters and @Bensby2014 field stars. The thin disk population is shown in red colour, the thick disk population in green colour, and the halo population in blue colour. The isolines for total velocity $V_{\rm tot}=\sqrt{U_{\rm LSR}^2+V_{\rm LSR}^2+W_{\rm LSR}^2}=100,\ 200,\ 300~\kms$ are shown as dotted lines. For the details of population assignment see Appendix \[distances\].[]{data-label="fig:fig8"}](toomre1.pdf){width="\columnwidth"}

![image](clustfieldmg.pdf){width="\textwidth"}

![image](clustfieldti.pdf){width="\textwidth"}

Comparison with Milky Way field stars
-------------------------------------

It is useful to combine our chemical characterisation of the clusters with their kinematics, in order to compare our results with Galactic field stars. We employ the kinematic selection criteria from @Bensby2014 to assign Galactic population membership to the clusters (see Appendix \[distances\]). The Toomre diagram for the clusters and field stars is shown in Fig. \[fig:fig8\]. In Fig.
\[fig:fig9\] and \[fig:fig10\], we overlay our LTE and NLTE abundance ratios in the clusters with the literature measurements in the Galactic field stars. The field sample is taken from @Bensby2014 and @bergemann2017b. The former dataset represents populations in the solar neighbourhood and has a large coverage in metallicity, $-2.7 \lesssim \feh \lesssim 0.5$. The Fe abundances were derived in NLTE, while Mg and Ti were derived in LTE. The @bergemann2017b dataset lacks a thin-disk component ($\feh > -0.5$), but contains a significant fraction of thick-disk and halo stars. The study provides LTE and NLTE estimates of $\feh$ and $\mgfe$ derived using 1D and $\langle$3D$\rangle$ atmospheric models. For consistency with our 1D analysis, we use their 1D LTE and 1D NLTE results. Several important results stand out when comparing our LTE and NLTE measurements in clusters against Galactic field stars. Firstly, our LTE abundances in GCs trace the Galactic field population remarkably well, at least as long as LTE field distributions are employed for the comparison. This supports the conclusions drawn by @Pritzl2005. NGC 3532 and NGC 2243, the two metal-rich clusters with disk-like kinematic properties, occupy the chemical locus of the thin disk. The metal-poor globular clusters trace the thick disk and the halo. Despite a difference of two orders of magnitude in metallicity, all metal-poor GCs follow very tight trends of the average \[Mg/Fe\] and \[Ti/Fe\] with \[Fe/H\]. In particular, all of them occupy the locus situated at \[Ti/Fe\] $\approx 0.25$ dex with small dispersion. On the other hand, the intra-cluster dispersions of \[Mg/Fe\] are substantially larger. This is not unexpected and has been extensively discussed in the literature [@Gratton2004; @Carretta2014a; @Carretta2014b]. The large variation of Mg abundances is usually attributed to the nuclear processing associated with high-temperature hydrogen burning and multiple star formation episodes.
In such a scenario, first-generation massive stars evolve fast, converting their Mg into Al. Second-generation stars, formed from the material of the first-generation stars, are depleted in Mg and enriched in Al. The absence of any noticeable dispersion in \[Ti/Fe\] in all GCs corroborates this interpretation. Notwithstanding the good agreement of our LTE results with earlier LTE studies, we find important differences between LTE and NLTE results (Fig. \[fig:fig10\]), which impact their astrophysical interpretation. When comparing our NLTE abundances for globular clusters with the NLTE abundances of field stars, only two metal-rich clusters with thick-disk kinematics (NGC 104 and NGC 5927) and the metal-poor cluster NGC 4372 appear to be consistent with the field stars. All other metal-poor clusters are systematically depleted in \[Mg/Fe\] relative to the metal-poor disk and the halo. This may imply that the metal-poor clusters were not formed *in-situ*, but were accreted from disrupted dwarf satellite galaxies.

Conclusions {#Conclusions}
===========

In this work, we employ non-LTE radiative transfer models and the Payne code to determine chemical abundances for 13 stellar clusters in the Milky Way. The observed spectra are taken from the public 3$^{rd}$ data release of the Gaia-ESO survey, and we focus on the $R\sim 19\,800$ spectra taken with the Giraffe instrument. The NLTE synthetic spectra are computed using the model atoms presented in earlier works [@bergemann2008; @Bergemann2011; @Bergemann2012c; @Bergemann2017a]. *The Payne* code is used to interpolate in the grids of synthetic spectra to maximise the efficiency of the analysis; we simultaneously fit for all spectral parameters, exploiting the information contained in the full spectrum.
The spectral grids are computed at random nodes in stellar parameter space, and a $\chi^2$ minimisation is employed to find the best-fit stellar parameters and chemical abundances by comparing the models with the observations. We validate our method and the models on the Gaia-ESO benchmark stars, for which stellar parameters are well constrained by parallaxes, asteroseismology, and interferometric angular diameter measurements. The calibration sample includes $19$ main-sequence dwarfs, subgiants, and red giants in the $\feh$ range from $-2.5$ to $0.3$ dex with spectra taken at different exposure times, spanning the $\snr$ range of 100 to 2800 Å$^{-1}$. We find a very good agreement between our NLTE spectroscopic results and the independently determined stellar parameters. The residuals are within $-29 \pm 88$ K in $\teff$, $0.09\ \pm\ 0.16$ dex in $\logg$, and $0.02\ \pm\ 0.09$ dex in $\feh$. The analysis of repeat observations of the same stars indicates the absence of a systematic bias or correlation of the abundance error with the quality of the spectra within the full range of $\snr$ probed in this work. We compute stellar parameters and abundances for $742$ stars in two open clusters and $11$ globular clusters in the Milky Way galaxy. The results are provided in Table \[tab:catalog\] and are archived electronically on CDS. The typical $\snr$ of the spectra is 200 Å$^{-1}$. We find that spectroscopic estimates of stellar parameters ($\teff$, $\log g$, and $\feh$) agree with evolutionary expectations based on isochrones. However, different isochrones are needed to match the LTE and NLTE data. At low metallicity, the difference between LTE and NLTE parameters is significant, confirming earlier studies [e.g. @Bergemann2012c; @lind2012; @ruchti2013]. The systematic error of LTE increases steadily with decreasing metallicity, and amounts to $300$ K in $\teff$, $0.6$ dex in $\log g$, and $0.3$ dex in $\feh$ for the RGB stars with $\feh_{\rm NLTE} = -2.3$ dex.
The $\mgfe$ abundance ratios are typically lower in NLTE compared to LTE. Our abundances show no significant trends with stellar parameters, supporting their relative accuracy. Our results for the Galactic open and globular clusters can be summarised as follows:

- NGC 3532, a young metal-rich open cluster, is consistent in its chemical abundance pattern and its kinematics with the Galactic thin disk. The cluster is slightly depleted in Mg compared to the solar neighbourhood, although the difference is generally within the uncertainties of the abundance measurements.

- NGC 2243, a relatively old open cluster, lies on the metal-poor end of the thin disk track, and shows a noticeable dispersion in $\feh$, $\mgfe$, and $\tife$ ratios, contrasting with the tight chemical patterns in the field stars. This is the only cluster in our sample that is represented by main-sequence and TO stars, and this spread likely has an astrophysical origin. In particular, the pronounced dip in $\feh$ at the TO signifies the action of atomic diffusion, consistent with the depletion predicted by detailed stellar evolution models.

- Two metal-rich clusters with thick-disk-like kinematics, NGC 104 and NGC 5927, are also very similar to the thick disk in their abundance ratios of \[Mg/Fe\] and \[Ti/Fe\]. They show small dispersions ($\lesssim 0.06$ dex) in all elements, which are much smaller than the typical systematic uncertainties of our measurements, and are consistent with being chemically homogeneous populations.

- The metal-poor clusters NGC 2808 and NGC 6752, despite being kinematically similar to the thick disk, appear to be depleted in \[Mg/Fe\] compared to the field stars, based on NLTE analysis. On the other hand, their \[Ti/Fe\] ratios are representative of the halo clusters.
- NLTE analysis suggests that the majority of metal-poor clusters with \[Fe/H\] $<-1$ dex and halo-like kinematics show a prominent, $\sim 0.15$ dex, depletion of \[Mg/Fe\] compared to field stars of the same metallicity. This may indicate their [*ex situ*]{} formation history.

- NGC 2808 and NGC 1851 exhibit remarkably similar chemical abundance patterns and overlap in metallicity, which reinforces the evidence for their common origin proposed in the literature.

- Large intra-cluster spreads in \[Mg/Fe\], compared to the field population, are seen in the clusters M 2, NGC 2808, NGC 4833, and M15, corroborating the long-postulated scenario that globular clusters have undergone multiple episodes of star formation and self-enrichment. On the other hand, the clusters are homogeneous in \[Ti/Fe\].

- The metal-poor globular cluster NGC 4372 stands out in comparison with the other globular clusters of similar metallicity. Its \[Mg/Fe\] spread is relatively small, consistent with the study by @sanroman2015. Given our standard abundance uncertainties of $\sim 0.1$ dex, which exceed the intra-cluster dispersion, the cluster is homogeneous in \[Fe/H\], \[Mg/Fe\], and \[Ti/Fe\].

- For M15 and NGC 4833, which are the most metal-poor clusters in our sample, we find strong evidence for a multi-modality in \[Mg/Fe\]. However, our samples are too small to draw statistically robust conclusions on whether these clusters host two or more sub-populations.

The combination of NLTE models and [*the Payne*]{} is a powerful tool for homogeneous analysis of the stellar parameters and chemical abundances. Our results for a large sample of stars spanning a wide range of metallicity suggest that NLTE effects are significant in the metal-poor regime ($\feh<-1$) and should always be taken into account. We thank Nikolay Kacharov, Diane Feuillet and David Hogg for valuable discussions. We thank the anonymous referee for useful suggestions.
Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 188.B-3002. This work has made use of data from the European Space Agency (ESA) mission [*Gaia*]{} (<https://www.cosmos.esa.int/gaia>), processed by the [*Gaia*]{} Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the [*Gaia*]{} Multilateral Agreement. This research has made use of the WEBDA database, operated at the Department of Theoretical Physics and Astrophysics of the Masaryk University. We acknowledge support by the Collaborative Research centre SFB 881 (Heidelberg University) of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation). YST is supported by the NASA Hubble Fellowship grant HST-HF2-51425.001 awarded by the Space Telescope Science Institute.

Supplementary tables
====================

In Table \[tab:sist\] we list the sensitivity of the measured abundance ratios to the typical errors in atmospheric parameters. We run the analysis with one parameter fixed to a perturbed value, allowing the code to fit the others. We list the quadratic sum of the individual sensitivities in the last row as the total systematic error. The results are given for one star from the open cluster NGC 2243, two stars in the metal-rich cluster NGC 104, and two stars in the metal-poor cluster M 15. In Table \[tab:gbs\], we provide NLTE stellar parameters for the highest $\snr$ spectra of 19 Gaia benchmark stars along with reference values from @Jofre2015. The last row indicates the mean difference between the reference values and our fitted values. In Table \[tab:gcinfo\], we list cluster parameters from the literature, including equatorial coordinates, heliocentric distances, reddening, mean RV, age, and $\feh$.
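The "total" row of Table \[tab:sist\] described above is the quadrature sum of the individual sensitivities. As a minimal sketch (the sensitivity values in the example are invented and not taken from the table):

```python
import math

def total_systematic_error(sensitivities):
    """Quadrature sum of the abundance changes caused by perturbing
    one atmospheric parameter at a time."""
    return math.sqrt(sum(s ** 2 for s in sensitivities))

# Example with made-up sensitivities to Teff, logg perturbations:
print(total_systematic_error([0.03, 0.04]))  # 0.05
```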
In Table \[tab:newnlte1\], we provide maximum likelihood estimates of the cluster average abundances and internal dispersions (more precisely, abundance dispersion of the cluster that is not accounted for by the abundance measurement systematic error) with associated errors. In Table \[tab:catalog\], we provide NLTE/LTE stellar parameters and abundances with systematic errors for all $742$ stars in the cluster sample. [lccc]{} star/parameter&$\Delta$\[Fe/H\] &$\Delta$\[Mg/Fe\] &$\Delta$\[Ti/Fe\]\ & dex & dex & dex\ \ $\teff$ +150 K& 0.08 & -0.01 &0.01\ $\logg$ +0.3 dex&0.07 & -0.06 &0.08\ $\feh$ +0.1 dex& $\cdots$ & -0.02 &0.01\ $\Vmic$ +0.2 $\kms$& 0.01 & 0.02 &-0.01\ total & 0.10&0.06&0.07\ \ $\teff$ +150 K& 0.11 & -0.07 &0.04\ $\logg$ +0.3 dex& 0.08 & -0.10 &0.08\ $\feh$ +0.1 dex& $\cdots$ & -0.05 &0.02\ $\Vmic$ +0.2 $\kms$&-0.03 & 0.02 &-0.04\ total & 0.14&0.14&0.10\ \ $\teff$ +150 K& 0.13 & -0.08 &0.03\ $\logg$ +0.3 dex& 0.08 & -0.09 &0.04\ $\feh$ +0.1 dex& $\cdots$ & -0.04 &-0.01\ $\Vmic$ +0.2 $\kms$&-0.04 & 0.02 &-0.02\ total & 0.16&0.12&0.05\ \ $\teff$ +150 K& 0.10 & -0.05 &-0.01\ $\logg$ +0.3 dex& 0.01 & -0.01 &0.10\ $\feh$ +0.1 dex& $\cdots$ & -0.05 &-0.03\ $\Vmic$ +0.2 $\kms$&-0.04 &0.02 &-0.01\ total & 0.11&0.08&0.11\ \ $\teff$ +150 K& 0.10 & -0.02 &0.02\ $\logg$ +0.3 dex& 0.06 & -0.01 &0.06\ $\feh$ +0.1 dex& $\cdots$ & -0.03 &0.01\ $\Vmic$ +0.2 $\kms$&0.06 &-0.01 & 0.03\ total & 0.13&0.03&0.07\ ----------------- --------------------- ----------------------- ------------------------- ----------------------- Star $\teff$,K $\logg$,dex $\feh$,dex $\Vmic$, $\kms$ fit, ref fit, ref fit, ref fit, ref HD107328 4384, 4496 $\pm$ 59 1.90, 2.09 $\pm$ 0.14 -0.60, -0.38 $\pm$ 0.16 1.71, 1.65 $\pm$ 0.26 HD220009 4336, 4275 $\pm$ 54 1.86, 1.47 $\pm$ 0.14 -0.79, -0.79 $\pm$ 0.13 1.42, 1.49 $\pm$ 0.14 ksiHya 5045, 5044 $\pm$ 38 3.01, 2.87 $\pm$ 0.02 -0.05, 0.11 $\pm$ 0.20 1.54, 1.40 $\pm$ 0.32 muLeo 4462, 4474 $\pm$ 60 2.45, 2.51 $\pm$ 0.09 0.01, 0.20 $\pm$ 0.15 1.54, 
1.28 $\pm$ 0.26 HD122563 4771, 4636 $\pm$ 37 1.29, 1.42 $\pm$ 0.01 -2.56, -2.52 $\pm$ 0.11 2.53, 1.92 $\pm$ 0.11 HD140283 5888, 5787 $\pm$ 48 3.63, 3.57 $\pm$ 0.12 -2.39, -2.34 $\pm$ 0.03 2.16, 1.56 $\pm$ 0.20 delEri 5006, 4954 $\pm$ 26 3.61, 3.75 $\pm$ 0.02 -0.00, 0.01 $\pm$ 0.05 1.15, 1.10 $\pm$ 0.22 epsFor 5070, 5123 $\pm$ 78 3.28, 3.52 $\pm$ 0.07 -0.65, -0.65 $\pm$ 0.10 1.14, 1.04 $\pm$ 0.13 18Sco 5838, 5810 $\pm$ 80 4.32, 4.44 $\pm$ 0.03 0.02, -0.02 $\pm$ 0.03 1.27, 1.07 $\pm$ 0.20 alfCenB 5167, 5231 $\pm$ 20 4.33, 4.53 $\pm$ 0.03 0.14, 0.17 $\pm$ 0.10 1.06, 0.99 $\pm$ 0.31 muAra 5743, 5902 $\pm$ 66 4.05, 4.30 $\pm$ 0.03 0.22, 0.30 $\pm$ 0.13 1.32, 1.17 $\pm$ 0.13 betVir 6259, 6083 $\pm$ 41 4.06, 4.10 $\pm$ 0.02 0.18, 0.19 $\pm$ 0.07 1.51, 1.33 $\pm$ 0.09 epsEri 5079, 5076 $\pm$ 30 4.54, 4.60 $\pm$ 0.03 -0.14, -0.14 $\pm$ 0.06 1.11, 1.14 $\pm$ 0.05 etaBoo 6183, 6099 $\pm$ 28 3.84, 3.80 $\pm$ 0.02 0.27, 0.27 $\pm$ 0.08 1.52, 1.52 $\pm$ 0.19 HD22879 5907, 5868 $\pm$ 89 3.98, 4.27 $\pm$ 0.03 -0.80, -0.91 $\pm$ 0.05 1.24, 1.05 $\pm$ 0.19 HD49933 6718, 6635 $\pm$ 91 4.16, 4.20 $\pm$ 0.03 -0.36, -0.46 $\pm$ 0.08 1.51, 1.46 $\pm$ 0.35 HD84937 6481, 6356 $\pm$ 97 3.91, 4.15 $\pm$ 0.06 -2.00, -1.99 $\pm$ 0.02 1.76, 1.39 $\pm$ 0.24 Procyon 6686, 6554 $\pm$ 84 3.91, 3.99 $\pm$ 0.02 0.03, -0.04 $\pm$ 0.08 1.83, 1.66 $\pm$ 0.11 tauCet 5349, 5414 $\pm$ 21 4.26, 4.49 $\pm$ 0.01 -0.52, -0.54 $\pm$ 0.03 1.00, 0.89 $\pm$ 0.28 <ref-fit> -29 $\pm$ 88 0.09 $\pm$ 0.16 0.02 $\pm$ 0.09 -0.16 $\pm$ 0.18 ----------------- --------------------- ----------------------- ------------------------- ----------------------- Cluster $\alpha$,deg $\delta$,deg $d_{\odot}$, kpc E(B-V) <RV>, $\kms$ Age, Gyr $\feh$,dex --------------- -------------- -------------- ------------------ ------------ -------------------- ---------- ------------ NGC 3532 (oc) 166.4125 -58.7533 0.5 0.03 4.3 0.3 -0.01 NGC 5927 (gc) 232.0029 -50.6730 7.7 0.45 -100.5 11.9 -0.48 NGC 2243 (oc) 97.3917 -31.2833 4.5
0.05 59.8 3.8 -0.57 NGC 104 (gc) 6.0224 -72.0815 4.5 0.04 -18.7 12.5 -0.75 NGC 1851 (gc) 78.5281 -40.0465 12.1 0.02 320.9 10.5 -1.10 NGC 2808 (gc) 138.0129 -64.8635 9.6 0.22 102.8 10.9 -1.14 NGC 362 (gc) 15.8094 -70.8488 8.5 0.05 222.9 10.9 -1.23 M 2 (gc) 323.3626 -0.8233 11.5 0.06 -6.7 12.0 -1.52 NGC 6752 (gc) 287.7170 -59.9846 4.0 0.02 -27.4 12.3 -1.43 NGC 1904 (gc) 81.0441 -24.5242 12.9 0.01 205.8 11.1 -1.37 NGC 4833 (gc) 194.8913 -70.8765 6.6 0.32 201.1 12.7 -1.97 NGC 4372 (gc) 186.4393 -72.6591 5.8 0.30..0.80 72.6 12.5 -1.88 M 15 (gc) 322.4930 12.1670 10.4 0.10 -106.6 13.0 -2.25 ---------- --------------------------- ------------------------- ---------------------------- -------------------------- ---------------------------- -------------------------- Cluster <$\feh_{\rm NLTE}$> $\sigma\feh_{\rm NLTE}$ <$\mgfe_{\rm NLTE}$> $\sigma\mgfe_{\rm NLTE}$ <$\tife_{\rm NLTE}$> $\sigma\tife_{\rm NLTE}$ N stars <$\feh_{\rm LTE}$> $\sigma\feh_{\rm LTE}$ <$\mgfe_{\rm LTE}$> $\sigma\mgfe_{\rm LTE}$ <$\tife_{\rm LTE}$> $\sigma\tife_{\rm LTE}$ dex dex dex dex dex dex NGC 3532 -0.10 $\pm$ 0.02 0.00 $\pm$ 0.02 -0.09 $\pm$ 0.03 0.00 $\pm$ 0.03 0.01 $\pm$ 0.03 0.00 $\pm$ 0.03 12 -0.09 $\pm$ 0.02 0.00 $\pm$ 0.03 -0.07 $\pm$ 0.03 0.00 $\pm$ 0.03 0.01 $\pm$ 0.03 0.00 $\pm$ 0.03 NGC 5927 -0.48 $\pm$ 0.02 0.00 $\pm$ 0.02 0.39 $\pm$ 0.02 0.00 $\pm$ 0.02 0.29 $\pm$ 0.01 0.00 $\pm$ 0.02 47 -0.49 $\pm$ 0.02 0.00 $\pm$ 0.02 0.41 $\pm$ 0.02 0.00 $\pm$ 0.02 0.23 $\pm$ 0.01 0.00 $\pm$ 0.02 NGC 2243 -0.52 $\pm$ 0.01 0.00 $\pm$ 0.01 0.15 $\pm$ 0.01 0.00 $\pm$ 0.02 0.02 $\pm$ 0.01 0.00 $\pm$ 0.04 84 -0.57 $\pm$ 0.01 0.00 $\pm$ 0.02 0.26 $\pm$ 0.01 0.00 $\pm$ 0.02 0.01 $\pm$ 0.01 0.00 $\pm$ 0.05 NGC 104 -0.74 $\pm$ 0.02 0.00 $\pm$ 0.02 0.38 $\pm$ 0.02 0.00 $\pm$ 0.02 0.30 $\pm$ 0.01 0.00 $\pm$ 0.02 68 -0.75 $\pm$ 0.02 0.00 $\pm$ 0.02 0.42 $\pm$ 0.02 0.00 $\pm$ 0.02 0.26 $\pm$ 0.01 0.00 $\pm$ 0.03 NGC 1851 -1.11 $\pm$ 0.01 0.00 $\pm$ 0.02 0.22 $\pm$ 0.01 0.00
$\pm$ 0.02 0.28 $\pm$ 0.01 0.00 $\pm$ 0.02 88 -1.15 $\pm$ 0.01 0.00 $\pm$ 0.02 0.36 $\pm$ 0.01 0.00 $\pm$ 0.02 0.24 $\pm$ 0.01 0.00 $\pm$ 0.01 NGC 2808 -1.01 $\pm$ 0.03 0.00 $\pm$ 0.03 0.11 $\pm$ 0.03 0.09 $\pm$ 0.03 0.33 $\pm$ 0.02 0.00 $\pm$ 0.02 25 -1.03 $\pm$ 0.03 0.00 $\pm$ 0.03 0.22 $\pm$ 0.03 0.00 $\pm$ 0.09 0.30 $\pm$ 0.01 0.00 $\pm$ 0.02 NGC 362 -1.05 $\pm$ 0.02 0.00 $\pm$ 0.02 0.15 $\pm$ 0.01 0.00 $\pm$ 0.02 0.29 $\pm$ 0.01 0.00 $\pm$ 0.01 62 -1.09 $\pm$ 0.02 0.00 $\pm$ 0.02 0.26 $\pm$ 0.02 0.00 $\pm$ 0.02 0.26 $\pm$ 0.01 0.00 $\pm$ 0.01 M 2 -1.47 $\pm$ 0.01 0.00 $\pm$ 0.02 0.17 $\pm$ 0.01 0.07 $\pm$ 0.01 0.23 $\pm$ 0.01 0.00 $\pm$ 0.02 78 -1.54 $\pm$ 0.01 0.00 $\pm$ 0.02 0.34 $\pm$ 0.02 0.08 $\pm$ 0.02 0.25 $\pm$ 0.01 0.00 $\pm$ 0.01 NGC 6752 -1.48 $\pm$ 0.01 0.00 $\pm$ 0.01 0.20 $\pm$ 0.01 0.03 $\pm$ 0.02 0.17 $\pm$ 0.01 0.00 $\pm$ 0.01 110 -1.56 $\pm$ 0.01 0.00 $\pm$ 0.01 0.35 $\pm$ 0.01 0.04 $\pm$ 0.02 0.23 $\pm$ 0.01 0.00 $\pm$ 0.01 NGC 1904 -1.51 $\pm$ 0.02 0.00 $\pm$ 0.02 0.16 $\pm$ 0.01 0.04 $\pm$ 0.02 0.21 $\pm$ 0.01 0.00 $\pm$ 0.02 44 -1.60 $\pm$ 0.02 0.00 $\pm$ 0.02 0.31 $\pm$ 0.02 0.00 $\pm$ 0.04 0.24 $\pm$ 0.01 0.00 $\pm$ 0.03 NGC 4833 -1.88 $\pm$ 0.02 0.00 $\pm$ 0.02 0.18 $\pm$ 0.03 0.15 $\pm$ 0.02 0.22 $\pm$ 0.01 0.00 $\pm$ 0.02 33 -2.08 $\pm$ 0.02 0.00 $\pm$ 0.03 0.36 $\pm$ 0.03 0.18 $\pm$ 0.03 0.24 $\pm$ 0.02 0.00 $\pm$ 0.02 NGC 4372 -2.07 $\pm$ 0.02 0.00 $\pm$ 0.02 0.31 $\pm$ 0.01 0.00 $\pm$ 0.03 0.20 $\pm$ 0.01 0.00 $\pm$ 0.02 45 -2.33 $\pm$ 0.02 0.00 $\pm$ 0.02 0.51 $\pm$ 0.02 0.00 $\pm$ 0.04 0.22 $\pm$ 0.01 0.00 $\pm$ 0.02 M 15 -2.28 $\pm$ 0.02 0.00 $\pm$ 0.02 0.22 $\pm$ 0.03 0.16 $\pm$ 0.02 0.21 $\pm$ 0.02 0.00 $\pm$ 0.02 46 -2.58 $\pm$ 0.02 0.00 $\pm$ 0.02 0.36 $\pm$ 0.04 0.22 $\pm$ 0.03 0.19 $\pm$ 0.01 0.00 $\pm$ 0.02 ---------- --------------------------- ------------------------- ---------------------------- -------------------------- ---------------------------- -------------------------- \[tab:newnlte1\] [ccccccccccccc]{} Star 
&Cluster &$\snr$&&&&&\ & & & NLTE & LTE & NLTE & LTE& NLTE & LTE& NLTE & LTE& NLTE & LTE\ HHMMSSss-DDMMSSs& & Å$^{-1}$& K & K & dex & dex &$\kms$&$\kms$& dex&dex& dex&dex\ 11070908-5839114 &NGC 3532 &537.5 & 5389 & 5379& 4.53 &4.52&1.39&1.28&-0.10&-0.09&0.09&0.08\ 11071871-5849027 &NGC 3532 &521.9 & 5606 & 5615& 4.54& 4.54& 1.43& 1.41&-0.08&-0.08&0.09&0.09\ ..& .. & .. & .. &.. & .. &.. & ..& ..& ..& ..& ..& ..\ & & & & & & & & & & & &\ &&&&&&\ & NLTE & LTE & NLTE & LTE& NLTE & LTE& NLTE & LTE& NLTE & LTE& NLTE & LTE\ & dex&dex& dex&dex& dex&dex& dex&dex& dex&dex& dex&dex\ &-0.08 &-0.08 & 0.09 &0.10 &-0.01 &0.04 & 0.08 &0.14 &-0.14 &-0.26 & 0.11 &0.13\ &-0.08 &-0.07 & 0.09 &0.10 & 0.02 &0.03 & 0.06 &0.10 &-0.16 &-0.26 & 0.12 &0.12\ & .. & .. & .. &.. & .. &.. & ..& ..& ..& ..& ..& ..\ \[tab:catalog\] Instrumental profile {#LSF} ==================== ![Results of instrumental profile fitting. Residuals of the fit are up-scaled three times. []{data-label="fig:lsfplot"}](insrumental.pdf){width="\columnwidth"} Similarly to the technique that @Damiani2016 used to obtain an instrumental profile for the Giraffe HR15N setting, we used a sum of two Gaussian profiles to fit the line at 5578 Å in the calibration spectrum of the thorium-argon lamp, downloaded from the ESO webpage[^14]. Fig. \[fig:lsfplot\] shows that this new instrumental profile describes the spectral profile much better than a single Gaussian computed according to the reported resolution of the HR10 setting, $R=19\,800$. The error of a single Gaussian profile can be up to 5-7%, whereas with the two-Gaussian profile the error is always below the 1% level.
The resulting instrumental profile with its best-fit parameters is: $$\lambda(v)=\frac{A_1}{\sqrt{2 \pi \sigma_{1}^2}} {\rm exp} \left (-\frac{(v-v_1)^2}{2 \sigma_{1}^2} \right)+\frac{A_2}{\sqrt{2 \pi \sigma_{2}^2}} {\rm exp} \left (-\frac{(v-v_2)^2}{2 \sigma_{2}^2} \right)$$ with $A_1=0.465$, $A_2=0.194$, $\sigma_1= 4.971\kms$, $\sigma_2 =3.799\kms$, $v_1=-2.249\kms$, $v_2= 5.754\kms$. Kinematic assignment of the populations {#distances} ======================================== We employ the cluster distances listed in Table \[tab:gcinfo\]. They were obtained by fitting the horizontal branch of the colour-magnitude diagram (globular clusters, @harris [2010 edition]) or the turn-off point (open clusters, WEBDA database). The same distance is assumed for all stars within a given cluster. We also take proper motions from Gaia DR2 [@gdr2] and radial velocities from our analysis and compute galactocentric rectangular velocity components (U,V,W) for all stars in the clusters, using the *Astropy* package [@astropy], with respect to the solar motion from @schoenrich2010. The computed velocities are used to calculate the probability ratios $TD/D$ and $TD/H$ [@Bensby2014 Appendix 1], which allow us to assign population membership to the clusters. We use the following selection criteria: thick disk if $TD/D > 2$ and $TD/H>1$; thin disk if $TD/D< 0.5$; halo if $TD/H<1$. Only the open cluster NGC 2243, with $TD/D=1.25$, has a probability ratio between the thin- and thick-disk criteria. We therefore assign it to the thick disk on the basis of its large separation ($|z|=1$ kpc) from the Galactic plane.
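The selection criteria above can be sketched compactly. The snippet below is an illustrative reimplementation, not our exact pipeline: it assumes $(U,V,W)$ already corrected for the solar motion, and Gaussian velocity ellipsoids with normalizations $X$, dispersions $(\sigma_U,\sigma_V,\sigma_W)$, and asymmetric drifts $V_{\rm asym}$ as tabulated in the appendix of @Bensby2014:

```python
import math

# (X, sigma_U, sigma_V, sigma_W, V_asym) per population, values as
# tabulated in Bensby et al. (2014), Appendix A; velocities in km/s.
POPS = {
    "D":  (0.94,    35.0,  20.0, 16.0,  -15.0),  # thin disk
    "TD": (0.06,    67.0,  38.0, 35.0,  -46.0),  # thick disk
    "H":  (0.0015, 160.0,  90.0, 90.0, -220.0),  # halo
}

def _f(pop, U, V, W):
    # X * Gaussian velocity-ellipsoid density of the given population
    X, sU, sV, sW, Vas = POPS[pop]
    k = 1.0 / ((2.0 * math.pi) ** 1.5 * sU * sV * sW)
    return X * k * math.exp(-U**2 / (2.0 * sU**2)
                            - (V - Vas)**2 / (2.0 * sV**2)
                            - W**2 / (2.0 * sW**2))

def ratios(U, V, W):
    """Return (TD/D, TD/H) for a star with LSR velocities (U, V, W)."""
    return (_f("TD", U, V, W) / _f("D", U, V, W),
            _f("TD", U, V, W) / _f("H", U, V, W))

def classify(U, V, W):
    # selection criteria used in the text
    td_d, td_h = ratios(U, V, W)
    if td_d > 2 and td_h > 1:
        return "thick disk"
    if td_d < 0.5:
        return "thin disk"
    if td_h < 1:
        return "halo"
    return "in between"
```

With these values, a star at rest with respect to the LSR classifies as thin disk, while a fast star such as $(U,V,W)=(-80,-100,50)\,\kms$ classifies as thick disk.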
[^1]: Hubble Fellow [^2]: Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 188.B-3002 [^3]: <https://github.com/tingyuansen/The_Payne> [^4]: <http://archive.eso.org/wdb/wdb/adp/phase3_spectral/form?collection_name=GAIAESO> [^5]: We employ the following relationship: $\snr$ \[Å$^{-1}$\]=$\sqrt{20}~\snr$ \[pixel$^{-1}$\], where 20 pixels are equivalent to 1 Å, that is, the sampling of the Giraffe HR10 spectra. [^6]: Our tests showed that this template provides robust RV estimates for the full metallicity range. [^7]: Hereafter, the abundance of iron, $\feh$, is used as a proxy for metallicity. [^8]: as it is now implemented in the Github version: <https://github.com/tingyuansen/The_Payne>. [^9]: <http://simbad.u-strasbg.fr/simbad/> [^10]: The median is used because it is less sensitive to outliers. [^11]: <https://www.univie.ac.at/webda/> [^12]: Note that this value depends on whether large outliers are included or not. [^13]: We recompute this value using the mean of all measurements from nine RGB and RHB stars. [^14]: <http://www.eso.org/observing/dfo/quality/GIRAFFE/pipeline/SKY/html/GI_SRBS_2004-09-26T22_48_10.511_Medusa2_H548.8nm_o10.fits>
--- abstract: 'Finite mixtures of multivariate normal distributions have been widely used in empirical applications in diverse fields such as statistical genetics and statistical finance. Testing the number of components in multivariate normal mixture models is a long-standing challenge even in the most important case of testing homogeneity. This paper develops likelihood-based tests of the null hypothesis of $M_0$ components against the alternative hypothesis of $M_0 + 1$ components for a general $M_0 \geq 1$. For heteroscedastic normal mixtures, we propose an EM test and derive the asymptotic distribution of the EM test statistic. For homoscedastic normal mixtures, we derive the asymptotic distribution of the likelihood ratio test statistic. We also derive the asymptotic distribution of the likelihood ratio test statistic and EM test statistic under local alternatives and show the validity of parametric bootstrap. The simulations show that the proposed test has good finite sample size and power properties.' author: - | Hiroyuki Kasahara[^1]\ Vancouver School of Economics\ University of British Columbia\ hkasahar@mail.ubc.ca - | Katsumi Shimotsu\ Faculty of Economics\ University of Tokyo\ shimotsu@e.u-tokyo.ac.jp title: '**Testing the Order of Multivariate Normal Mixture Models**' --- Key words: asymptotic distribution; EM test; likelihood ratio test; multivariate normal mixture models; number of components Introduction ============ Finite mixtures of multivariate normal distributions have been widely used in empirical applications in diverse fields such as statistical genetics and statistical finance. Comprehensive surveys on theoretical properties and applications can be found, for example, @lindsay95book, @mclachlanpeel00book, and @fruehwirth06book. In many applications of finite mixture models, the number of components is of substantial interest. 
In multivariate normal mixture models, however, testing for the number of components has been an unsolved problem even in the most important case of testing homogeneity. For general finite mixture models, the asymptotic distribution of the likelihood ratio test statistic (LRTS) has been derived as a functional of the Gaussian process [@dacunha99as; @liushao03as; @zhuzhang04jrssb; @azais09esaim]. These results are not applicable to normal mixtures because normal mixtures have an undesirable mathematical property that invalidates key assumptions in these works [@chenlifu12jasa]. In particular, the normal density with mean $\mu$ and variance $\sigma^2$, $f(y;\mu,\sigma^2)$, has the property $\frac{\partial^2}{\partial \mu^2} f(y;\mu,\sigma^2) = 2\frac{\partial}{\partial \sigma^2} f(y;\mu,\sigma^2)$. This leads to the loss of the “strong identifiability” condition introduced by @chen95as. As a result, neither Assumption (P1) of @dacunha99as nor Assumption 7 of @azais09esaim holds, and Assumption 3 of @zhuzhang04jrssb is violated, while Corollary 4.1 of @liushao03as does not hold in normal mixtures. Heteroscedastic normal mixture models have an additional problem, called the infinite Fisher information problem [@lcm09bm]: the score has infinite variance if the range of the variance parameters is unrestricted. This paper develops likelihood-based tests of the null hypothesis of $M_0$ components against the alternative hypothesis of $M_{0} + 1$ components for a general $M_0 \geq 1$ in multivariate normal mixtures. We consider both heteroscedastic and homoscedastic mixtures. For heteroscedastic normal mixtures, we propose an EM test by building on the EM approach pioneered by @lcm09bm and @lichen10jasa. The asymptotic null distribution of the proposed EM test statistic is shown to be the maximum of $M_0$ random variables, each of which is a projection of a Gaussian random variable on a cone.
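The identity noted above, $\frac{\partial^2}{\partial \mu^2} f(y;\mu,\sigma^2) = 2\frac{\partial}{\partial \sigma^2} f(y;\mu,\sigma^2)$ for the normal density, can be confirmed by a quick finite-difference check (the evaluation points below are arbitrary):

```python
import math

def npdf(y, mu, s2):
    # univariate normal density f(y; mu, sigma^2)
    return math.exp(-(y - mu) ** 2 / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)

def check_identity(y, mu, s2, h=1e-3):
    # second central difference in mu vs. first central difference in sigma^2
    d2_mu = (npdf(y, mu + h, s2) - 2.0 * npdf(y, mu, s2) + npdf(y, mu - h, s2)) / h ** 2
    d_s2 = (npdf(y, mu, s2 + h) - npdf(y, mu, s2 - h)) / (2.0 * h)
    return d2_mu, 2.0 * d_s2

# the two quantities agree up to finite-difference error
d2_mu, two_d_s2 = check_identity(1.3, 0.4, 0.8)
```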
For homoscedastic normal mixtures, we derive the asymptotic distribution of the LRTS because homoscedastic normal mixtures do not suffer from the infinite Fisher information problem. In univariate heteroscedastic normal mixtures, @chenli09as develop an EM test for $M_0 = 1$ against $M_0=2$, and @chenlifu12jasa develop an EM test for testing $H_0:M = M_0$ against $H_A:M > M_0$. Our result may be viewed as a generalization of @chenli09as to the multivariate case. @kasaharashimotsu15jasa develop an EM test for testing $H_0:M = M_0$ against $H_A:M = M_0+1$ for general $M_0 \geq 1$ in finite normal mixture regression models. In univariate homoscedastic normal mixtures, @chenchen03sinica derive the asymptotic distribution of the LRTS. Our results generalize the results in @chenchen03sinica to multivariate homoscedastic normal mixtures. For some specific models such as binomial mixtures, the asymptotic distribution of the LRTS has been derived by, for example, @ghoshsen85book [@chernofflander95jspi; @lemdanipons97spl; @chenchen01cjstat; @chenchen03sinica; @cck04jrssb; @garel01jspi; @garel05jspi]. The remainder of this paper is organized as follows. Section 2 introduces the likelihood ratio test for heteroscedastic multivariate normal mixture models as a precursor of the EM test and derives the asymptotic distribution of the LRTS. Section 3 introduces the EM test and derives the asymptotic distribution of the EM test statistic. Section 4 derives the asymptotic distribution of the LRTS and EM test statistics under local alternatives, and Section 5 shows the validity of parametric bootstrap. Section 6 analyzes homoscedastic multivariate normal mixture models. Section 7 reports the simulation results and provides empirical applications. Appendix A contains proofs, and Appendices B–D collect auxiliary results. Finally, we collect notation. Let $:=$ denote “equals by definition.” Boldface letters denote vectors or matrices.
For a matrix $\bs{B}$, denote its $(i,j)$ element by $B_{ij}$, and let $\lambda_{\min}(\bs{B}) $ and $\lambda_{\max}(\bs{B})$ be the smallest and the largest eigenvalue of $\bs{B}$, respectively. For a $k$-dimensional vector $\bs{x} = (x_1,\ldots,x_k)\t$ and a matrix $\bs{B}$, define $|\bs{x}| := (\bs{x}\t\bs{x})^{1/2}$ and $|\bs{B}| := (\lambda_{\max}(\bs{B}\t\bs{B}))^{1/2}$. Let $\bs{x}^{\otimes k} := \bs{x} \otimes \bs{x} \otimes \cdots \otimes \bs{x}$ ($k$ times). Let $\mathbb{I}\{A\}$ denote an indicator function that takes value 1 when $A$ is true and 0 otherwise. $\mathcal{C}$ denotes a generic nonnegative finite constant whose value may change from one expression to another. Given a sequence $\{f(\bs{Y}_i)\}_{i=1}^n$, let $\nu_n(f(\bs{y})) := n^{-1/2} \sum_{i=1}^n [f(\bs{Y}_i) - Ef(\bs{Y}_i)]$ and $P_n(f(\bs{y})) := n^{-1} \sum_{i=1}^n f(\bs{Y}_i)$. All the limits are taken as $n \to \infty$ unless stated otherwise. Heteroscedastic multivariate finite normal mixture models {#sec:hetero} ========================================================= Denote the density of a $d$-variate normal distribution with mean $\bs{\mu}+ \bs{\gamma}^\top\bs{z}$ and variance $\bs{\Sigma}$ by $$\label{normal_density} f(\bs{x}|\bs{z};\bs{\gamma},\bs{\mu},\bs{\Sigma}):= (2\pi)^{-\frac{d}{2}}(\det \bs{\Sigma})^{-\frac{1}{2}} \exp \left(-\frac{(\bs{x}-\bs{\mu}-\bs{\gamma}^\top\bs{z})\t\bs{\Sigma}^{-1}(\bs{x}-\bs{\mu}-\bs{\gamma}^\top\bs{z})}{2}\right),$$ where $\bs{x}$ and $\bs{\mu}$ are $d \times 1$, $\bs{\gamma}$ is $d\times p$, and $\bs{z}$ is $p \times 1$. Let $\Theta_{\bs{\gamma}} \subset \mathbb{R}^{dp}$, $\Theta_{{\bs{\mu}}} \subset \mathbb{R}^{d}$, and $\Theta_{\bs\Sigma} \subset \mathbb{S}^d_+$ denote the space of $\bs{\gamma}$, ${\bs{\mu}}$, and $\bs{\Sigma}$, respectively, where $\mathbb{S}^d_+$ denotes the space of $d\times d$ positive definite matrices. 
For $M \geq 2$, denote the density of an $M$-component finite normal mixture distribution as: $$f_M(\bs{x}|\bs{z};{\bs{\vartheta}}_M) : = \sum_{j = 1}^M \alpha_j f(\bs{x}|\bs{z}; \bs{\gamma},{\bs{\mu}}_j,\bs{\Sigma}_j),\label{general_model}$$ where $\bs{\vartheta}_M := ({\bs{\alpha}},\bs{\gamma},{\bs{\mu}}_1,\ldots,{\bs{\mu}}_{M},\bs{\Sigma}_1,\ldots,\bs{\Sigma}_M)$ with ${\bs{\alpha}} : = (\alpha_1,\ldots,\alpha_{M-1})^{\top}$, and $\alpha_{M}$ determined by $\alpha_{M}: = 1-\sum_{j = 1}^{M - 1} \alpha_j$. $\bs{\mu}_j$ and $\bs{\Sigma}_j$ are mixing parameters that characterize the $j$-th component, and the $\alpha_j$ are mixing probabilities. $\bs{\gamma}$ is the coefficient of the covariate $\bs{z}$ and is assumed to be common to all the components. Define the set of admissible values of ${\bs{\alpha}}$ by $\Theta_{{\bs{\alpha}}} :=\{{\bs{\alpha}}: \alpha_j \geq 0, \sum_{j = 1}^{M - 1} \alpha_j \in [0,1] \}$, and let the space of ${\bs{\vartheta}}_M$ be $\Theta_{{\bs{\vartheta}}_M} := \Theta_{{\bs{\alpha}}}\times \Theta_{\bs{\gamma}}\times\Theta_{{\bs{\mu}}}^M\times\Theta_{\bs\Sigma}^M$. The number of components $M$ is the smallest number such that the data density admits the representation (\[general\_model\]). Our objective is to test $$H_0:\ M = M_0\quad \text{against}\quad H_A: M = M_0 + 1.$$ Likelihood ratio test of $H_0: M = 1$ against $H_A: M = 2$ {#section:homogeneity} ---------------------------------------------------------- As a precursor of the EM test developed in Section \[section:emtest\], this section establishes the asymptotic distribution of the LRTS for testing the null hypothesis $H_0: M = 1$ against $H_A: M = 2$ when the data are from $H_0$. We consider a random sample of $n$ independent observations $\{\bs{X}_i,\bs{Z}_i\}_{i = 1}^n$ from the true one-component density $f(\bs{x}|\bs{z}; \bs{\gamma}^*, \bs{\mu}^*,\bs{\Sigma}^{*})$. Here, the superscript $*$ signifies the true parameter value.
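For concreteness, the mixture density (\[general\_model\]) without covariates is straightforward to evaluate directly; the sketch below does so for a two-component bivariate ($d=2$) case, using the explicit $2\times 2$ inverse and determinant, with arbitrary illustrative parameter values:

```python
import math

def mvn2_pdf(x, mu, S):
    # bivariate normal density with mean mu and covariance S (2x2, symmetric)
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    d0, d1 = x[0] - mu[0], x[1] - mu[1]
    # quadratic form (x - mu)' S^{-1} (x - mu) via the explicit 2x2 inverse
    q = (S[1][1] * d0 * d0 - 2.0 * S[0][1] * d0 * d1 + S[0][0] * d1 * d1) / det
    return math.exp(-0.5 * q) / (2.0 * math.pi * math.sqrt(det))

def mixture_pdf(x, alphas, mus, Sigmas):
    # f_M(x) = sum_j alpha_j f(x; mu_j, Sigma_j)
    return sum(a * mvn2_pdf(x, m, S) for a, m, S in zip(alphas, mus, Sigmas))

I2 = [[1.0, 0.0], [0.0, 1.0]]
f = mixture_pdf([0.0, 0.0],
                alphas=[0.3, 0.7],
                mus=[[0.0, 0.0], [2.0, -1.0]],
                Sigmas=[I2, [[1.0, 0.3], [0.3, 2.0]]])
```

As sanity checks, the standard bivariate normal density at the origin equals $1/(2\pi)$, and a mixture with $\alpha_1 = 1$ collapses to its first component.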
Let a two-component mixture density with ${\bs{\vartheta}}_2 = (\alpha,\bs{\gamma}, \bs{\mu}_1, \bs{\mu}_2,\bs{\Sigma}_1,\bs{\Sigma}_2) \in \Theta_{{\bs{\vartheta}}_2}$ be $$f_2(\bs{x}|\bs{z};\bs{\vartheta}_2) := \alpha f(\bs{x}|\bs{z}; \bs{\gamma} ,\bs{\mu}_1,\bs{\Sigma}_1) + (1-\alpha) f(\bs{x}|\bs{z}; \bs{\gamma},\bs{\mu}_2,\bs{\Sigma}_2). \label{two-component}$$ We partition the null hypothesis $H_0: M = 1$ into two as follows: $$H_{01}: (\bs{\mu}_{1},\bs{\Sigma}_1) = (\bs{\mu}_{2},\bs{\Sigma}_2)\ \text{ and }\ H_{02}: \alpha(1-\alpha) = 0.$$ In the following, we focus on testing $H_{01}:(\bs{\mu}_{1},\bs{\Sigma}_1) = (\bs{\mu}_{2},\bs{\Sigma}_2)$ because, as discussed in @chenli09as, the Fisher information for testing $H_{02}$ is not finite unless the range of $\det(\bs{\Sigma}_1)/\det(\bs{\Sigma}_2)$ is restricted. The log-likelihood function for testing $H_{01}:(\bs{\mu}_{1},\bs{\Sigma}_1) = (\bs{\mu}_{2},\bs{\Sigma}_2)$ is unbounded if $\det(\bs{\Sigma}_1)$ and $\det(\bs{\Sigma}_2)$ are not bounded away from 0 [@hartigan85book]. Therefore, we consider a penalized maximum likelihood estimator (PMLE) introduced by @chentan09jmva. Similar to @chentan09jmva, we use the following penalty function with $M=2$: $$\label{pen_pmle} p_n(\bs{\vartheta}_M) = \sum_{m=1}^M p_{n}(\bs{\Sigma}_m;\widehat{\bs{\Omega}}) = \sum_{m=1}^M -a_n\left\{ \text{tr}(\widehat{\bs{\Omega}}\bs{\Sigma}_m^{-1}) - \log ( \det(\widehat{\bs{\Omega}}\bs{\Sigma}_m^{-1})) -d \right\},$$ where $\widehat{\bs{\Omega}}$ is the maximum likelihood estimator (MLE) of $\bs{\Sigma}$ from the one-component model, and $a_n$ is a non-random sequence such that $a_n \geq 1/n$ and $a_n =o(n)$. Let $\widehat{\bs{\vartheta}}_2$ denote the PMLE that maximizes $PL_n({\bs{\vartheta}}_2):= \sum_{i = 1}^n \log f_2(\bs{X}_i|\bs{Z}_i;{\bs{\vartheta}}_2) + p_n(\bs{\vartheta}_2)$.
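The shape of the penalty in (\[pen\_pmle\]) is what rules out degenerate solutions: each term $\mathrm{tr}(\widehat{\bs{\Omega}}\bs{\Sigma}_m^{-1}) - \log\det(\widehat{\bs{\Omega}}\bs{\Sigma}_m^{-1}) - d$ is nonnegative, vanishes at $\bs{\Sigma}_m = \widehat{\bs{\Omega}}$, and diverges as $\det(\bs{\Sigma}_m) \to 0$, so the penalty is nonpositive and heavily penalizes near-singular component covariances. A minimal numerical sketch for $d=2$ (the matrices and $a_n$ below are chosen arbitrarily):

```python
import math

def penalty_2x2(Sigma, Omega, a_n=1.0):
    # p_n(Sigma; Omega) = -a_n * { tr(Omega Sigma^{-1}) - log det(Omega Sigma^{-1}) - d }
    d = 2
    det_S = Sigma[0][0] * Sigma[1][1] - Sigma[0][1] * Sigma[1][0]
    # explicit 2x2 inverse of Sigma
    Sinv = [[ Sigma[1][1] / det_S, -Sigma[0][1] / det_S],
            [-Sigma[1][0] / det_S,  Sigma[0][0] / det_S]]
    # A = Omega * Sigma^{-1}
    A = [[sum(Omega[i][k] * Sinv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    tr_A = A[0][0] + A[1][1]
    det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return -a_n * (tr_A - math.log(det_A) - d)
```

Evaluating the penalty at $\bs{\Sigma} = \widehat{\bs{\Omega}}$ gives zero, while shrinking $\bs{\Sigma}$ toward singularity drives it to $-\infty$.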
\[assn\_consis\] $\bs{Z}$ has finite second moment, and $\Pr(\bs{\gamma}^\top \bs{Z}_i \neq \bs{\gamma}^{*\top} \bs{Z}_i )>0$ for any $\bs{\gamma} \neq \bs{\gamma}^*$. Model (\[two-component\]) yields the true density $f(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}^*,\bs{\Sigma}^{*})$ if ${\bs{\vartheta}}_2$ lies in the set $\Theta_2^* := \{{\bs{\vartheta}}_2 \in \Theta_{{\bs{\vartheta}}_2}: \{ (\bs{\mu}_1,\bs{\Sigma}_1) = (\bs{\mu}_2,\bs{\Sigma}_2) = (\bs{\mu}^*,\bs{\Sigma}^{*}), \bs{\gamma}=\bs{\gamma}^*\}\ \text{or}\ \{\alpha =1, (\bs{\mu}_1,\bs{\Sigma}_1) = (\bs{\mu}^*,\bs{\Sigma}^{*}), \bs{\gamma}=\bs{\gamma}^*\}\ \text{or}\ \{\alpha=0, (\bs{\mu}_2,\bs{\Sigma}_2) = (\bs{\mu}^*,\bs{\Sigma}^{*}), \bs{\gamma}=\bs{\gamma}^*\}\}$. The following proposition shows the consistency of $\widehat{\bs{\vartheta}}_2$. \[P-consis\] Suppose that Assumption \[assn\_consis\] holds. Then, under the null hypothesis $H_0: M=1$, $\inf_{{\bs{\vartheta}}_2 \in \Theta_2^*} |\widehat {\bs{\vartheta}}_2 - {\bs{\vartheta}}_2| \rightarrow_p 0$. In testing $H_{01}$, the standard asymptotic analysis of the LRTS breaks down because the Fisher information matrix is degenerate. 
This is due to the fact that, for any $\bar {\bs{\vartheta}}_2$ such that $(\bs{\mu}_1,\bs{\Sigma}_1)=(\bs{\mu}_2,\bs{\Sigma}_2)$, the derivatives of the density of different orders are linearly dependent as $$\begin{aligned} &\nabla_{\bs{\mu}_1} f_2(\bs{x}|\bs{z};\bar{\bs{\vartheta}}_2) =\frac{\alpha}{1-\alpha} \nabla_{\bs{\mu}_2} f_2(\bs{x}|\bs{z};\bar{\bs{\vartheta}}_2),\ \nabla_{\bs{\Sigma}_1}f_2(\bs{x}|\bs{z};\bar{\bs{\vartheta}}_2) = \frac{\alpha}{1-\alpha} \nabla_{\bs{\Sigma}_2}f_2(\bs{x}|\bs{z};\bar {\bs{\vartheta}}_2), \nonumber \\ &\nabla_{\mu_{1i}\mu_{1j}} f_2(\bs{x}|\bs{z};\bar{\bs{\vartheta}}_2) = 2\nabla_{\Sigma_{1,ij}} f_2(\bs{x}|\bs{z};\bar{\bs{\vartheta}}_2), \quad \nabla_{\mu_{2i}\mu_{2j}} f_2(\bs{x}|\bs{z};\bar{\bs{\vartheta}}_2) = 2\nabla_{\Sigma_{2,ij}} f_2(\bs{x}|\bs{z};\bar{\bs{\vartheta}}_2).\end{aligned}$$ This dependence leads to the loss of strong identifiability and has caused substantial difficulties in the existing literature. We analyze the LRTS for testing $H_{01}: (\bs{\mu}_{1},\bs{\Sigma}_1)=(\bs{\mu}_{2},\bs{\Sigma}_2)$ by developing a higher-order approximation of the log-likelihood function through an ingenious reparameterization that extends the results of @rotnitzky00bernoulli and @kasaharashimotsu15jasa. Collect the unique elements in $\bs{\Sigma}$ into a $d(d+1)/2$-vector $$\begin{aligned} \bs{v} &= (v_{11},v_{12},\ldots,v_{1d},v_{22},v_{23},\ldots,v_{2d},\ldots,v_{d-1,d-1},v_{d-1,d},v_{dd}) \t \\ & := (\Sigma_{11},2\Sigma_{12},\ldots,2\Sigma_{1d},\Sigma_{22},2\Sigma_{23},\ldots,2\Sigma_{2d},\ldots,\Sigma_{d-1,d-1},2\Sigma_{d-1,d},\Sigma_{dd}) \t. \end{aligned}$$ Define the density of $N(\bs{\mu},\bs{\Sigma})$ parameterized in terms of $\bs{\mu}$ and $\bs{v}$ as $$\label{fv_defn} f_v(\bs{\mu},\bs{v}) : = f(\bs{\mu},\bs{S}(\bs{v})), \quad \text{ where }\ S_{ij}(\bs{v}):= \begin{cases} v_{ii} & \text{if } i=j , \\ v_{ij}/2 & \text{if } i \neq j .
\end{cases}$$ For a $d\times d$ symmetric matrix $\bs{A}$, define a function $\bs{w}(\bs{A}) \in \mathbb{R}^{d(d+1)/2}$ that collects the unique elements of $\bs{A}$ as $$\begin{aligned} \bs{w}(\bs{A}) & := (A_{11},2A_{12},\ldots,2A_{1d},A_{22},2A_{23},\ldots,2A_{2d},\ldots,A_{d-1,d-1},2A_{d-1,d},A_{dd}) \t.\end{aligned}$$ Then $f_v(\bs{\mu},\bs{v})$ and $f(\bs{\mu},\bs{\Sigma})$ are related as $$f(\bs{\mu},\bs{\Sigma}) = f_v(\bs{\mu},\bs{w}(\bs{\Sigma})).$$ We introduce the following one-to-one mapping between $(\bs{\mu}_1,\bs{\mu}_2,\bs{v}_1,\bs{v}_2)$ and the reparameterized parameter $(\bs{\lambda}_{\bs{\mu}},\bs{\nu}_{\bs\mu},\bs{\lambda}_{\bs{v}},\bs{\nu}_{\bs{v}})$: $$\label{repara2} \begin{pmatrix} {\bs{\mu}}_1\\ {\bs{\mu}}_2\\ \bs{v}_1\\ \bs{v}_2 \end{pmatrix} = \begin{pmatrix} \bs{\nu}_{\bs\mu} + (1-\alpha) \bs{\lambda}_{\bs\mu} \\ \bs{\nu}_{\bs\mu} -\alpha \bs{\lambda}_{\bs\mu}\\ \bs{\nu}_{\bs{v}} + (1- \alpha)(2\bs{\lambda}_{\bs{v}}+ C_1 \bs{w}(\bs{\lambda}_{\bs\mu}\bs{\lambda}_{\bs\mu}\t) )\\ \bs{\nu}_{\bs{v}} - \alpha(2\bs{\lambda}_{\bs{v}}+ C_2 \bs{w}(\bs{\lambda}_{\bs\mu}\bs{\lambda}_{\bs\mu}\t) ) \end{pmatrix},$$ where $C_1 := -(1/3)(1 + \alpha)$ and $C_2 := (1/3)(2 - \alpha)$.
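To see the mechanics of the reparameterization (\[repara2\]), the following sketch implements the scalar case $d=1$, where $\bs{w}(\bs{\lambda}_{\bs\mu}\bs{\lambda}_{\bs\mu}\t)$ reduces to $\lambda_{\mu}^2$ (parameter values below are arbitrary). It verifies that $\bs{\nu}_{\bs\mu}$ is the barycenter $\alpha\bs{\mu}_1 + (1-\alpha)\bs{\mu}_2$, that $\bs{\lambda}_{\bs\mu} = \bs{\mu}_1 - \bs{\mu}_2$, and that $(\bs{\lambda}_{\bs\mu},\bs{\lambda}_{\bs{v}}) = (\bs{0},\bs{0})$ recovers the null $H_{01}$:

```python
def forward(alpha, nu_mu, nu_v, lam_mu, lam_v):
    # scalar (d = 1) version of the mapping (repara2)
    C1 = -(1.0 / 3.0) * (1.0 + alpha)
    C2 = (1.0 / 3.0) * (2.0 - alpha)
    mu1 = nu_mu + (1.0 - alpha) * lam_mu
    mu2 = nu_mu - alpha * lam_mu
    v1 = nu_v + (1.0 - alpha) * (2.0 * lam_v + C1 * lam_mu ** 2)
    v2 = nu_v - alpha * (2.0 * lam_v + C2 * lam_mu ** 2)
    return mu1, mu2, v1, v2

alpha, nu_mu, nu_v, lam_mu, lam_v = 0.35, 1.2, 0.9, 0.5, -0.1
mu1, mu2, v1, v2 = forward(alpha, nu_mu, nu_v, lam_mu, lam_v)
```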
Collect the reparameterized parameters, except for $\alpha$, into one vector $\bs{\psi}$ defined as $$\bs{\psi}:=(\bs{\gamma},\bs{\nu}_{\bs\mu},\bs{\nu}_{\bs{v}},\bs{\lambda}_{\bs\mu},\bs{\lambda}_{\bs{v}}) \in \Theta_{\bs{\psi}}.$$ In the reparameterized model, the null hypothesis of $H_{01}:(\bs{\mu}_{1},\bs{v}_1) = (\bs{\mu}_{2},\bs{v}_2)$ is written as $H_{01}:(\bs{\lambda}_{\bs\mu},\bs{\lambda}_{\bs v})= \bs{0}$, and the density is given by $$\label{loglike} \begin{aligned} g(\bs{x}|\bs{z};\bs{\psi},\alpha) & = \alpha f_v\left(\bs{x}\middle|\bs{z};\bs{\gamma},\bs{\nu}_{\bs\mu}+(1-\alpha)\bs{\lambda}_{\bs\mu}, \bs{\nu}_{\bs{v}} + (1 - \alpha)(2\bs{\lambda}_{\bs{v}} + C_1 \bs{w}(\bs{\lambda}_{\bs\mu}\bs{\lambda}_{\bs\mu}\t) ) \right) \\ & \quad + (1 - \alpha) f_v \left(\bs{x} \middle|\bs{z};\bs{\gamma},\bs{\nu}_{\bs\mu} -\alpha\bs{\lambda}_{\bs\mu},\bs{\nu}_{\bs{v}} - \alpha(2\bs{\lambda}_{\bs v}+ C_2 \bs{w}( \bs{\lambda}_{\bs\mu}\bs{\lambda}_{\bs\mu}\t) ) \right). \end{aligned}$$ Partition $\bs{\psi}$ as $\bs{\psi} = (\bs{\eta}\t,\bs{\lambda}\t)\t$, where $\bs{\eta}: = (\bs{\gamma}\t,\bs{\nu}_{\bs\mu}\t,\bs{\nu}_{\bs{v}}\t)\t \in \Theta_{\bs{\eta}}$ and $\bs{\lambda}: = (\bs{\lambda}_{\bs\mu}\t,\bs{\lambda}_{\bs v}\t)\t \in\Theta_{\bs{\lambda}}$. Denote the true values of $\bs{\eta}$, $\bs{\lambda}$, and $\bs{\psi}$ by $\bs{\eta}^*: = ((\bs{\gamma}^*)\t,({\bs{\mu}}^*)\t,(\bs{v}^{*})\t)\t$, $\bs{\lambda}^*: = \bs{0}$, and $\bs{\psi}^* = ((\bs{\eta}^*)\t, \bs{0}\t)\t$, respectively. 
Under this reparameterization, the first derivative of (\[loglike\]) with respect to (w.r.t., hereafter) $\bs{\eta}$ under $\bs{\psi} = \bs{\psi}^*$ is identical to the first derivative of the density of the one-component model: $$\label{dvareta} \nabla_{\bs{\eta}}g(\bs{x}|\bs{z};\bs{\psi}^*,\alpha) =\nabla_{(\bs{\gamma}\t,\bs{\mu}\t,\bs{v}\t)\t} f_v(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}^*,\bs{v}^{*}).$$ On the other hand, the first, second, and third derivatives of $g(\bs{x}|\bs{z};\bs{\psi},\alpha)$ w.r.t. $\bs{\lambda}_{\bs\mu}$ and the first derivative w.r.t. $\bs{\lambda}_{\bs v}$ become zero when evaluated at $\bs{\psi}=\bs{\psi}^*$. Consequently, the information on $\bs{\lambda}_{\bs\mu}$ and $\bs{\lambda}_{\bs v}$ is provided by the fourth derivative w.r.t. $\bs{\lambda}_{\bs\mu}$, the cross-derivative w.r.t. $\bs{\lambda}_{\bs{\mu}}$ and $\bs{\lambda}_{\bs v}$, and the second derivative w.r.t. $\bs{\lambda}_{\bs v}$. We derive the asymptotic distribution of the LRTS. Let $f^*_v$ and $\nabla f^*_v$ denote $f_v(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}^*,\bs{v}^*)$ and $\nabla f_v(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}^*,\bs{v}^*)$, and let $d_\eta:=dp+d+d(d+1)/2$, $d_{\mu v}:=d(d+1)(d+2)/6$, and $d_{\mu^4}:=d(d+1)(d+2)(d+3)/24$. Define the score vector $\bs{s}(\bs{x},\bs{z})$ as $$\label{score_defn} \begin{aligned} \bs{s}(\bs{x},\bs{z}) & := \begin{pmatrix} \bs{s}_{\bs{\eta}}\\ \bs{s}_{\bs{\lambda}} \end{pmatrix} := \begin{pmatrix} \bs{s}_{\bs{\eta}}\\ \bs{s}_{\bs{\mu v}}\\ \bs{s}_{\bs{\mu}^4} \end{pmatrix} \quad \text{with }\ \underset{(d_\eta \times 1)}{\bs{s}_{\bs{\eta}}} := \frac{\nabla_{(\bs{\gamma}\t,\bs{\mu}\t,\bs{v}\t)\t}f^*_v }{ f^*_v}, \\ \underset{(d_{\mu v} \times 1)}{\bs{s}_{\bs{\mu v}}} &:= \left\{\frac{\nabla_{\mu_i \mu_j \mu_k} f^*_v}{ 3! f^*_v} \right\}_{1 \leq i \leq j \leq k \leq d}, \quad \underset{(d_{\mu^4} \times 1)}{\bs{s}_{\bs{\mu}^4}} := \left\{ \frac{\nabla_{\mu_i \mu_j \mu_k \mu_\ell} f^*_v }{ 4!
f^*_v} \right\}_{1 \leq i \leq j \leq k \leq \ell \leq d} , \end{aligned}$$ where we suppress the dependence of $(\bs{s}_{\bs{\eta}}, \bs{s}_{\bs{\mu v}}, \bs{s}_{\bs{\mu}^4})$ on $(\bs{x},\bs{z})$. Collect the relevant reparameterized parameters as $$\label{tpsi_defn} \bs{t}(\bs{\psi},\alpha) := \begin{pmatrix} \bs{\eta}-\bs{\eta}^*\\ \bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha) \end{pmatrix} := \begin{pmatrix} \bs{\eta}-\bs{\eta}^*\\ \alpha(1-\alpha)12 \bs{\lambda}_{\bs{\mu v}}\\ \alpha(1-\alpha)[12\bs{\lambda}_{\bs{v}^2}+ b(\alpha) \bs{\lambda}_{\bs{\mu}^4}] \end{pmatrix},$$ with $b(\alpha): = -(2/3) (\alpha^2 - \alpha + 1)<0$ and $$\label{lambda_muv_defn} \begin{aligned} \underset{(d_{\mu v} \times 1)}{\bs{\lambda}_{\bs{\mu v}}} &:= \{(\bs{\lambda}_{\bs{\mu v}})_{ijk}\}_{1 \leq i \leq j \leq k \leq d}, \text{ where } (\bs{\lambda}_{\bs{\mu v}})_{ijk} :=\sum_{(t_1,t_2,t_3) \in p_{12}(i,j,k)} \lambda_{\mu_{t_1}}\lambda_{v_{t_2t_3}}, \\ \underset{(d_{\mu^4} \times 1)}{\bs{\lambda}_{\bs{v}^2}} &:= \{(\bs{\lambda}_{\bs{v}^2})_{ijk\ell}\}_{1 \leq i \leq j \leq k \leq \ell \leq d}, \text{ where } (\bs{\lambda}_{\bs{v}^2})_{ijk\ell}:= \sum_{(t_1,t_2,t_3,t_4) \in p_{22}(i,j,k,\ell)} \lambda_{v_{t_1t_2}}\lambda_{v_{t_3t_4}} ,\\ \underset{(d_{\mu^4} \times 1)}{\bs{\lambda}_{\bs{\mu}^4}} &:= \{(\bs{\lambda}_{\bs{\mu}^4})_{ijk\ell}\}_{1 \leq i \leq j \leq k \leq \ell \leq d}, \text{ where } (\bs{\lambda}_{\bs{\mu}^4})_{ijk\ell}:= \sum_{(t_1,t_2,t_3,t_4) \in p(i,j,k,\ell)} \lambda_{\mu_{t_1}}\lambda_{\mu_{t_2}}\lambda_{\mu_{t_3}}\lambda_{\mu_{t_4}} , \end{aligned}$$ where $\sum_{(t_1,t_2,t_3) \in p_{12}(i,j,k)}$ denotes the sum over all distinct permutations of $(i,j,k)$ to $(t_1,t_2,t_3)$ with $t_2 \leq t_3$, $\sum_{(t_1,t_2,t_3,t_4) \in p_{22}(i,j,k,\ell)}$ denotes the sum over all distinct permutations of $(i,j,k,\ell)$ to $(t_1,t_2,t_3,t_4)$ with $t_1 \leq t_2$ and $t_3 \leq t_4$, and $\sum_{(t_1,t_2,t_3,t_4) \in p(i,j,k,\ell)}$ denotes the sum over all distinct 
permutations of $(i,j,k,\ell)$ to $(t_1,t_2,t_3,t_4)$. In (\[lambda\_muv\_defn\]), $\bs{\lambda}_{\bs{\mu v}}$ is a function of $\bs{\lambda}_{\bs{\mu}} \otimes \bs{\lambda}_{\bs{v}}$ and corresponds to the score vector $\bs{s}_{\bs{\mu v}}$. $\bs{\lambda}_{\bs{v}^2}$ is a function of $\bs{\lambda}_{\bs{v}}^{\otimes 2}$, and $\bs{\lambda}_{\bs{\mu}^4}$ depends on $\bs{\lambda}_{\bs{\mu}}^{\otimes 4}$. Here, $\alpha(1-\alpha)12\bs{\lambda}_{\bs{\mu v}}\t \bs{s}_{\bs{\mu v}}$ collects the unique elements that correspond to the cross-derivative with respect to $\bs{\lambda}_{\bs{\mu}}$ and $\bs{\lambda}_{\bs{v}}$ in the expansion of the log-likelihood function, and $\alpha(1-\alpha)[12\bs{\lambda}_{\bs{v}^2}+ b(\alpha) \bs{\lambda}_{\bs{\mu}^4}]\t \bs{s}_{\bs{\mu}^4}$ collects the unique elements of the second-order terms with respect to $\bs{\lambda}_{\bs{v}}$ and the fourth-order terms with respect to $\bs{\lambda}_{\bs{\mu}}$. Let $L_n(\bs{\psi},\alpha): = \sum_{i = 1}^n \log g(\bs{X}_i|\bs{Z}_i;\bs{\psi},\alpha)$ denote the reparameterized log-likelihood function. Let $\widehat{\bs{\psi}} : = \arg\max_{\bs{\psi} \in \Theta_{\bs{\psi}}} PL_n(\bs{\psi},\alpha)$ denote the PMLE of $\bs{\psi}$, where $\Theta_{\bs{\psi}}$ is defined so that the value of ${\bs{\vartheta}}_2$ implied by $\bs{\psi}$ is in $\Theta_{{\bs{\vartheta}}_2}$. Let $(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)$ denote the one-component MLE that maximizes the one-component log-likelihood function $L_{0,n}(\bs{\gamma},{\bs{\mu}},\bs{\Sigma}) := \sum_{i=1}^n \log f (\bs{X}_i|\bs{Z}_i;\bs{\gamma},{\bs{\mu}},\bs{\Sigma})$. 
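The dimensions $d_{\mu v} = d(d+1)(d+2)/6$ and $d_{\mu^4} = d(d+1)(d+2)(d+3)/24$ introduced above count the ordered index tuples $1 \leq i \leq j \leq k \leq d$ and $1 \leq i \leq j \leq k \leq \ell \leq d$ indexing $\bs{s}_{\bs{\mu v}}$ and $\bs{s}_{\bs{\mu}^4}$. As a sanity check, a brute-force enumeration (a small illustrative script, not part of the paper's derivations) reproduces these counts:

```python
def n_tuples3(d):
    # number of index triples 1 <= i <= j <= k <= d
    return sum(1 for i in range(1, d + 1)
                 for j in range(i, d + 1)
                 for k in range(j, d + 1))

def n_tuples4(d):
    # number of index quadruples 1 <= i <= j <= k <= l <= d
    return sum(1 for i in range(1, d + 1)
                 for j in range(i, d + 1)
                 for k in range(j, d + 1)
                 for l in range(k, d + 1))
```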
Define the LRTS for testing $H_{01}$ as, with $\epsilon_1 \in (0,1/2)$, $$\label{LRT1} LR_{n}(\epsilon_1) := \max_{\alpha \in [\epsilon_1,1-\epsilon_1]} 2\{L_n(\widehat{\bs{\psi}},\alpha) - L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)\}.$$ We could use the penalized LRTS defined by $PLR_{n}(\epsilon_1):= \max_{\alpha \in [\epsilon_1,1-\epsilon_1]} 2\{PL_n(\widehat{\bs{\psi}},\alpha) - L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)\}$ instead of $LR_n(\epsilon_1)$. Because the effect of the penalty term is negligible under our assumptions, $PLR_n(\epsilon_1)$ has the same asymptotic distribution as $LR_n(\epsilon_1)$. With $(\bs{s}_{\bs{\eta}}, \bs{s}_{\bs{\lambda}})$ defined in (\[score\_defn\]), define $$\label{I_lambda} \begin{aligned} &\bs{\mathcal{I}}_{\bs{\eta}} := E[\bs{s}_{\bs{\eta}}\bs{s}_{\bs{\eta}}\t], \quad \bs{\mathcal{I}}_{\bs{\lambda}} := E[\bs{s}_{\bs{\lambda}}\bs{s}_{\bs{\lambda}}\t], \quad \bs{\mathcal{I}}_{\bs{\lambda\eta} } := E[\bs{s}_{\bs{\lambda}}\bs{s}_{\bs{\eta}}\t],\\ &\bs{\mathcal{I}}_{\bs{\eta \lambda}} := \bs{\mathcal{I}}_{\bs{\lambda \eta}}\t, \quad \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}:=\bs{\mathcal{I}}_{\bs{\lambda}}-\bs{\mathcal{I}}_{\bs{\lambda\eta}}\bs{\mathcal{I}}_{\bs{\eta}}^{-1}\bs{\mathcal{I}}_{\bs{\eta\lambda}}, \quad \bs{Z}_{\bs{\lambda}}:=(\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}})^{-1} \bs{G}_{\bs{\lambda}.\bs{\eta}}, \end{aligned}$$ where $\bs{G}_{\bs{\lambda}.\bs{\eta}} \sim N(0,\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}})$. The following sets characterize the limit of possible values of $\sqrt{n}\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha)$ defined in (\[tpsi\_defn\]) as $n\rightarrow\infty$. 
Define $$\label{Lambda-e} \begin{aligned} \Lambda_{\bs{\lambda} }^{1} & := \left\{ \left((\bs{t}_{\bs{\mu v}} )\t, ( \bs{t}_{\bs{\mu}^4} ) \t \right)\t \in \mathbb{R}^{d_{\mu v}+d_{\mu^4}}: \bs{t}_{\bs{\mu v}} = \bs{\lambda}_{\bs{\mu v}},\ \bs{t}_{\bs{\mu}^4}= \bs{\lambda}_{\bs{v}^2} \text{ for some $\bs{\lambda} \in \mathbb{R}^{d+d(d+1)/2}$} \right\},\\ \Lambda_{\bs{\lambda} }^{2} & := \left\{ \left((\bs{t}_{\bs{\mu v}} )\t, ( \bs{t}_{\bs{\mu}^4} ) \t \right)\t \in \mathbb{R}^{d_{\mu v}+d_{\mu^4}}: \bs{t}_{\bs{\mu v}} = \bs{\lambda}_{\bs{\mu v}},\ \bs{t}_{\bs{\mu}^4}= -\bs{\lambda}_{\bs{\mu}^4} \text{ for some $\bs{\lambda} \in \mathbb{R}^{d+d(d+1)/2}$} \right\}. \end{aligned} $$ For $j=1,2$, define $\widehat{\bs{t}}_{\bs{\lambda}}^{j}$ by $$\label{t-lambda} r(\widehat{\bs{t}}_{\bs{\lambda}}^{j}) = \inf_{\bs{t}_{\bs{\lambda}} \in \Lambda_{\bs{\lambda} }^{j}}r(\bs{t}_{\bs{\lambda}}), \quad r(\bs{t}_{\bs{\lambda}}) := (\bs{t}_{\bs{\lambda}} -\bs{Z}_{\bs{\lambda}})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} (\bs{t}_{\bs{\lambda}} -\bs{Z}_{\bs{\lambda}}),$$ where $\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}$, $\bs{Z}_{\bs{\lambda}}$, and $\Lambda_{\bs{\lambda} }^{j}$ for $j=1,2$ are defined in (\[I\_lambda\])–(\[Lambda-e\]). We impose the following moment condition on the covariate. \[A-taylor1\] $\bs{Z}$ has finite tenth moment. The following proposition establishes the asymptotic null distribution of the LRTS. \[P-LR-N1\] Suppose that Assumptions \[assn\_consis\] and \[A-taylor1\] hold, $a_n$ in (\[pen\_pmle\]) satisfies $a_n = O(1)$, and $\bs{\mathcal{I}}:=E[\bs{s}(\bs{X},\bs{Z})\bs{s}(\bs{X},\bs{Z})\t]$ is finite and nonsingular.
Then, under the null hypothesis of $M=1$, $LR_{n}(\epsilon_1) \rightarrow_d \max\left\{ (\widehat{\bs{t}}_{\bs{\lambda}}^{1})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} \widehat{\bs{t}}_{\bs{\lambda}}^{1}, (\widehat{\bs{t}}_{\bs{\lambda}}^{2})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} \widehat{\bs{t}}_{\bs{\lambda}}^{2} \right\}$, where $LR_{n}(\epsilon_1)$ and $\widehat{\bs{t}}_{\bs{\lambda}}^{j}$ are defined in (\[LRT1\]) and (\[t-lambda\]), respectively. For each $j=1,2$, the random variable $(\widehat{\bs{t}}_{\bs{\lambda}}^{j})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} \widehat{\bs{t}}_{\bs{\lambda}}^{j}$ is the squared length, in the metric induced by $\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}$, of the projection of a Gaussian random vector onto the cone $\Lambda_{\bs{\lambda} }^{j}$. When $d=1$ with $\bs{\lambda}=(\lambda_\mu,\lambda_v)\t$, we have $\Lambda_{\bs{\lambda} }^{1} \cup \Lambda_{\bs{\lambda} }^{2} = \mathbb{R}^2$ and $LR_{n}(\epsilon_1) \rightarrow_d \bs{Z}_{\bs{\lambda}}\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} \bs{Z}_{\bs{\lambda}} \sim \chi^2(2)$. When $d=2$, we have $\bs{\lambda}=(\bs{\lambda}_{\bs{\mu}}\t,\bs{\lambda}_{\bs{v}}\t)\t=(\lambda_{\mu_1},\lambda_{\mu_2},\lambda_{v_{11}},\lambda_{v_{12}}, \lambda_{v_{22}})\t$, $$\begin{aligned} \bs{s}_{\bs{\mu v}} &= (\nabla_{\mu_1^3} f^*_v,\nabla_{\mu_1^2\mu_2} f^*_v,\nabla_{\mu_1\mu_2^2} f^*_v,\nabla_{\mu_2^3} f^*_v)\t/ 3! f^*_v, \\ \bs{s}_{\bs{\mu}^4} & = (\nabla_{\mu_1^4} f^*_v,\nabla_{\mu_1^3\mu_2} f^*_v,\nabla_{\mu_1^2\mu_2^2} f^*_v,\nabla_{\mu_1\mu_2^3} f^*_v,\nabla_{\mu_2^4} f^*_v)\t/ 4!
f^*_v, \end{aligned}$$ and $\Lambda_{\bs{\lambda} }^{j}$ is given by (\[Lambda-e\]) with $$\begin{aligned} \bs{t}_{\bs{\mu v}} & =(\lambda_{\mu_1}\lambda_{v_{11}},\lambda_{\mu_1}\lambda_{v_{12}}+\lambda_{\mu_2}\lambda_{v_{11}}, \lambda_{\mu_1}\lambda_{v_{22}}+\lambda_{\mu_2}\lambda_{v_{12}},\lambda_{\mu_2}\lambda_{v_{22}})\t,\\ \bs{t}_{\bs{\mu }^4} &= \begin{cases} ( \lambda_{v_{11}}^2, 2\lambda_{v_{11}}\lambda_{v_{12}}, 2\lambda_{v_{11}}\lambda_{v_{22}}+\lambda_{v_{12}}^2, 2\lambda_{v_{12}}\lambda_{v_{22}} , \lambda_{v_{22}}^2)\t& \text{if } j=1,\\ -(\lambda_{\mu_1}^4,4\lambda_{\mu_1}^3\lambda_{\mu_2},6\lambda_{\mu_1}^2\lambda_{\mu_2}^2,4\lambda_{\mu_1}\lambda_{\mu_2}^3,\lambda_{\mu_2}^4)\t& \text{if } j=2. \end{cases} \end{aligned}$$ Likelihood ratio test of $H_0: M = M_0$ against $H_A: M = M_0 + 1$ for $M_0\geq 2$ {#sec-general} ---------------------------------------------------------------------------------- This section establishes the asymptotic distribution of the LRTS for testing the null hypothesis of $M_0$ components against the alternative of $M_0+1$ components for general $M_0 \geq 1$. We consider a random sample of $n$ independent observations $\{\bs{X}_i,\bs{Z}_i\}_{i = 1}^n$ from the $M_0$-component $d$-variate finite normal mixture distribution, whose density with the true parameter value ${\bs{\vartheta}}_{M_0}^*=(\alpha_{1}^*,\ldots,\alpha_{M_0 - 1}^*,{\bs{\gamma}}^*,\bs{\mu}_1^*,\ldots,\bs{\mu}_{M}^*,\bs{\Sigma}_{1}^{*},\ldots,\bs{\Sigma}_{M}^{*})$ is $$f_{M_0}(\bs{x}|\bs{z};{\bs{\vartheta}}_{M_0}^*):=\sum_{j=1}^{M_0} \alpha_{j}^{*} f(\bs{x}|\bs{z};{\bs{\gamma}}^*, \bs{\mu}_{j}^{*},\bs{\Sigma}_{j}^{*}), \label{true_model}$$ where $\alpha_j^*>0$. We assume $(\bs{\mu}_{1}^*,\bs{\Sigma}^{*}_{1}) <\ldots< (\bs{\mu}_{M_0}^*,\bs{\Sigma}^{*}_{M_0})$ for identification. 
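For intuition, a draw from an $M_0$-component mixture of this form is generated by first drawing a component label with probabilities $\alpha_j^*$ and then drawing from the selected normal. A minimal univariate Python sketch (our own illustration, ignoring the covariate $\bs{z}$ and the parameter $\bs{\gamma}$):

```python
import random

def draw_mixture(alphas, means, sds, rng=random):
    """One draw from sum_j alphas[j] * N(means[j], sds[j]^2)."""
    j = rng.choices(range(len(alphas)), weights=alphas)[0]
    return rng.gauss(means[j], sds[j])

# e.g., a two-component sample with well-separated means
sample = [draw_mixture([0.5, 0.5], [0.0, 4.0], [1.0, 1.0]) for _ in range(10000)]
```

The multivariate, covariate-dependent case replaces `rng.gauss` with a draw from $N(\bs{\mu}_j^* , \bs{\Sigma}_j^*)$ given $\bs{z}$.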
Let the density of an $(M_0+1)$-component mixture model be $$f_{M_0+1}(\bs{x}|\bs{z};{\bs{\vartheta}}_{M_0+1}):=\sum_{j=1}^{M_0+1}\alpha_j f(\bs{x}|\bs{z};\bs{\gamma},\bs{\mu}_j,\bs{\Sigma}_j),\label{fitted_model}$$ where ${\bs{\vartheta}}_{M_0+1} = (\alpha_1,\ldots,\alpha_{M_0},\bs{\gamma},\bs{\mu}_1,\ldots,\bs{\mu}_{M_0+1},\bs{\Sigma}_1,\ldots,\bs{\Sigma}_{M_0+1})$. As in the test of homogeneity, we partition the null hypothesis into two parts as $H_0 = H_{01} \cup H_{02}$, where $H_{01} := \cup_{m=1}^{M_0} H_{0,1m}$ and $H_{02} := \cup_{m=1}^{M_0+1} H_{0,2m}$ with $$H_{0,1m}: (\bs{\mu}_1,\bs{\Sigma}_1) < \cdots < (\bs{\mu}_{m},\bs{\Sigma}_{m}) = (\bs{\mu}_{m + 1},\bs{\Sigma}_{m + 1}) < \cdots < (\bs{\mu}_{M_0 + 1},\bs{\Sigma}_{M_0 + 1}) \ \text{and}\ H_{0,2m}: \alpha_m = 0.$$ The inequality constraints are imposed on $(\bs{\mu}_j,\bs{\Sigma}_j)$ for identification. We focus on testing $H_{01}$ because testing $H_{02}$ involves infinite Fisher information unless a stringent restriction is imposed on the admissible values of $\bs{\Sigma}_j$ [@kasaharashimotsu15jasa]. Define the set of values of ${\bs{\vartheta}}_{M_0+1}$ that yield the true density (\[true\_model\]) as $$\Upsilon^*:=\{{\bs{\vartheta}}_{M_0 + 1}: f_{M_0 + 1}(\bs{X}|\bs{Z};{\bs{\vartheta}}_{M_0 + 1}) = f_{M_0}(\bs{X}|\bs{Z};{\bs{\vartheta}}_{M_0}^*)\text{ with probability one}\}.$$ Under $H_{0,1m}$, the $(M_0 + 1)$-component model (\[fitted\_model\]) generates the true $M_0$-component density (\[true\_model\]) when $(\bs{\mu}_m,\bs{\Sigma}_m) = (\bs{\mu}_{m + 1},\bs{\Sigma}_{m + 1})=(\bs{\mu}_{m}^{*},\bs{\Sigma}^{*}_{m})$. Define the subset of $\Upsilon^*$ corresponding to $H_{0,1m}$ as $$\begin{aligned} \Upsilon_{1m}^* &:= \left\{{\bs{\vartheta}}_{M_0 + 1} \in \Theta_{{\bs{\vartheta}}_{M_0 + 1}}:\ \alpha_j = \alpha_{j}^{*}\ \text{and}\ (\bs{\mu}_j,\bs{\Sigma}_j)=(\bs{\mu}_{j}^*,\bs{\Sigma}_{j}^{*})\ \text{for $j < m$}; \right. \nonumber \\ & \qquad \left.
\alpha_m + \alpha_{m + 1} = \alpha_{m}^{*}\ \text{and}\ (\bs{\mu}_m,\bs{\Sigma}_m)=(\bs{\mu}_{m + 1},\bs{\Sigma}_{m + 1})=(\bs{\mu}_{m}^{*},\bs{\Sigma}_{m}^{*});\ \right.\nonumber \\ & \qquad \left. \alpha_{j}=\alpha_{j - 1}^*\ \text{and}\ (\bs{\mu}_{j},\bs{\Sigma}_j)=(\bs{\mu}_{j - 1}^*,\bs{\Sigma}_{j - 1}^{*})\ \text{for $j> m + 1$};\ {\bs{\gamma}}={\bs{\gamma}}^* \right\}, \end{aligned}$$ and define $\Upsilon_1^*:= \Upsilon_{11}^* \cup \cdots \cup \Upsilon_{1M_0}^*$. Let $\Theta_{{\bs{\vartheta}}_{M_0 + 1}}(\epsilon_1)$ be a subset of $\Theta_{{\bs{\vartheta}}_{M_0 + 1}}$ such that $\alpha_j\in[\epsilon_1,1-\epsilon_1]$ for $j = 1,\ldots,M_0 + 1$, and define the PMLE by $$\label{PMLEs} \begin{aligned} \widehat{\bs{\vartheta}}_{M_0+1}(\epsilon_1) & := \mathop{\arg \max}_{{\bs{\vartheta}}_{M_0+1}\in\Theta_{{\bs{\vartheta}}_{M_0+1}}(\epsilon_1)}PL_n({\bs{\vartheta}}_{M_0 + 1}),\\ \widehat{\bs{\vartheta}}_{M_0} & := \mathop{\arg \max}_{{\bs{\vartheta}}_{M_0}\in\Theta_{{\bs{\vartheta}}_{M_0}}}PL_{0,n}({\bs{\vartheta}}_{M_0}), \end{aligned}$$ where $PL_n({\bs{\vartheta}}_{M_0 + 1}):=L_n({\bs{\vartheta}}_{M_0 + 1})+p_n(\bs{\vartheta}_{M_0+1})$ and $PL_{0,n}({\bs{\vartheta}}_{M_0}):=L_{0,n}({\bs{\vartheta}}_{M_0})+p_n(\bs{\vartheta}_{M_0})$ with $L_n({\bs{\vartheta}}_{M_0 + 1}):=\sum_{i = 1}^n \log f_{M_0 + 1}(\bs{X}_i|\bs{Z}_i;{\bs{\vartheta}}_{M_0 + 1})$ and $L_{0,n}({\bs{\vartheta}}_{M_0}):=\sum_{i = 1}^n \log f_{M_0}(\bs{X}_i|\bs{Z}_i;{\bs{\vartheta}}_{M_0})$ for the densities (\[true\_model\])–(\[fitted\_model\]) and the penalty function in (\[pen\_pmle\]).
We consider the LRTS for testing $H_{01}$ given by $$\label{LR-M_0} LR_{n}^{M_0}(\epsilon_1):= 2\{L_n(\widehat{{\bs{\vartheta}}}_{M_0+1}(\epsilon_1))- L_{0,n}(\widehat{{\bs{\vartheta}}}_{M_0})\}.$$ Collect the score vectors for testing $H_{0,11},\ldots,H_{0,1M_0}$ into one vector as $$\widetilde{\bs{s}}(\bs{x},\bs{z}) := \begin{pmatrix} \widetilde{\bs{s}}_{\bs{\eta}} \\ \widetilde{\bs{s}}_{\bs{\lambda}} \end{pmatrix},\ \text{ where }\ \widetilde{\bs{s}}_{\bs{\eta}}:= \begin{pmatrix} \bs{s}_{\bs{\alpha}} \\ \bs{s}_{(\bs{\gamma},\bs{\mu}, \bs{v})} \end{pmatrix}\ \text{and}\ \widetilde{\bs{s}}_{\bs{\lambda}}:= \begin{pmatrix} \bs{s}_{\bs{\mu v}}^1 \\ \bs{s}_{\bs{\mu}^4}^1 \\ \vdots \\ \bs{s}_{\bs{\mu v}}^{M_0} \\ \bs{s}_{\bs{\mu}^4}^{M_0} \end{pmatrix}, \label{stilde}$$ where, with $f_0^*:=f_{M_0}(\bs{x}|\bs{z};{\bs{\vartheta}}_{M_0}^*)$ and for $m=1,\ldots,M_0$, $$\label{sh} \begin{aligned} \bs{s}_{\bs{\alpha}} & := \begin{pmatrix} f_v(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}_1^{*},\bs{v}_1^*)-f_v(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}_{M_0}^{*},\bs{v}_{M_0}^*) \\ \vdots \\ f_v(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}_{M_0-1}^{*},\bs{v}_{M_0-1}^*)-f_v(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}^{*}_{M_0},\bs{v}_{M_0}^*) \end{pmatrix} \Bigl/ f_0^*, \bigr. \\ \bs{s}_{(\bs{\gamma},\bs{\mu}, \bs{v})} & :=\sum_{m=1}^{M_0}\alpha_{m}^* \nabla_{(\bs{\gamma}\t,\bs{\mu}\t, \bs{v}\t)\t} f_v(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}_m^{*},\bs{v}_m^*)/ f_0^*, \\ \bs{s}_{\bs{\mu v}}^m &:= \left\{\alpha_{m}^* \nabla_{\mu_i \mu_j \mu_k} f_v(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}_m^{*},\bs{v}_m^*) / (3! f_0^*) \right\}_{1 \leq i \leq j \leq k \leq d}, \\ \bs{s}_{\bs{\mu}^4}^m &:= \left\{ \alpha_{m}^* \nabla_{\mu_i \mu_j \mu_k \mu_\ell} f_v(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}_m^{*},\bs{v}_m^*) / (4! f_0^*) \right\}_{1 \leq i \leq j \leq k \leq \ell \leq d} .
\end{aligned}$$ Define $$\label{Itilde} \begin{aligned} &\widetilde{\bs{\mathcal{I}}}:= E[ \widetilde{\bs{s}}(\bs{X},\bs{Z})\widetilde{\bs{s}}(\bs{X},\bs{Z})^{\top}],\ \widetilde{\bs{\mathcal{I}}}_{\bs{\eta}}:= E[\widetilde {\bs{s}}_{\bs{\eta}} \widetilde {\bs{s}}_{\bs{\eta}}^{\top}], \ \widetilde{\bs{\mathcal{I}}}_{\bs{\lambda}\bs{\eta}}:= E[\widetilde {\bs{s}}_{\bs{\lambda}} \widetilde {\bs{s}}_{\bs{\eta}}^{\top}], \ \\ &\widetilde{\bs{\mathcal{I}}}_{\bs{\eta\lambda}}:= \widetilde{\bs{\mathcal{I}}}_{\bs{\lambda}\bs{\eta}}^{\top},\quad \widetilde{\bs{\mathcal{I}}}_{\bs{\lambda}}:= E[\widetilde {\bs{s}}_{\bs{\lambda}} \widetilde {\bs{s}}_{\bs{\lambda}}^{\top}],\ \widetilde{\bs{\mathcal{I}}}_{\bs{\lambda}.\bs{\eta}} :=\widetilde{\bs{\mathcal{I}}}_{\bs{\lambda}} - \widetilde{\bs{\mathcal{I}}}_{\bs{\lambda}\bs{\eta}}\widetilde{\bs{\mathcal{I}}}_{\bs{\eta}}^{-1} \widetilde{\bs{\mathcal{I}}}_{\bs{\eta\lambda}}. \end{aligned}$$ Let $\widetilde{\bs{G}}_{\bs{\lambda}.\bs{\eta}}=((\bs{G}_{\bs{\lambda}.\bs{\eta}}^{1})^{\top},\ldots,(\bs{G}_{\bs{\lambda}.\bs{\eta}}^{M_0})^\top)^\top \sim N(0,\widetilde{\bs{\mathcal{I}}}_{\bs{\lambda}.\bs{\eta}})$ be an $\mathbb{R}^{M_0 (d_{\mu v} + d_{\mu^4})}$–valued random vector, and define ${\bs{\mathcal{I}}}_{\bs{\lambda}.\bs{\eta}}^m:=E[\bs{G}_{\bs{\lambda}.\bs{\eta}}^m (\bs{G}_{\bs{\lambda}.\bs{\eta}}^m)^\top]$ and $\bs{Z}_{\bs{\lambda}}^m:=({\bs{\mathcal{I}}}_{\bs{\lambda}.\bs{\eta}}^m)^{-1}\bs{G}_{\bs{\lambda}.\bs{\eta}}^m$. 
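The efficient information $\widetilde{\bs{\mathcal{I}}}_{\bs{\lambda}.\bs{\eta}}$ in (\[Itilde\]) is the Schur complement of the $\bs{\eta}$-block: the covariance of the residual from projecting $\widetilde{\bs{s}}_{\bs{\lambda}}$ on $\widetilde{\bs{s}}_{\bs{\eta}}$. A tiny numerical sketch (illustrative numbers only, written out for a scalar $\bs{\eta}$-block and a $2\times 2$ $\bs{\lambda}$-block):

```python
def schur_complement(I_eta, I_lam_eta, I_lam):
    """I_{lambda.eta} = I_lambda - I_{lambda eta} I_eta^{-1} I_{eta lambda},
    for scalar I_eta, length-2 vector I_lam_eta, and 2x2 list-of-lists I_lam."""
    return [[I_lam[a][b] - I_lam_eta[a] * I_lam_eta[b] / I_eta
             for b in range(2)] for a in range(2)]
```

In general one would use a matrix library for the inner inverse; the scalar case suffices to show the subtraction of the explained part.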
For $j=1,2$, similar to $\widehat{\bs{t}}_{\bs{\lambda}}^{j}$ in the test of homogeneity, define $\widehat{\bs{t}}_{\bs{\lambda},m}^{j}$ by $$r^m(\widehat{\bs{t}}_{\bs{\lambda},m}^{j}) = \inf_{\bs{t}_{\bs{\lambda}} \in \Lambda_{\bs{\lambda} }^{j}}r^m(\bs{t}_{\bs{\lambda}}), \quad r^m(\bs{t}_{\bs{\lambda}}) := (\bs{t}_{\bs{\lambda}} -\bs{Z}_{\bs{\lambda}}^m)\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}^m (\bs{t}_{\bs{\lambda}} -\bs{Z}_{\bs{\lambda}}^m),$$ where $\Lambda_{\bs{\lambda} }^{j}$ is defined in (\[Lambda-e\]). In the neighborhood of each $\Upsilon_{1m}^*$, the log-likelihood function permits a quadratic approximation in terms of polynomials of the parameters, similar to testing $H_0:M=1$ against $H_A:M=2$; consequently, the LRTS is asymptotically distributed as the maximum of $M_0$ random variables. The following proposition gives the asymptotic null distribution of the LRTS for testing $H_{01}$. \[A-vec-2\] (a) $\alpha_{j}^*\in [\epsilon_1,1-\epsilon_1]$ for $j = 1,\ldots,M_0$. (b) $\widetilde{\bs{\mathcal{I}}}$ defined in (\[Itilde\]) is nonsingular. \[local\_lr-2\] Suppose that Assumptions \[assn\_consis\], \[A-taylor1\], and \[A-vec-2\] hold and $a_n$ in (\[pen\_pmle\]) satisfies $a_n = o(1)$. Then, under the null hypothesis $H_0: M=M_0$, $LR_{n}^{M_0}(\epsilon_1) \rightarrow_d \max\{v_1,\ldots, v_{M_0}\}$, where $v_m := \max\left\{ (\widehat{\bs{t}}_{\bs{\lambda},m}^{1})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}^m \widehat{\bs{t}}_{\bs{\lambda},m}^1, (\widehat{\bs{t}}_{\bs{\lambda},m}^{2})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}^m \widehat{\bs{t}}_{\bs{\lambda},m}^2\right\}$. EM test {#section:emtest} ======= Implementing the likelihood ratio test in Section \[sec:hetero\] requires the researcher to choose a lower bound $\epsilon_1$ on $\alpha_j$ and assume $\alpha_j^* >\epsilon_1$. In this section, we develop an EM test of $H_0:M= M_0$ against $H_A:M = M_0 + 1$ that does not require such a lower bound on $\alpha_j$.
For brevity, we suppress the covariate $\bs{Z}$ in this section. First, we develop an EM test statistic for testing $H_{0,1m}: (\bs{\mu}_m,\bs{\Sigma}_m)=(\bs{\mu}_{m + 1},\bs{\Sigma}_{m + 1})$. We construct $M_0$ sets $\{D_1^*,\cdots,D_{M_0}^*\}$ of admissible values of $(\bs{\mu},\bs{\Sigma})$ such that $D_m^*$ contains $(\bs{\mu}_m^*,\bs{\Sigma}_m^{*})$ but none of the other $(\bs{\mu}_j^*,\bs{\Sigma}_j^{*})$’s for $j \neq m$. For example, as in our simulation, we may assume that the first elements of the $\bs{\mu}_j^*$’s are distinct, let $\overline{\mu}_j^*:=(\mu_{j1}^* +\mu_{j + 1,1}^*)/2$ with $\mu_{j1}$ denoting the first element of $\bs{\mu}_j$, and set $D_1^* = (-\infty , \overline{\mu}_{1}^*] \times\Theta_{\tilde{\bs{\mu}}}\times\Theta_{\bs{\Sigma}}$, $D_j^* =[\overline{\mu}_{j-1}^*, \overline{\mu}_{j}^*]\times\Theta_{\tilde{\bs{\mu}}}\times\Theta_{\bs{\Sigma}}$ for $j = 2,\ldots,M_0 - 1$, and $D_{M_0}^* = [\overline{\mu}_{M_0 - 1}^*, \infty ) \times\Theta_{\tilde{\bs{\mu}}} \times \Theta_{\bs{\Sigma}}$, where $\Theta_{\tilde{\bs{\mu}}}$ denotes the space of $\tilde{\bs{\mu}}:=(\mu_2,\ldots,\mu_d)\t$. Collect the mixing parameters of the $(M_0 + 1)$-component model into one vector as $\bs{\varsigma}:=(\bs{\mu}_1,\ldots,\bs{\mu}_{M_0 + 1},\bs{\Sigma}_1,\ldots,\bs{\Sigma}_{M_0 + 1}) \in \Theta_{\bs{\varsigma}}:=\Theta_{\bs{\mu}}^{M_0 + 1}\times \Theta_{\bs{\Sigma}}^{M_0 + 1}$. For $m = 1,\ldots,M_0$, define a restricted parameter space of $\bs{\varsigma}$ by $\Xi_m^*: = \{\bs{\varsigma} \in \Theta_{\bs{\varsigma}}: (\bs{\mu}_j,\bs{\Sigma}_j) \in D_j^* \ \text{for}\ j = 1,\ldots, m - 1;\ (\bs{\mu}_m,\bs{\Sigma}_m),\ (\bs{\mu}_{m + 1},\bs{\Sigma}_{m + 1})\in D_m^*;\ (\bs{\mu}_j,\bs{\Sigma}_j) \in D_{j - 1}^* \ \text{for}\ j = m + 2,\ldots,M_0 + 1 \}$. Let $\widehat{\Xi}_m$ and $\widehat{D}_m$ be consistent estimates of $\Xi_m^*$ and $D_m^*$, which can be constructed from the PMLE of the $M_0$-component model.
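The midpoint construction of the sets $D_j^*$ restricts only the first mean coordinate, the $\Theta_{\tilde{\bs{\mu}}}\times\Theta_{\bs{\Sigma}}$ factors being unrestricted. A sketch of the first-coordinate intervals (function name ours):

```python
import math

def first_coordinate_cells(mu_first):
    """Given sorted first components [mu_11*, ..., mu_{M0,1}*], return the M0
    intervals (-inf, m_1], [m_1, m_2], ..., [m_{M0-1}, inf) whose endpoints
    are the midpoints m_j = (mu_{j1}* + mu_{j+1,1}*) / 2."""
    mids = [(a + b) / 2 for a, b in zip(mu_first, mu_first[1:])]
    lower = [-math.inf] + mids
    upper = mids + [math.inf]
    return list(zip(lower, upper))
```

In practice the true means are replaced by the $M_0$-component PMLE, giving the consistent estimates $\widehat{D}_m$.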
We test $H_{0,1m}: (\bs{\mu}_m,\bs{\Sigma}_m)=(\bs{\mu}_{m + 1},\bs{\Sigma}_{m + 1})$ by estimating the $(M_0 + 1)$-component model (\[fitted\_model\]) under the restriction $\bs{\varsigma} \in \widehat{\Xi}_m$. For example, when we test $H_{0,11}:(\bs{\mu}_1,\bs{\Sigma}_1) = (\bs{\mu}_2,\bs{\Sigma}_2)$ in a three-component model, the restriction can be given as $(\bs{\mu}_1,\bs{\Sigma}_1), (\bs{\mu}_2,\bs{\Sigma}_2) \in \widehat{D}_1$ and $(\bs{\mu}_3,\bs{\Sigma}_3) \in \widehat{D}_2$. Define the penalty term $p_n^m(\bs{\vartheta}_{M_0+1})$ on the $\bs{\Sigma}_j$’s as $$\label{penalty-em} p_n^m(\bs{\vartheta}_{M_0+1}) := \sum_{j=1}^{M_0+1}p^m_{n}(\bs{\Sigma}_j;\bs{\Omega}_j), \quad p^m_{n}(\bs{\Sigma}_j;\bs{\Omega}_j):=- a_n \left\{ \text{tr}(\bs{\Omega}_j\bs{\Sigma}_j^{-1}) - \log ( \det({\bs{\Omega}}_j\bs{\Sigma}_j^{-1})) -d \right\},$$ with $\bs{\Omega}_j = \widehat{\bs{\Sigma}}_j$ for $j=1,\ldots,m-1$, $\bs{\Omega}_j =\widehat{\bs{\Sigma}}_m$ for $j=m,m+1$, and $\bs{\Omega}_j =\widehat{\bs{\Sigma}}_{j-1}$ for $j=m+2,\ldots,M_0+1$, where $\widehat{\bs{\Sigma}}_j$ is a consistent estimator of $\bs{\Sigma}_j$ from the $M_0$-component PMLE. This penalty term is a multivariate version of the one in @chenlifu12jasa and satisfies $p^m_{n}(\bs{\Omega}_j;\bs{\Omega}_j)=0$. Let $\mathcal{T}$ be a finite set of numbers from $(0,0.5]$, and let $p(\tau) \leq 0$ be a penalty term that is continuous in $\tau$ and satisfies $p(0.5)=0$ and $p(\tau) \to -\infty$ as $\tau \to 0$.
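The penalty $p^m_{n}(\bs{\Sigma}_j;\bs{\Omega}_j)$ in (\[penalty-em\]) is nonpositive and vanishes exactly at $\bs{\Sigma}_j=\bs{\Omega}_j$, since $x - \log x \geq 1$ for each eigenvalue $x$ of $\bs{\Omega}_j\bs{\Sigma}_j^{-1}$. A sketch for the diagonal case, where the trace and log-determinant reduce to sums (our own illustration):

```python
import math

def penalty_diag(sigma_diag, omega_diag, a_n):
    """-a_n * { tr(Omega Sigma^{-1}) - log det(Omega Sigma^{-1}) - d }
    for diagonal Sigma and Omega with positive diagonal entries."""
    d = len(sigma_diag)
    ratios = [w / s for w, s in zip(omega_diag, sigma_diag)]
    return -a_n * (sum(ratios) - sum(math.log(r) for r in ratios) - d)
```

The penalty keeps the fitted $\bs{\Sigma}_j$ away from singularity, which is what rules out the unbounded-likelihood problem of heteroscedastic normal mixtures.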
For each $\tau_0 \in \mathcal{T}$, define the restricted penalized MLE as $\bs{\vartheta}_{M_0+1}^{m(1)}(\tau_0) := \mathop{\arg \max}_{\bs{\vartheta}_{M_0+1} \in \Theta^m(\tau_0)} (PL_n^m({\bs{\vartheta}}_{M_0 + 1}) + p(\tau_0))$, where $\Theta^m(\tau) := \{ \bs{\vartheta}_{M_0+1} \in \Theta_{\bs{\vartheta}_{M_0+1}}: \alpha_{m}/(\alpha_{m} + \alpha_{m + 1})=\tau \text{ and } \bs{\varsigma}\in \widehat{\Xi}_m\}$ and $PL_{n}^m({\bs{\vartheta}}_{M_0+1}):= \sum_{i=1}^n \log f_{M_0+1}(\bs{X}_i ;\bs{\vartheta}_{M_0+1}) + p_n^m(\bs{\vartheta}_{M_0+1})$. Starting from $\bs{\vartheta}_{M_0+1}^{m(1)}(\tau_0)$, we update $\bs{\vartheta}_{M_0+1}$ by the following generalized EM algorithm. Henceforth, we suppress $(\tau_0)$ from $\bs{\vartheta}_{M_0+1}^{m(k)}(\tau_0)$. Suppose we have already calculated $\bs{\vartheta}_{M_0+1}^{m(k)}$. For $i = 1,\ldots,n$ and $j = 1,\ldots,M_0+1$, define the weights for an E-step as $$\begin{aligned} w_{ij}^{(k)} &:= \begin{cases} \alpha_j^{(k)} f(\bs{X}_i;{\bs{\mu}}_j^{(k)},\bs{\Sigma}_j^{(k)}) / f_{M_0+1}(\bs{X}_i;{\bs{\vartheta}}_{M_0+1}^{m(k)}) & \mbox{for } j=1,\ldots,m-1,\\ \alpha_{j}^{(k)} f(\bs{X}_i;{\bs{\mu}}_j^{(k)},\bs{\Sigma}_j^{(k)}) / f_{M_0+1}(\bs{X}_i;{\bs{\vartheta}}_{M_0+1}^{m(k)}) & \mbox{for } j=m+2,\ldots,M_0+1,\\ \end{cases} \\ w_{im}^{(k)} &:= \frac{\tau^{(k)}(\alpha_m^{(k)}+\alpha_{m+1}^{(k)}) f(\bs{X}_i;{\bs{\mu}}_m^{(k)},\bs{\Sigma}_m^{(k)})}{f_{M_0+1}(\bs{X}_i;{\bs{\vartheta}}_{M_0+1}^{m(k)})}, \ w_{i,m+1}^{(k)} := \frac{(1-\tau^{(k)})(\alpha_m^{(k)}+\alpha_{m+1}^{(k)})f(\bs{X}_i;{\bs{\mu}}_{m+1}^{(k)},\bs{\Sigma}_{m+1}^{(k)}) }{f_{M_0+1}(\bs{X}_i;{\bs{\vartheta}}_{M_0+1}^{m(k)})}.\end{aligned}$$ In an M-step, update $\tau$ and $\bs{\alpha}$ by $$\begin{aligned} \tau^{(k+1)} &:= \mathop{\arg \max}_{\tau} \left\{\sum_{i=1}^n w_{im}^{(k)} \log(\tau) + \sum_{i=1}^n w_{i,m+1}^{(k)} \log(1-\tau) + p(\tau) \right\},\\ \alpha_j^{(k+1)} &:= n^{-1} \sum_{i=1}^n w_{ij}^{(k)} \quad \mbox{for } j=1, \ldots,M_0+1, \end{aligned}$$ and
update $\bs{\mu}_j$ and $\bs{\Sigma}_j$ for $j=1,\ldots,M_0+1$ by $$\begin{aligned} \bs{\mu}_j^{(k+1)} & := \frac{\sum_{i=1}^n w_{ij}^{(k)} \bs{X}_i}{\sum_{i=1}^n w_{ij}^{(k)} },\quad \bs{\Sigma}_j^{(k+1)} := \frac{2a_n \bs{\Omega}_j + \bs{S}_j^{(k+1)}}{2a_n + \sum_{i=1}^n w_{ij}^{(k)}}, \end{aligned}$$ where $\bs{S}_j^{(k+1)}:= \sum_{i=1}^nw_{ij}^{(k)} \left(\bs{X}_i-\bs{\mu}_j^{(k+1)}\right) \left(\bs{X}_i-\bs{\mu}_j^{(k+1)}\right)\t$. The penalized likelihood value never decreases after each generalized EM step [@dempster77jrssb Theorem 1]. Note that $\bs{\vartheta}_{M_0+1}^{m(k)}$ for $k \geq 2$ does not use the restriction $\widehat{\Xi}_m$. For each $\tau_0 \in \mathcal{T}$ and $k$, define $$\text{M}_{n}^{m(k)}(\tau_0) := 2\left\{PL_{n}^m(\bs{\vartheta}_{M_0+1}^{m(k)}(\tau_0) ) +p(\tau^{(k)}) - L_{0,n}(\widehat{\bs{\vartheta}}_{M_0}) \right\}, $$ where $\widehat{\bs{\vartheta}}_{M_0}$ and $L_{0,n}({\bs{\vartheta}}_{M_0})$ are defined in (\[PMLEs\]). Finally, with a pre-specified number $K$, define the *local EM test statistic* for testing $H_{0,1m}$ by taking the maximum of $\text{M}_n^{m(K)}(\tau_0)$ over $\tau_0\in\mathcal{T}$ as $\text{EM}_{n}^{m(K)} := \max \{\text{M}_{n}^{m(K)}(\tau_0): \tau_0 \in \mathcal{T} \}$. The *EM test statistic* is defined as the maximum of $M_0$ local EM test statistics: $$\label{EM-test} \text{EM}_{n}^{(K)}: = \max\left\{\text{EM}_{n}^{1(K)},\text{EM}_{n}^{2(K)},\ldots,\text{EM}_{n}^{M_0(K)}\right\}.$$ The following proposition shows that for any finite $K$, the EM test statistic is asymptotically equivalent to the penalized LRTS for testing $H_{01}$. \[EM\_stat-1\] Suppose that Assumptions \[assn\_consis\] and \[A-vec-2\] hold, $a_n$ in (\[penalty-em\]) satisfies $a_n = O(1)$, and $0.5 \in \mathcal{T}$.
Then, under the null hypothesis $H_0: M=M_0$, for any fixed finite $K$, $\text{EM}_{n}^{{(K)}} \rightarrow_d \max\{v_1,\ldots, v_{M_0}\}$ as $n \rightarrow \infty$, where the $v_m$’s are given in Proposition \[local\_lr-2\]. Asymptotic distribution under local alternatives ================================================ In this section, we derive the asymptotic distribution of our LRTS and EM test statistic under local alternatives. For brevity, we focus on the case of testing $H_0: M=1$ against $H_A: M=2$. Given a local parameter $\bs{h} =(\bs{h}_{\bs \eta}\t, \bs{h}_{\bs \lambda }\t)\t$ and $\alpha \in (\epsilon_1,1-\epsilon_1)$, we consider the sequence of contiguous local alternatives $\bs{\vartheta}_{n} = (\bs{\psi}_n\t,\alpha_n)\t = (\bs{\eta}_n\t,\bs\lambda_n\t,\alpha_n)\t\in\Theta_{\bs{\eta}}\times\Theta_{\bs\lambda}\times\Theta_{\alpha}$ such that, with $\bs{t}_{\bs\lambda}(\bs\lambda,\alpha)$ given by (\[tpsi\_defn\]), $$\label{local-alternative} \bs{h}_{\bs \eta}= \sqrt{n}(\bs{\eta}_n-\bs\eta^*),\quad \bs{h}_{\bs\lambda} =\sqrt{n} \bs{t}_{\bs\lambda}(\bs\lambda_{n},\alpha_n)+o(1),\quad \text{and }\ \alpha_n = \alpha+o(1).$$ Let $\mathbb{P}_{\bs{\vartheta}}^n$ be the probability measure on $\{\bs{X}_i\}_{i=1}^n$ conditional on $\{\bs{Z}_i\}_{i=1}^n$ under $\bs{\vartheta}$. Then, for the density (\[loglike\]), the log-likelihood ratio is given by $$\log \frac{d\mathbb{P}_{\bs{\vartheta}_n}^n}{d \mathbb{P}_{\bs{\vartheta}^*}^n} = L_n(\bs{\psi}_n,\alpha_n)-L_n(\bs{\psi}^*,\alpha)= \sum_{i=1}^n \log \left( \frac{g(\bs{X}_i|\bs{Z}_i;\bs{\eta}_n,\bs{\lambda}_n,\alpha_n)}{g(\bs{X}_i|\bs{Z}_i;\bs{\eta}^*,\bs{0},\alpha)} \right).$$ The following proposition provides the asymptotic distribution of the LRTS under contiguous local alternatives. \[P-LAN2\] Suppose that the assumptions of Proposition \[P-LR-N1\] hold.
Consider a sequence of contiguous local alternatives $\bs{\vartheta}_{n} = ((\bs{\eta}^*)\t,\bs{\lambda}_n\t,\alpha_n)\t$, where $\bs{\lambda}_n$ and $\alpha_n$ satisfy (\[local-alternative\]). Then, under $H_{1n}: \bs{\vartheta} = \bs{\vartheta}_{n}$, we have $LR_{n}(\epsilon_1) \rightarrow_d \max\left\{ (\widehat{\bs{t}}_{\bs{\lambda}}^{1})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} \widehat{\bs{t}}_{\bs{\lambda}}^{1}, (\widehat{\bs{t}}_{\bs{\lambda}}^{2})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} \widehat{\bs{t}}_{\bs{\lambda}}^{2} \right\}$, where $\widehat{\bs{t}}_{\bs{\lambda}}^1$ and $\widehat{\bs{t}}_{\bs{\lambda}}^2$ are defined as in (\[t-lambda\]) but with $\bs{Z}_{\bs{\lambda}}$ replaced by $(\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}})^{-1} \bs{G}_{\bs{\lambda}.\bs{\eta}}+\bs{h}_{\bs\lambda}$. In this proposition, the local alternatives are implicitly defined through the condition that $\bs{h}_{\bs\lambda} =\sqrt{n} \bs{t}_{\bs\lambda} (\bs\lambda_{n} ,\alpha_n)+o(1)$. We now give an example for $d=1$, in which we explicitly construct local alternatives with different rates of shrinkage, including ones of order $n^{-1/8}$.
When $d=1$, for $\overline\alpha\in(\epsilon_1,1-\epsilon_1)$ and $\overline{\bs{\lambda}}:=(\overline{{\lambda}}_{\mu},\overline{{\lambda}}_{v})\t$, define $$\begin{aligned} &H_{1n}^a : \bs{\vartheta}_n^a=((\bs{\eta}_n^a)\t,(\bs{\lambda}_{n}^a)\t,\alpha_n^a)\t : = ((\bs\eta^*)\t,(\overline{\bs{\lambda}}/n^{1/4})\t,\overline\alpha + o(1))\t,\\ &H_{1n}^b : \bs{\vartheta}_n^b=((\bs{\eta}_n^b)\t,(\bs{\lambda}_{n}^b)\t,\alpha_n^b)\t : = ((\bs\eta^*)\t,(\overline{{\lambda}}_{\mu}/n^{1/8},\overline{{\lambda}}_{v}/n^{3/8})\t,\overline\alpha + o(1))\t.\end{aligned}$$ Then, for $j\in\{a,b\}$, $\bs{h}_{\bs\lambda}^j =\sqrt{n} \bs{t}_{\bs\lambda} (\bs\lambda_{n}^j ,\alpha_n^j)+o(1)$ holds with $\bs{h}_{\bs\lambda}^a := 12\overline\alpha(1-\overline\alpha)\times ( \overline{{\lambda}}_{\mu} \overline{{\lambda}}_{v}, \overline{{\lambda}}_{v}^2)\t$ and $\bs{h}_{\bs\lambda}^b := \overline\alpha(1-\overline\alpha)\times ( 12\overline{{\lambda}}_{\mu} \overline{{\lambda}}_{v}, b(\overline\alpha)\overline{{\lambda}}_{\mu}^4 )\t$. Therefore, Proposition \[P-LAN2\] gives the asymptotic distribution of $LR_{n}(\epsilon_1)$. Parametric bootstrap ==================== Because it may not be easy to simulate the asymptotic distributions of the LRTS and the EM test statistic for testing $H_0: M=M_0$ against $H_A: M=M_0+1$, we establish the validity of the following parametric bootstrap procedure for obtaining the bootstrap critical value $c_{\alpha,B}$ and the bootstrap $p$-value. 1. Using the observed data, compute $\widehat{\bs{\vartheta}}_{M_0}$, $LR_{n}^{M_0}(\epsilon_1)$ in (\[LR-M\_0\]), and $\text{EM}_{n}^{{(K)}}$ in (\[EM-test\]). 2. Given $\widehat{\bs{\vartheta}}_{M_0}$, generate $B$ independent samples $\{\bs X_1^b,\ldots,\bs X_n^b\}_{b=1}^B$ under $H_0$ with $\bs\vartheta_{M_0}=\widehat{\bs{\vartheta}}_{M_0}$ conditional on the observed value of $\{\bs Z_1,\ldots,\bs Z_n\}$. 3.
For each simulated sample $\{\bs X_1^b,\ldots,\bs X_n^b\}$ with $\{\bs Z_1,\ldots,\bs Z_n\}$, compute $LR_{n}^{M_0,b}(\epsilon_1)$ and $\text{EM}_{n}^{{(K),b}}$ as in Step 1 for $b=1,\ldots,B$. 4. Let $c_{\alpha,B}$ be the $(1-\alpha)$ quantile of $\{LR_{n}^{M_0,b}\}_{b=1}^B$ or $\{EM_{n}^{(K),b}\}_{b=1}^B$, and define the bootstrap $p$-value as $B^{-1}\sum_{b=1}^B \mathbb{I}\{ LR_{n}^{M_0,b} > LR_{ n}^{M_0}\}$ or $B^{-1}\sum_{b=1}^B \mathbb{I}\{ EM_{n}^{(K),b} > EM_{ n}^{(K)}\}$. The following proposition shows the consistency of the bootstrap critical values $c_{\alpha,B}$ for testing $H_0: M=1$. The case of testing $H_0:M=M_0$ for $M_0\geq 2$ can be proven similarly. \[P-bootstrap\] Suppose that the assumptions of Proposition \[P-LR-N1\] hold. Then, the bootstrap critical values $c_{\alpha,B}$ converge to the asymptotic critical values in probability as $n$ and $B$ go to infinity, both under $H_0$ and under the local alternatives described in Proposition \[P-LAN2\]. Homoscedastic multivariate finite normal mixture models ======================================================== In this section, we consider testing the order of homoscedastic multivariate normal mixtures. We consider the likelihood ratio test but not the EM test because, unlike heteroscedastic normal mixtures, homoscedastic normal mixture models do not suffer from infinite Fisher information or an unbounded likelihood. Likelihood ratio test of $H_0: M = 1$ against $H_A: M = 2$ {#likelihood-ratio-test-of-h_0-m-1-against-h_a-m-2} ---------------------------------------------------------- Consider a two-component normal mixture density function with common variance: $$f_2(\bs{x}|\bs{z};\bs{\vartheta}_2) := \alpha f(\bs{x}|\bs{z}; \bs{\gamma} ,\bs{\mu}_1,\bs{\Sigma}) + (1-\alpha) f(\bs{x}|\bs{z}; \bs{\gamma},\bs{\mu}_2,\bs{\Sigma}), $$ with ${\bs{\vartheta}}_2 = (\alpha,\bs{\gamma}, \bs{\mu}_1, \bs{\mu}_2,\bs{\Sigma}) \in \Theta_{{\bs{\vartheta}}_2}$.
We impose the following compactness condition. \[assn\_compact\] The parameter space $\Theta_{{\bs{\vartheta}}_2}$ is compact. Assume $\alpha\in [0,3/4]$ without loss of generality. Then, the null hypothesis $H_0:M=1$ is written as $$H_0: \alpha(\bs{\mu}_1-\bs{\mu}_2)=\bs{0}.$$ For an arbitrarily small $\zeta>0$, we partition the parameter space as $\Theta_{{\bs{\vartheta}}_2} = \Theta_{{\bs{\vartheta}}_2,\zeta}^1 \cup \Theta_{{\bs{\vartheta}}_2,\zeta}^2 $, where $$\Theta_{{\bs{\vartheta}}_2,\zeta}^1 =\{{\bs{\vartheta}}_2\in \Theta_{{\bs{\vartheta}}_2} : |\bs{\mu}_{1}- \bs{\mu}_{2}|\leq \zeta \}\ \ \text{and}\ \ \Theta_{{\bs{\vartheta}}_2,\zeta}^2 =\{{\bs{\vartheta}}_2\in \Theta_{{\bs{\vartheta}}_2}: |\bs{\mu}_{1}- \bs{\mu}_{2}| \geq \zeta \}.$$ Let $L_n({\bs{\vartheta}}_2): = \sum_{i = 1}^n \log f_2( \bs{X}_i|\bs{Z}_i;{\bs{\vartheta}}_2)$ denote the log-likelihood function, and define the two-component MLE by $\widehat{\bs{\vartheta}}_2 :=\arg\max_{ {\bs{\vartheta}}_2 \in \Theta_{{\bs{\vartheta}}_2}} L_n({\bs{\vartheta}}_2)$. Define the restricted two-component MLE by $\widehat{\bs{\vartheta}}_{2,\zeta}^j :=\arg\max_{{\bs{\vartheta}}_2 \in \Theta_{{\bs{\vartheta}}_2,\zeta}^j } L_n({\bs{\vartheta}}_2)$ for $j=1,2$ so that $L_n(\widehat {\bs{\vartheta}}_2) = \max\{ L_n(\widehat {\bs{\vartheta}}_{2,\zeta}^1 ),L_n(\widehat {\bs{\vartheta}}_{2,\zeta}^2 ) \}$. Let $(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)$ denote the one-component MLE that maximizes the one-component log-likelihood function $L_{0,n}(\bs{\gamma},\bs{\mu},\bs{\Sigma}) := \sum_{i=1}^n \log f ( \bs{X}_i|\bs{Z}_i;\bs{\gamma},\bs{\mu},\bs{\Sigma})$.
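Absent covariates (with $\bs{\gamma}$ suppressed), the one-component MLE has the familiar closed form: the sample mean and the maximum-likelihood (divide-by-$n$) covariance. A self-contained sketch:

```python
def one_component_mle(X):
    """Sample mean vector and ML covariance matrix for X, a list of d-lists."""
    n, d = len(X), len(X[0])
    mu = [sum(x[i] for x in X) / n for i in range(d)]
    Sigma = [[sum((x[i] - mu[i]) * (x[j] - mu[j]) for x in X) / n
              for j in range(d)] for i in range(d)]
    return mu, Sigma
```

With covariates, the maximization over $(\bs{\gamma},\bs{\mu},\bs{\Sigma})$ is no longer available in closed form and is done numerically.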
Define the LRTS for testing $H_0$ as $LR_{n}:= 2\{L_n(\widehat {\bs{\vartheta}}_2) - L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)\}=\max \{LR_{n,\zeta}^1 ,LR_{n,\zeta}^2 \}$, where $LR_{n,\zeta}^j := 2\{L_n(\widehat{\bs{\vartheta}}_{2,\zeta}^j ) - L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)\}$. In the following, we derive the asymptotic null distribution of $LR_{n,\zeta}^1$, $LR_{n,\zeta}^2$, and $LR_{n}$ using an approach similar to that for the heteroscedastic case. We introduce a reparameterization that extracts the direction in which the Fisher information matrix is singular and approximate the log-likelihood in terms of polynomials of the reparameterized parameters. ### Asymptotic distribution of $LR_{n,\zeta}^1$ In this section, we derive the asymptotic distribution of $LR_{n,\zeta}^1$. Let $\bs{v}=\bs{w}(\bs{\Sigma})$ and consider the following one-to-one mapping between $(\bs{\mu}_1,\bs{\mu}_2,\bs{v})$ and the reparameterized parameter $(\bs{\lambda} ,\bs{\nu}_{\bs\mu},\bs{\nu}_{\bs{v}})$: $$\label{repara2-homo} \begin{pmatrix} {\bs{\mu}}_1\\ {\bs{\mu}}_2\\ \bs{v} \end{pmatrix} = \begin{pmatrix} \bs{\nu}_{\bs\mu} + (1-\alpha) \bs{\lambda} \\ \bs{\nu}_{\bs\mu} -\alpha \bs{\lambda} \\ \bs{\nu}_{\bs{v}} - \alpha(1-\alpha) \bs{w}(\bs{\lambda}\bs{\lambda}\t) \end{pmatrix}.$$ In the reparameterized model, the density is given by $$\label{loglike-homo} \begin{aligned} g(\bs{x}|\bs{z};\bs{\psi},\alpha) & = \alpha f_v\left(\bs{x}\middle|\bs{z};\bs{\gamma},\bs{\nu}_{\bs\mu}+(1-\alpha)\bs{\lambda}, \bs{\nu}_{\bs{v}} - \alpha(1-\alpha) \bs{w}(\bs{\lambda}\bs{\lambda}\t)\right) \\ & \quad + (1 - \alpha) f_v \left(\bs{x} \middle|\bs{z};\bs{\gamma},\bs{\nu}_{\bs\mu} -\alpha\bs{\lambda},\bs{\nu}_{\bs{v}} - \alpha(1-\alpha) \bs{w}(\bs{\lambda}\bs{\lambda}\t) \right).
\end{aligned}$$ We partition $\bs{\psi}$ as $\bs{\psi} = (\bs{\eta}\t,\bs{\lambda}\t)\t$, where $\bs{\eta}: = (\bs{\gamma}\t,\bs{\nu}_{\bs\mu}\t,\bs{\nu}_{\bs{v}}\t)\t \in \Theta_{\bs{\eta}}$ and $\bs{\lambda} \in \Theta_{\bs{\lambda}}$. Denote the true values of $\bs{\eta}$, $\bs{\lambda}$, and $\bs{\psi}$ under $H_{0}$ by $\bs{\eta}^*: = ((\bs{\gamma}^*)\t,({\bs{\mu}}^*)\t,(\bs{v}^{*})\t)\t$, $\bs{\lambda}^*: = \bs{0}$, and $\bs{\psi}^* = ((\bs{\eta}^*)\t, \bs{0}\t)\t$, respectively. The first derivative of (\[loglike-homo\]) w.r.t. $\bs{\eta}$ under $\bs{\psi} = \bs{\psi}^*$ is given by (\[dvareta\]) and the first and second derivatives of $g(\bs{x}|\bs{z};\bs{\psi},\alpha)$ w.r.t. $\bs{\lambda}$ become zero when evaluated at $\bs{\psi}=\bs{\psi}^*$. Consequently, the information on $\bs{\lambda}$ is provided by the third and fourth derivatives w.r.t. $\bs{\lambda}$. Define the score vector $\bs{s}(\bs{x},\bs{z})$ as $$\label{score_defn-homo} \begin{aligned} \bs{s}(\bs{x},\bs{z}) & := \begin{pmatrix} \bs{s}_{\bs{\eta}}\\ \bs{s}_{\bs{\lambda}} \end{pmatrix} := \begin{pmatrix} \bs{s}_{\bs{\eta}}\\ \bs{s}_{\bs{\mu^3}}\\ \bs{s}_{\bs{\mu}^4} \end{pmatrix} \quad \text{with }\ \underset{(d \times 1)}{\bs{s}_{\bs{\eta}}} := \frac{\nabla_{(\bs{\gamma}\t,\bs{\mu}\t,\bs{v}\t)\t}f^*_v }{ f^*_v}, \\ \underset{(d_{\mu^3} \times 1)}{\bs{s}_{\bs{\mu}^3}} &:= \left\{\frac{\nabla_{\mu_i \mu_j \mu_k} f^*_v}{3! f^*_v} \right\}_{1 \leq i \leq j \leq k \leq d}, \quad \underset{(d_{\mu^4} \times 1)}{\bs{s}_{\bs{\mu}^4}} := \left\{ \frac{\nabla_{\mu_i \mu_j \mu_k \mu_\ell} f^*_v }{ 4!f^*_v} \right\}_{1 \leq i \leq j \leq k \leq \ell \leq d} , \end{aligned}$$ where we suppress the dependence of $(\bs{s}_{\bs{\eta}}, \bs{s}_{\bs{\mu}^3}, \bs{s}_{\bs{\mu}^4})$ on $(\bs{x},\bs{z})$. 
Collect the relevant reparameterized parameters as $$\label{tpsi_defn-homo} \bs{t}(\bs{\psi},\alpha) := \begin{pmatrix} \bs{\eta}-\bs{\eta}^*\\ \bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha) \end{pmatrix} := \begin{pmatrix} \bs{\eta}-\bs{\eta}^*\\ \alpha(1-\alpha)(1-2\alpha)\bs{\lambda}_{\bs{\mu}^3}\\ \alpha(1-\alpha)(1-6\alpha+6\alpha^2)\bs{\lambda}_{\bs{\mu}^4} \end{pmatrix},$$ where $\underset{(d_{\mu^3} \times 1)}{\bs{\lambda}_{\bs{\mu}^3}} = \{(\bs{\lambda}_{\bs{\mu}^3})_{ijk}\}_{1 \leq i \leq j \leq k \leq d}$ with $(\bs{\lambda}_{\bs{\mu }^3})_{ijk} := \sum_{(t_1,t_2,t_3) \in p(i,j,k)} \lambda_{t_1} \lambda_{t_2} \lambda_{t_3}$, where $\sum_{(t_1,t_2,t_3) \in p(i,j,k)}$ denotes the sum over all distinct permutations $(t_1,t_2,t_3)$ of $(i,j,k)$, and $\bs{\lambda}_{\bs{\mu}^4}$ is given in (\[lambda\_muv\_defn\]). The third and fourth order derivatives of the density ratio w.r.t. $\bs{\lambda}$ are given by $\alpha(1-\alpha)(1-2\alpha)\bs{\lambda}_{\bs{\mu}^3}\t \bs{s}_{\bs{\mu^3}}$ and $\alpha(1-\alpha)(1-6\alpha+6\alpha^2)\bs{\lambda}_{\bs{\mu}^4}\t\bs{s}_{\bs{\mu}^4}$, respectively. When $\alpha$ is bounded away from $1/2$, the third order derivative identifies $\bs{\lambda}$ because it dominates the fourth order derivative as $\bs{\lambda}\to 0$. When $\alpha=1/2$, the third order derivative is identically equal to zero and the fourth order derivative identifies $\bs{\lambda}$. When $\alpha$ is in a neighborhood of $1/2$ such that $1-2\alpha \propto |\bs{\lambda}|$, the third and fourth order derivatives jointly identify $\bs{\lambda}$.
Accordingly, we characterize the limit of possible values of $\sqrt{n}\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha)$ defined in (\[tpsi\_defn-homo\]) as $n\rightarrow\infty$ by the following two sets: $$\label{Lambda-e-homo} \begin{aligned} \Lambda_{\bs{\lambda} }^1 &:= \left\{\left( \bs{t}_{\bs{\mu}^3}\t, \bs{t}_{\bs{\mu}^4}\t \right)\t\in \mathbb{R}^{d_{\mu^3}+d_{\mu^4}}:\ \bs{t}_{\bs{\mu}^3}= \bs{\lambda_{\mu^3}},\ \bs{t}_{\bs{\mu}^4}= \bs{0} \text{ for some $ \bs{\lambda} \in \mathbb{R}^{d}$ }\right\}, \\ \Lambda_{\bs{\lambda} }^2 &:= \left\{\left( \bs{t}_{\bs{\mu}^3}\t, \bs{t}_{\bs{\mu}^4}\t \right)\t\in \mathbb{R}^{d_{\mu^3}+d_{\mu^4}}:\ \bs{t}_{\bs{\mu}^3}=c \bs{\lambda_{\mu^3}},\ \bs{t}_{\bs{\mu}^4}=- \bs{\lambda_{\mu^4}}\text{ for some $(\bs{\lambda}\t,c)\t \in \mathbb{R}^{d+1}$ }\right\}, \end{aligned}$$ where $\Lambda_{\bs{\lambda} }^1$ represents the case when $\alpha$ is bounded away from $1/2$ while, by choosing different values of $c$, $\Lambda_{\bs{\lambda} }^2$ represents both cases when $\alpha=1/2$ and when $\alpha$ is in the neighborhood of $1/2$. 
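The symmetrized monomial vectors appearing in (\[Lambda-e-homo\]) can be generated mechanically from the distinct-permutation sum that defines $(\bs{\lambda}_{\bs{\mu}^3})_{ijk}$. A small sketch (Python for illustration rather than the R used in our computations; it assumes the quartic vector $\bs{\lambda}_{\bs{\mu}^4}$ follows the same permutation-sum rule, which reproduces the $d=2$ binomial coefficients $(1,4,6,4,1)$ quoted below):

```python
# Build the symmetrized monomial vectors lambda_{mu^3} and lambda_{mu^4}
# via the distinct-permutation sum over index tuples i <= j <= k (<= l).
from itertools import combinations_with_replacement, permutations
import numpy as np

def lam_monomials(lam, order):
    """Entries sum lam_{t1}...lam_{t_order} over distinct permutations
    of each nondecreasing index tuple of the given order."""
    lam = np.asarray(lam, dtype=float)
    out = []
    for idx in combinations_with_replacement(range(len(lam)), order):
        out.append(sum(np.prod(lam[list(p)]) for p in set(permutations(idx))))
    return np.array(out)

lam = np.array([2.0, 3.0])
print(lam_monomials(lam, 3))  # [ 8. 36. 54. 27.] = (l1^3, 3 l1^2 l2, 3 l1 l2^2, l2^3)
```

For `order=4` the same call returns the five entries with multinomial weights $(1,4,6,4,1)$.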
Define $\widehat{\bs{t}}_{\bs{\lambda}}^j $ by $$\label{t-lambda-homo} r(\widehat{\bs{t}}_{\bs{\lambda}}^j ) = \inf_{\bs{t}_{\bs{\lambda}} \in \Lambda_{\bs{\lambda}}^j } r(\bs{t}_{\bs{\lambda}}), \quad r(\bs{t}_{\bs{\lambda}}) := (\bs{t}_{\bs{\lambda}} -\bs{Z}_{\bs{\lambda}})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} (\bs{t}_{\bs{\lambda}} -\bs{Z}_{\bs{\lambda}}),$$ where $\Lambda_{\bs{\lambda} }^j$ for $j=1,2$ is defined in (\[Lambda-e-homo\]) while $\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}$ and $\bs{Z}_{\bs{\lambda}}$ are defined by $$\begin{aligned} & \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}:=\bs{\mathcal{I}}_{\bs{\lambda}}-\bs{\mathcal{I}}_{\bs{\lambda\eta}}\bs{\mathcal{I}}_{\bs{\eta}}^{-1}\bs{\mathcal{I}}_{\bs{\eta\lambda}} \quad\text{and}\quad \bs{Z}_{\bs{\lambda}}:=(\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}})^{-1} \bs{G}_{\bs{\lambda}.\bs{\eta}} \quad\text{with}\\ &\bs{\mathcal{I}}_{\bs{\eta}} := E[\bs{s}_{\bs{\eta}}\bs{s}_{\bs{\eta}}\t], \quad \bs{\mathcal{I}}_{\bs{\lambda}} := E[\bs{s}_{\bs{\lambda}}\bs{s}_{\bs{\lambda}}\t], \quad \bs{\mathcal{I}}_{\bs{\lambda\eta} } := E[\bs{s}_{\bs{\lambda}}\bs{s}_{\bs{\eta}}\t],\quad \bs{\mathcal{I}}_{\bs{\eta \lambda}} := \bs{\mathcal{I}}_{\bs{\lambda \eta}}\t, \end{aligned}$$ given $(\bs{s}_{\bs{\eta}}, \bs{s}_{\bs{\lambda}})$ in (\[score\_defn-homo\]), where $\bs{G}_{\bs{\lambda}.\bs{\eta}} \sim N(0,\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}})$. The following proposition establishes the asymptotic null distribution of $LR_{n,\zeta}^1$. \[P-LR-N1-homo-1\] Suppose that Assumptions \[assn\_consis\] and \[A-taylor1\] hold and $\bs{\mathcal{I}} := E[\bs{s}(\bs{X},\bs{Z})\bs{s}(\bs{X},\bs{Z})\t]$ is finite and nonsingular. 
Then, under the null hypothesis of $H_{01}: \bs{\mu}_1=\bs{\mu}_2$, for any $\zeta>0$, $LR_{n,\zeta}^1 \rightarrow_d \max\left\{ (\widehat{\bs{t}}^1_{\bs{\lambda}})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} \widehat{\bs{t}}^1_{\bs{\lambda}}, (\widehat{\bs{t}}^2_{\bs{\lambda}})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} \widehat{\bs{t}}^2_{\bs{\lambda}} \right\}$. When $d=1$ with $\bs{\lambda}=\lambda$, we have $\bs{s_{\mu^3}} = {\nabla_{\mu^3 } f^*_v}/{3! f^*_v}$, $\bs{s_{\mu^4}} = {\nabla_{\mu^4} f^*_v}/{4! f^*_v}$, and the possible values of $\sqrt{n}\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha)=\sqrt{n}\alpha(1-\alpha) \left( (1-2\alpha)\lambda^3, (1-6\alpha+6\alpha^2)\lambda^4\right)\t$ as $n\to \infty$ are given by $\Lambda_{\lambda}^1\cup \Lambda_{\lambda}^2=\mathbb{R}\times \mathbb{R}_-$. In this case, $LR_{n,\zeta}^1 \rightarrow_d (\widehat{\bs{t}}_{\bs{\lambda}})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} \widehat{\bs{t}}_{\bs{\lambda}} $ with $\widehat{\bs{t}}_{\bs{\lambda}} $ defined by $r(\widehat{\bs{t}}_{\bs{\lambda}} ) = \inf_{\bs{t}_{\bs{\lambda}} \in \mathbb{R}\times \mathbb{R}_-} r(\bs{t}_{\bs{\lambda}})$. When $d=2$ with $\bs{\lambda}=(\lambda_1,\lambda_2)\t$, we have $\Lambda_{\bs{\lambda} }^1= \left\{(\bs{\lambda_{\mu^3}}\t,\bs{0}\t)\t: (\lambda_1,\lambda_2)\t \in \mathbb{R}^2\right\}$ and $ \Lambda_{\bs{\lambda} }^2 = \left\{(c\bs{\lambda_{\mu^3}}\t,-\bs{\lambda_{\mu^4}}\t)\t: (\lambda_1,\lambda_2,c)\t \in \mathbb{R}^3\right\}$ with $\bs{\lambda_{\mu^3}} = (\lambda_1^3,3\lambda_1^2\lambda_2,3\lambda_1\lambda_2^2,\lambda_2^3)\t$ and $\bs{\lambda_{\mu^4}} = (\lambda_1^4,4\lambda_1^3\lambda_2,6\lambda_1^2\lambda_2^2,4\lambda_1\lambda_2^3,\lambda_2^4)\t$.

### Asymptotic distribution of $LR_{n,\zeta}^2$

This section derives the asymptotic distribution of $LR_{n,\zeta}^2$.
We use the reparameterization (\[repara2-homo\]) but collect the reparameterized parameters into $\bs{\phi}:=(\bs{\eta}\t,\alpha)\t$ and $\bs{\lambda}$, where $\bs{\eta}: = (\bs{\gamma}\t,\bs{\nu}_{\bs\mu}\t,\bs{\nu}_{\bs{v}}\t)\t$. Let the resulting density be $$\label{loglike-homo-2} \begin{aligned} h(\bs{x}|\bs{z};\bs{\phi},\bs{\lambda}) & := \alpha f_v\left(\bs{x}\middle|\bs{z};\bs{\gamma},\bs{\nu}_{\bs\mu}+(1-\alpha)\bs{\lambda}, \bs{\nu}_{\bs{v}} - \alpha(1-\alpha) \bs{w}(\bs{\lambda}\bs{\lambda}\t)\right) \\ & \quad + (1 - \alpha) f_v \left(\bs{x} \middle|\bs{z};\bs{\gamma},\bs{\nu}_{\bs\mu} -\alpha\bs{\lambda},\bs{\nu}_{\bs{v}} - \alpha(1-\alpha) \bs{w}(\bs{\lambda}\bs{\lambda}\t) \right). \end{aligned}$$ The right hand side of (\[loglike-homo-2\]) is the same as that of (\[loglike-homo\]). When we restrict the parameter space to $\Theta_{{\bs{\vartheta}}_2,\zeta}^2$, the reparameterized density $h(\bs{x}|\bs{z};\bs{\phi},\bs{\lambda})$ in (\[loglike-homo-2\]) becomes the one-component density if and only if $\alpha =0$. Furthermore, $\bs{\lambda}$ is not identified when $\alpha=0$. Denote the true value of $\bs{\phi}$ under $H_{0}$ by $\bs{\phi}^* = ((\bs{\eta}^*)\t, 0)\t$, where $\bs{\eta}^* := ((\bs{\gamma}^*)\t,({\bs{\mu}}^*)\t,(\bs{v}^{*})\t)\t$. The MLE of $\bs{\phi}$ under the restriction $\bs{\vartheta}_2\in\Theta_{{\bs{\vartheta}}_2,\zeta}^2$ converges to $\bs{\phi}^*$ in probability. Define $f_v^*(\bs{x}|\bs{z};\bs{\lambda}):=f_v\left(\bs{x}|\bs{z};\bs{\gamma}^*, \bs\mu^*+ \bs{\lambda}, \bs{v}^*\right)$ so that $f^*_v(\bs{x}|\bs{z};\bs{0}) = f^*_v$ and $\nabla f^*_v(\bs{x}|\bs{z};\bs{0}) = \nabla f^*_v$. 
Define the score vector $\bs{s}(\bs{x},\bs{z};\bs{\lambda})$ indexed by $\bs{\lambda}$ as $$\label{score_defn-homo-2} \bs{s}(\bs{x},\bs{z};\bs{\lambda}) := \begin{pmatrix} \bs{s}_{\bs{\eta}}\\ {s}_{\alpha}(\bs{\lambda}) \end{pmatrix},$$ where $\bs{s}_{\bs{\eta}} =\nabla_{(\bs{\gamma}\t,\bs{\mu}\t,\bs{v}\t)\t}f_v^*/f_v^*$ as defined in (\[score\_defn-homo\]) and $$\label{score_defn-homo-3} {s}_{\alpha}(\bs{\lambda}) := \frac{f_v^*(\bs{x}|\bs{z};\bs{\lambda}) -f_v^* - \nabla_{\bs{\mu}\t} f_v^* \bs{\lambda} - \nabla_{\bs{v}\t} f_v^* \bs{\lambda}_{\bs{\mu}^2} }{ |\bs{\lambda}|^3 f_v^*},$$ where $\underset{(d_{\mu^2} \times 1)}{\bs{\lambda}_{\bs{\mu}^2}} := \{(\bs{\lambda}_{\bs{\mu}^2})_{ij}\}_{1 \leq i \leq j \leq d}$ with $(\bs{\lambda}_{\bs{\mu }^2})_{ij}:=\lambda_{i}^2$ if $i=j$ and $2\lambda_{i}\lambda_{j}$ if $i \neq j$. The division by $|\bs{\lambda}|^3$ is necessary to define $s_\alpha(\bs{\lambda})$ because, if we instead defined $s_\alpha(\bs{\lambda})$ as $(f_v^*(\bs{x}|\bs{z};\bs{\lambda}) -f_v^* - \nabla_{\bs{\mu}\t} f_v^* \bs{\lambda} - \nabla_{\bs{v}\t} f_v^* \bs{\lambda}_{\bs{\mu}^2})/ f_v^*$, we would have $s_\alpha(\bs{\lambda})\to 0$ as $\bs{\lambda}\to 0$, invalidating the approximation when $\bs{\lambda}$ is close to zero. Collect the relevant reparameterized parameters as $$\label{tpsi_defn-homo-2} \bs{t} (\bs{\phi},\bs\lambda) := \begin{pmatrix} \bs{t}_{\bs{\eta}}(\bs\lambda)\\ t_\alpha(\bs{\lambda}) \end{pmatrix}\quad\text{with}\quad \bs{t}_{\bs{\eta}}(\bs\lambda):= \begin{pmatrix} \bs{\gamma}-\bs{\gamma}^*\\ \bs{\nu_\mu}-\bs{\mu}^* \\ \bs{\nu_v}-\bs{v}^* \end{pmatrix}\quad \text{and} \quad t_\alpha(\bs{\lambda}):=\alpha |\bs{\lambda}|^3.$$ In (\[score\_defn-homo-3\]), $s_\alpha(\bs{\lambda})$ is non-degenerate and not perfectly correlated with $\bs{s}_{\bs{\eta}}$ even when $\bs{\lambda} \to \bs{0}$.
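The role of the $|\bs{\lambda}|^3$ scaling can be checked numerically. In the univariate case with $(\mu^*,v^*)=(0,1)$ and no covariates (illustrative assumptions), the normal-density identity $\nabla_{v}\phi = \tfrac{1}{2}\nabla_{\mu\mu}\phi$ implies $s_\alpha(\lambda) \to \nabla_{\mu^3} f^*_v/(3!f^*_v) = (x^3-3x)/6$ as $\lambda \downarrow 0$, a non-degenerate limit (Python sketch):

```python
# Check that s_alpha(lambda) approaches He_3(x)/3! = (x^3 - 3x)/6 as
# lambda -> 0+, under the illustrative assumptions mu* = 0, v* = 1, d = 1.
import numpy as np

phi = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def s_alpha(x, lam):
    f = phi(x)
    d_mu = x * f                  # d phi / d mu  at (mu, v) = (0, 1)
    d_v = 0.5 * (x**2 - 1) * f    # d phi / d v   at (0, 1)
    return (phi(x - lam) - f - d_mu * lam - d_v * lam**2) / (abs(lam)**3 * f)

x, lam = 1.7, 1e-3
print(s_alpha(x, lam), (x**3 - 3 * x) / 6)  # nearly equal for small lam
```

Without the $|\lambda|^3$ normalization, the same remainder would vanish at rate $\lambda^3$, which is what degenerates the standard score test here.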
With $(\bs{s}_{\bs{\eta}}, {s}_{\alpha}(\bs{\lambda}))$ defined in (\[score\_defn-homo-2\]), define $$\label{I_lambda-homo-2} \begin{aligned} &\bs{\mathcal{I}}_{\bs{\eta}} := E[\bs{s}_{\bs{\eta}}(\bs{s}_{\bs{\eta}})\t], \quad \bs{\mathcal{I}}_{\alpha\bs{\eta} }(\bs{\lambda}) := E[{s}_{\alpha}(\bs{\lambda}) \bs{s}_{\bs{\eta}} \t], \quad \bs{\mathcal{I}}_{\bs{\eta }\alpha}(\bs{\lambda}) := (\bs{\mathcal{I}}_{\alpha \bs{\eta}}(\bs{\lambda}))\t,\\ & {\mathcal{I}}_{\alpha}(\bs{\lambda}_1,\bs{\lambda}_2) := E[{s}_{\alpha}(\bs{\lambda}_1) {s}_{\alpha}(\bs{\lambda}_2) ],\quad {\mathcal{I}}_{\alpha.\bs{\eta}}(\bs{\lambda}_1,\bs{\lambda}_2):= {\mathcal{I}}_{\alpha}(\bs{\lambda}_1,\bs{\lambda}_2)-\bs{\mathcal{I}}_{\alpha\bs{\eta}}(\bs{\lambda}_1)(\bs{\mathcal{I}}_{\bs{\eta}})^{-1}\bs{\mathcal{I}}_{\bs{\eta}\alpha}(\bs{\lambda}_2), \\ & {Z}_{\alpha}(\bs{\lambda}):=( {\mathcal{I}}_{\alpha.\bs{\eta}}(\bs{\lambda}, \bs{\lambda}))^{-1} {G}_{\alpha.\bs{\eta}}(\bs{\lambda}), \end{aligned}$$ where ${G}_{\alpha.\bs{\eta}}(\bs{\lambda})$ is a mean zero Gaussian process indexed by $\bs{\lambda}$ with $\text{Cov}({G}_{\alpha.\bs{\eta}}(\bs{\lambda}_1),{G}_{\alpha.\bs{\eta}}(\bs{\lambda}_2)) = {\mathcal{I}}_{\alpha.\bs{\eta}}(\bs{\lambda}_1,\bs{\lambda}_2)$. Define $\widehat{t}_\alpha(\bs{\lambda})$ by $$\begin{aligned} \label{t-lambda-homo-2} r(\widehat{t}_\alpha(\bs{\lambda}))= \inf_{ t_\alpha \geq 0 }r( t_\alpha),\quad r( t_\alpha) := (t_\alpha - {Z}_{{\alpha}}(\bs{\lambda}))^2 {\mathcal{I}} _{{\alpha}.\bs{\eta}}(\bs{\lambda}, \bs{\lambda}). \end{aligned}$$ The following proposition establishes the asymptotic null distribution of $LR_{n,\zeta}^{2}$. Define $\bs{\mathcal{I}}(\bs{\lambda}) := E[\bs{s}(\bs{X},\bs{Z};\bs{\lambda}) \bs{s}(\bs{X},\bs{Z};\bs{\lambda})\t ]$. 
\[P-LR-N1-homo-2\] Suppose that Assumptions \[assn\_consis\], \[A-taylor1\], and \[assn\_compact\] hold and $0 < \inf_{\Theta_{\bs{\lambda}} \setminus \{\bs{0}\}} \lambda_{\min}(\bs{\mathcal{I}}(\bs{\lambda})) \leq \sup_{\Theta_{\bs{\lambda}} \setminus \{\bs{0}\}} \lambda_{\max}(\bs{\mathcal{I}}(\bs{\lambda})) < \infty$. Then, under the null hypothesis of $H_{0}: M=1$, for any $\zeta>0$, $LR_{n,\zeta}^{2} \rightarrow_d \sup_{\Theta_{\bs{\lambda}} \cap \{|\bs{\lambda}|\geq \zeta\} }\ (\widehat{t}_\alpha(\bs{\lambda}) )^2 {\mathcal{I}}_{\alpha.\bs{\eta}}(\bs{\lambda}, \bs{\lambda})$.

### Testing $H_{0}: M=1$

The following proposition derives the asymptotic distribution of $LR_{n}$. The proof is omitted because it is a straightforward consequence of Propositions \[P-LR-N1-homo-1\] and \[P-LR-N1-homo-2\] in view of $LR_{n}=\lim_{\zeta\to 0}\max \{LR_{n,\zeta}^1 ,LR_{n,\zeta}^2 \}$. \[P-LR-N1-homo\] Suppose that the assumptions of Propositions \[P-LR-N1-homo-1\] and \[P-LR-N1-homo-2\] hold. Then, under the null hypothesis of $H_{0}: M=1$, $$LR_{n} \rightarrow_d \max\left\{ (\widehat{\bs{t}}^1_{\bs{\lambda}})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} \widehat{\bs{t}}^1_{\bs{\lambda}}, (\widehat{\bs{t}}^2_{\bs{\lambda}})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} \widehat{\bs{t}}^2_{\bs{\lambda}}, \ \sup_{\Theta_{\bs{\lambda}} \setminus \{\bs{0}\} }\ (\widehat{t}_\alpha(\bs{\lambda}) )^2 {\mathcal{I}}_{\alpha.\bs{\eta}}(\bs{\lambda}, \bs{\lambda}) \right\},$$ where $\widehat{\bs{t}}_{\bs{\lambda}}^j $ for $j=1,2$ is defined in (\[t-lambda-homo\]) and $\widehat{t}_\alpha(\bs{\lambda})$ is defined in (\[t-lambda-homo-2\]). This result generalizes Theorem 2 of @chenchen03sinica, who derive the asymptotic distribution of the LRTS in the univariate case.
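For the univariate case $d=1$ discussed after Proposition \[P-LR-N1-homo-1\], the first two terms of the limit reduce to projecting $\bs{Z}_{\bs{\lambda}}$ onto the half-space $\mathbb{R}\times\mathbb{R}_-$ in the $\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}$-metric, which is easy to simulate. A sketch (Python for illustration) with a hypothetical $2\times 2$ matrix standing in for the true $\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}$:

```python
# Monte Carlo draw of t-hat' I t-hat, where t-hat projects Z ~ N(0, I^{-1})
# onto the half-space {t2 <= 0} in the I-metric (d = 1 case of the limit).
import numpy as np

I_mat = np.array([[1.0, 0.4], [0.4, 2.0]])  # hypothetical I_{lambda.eta}
rng = np.random.default_rng(1)

def draw_limit(n_draws):
    G = rng.multivariate_normal(np.zeros(2), I_mat, size=n_draws)
    Z = G @ np.linalg.inv(I_mat)            # Z = I^{-1} G (I is symmetric)
    t = Z.copy()
    bad = Z[:, 1] > 0                       # unconstrained minimiser infeasible
    t[bad, 0] = Z[bad, 0] + I_mat[0, 1] / I_mat[0, 0] * Z[bad, 1]
    t[bad, 1] = 0.0                         # project onto the boundary t2 = 0
    return np.einsum('ni,ij,nj->n', t, I_mat, t)

vals = draw_limit(100_000)
print(vals.mean())  # approx. 1.5: a (1/2, 1/2) mixture of chi2_1 and chi2_2
```

Because the constraint set is a half-space through the origin, the resulting law is the familiar $(\tfrac12,\tfrac12)$ mixture of $\chi^2_1$ and $\chi^2_2$ for any positive definite choice of the matrix; the third (sup) term in the proposition is not covered by this sketch.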
Likelihood ratio test of $H_0: M = M_0$ against $H_A: M = M_0 + 1$ for $M_0\geq 2$ {#likelihood-ratio-test-of-h_0-m-m_0-against-h_a-m-m_0-1-for-m_0geq-2}
----------------------------------------------------------------------------------

We consider a random sample $\{\bs{X}_i,\bs{Z}_i\}_{i = 1}^n$ generated from the following $M_0$-component $d$-variate normal mixture density model with common variance: $$f_{M_0}(\bs{x}|\bs{z};{\bs{\vartheta}}_{M_0}^*):=\sum_{j=1}^{M_0} \alpha_{j}^{*} f(\bs{x}|\bs{z};{\bs{\gamma}}^*, \bs{\mu}_{j}^{*},\bs{\Sigma}^{*}), \label{true_model-homo}$$ where ${\bs{\vartheta}}_{M_0}^*=(\alpha_{1}^*,\ldots,\alpha_{M_0 - 1}^*,{\bs{\gamma}}^*,\bs{\mu}_1^*,\ldots,\bs{\mu}_{M_0}^*,\bs{\Sigma}^{*} )$ and $\alpha_j^*>0$. We assume $\bs{\mu}_{1}^* <\ldots< \bs{\mu}_{M_0}^*$ for identification. The corresponding density of an $(M_0+1)$-component mixture model is given by $$f_{M_0+1}(\bs{x}|\bs{z};{\bs{\vartheta}}_{M_0+1}):=\sum_{j=1}^{M_0+1}\alpha_j f(\bs{x}|\bs{z};\bs{\gamma},\bs{\mu}_j,\bs{\Sigma} ),\label{fitted_model-homo}$$ where ${\bs{\vartheta}}_{M_0+1} = (\alpha_1,\ldots,\alpha_{M_0},\bs{\gamma},\bs{\mu}_1,\ldots,\bs{\mu}_{M_0+1},\bs{\Sigma})$. Partition the null hypothesis as $H_0 = \cup_{m=1}^{M_0} H_{0,m}$ with $H_{0,m}: \alpha_m(\bs{\mu}_{m} - \bs{\mu}_{m + 1}) = 0$. Define the LRTS for testing $H_{0}$ as $$LR_{n}^{M_0} := \max_{{\bs{\vartheta}}_{M_0+1}\in \Theta_{{\bs{\vartheta}}_{M_0+1}}}2\{ L_n({\bs{\vartheta}}_{M_0+1})- L_{0,n}(\widehat{{\bs{\vartheta}}}_{M_0})\},$$ where $L_n({\bs{\vartheta}}_{M_0 + 1}):=\sum_{i = 1}^n \log f_{M_0 + 1}(\bs{X}_i|\bs{Z}_i;{\bs{\vartheta}}_{M_0 + 1})$, $L_{0,n}({\bs{\vartheta}}_{M_0}):=\sum_{i = 1}^n \log f_{M_0}(\bs{X}_i|\bs{Z}_i;{\bs{\vartheta}}_{M_0})$, and $\widehat{\bs{\vartheta}}_{M_0}=\mathop{\arg \max}_{{\bs{\vartheta}}_{M_0}\in\Theta_{{\bs{\vartheta}}_{M_0}}}L_{0,n}({\bs{\vartheta}}_{M_0})$ for the densities (\[true\_model-homo\])–(\[fitted\_model-homo\]).
Define $\widetilde{\bs{\lambda}}:=((\bs{\lambda}_1)\t,\ldots,(\bs{\lambda}_{M_0})\t)\t \in \Theta_{\widetilde{\bs{\lambda}}}$ with $\bs{\lambda}_m \in \Theta_{\bs{\lambda}_m}$. Collect the score vectors for testing $H_{0,1},\ldots,H_{0,M_0}$ as $$\begin{aligned} &\widetilde{\bs{s}}(\bs{x},\bs{z}) := \begin{pmatrix} \widetilde{\bs{s}}_{\bs{\eta}} \\ \widetilde{\bs{s}}_{\bs{\lambda}} \end{pmatrix}\quad \text{and}\quad \bar{\bs{s}}(\bs{x},\bs{z};\widetilde{\bs{\lambda}}):= \begin{pmatrix} \widetilde{\bs{s}}_{\bs{\eta}} \\ \bar{\bs{s}}_{\bs\alpha}(\tilde{\bs{\lambda}}) \end{pmatrix}, \ \text{ where }\\ &\widetilde{\bs{s}}_{\bs{\eta}}:= \left(\begin{array}{c} {\bs{s}}_{\bs{\alpha}} \\ {\bs{s}}_{{ (\bs{\gamma},\bs{\mu}, \bs{v}) }} \end{array}\right),\quad \widetilde{\bs{s}}_{\bs{\lambda}}:= \begin{pmatrix} \bs{s}_{\bs{\mu}^3}^1 \\ \bs{s}_{\bs{\mu}^4}^1 \\ \vdots \\ \bs{s}_{\bs{\mu}^3}^{M_0} \\ \bs{s}_{\bs{\mu}^4}^{M_0} \end{pmatrix}, \quad \text{and}\quad \bar{\bs{s}}_{\bs\alpha}(\widetilde{\bs{\lambda}}):= \begin{pmatrix} {s}_{\bs{\alpha}}^1(\bs{\lambda}_1)\\ \vdots \\ {s}_{\bs{\alpha}}^{M_0}(\bs{\lambda}_{M_0}) \end{pmatrix}, \end{aligned} \label{stilde-homo}$$ where $\bs{s}_{\bs{\mu^3}}^m := \left\{\alpha_{m}^* \nabla_{\mu_i \mu_j \mu_k} f^*_v(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}_m^{*},\bs{v}^*) / (3!f_0^*) \right\}_{1 \leq i \leq j \leq k \leq d}$; $\bs{s}_{\bs{\alpha}}$, $\bs{s}_{(\bs{\gamma},\bs{\mu}, \bs{v})}$, and $\bs{s}_{\bs{\mu}^4}^m$ are defined similarly to those in (\[sh\]) but using the density (\[true\_model-homo\]) in place of (\[true\_model\]) with the common value of $\bs{v}^*$ across components; ${s}_{\alpha}^m(\bs{\lambda}_m)$ is defined as $${s}_{\alpha}^m(\bs{\lambda}_m) := \alpha_m^* \frac{f_v^{m*}(\bs{\lambda}_m) -f_v^{m*}-\nabla_{\bs{\mu}\t} f^{m*}_v \bs{\lambda}_m - \nabla_{\bs{v}\t} f^{m*}_v \bs{\lambda}_{\bs{\mu}^2,m} }{ |\bs{\lambda}_m|^3f^{m*}_v},$$ where $ f_v^{m*}(\bs{\lambda}_m):=f_v\left(\bs{x}|\bs{z};\bs{\gamma}^*, \bs\mu_m^*+ \bs{\lambda}_m,
\bs{v}^*\right)$ and $f_v^{m*} :=f_v\left(\bs{x}|\bs{z};\bs{\gamma}^*, \bs\mu_m^*, \bs{v}^*\right)$, and $\bs{\lambda}_{\bs{\mu}^2,m} $ is defined similarly to $\bs{\lambda}_{\bs{\mu}^2}$ but with $\bs{\lambda}_m$ in place of $\bs{\lambda}$. Define $\widetilde{\bs{\mathcal{I}}}$, $\widetilde{\bs{\mathcal{I}}}_{\bs{\eta}}$, $\widetilde{\bs{\mathcal{I}}}_{\bs{\lambda}\bs{\eta}}$, $\widetilde{\bs{\mathcal{I}}}_{\bs{\eta\lambda}}$, $\widetilde{\bs{\mathcal{I}}}_{\bs{\lambda}}$, $\widetilde{\bs{\mathcal{I}}}_{\bs{\lambda}.\bs{\eta}}$ similarly to those in (\[Itilde\]) but using $\widetilde{\bs{s}}(\bs{x},\bs{z})$ defined in (\[stilde-homo\]) in place of (\[stilde\]). Let $\widetilde{\bs{G}}_{\bs{\lambda}.\bs{\eta}}=((\bs{G}_{\bs{\lambda}.\bs{\eta}}^{1})^{\top},\ldots,(\bs{G}_{\bs{\lambda}.\bs{\eta}}^{M_0})^\top)^\top \sim N(0,\widetilde{\bs{\mathcal{I}}}_{\bs{\lambda}.\bs{\eta}})$, and define ${\bs{\mathcal{I}}}_{\bs{\lambda}.\bs{\eta}}^m:=E[\bs{G}_{\bs{\lambda}.\bs{\eta}}^m (\bs{G}_{\bs{\lambda}.\bs{\eta}}^m)^\top]$ and $\bs{Z}_{\bs{\lambda}}^m:=({\bs{\mathcal{I}}}_{\bs{\lambda}.\bs{\eta}}^m)^{-1}\bs{G}_{\bs{\lambda}.\bs{\eta}}^m$. For $j=1,2$, define $\widehat{\bs{t}}_{\bs{\lambda},m}^j $ by $$r^m(\widehat{\bs{t}}_{\bs{\lambda},m}^j ) = \inf_{\bs{t}_{\bs{\lambda}} \in \Lambda_{\bs{\lambda} }^j}r^m(\bs{t}_{\bs{\lambda}}), \quad r^m(\bs{t}_{\bs{\lambda}}) := (\bs{t}_{\bs{\lambda}} -\bs{Z}_{\bs{\lambda}}^m)\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}^m (\bs{t}_{\bs{\lambda}} -\bs{Z}_{\bs{\lambda}}^m),$$ where $\Lambda_{\bs{\lambda}}^j$ is given by (\[Lambda-e-homo\]). 
Define $\widehat{t}_{\alpha,m}(\bs{\lambda}_m)$ by$$\begin{aligned} r^m(\widehat{t}_{\alpha,m} (\bs{\lambda}_m))= \inf_{ t_\alpha \geq 0 }r^m( t_\alpha),\quad r^m( t_\alpha) := (t_\alpha - {Z}_{\bs{\alpha}}^m(\bs{\lambda}_m))^2 {\mathcal{I}} _{\bs{\alpha}.\bs{\eta}}^m(\bs{\lambda}_m, \bs{\lambda}_m),\end{aligned}$$ where ${\mathcal{I}}_{\bs{\alpha}.\bs{\eta}}^m(\bs{\lambda}_m, \bs{\lambda}_m)$ and ${Z}_{\bs{\alpha}}^m(\bs{\lambda}_m)$ are defined similarly to $ {\mathcal{I}}_{\alpha.\bs{\eta}}(\bs{\lambda}, \bs{\lambda})$ and $Z_{{\alpha}}(\bs{\lambda})$ in (\[I\_lambda-homo-2\]), respectively, but using $\widetilde{\bs{s}}_{\bs{\eta}}$ and $s_{\bs{\alpha}}^m(\bs{\lambda}_m)$ in place of ${\bs{s}}_{\bs{\eta}}$ and $s_{\alpha}(\bs{\lambda})$. \[A-vec-2-homo\] (a) The parameter spaces $\Theta_{\bs{\vartheta}_{M_0}}$ and $\Theta_{\bs{\vartheta}_{M_0+1}}$ are compact. (b) $\widetilde{\bs{\mathcal{I}}}=E[\widetilde{\bs{s}}(\bs{X},\bs{Z})\widetilde{\bs{s}}(\bs{X},\bs{Z})\t]$ is finite and nonsingular and $0 < \inf_{\Theta_{\widetilde{\bs{\lambda}}}\setminus \{\bs{0}\} } \lambda_{\min}(\bar{\bs{\mathcal{I}}}(\widetilde{\bs{\lambda}})) \leq \sup_{\Theta_{\widetilde{\bs{\lambda}}}\setminus \{\bs{0}\}} \lambda_{\max}(\bar{\bs{\mathcal{I}}}(\widetilde{\bs{\lambda}})) < \infty$, where $\bar{\bs{\mathcal{I}}}(\widetilde{\bs{\lambda}}):= E[\bar{\bs{s}}(\bs{X},\bs{Z};\widetilde{\bs{\lambda}}) (\bar{\bs{s}}(\bs{X},\bs{Z};\widetilde{\bs{\lambda}}))\t]$ and $\widetilde{\bs{s}}(\bs{X},\bs{Z})$ and $\bar{\bs{s}}(\bs{X},\bs{Z};\widetilde{\bs{\lambda}})$ are defined in (\[stilde-homo\]). \[local\_lr-2-homo\] Suppose that Assumptions \[assn\_consis\], \[A-taylor1\], and \[A-vec-2-homo\] hold. 
Then, under the null hypothesis $H_0: M=M_0$, $LR_{n}^{M_0} \rightarrow_d \max\{v_1,\ldots, v_{M_0}\}$, where $$v_m := \max\left\{ (\widehat{\bs{t}}_{\bs{\lambda},m}^1 )\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}^m \widehat{\bs{t}}_{\bs{\lambda},m}^1 , \ (\widehat{\bs{t}}_{\bs{\lambda},m}^2 )\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}^m \widehat{\bs{t}}_{\bs{\lambda},m}^2 , \ \sup_{\Theta_{\bs{\lambda}_m} \setminus \{\bs{0}\}}\ (\widehat{t}_{\alpha,m}(\bs{\lambda}_m) )^2 {\mathcal{I}}_{\bs{\alpha}.\bs{\eta}}^m(\bs{\lambda}_m, \bs{\lambda}_m) \right\}.$$

Simulation {#section:simulation}
==========

Choice of penalty function {#section:penalty}
--------------------------

To apply our EM test, we need to specify the set $\mathcal{T}$, the number of iterations $K$, and the penalty function $p(\tau)$. Based on our experience, we recommend $\mathcal{T} = \{0.1, 0.3, 0.5\}$ and $K \in \{1,2,3\}$. We set $p(\tau)= \log(2\min\{\tau,1-\tau\})$ as suggested by @chenli09as. When estimating the model under the null hypothesis and computing $L_{0,n}(\widehat{\bs{\vartheta}}_{M_0})$, we use the penalty function (\[pen\_pmle\]) and set $a_n = n^{-1/2}$ as recommended by @chentan09jmva. For the alternative model, we consider $a_n = n^{-1/2}$ and $1$ to examine the sensitivity of the rejection frequencies to the choice of $a_n$.

Simulation results {#section:results}
------------------

We examine the type I error rates and powers of the EM test in small-scale simulations using mixtures of bivariate normal distributions. Computation was done using R [@R]. The critical values are computed by the bootstrap with $399$ and $199$ bootstrap replications when testing $H_0:M=1$ and $H_0:M=2$, respectively. We use $2000$ replications, and the sample sizes are set to $200$ and $400$. Table \[table1\] reports the type I error rates of the EM test of $H_0:M = 1$ against the alternative $H_1:M = 2$ under the null hypothesis using two models given at the bottom of Table \[table1\].
In both models, the EM test statistics give accurate type I errors for $n=200$ and $400$ across both values of $a_n=n^{-1/2}$ and $1$. Table \[table3\] reports the powers of the EM test when $a_n=1$ under three alternative models given in Table \[table2\]. Comparing the rejection frequency of Model 1 with that of Model 2 or Model 3, the EM test shows higher power as the distance between the two component distributions in the alternative model increases in terms of means (Model 2) or variance (Model 3). Table \[table4\] reports the type I error rates of the EM test of $H_0:M = 2$ against the alternative $H_1:M = 3$ under the two null models given at the bottom of Table \[table4\]. The EM test gives accurate type I errors across both models, sample sizes, and values of $a_n$. Table \[table6\] reports the powers of the EM test of $H_0:M = 2$ against the alternative $H_1:M = 3$ under the alternative model given in Table \[table5\]. Overall, the EM test shows good power in finite samples.

Empirical applications
======================

The sequential hypothesis testing based on our EM test provides a useful alternative to the AIC or the BIC in determining the number of components in empirical applications.[^2]

The flea beetles
----------------

The flea beetles data available in the R package **tourr** contains a sample of 74 flea beetles from three species, “Concinna,” “Heikertingeri,” and “Heptapotamica,” with 21, 31, and 22 observations, respectively.[^3] Figure \[figure1\] provides a scatter plot of two physical measurements, “tars1” and “aede1,” which measure the width of the first joint of the first tarsus in microns (the sum of measurements for both tarsi) and the maximal width of the aedeagus in the fore-part in microns, respectively, for each of the three species. We sequentially test the number of components in this data set without utilizing the information on which species each observation is from.
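The bootstrap calibration used in the simulations above and in the sequential tests can be sketched end-to-end for the simplest case, $H_0: M=1$ against $M=2$. The snippet below is a deliberately stripped-down illustration (Python rather than the R implementation actually used; univariate, no covariates $\bs{z}$, common variance, plain MLE via EM with no penalty term, and a small $B$):

```python
# Parametric-bootstrap p-value for H0: M = 1 vs. M = 2 (simplified sketch).
import numpy as np

def loglik1(x):
    # closed-form maximized one-component normal log-likelihood
    return -0.5 * len(x) * (np.log(2 * np.pi * x.var()) + 1)

def loglik2(x, iters=300):
    # EM for a two-component mixture with a common variance
    a, mu1, mu2, v = 0.5, np.percentile(x, 25), np.percentile(x, 75), x.var()
    for _ in range(iters):
        d1 = a * np.exp(-0.5 * (x - mu1) ** 2 / v)
        d2 = (1 - a) * np.exp(-0.5 * (x - mu2) ** 2 / v)
        w = d1 / (d1 + d2)
        a, mu1, mu2 = w.mean(), (w * x).sum() / w.sum(), ((1 - w) * x).sum() / (1 - w).sum()
        v = (w * (x - mu1) ** 2 + (1 - w) * (x - mu2) ** 2).mean()
    mix = (a * np.exp(-0.5 * (x - mu1) ** 2 / v)
           + (1 - a) * np.exp(-0.5 * (x - mu2) ** 2 / v)) / np.sqrt(2 * np.pi * v)
    return np.log(mix).sum()

def boot_pvalue(x, B=49, seed=0):
    rng = np.random.default_rng(seed)
    lr = max(0.0, 2 * (loglik2(x) - loglik1(x)))
    # draw bootstrap samples from the fitted null (one-component) model
    lr_star = np.array([max(0.0, 2 * (loglik2(xb) - loglik1(xb)))
                        for xb in rng.normal(x.mean(), x.std(), (B, len(x)))])
    return (1 + (lr_star >= lr).sum()) / (B + 1)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 1, 150)])
p = boot_pvalue(x)  # small: the two components are well separated
```

With $B=399$ replications, as in the simulations, one would simply set `B=399`.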
As shown in Table \[table7\], the $p$-values of the EM test for testing $H_0: M=1$ and $H_0: M=2$ are $0.00$ and $0.01$, suggesting that the number of components is larger than two. On the other hand, the $p$-values of the EM test for testing $H_0: M=3$ are between 0.32 and 0.36; consistent with the actual number of species in this data set, we fail to reject $H_0: M=3$. In contrast, both the AIC and the BIC incorrectly indicate that there is only one component. Table \[table8\] compares the estimated three-component bivariate normal mixture model in the first panel with the single-component models estimated from subsamples of each of the three species in the second panel, showing that each of the three estimated component distributions accurately captures the corresponding species.

Analysis of differential gene expression
----------------------------------------

A multivariate normal mixture model can be used to find differentially expressed genes by means of the posterior probability that an individual gene is non-differentially expressed. We analyze the rat dataset of 1,176 genes in middle-ear mucosa of six rat samples, the first two without pneumococcal middle-ear infection and the latter four with the disease [@pan02bio; @he06csda]. As in @pan02bio, the data were normalized by log-transformation and median centering. Denote the resulting expression level of gene $i$ in sample $j$ by $x_{ij}$. We apply finite bivariate normal mixtures to model the sample average expression levels for gene $i$ under the two conditions, $(z_{i0},z_{i1})=(\sum_{j=1}^2 x_{ij}/2,\sum_{j=3}^6 x_{ij}/4)$ for $i=1,\ldots,1176$. As shown in Table \[table9\], the sequential hypothesis testing based on the EM test and the AIC indicate that there are six components; on the other hand, consistent with the result in [@he06csda], the BIC chooses the five-component model. Table \[table10\] presents the estimates from the six-component model.
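The posterior-probability classification used in the next paragraph is straightforward to implement. A generic sketch (Python for illustration; the mixture parameters below are made-up placeholders, not the Table \[table10\] estimates):

```python
# Posterior component probabilities for a fitted common-variance
# multivariate normal mixture, and the implied cluster assignments.
import numpy as np

def posterior(x, alphas, mus, Sigma):
    """Rows of x -> posterior probabilities over the mixture components."""
    inv, det = np.linalg.inv(Sigma), np.linalg.det(Sigma)
    dens = []
    for a, mu in zip(alphas, mus):
        u = x - mu
        q = np.einsum('ni,ij,nj->n', u, inv, u)       # Mahalanobis quadratic form
        dens.append(a * np.exp(-0.5 * q) / np.sqrt((2 * np.pi) ** x.shape[1] * det))
    dens = np.array(dens)            # shape (M, n)
    return (dens / dens.sum(0)).T    # shape (n, M), rows sum to one

alphas = [0.6, 0.4]                          # placeholder mixing proportions
mus = [np.zeros(2), np.array([2.0, 2.0])]    # placeholder component means
Sigma = np.eye(2)                            # placeholder common variance
x = np.array([[0.1, -0.2], [2.2, 1.9]])
p = posterior(x, alphas, mus, Sigma)
labels = p.argmax(axis=1)  # cluster assignments: [0, 1]
```

Each observation is assigned to the component with the largest posterior probability, which is how the clusters in Figure \[figure2\] are formed.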
We classify each pair of gene expression levels into six clusters using their posterior probabilities and plot them in Figure \[figure2\]. 32 genes classified into cluster 5 show some evidence for differential expression with a mean difference of 0.23. Similarly, 13 genes in cluster 6 demonstrate strong evidence for differential expression, albeit with large variability. In contrast, the genes in clusters 1–4 show a flat expression pattern, where the observations in each cluster center around the 45 degree line.

Proof of propositions {#app}
=====================

As shown by @alexandrovich14jmva [p. 248], $p_n(\bs{\vartheta}_M)$ satisfies Assumptions C1–C3 of @chentan09jmva under the stated condition on $a_n$. Therefore, the stated result follows from Theorem 1 of @chentan09jmva and Corollary 3 of @alexandrovich14jmva. The proof is similar to that of Proposition 3 of @kasaharashimotsu15jasa. Let $\bs{t}_{\bs{\eta}}:= \bs{\eta} - \bs{\eta}^*$, so that $\bs{t}(\bs{\psi},\alpha)$ in (\[tpsi\_defn\]) is written as $(\bs{t}_{\bs{\eta}}\t,\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha)\t)\t$. Let $$\bs{G}_{ n} := \nu_n (\bs{s}(\bs{x},\bs{z})) = \begin{bmatrix} \bs{G}_{\bs{\eta} n} \\ \bs{G}_{\bs{\lambda} n} \end{bmatrix}, \quad \begin{aligned} \bs{G}_{\bs{\lambda}.\bs{\eta} n} &:= \bs{G}_{\bs{\lambda} n} - \bs{\mathcal{I}}_{\bs{\lambda} \bs{\eta}}\bs{\mathcal{I}}_{\bs{\eta} }^{-1} \bs{G}_{\bs{\eta} n}, \quad \bs{Z}_{\bs{\lambda} . \bs{\eta} n} := \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta} }^{-1}\bs{G}_{\bs{\lambda}.\bs{\eta} n},\\ \bs{t}_{\bs{\eta}.\bs{\lambda} } &:= \bs{t}_{\bs{\eta}} + \bs{\mathcal{I}}_{\bs{\eta} }^{-1}\bs{\mathcal{I}}_{\bs{\eta}\bs{\lambda} } \bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha) .
\end{aligned}$$ Write $$\begin{aligned} LR_n(\epsilon_1) &= \max_{\alpha \in [\epsilon_1,1-\epsilon_1]} 2\{L_n(\widehat{\bs{\psi}},\alpha) - L_n(\bs{\psi}^*,\alpha) - [ L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)- L_n(\bs{\psi}^*,\alpha) ]\} \\ &= \max_{\alpha \in [\epsilon_1,1-\epsilon_1]} 2\{L_n(\widehat{\bs{\psi}},\alpha) - L_n(\bs{\psi}^*,\alpha)\} - 2\{ L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)- L_{0,n}(\bs{\gamma}^*,\bs{\mu}^*,\bs{\Sigma}^*) \}. \end{aligned}$$ We apply Lemma \[P-quadratic\] in \[section:quadratic\] and Lemma \[Ln\_thm2\] in \[section:expansion\] to these two terms. Note that the penalized MLE, $\widehat{\bs{\psi}}$, is consistent and that $p_n(\widehat {\bs{\vartheta}}_2) =o_p(1)$ from $a_n=O(1)$, $p_{n}(\bs{\Sigma};\bs{\Sigma})=0$ and $\widehat{\bs{\Sigma}}_1, \widehat{\bs{\Sigma}}_2, \widehat{\bs{\Omega}} \to_p \bs{\Sigma}$. Therefore, $\widehat{\bs{\psi}}$ is in the set $A_{n\varepsilon}(\delta)$ in Lemma \[P-quadratic\], and Lemma \[P-quadratic\] holds under the current set of assumptions. Split the quadratic form in Lemma \[P-quadratic\](b) and write it as $$\label{LR_appn} \sup_{\bs{\vartheta} \in A_{n\varepsilon}(\delta) } \left| 2 \left[L_n(\bs{\psi},\alpha) - L_n(\bs{\psi}^*,\alpha) \right] - B_n(\sqrt{n} \bs{t}_{\bs{\eta}.\bs{\lambda} }) - C_n(\sqrt{n} \bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha)) \right| =o_{p \varepsilon}(1),$$ where $$\label{B_pi} \begin{aligned} B_n(\bs{t}_{\bs{\eta}.\bs{\lambda} }) & = 2\bs{t}_{\bs{\eta}.\bs{\lambda} }\t\bs{G}_{\bs{\eta} n} - \bs{t}_{\bs{\eta}.\bs{\lambda} }\t\bs{\mathcal{I}}_{\bs{\eta}}\bs{t}_{\bs{\eta}.\bs{\lambda} }, \\ C_n(\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha)) &= 2\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha)\t \bs{G}_{\bs{\lambda}.\bs{\eta} n} - \bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha)\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta} } \bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha) \\ & = \bs{Z}_{\bs{\lambda} . 
\bs{\eta} n}\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta} } \bs{Z}_{\bs{\lambda} . \bs{\eta} n}- (\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha) - \bs{Z}_{\bs{\lambda} . \bs{\eta} n})\t\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta} }(\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha) - \bs{Z}_{\bs{\lambda} . \bs{\eta} n}). \end{aligned}$$ Observe that $2[L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0) - L_{0,n}(\bs{\gamma}^*,\bs{\mu}^*,\bs{\Sigma}^*)] = \max_{\bs{t}_{\bs{\eta}.\bs{\lambda} }} B_n(\sqrt{n} \bs{t}_{\bs{\eta}.\bs{\lambda} }) + o_p(1)$ from applying Lemma \[Ln\_thm2\] to $L_{0,n}(\bs{\gamma},\bs{\mu},\bs{\Sigma})$ and noting that the set of possible values of both $\sqrt{n} \bs{t}_{\bs{\eta}}$ and $\sqrt{n}\bs{t}_{\bs{\eta}.\bs{\lambda} }$ approaches $\mathbb{R}^{d_\eta}$. Therefore, in conjunction with $p_{n}(\widehat{\bs{\vartheta}}_2)= o_p(1)$ and (\[LR\_appn\]), we obtain $$\label{LR_appn2} 2[L_n(\widehat{\bs{\psi}},{\alpha}) -L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)] = C_n(\sqrt{n}\bs{t}_{\bs{\lambda}}(\widehat{\bs{\lambda}},\alpha)) + o_p(1).$$ Split $\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha)$ as $\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha)=(\bs{t}_{\bs{\mu v}}(\bs{\lambda},\alpha)\t,\bs{t}_{\bs{\mu}^4}(\bs{\lambda},\alpha)\t)\t = (12c(\alpha) \bs{\lambda}_{\bs{\mu v}}\t, c(\alpha)[12\bs{\lambda}_{\bs{v}^2}+ b(\alpha) \bs{\lambda}_{\bs{\mu}^4}]\t)\t$ with $c(\alpha):=\alpha(1-\alpha)$. 
Partition the parameter space as $\Theta_{\bs{\lambda}} = \Theta_{\bs{\lambda}}^{1} \cup \Theta_{\bs{\lambda}}^2$ with $$\begin{aligned} \Theta_{\bs{\lambda}}^{1} &:= \{ |\lambda_{\mu_i}| \leq n^{-1/8} (\log n)^{-1} \text{ for all $i\in\{1,\ldots,d\}$} \},\\ \Theta_{\bs{\lambda}}^{2} &:= \{ |\lambda_{\mu_i}| \geq n^{-1/8} (\log n)^{-1} \text{ for some $i\in\{1,\ldots,d\}$} \}.\end{aligned}$$ For $j\in \{1,2\}$, define $\ddot{\bs{\lambda}}^{j}$ by $C_n(\sqrt{n} \bs{t}(\ddot{\bs{\lambda}}^{j}, \alpha)) = \max_{\bs{\lambda} \in \Theta_{\bs{\lambda}}^{j}} C_n(\sqrt{n} \bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha))$. Then, we have $$\begin{aligned} & \bs{t}(\ddot{\bs{\lambda}}^{j}, \alpha) =(\bs{t}_{\bs{\mu v}}(\ddot{\bs{\lambda}}^{j},\alpha)\t,\bs{t}_{\bs{\mu}^4}(\ddot{\bs{\lambda}}^{j},\alpha)\t)\t = O_p(n^{-1/2}), \label{ddot_rate} \\ & 2[L_n(\widehat{\bs{\psi}},{\alpha}) -L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)] = \max_{j\in \{1,2\}} C_n(\sqrt{n} \bs{t}(\ddot{\bs{\lambda}}^{j}, \alpha)) + o_p(1), \label{LR_appn3}\end{aligned}$$ where (\[ddot\_rate\]) follows from noting that $C_n(\sqrt{n}\bs{t}(\ddot{\bs{\lambda}}^{j}, \alpha)) \geq o_p(1)$ and using the argument following (\[rk\_lower2\]) in the proof of Lemma \[Ln\_thm2\], and (\[LR\_appn3\]) holds because (i) $\max_{j \in \{1,2\}} C_n(\sqrt{n}\bs{t}(\ddot{\bs{\lambda}}^{j}, \alpha)) \geq 2[L_n(\widehat{\bs{\psi}},{\alpha}) -L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)] + o_p(1)$ from the definition of $\bs{t}(\ddot{\bs{\lambda}}^{j}, \alpha)$ and (\[LR\_appn2\]), and (ii) $2[L_n(\widehat{\bs{\psi}},{\alpha}) -L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)]\geq \max_{j \in \{1,2\}} C_n(\sqrt{n}\bs{t}(\ddot{\bs{\lambda}}^{j}, \alpha)) + o_p(1)$ from the definition of $\widehat{\bs{\psi}}$ and (\[LR\_appn\]). 
We proceed to construct a parameter space $\tilde\Lambda_{\bs{\lambda}}^{j}$ that is locally equal to the cone $\Lambda_{\bs{\lambda}}^{j}$ defined in (\[Lambda-e\]). Define $\ddot{\bs{\lambda}}_{\bs{\mu v}}^{j}$, $\ddot{\bs{\lambda}}_{\bs{v}^2}^{j}$, and $\ddot{\bs{\lambda}}_{\bs{\mu}^4}^j $ similarly to $\bs{\lambda}_{\bs{\mu v}}$, $\bs{\lambda}_{\bs{v}^2}$, $\bs{\lambda}_{\bs{\mu}^4}$ but using $\ddot{\bs{\lambda}}^{j}$ in place of $\bs{\lambda}$. Observe that the definition of $\Theta_{\bs{\lambda}}^{1}$ and $\Theta_{\bs{\lambda}}^{2}$, (\[ddot\_rate\]), and Lemma \[lemma\_lambda\_e\] in \[section:auxiliary\] imply that $\ddot{\bs{\lambda}}_{\bs{\mu}^4}^{1} = o_p(n^{-1/2})$ and $\ddot{\bs{\lambda}}_{\bs{v}^2}^{2} = o_p(n^{-1/2})$. Therefore, $$\begin{aligned} &\bs{t}_{\bs{\mu v}}(\ddot{\bs{\lambda}}^{j},\alpha) = 12c(\alpha)\ddot{\bs{\lambda}}_{\bs{\mu v}}^{j}\ \text{ for $ j=1,2$}, \\ &\bs{t}_{\bs{\mu}^4}(\ddot{\bs{\lambda}}^{j},\alpha)= \begin{cases} 12 c(\alpha) \ddot{\bs{\lambda}}_{\bs{v}^2}^{1}+ o_p(n^{-1/2}) &\text{if } j=1,\\ c(\alpha) b(\alpha)\ddot{\bs{\lambda}}_{\bs{\mu}^4}^{2}+ o_p(n^{-1/2}) &\text{if } j=2.\\ \end{cases}\end{aligned}$$ Define $$\widetilde{\bs{t}}_{\bs{\mu v}} (\bs{\lambda},\alpha ) : = 12 c(\alpha)\bs{\lambda}_{\bs{\mu v}} \quad \text{and}\quad \widetilde{\bs{t}}_{\bs{\mu}^4}^{j}(\bs{\lambda},\alpha ): = \begin{cases} 12 c(\alpha) \bs{\lambda}_{\bs{v}^2}&\text{if } j=1,\\ c(\alpha) b(\alpha)\bs{\lambda}_{\bs{\mu}^4} & \text{if } j=2, \end{cases}$$ and $$\widetilde{\Lambda}_{\bs{\lambda} }^{j}(\alpha) := \left\{ \left(\bs{t}_{\bs{\mu v}}\t, \bs{t}_{\bs{\mu}^4}\t\right)\t: \bs{t}_{\bs{\mu v}} = \widetilde{\bs{t}}_{\bs{\mu v}} (\bs{\lambda},\alpha ),\ \bs{t}_{\bs{\mu}^4}=\widetilde{\bs{t}}_{\bs{\mu}^4}^{j}(\bs{\lambda},\alpha)\ \text{for some $\bs{\lambda} \in \Theta_{\bs{\lambda}}$} \right\}.$$ Define $\widetilde{\bs{t}}_{\bs{\lambda}}^{j}$ by $C_n(\sqrt{n} \widetilde{\bs{t}}_{\bs{\lambda}}^{j}) = \max_{\bs{t}_{\bs{\lambda}} 
\in\widetilde{\Lambda}_{\bs{\lambda} }^{j} } C_n(\sqrt{n} \bs{t}_{\bs{\lambda}})$; then we have $\max_{j \in \{1,2\}} C_n(\sqrt{n} \bs{t}(\ddot{\bs{\lambda}}^{j}, \alpha))= \max_{j \in \{1,2\}} C_n(\sqrt{n} \widetilde{\bs{t}}_{\bs{\lambda}}^{j}) +o_p(1)$. Therefore, in view of (\[LR\_appn3\]), we have $$2[L_n(\widehat{\bs{\psi}},{\alpha}) -L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)] =\max_{j \in \{1,2\} } C_n(\sqrt{n} \widetilde{\bs{t}}_{\bs{\lambda}}^{j}) + o_p(1) .$$ The asymptotic distribution of the LRTS follows from applying Theorem 3(c) of @andrews99em [p. 1362] to $C_n(\sqrt{n} \widetilde{\bs{t}}_{\bs{\lambda}}^{j})$. First, Assumption 2 of @andrews99em holds trivially for $C_n(\sqrt{n} \widetilde{\bs{t}}_{\bs{\lambda}}^{j})$. Second, Assumption 3 of @andrews99em holds with $B_T=n^{1/2}$ because $\bs{G}_{\bs{\lambda}.\bs{\eta} n} \to_d \bs{G}_{\bs{\lambda}.\bs{\eta}} \sim N(0,\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}})$ and $\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}$ is nonsingular. Assumption 4 of @andrews99em holds from the same argument as (\[ddot\_rate\]). Assumption $5$ of @andrews99em follows from Assumption $5^*$ of @andrews99em because $\widetilde{\Lambda}_{\bs{\lambda}}^{j}$ is locally equal to the cone ${\Lambda}_{\bs{\lambda}}^{j}$. Therefore, it follows from Theorem 3(c) of @andrews99em that $\max_{j \in \{1,2\}} C_n(\sqrt{n} \widetilde{\bs{t}}_{\bs{\lambda}}^{j}) \rightarrow_d \max_{j \in \{1,2\}} (\widehat{\bs{t}}_{\bs{\lambda}}^{j})^{\top}\bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}\widehat{\bs{t}}_{\bs{\lambda}}^{j}$, giving the stated result. For $m = 1,\ldots,M_0$, let $\mathcal{N}_{m}^* \subset \Theta_{\bs{\vartheta}_{M_0+1}}(\epsilon_1)$ be a sufficiently small closed neighborhood of $\Upsilon_{1m}^*$, such that $\alpha_m,\alpha_{m + 1} > 0$ hold and $\Upsilon_{1k}^* \notin \mathcal{N}_{m}^*$ if $k\neq m$.
For $\bs{\vartheta}_{M_0 + 1} \in \mathcal{N}_{m}^*$, we introduce the following one-to-one reparameterization, which is similar to (\[repara2\]): $$\begin{aligned} &\beta_{m}: = \alpha_{m} + \alpha_{m + 1}, \quad \tau: = \alpha_{m} /(\alpha_{m} + \alpha_{m + 1}), \\ &(\beta_1,\ldots,\beta_{m - 1},\beta_{m + 1},\ldots,\beta_{M_0 - 1})^{\top}: = (\alpha_1,\ldots,\alpha_{m - 1},\alpha_{m + 2},\ldots,\alpha_{M_0})^{\top},\\ & \begin{pmatrix} {\bs{\mu}}_m\\ {\bs{\mu}}_{m+1}\\ \bs{v}_m\\ \bs{v}_{m+1} \end{pmatrix} = \begin{pmatrix} \bs{\nu}_{\bs\mu} + (1-\tau) \bs{\lambda}_{\bs\mu} \\ \bs{\nu}_{\bs\mu} -\tau \bs{\lambda}_{\bs\mu}\\ \bs{\nu}_{\bs{v}} + (1- \tau)(2\bs{\lambda}_{\bs{v}}+ C_1 \bs{w}(\bs{\lambda}_{\bs\mu}\bs{\lambda}_{\bs\mu}\t) )\\ \bs{\nu}_{\bs{v}} - \tau(2\bs{\lambda}_{\bs{v}}+ C_2 \bs{w}(\bs{\lambda}_{\bs\mu}\bs{\lambda}_{\bs\mu}\t) ) \end{pmatrix}, \end{aligned}$$ where $\beta_{M_0} = 1 - \sum_{m = 1}^{M_0 - 1} \beta_m$, and we suppress the dependence of $(\bs{\lambda}_{\bs{\mu}},\bs{\nu}_{\bs{\mu}},\bs{\lambda}_{\bs{v}},\bs{\nu}_{\bs{v}})$ on $\tau$. With this reparameterization, the null restriction $(\bs{\mu}_{m},\bs{\Sigma}_{m}) = (\bs{\mu}_{m + 1},\bs{\Sigma}_{m + 1})$ implied by $H_{0, 1m}$ holds if and only if $(\bs{\lambda}_{\bs{\mu}},\bs{\lambda}_{\bs{v}}) = \bs{0}$. Collect the reparameterized parameters except for $\tau$ into one vector $\bs{\psi}^m$, and let $\bs{\psi}^{m*}$ denote its true value.
Define the reparameterized density as $$\begin{aligned} f_{M_0+1}^m(\bs{x}|\bs{z}; \bs{\psi}^{m},\tau) & : = \beta_{m} g^m(\bs{x}|\bs{z}; \bs{\psi}^{m},\tau) + \sum_{j = 1}^{m - 1} \beta_j f_v(\bs{x}|\bs{z}; \bs{\gamma},\bs{\mu}_j,\bs{\Sigma}_j) + \sum_{j = m + 1}^{M_0} \beta_{j} f_v(\bs{x}|\bs{z}; \bs{\gamma},\bs{\mu}_{j + 1},\bs{\Sigma}_{j + 1}),\end{aligned}$$ where, similar to (\[loglike\]), $$\begin{aligned} g^m(\bs{x}|\bs{z}; \bs{\psi}^{m},\tau) & := \tau f_v \left(\bs{x}|\bs{z}; \bs{\gamma}, \bs{\nu}_{\bs{\mu}} + (1-\tau) \bs{\lambda}_{\bs{\mu}}, \bs{\nu}_{\bs{v}} + (1- \tau)(2\bs{\lambda}_{\bs{v}}+ C_1 \bs{w}(\bs{\lambda}_{\bs\mu}\bs{\lambda}_{\bs\mu}\t)) \right)\\ & \quad + (1 - \tau) f_v \left(\bs{x}|\bs{z}; \bs{\gamma},\bs{\nu}_{\bs{\mu}} - \tau \bs{\lambda}_{\bs{\mu}}, \bs{\nu}_{\bs{v}} - \tau(2\bs{\lambda}_{\bs{v}}+ C_2 \bs{w}(\bs{\lambda}_{\bs\mu}\bs{\lambda}_{\bs\mu}\t)) \right).\end{aligned}$$ Observe that Lemma \[dv3\] in \[section:auxiliary\] is applicable to $g^m(\bs{x}|\bs{z}; \bs{\psi}^{m},\tau)$ by replacing $\alpha$ with $\tau$. Define $L_n^m(\bs{\psi}^{m},\tau): = \sum_{i = 1}^n \log[f_{M_0+1}^m(\bs{X}_i|\bs{Z}_i; \bs{\psi}^{m},\tau)] $. Then, $L_{n}^m(\bs{\psi}^m,\tau) - L_n^m(\bs{\psi}^{m*},\tau)$ admits the same expansion as $L_n(\bs{\psi},\alpha) - L_n(\bs{\psi}^*,\alpha)$ in Lemma \[P-quadratic\] in \[section:quadratic\] by replacing $(\bs{t}(\bs{\psi},\alpha), \bs{s}(\bs{x},\bs{z}), \bs{\mathcal{I}})$ with $(\bs{t}_{m}(\bs{\psi}^m,\tau), \bs{s}_{m}(\bs{x},\bs{z}), \bs{\mathcal{I}}^m)$, where $(\bs{s}_{m}(\bs{x},\bs{z}),\bs{\mathcal{I}}^m)$ is defined in the same manner as $(\bs{s}(\bs{x},\bs{z}),\bs{\mathcal{I}})$ but using $(\widetilde{\bs{s}}_{\bs{\eta}},\bs{s}_{\bs{\mu v}}^m,\bs{s}_{\bs{\mu}^4}^m)$ in place of $(\bs{s}_{\boldsymbol{\eta}},\bs{s}_{\bs{\lambda}})$.
Define the local penalized MLE of $\bs{\psi}^{m}$ by $$\widehat{\bs{\psi}}^m: = \arg\max_{\bs{\psi}^{m} \in\mathcal{N}_{m}^*} PL_n^m(\bs{\psi}^{m},\tau),\ \text{ where} \ PL_n^m(\bs{\psi}^{m},\tau): =L_n^m(\bs{\psi}^{m},\tau) + p_n(\bs{\psi}^{m}). \label{local_mle}$$ Because $\bs{\psi}^{m*}$ is the only parameter value in $\mathcal{N}_m^*$ that generates the true density, $\widehat{\bs{\psi}}^m - \bs{\psi}^{m*} = o_p(1)$ follows from a straightforward extension of Proposition \[P-consis\]. For $\epsilon_\tau\in(0, 1/2)$, define the LRTS for testing $H_{0,1m}$ as $LR_{n,1m}(\epsilon_\tau) : = \max_{\tau\in[\epsilon_\tau,1 - \epsilon_\tau]}2\{L_n^m(\widehat{\bs{\psi}}^m,\tau) - L_{0,n}(\widehat{\bs{\vartheta}}_{M_0})\}$. Observe that $p_n(\widehat{\bs{\psi}}^{m}) = o_p(1)$ because $a_n = o(1)$. Repeating the proof of Proposition \[P-LR-N1\] for each local penalized MLE by replacing $\bs{G}_{n}$ with $\bs{G}_{n,m} := \nu_n (\bs{s}_m(\bs{x},\bs{z}))$ and collecting the results while noting that $(\bs{G}_{n,1}^{\top},\ldots ,\bs{G}_{n,M_0}^{\top})^{\top}\rightarrow_d (\bs{G}_1^{\top},\ldots ,\bs{G}_{M_0}^{\top})^{\top}$, we obtain $$(LR_{n,11}(\epsilon_\tau), \ldots, LR_{n,1M_0}(\epsilon_\tau))^{\top}\rightarrow_d (v_1,\ldots, v_{M_0})^{\top},$$ with $v_m$’s defined in Proposition \[local\_lr-2\]. Therefore, the stated result holds. For $j=1,2$, let $\omega_{n,m}^{j}$ be the sample counterpart of $(\widehat{\bs{t}}_{\bs{\lambda},m}^{j})^{\top} \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}}^m \widehat{\bs{t}}_{\bs{\lambda},m}^{j}$ in Proposition \[local\_lr-2\] such that the local LRTS satisfies $2 [L_n^m(\widehat{\bs{\psi}}_{\tau}^{m},\tau) - L_{0, n}(\widehat{\bs{\vartheta}}_{M_0}) ] = \max_{j}\{\omega_{n,m}^{j}\} + o_p(1)$, where $\widehat{\bs{\psi}}_{\tau}^{m}$ is the local penalized MLE defined as in (\[local\_mle\]) but using the penalty function $p_n^m(\bs{\vartheta}_{M_0+1})$ in (\[penalty-em\]) in place of $p_n(\bs{\vartheta}_{M_0+1})$ in (\[pen\_pmle\]).
First, we show $\text{EM}_n^{m(1)} = \max_{j}\{\omega_{n,m}^{j}\} + o_p(1)$. For $\tau \in (0, 1)$, define $\bs{\vartheta}_{M_0+1}^{m*}(\tau):= \{\bs{\vartheta}_{M_0+1} \in\Upsilon_{1m}^* : \alpha_m/(\alpha_m+\alpha_{m+1})=\tau\}$, which gives the true density. Because $\bs{\vartheta}_{M_0+1}^{m*}(\tau_0)$ is the only value of $\bs{\vartheta}_{M_0+1}$ that yields the true density if $\bs{\varsigma} \in \Xi^*_m$ and $\alpha_m/(\alpha_m+\alpha_{m+1})=\tau_0$, $\bs{\vartheta}_{M_0+1}^{m(1)}(\tau_0)$ equals a reparameterized local penalized MLE in the neighborhood of $\bs{\vartheta}_{M_0+1}^{m*}(\tau_0)$. Therefore, $2[PL_{n}(\bs{\vartheta}_{M_0+1}^{m(1)}(\tau_0)) - L_{0,n}(\widehat{\bs{\vartheta}}_{M_0})] =\max_{j}\{\omega_{n,m}^{j}\} + o_p(1)$ follows from repeating the proof of Proposition \[local\_lr-2\], and $\text{EM}_n^{m(1)} =\max_{j}\{\omega_{n,m}^{j}\} + o_p(1)$ holds by noting that $\{0.5\} \in \mathcal{T}$. We proceed to show that $\text{EM}_n^{m(K)} = \max_{j}\{\omega_{n,m}^{j}\} + o_p(1)$ for any finite $K$. Because a generalized EM step never decreases the likelihood value [@dempster77jrssb], we have $PL_{n}^m(\bs{\vartheta}_{M_0+1}^{m(K)}(\tau_0)) + p( \tau^{(K)}) \geq PL_{n}^m(\bs{\vartheta}_{M_0+1}^{m(1)}(\tau_0)) + p( \tau_0)$. Therefore, it follows from Theorem 1 of @chentan09jmva, Lemma \[tau\_update\] in \[section:auxiliary\], and induction that $\bs{\vartheta}_{M_0+1}^{m(K)}(\tau_0) - \bs{\vartheta}_{M_0+1}^{m*}(\tau_0)= o_p(1)$ for any finite $K$. Let $\widetilde{\bs{\vartheta}}_{M_0+1}^{m}$ be the maximizer of $PL_{n}^m(\bs{\vartheta}_{M_0+1})$ under the constraint $\alpha_m/(\alpha_m + \alpha_{m+1}) = \tau^{(K)}$ in an arbitrary small closed neighborhood of $\bs{\vartheta}_{M_0+1}^{m*}(\tau^{(K)})$. 
Then, we have $PL_{n}(\widetilde{\bs{\vartheta}}_{M_0+1}^{m}) \geq PL_{n}^m({\bs{\vartheta}}_{M_0+1}^{m(K)}(\tau_0))+o_p(1)$ from the consistency of ${\bs{\vartheta}}_{M_0+1}^{m(K)}(\tau_0)$, and $2[PL_{n}(\widetilde{\bs{\vartheta}}_{M_0+1}^{m}) - L_{0,n}(\widehat{\bs{\vartheta}}_{M_0}) ] = \max_{j}\{\omega_{n,m}^{j}\} + o_p(1)$ holds from the definition of $\widetilde{\bs{\vartheta}}_{M_0+1}^{m}$. Furthermore, note that $PL_{n}(\bs{\vartheta}_{M_0+1}^{m(K)}(\tau_0)) \geq PL_{n}(\bs{\vartheta}_{M_0+1}^{m(1)}(\tau_0)) + o_p(1)$ from the definition of $\bs{\vartheta}_{M_0+1}^{m(K)}(\tau_0)$ and $\tau^{(K)} - \tau_0 = o_p(1)$, and we have already shown $2[PL_{n}(\bs{\vartheta}_{M_0+1}^{m(1)}(\tau_0)) - L_{0,n}(\widehat{\bs{\vartheta}}_{M_0})] = \max_{j}\{\omega_{n,m}^{j}\} + o_p(1)$. Therefore, $2[PL_{n}({\bs{\vartheta}}_{M_0+1}^{m(K)}(\tau_0)) - L_{0, n}(\widehat{\bs{\vartheta}}_{M_0}) ] = \max_{j}\{\omega_{n,m}^{j}\} + o_p(1)$ holds for all $m$, and $\text{EM}_n^{m(K)} = \max_{j}\{\omega_{n,m}^{j}\} + o_p(1)$ follows because $\tau^{(K)} - \tau_0 = o_p(1)$ and $\{0.5\} \in \mathcal{T}$. The stated result then follows from the definition of $\text{EM}_n^{(K)}$. The proof follows the argument in the proof of Proposition \[P-LR-N1\]. Observe that $\bs{h}_{\bs \eta}=0$ and $\bs{h}_{\bs\lambda} =\sqrt{n}\bs{t}_{\bs{\lambda}}(\bs{\lambda}_n,\alpha_n)+o(1)$ hold under $H_{1n}$. Therefore, Lemma \[P-LAN\] in \[section:auxiliary\] holds under $\mathbb{P}_{\bs{\vartheta}_n}^n$ implied by $H_{1n}$, and, in conjunction with Theorem 12.3.2 of @lehmannromano05book, Lemma \[P-quadratic\] holds under $\mathbb{P}_{\bs\vartheta_n }^n$.
Consequently, the proof of Proposition \[P-LR-N1\] goes through if we replace $\bs{G}_{\bs{\lambda.\eta} n}\rightarrow_d \bs{G}_{\bs{\lambda.\eta} }$ with $\bs{G}_{\bs{\lambda.\eta} n} \rightarrow_d \bs{G}_{\bs{\lambda.\eta}} + ( \bs{\mathcal{I}}_{\bs{\lambda}} - \bs{\mathcal{I}}_{\bs{\lambda\eta}} \bs{\mathcal{I}}_{\bs{\eta}}^{-1} \bs{\mathcal{I}}_{\bs{\eta\lambda}} ) \bs{h}_{\bs\lambda} = \bs{G}_{\bs{\lambda.\eta}} + \bs{\mathcal{I}}_{\bs{\lambda.\eta}} \bs{h}_{\bs{\lambda}}$, and the stated result follows. The proof follows the argument in the proof of Theorem 15.4.2 in @lehmannromano05book. Define $\mathbf{C}_{\bs\eta}$ as the set of sequences $\{{\bs{\eta}}_n\}$ satisfying $\sqrt{n}({\bs{\eta}}_n - {\bs{\eta}}^*) \to \bs{h}_{\bs\eta}$ for some finite $ \bs{h}_{\bs\eta}$. Denote the MLE of the model with $M_0=1$ by $\hat{\bs{\eta}}_n$; then $\sqrt{n}(\hat{\bs{\eta}}_n - \bs{\eta}^*)$ converges in distribution to a $\mathbb{P}_{\bs{\vartheta}^*}$-a.s. finite random variable. Then, by the Almost Sure Representation Theorem (e.g., Theorem 11.2.19 of @lehmannromano05book), there exist random variables $\widetilde{\bs{\eta}}_n$ and $\widetilde{\bs{h}}_{\bs\eta}$ defined on a common probability space such that $\hat{\bs{\eta}}_n$ and $\widetilde{\bs{\eta}}_n$ have the same distribution and $\sqrt{n}(\widetilde{\bs{\eta}}_n -\bs \eta^*)\rightarrow \widetilde{\bs{h}}_{\bs\eta}$ almost surely. Therefore, $\{ \widetilde{\bs{\eta}}_n \}\in \mathbf{C}_{\bs\eta}$ with probability one, and the stated result under $H_0$ follows from Lemma \[lemma\_btsp\] in \[section:auxiliary\] because $\hat{\bs{\eta}}_n$ and $\widetilde{\bs{\eta}}_n$ have the same distribution. For the MLE under $H_{1n}$, note that the proof of Proposition \[P-LAN2\] goes through when $\bs{h}_{\bs\eta}$ is finite. Therefore, $\sqrt{n}(\hat{\bs{\eta}}_n - \bs{\eta}^*)$ converges in distribution to a $\mathbb{P}_{\bs\vartheta_n}$-a.s. finite random variable under $H_{1n}$.
Hence, the stated result for the LRTS follows from repeating the argument in the case of $H_0$. The corresponding result for the EM test follows from the asymptotic equivalence of $LR_n^{M_0}$ and $EM_n^{(K)}$. The proof is similar to that of Proposition \[P-LR-N1\]. Let $(\widehat{\bs{\psi}},\widehat{\alpha})$ denote the reparameterization of $\widehat{\bs{\vartheta}}_2^1$. Write $\bs{t}(\bs{\psi},\alpha)$ in (\[tpsi\_defn-homo\]) as $(\bs{t}_{\bs{\eta}}\t,\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha)\t)\t=(\bs{t}_{\bs{\eta}}\t,\bs{t}_{\bs{\mu}^3}(\bs{\lambda},\alpha)\t,\bs{t}_{\bs{\mu}^4}(\bs{\lambda},\alpha)\t)\t$. Repeating the argument that leads to (\[LR\_appn2\]) in the proof of Proposition \[P-LR-N1\] but using Lemma \[P-quadratic-homo-1\] in place of Lemma \[P-quadratic\], we obtain $$2[L_n(\widehat{\bs{\psi}},\widehat{\alpha}) -L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)] = C_n(\sqrt{n}\bs{t}(\widehat{\bs{\psi}},\widehat{\alpha})) + o_p(1),$$ where $C_n(\cdot)$ is defined as in (\[B\_pi\]) but using $\bs{s}(\bs{x},\bs{z})$ defined in (\[score\_defn-homo\]) in place of (\[score\_defn\]). Partition the parameter space as $\Theta_{\bs{\lambda\alpha}}=\Theta_{\bs{\lambda\alpha}}^1\cup \Theta_{\bs{\lambda\alpha}}^2$ with $$\begin{aligned} \label{theta_lambda_e-homo} &\Theta_{\bs{\lambda\alpha}}^1 := \{ (\bs{\lambda},\alpha) \in \Theta_{\bs{\lambda\alpha}} : |1-2\alpha| \geq n^{-1/8} \log n \}\ \text{and}\ \Theta_{\bs{\lambda\alpha}}^2 := \{ (\bs{\lambda},\alpha) \in \Theta_{\bs{\lambda\alpha}} : |1-2\alpha| < n^{-1/8} \log n \},\end{aligned}$$ where $\Theta_{\bs\lambda\alpha}$ is the set of values of $(\bs\lambda,\alpha)$ such that the value of ${\bs{\vartheta}}_2$ implied by $(\bs{\lambda},\alpha)$ is in $\Theta_{{\bs{\vartheta}}_2,\zeta}^1$.
Define $(\ddot{\bs{\lambda}}^j,\ddot{\alpha}^j)$ by $C_n(\sqrt{n} \bs{t}(\ddot{\bs{\lambda}}^j, \ddot{\alpha}^j)) = \max_{(\bs{\lambda},\alpha) \in \Theta_{\bs{\lambda\alpha}}^j} C_n(\sqrt{n} \bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha))$ for $j=1,2$. Then, repeating the argument following (\[ddot\_rate\])–(\[LR\_appn3\]), we have $$\begin{aligned} & \bs{t}(\ddot{\bs{\lambda}}^j,\ddot{\alpha}^j) = (\bs{t}_{\bs{\mu}^3}(\ddot{\bs{\lambda}}^j,\ddot{\alpha}^j)\t,\bs{t}_{\bs{\mu}^4}(\ddot{\bs{\lambda}}^j,\ddot{\alpha}^j)\t)\t=O_p(n^{-1/2})\quad\text{for $j=1,2$}, \label{ddot_rate-homo} \\ & 2[L_n(\widehat{\bs{\psi}},\widehat{\alpha}) -L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)] = \max_{j\in\{1,2\}}\left\{ C_n(\sqrt{n} \bs{t}(\ddot{\bs{\lambda}}^j, \ddot{\alpha}^j)) \right\}+ o_p(1).\label{LR_appn3-homo}\end{aligned}$$ Define $ \ddot{\bs{\lambda}}_{\bs{\mu}^3}^j$ and $\ddot{\bs{\lambda}}_{\bs{\mu}^4}^j $ similarly to $\bs{\lambda}_{\bs{\mu}^3}$ and $\bs{\lambda}_{\bs{\mu}^4}$ but using $\ddot{\bs{\lambda}}^j$ in place of $\bs{\lambda}$. 
Observe that (\[theta\_lambda\_e-homo\]) and (\[ddot\_rate-homo\]) imply that, with $c(\alpha):=\alpha(1-\alpha)$, $$\label{Lambda-ddot-homo} \begin{aligned} &\bs{t}_{\bs{\mu}^3}(\ddot{\bs{\lambda}}^j,\ddot{\alpha}^j) = c(\ddot{\alpha}^j)(1-2\ddot{\alpha}^j) \ddot{\bs{\lambda}}_{\bs{\mu}^3}^j + o_p(n^{-1/2})\quad\text{for $j=1, 2$}, \\ &\bs{t}_{\bs{\mu}^4}(\ddot{\bs{\lambda}}^j,\ddot{\alpha}^j) = \begin{cases} o_p(n^{-1/2}) &\text{if } j=1\\ c(\ddot{\alpha}^j)(1-6\ddot{\alpha}^j+6(\ddot{\alpha}^j)^2) \ddot{\bs{\lambda}}_{\bs{\mu}^4}^j + o_p(n^{-1/2})&\text{if } j=2, \end{cases} \end{aligned}$$ where $\bs{t}_{\bs{\mu}^4}(\ddot{\bs{\lambda}}^1,\ddot{\alpha}^1) =o_p(n^{-1/2})$ holds because (\[theta\_lambda\_e-homo\]) and (\[ddot\_rate-homo\]) give $|1-2\ddot{\alpha}^1|\geq n^{-1/8}\log n$ and $c(\ddot{\alpha}^{1})(1-2\ddot{\alpha}^1) \ddot{\bs{\lambda}}_{\bs{\mu}^3}^1=O_p(n^{-1/2})$, which together imply that $\ddot{\lambda}_i^1 = O_p(n^{-1/8}(\log n)^{-1/3})$ for any $i=1,\ldots,d$. For $j=1,2$, consider the following set: $$\label{Lambda-tilde-e-homo} \widetilde{\Lambda}_{\bs{\lambda} }^j := \left\{ \left( \bs{t}_{\bs{\mu}^3}\t, \bs{t}_{\bs{\mu}^4}\t\right)\t: \bs{t}_{\bs{\mu}^3} = \tilde{\bs{t}}_{\bs{\mu^3}}(\bs{\lambda},\alpha),\ \bs{t}_{\bs{\mu}^4} = \tilde{\bs{t}}_{\bs{\mu^4}}^j(\bs{\lambda},\alpha) \ \text{for some $(\bs{\lambda},\alpha) \in \Theta_{\bs{\lambda}\alpha}$} \right\},$$ where $$\label{Lambda-tilde-constraint-homo} \tilde{\bs{t}}_{\bs{\mu}^3} (\bs{\lambda},\alpha) := c(\alpha)(1-2\alpha) \bs{\lambda}_{\bs{\mu}^3}\quad\text{and}\quad \tilde{\bs{t}}_{\bs{\mu}^4}^j(\bs{\lambda},\alpha) := \begin{cases} 0& \text{if } j=1,\\ c(\alpha)(1-6\alpha+6\alpha^2) \bs{\lambda}_{\bs{\mu}^4} &\text{if } j=2. \end{cases} $$ Define $\widetilde{\bs{t}}_{\bs{\lambda}}^j$ by $C_n(\sqrt{n} \widetilde{\bs{t}}_{\bs{\lambda}}^j) = \max_{\bs{t}_{\bs{\lambda}} \in\widetilde{\Lambda}_{\bs{\lambda} }^j } C_n(\sqrt{n} \bs{t}_{\bs{\lambda}})$.
Then, it follows from (\[Lambda-ddot-homo\]) and (\[Lambda-tilde-constraint-homo\]) that $ C_n(\sqrt{n} \bs{t}(\ddot{\bs{\lambda}}^j, \ddot{\alpha}^j))= C_n(\sqrt{n} \widetilde{\bs{t}}_{\bs{\lambda}}^j) + o_p(1)$ for $j=1,2$. Therefore, in view of (\[LR\_appn3-homo\]), we have $$2[L_n(\widehat{\bs{\psi}},\widehat{\alpha}) -L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)] = \max_{j\in\{1,2\}}\left\{C_n(\sqrt{n} \widetilde{\bs{t}}_{\bs{\lambda}}^{j}) \right\} + o_p(1) .$$ Note that $\widetilde{\Lambda}_{\bs{\lambda} }^j$ in (\[Lambda-tilde-e-homo\]) is locally (in a neighborhood of $\bs{\lambda}=0$) equal to the cone $\Lambda_{\bs{\lambda} }^j$ in (\[Lambda-e-homo\]) for $j=1,2$ given that $\lim_{\alpha\rightarrow 1/2}(1-6\alpha+6\alpha^2) =-1/2$. Therefore, the stated result follows from applying Theorem 3(c) of @andrews99em by repeating the argument in the last paragraph of the proof of Proposition \[P-LR-N1\]. The proof is similar to that of Proposition \[P-LR-N1\]. Let $\widehat{\bs{\psi}}_{\bs{\lambda}}(\bs{\lambda})=\arg\max_{{\bs{\psi}}_{\bs{\lambda}}\in \Theta_{\bs{\psi_\lambda}}} L_n( {\bs{\psi}}_{\bs{\lambda}}, {\bs{\lambda}}) $ for $\bs{\lambda}$ such that $|\bs{\lambda}|\geq \zeta$. Let $$\bs{G}_{ n}(\bs{\lambda}) := \nu_n (\bs{s}(\bs{x},\bs{z};\bs{\lambda})) = \begin{bmatrix} \bs{G}_{\bs{\eta} n} \\ {G}_{\alpha n}(\bs{\lambda}) \end{bmatrix}, \quad \begin{aligned} {G}_{\alpha.\bs{\eta} n}(\bs{\lambda}) &:= {G}_{\alpha n}(\bs{\lambda}) - \bs{\mathcal{I}}_{\alpha\bs{\eta}}(\bs{\lambda})\bs{\mathcal{I}}_{\bs{\eta} }^{-1} \bs{G}_{\bs{\eta} n}, \\ {Z}_{\alpha . \bs{\eta} n}(\bs{\lambda}) &:= \bs{\mathcal{I}}_{\alpha.\bs{\eta} }^{-1}{G}_{\alpha.\bs{\eta} n}(\bs{\lambda}).
\end{aligned}$$ Following the argument that leads to (\[LR\_appn2\]) in the proof of Proposition \[P-LR-N1\] but using Lemma \[P-quadratic-homo-2\] in place of Lemma \[P-quadratic\], we obtain $$2[L_n(\widehat{\bs{\psi}}_{\bs{\lambda}}(\bs{\lambda}),\bs{\lambda})-L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)] = C_n(\sqrt{n}|\bs{\lambda}|^3\widehat{\alpha}(\bs{\lambda}) ;\bs{\lambda}) + o_p(1),$$ where $$C_n(t_\alpha;\bs{\lambda}) : = (Z_{\alpha .\bs{\eta} n}(\bs{\lambda}))^2{\mathcal{I}}_{\alpha.\bs{\eta} } (\bs{\lambda})- ( t_\alpha- {Z}_{\alpha.\bs{\eta} n}(\bs{\lambda}))^2 {\mathcal{I}}_{\alpha.\bs{\eta} }(\bs{\lambda}).$$ Define $\ddot{\alpha}(\bs{\lambda})$ by $C_n(\sqrt{n}|\bs{\lambda}|^3\ddot{\alpha}(\bs{\lambda});\bs{\lambda})=\max_{\alpha\in [0,3/4]} C_n(\sqrt{n} |\bs{\lambda}|^3\alpha ;\bs{\lambda}) $. Then, repeating the argument following (\[ddot\_rate\])–(\[LR\_appn3\]), we have $$\begin{aligned} & |\bs{\lambda}|^3 \ddot{\alpha}(\bs{\lambda}) = O_p(n^{-1/2}), \label{ddot_rate-homo-2} \\ & 2[L_n(\widehat{\bs{\psi}}_{\bs{\lambda}}(\bs{\lambda}),\bs{\lambda})-L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)] = C_n(\sqrt{n} |\bs{\lambda}|^3\ddot{\alpha}(\bs{\lambda}) ;\bs{\lambda}) + o_p(1). \label{LR_appn3-homo2}\end{aligned}$$ Define $\widetilde{t}_{\alpha}(\bs{\lambda})$ by $C_n(\sqrt{n}\widetilde{t}_{\alpha}(\bs{\lambda});\bs{\lambda})=\max_{t_\alpha\in \widetilde{\Lambda}_\alpha(\bs{\lambda})} C_n(\sqrt{n} t_{\alpha} ;\bs{\lambda}) $, where $\widetilde{\Lambda}_\alpha(\bs{\lambda}): = \{t_\alpha: t_\alpha=|\bs{\lambda}|^3 \alpha \text{ for some $\alpha\in[0,3/4]$}\}$. 
Then, because $C_n(\sqrt{n}\widetilde{t}_{\alpha}(\bs{\lambda});\bs{\lambda})=C_n(\sqrt{n} |\bs{\lambda}|^3\ddot{\alpha}(\bs{\lambda}) ;\bs{\lambda}) $, (\[LR\_appn3-homo2\]) implies that $$2[L_n(\widehat{\bs{\psi}}_{\bs{\lambda}}(\bs{\lambda}),\bs{\lambda})-L_{0,n}(\widehat{\bs{\gamma}}_0,\widehat{\bs{\mu}}_0,\widehat{\bs{\Sigma}}_0)] = C_n(\sqrt{n}\widetilde{t}_{\alpha}(\bs{\lambda});\bs{\lambda}) + o_p(1).$$ The stated result follows from Theorem 1(c) of @andrews01em as $$\sup_{\Theta_{\bs{\lambda}} \cap \{ |\bs{\lambda}|\geq \zeta \}} C_n(\sqrt{n}\widetilde{t}_{\alpha}(\bs{\lambda});\bs{\lambda}) \rightarrow_d \sup_{\Theta_{\bs{\lambda}} \cap \{ |\bs{\lambda}|\geq \zeta \}} (\widehat{t}_{\alpha}(\bs{\lambda}))^2 {\mathcal{I}}_{\alpha.\bs{\eta}}(\bs{\lambda}),$$ where Assumption 2 of @andrews01em trivially holds for $C_n(\sqrt{n}\widetilde{t}_{\alpha};\bs{\lambda})$; Assumption 3 of @andrews01em holds with $B_T=n^{1/2}$ because ${G}_{\alpha.\bs{\eta} n}(\bs{\lambda})\Rightarrow {G}_{\alpha.\bs{\eta}}(\bs{\lambda})$ from (\[weak\_cgce\]); Assumption 4 of @andrews01em holds from the same argument as (\[ddot\_rate-homo-2\]); Assumption 5 of @andrews01em holds because $\widetilde{\Lambda}_\alpha(\bs{\lambda})$ is locally equal to the cone $\mathbb{R}_+$. We omit the proof of Proposition \[local\_lr-2-homo\] because its argument is similar to that of Proposition \[local\_lr-2\] except that the proof of Proposition \[local\_lr-2-homo\] will refer to the proof of Proposition \[P-LR-N1-homo\] in place of that of Proposition \[P-LR-N1\]. Quadratic approximation of the log-likelihood function {#section:quadratic} ====================================================== When testing the number of components with the likelihood ratio test, the Fisher information matrix is singular, and the log-likelihood function is approximated by a quadratic function of polynomials of the parameters. Further, part of the parameter vector is not identified under the null hypothesis.
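To fix ideas with the simplest possible case (this illustration is not part of the model analyzed below), let $\phi$ denote the standard normal density and consider the univariate two-component location mixture $f(x;\alpha,\mu_1,\mu_2) = \alpha\phi(x-\mu_1)+(1-\alpha)\phi(x-\mu_2)$ with known unit variance. At a null point with $\mu_1=\mu_2=\mu^*$, using $\phi'(u)=-u\phi(u)$, $$\nabla_{\mu_1}\log f = \alpha (x-\mu^*) \quad\text{and}\quad \nabla_{\mu_2}\log f = (1-\alpha)(x-\mu^*),$$ so the two scores are linearly dependent and the Fisher information matrix is singular; moreover, $f$ no longer depends on $\alpha$ at the null, so $\alpha$ is not identified.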
This section establishes a quadratic approximation of the log-likelihood function using the results in \[section:expansion\] and \[section:auxiliary\]. Lemma \[P-quadratic\] considers the case of testing $H_0:M=1$ against $H_A:M=2$ in the heteroscedastic case. For a sequence $X_{n\varepsilon}$ indexed by $n=1,2,\ldots$ and $\varepsilon$, we write $X_{n\varepsilon} = O_{p\varepsilon}(a_n)$ if, for any $\Delta>0$, there exist $\varepsilon>0$ and $M, n_0 <\infty$ such that $\mathbb{P}(|X_{n\varepsilon}/a_n| \leq M) \geq 1- \Delta$ for all $n > n_0$, and we write $X_{n\varepsilon} = o_{p\varepsilon}(a_n)$ if, for any $\Delta_1,\Delta_2>0$, there exist $\varepsilon>0$ and $n_0$ such that $\mathbb{P}(|X_{n\varepsilon}/a_n| \leq \Delta_1) \geq 1- \Delta_2$ for all $n > n_0$. Loosely speaking, $X_{n\varepsilon} = O_{p\varepsilon}(a_n)$ and $X_{n\varepsilon} = o_{p\varepsilon}(a_n)$ mean that $X_{n\varepsilon} = O_{p}(a_n)$ and $X_{n\varepsilon} = o_{p}(a_n)$ when $\varepsilon$ is sufficiently small, respectively. \[P-quadratic\] Suppose that Assumptions \[assn\_consis\] and \[A-taylor1\] hold and $\bs{X}$ given $\bs{Z}$ has the density $f(\bs{x}|\bs{z}; \bs{\gamma},{\bs{\mu}},\bs{\Sigma})$ defined in (\[normal\_density\]). Let $L_n(\bs{\psi},\alpha): = \sum_{i = 1}^n \log g(\bs{X}_i|\bs{Z}_i;\bs{\psi},\alpha)$ with $g(\bs{x}|\bs{z};\bs{\psi},\alpha)$ defined in (\[loglike\]). For $\alpha \in (0,1)$, define $\bs{s}(\bs{x},\bs{z})$ and $\bs{t}(\bs{\psi},\alpha)$ as in (\[score\_defn\]) and (\[tpsi\_defn\]), and let $\mathcal{N}_{\varepsilon} := \{ \bs{\vartheta}_2 \in \Theta_{\bs{\vartheta}_2} : |\bs{t}(\bs{\psi},\alpha)|< \varepsilon\}$ and $\bs{\mathcal{I}}:=E[\bs{s}(\bs{X},\bs{Z})\bs{s}(\bs{X},\bs{Z})\t]$. 
Then, for $\epsilon_\sigma\in(0,1)$ and any $\delta>0$, we have (a) $\sup_{\bs{\vartheta}_2 \in A_{n \varepsilon}(\delta)} |\bs{t}(\bs{\psi},\alpha)| = O_{p\varepsilon}(n^{-1/2})$; $$(b)\ \sup_{\bs{\vartheta}_2 \in A_{n \varepsilon}(\delta)}\left|L_n(\bs{\psi},\alpha) - L_n(\bs{\psi}^*,\alpha) - \sqrt{n} \bs{t}(\bs{\psi},\alpha)\t \nu_n(\bs{s}(\bs{x},\bs{z})) + n \bs{t}(\bs{\psi},\alpha)\t \bs{\mathcal{I}} \bs{t}(\bs{\psi},\alpha)/2 \right| = o_{p\varepsilon}(1),$$ where $A_{n \varepsilon}(\delta) := \{\bs{\vartheta}_2 \in \mathcal{N}_\varepsilon: L_n(\bs{\psi},\alpha) - L_n(\bs{\psi}^*,\alpha) \geq -\delta \}$. We prove the stated result by using Lemma \[Ln\_thm2\] in \[section:expansion\], where $\ell(\bs{y};\bs{\psi},\alpha):=g(\bs{x}|\bs{z};\bs{\psi},\alpha)/g(\bs{x}|\bs{z};\bs{\psi}^*,\alpha)$ with $\bs{y}:=(\bs{x}\t,\bs{z}\t)\t$ plays the role of $\ell(\bs{y},\bs{\vartheta})$ as defined in (\[density\_ratio\]) and $\bs{t}(\bs{\psi},\alpha)$ plays the role of $\bs{t}(\bs{\vartheta})$. Observe that $\bs{t}(\bs{\psi},\alpha)$ defined in (\[tpsi\_defn\]) satisfies $\bs{t}(\bs{\psi},\alpha) = 0$ if and only if $\bs{\psi}=\bs{\psi}^*$ because $\bs{\lambda}=0$ if and only if the $(i,i,i,i)$th element of $ 12 \bs{\lambda}_{\bs{ v}^2} + b(\alpha) \bs{\lambda}_{\bs{\mu}^4} $ is 0 for all $1 \leq i \leq d$. We expand $ \ell(\bs{y};\bs{\psi},\alpha) -1$ five times with respect to $\bs{\psi}$ and show that the expansion satisfies Assumption \[assn\_expansion\] in \[section:expansion\]. Define $$\label{v_defn} \bs{v}(\bs{y};\bs{\vartheta}_2):=(\nabla_{\bs{\psi}} g(\bs{x}|\bs{z};\bs{\psi},\alpha)\t, \nabla_{\bs{\psi}^{\otimes 2}} g(\bs{x}|\bs{z};\bs{\psi},\alpha)\t, \ldots, \nabla_{\bs{\psi} ^{\otimes 5}} g(\bs{x}|\bs{z};\bs{\psi},\alpha)\t)\t/g(\bs{x}|\bs{z};\bs{\psi}^*,\alpha),$$ which satisfies $E[\bs{v}(\bs{Y};\bs{\vartheta}_2)]=0$.
In order to apply Lemma \[Ln\_thm2\] to $\ell(\bs{y};\bs{\psi},\alpha) -1$, we first show $$\begin{aligned} &\sup_{\bs{\vartheta}_2 \in \mathcal{N}_\varepsilon}\left|P_n [\bs{v}(\bs{y};\bs{\vartheta}_2)\bs{v}(\bs{y};\bs{\vartheta}_2)\t] - E[\bs{v}(\bs{Y};\bs{\vartheta}_2)\bs{v}(\bs{Y};\bs{\vartheta}_2)\t]\right| = o_p(1), \label{uniform_lln} \\ &\nu_n(\bs{v}(\bs{y};\bs{\vartheta}_2)) \Rightarrow \bs{W}(\bs{\vartheta}_2), \label{weak_cgce}\end{aligned}$$ where $\bs{W}(\bs{\vartheta}_2)$ is a mean-zero continuous Gaussian process with $E[\bs{W}(\bs{\vartheta}_2)\bs{W}(\bs{\vartheta}_2')\t] = E[\bs{v}(\bs{Y};\bs{\vartheta}_2)\bs{v}(\bs{Y};\bs{\vartheta}_2')\t]$. (\[uniform\_lln\]) holds because $\bs{v}(\bs{Y}_i;\bs{\vartheta}_2)\bs{v}(\bs{Y}_i;\bs{\vartheta}_2)\t$ satisfies a uniform law of large numbers (see, for example, Lemma 2.4 of @neweymcfadden94hdbk): $\bs{v}(\bs{y};\bs{\vartheta}_2)$ is continuous in $\bs{\vartheta}_2$, and $E\sup_{\bs{\vartheta}_2 \in \mathcal{N}_{\varepsilon}}|\bs{v}(\bs{Y};\bs{\vartheta}_2)|^2 < \infty$ from the property of the normal density and Assumption \[A-taylor1\]. (\[weak\_cgce\]) follows from Theorem 10.2 of @pollard90book if (i) $\Theta_{\bs{\vartheta}_2}$ is totally bounded, (ii) the finite dimensional distributions of $\nu_n(\bs{v}(\bs{y};\bs{\vartheta}_2))$ converge to those of $\bs{W}(\bs{\vartheta}_2)$, and (iii) $\{\nu_n(\bs{v}(\bs{y};\bs{\vartheta}_2)): n\geq 1\}$ is stochastically equicontinuous. Condition (i) holds because $\Theta_{\bs{\vartheta}_2}$ is compact in the Euclidean space. Condition (ii) follows from Assumption \[A-taylor1\] and the multivariate CLT. Condition (iii) holds by Theorem 2 of @andrews94hdbk because $\bs{v}(\bs{y};\bs{\vartheta}_2)$ is Lipschitz continuous in $\bs{\vartheta}_2$.
Note that the $(p+1)$-th order Taylor expansion of $g(\bs{\psi})$ around $\bs{\psi}=\bs{\psi}^*$ is given by $$\begin{aligned} g(\bs{\psi})& = g(\bs{\psi}^*)+ \sum_{j=1}^p \frac{1}{j!} \nabla_{(\bs{\psi}^{\otimes j})\t} g(\bs{\psi}^*) (\bs{\psi}-\bs{\psi}^*)^{\otimes j} + \frac{1}{(p+1)!} \nabla_{(\bs{\psi}^{\otimes (p+1)})\t} g(\overline{\bs{\psi}}) (\bs{\psi}-\bs{\psi}^*)^{\otimes (p+1)} ,\end{aligned}$$ where $\overline{\bs{\psi}}$ lies between $\bs{\psi}$ and $\bs{\psi}^*$, and $\overline{\bs{\psi}}$ may differ from element to element of $\nabla_{(\bs{\psi}^{\otimes (p+1)})\t} g(\overline{\bs{\psi}})$. Let $g^*$ and $\nabla g^*$ denote $g(\bs{x}|\bs{z};\bs{\psi}^*,\alpha)$ and $\nabla g(\bs{x}|\bs{z};\bs{\psi}^*,\alpha)$, and let $\nabla\overline{g}$ denote $\nabla g(\bs{x}|\bs{z};\overline{\bs{\psi}},\alpha)$. Let $\dot{\bs{\psi}}:=\bs{\psi} - \bs{\psi}^*$ and $\dot{\bs{\eta}}:=\bs{\eta} - \bs{\eta}^*$. Expanding $\ell(\bs{y};\bs{\psi},\alpha)$ five times around $\bs{\psi}^*$ while fixing $\alpha$ and using Lemma \[dv3\] in \[section:auxiliary\], we can write $\ell(\bs{y};\bs{\psi},\alpha)-1$ as $$\ell(\bs{y};\bs{\psi},\alpha) -1 = s(\bs{y};\bs{\eta},\bs{\lambda}) + r(\bs{y};\bs{\eta},\bs{\lambda}),$$ where $$s(\bs{y};\bs{\eta},\bs{\lambda}) :=\frac{\nabla_{\bs{\eta}\t}g^* }{g^*}\dot{\bs{\eta}} + \frac{1}{2!} \frac{\nabla_{(\bs{\lambda}^{\otimes 2})\t} g^*}{g^*} \bs{\lambda}^{\otimes 2} + \frac{1}{4!} \frac{\nabla_{(\bs{\lambda}_{\bs{\mu}}^{\otimes 4})\t} g^* }{g^*} \bs{\lambda}_{\bs{\mu}}^{\otimes 4},$$ and, with $\dot{\bs{\psi}}_-:=(\dot{\bs{\eta}}\t,\bs{\lambda}_{\bs v}\t)\t$, $$\begin{aligned} r(\bs{y};\bs{\eta},\bs{\lambda}) &:= \frac{1}{2!} \frac{\nabla_{(\bs{\eta}^{\otimes 2})\t}g^*}{g^*} \dot{\bs{\eta}}^{\otimes 2} + \frac{1}{3!} \frac{\nabla_{(\bs{\psi}^{\otimes 3})\t} g^* }{g^*} \dot{\bs{\psi}}^{\otimes 3} \label{l_expand_2} \\ & \quad + \frac{1}{4!}\sum_{p=0}^{3} \binom{4}{p} \frac{\nabla_{(\bs{\psi}_-^{\otimes (4-p)} \otimes 
\bs{\lambda}_{\bs{\mu}}^{\otimes p} )\t} g^* }{g^*} (\dot{\bs{\psi}}_-^{\otimes (4-p)} \otimes \bs{\lambda}_{\bs{\mu}}^{\otimes p} ) \label{l_expand_3} \\ &\quad + \frac{1}{5!} \frac{\nabla_{(\bs{\lambda}_{\bs{\mu}}^{\otimes 5})\t} \overline{g} }{g^*} \bs{\lambda}_{\bs{\mu}}^{\otimes 5} + \frac{1}{5!} \sum_{p=0}^{4} \binom{5}{p}\frac{\nabla_{(\bs{\psi}_-^{\otimes (5-p)} \otimes \bs{\lambda}_{\bs{\mu}}^{\otimes p} )\t} \overline{g} }{g^*} (\dot{\bs{\psi}}_-^{\otimes (5-p)} \otimes \bs{\lambda}_{\bs{\mu}}^{\otimes p} ) \label{l_expand_4}.\end{aligned}$$ $s(\bs{y};\bs{\eta},\bs{\lambda})$ is the leading term in the expansion. We first show $s(\bs{y};\bs{\eta},\bs{\lambda}) = \bs{t}(\bs{\psi},\alpha)\t \bs{s}(\bs{x},\bs{z})$ with $\bs{s}(\bs{x},\bs{z})$ and $\bs{t}(\bs{\psi},\alpha)$ defined in (\[score\_defn\])–(\[lambda\_muv\_defn\]). Let $f^*$ and $\nabla f^*$ denote $f(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}^*,\bs{\Sigma}^*)$ and $\nabla f(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}^*,\bs{\Sigma}^*)$. The first term of $s(\bs{y};\bs{\eta},\bs{\lambda})$ is simply $(\nabla_{(\bs{\gamma}\t,\bs{\mu}\t,\bs{v}\t)\t}f^*/f^*)\dot{\bs{\eta}}$. Using Lemmas \[mv\_derivative\] and \[dv3\] and commutativity of partial derivatives, the second term of $s(\bs{y};\bs{\eta},\bs{\lambda})$ is written as $(1/2!)(\nabla_{(\bs{\lambda}^{\otimes 2})\t} g^*/g^*) \bs{\lambda}^{\otimes 2} = (\nabla_{(\bs{\lambda}_{\bs{\mu}} \otimes \bs{\lambda}_{\bs{v}})^{\top}} g^*/g^*) (\bs{\lambda}_{\bs{\mu}} \otimes \bs{\lambda}_{\bs{v}}) + (1/2) (\nabla_{(\bs{\lambda}_{\bs{v}}^{\otimes 2})\t} g^*/g^*) \bs{\lambda}_{\bs{v}}^{\otimes 2}$. 
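As a quick numerical aside (not part of the proof), the Kronecker-power Taylor notation $\frac{1}{j!}\nabla_{(\bs{\psi}^{\otimes j})\t} g(\bs{\psi}^*) (\bs{\psi}-\bs{\psi}^*)^{\otimes j}$ can be sanity-checked in a few lines. The quadratic $g$ below is a hypothetical stand-in (for a quadratic, the second-order expansion is exact), and `np.kron` plays the role of $\otimes$:

```python
import numpy as np

# Hypothetical quadratic g(psi) = psi' A psi; its second-order Taylor
# expansion in Kronecker-power notation is exact, so the check is sharp.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
g = lambda p: float(p @ A @ p)
psi_star = np.array([0.3, -0.2])     # expansion point (stands in for psi*)
d = np.array([0.05, 0.1])            # psi - psi*
grad = (A + A.T) @ psi_star          # nabla g at psi*
hess_vec = (A + A.T).flatten()       # vec of nabla^2 g (constant here)
kron2 = np.kron(d, d)                # (psi - psi*)^{otimes 2}
taylor = g(psi_star) + grad @ d + 0.5 * hess_vec @ kron2
```

Here $\nabla_{(\bs{\psi}^{\otimes 2})\t} g \, (\bs{\psi}-\bs{\psi}^*)^{\otimes 2}$ is computed as the inner product of the vectorized Hessian with the Kronecker square of the increment.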
Observe that $$\label{nabla_lambda_v_mu} \begin{aligned} \frac{\nabla_{(\bs{\lambda}_{\bs{\mu}} \otimes \bs{\lambda}_{\bs{v}})^{\top}} g^*}{g^*} (\bs{\lambda}_{\bs{\mu}} \otimes \bs{\lambda}_{\bs{v}}) & = \alpha(1-\alpha) \sum_{\substack{1\leq i \leq d \\ 1 \leq j \leq k \leq d}} \frac{\nabla_{\mu_i \mu_j \mu_k} f^* }{f^*} \lambda_{\mu_i}\lambda_{v_{jk}} \\ & = \alpha(1-\alpha) \sum_{1 \leq i \leq j \leq k \leq d} \frac{\nabla_{\mu_i \mu_j \mu_k} f^* }{f^*} \sum_{(t_1,t_2,t_3) \in p_{12}(i,j,k)} \lambda_{\mu_{t_1}}\lambda_{v_{t_2t_3}}\\ & = \alpha(1-\alpha) 12 \bs{s}_{\bs{\mu v}}\t \bs{\lambda}_{\bs{\mu v}}, \end{aligned}$$ where $\sum_{(t_1,t_2,t_3) \in p_{12}(i,j,k)}$ denotes the sum over all distinct permutations of $(i,j,k)$ to $(t_1,t_2,t_3)$ with $t_2 \leq t_3$, and $$\begin{aligned} \frac{1}{2} \frac{\nabla_{(\bs{\lambda}_{\bs{v}}^{\otimes 2})^{\top}} g^*}{g^*} \bs{\lambda}_{\bs{v}}^{\otimes 2} & = \frac{\alpha(1-\alpha)}{2} \sum_{\substack{1\leq i \leq j \leq d \\ 1 \leq k \leq \ell \leq d}} \frac{\nabla_{\mu_i \mu_j \mu_k \mu_\ell} f^* }{f^*} \lambda_{v_{ij}}\lambda_{v_{k\ell}} \\ & = \frac{\alpha(1-\alpha)}{2} \sum_{1 \leq i \leq j \leq k \leq \ell \leq d} \frac{\nabla_{\mu_i \mu_j \mu_k \mu_\ell} f^* }{f^*} \sum_{(t_1,t_2,t_3,t_4) \in p_{22}(i,j,k,\ell)} \lambda_{v_{t_1t_2}}\lambda_{v_{t_3t_4}},\end{aligned}$$ where $\sum_{(t_1,t_2,t_3,t_4) \in p_{22}(i,j,k,\ell)}$ denotes the sum over all distinct permutations of $(i,j,k,\ell)$ to $(t_1,t_2,t_3,t_4)$ with $t_1 \leq t_2$ and $t_3 \leq t_4$. 
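As an aside, the index sets $p(\cdot)$, $p_{12}(\cdot)$, and $p_{22}(\cdot)$ are easy to enumerate programmatically, which is handy when checking the combinatorial factors above; a minimal sketch (the function names are ours):

```python
from itertools import permutations

def p_all(*idx):
    # all distinct permutations of the index tuple
    return sorted(set(permutations(idx)))

def p_12(i, j, k):
    # distinct permutations (t1, t2, t3) of (i, j, k) with t2 <= t3
    return [t for t in p_all(i, j, k) if t[1] <= t[2]]

def p_22(i, j, k, l):
    # distinct permutations (t1, t2, t3, t4) with t1 <= t2 and t3 <= t4
    return [t for t in p_all(i, j, k, l) if t[0] <= t[1] and t[2] <= t[3]]
```

For distinct indices, $|p(i,j,k,\ell)| = 24$ and $|p_{22}(i,j,k,\ell)| = 6$; repeated indices shrink the sets, which is why the sums above run over ordered tuples $1 \leq i \leq j \leq k \leq \ell \leq d$.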
From Lemma \[dv3\], the third term of $s(\bs{y};\bs{\eta},\bs{\lambda})$ is written as $$\begin{aligned} \frac{1}{4!} \frac{\nabla_{(\bs{\lambda}_{\bs{\mu}}^{\otimes 4})\t} g^* }{g^*} \bs{\lambda}_{\bs{\mu}}^{\otimes 4} & = \frac{\alpha(1-\alpha)}{4!} \sum_{1 \leq i \leq j \leq k \leq \ell \leq d} b(\alpha) \frac{\nabla_{\mu_i \mu_j \mu_k \mu_\ell} f^* }{f^*} \sum_{(t_1,t_2,t_3,t_4) \in p(i,j,k,\ell)} \lambda_{\mu_{t_1}}\lambda_{\mu_{t_2}}\lambda_{\mu_{t_3}}\lambda_{\mu_{t_4}},\end{aligned}$$ where $\sum_{(t_1,t_2,t_3,t_4) \in p(i,j,k,\ell)}$ denotes the sum over all distinct permutations of $(i,j,k,\ell)$ to $(t_1,t_2,t_3,t_4)$. Therefore, the sum of $(1/2) (\nabla_{(\bs{\lambda}_{\bs{v}}^{\otimes 2})\t} g^*/g^*) \bs{\lambda}_{\bs{v}}^{\otimes 2}$ and $(1/4!) (\nabla_{(\bs{\lambda}_{\bs{\mu}}^{\otimes 4})\t} g^* / g^*) \bs{\lambda}_{\bs{\mu}}^{\otimes 4}$ is $\alpha(1-\alpha) \bs{s}_{\bs{\mu}^4}\t[12\bs{\lambda}_{\bs{v}^2} + b(\alpha)\bs{\lambda}_{\bs{\mu}^4}]$, and hence $s(\bs{y};\bs{\eta},\bs{\lambda})=\bs{t}(\bs{\psi},\alpha)\t \bs{s}(\bs{x},\bs{z})$ holds. $\bs{s}(\bs{x},\bs{z})$ clearly satisfies Assumption \[assn\_expansion\](a)(b)(e) from Assumption \[A-taylor1\], the property of the normal density, (\[uniform\_lln\]), and (\[weak\_cgce\]). Therefore, the stated result holds if $r(\bs{y};\bs{\eta},\bs{\lambda})$ defined in (\[l\_expand\_2\])–(\[l\_expand\_4\]) satisfies Assumption \[assn\_expansion\](c)(d). We proceed to show that (\[l\_expand\_2\])–(\[l\_expand\_4\]) can be expressed as $\bs{\xi}(\bs{y};\bs{\vartheta}_2) O(|\bs{\psi}-\bs{\psi}^*||\bs{t}(\bs{\psi},\alpha)|)$ where $\sup_{\bs{\vartheta}_2 \in \mathcal{N}_\varepsilon}|\bs{\xi}(\bs{y};\bs{\vartheta}_2)| \leq \sup_{\bs{\vartheta}_2 \in \mathcal{N}_\varepsilon}|\bs{v}(\bs{y};\bs{\vartheta}_2)|$ with $\bs{v}(\bs{y};\bs{\vartheta}_2)$ defined in (\[v\_defn\]). Then, Assumption \[assn\_expansion\](c)(d) follows from the property of the normal density and (\[weak\_cgce\]). 
First, the first term on the right hand side of (\[l\_expand\_2\]) is written as $(\nabla_{\bs{\eta}^{\otimes 2}} g^*/g^*) O(|\dot{\bs{\eta}}|^2)$. Second, write the second term in (\[l\_expand\_2\]) as $(1/3!)\sum_{p=0}^3 \binom{3}{p} (\nabla_{(\bs{\eta}^{\otimes p} \otimes \bs{\lambda}^{\otimes (3-p)} )\t} g^*/g^*) (\dot{\bs{\eta}}^{\otimes p} \otimes \bs{\lambda}^{\otimes (3-p)} )$. The terms with $p \geq 1$ are written as $(\nabla_{\bs{\psi}^{\otimes 3}} g^*/g^*) O(|\dot{\bs{\eta}}|)O(|\bs{\lambda}|)$. Write the term with $p=0$ as $(1/3!)\sum_{q=0}^3 \binom{3}{q} A_q$, where $A_q:=(\nabla_{(\bs{\lambda}_{\bs v}^{\otimes q} \otimes \bs{\lambda}_{\bs \mu}^{\otimes (3-q)} )\t} g^*/g^*) (\bs{\lambda}_{\bs v}^{\otimes q} \otimes \bs{\lambda}_{\bs \mu}^{\otimes (3-q)} )$. We have $A_0=0$ because $\nabla_{\lambda_{\mu_i} \lambda_{\mu_j} \lambda_{\mu_k}} g^*=0$ from Lemma \[dv3\]. From a similar argument to (\[nabla\_lambda\_v\_mu\]), we obtain $A_1=\sum_{i=1}^d \lambda_{\mu_i} ( \nabla_{(\bs{\lambda}_{\bs v} \otimes \bs{\lambda}_{\bs \mu} )\t} \nabla_{\lambda_{\mu_i}}g^*/g^*) (\bs{\lambda}_{\bs v} \otimes \bs{\lambda}_{\bs \mu}) = (\nabla_{\bs{\lambda}^{\otimes 3}}g^* / g^*) O(|\bs{\lambda}||\bs{\lambda}_{\bs{\mu v}}|)$. A similar argument gives $A_2=(\nabla_{\bs{\lambda}^{\otimes 3}}g^*/g^*) O(|\bs{\lambda}||\bs{\lambda}_{\bs{\mu v}}|)$. For $A_3$, observe that, for any sequence $a_{ijk\ell m n}$, $$\begin{aligned} & \sum_{1 \leq i \leq j \leq d} \sum_{1 \leq k \leq \ell \leq d} \sum_{1 \leq m \leq n \leq d} a_{ijk\ell m n} \lambda_{v_{ij}}\lambda_{v_{k\ell}} \lambda_{v_{m n}} \\ & = \sum_{1 \leq m \leq n \leq d} \lambda_{v_{m n}} \sum_{1 \leq i \leq j \leq k \leq \ell \leq d} a_{ijk\ell mn} \left( 12 \sum_{(t_1,t_2,t_3,t_4) \in p_{22}(i,j,k,\ell)} \lambda_{v_{t_1t_2}}\lambda_{v_{t_3t_4}} \right.\\ & \left. 
\quad + b(\alpha) \sum_{(t_1,t_2,t_3,t_4) \in p(i,j,k,\ell)} \lambda_{\mu_{t_1}}\lambda_{\mu_{t_2}}\lambda_{\mu_{t_3}}\lambda_{\mu_{t_4}} \right) \\ & \quad -b(\alpha) \sum_{i=1}^d\sum_{j=1}^d\sum_{k=1}^d \lambda_{\mu_{i}}\lambda_{\mu_{j}}\lambda_{\mu_{k}} \sum_{1 \leq \ell \leq m \leq n \leq d} a_{ijk\ell m n} \sum_{(t_1,t_2,t_3) \in p_{12}(\ell,m,n)} \lambda_{\mu_{t_1}}\lambda_{v_{t_2t_3}}.\end{aligned}$$ Using this result with $a_{ijk\ell mn}=\nabla_{\lambda_{v_{i j}} \lambda_{v_{k \ell}} \lambda_{v_{m n}}}g^*/g^*$ gives $A_3=(\nabla_{\bs{\psi}^{\otimes 3}} g^*/g^*) [O(|\bs{\lambda}||12\bs{\lambda}_{\bs{v}^2}+ b(\alpha) \bs{\lambda}_{\bs{\mu}^4}|) + O(|\bs{\lambda}||\bs{\lambda}_{\bs{\mu v}}|)]$. Therefore, the terms on the right hand side of (\[l\_expand\_2\]) are $\bs{v}(\bs{y};\bs{\vartheta}_2) O(|\bs{\psi}-\bs{\psi}^*||\bs{t}(\bs{\psi},\alpha)|)$. We proceed to bound (\[l\_expand\_3\]). The terms in (\[l\_expand\_3\]) with $p \geq 1$ are written as $(\nabla_{\bs{\psi}^{\otimes 4}}g^* / g^*) [ O(|\bs{\lambda}||\bs{\lambda}_{\bs{\mu v}}|) + O(|\bs{\lambda}||\dot{\bs{\eta}}|)]$ because they contain either $\sum_{1 \leq i \leq j \leq d} \lambda_{v_{ij}} (\bs{\lambda}_{\bs v} \otimes \bs{\lambda}_{\bs \mu})$ or $\bs{\lambda}_{\bs{\mu}} \otimes \dot{\bs{\eta}}$. The term with $p=0$ is written as $(\nabla_{\bs{\psi}^{\otimes 4}}g^* / g^*) [O(|\bs{\lambda}||\bs{\lambda}_{\bs{\mu}^4}|) + O(|\bs{\lambda}||\bs{\lambda}_{\bs{\mu v}}|)]$ from a similar argument to bound $A_3$. It remains to bound (\[l\_expand\_4\]). 
For the first term in (\[l\_expand\_4\]), observe that, for any sequence $a_{ijk\ell m}$, $$\begin{aligned} & \sum_{i=1}^d\sum_{j=1}^d\sum_{k=1}^d \sum_{\ell=1}^d \sum_{m=1}^d a_{ijk\ell m} \lambda_{\mu_{i}} \lambda_{\mu_{j}} \lambda_{\mu_{k}} \lambda_{\mu_{\ell}} \lambda_{\mu_{m}} \\ & = \frac{1}{b(\alpha)} \sum_{m=1}^d \lambda_{\mu_{m}} \sum_{1 \leq i \leq j \leq k \leq \ell \leq d} a_{ijk\ell m} \left(b(\alpha) \sum_{(t_1,t_2,t_3,t_4) \in p(i,j,k,\ell)} \lambda_{\mu_{t_1}}\lambda_{\mu_{t_2}}\lambda_{\mu_{t_3}}\lambda_{\mu_{t_4}} \right.\\ & \left. \quad \quad + 12 \sum_{(t_1,t_2,t_3,t_4) \in p_{22}(i,j,k,\ell)} \lambda_{v_{t_1t_2}}\lambda_{v_{t_3t_4}} \right) \\ & \quad -\frac{b(\alpha)}{12} \sum_{1 \leq i \leq j \leq d} \lambda_{v_{ij}} \sum_{1 \leq k \leq \ell \leq m \leq d}\ a_{ijk\ell m} \sum_{(t_1,t_2,t_3) \in p_{12}(k,\ell,m)} \lambda_{\mu_{t_1}}\lambda_{v_{t_2t_3}}.\end{aligned}$$ Therefore, the first term in (\[l\_expand\_4\]) can be written as $(\nabla_{\bs{\psi}^{\otimes 5}}\overline{g}/g^*) [O(|\bs{\lambda}||12\bs{\lambda}_{\bs{v}^2}+ b(\alpha) \bs{\lambda}_{\bs{\mu}^4}|) + O(|\bs{\lambda}||\bs{\lambda}_{\bs{\mu v}}|)]$. The second term in (\[l\_expand\_4\]) is written as $(\nabla_{\bs{\psi}^{\otimes 5}}\overline{g}/g^*) O(|\bs{\lambda}||\bs{t}(\bs{\psi},\alpha)|)$ from the same argument as (\[l\_expand\_3\]), and the stated result follows. The following lemmas establish quadratic approximations of the log-likelihood function in the case of testing $H_0:M=1$ against $H_A:M=2$ in the homoscedastic case. Lemma \[P-quadratic-homo-1\] considers the case $|\bs{\mu}_{1}- \bs{\mu}_{2}|\leq \zeta$, and Lemma \[P-quadratic-homo-2\] considers the case $|\bs{\mu}_{1}- \bs{\mu}_{2}| \geq \zeta$. \[P-quadratic-homo-1\] Suppose that Assumptions \[assn\_consis\] and \[A-taylor1\] hold and $\bs{X}$ given $\bs{Z}$ has the density $f(\bs{x}|\bs{z}; \bs{\gamma},{\bs{\mu}},\bs{\Sigma})$ defined in (\[normal\_density\]). 
Let $L_n(\bs{\psi},\alpha): = \sum_{i = 1}^n \log g(\bs{X}_i|\bs{Z}_i;\bs{\psi},\alpha)$ with $g(\bs{x}|\bs{z};\bs{\psi},\alpha)$ defined in (\[loglike-homo\]). For $\alpha \in (0,1)$, define $\bs{s}(\bs{x},\bs{z})$ and $\bs{t}(\bs{\psi},\alpha)$ as in (\[score\_defn-homo\]) and (\[tpsi\_defn-homo\]), and let $\mathcal{N}_{\varepsilon} := \{ \bs{\vartheta}_2 \in \Theta_{\bs{\vartheta}_2} : |\bs{t}(\bs{\psi},\alpha)|< \varepsilon\}$ and $\bs{\mathcal{I}}:=E[\bs{s}(\bs{X},\bs{Z})\bs{s}(\bs{X},\bs{Z})\t]$. Then, for any $\delta>0$ and $\zeta>0$, we have (a) $\sup_{\alpha \in [0,1]}\sup_{\bs{\vartheta}_2 \in A^1_{n \varepsilon}(\delta,\zeta) }|\bs{t}(\bs{\psi},\alpha)| = O_{p\varepsilon}(n^{-1/2})$; $$(b)\ \sup_{\alpha \in [0,1]}\sup_{\bs{\vartheta}_2 \in A^1_{n \varepsilon}(\delta,\zeta)}\left|L_n(\bs{\psi},\alpha) - L_n(\bs{\psi}^*,\alpha) - \sqrt{n} \bs{t}(\bs{\psi},\alpha)\t \nu_n(\bs{s}(\bs{x},\bs{z})) + n \bs{t}(\bs{\psi},\alpha)\t \bs{\mathcal{I}} \bs{t}(\bs{\psi},\alpha)/2 \right| = o_{p\varepsilon}(1),$$ where $A^1_{n \varepsilon}(\delta,\zeta) := \{\bs{\vartheta}_2 \in \mathcal{N}_\varepsilon\cap \Theta_{{\bs{\vartheta}}_2,\zeta}^1: L_n(\bs{\psi},\alpha) - L_n(\bs{\psi}^*,\alpha) \geq -\delta \}$. The proof is similar to that of Lemma \[P-quadratic\] but using $g(\bs{x}|\bs{z};\bs{\psi},\alpha)$ defined in (\[loglike-homo\]) in place of (\[loglike\]). We expand $g(\bs{x}|\bs{z};\bs{\psi},\alpha)/g(\bs{x}|\bs{z};\bs{\psi}^*,\alpha)-1$ five times with respect to $\bs{\psi}$ and show that the expansion satisfies Assumption \[assn\_expansion\]. Observe that $\bs{t}(\bs{\psi},\alpha)$ defined in (\[tpsi\_defn-homo\]) satisfies $\bs{t}(\bs{\psi},\alpha) = 0$ if and only if $\bs{\psi}=\bs{\psi}^*$ because $\bs{\lambda}=0$ only if the $(i,i,i)$th element of $\bs{\lambda}_{\bs{\mu}^3}$ or the $(i,i,i,i)$th element of $\bs{\lambda}_{\bs{\mu}^4}$ is 0 for all $1 \leq i \leq d$. 
Following the argument in the proof of Lemma \[P-quadratic\] after (\[v\_defn\]), we may show that (\[uniform\_lln\])–(\[weak\_cgce\]) hold for $g(\cdot)$ defined in (\[loglike-homo\]). Define $g^*$, $\nabla g^*$, and $\nabla\overline{g}$ as in the proof of Lemma \[P-quadratic\] but using $g(\bs{x}|\bs{z};\bs{\psi},\alpha)$ defined in (\[loglike-homo\]) in place of (\[loglike\]). Expanding $\ell(\bs{y};\bs{\psi},\alpha):=g(\bs{x}|\bs{z};\bs{\psi},\alpha)/g(\bs{x}|\bs{z};\bs{\psi}^*,\alpha)$ five times around $\bs{\psi}^*$ while fixing $\alpha$ and using Lemma \[dv3-homo\] in \[section:auxiliary\], we can write $\ell(\bs{y};\bs{\psi},\alpha)-1$ as $$\ell(\bs{y};\bs{\psi},\alpha) -1 = s(\bs{y};\bs{\eta},\bs{\lambda}) +r(\bs{y};\bs{\eta},\bs{\lambda}), $$ where $$s(\bs{y};\bs{\eta},\bs{\lambda}) := \frac{\nabla_{\bs{\eta}\t}g^* }{g^*}\dot{\bs{\eta}} + \frac{1}{3!} \frac{\nabla_{(\bs{\lambda}^{\otimes 3})\t} g^* }{g^*} \bs{\lambda}^{\otimes 3} + \frac{1}{4!} \frac{\nabla_{(\bs{\lambda}^{\otimes 4})\t} g^* }{g^*} \bs{\lambda}^{\otimes 4},$$ with $\dot{\bs{\eta}}:=\bs{\eta}-\bs{\eta}^*$ and $$\begin{aligned} r(\bs{y};\bs{\eta},\bs{\lambda}) &: = \frac{1}{2!} \frac{\nabla_{({\bs{\eta}}^{\otimes 2})\t} g^* }{g^*}{\dot{\bs{\eta}}}^{\otimes 2}+ \frac{1}{3!} \frac{\nabla_{({\bs{\eta}}^{\otimes 3})\t} g^* }{g^*}{\dot{\bs{\eta}}}^{\otimes 3} \label{l_expand_2-homo} \\ & \quad + \sum_{p=0}^{3} \frac{1}{p!(4-p)!}\frac{\nabla_{({\bs{\eta}}^{\otimes (4-p)} \otimes \bs{\lambda}^{\otimes p} )\t} g^* }{g^*} (\dot{\bs{\eta}}^{\otimes (4-p)} \otimes \bs{\lambda}^{\otimes p} ) \label{l_expand_3-homo} \\ &\quad + \frac{1}{5!} \frac{\nabla_{(\bs{\lambda}^{\otimes 5})\t} \overline{g} }{g^*} \bs{\lambda}^{\otimes 5} + \sum_{p=0}^{4} \frac{1}{p!(5-p)!}\frac{\nabla_{({\bs{\eta}}^{\otimes (5-p)} \otimes \bs{\lambda}^{\otimes p} )\t} \overline{g} }{g^*} (\dot{\bs{\eta}}^{\otimes (5-p)} \otimes \bs{\lambda}^{\otimes p} ) \label{l_expand_4-homo}.\end{aligned}$$ We first show 
$s(\bs{y};\bs{\eta},\bs{\lambda}) = \bs{t}(\bs{\psi},\alpha)\t \bs{s}(\bs{x},\bs{z})$ with $\bs{s}(\bs{x},\bs{z})$ and $\bs{t}(\bs{\psi},\alpha)$ defined in (\[score\_defn-homo\]) and (\[tpsi\_defn-homo\]). The first term of $s(\bs{y};\bs{\eta},\bs{\lambda})$ is $(\nabla_{(\bs{\gamma}\t,\bs{\mu}\t,\bs{v}\t)\t}f^*/f^*)\dot{\bs{\eta}}$. Using Lemma \[dv3-homo\], the second and third terms of $s(\bs{y};\bs{\eta},\bs{\lambda})$ are written as $$\begin{aligned} & \frac{\alpha(1-\alpha) (1-2\alpha)}{3!} \sum_{1\leq i\leq j\leq k\leq d} \frac{\nabla_{\mu_i \mu_j \mu_k} f^* }{f^*} \sum_{(t_1,t_2,t_3)\in p(i,j,k)} \lambda_{t_1}\lambda_{t_2}\lambda_{t_3} \quad\text{and}\\ & \frac{ \alpha(1-\alpha)(1-6\alpha+6\alpha^2)}{4!} \sum_{1\leq i\leq j\leq k\leq \ell\leq d} \frac{\nabla_{\mu_i \mu_j \mu_k\mu_{\ell}} f^* }{f^*} \sum_{(t_1,t_2,t_3,t_4)\in p(i,j,k,\ell)} \lambda_{t_1}\lambda_{t_2}\lambda_{t_3}\lambda_{t_4},\end{aligned}$$ where $\sum_{(t_1,t_2,t_3) \in p(i,j,k)}$ denotes the sum over all distinct permutations of $(i,j,k)$ to $(t_1,t_2,t_3)$ while $\sum_{(t_1,t_2,t_3,t_4) \in p(i,j,k,\ell)}$ denotes the sum over all distinct permutations of $(i,j,k,\ell)$ to $(t_1,t_2,t_3,t_4)$. Combining these results gives $s(\bs{y};\bs{\eta},\bs{\lambda}) = \bs{s}_{\bs{\eta}}\t \dot{\bs{\eta}} + \alpha(1-\alpha) (1-2\alpha) \bs{s}_{\bs{\mu^3}}\t \bs{\lambda}_{\bs{\mu^3}} + \alpha(1-\alpha)(1-6\alpha+6\alpha^2)\bs{s}_{\bs{\mu}^4}\t \bs{\lambda}_{\bs{\mu}^4}= \bs{t}(\bs{\psi},\alpha)\t \bs{s}(\bs{x},\bs{z})$, where $(\bs{s}_{\bs{\eta}},\bs{s}_{\bs{\mu^3}}, \bs{s}_{\bs{\mu}^4})$ satisfies Assumption \[assn\_expansion\](a)(b)(e) from Assumption \[A-taylor1\], the property of the normal density, (\[uniform\_lln\]), and (\[weak\_cgce\]). 
The stated result holds if $r(\bs{y};\bs{\eta},\bs{\lambda})$ can be written as $\bs{\xi}(\bs{y};\bs{\vartheta}_2) O(|\bs{\psi}-\bs{\psi}^*||\bs{t}(\bs{\psi},\alpha)|)$ where $\sup_{\bs{\vartheta}_2 \in \mathcal{N}_\varepsilon}|\bs{\xi}(\bs{y};\bs{\vartheta}_2)| \leq \sup_{\bs{\vartheta}_2 \in \mathcal{N}_\varepsilon}|\bs{v}(\bs{y};\bs{\vartheta}_2)|$ with $\bs{v}(\bs{y};\bs{\vartheta}_2)$ defined in (\[v\_defn\]) but using $g(\bs{x}|\bs{z};\bs{\psi},\alpha)$ defined in (\[loglike-homo\]), because then $r(\bs{y};\bs{\eta},\bs{\lambda})$ satisfies Assumption \[assn\_expansion\](c)(d) from (\[uniform\_lln\]) and (\[weak\_cgce\]). First, the terms on the right hand side of (\[l\_expand\_2-homo\]) are written as $(\nabla_{\bs{\eta}^{\otimes 2}} g^*/g^*) O(|\dot{\bs{\eta}}|^2)+(\nabla_{\bs{\eta}^{\otimes 3}} g^*/g^*) O(|\dot{\bs{\eta}}|^2)$. Second, the terms in (\[l\_expand\_3-homo\]) are written as $(\nabla_{\bs{\psi}^{\otimes 4}} g^*/g^*) [O(|\dot{\bs{\eta}}||\bs{\lambda}|)+ O(|\dot{\bs{\eta}}|^2)]$. Finally, the terms in (\[l\_expand\_4-homo\]) are written as $(\nabla_{\bs{\psi}^{\otimes 5}} \overline{g} /g^*) O(|\bs{\lambda}_{\bs{\mu}^4}||\bs{\lambda}|)+ (\nabla_{\bs{\psi}^{\otimes 5}} \overline{g} /g^*)[O(|\dot{\bs{\eta}}||\bs{\lambda}|)+ O(|\dot{\bs{\eta}}|^2)]$, and the stated result follows. \[P-quadratic-homo-2\] Suppose that Assumptions \[assn\_consis\] and \[A-taylor1\] hold and $\bs{X}$ given $\bs{Z}$ has the density $f(\bs{x}|\bs{z}; \bs{\gamma},{\bs{\mu}},\bs{\Sigma})$ defined in (\[normal\_density\]). Let $L_n(\bs{\phi},\bs{\lambda}):= \sum_{i=1}^n h(\bs{X}_i|\bs{Z}_i;\bs{\phi},\bs{\lambda})$ with $h(\bs{x}|\bs{z};\bs{\phi},\bs{\lambda})$ defined in (\[loglike-homo-2\]). 
Define $\bs{s}(\bs{x},\bs{z};\bs{\lambda})$ and $\bs{t}(\bs{\phi},\bs{\lambda})$ as in (\[score\_defn-homo-2\]) and (\[tpsi\_defn-homo-2\]), and let $\mathcal{N}_{\varepsilon} := \{ \bs{\vartheta}_2 \in \Theta_{\bs{\vartheta}_2} : |\bs{t}(\bs{\phi},\bs{\lambda})|< \varepsilon\}$ and $\bs{\mathcal{I}}(\bs{\lambda}):=E[\bs{s}(\bs{X},\bs{Z};\bs{\lambda})\bs{s}(\bs{X},\bs{Z};\bs{\lambda})\t]$. Then, for any $|\bs{\lambda}|\geq \zeta> 0$ and any $\delta>0$, we have (a) $\sup_{\bs{\vartheta} \in A_{n\varepsilon}^2(\delta,\zeta)} |\bs{t}(\bs{\phi},\bs{\lambda}) | = O_{p\varepsilon}(n^{-1/2})$; $$(b)\ \sup_{\bs{\vartheta} \in A_{n\varepsilon}^2(\delta,\zeta)}\left|L_n(\bs{\phi},\bs{\lambda}) - L_n(\bs{\phi^*},\bs{\lambda}) - \sqrt{n} \bs{t}(\bs{\phi},\bs\lambda) \t \nu_n(\bs{s}(\bs{x},\bs{z};\bs{\lambda})) + n \bs{t}(\bs{\phi},\bs\lambda) \t \bs{\mathcal{I}}(\bs{\lambda}) \bs{t}(\bs{\phi},\bs\lambda)/2 \right| = o_{p\varepsilon}(1),$$ where $A_{n\varepsilon}^2(\delta,\zeta) := \{\bs{\vartheta} \in \mathcal{N}_\varepsilon\cap \Theta_{{\bs{\vartheta}}_2,\zeta}^2: L_n(\bs{\phi},\bs{\lambda}) - L_n(\bs{\phi^*},\bs{\lambda}) \geq -\delta \}$. The proof is similar to that of Lemma \[P-quadratic\]. Observe that $\bs{t}(\bs{\phi},\bs{\lambda})$ defined in (\[tpsi\_defn-homo-2\]) satisfies $\bs{t}(\bs{\phi},\bs{\lambda}) = \bs{0}$ if and only if $\bs{\phi}=\bs{\phi}^* = ((\bs{\eta}^*)\t,0)\t$ because $|\bs{\lambda}| \geq \zeta$. Let $\bs{y}:=(\bs{x}\t,\bs{z}\t)\t$, and write $h(\bs{x}|\bs{z};\bs{\phi},\bs{\lambda})$ as $h(\bs{y};\bs{\phi},\bs{\lambda})$. Let $\ell(\bs{y},\bs{\phi},\bs{\lambda}) := h(\bs{y};\bs{\phi},\bs{\lambda})/h(\bs{y};\bs{\phi}^*,\bs{\lambda})$. 
Expanding $\ell(\bs{y};\bs{\phi},\bs{\lambda})-1$ twice around $\bs{\phi} = \bs{\phi}^*$ while fixing the value of $\bs{\lambda}$ and using $h(\bs{y};\bs{\phi}^*,\bs{\lambda}) = f_v^*$ and $\nabla_{\eta}h(\bs{y};\bs{\phi}^*,\bs{\lambda}) = \nabla_{\eta} f_v^*$ gives $$\ell(\bs{y};\bs{\phi},\bs{\lambda})-1 = s(\bs{y};\bs{\phi},\bs{\lambda}) + r(\bs{y};\bs{\phi},\bs{\lambda}),$$ where $$s(\bs{y};\bs{\phi},\bs{\lambda}) := \frac{\nabla_{\alpha} h(\bs{y};\bs{\phi}^*,\bs{\lambda}) }{f_v^*} \alpha+ \frac{\nabla_{\bs{\eta}\t} f_v^*}{f_v^*}\dot{\bs{\eta}},$$ with $\dot{\bs{\eta}}:=\bs{\eta}-\bs{\eta}^*$ and, for some $\overline{\bs{\phi}}\in (\bs{\phi},\bs{\phi}^*)$, $$\begin{aligned} r(\bs{y};\bs{\phi},\bs{\lambda}) & = \frac{1}{2}\frac{\nabla_{\alpha^2} h(\bs{y};\overline{\bs{\phi}},\bs{\lambda})}{f_v^*}\alpha^2 + \frac{1}{2}\frac{\nabla_{(\bs{\eta}^{\otimes 2})\t} h(\bs{y};\overline{\bs{\phi}},\bs{\lambda})}{f_v^*} \dot{\bs{\eta}}^{\otimes 2}+ \frac{\nabla_{\alpha\bs{\eta}\t} h(\bs{y};\overline{\bs{\phi}},\bs{\lambda})}{f_v^*} \alpha\dot{\bs{\eta}}.\end{aligned}$$ Let $f_v^*(\bs{\lambda}):= f_v(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}^*+ \bs{\lambda}, \bs{v}^*)$, so that $f_v^*(\bs{0})=f_v^*$. Define $$\label{v_defn-homo-2} \bs{v}(\bs{y};\bs{\vartheta}_2) :=\left(\bs{s}_{\bs{\phi}}(\bs{y};\bs{\phi},\bs{\lambda})\t ,\bs{s}_{\bs{\eta}}\t, s_{\alpha}(\bs{\lambda}) \right)\t,$$ where $\bs{s}_{\bs{\phi}}(\bs{y};\bs{\phi},\bs{\lambda}) : = (\nabla_{((\bs{\phi}^{\otimes 2})\t, (\bs{\phi}^{\otimes 3})\t)} s(\bs{y};\bs{\phi},\bs{\lambda}))\t / f_v^*$. 
In view of (\[score\_defn-homo-2\])–(\[tpsi\_defn-homo-2\]) and the argument in the proof of Lemma \[P-quadratic\], the stated result holds if $$\label{gr_condition} \begin{aligned} \text{(A)} \quad & \nabla_{\alpha} h(\bs{y};\bs{\phi}^*,\bs{\lambda}) = f_v^*(\bs{\lambda}) - f_v^* - \nabla_{ \bs{\mu}\t} f_v^* \bs{\lambda} - \nabla_{ \bs{v}\t} f_v^* \bs{\lambda}_{\bs{\mu}^2},\\ \text{(B)} \quad & r(\bs{y};\bs{\phi},\bs{\lambda}) = \bs{\xi}(\bs{y};\bs{\vartheta}_2) O(|\bs{\phi}-\bs{\phi}^*||\bs{t}(\bs{\phi},\bs{\lambda})|), \\ \text{(C)} \quad & \bs{v}(\bs{y};\bs{\vartheta}_2)\text{ satisfies (\ref{uniform_lln})--(\ref{weak_cgce})}, \end{aligned}$$ where $\sup_{\bs{\vartheta}_2}|\bs{\xi}(\bs{y};\bs{\vartheta}_2)| \leq \sup_{\bs{\vartheta}_2}|\bs{v}(\bs{y};\bs{\vartheta}_2)|$, and the domain of $\bs{\vartheta}_2$ is such that $\bs{\phi} \in \Theta_{\bs{\eta}} \times [0, 3/4]$ and $\bs{\lambda} \in \Theta_{\bs{\lambda}}$. We proceed to show (A)–(C) in (\[gr\_condition\]). (A) is shown in Lemma \[s\_der\_alpha\] in \[section:auxiliary\], and (B) follows from Lemma \[s\_der\_alpha\] and the definition of $\bs{t}(\bs{\phi},\bs{\lambda})$. For (C), $\bs{s}_{\bs{\phi}}(\bs{y};\bs{\phi},\bs{\lambda})$ and $\bs{s}_{\bs{\eta}}$ clearly satisfy (\[uniform\_lln\])–(\[weak\_cgce\]). The proof is complete if we show that $s_{\alpha}(\bs{\lambda})$ satisfies (\[uniform\_lln\])–(\[weak\_cgce\]). We first extend the domain of $s_{\alpha}(\bs{\lambda})$ so that it is well-defined when $|\bs{\lambda}|=0$. Then, $s_{\alpha}(\bs{\lambda})$ satisfies (\[uniform\_lln\])–(\[weak\_cgce\]) if $s_{\alpha}(\bs{\lambda})$ is Lipschitz continuous in $\bs{\lambda}$ in this extended domain and the Lipschitz constant is in $L^{2+\delta}$. 
Write $\bs{\lambda}$ in the $d$-spherical coordinates as $\bs{\lambda} = r \widetilde{\bs{\lambda}}(\bs{\theta})$, where $r$ is scalar with $r \geq 0$, $\bs{\theta} := (\theta_1,\ldots,\theta_{d-1})\t \in \Theta := [0,\pi)^{d-2}\times [0,2\pi)$, and $\widetilde{\bs{\lambda}}(\cdot)$ is a function from $\mathbb{R}^{d-1}$ to $\mathbb{R}^d$ whose elements are products of $\sin(\theta_j)$’s and $\cos(\theta_j)$’s such that $|\widetilde{\bs{\lambda}}(\bs{\theta})|=1$ (e.g., $\widetilde{\bs{\lambda}}(\theta) = (\cos(\theta),\sin(\theta))\t$ when $d=2$). For $r>0$ and $\bs{\theta} \in \Theta$, define $s_\alpha(r,\bs{\theta}) := s_\alpha(r \widetilde{\bs{\lambda}}(\bs{\theta}))$, and write $s_\alpha(r,\bs{\theta})$ as $${s}_{\alpha}(r,\bs{\theta}) := \frac{f_v^*(r \widetilde{\bs{\lambda}}(\bs{\theta})) -f_v^*(\bs{0})-\nabla_{\bs{\mu}\t} f^*_v(\bs{0}) r \widetilde{\bs{\lambda}}(\bs{\theta}) - \nabla_{(\bs{\mu}^{\otimes 2})\t} f^*_v(\bs{0}) r^2 \widetilde{\bs{\lambda}}(\bs{\theta})^{\otimes 2}/2 }{ r^3 f^*_v(\bs{0})},$$ where $f_v^*(\bs{\lambda}):=f_v\left(\bs{x}|\bs{z};\bs{\gamma}^*, \bs\mu^*+ \bs{\lambda}, \bs{v}^*\right)$. Define $s_\alpha(0,\bs{\theta}) = \nabla_{(\bs{\mu}^{\otimes 3})\t} f^*_v(\bs{0}) \widetilde{\bs{\lambda}}(\bs{\theta})^{\otimes 3}/(3!f^*_v(\bs{0}))$, then ${s}_{\alpha}(r,\bs{\theta})$ converges to $s_\alpha(0,\bs{\theta})$ as $r \to 0$, and $s_\alpha(r,\bs{\theta})$ is continuous in $r \geq 0$ and $\bs{\theta} \in \Theta$. We show that $s_\alpha(r,\bs{\theta})$ is Lipschitz continuous in $(r,\bs{\theta})\in [0,M]\times \Theta$. Let $\bs{\Lambda}(\bs{\theta}):=\nabla_{\bs{\theta}\t}\widetilde{\bs{\lambda}}(\bs{\theta})$ denote the $d \times (d-1)$ Jacobian matrix of $\widetilde{\bs{\lambda}}(\bs{\theta})$. 
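The $d$-spherical parametrization $\bs{\lambda} = r\widetilde{\bs{\lambda}}(\bs{\theta})$ can be made concrete as follows; this is one standard hyperspherical construction (our choice for illustration), reducing to $(\cos(\theta),\sin(\theta))\t$ when $d=2$:

```python
import numpy as np

def lam_tilde(theta):
    # Unit vector in R^{len(theta)+1} from spherical angles;
    # |lam_tilde(theta)| = 1 by construction.
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    v = np.ones(theta.size + 1)
    for j, th in enumerate(theta):
        v[j] *= np.cos(th)       # cos enters coordinate j
        v[j + 1:] *= np.sin(th)  # sin propagates to later coordinates
    return v
```

Any $\bs{\lambda} \neq \bs{0}$ is then recovered as `r * lam_tilde(theta)` with $r = |\bs{\lambda}|$, which is exactly the change of variables used in the Lipschitz argument below.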
It follows from a direct calculation that $|\nabla_r {s}_{\alpha}(r,\bs{\theta})| \leq C \sup_{\bs{\lambda} }|\nabla_{(\bs{\mu}^{\otimes 4})\t} f^*_v(\bs{\lambda})|| \bs{\lambda}^{\otimes 4}|/f^*_v(\bs{0})$ and $|\nabla_{\bs{\theta}} {s}_{\alpha}(r,\bs{\theta})| \leq C \sup_{\bs{\theta}}|\bs{\Lambda}(\bs{\theta})|\sup_{\bs{\lambda} }|\nabla_{(\bs{\mu}^{\otimes 3})\t} f^*_v(\bs{\lambda})|| \bs{\lambda}^{\otimes 2}|/f^*_v(\bs{0})$, which are in $L^{2+\delta}$ from $\sup_{\bs{\theta}}|\bs{\Lambda}(\bs{\theta})| < \infty$ and the property of the normal density. Consequently, $s_\alpha(r,\bs{\theta})$ is Lipschitz continuous in $(r,\bs{\theta})\in [0,M]\times \Theta$ and the Lipschitz constant is in $L^{2+\delta}$. Therefore, (C) of (\[gr\_condition\]) holds, and the stated result is proven. Quadratic expansion under singular Fisher information matrix {#section:expansion} ============================================================ This appendix derives a Le Cam-type differentiable in quadratic mean (DQM) expansion that is useful for proving Lemmas \[P-quadratic\]–\[P-quadratic-homo-2\] in \[section:quadratic\]. @liushao03as develop a DQM expansion under the loss of identifiability in terms of the generalized score function. Lemmas \[Ln\_thm1\] and \[Ln\_thm2\] modify @liushao03as to fit our context of parametric models where the derivatives of the density of different orders are linearly dependent. @kasaharashimotsu18markov derive a similar expansion that accommodates dependent and heterogeneous $Y_i$’s under stronger assumptions than ours. Lemmas \[Ln\_thm1\] and \[Ln\_thm2\] may be viewed as a specialization of @kasaharashimotsu18markov to the random sampling case. Let $\bs{\vartheta}$ be a parameter vector, and let $g(\bs{y};\bs{\vartheta})$ denote the density of $\bs{Y}$. Let $L_n(\bs{\vartheta}) := \sum_{i=1}^n \log g(\bs{Y}_i;\bs{\vartheta})$ denote the log-likelihood function. 
Split $\bs{\vartheta}$ as $\bs{\vartheta} = (\bs{\psi}\t,\bs{\pi}\t)\t$, and write $L_n(\bs{\vartheta}) = L_n(\bs{\psi},\bs{\pi})$. $\bs{\pi}$ corresponds to the part of $\bs{\vartheta}$ that is not identified under the null. Denote the true parameter value of $\bs{\psi}$ by $\bs{\psi}^*$, and denote the set of $(\bs{\psi},\bs{\pi})$ corresponding to the null hypothesis by $\Gamma^*= \{(\bs{\psi},\bs{\pi})\in\Theta: \bs{\psi}=\bs{\psi}^*\}$. Let $\bs{t}(\bs{\vartheta})$ be a continuous function of $\bs{\vartheta}$ such that $\bs{t}(\bs{\vartheta})=0$ if and only if $\bs{\psi}=\bs{\psi}^*$. For $\varepsilon>0$, define a neighborhood of $\Gamma^*$ by $$\mathcal{N}_{\varepsilon} := \{ \bs{\vartheta} \in \Theta: |\bs{t}(\bs{\vartheta})|< \varepsilon\}.$$ We establish a general quadratic expansion that expresses $L_n(\bs{\psi},\bs{\pi})-L_n(\bs{\psi}^*,\bs{\pi})$ as a quadratic function of $\bs{t}(\bs{\vartheta})$ for $\bs{\vartheta}\in \mathcal{N}_{\varepsilon}$. Denote the density ratio by $$\label{density_ratio} \ell(\bs{y};\bs{\vartheta}) := \frac{g (\bs{y}; \bs{\psi},\bs{\pi})}{ g (\bs{y}; \bs{\psi}^*,\bs{\pi})},$$ so that $L_n(\bs{\psi},\bs{\pi}) - L_n(\bs{\psi}^*,\bs{\pi}) = \sum_{i=1}^n\log \ell(\bs{Y}_i;\bs{\vartheta})$. We assume that $\ell(\bs{y};\bs{\vartheta})$ can be expanded around $\ell(\bs{y};\bs{\vartheta}^*)=1$ as follows. 
\[assn\_expansion\] $\ell(\bs{y};\bs{\vartheta}) -1$ admits an expansion $$\ell(\bs{y};\bs{\vartheta}) -1 = \bs{t}(\bs{\vartheta})\t \bs{s}(\bs{y};\bs{\pi}) + r(\bs{y};\bs{\vartheta}),$$ where $\bs{s}(\bs{y};\bs{\pi})$ and $r(\bs{y};\bs{\vartheta})$ satisfy, for some $C \in (0,\infty)$ and $\varepsilon >0$, (a) $E\sup_{\bs{\pi} \in \Theta_{\bs{\pi}}} \left|\bs{s}(\bs{Y};\bs{\pi})\right|^2 < C$, (b) $\sup_{\bs{\pi} \in \Theta_{\bs{\pi}}}| P_n(\bs{s}(\bs{y};\bs{\pi})\bs{s}(\bs{y};\bs{\pi})\t) - \bs{\mathcal{I}}_{\bs{\pi}}| = o_p(1)$ with $0<\inf_{\bs{\pi}\in \Theta_{\bs{\pi}}} \lambda_{\min}(\bs{\mathcal{I}}_{\bs{\pi}}) \leq \sup_{\bs{\pi}\in \Theta_{\bs{\pi}} } \lambda_{\max}(\bs{\mathcal{I}}_{\bs{\pi}})<C$, (c) $E[\sup_{\bs{\vartheta} \in \mathcal{N}_\varepsilon} |r(\bs{Y};\bs{\vartheta})/(|\bs{t}(\bs{\vartheta})||\bs{\psi}-\bs{\psi}^*|)|^2] < \infty$, (d) $\sup_{\bs{\vartheta} \in \mathcal{N}_\varepsilon} [ \nu_n(r(\bs{y};\bs{\vartheta}))/(|\bs{t}(\bs{\vartheta})||\bs{\psi}-\bs{\psi}^*|)] = O_p(1)$, (e) $\sup_{\bs{\pi} \in \Theta_{\bs{\pi}}}|\nu_n(\bs{s}(\bs{y};\bs{\pi}))|=O_p(1)$. We first establish an expansion of $L_n(\bs{\psi},\bs{\pi})$ in a neighborhood $\mathcal{N}_{c/\sqrt{n}}$ that holds for any $c>0$. \[Ln\_thm1\] Suppose that Assumption \[assn\_expansion\](a)–(d) holds. Then, for all $c>0$, $$\sup_{\bs{\vartheta} \in \mathcal{N}_{c/\sqrt{n}}} \left| L_n(\bs{\psi},\bs{\pi}) - L_n(\bs{\psi}^*,\bs{\pi}) - \sqrt{n}\bs{t}(\bs{\vartheta})\t \nu_n (\bs{s}(\bs{y};\bs{\pi})) + n \bs{t}(\bs{\vartheta})\t \bs{\mathcal{I}}_{\bs{\pi}} \bs{t}(\bs{\vartheta})/2 \right| = o_p(1).$$ Define $h(\bs{y},\bs{\vartheta}) := \sqrt{\ell(\bs{y};\bs{\vartheta})}-1$. 
By using the Taylor expansion of $2 \log(1+x) = 2x - x^2(1+o(1))$ for small $x$, we have, uniformly for $\bs{\vartheta} \in \mathcal{N}_{c/\sqrt{n}}$, $$\label{Ln_expand} L_n(\bs{\psi},\bs{\pi}) - L_n(\bs{\psi}^*,\bs{\pi}) = 2 \sum_{i=1}^n \log(1+h(\bs{Y}_i,\bs{\vartheta})) = n P_n(2h(\bs{y},\bs{\vartheta}) - [1+o_p(1)] h(\bs{y},\bs{\vartheta})^2).$$ The stated result holds if we show $$\begin{aligned} & \sup_{\bs{\vartheta} \in \mathcal{N}_{c/\sqrt{n}}} \left| nP_n(h(\bs{y},\bs{\vartheta})^2) - n \bs{t}(\bs{\vartheta})\t \bs{\mathcal{I}}_{\bs{\pi}} \bs{t}(\bs{\vartheta})/4 \right| = o_p(1), \label{hk_appn} \\ & \sup_{\bs{\vartheta} \in \mathcal{N}_{c/\sqrt{n}}} \left| nP_n(h(\bs{y},\bs{\vartheta})) - \sqrt{n} \bs{t}(\bs{\vartheta})\t \nu_n(\bs{s}(\bs{y};\bs{\pi}) )/2 + n\bs{t}(\bs{\vartheta})\t \bs{\mathcal{I}}_{\bs{\pi}} \bs{t}(\bs{\vartheta})/8 \right| = o_p(1), \label{hk_appn2}\end{aligned}$$ because then the right hand side of (\[Ln\_expand\]) equals $\sqrt{n} \bs{t}(\bs{\vartheta})\t \nu_n(\bs{s}(\bs{y};\bs{\pi}) ) - n\bs{t}(\bs{\vartheta})\t \bs{\mathcal{I}}_{\bs{\pi}} \bs{t}(\bs{\vartheta})/2$ uniformly in $\bs{\vartheta} \in \mathcal{N}_{c/\sqrt{n}}$. 
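The quality of the approximation $2\log(1+x) = 2x - x^2(1+o(1))$ used above is easy to eyeball numerically; a tiny illustration (not part of the proof):

```python
import math

def quad_remainder(x):
    # Remainder of the expansion 2*log(1+x) = 2x - x^2 (1 + o(1));
    # for small x it behaves like (2/3) x^3.
    return 2.0 * math.log1p(x) - (2.0 * x - x ** 2)
```

Halving $x$ shrinks the remainder by roughly a factor of $8$, consistent with its cubic order, which is what makes the $h^2$ term the only non-negligible correction in $\mathcal{N}_{c/\sqrt{n}}$.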
To show (\[hk\_appn\]), write $4P_n ( h(\bs{y},\bs{\vartheta})^2)$ as $$\label{B0} 4P_n ( h(\bs{y},\bs{\vartheta})^2) = P_n \left(\frac{4(\ell(\bs{y};\bs{\vartheta})-1)^2}{(\sqrt{\ell(\bs{y};\bs{\vartheta})} + 1)^2}\right) = P_n(\ell(\bs{y};\bs{\vartheta})-1)^2 - P_n \left( (\ell(\bs{y};\bs{\vartheta})-1)^3 \frac{(\sqrt{\ell(\bs{y};\bs{\vartheta})}+3)}{(\sqrt{\ell(\bs{y};\bs{\vartheta})} + 1)^3}\right).$$ It follows from Assumption \[assn\_expansion\](a)(b)(c) and $(E|XY|)^2 \leq E|X|^2E|Y|^2$ that, uniformly for $\bs{\vartheta} \in \mathcal{N}_\varepsilon$, $$\begin{aligned} \label{lk_lln} P_n(\ell(\bs{y};\bs{\vartheta})-1)^2 & = \bs{t}(\bs{\vartheta})\t P_n(\bs{s}(\bs{y};\bs{\pi})\bs{s}(\bs{y};\bs{\pi})\t)\bs{t}(\bs{\vartheta}) + 2 \bs{t}(\bs{\vartheta})\t P_n[\bs{s}(\bs{y};\bs{\pi}) r(\bs{y};\bs{\vartheta})] + P_n(r(\bs{y};\bs{\vartheta}))^2 \nonumber \\ & = (1+o_p(1))\bs{t}(\bs{\vartheta})\t\bs{\mathcal{I}}_{\bs{\pi}} \bs{t}(\bs{\vartheta}) +O_p(|\bs{t}(\bs{\vartheta})|^2|\bs{\psi}-\bs{\psi}^*|). \end{aligned}$$ Therefore, the first term on the right of (\[B0\]) is $\bs{t}(\bs{\vartheta})\t\bs{\mathcal{I}}_{\bs{\pi}} \bs{t}(\bs{\vartheta}) + o_p(n^{-1})$. Note that, if $X_1,\ldots,X_n$ are random variables with $\max_{1\leq i \leq n}E|X_i|^q < C$ for some $q>0$ and $C < \infty$, then we have $\max_{1 \leq i \leq n} |X_i|= o_p(n^{1/q})$. Therefore, from Assumption \[assn\_expansion\](a)(c), we have $$\max_{1\leq i \leq n} \sup_{\bs{\vartheta} \in \mathcal{N}_{c/\sqrt{n}}} |\ell(\bs{Y}_i;\bs{\vartheta})-1| = \max_{1\leq i \leq n} \sup_{\bs{\vartheta} \in \mathcal{N}_{c/\sqrt{n}}} |\bs{t}(\bs{\vartheta})\t\bs{s}(\bs{Y}_i;\bs{\pi}) + r(\bs{Y}_i;\bs{\vartheta})| = o_p(1) .$$ Consequently, the second term on the right of (\[B0\]) is $o_p(n^{-1})$, and (\[hk\_appn\]) follows. We proceed to show (\[hk\_appn2\]). 
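The elementary fact invoked above — $\max_{1\leq i \leq n}|X_i| = o_p(n^{1/q})$ whenever the $q$-th moments are uniformly bounded — can be illustrated by simulation; the Pareto tail index, moment order, sample sizes, and seed below are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
q = 2.0   # moment order assumed uniformly bounded
a = 3.0   # Pareto tail index a > q, so E|X|^q < infinity
ratios = []
for n in (10_000, 1_000_000):
    x = rng.pareto(a, size=n)
    # max_{i<=n} |X_i| grows roughly like n^{1/a}, so dividing by
    # the larger rate n^{1/q} drives the ratio toward zero
    ratios.append(x.max() / n ** (1.0 / q))
```

Since $1/a < 1/q$, the ratio drifts toward zero as $n$ grows, which is the $o_p(n^{1/q})$ behavior the proof relies on.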
Consider the following expansion of $h(\bs{y},\bs{\vartheta})$: $$\label{hk_1} h(\bs{y},\bs{\vartheta}) = (\ell(\bs{y};\bs{\vartheta})-1)/2 - h(\bs{y},\bs{\vartheta})^2/2 = (\bs{t}(\bs{\vartheta})\t \bs{s}(\bs{y};\bs{\pi}) + r(\bs{y};\bs{\vartheta}))/2 - h(\bs{y},\bs{\vartheta})^2/2 .$$ Then, (\[hk\_appn2\]) follows from (\[hk\_1\]), (\[hk\_appn\]), and Assumption \[assn\_expansion\](d), and the stated result follows. The next lemma expands $L_n(\bs{\psi},\bs{\pi})$ in $A_{n\varepsilon}(\delta) := \{\bs{\vartheta} \in \mathcal{N}_\varepsilon: L_n(\bs{\psi},\bs{\pi}) - L_n(\bs{\psi}^*,\bs{\pi}) \geq -\delta \}$ for $\delta \in (0,\infty)$. This lemma is useful for deriving the asymptotic distribution of the LRTS because a consistent MLE is in $A_{n\varepsilon}(\delta)$ by definition. Define $O_{p\varepsilon}(\cdot)$ and $o_{p\varepsilon}(\cdot)$ as in \[section:quadratic\]. \[Ln\_thm2\] Suppose that Assumption \[assn\_expansion\] holds. Then, for any $\delta>0$, (a) $\sup_{\vartheta \in A_{n\varepsilon}(\delta)} |\bs{t}(\bs{\vartheta})| = O_{p\varepsilon}(n^{-1/2})$; $$(b)\ \sup_{\bs{\vartheta} \in A_{n\varepsilon}(\delta)}\left|L_n(\bs{\psi},\bs{\pi}) - L_n(\bs{\psi}^*,\bs{\pi}) - \sqrt{n} \bs{t}(\bs{\vartheta})\t \nu_n (\bs{s}(\bs{y};\bs{\pi})) + n \bs{t}(\bs{\vartheta})\t \bs{\mathcal{I}}_{\bs{\pi}} \bs{t}(\bs{\vartheta})/2 \right| = o_{p\varepsilon}(1).$$ For part (a), applying the inequality $\log(1+x) \leq x$ to the log-likelihood ratio function and using (\[hk\_1\]) give $$\label{Ln_ineq} L_n(\bs{\psi},\bs{\pi}) - L_n(\bs{\psi}^*,\bs{\pi}) = 2 \sum_{i=1}^n \log(1+h(\bs{Y}_i,\bs{\vartheta})) \leq 2 n P_n(h(\bs{y},\bs{\vartheta})) = \sqrt{n} \nu_n(\ell(\bs{y};\bs{\vartheta})-1) - n P_n(h(\bs{y},\bs{\vartheta})^2).$$ We derive a lower bound on $P_n (h(\bs{y},\bs{\vartheta})^2)$. 
Observe that $h(\bs{y},\bs{\vartheta})^2= {(\ell(\bs{y};\bs{\vartheta})-1)^2}/(\sqrt{\ell(\bs{y};\bs{\vartheta})} + 1)^2\geq \mathbb{I}{\{\ell(\bs{y};\bs{\vartheta}) \leq \kappa\}(\ell(\bs{y};\bs{\vartheta})-1)^2}/(\sqrt{\kappa} + 1)^2$ for any $\kappa>0$. Therefore, $$\begin{aligned} P_n (h(\bs{y},\bs{\vartheta})^2) & \geq (\sqrt{\kappa}+1)^{-2}P_n \left( \mathbb{I}{\{\ell(\bs{y};\bs{\vartheta}) \leq \kappa\}} (\ell(\bs{y};\bs{\vartheta})-1)^2\right) \\ & \geq (\sqrt{\kappa}+1)^{-2} \left[ P_n ((\ell(\bs{y};\bs{\vartheta})-1)^2) - P_n \left( \mathbb{I}{\{\ell(\bs{y};\bs{\vartheta}) > \kappa\}} (\ell(\bs{y};\bs{\vartheta})-1)^2\right) \right] .\end{aligned}$$ Let $B:=\sup_{\bs{\vartheta} \in \mathcal{N}_\varepsilon}|\ell(\bs{y};\bs{\vartheta})-1|$. From Assumption \[assn\_expansion\](a)(c), we have $EB^2 < \infty$, and hence $\lim_{\kappa \rightarrow \infty} \sup_{\bs{\vartheta} \in \mathcal{N}_\varepsilon} P_n \left( \mathbb{I}{\{\ell(\bs{y};\bs{\vartheta}) > \kappa\}} (\ell(\bs{y};\bs{\vartheta})-1)^2\right) \leq \lim_{\kappa\rightarrow \infty} P_n \left( \mathbb{I}\{B+1 > \kappa\}B^2\right) =0$ almost surely. Let $\tau=(\sqrt{\kappa}+1)^{-2}/2$. 
By choosing $\kappa$ sufficiently large, it follows from (\[lk\_lln\]) and Assumption \[assn\_expansion\](e) that, uniformly for $\bs{\vartheta} \in \mathcal{N}_\varepsilon$, $$\label{rk_lower} P_n (h(\bs{y},\bs{\vartheta})^2) \geq \tau (1+o_p(1))\bs{t}(\bs{\vartheta})\t \bs{\mathcal{I}}_{\bs{\pi}} \bs{t}(\bs{\vartheta}) +O_p(|\bs{t}(\bs{\vartheta})|^2|\bs{\psi}-\bs{\psi}^*|) .$$ Because $\sqrt{n} \nu_n(\ell(\bs{y};\bs{\vartheta})-1) = \sqrt{n} \bs{t}(\bs{\vartheta})\t [\nu_n(\bs{s}(\bs{y};\bs{\pi})) + O_p(1)]$ from Assumption \[assn\_expansion\](d), it follows from (\[Ln\_ineq\]) and (\[rk\_lower\]) that $$\begin{aligned} -\delta& \leq L_n(\bs{\psi},\bs{\pi}) - L_n(\bs{\psi}^*,\bs{\pi})\nonumber\\ & \leq \sqrt{n} \bs{t}(\bs{\vartheta})\t [\nu_n(\bs{s}(\bs{y};\bs{\pi})) + O_p(1)] - \tau (1+o_p(1)) n \bs{t}(\bs{\vartheta})\t \bs{\mathcal{I}}_{\bs{\pi}} \bs{t}(\bs{\vartheta}) + O_p(n|\bs{t}(\bs{\vartheta})|^2|\bs{\psi}-\bs{\psi}^*|) .\label{rk_lower2}\end{aligned}$$ Let $\bs{T}_{n}:= \bs{\mathcal{I}_{\pi}}^{1/2}\sqrt{n} \bs{t}(\bs{\vartheta})$. From (\[rk\_lower2\]), Assumption \[assn\_expansion\](c)(e), and the fact $\bs{\psi}-\bs{\psi}^* \to 0$ if $\bs{t}(\bs{\vartheta}) \to 0$, we obtain the following result: for any $\Delta>0$, there exist $\varepsilon>0$ and $M,n_0<\infty$ such that $$\Pr\left( \inf_{\bs{\vartheta} \in \mathcal{N}_{\varepsilon}} \left( |\bs{T}_n| M - (\tau/2) |\bs{T}_n|^2 + M \right) \geq 0\right) \geq 1-\Delta,\ \text{ for all }\ n > n_0.$$ Rearranging the terms inside $\Pr(\cdot)$ gives $ \sup_{\bs{\vartheta} \in \mathcal{N}_{\varepsilon}} (|\bs{T}_{n}|-(M/\tau))^2 \leq 2M/\tau+(M/\tau)^2$, and part (a) follows. Part (b) follows from part (a) and Lemma \[Ln\_thm1\]. 
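For completeness, the rearrangement inside $\Pr(\cdot)$ in the last step of part (a) is a completion of the square: on the indicated event, every $\bs{\vartheta} \in \mathcal{N}_\varepsilon$ satisfies

```latex
$$
|\bs{T}_n| M - (\tau/2)|\bs{T}_n|^2 + M \geq 0
\;\Longleftrightarrow\;
|\bs{T}_n|^2 - \frac{2M}{\tau}|\bs{T}_n| \leq \frac{2M}{\tau}
\;\Longleftrightarrow\;
\left( |\bs{T}_n| - \frac{M}{\tau} \right)^2 \leq \frac{2M}{\tau} + \left( \frac{M}{\tau} \right)^2,
$$
```

so $\sup_{\bs{\vartheta} \in \mathcal{N}_{\varepsilon}} |\bs{T}_n|$ is bounded with probability at least $1-\Delta$ for all $n > n_0$, which is the claim $\bs{t}(\bs{\vartheta}) = O_{p\varepsilon}(n^{-1/2})$.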
Auxiliary results and their proofs {#section:auxiliary} ================================== \[mv\_derivative\] Let $f_v(\bs{x};\bs{\mu},\bs{v}) := (2\pi)^{-d/2}(\det \bs{S}(\bs{v}))^{-1/2} \exp(-(\bs{x}-\bs{\mu})\t\bs{S}(\bs{v})^{-1}(\bs{x}-\bs{\mu})/2)$ denote the density of a $d$-variate normal distribution with mean $\bs{\mu}= (\mu_1,\ldots,\mu_d)\t$ and covariance matrix $\bs{S}(\bs{v})$ with $\bs{v} = \{ v_{ij} \}_{1\leq i \leq j \leq d}$ as specified in (\[fv\_defn\]). Then, the following holds for any $t_1,t_2,t_3,t_4,t_5,t_6 \in \{1,\ldots,d\}$: $$\begin{aligned} \frac{\partial f_v(\bs{x};\bs{\mu},\bs{v})}{\partial v_{t_1t_2}} &= \frac{1}{2}\frac{\partial^2 f_v(\bs{x};\bs{\mu},\bs{v})}{\partial \mu_{t_1} \partial \mu_{t_2}}, \quad \frac{\partial^2 f_v(\bs{x};\bs{\mu},\bs{v})}{\partial v_{t_1t_2}\partial v_{t_3t_4}} = \frac{1}{4}\frac{\partial^4 f_v(\bs{x};\bs{\mu},\bs{v})}{\partial \mu_{t_1} \partial \mu_{t_2} \partial \mu_{t_3} \partial \mu_{t_4}}, \\ \frac{\partial^3 f_v(\bs{x};\bs{\mu},\bs{v})}{\partial v_{t_1t_2}\partial v_{t_3t_4}\partial v_{t_5t_6}} & = \frac{1}{8}\frac{\partial^6 f_v(\bs{x};\bs{\mu},\bs{v})}{\partial \mu_{t_1} \partial \mu_{t_2} \partial \mu_{t_3} \partial \mu_{t_4} \partial \mu_{t_5} \partial \mu_{t_6}} .\end{aligned}$$ Henceforth, we suppress the arguments $(\bs{x};\bs{\mu},\bs{\Sigma})$ and $(\bs{x};\bs{\mu},\bs{v})$ from $f(\bs{x};\bs{\mu},\bs{\Sigma})$ and $f_v(\bs{x};\bs{\mu},\bs{v})$ unless confusion might arise. In view of the definition of $\bs{S}(\bs{v})$ in (\[fv\_defn\]), the following holds for any function $g(\bs{\Sigma})$ of $\bs{\Sigma}$: $$\label{del_v_sigma} \frac{\partial g(\bs{S}(\bs{v}))}{\partial v_{t_1t_2}} = \frac{\partial g(\bs{\Sigma}) / \partial \Sigma_{t_1t_2} + \partial g(\bs{\Sigma}) / \partial \Sigma_{t_2t_1}}{2} = \frac{\partial g(\bs{\Sigma})}{\partial \Sigma_{t_1t_2}} .$$ Let $\bs{s}_i$ denote the $i$th column of $\bSig^{-1}$, and let $s_{ij}$ denote the $(i,j)$th element of $\bSig^{-1}$.
A direct calculation gives $\partial^2 f(\bs{x};\bs{\mu},\bs{\Sigma}) / \partial \bs{\mu}\partial \bs{\mu}\t = - \bs{\Sigma}^{-1} f + \bs{\Sigma}^{-1}(\bs{x}-\bs{\mu}) (\bs{x}-\bs{\mu})\t \bs{\Sigma}^{-1} f$ and $\partial f(\bs{x};\bs{\mu},\bs{\Sigma}) / \partial \bSig = - (1/2) \bs{\Sigma}^{-1} f + (1/2)\bs{\Sigma}^{-1}(\bs{x}-\bs{\mu}) (\bs{x}-\bs{\mu})\t \bs{\Sigma}^{-1} f$. Therefore, the first result follows immediately from (\[del\_v\_sigma\]). To prove the second result, we first derive ${\partial^4 f(\bs{x};\bs{\mu},\bs{\Sigma})}/{\partial \mu_{t_1} \partial \mu_{t_2} \partial \mu_{t_3} \partial \mu_{t_4}}$. Noting that ${\partial \bs{s}_{j}\t (\bs{x}-\bs{\mu})}/{\partial \mu_{i}} = -s_{ji}$ and ${\partial f(\bs{x};\bs{\mu},\bs{\Sigma})}/{\partial \mu_{i}} = \bs{s}_{i}\t (\bs{x}-\bs{\mu})f$ and differentiating ${\partial^2 f(\bs{x};\bs{\mu},\bs{\Sigma})}/{\partial \mu_{t_1} \partial \mu_{t_2}} = [ - s_{t_1t_2} + \bs{s}_{t_1}\t (\bs{x}-\bs{\mu}) \bs{s}_{t_2}\t (\bs{x}-\bs{\mu}) ] f$ with respect to $\mu_{t_3}$ and $\mu_{t_4}$, we obtain $$\label{f_del_mu_4} \begin{aligned} & \frac{\partial^4 f(\bs{x};\bs{\mu},\bs{\Sigma})}{\partial \mu_{t_1} \partial \mu_{t_2} \partial \mu_{t_3} \partial \mu_{t_4}} = \left( \sum_{\{i,j\},\{k,\ell\}} s_{t_it_j} s_{t_kt_\ell} - \sum_{\{i,j\},\{k\}, \{\ell\}} s_{t_it_j} \bs{s}_{t_k}\t (\bs{x}-\bs{\mu}) \bs{s}_{t_\ell}\t (\bs{x}-\bs{\mu}) + \prod_{i=1}^4\bs{s}_{t_i}\t (\bs{x}-\bs{\mu}) \right) f , \end{aligned}$$ where $\sum_{\{i,j\},\{k,\ell\}}$ denotes the sum over all 3 possible partitions of $\{1,2,3,4\}$ into $\{\{i,j\},\{k,\ell\}\}$, and $\sum_{\{i,j\},\{k\}, \{\ell\}}$ denotes the sum over all 6 possible partitions of $\{1,2,3,4\}$ into three sets $\{\{i,j\},\{k\},\{\ell\}\}$. 
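The identities of Lemma \[mv\_derivative\] and the partition formula (\[f\_del\_mu\_4\]) can be checked symbolically in the univariate case $d = 1$, where $\bs{S}(\bs{v})$ is the scalar variance and $\bSig^{-1} = s = 1/v$. The following sketch (illustrative only, not part of the proof) verifies them with sympy:

```python
import sympy as sp

x, mu, v = sp.symbols('x mu v', positive=True)
f = sp.exp(-(x - mu)**2 / (2 * v)) / sp.sqrt(2 * sp.pi * v)  # N(mu, v) density
s = 1 / v  # univariate analogue of Sigma^{-1}

# Heat-equation identities of the lemma: d/dv = (1/2) d^2/dmu^2, and iterated.
assert sp.simplify(sp.diff(f, v) - sp.diff(f, mu, 2) / 2) == 0
assert sp.simplify(sp.diff(f, v, 2) - sp.diff(f, mu, 4) / 4) == 0

# Partition formula for the fourth mean-derivative: 3 pair--pair partitions,
# 6 pair--singleton--singleton partitions, and one all-singleton product.
rhs = (3 * s**2 - 6 * s**3 * (x - mu)**2 + s**4 * (x - mu)**4) * f
assert sp.simplify(sp.diff(f, mu, 4) - rhs) == 0
```

In one dimension the last identity is the Hermite-polynomial formula $\partial^4 f/\partial \mu^4 = s^2 H_4(\sqrt{s}(x-\mu)) f$ with $H_4(z) = z^4 - 6z^2 + 3$.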
Recall that $$\label{f_del_st_1t_2} \frac{\partial f(\bs{x};\bs{\mu},\bs{\Sigma}) }{ \partial \Sigma_{t_1t_2}} = (1/2)[ - s_{t_1t_2} + \bs{s}_{t_1}\t (\bs{x}-\bs{\mu}) \bs{s}_{t_2}\t (\bs{x}-\bs{\mu}) ] f.$$ Let ${\bs 1}_{i}$ denote a $d\times 1$ vector whose elements are 0 except for the $i$th element, which is 1. We then have $s_{t_1t_2} = {\bs 1}_{t_1}\t \bSig^{-1} {\bs 1}_{t_2} $ and $\bs{s}_{t_1} =\bSig^{-1} {\bs 1}_{t_1} $. Using the symmetry of $\bSig$, we obtain $$\begin{aligned} \frac{\partial s_{t_1t_2}}{\partial \Sigma_{t_3t_4}} & = \frac{\partial ( s_{t_1t_2}+ s_{t_2t_1})/2}{\partial \Sigma_{t_3t_4}} \\ & = \frac{\partial }{\partial \Sigma_{t_3t_4}} \left({\bs 1}_{t_1}\t \bSig^{-1} {\bs 1}_{t_2} + {\bs 1}_{t_2}\t \bSig^{-1} {\bs 1}_{t_1} \right)/2 \\ & = - (1/2) \left(\bSig^{-1} {\bs 1}_{t_1} {\bs 1}_{t_2}\t \bSig^{-1} + \bSig^{-1} {\bs 1}_{t_2} {\bs 1}_{t_1}\t \bSig^{-1} \right)_{t_3t_4} \\ &= - (1/2) \left(s_{t_3t_1} s_{t_2t_4} + s_{t_3t_2} s_{t_1t_4} \right),\end{aligned}$$ and $$\begin{aligned} \frac{\partial \bs{s}_{t_1}\t(\bs{x}-\bs{\mu})}{\partial \Sigma_{t_3t_4}} & = \frac{\partial }{\partial \Sigma_{t_3t_4}} \left({\bs 1}_{t_1}\t \bSig^{-1} (\bs{x}-\bs{\mu}) +(\bs{x}-\bs{\mu})\t \bSig^{-1} {\bs 1}_{t_1} \right)/2 \\ & = - (1/2) \left(\bSig^{-1} {\bs 1}_{t_1} (\bs{x}-\bs{\mu})\t \bSig^{-1} + \bSig^{-1} (\bs{x}-\bs{\mu}) {\bs 1}_{t_1}\t \bSig^{-1} \right)_{t_3t_4} \\ &= - (1/2) \left(s_{t_3t_1} (\bs{x}-\bs{\mu})\t \bs{s}_{t_4} + \bs{s}_{t_3}\t (\bs{x}-\bs{\mu}) s_{t_1t_4} \right).\end{aligned}$$ Therefore, taking the derivative of the right hand side of (\[f\_del\_st\_1t\_2\]) with respect to $\Sigma_{t_3t_4}$ gives $$\label{f_del_sigma_2} \begin{aligned} \frac{\partial^2 f(\bs{x};\bs{\mu},\bs{\Sigma}) }{ \partial \Sigma_{t_1t_2} \partial \Sigma_{t_3t_4}} & = \frac{1}{4} \left[ s_{t_3t_1} s_{t_2t_4} + s_{t_3t_2} s_{t_1t_4} - \left(s_{t_3t_1} (\bs{x}-\bs{\mu})\t \bs{s}_{t_4} + \bs{s}_{t_3}\t (\bs{x}-\bs{\mu}) s_{t_1t_4} \right) \bs{s}_{t_2}\t 
(\bs{x}-\bs{\mu}) \right. \\ & \quad \left. - \bs{s}_{t_1}\t (\bs{x}-\bs{\mu})\left(s_{t_3t_2} (\bs{x}-\bs{\mu})\t \bs{s}_{t_4} + \bs{s}_{t_3}\t (\bs{x}-\bs{\mu}) s_{t_2t_4} \right) \right] f \\ & \quad + \frac{1}{2} \left( - s_{t_1t_2} + \bs{s}_{t_1}\t (\bs{x}-\bs{\mu}) \bs{s}_{t_2}\t (\bs{x}-\bs{\mu}) \right) \frac{\partial f(\bs{x};\bs{\mu},\bs{\Sigma}) }{ \partial \Sigma_{t_3t_4}} \\ & = \frac{1}{4}\left( \sum_{\{i,j\},\{k,\ell\}} s_{t_it_j} s_{t_kt_\ell} - \sum_{\{i,j\},\{k\}, \{\ell\}} s_{t_it_j} \bs{s}_{t_k}\t (\bs{x}-\bs{\mu}) \bs{s}_{t_\ell}\t (\bs{x}-\bs{\mu}) + \prod_{i=1}^4\bs{s}_{t_i}\t (\bs{x}-\bs{\mu}) \right) f. \end{aligned}$$ Comparing this with (\[f\_del\_mu\_4\]) and using (\[del\_v\_sigma\]) gives the second result. For the third result, differentiating (\[f\_del\_mu\_4\]) with respect to $\mu_{t_5}$ and $\mu_{t_6}$ gives $$\label{f_del_mu_6} \begin{aligned} & \frac{\partial^6 f(\bs{x};\bs{\mu},\bs{\Sigma})}{\partial \mu_{t_1} \partial \mu_{t_2} \partial \mu_{t_3} \partial \mu_{t_4} \partial \mu_{t_5} \partial \mu_{t_6}} \\ & = \left( - \sum_{\{i,j\},\{k,\ell\},\{m,n\}} s_{t_it_j} s_{t_kt_\ell}s_{t_m t_n} + \sum_{\{i,j\},\{k,\ell\}, \{m\},\{n\}} s_{t_it_j} s_{t_kt_\ell} \bs{s}_{t_m}\t (\bs{x}-\bs{\mu}) \bs{s}_{t_n}\t (\bs{x}-\bs{\mu}) \right. \\ & \quad \left. 
- \sum_{\{i,j\},\{k,\ell,m,n\}} s_{t_it_j} \bs{s}_{t_k}\t (\bs{x}-\bs{\mu}) \bs{s}_{t_\ell}\t (\bs{x}-\bs{\mu}) \bs{s}_{t_m}\t (\bs{x}-\bs{\mu}) \bs{s}_{t_n}\t (\bs{x}-\bs{\mu}) + \prod_{i=1}^6\bs{s}_{t_i}\t (\bs{x}-\bs{\mu}) \right) f , \end{aligned}$$ where $\sum_{\{i,j\},\{k,\ell\},\{m,n\}}$ denotes the sum over all 15 possible partitions of $\{1,2,3,4,5,6\}$ into $\{\{i,j\},\{k,\ell\},\{m,n\}\}$, $\sum_{\{i,j\},\{k,\ell\}, \{m\},\{n\}}$ denotes the sum over all 45 possible partitions of $\{1,2,3,4,5,6\}$ into three sets $\{\{i,j\},\{k,\ell\},\{m\},\{n\}\}$, and $\sum_{\{i,j\},\{k,\ell,m,n\}}$ denotes the sum over all 15 possible partitions of $\{1,2,3,4,5,6\}$ into $\{\{i,j\},\{k,\ell,m,n\}\}$. Differentiating (\[f\_del\_sigma\_2\]) with respect to $\Sigma_{t_5t_6}$ gives (\[f\_del\_mu\_6\]) divided by 8, and the third result follows. \[dv3\] Suppose that $g(\bs{x}|\bs{z};\bs{\psi},\alpha)$ is given by (\[loglike\]), where $\bs{\psi} = (\bs{\eta}^{\top},\bs{\lambda}_{\bs\mu}\t,\bs{\lambda}_{\bs v}\t)^{\top}$ and $\bs{\eta} = (\bs{\gamma}^{\top},{\bs \nu}_{\bs \mu}\t,\bs{\nu}_{\bs{v}}\t)^{\top}$. Let $g^*$ and $\nabla g^*$ denote $g(\bs{x}|\bs{z}; \bs{\psi},\alpha)$ and $\nabla g(\bs{x}|\bs{z}; \bs{\psi},\alpha)$ evaluated at $(\bs{\psi}^*,\alpha)$, respectively. Let $\nabla f^*$ denote $\nabla f(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}^*,\bs{\Sigma}^*)$. 
Then, with $b(\alpha): = -(2/3) (\alpha^2 - \alpha + 1)<0$, $$\begin{aligned} &(a)\ \text{for}\ k = 1, 2, 3\ \text{and}\ \ell = 0, 1,\ldots,\ \nabla_{\bs{\lambda}_{\bs\mu}^{\otimes k} \otimes \bs{\eta}^{\otimes \ell} } g^* = \bs{0}; \\ &(b)\ \nabla_{\lambda_{\mu_i}\lambda_{\mu_j}\lambda_{\mu_k}\lambda_{\mu_\ell}} g^* = \alpha(1 - \alpha)b(\alpha) \nabla_{\mu_i\mu_j\mu_k\mu_\ell} f^*; \\ &(c)\ \text{for}\ \ell = 0, 1,\ldots,\ \nabla_{\bs{\lambda}_{\bs v} \otimes \bs{\eta}^{\otimes\ell} } g^* = \bs{0}; \\ &(d)\ \nabla_{\lambda_{\mu_{i}} \lambda_{v_{jk}}} g^* = \alpha(1 - \alpha)\nabla_{\mu_i\mu_j\mu_k} f^*; \\ &(e)\ \nabla_{\lambda_{v_{ij}} \lambda_{v_{k\ell}}} g^* = \alpha(1 - \alpha)\nabla_{\mu_i\mu_j\mu_k\mu_\ell} f^*.\end{aligned}$$ We prove part (a) for $\ell = 0$ first. Suppress all arguments in $g(\bs{x}|\bs{z}; \bs{\psi}, \alpha)$ and $f_v(\bs{x}|\bs{z}; \bs{\gamma}, \bs{\mu}, \bs{v})$ except for $\bs{\lambda}_{\bs\mu}$, and rewrite (\[loglike\]) as follows: $$\label{g_function} g(\bs{\lambda}_{\bs\mu}) = \alpha f_v((1 - \alpha) \bs{\lambda}_{\bs\mu}, (1 - \alpha)C_1 \bs{w}(\bs{\lambda}_{\bs\mu}\bs{\lambda}_{\bs\mu}^{\top}) ) + (1 - \alpha) f_v(-\alpha \bs{\lambda}_{\bs\mu}, - \alpha C_2 \bs{w}(\bs{\lambda}_{\bs\mu}\bs{\lambda}_{\bs\mu}^{\top}) ) .$$ For a composite function $h(\bs{a}, {\bs r}(\bs{a}))$ of a $d\times 1$ vector ${\bs a}=(a_1,\ldots,a_d)^{\top}$, the following result holds: $$\label{comp1} \begin{aligned} \nabla_{a_{i_1} \cdots a_{i_k}} h({\bs a}, {\bs r}({\bs a})) & = \{ (\nabla_{a_{i_1}} + \nabla_{u_{i_1}}) \cdots (\nabla_{a_{i_k}} + \nabla_{u_{i_k}}) \} h({\bs a}, {\bs r}({\bs u}))|_{{\bs u} = {\bs a}} \\ & = \sum_{j = 0}^k \sum_{p(j,\{i_1,\ldots,i_k\})} \nabla_{ u_{t_1} \cdots u_{t_j} a_{t_{j+1}} \cdots a_{t_k} } h({\bs a}, {\bs r}({\bs u}))|_{{\bs u} = {\bs a}} , \end{aligned}$$ where $\sum_{p(j,\{i_1,\ldots,i_k\})}$ denotes the sum over all the partitions of $\{i_1,\ldots,i_k\}$ into two sets $\{t_1,\ldots,t_j\}$ and 
$\{t_{j+1},\ldots,t_k\}$. Applying (\[comp1\]) to the right hand side of (\[g\_function\]) with $\bs{a} = \bs{\lambda}_{\bs \mu}$ gives the derivatives of $g(\bs{\lambda}_{\bs\mu})$. First, we derive $\nabla_{ u_{t_1} \cdots u_{t_j} } f_v((1 - \alpha) \bs{\lambda}_{\bs\mu}, (1 - \alpha)C_1 \bs{w}(\bs{u}\bs{u}^{\top}) )|_{\bs{u} = \bs{0}}$. Let $\tilde c:=(1-\alpha)C_1$. For notational convenience, if $i >j$, define $\nabla_{v_{ij}}h(\bs{v}):=\nabla_{v_{ji}}h(\bs{v})$ for any function $h(\bs{v})$. Using the fact $\nabla_{u_k} w_{ij}(\bs{u}\bs{u}\t) = 2 u_i \mathbb{I}\{j=k\} + 2 u_j\mathbb{I}\{i=k\}$ if $i<j$ and $\nabla_{u_k} w_{ii}(\bs{u}\bs{u}\t) = 2 u_i \mathbb{I}\{i=k\}$, we obtain $$\begin{aligned} \nabla_{ u_{t_1} } f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) &= \sum_{i=1}^d \sum_{j=i}^d \nabla_{v_{ij}} f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) \tilde c \nabla_{u_{t_1}} w_{ij}(\bs{u}\bs{u}\t) \\ &= 2\sum_{i=1}^{t_1} \nabla_{v_{i t_1}} f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) \tilde c u_i + 2\sum_{j=t_1+1}^d \nabla_{v_{t_1 j}} f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) \tilde c u_j \\ &= 2\sum_{i=1}^d \nabla_{v_{t_1 i}} f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) \tilde c u_i .\end{aligned}$$ Differentiating the right hand side with respect to $u_{t_2}$ gives $$\begin{aligned} \nabla_{ u_{t_1} u_{t_2} } f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) & = 4 \sum_{i=1}^d \sum_{j=1}^d \nabla_{v_{t_1 i} v_{t_2 j}} f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) \tilde c^2 u_i u_j + 2 \nabla_{v_{t_1 t_2}} f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) \tilde c .\end{aligned}$$ Differentiating the right hand side with respect to $u_{t_3}$ gives $$\begin{aligned} \nabla_{ u_{t_1} u_{t_2} u_{t_3} } f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) & = 8 \sum_{i=1}^d \sum_{j=1}^d \sum_{k=1}^d \nabla_{v_{t_1 i} v_{t_2 j} v_{t_3 k} } f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) \tilde c^3 u_i u_j u_k \\ & \quad + 4
\sum_{i=1}^d\nabla_{v_{t_1 i} v_{t_2 t_3}} f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) \tilde c^2 u_i + 4 \sum_{j=1}^d\nabla_{v_{t_1 t_3} v_{t_2 j}} f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) \tilde c^2 u_j \\ & \quad + 4 \sum_{k=1}^d\nabla_{v_{t_1 t_2} v_{t_3 k}} f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) \tilde c^2 u_k .\end{aligned}$$ Finally, evaluating these derivatives at $\bs{u}=\bs{0}$, and differentiating $\nabla_{ u_{t_1} u_{t_2} u_{t_3} } f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) )$ with respect to $u_{t_4}$ before evaluating at $\bs{u}=\bs{0}$, gives $$\label{comp2} \begin{aligned} \nabla_{ u_{t_1} } f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) |_{\bs{u}=\bs{0}} & = 0,\\ \nabla_{ u_{t_1} u_{t_2}} f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) |_{\bs{u}=\bs{0}} & = 2 \tilde c \nabla_{v_{t_1 t_2}} f_v( \cdot, \bs{0} ) ,\\ \nabla_{ u_{t_1} u_{t_2} u_{t_3} } f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) |_{\bs{u}=\bs{0}} & = 0 ,\\ \nabla_{ u_{t_1} u_{t_2} u_{t_3} u_{t_4}} f_v( \cdot, \tilde c \bs{w}(\bs{u}\bs{u}^{\top}) ) |_{\bs{u}=\bs{0}} & = 4 \tilde c^2 \nabla_{v_{t_1 t_4} v_{t_2 t_3}} f_v( \cdot, \bs{0}) + 4 \tilde c^2\nabla_{v_{t_1 t_3} v_{t_2 t_4}} f_v( \cdot, \bs{0}) \\ & \quad + 4 \tilde c^2 \nabla_{v_{t_1 t_2} v_{t_3 t_4}} f_v( \cdot, \bs{0}) , \end{aligned}$$ and a similar result holds for $\nabla_{u_{t_1} \cdots u_{t_j} \bs{\lambda}_{\bs{\mu}}^{k-j} } f_v((1 - \alpha) \bs{\lambda}_{\bs\mu}, (1 - \alpha)C_1 \bs{w}(\bs{u}\bs{u}^{\top}) )|_{\bs{u} = \bs{0}}$ and\ $ \nabla_{ u_{t_1} \cdots u_{t_j} \bs{\lambda}_{\bs{\mu}}^{k-j}} f_v (- \alpha \bs{\lambda}_{\bs\mu}, - \alpha C_2 \bs{w}(\bs{u}\bs{u}^{\top}) )|_{\bs u = \bs{0}}$. With (\[comp1\]) and (\[comp2\]) at hand, we are ready to derive part (a).
Differentiating (\[g\_function\]) with respect to $\bs{\lambda}_{\bs{\mu}}$ and using (\[comp1\]), (\[comp2\]), $C_1-C_2 = -1$, $3((1-\alpha)C_1+\alpha C_2) = 2\alpha - 1$, and Lemma \[mv\_derivative\], we obtain $$\begin{aligned} \nabla_{\bs{\lambda}_{\bs\mu}}g(\bs{0}) & = \bs{0}, \\ \nabla_{\lambda_{\mu_i} \lambda_{\mu_j} } g(\bs{0}) & = \alpha(1 - \alpha) \nabla_{\mu_i \mu_j} f_v(\bs{0}, \bs{0}) + 2\alpha(1 - \alpha)(C_1 - C_2) \nabla_{v_{ij}} f_v(\bs{0}, \bs{0}) = 0, \\ \nabla_{\lambda_{\mu_i}\lambda_{\mu_j}\lambda_{\mu_k}}g(\bs{0}) & = \alpha(1 - \alpha)(1 - 2\alpha) \nabla_{\mu_i \mu_j \mu_k} f_v(\bs{0}, \bs{0}) \\ & \quad + 3\alpha(1 - \alpha) ((1 - \alpha)C_1 + \alpha C_2) 2 \nabla_{\mu_i v_{jk}} f_v(\bs{0}, \bs{0}) = 0,\end{aligned}$$ and part (a) for $\ell = 0$ follows. Repeating the same argument with $\nabla_{\bs{\eta}^{\otimes \ell}}g(\bs{\lambda}_{\bs\mu},\bs{\eta})$ gives part (a) for $\ell \geq 1$. For part (b), differentiating (\[g\_function\]) and using (\[comp1\]), (\[comp2\]), and Lemma \[mv\_derivative\] gives $$\begin{aligned} & \nabla_{\lambda_{\mu_i}\lambda_{\mu_j}\lambda_{\mu_k}\lambda_{\mu_\ell}} g(\bs{0}) \\ & = \alpha(1 - \alpha)[(1 - \alpha)^3 + \alpha^3] \nabla_{\mu_i \mu_j \mu_k \mu_\ell} f_v(\bs{0}, \bs{0}) + 6 \alpha(1 - \alpha)((1 - \alpha)^2C_1 - \alpha^2C_2) 2 \nabla_{\mu_i \mu_j v_{k\ell}} f_v(\bs{0}, \bs{0}) \nonumber \\ & \quad + 12 \alpha(1 - \alpha)((1 - \alpha)C_1^2 + \alpha C_2^2) \nabla_{v_{ij}v_{k\ell}} f_v(\bs{0}, \bs{0}) \nonumber \\ &= \alpha(1 - \alpha) [-(2/3) (\alpha^2 - \alpha + 1)]\nabla_{\mu_i \mu_j \mu_k \mu_\ell} f_v(\bs{0}, \bs{0}),\end{aligned}$$ and the stated result follows because $\nabla_{\mu_i \mu_j \mu_k \mu_\ell} f_v(\bs{0}, \bs{0})=\nabla_{\mu_i \mu_j \mu_k \mu_\ell} f(\bs{0}, \bs{0})$. Part (c) follows from a direct calculation. Parts (d) and (e) follow from direct calculation and using (\[comp1\]), (\[comp2\]) and Lemma \[mv\_derivative\]. 
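The cancellation mechanism in part (a) — matching the first two moments annihilates the low-order $\bs{\lambda}$-derivatives — can be checked symbolically in a univariate mean- and variance-matched two-component mixture (the homoscedastic specialization treated in the next lemma, whose third- and fourth-order coefficients are used below). The sketch is illustrative only:

```python
import sympy as sp

x, mu, s2, lam, a = sp.symbols('x mu s2 lam a', positive=True)

def normal(m, vv):
    return sp.exp(-(x - m)**2 / (2 * vv)) / sp.sqrt(2 * sp.pi * vv)

# Univariate two-component mixture with matched mean and variance:
# mean shifts (1-a)*lam and -a*lam, common variance term s2 - a*(1-a)*lam**2.
vv = s2 - a * (1 - a) * lam**2
g = a * normal(mu + (1 - a) * lam, vv) + (1 - a) * normal(mu - a * lam, vv)
f = normal(mu, s2)

point = {x: sp.Rational(3, 10), mu: sp.Rational(1, 10),
         s2: sp.Rational(6, 5), a: sp.Rational(2, 5)}
coefs = {1: sp.Integer(0), 2: sp.Integer(0),
         3: a * (1 - a) * (1 - 2 * a),
         4: a * (1 - a) * (1 - 6 * a + 6 * a**2)}
for k, c in coefs.items():
    diff_k = sp.diff(g, lam, k).subs(lam, 0) - c * sp.diff(f, mu, k)
    assert abs(sp.N(diff_k.subs(point))) < 1e-12
```

The first two derivatives vanish identically, while the third and fourth pick up exactly the coefficients $\alpha(1-\alpha)(1-2\alpha)$ and $\alpha(1-\alpha)(1-6\alpha+6\alpha^2)$.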
\[dv3-homo\] Suppose that $g(\bs{x}|\bs{z};\bs{\psi},\alpha)$ is given by (\[loglike-homo\]), where $\bs{\psi} = (\bs{\eta}^{\top},\bs{\lambda}\t)^{\top}$ and $\bs{\eta} = (\bs{\gamma}^{\top},{\bs \nu}_{\bs \mu}\t,\bs{\nu}_{\bs{v}}\t)^{\top}$. Let $g^*$ and $\nabla g^*$ denote $g(\bs{x}|\bs{z}; \bs{\psi},\alpha)$ and $\nabla g(\bs{x}|\bs{z}; \bs{\psi},\alpha)$ evaluated at $(\bs{\psi}^*,\alpha)$, respectively. Let $\nabla f^*$ denote $\nabla f(\bs{x}|\bs{z};\bs{\gamma}^*,\bs{\mu}^*,\bs{\Sigma}^*)$. Then, $$\begin{aligned} &(a)\ \text{for}\ k = 1, 2\ \text{and}\ \ell = 0, 1,\ldots,\ \nabla_{\bs{\lambda}^{\otimes k} \otimes \bs{\eta}^{\otimes \ell} } g^* = \bs{0}; \\ &(b)\ \nabla_{\lambda_{i}\lambda_{j}\lambda_{k}} g^* = \alpha(1 - \alpha)(1-2\alpha) \nabla_{\mu_i\mu_j\mu_k} f^*; \\ &(c)\ \nabla_{\lambda_{i}\lambda_{j}\lambda_{k}\lambda_{\ell}} g^* = \alpha(1 - \alpha)(1-6\alpha+6\alpha^2) \nabla_{\mu_i\mu_j\mu_k\mu_\ell} f^*.\end{aligned}$$ We prove part (a) for $\ell = 0$ first. Suppress all arguments in $g(\bs{x}|\bs{z}; \bs{\psi}, \alpha)$ and $f_v(\bs{x}|\bs{z}; \bs{\gamma}, \bs{\mu}, \bs{v})$ except for $\bs{\lambda}$, and rewrite (\[loglike-homo\]) as follows: $$\label{g_function-homo} g(\bs{\lambda}) = \alpha f_v((1 - \alpha) \bs{\lambda}, -\alpha(1 - \alpha) \bs{w}(\bs{\lambda}\bs{\lambda}^{\top}) ) + (1 - \alpha) f_v(-\alpha \bs{\lambda}, -\alpha(1 - \alpha) \bs{w}(\bs{\lambda}\bs{\lambda}^{\top}) ) .$$ Differentiating (\[g\_function-homo\]) with respect to $\bs{\lambda}$ and using (\[comp1\]), (\[comp2\]), and Lemma \[mv\_derivative\], we obtain $$\begin{aligned} \nabla_{\bs{\lambda}}g(\bs{0}) & = \bs{0}, \\ \nabla_{\lambda_{i} \lambda_{j} } g(\bs{0}) & = \alpha(1 - \alpha) \nabla_{\mu_i \mu_j} f_v(\bs{0}, \bs{0}) - 2\alpha(1 - \alpha) \nabla_{v_{ij}} f_v(\bs{0}, \bs{0}) = 0, \end{aligned}$$ and part (a) for $\ell = 0$ follows.
Repeating the same argument with $\nabla_{\bs{\eta}^{\otimes \ell}}g(\bs{\lambda},\bs{\eta})$ gives part (a) for $\ell \geq 1$. For parts (b) and (c), differentiating (\[g\_function-homo\]) and using (\[comp1\]), (\[comp2\]), and Lemma \[mv\_derivative\] gives $$\begin{aligned} & \nabla_{\lambda_{i}\lambda_{j}\lambda_{k}}g(\bs{0}) \\ &= \alpha(1 - \alpha)(1 - 2\alpha) \nabla_{\mu_i \mu_j \mu_k} f_v(\bs{0}, \bs{0}) + 3\alpha(1 - \alpha) (-\alpha(1 - \alpha) + \alpha (1-\alpha)) 2 \nabla_{\mu_i v_{jk}} f_v(\bs{0}, \bs{0}) \\ &= \alpha(1 - \alpha)(1 - 2\alpha) \nabla_{\mu_i \mu_j \mu_k} f_v(\bs{0}, \bs{0}) ,\\ & \nabla_{\lambda_{i}\lambda_{j}\lambda_{k}\lambda_{\ell}} g(\bs{0}) \\ & = \alpha(1 - \alpha)[(1 - \alpha)^3 + \alpha^3] \nabla_{\mu_i \mu_j \mu_k \mu_\ell} f_v(\bs{0}, \bs{0}) + 6 \alpha(1 - \alpha)(-\alpha(1 - \alpha)^2 - \alpha^2(1-\alpha)) 2 \nabla_{\mu_i \mu_j v_{k\ell}} f_v(\bs{0}, \bs{0}) \nonumber \\ & \quad + 12 \alpha(1 - \alpha)((1 - \alpha)\alpha^2 + \alpha (1-\alpha)^2) \nabla_{v_{ij}v_{k\ell}} f_v(\bs{0}, \bs{0}) \nonumber \\ &= \alpha(1 - \alpha) (1-6\alpha+6\alpha^2)\nabla_{\mu_i \mu_j \mu_k \mu_\ell} f_v(\bs{0}, \bs{0}),\end{aligned}$$ and the stated result follows because $\nabla_{\mu_i \mu_j \mu_k \mu_\ell} f_v(\bs{0}, \bs{0})=\nabla_{\mu_i \mu_j \mu_k \mu_\ell} f(\bs{0}, \bs{0})$. \[lemma\_lambda\_e\] Suppose $\bs{\lambda} = (\bs{\lambda}_{\bs{\mu}}\t,\bs{\lambda}_{\bs{v}}\t)\t \in \Theta_{\bs{\lambda}}$ satisfies $\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha) =O_p(n^{-1/2})$ for some $\alpha \in (0,1)$ with $\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha)$ defined in (\[tpsi\_defn\]). Then, if $|\lambda_{\mu_i}| \geq n^{-1/8} (\log n)^{-1}$ for some $i \in \{1,\ldots,d\}$, we have $\bs{\lambda}_{\bs{v}} = O_p(n^{-3/8}(\log n)^3)$. 
The stated result holds if we show, for all $(j,k)$, $$\begin{aligned} (A)\ {\lambda}_{{v}_{ii}} &= O_p(n^{-3/8} \log n), \quad (B)\ {\lambda}_{{v}_{ij}} = O_p(n^{-3/8} (\log n)^2), \\ (C)\ {\lambda}_{{v}_{jj}} &= O_p(n^{-3/8} (\log n)^3), \quad (D)\ {\lambda}_{{v}_{jk}} = O_p(n^{-3/8} (\log n)^3). \end{aligned}$$ Observe that $\bs{t}_{\bs{\lambda}}(\bs{\lambda},\alpha) =O_p(n^{-1/2})$ implies that, for any $(i,j,k)$, $$\begin{aligned} {\lambda}_{{\mu}_{i}} {\lambda}_{{v}_{ii}} & = O_p(n^{-1/2}), \label{lambda_e_bound_1}\\ ({\lambda}_{{\mu}_{i}} {\lambda}_{{v}_{ij}} + {\lambda}_{{\mu}_{j}}{\lambda}_{{v}_{ii}}) & = O_p(n^{-1/2}), \label{lambda_e_bound_2} \\ ({\lambda}_{{\mu}_{i}} {\lambda}_{{v}_{jk}} + {\lambda}_{{\mu}_{j}}{\lambda}_{{v}_{ik}} + {\lambda}_{{\mu}_{k}}{\lambda}_{{v}_{ij}}) & = O_p(n^{-1/2}), \label{lambda_e_bound_3} \\ [12 ({\lambda}_{{v}_{ii}})^2 + b(\alpha) ({\lambda}_{{\mu}_{i}})^4] & = O_p(n^{-1/2}) . \label{lambda_e_bound_4} \end{aligned}$$ Part (A) follows from $|\lambda_{\mu_i}| \geq n^{-1/8} (\log n)^{-1}$ and (\[lambda\_e\_bound\_1\]). Before deriving part (B), we first show that $\lambda_{\mu_j} = O_p(n^{-1/8})$ holds for any $j$. Consider the two cases, $|\lambda_{\mu_j}| \leq n^{-1/8} (\log n)^{-1}$ and $|\lambda_{\mu_j}| \geq n^{-1/8} (\log n)^{-1}$. When $|\lambda_{\mu_j}| \leq n^{-1/8} (\log n)^{-1}$, the result $\lambda_{\mu_j} = O_p(n^{-1/8})$ follows immediately. When $|\lambda_{\mu_j}| \geq n^{-1/8} (\log n)^{-1}$, (\[lambda\_e\_bound\_1\]) implies ${\lambda}_{{v}_{jj}} = O_p(n^{-3/8} \log n)$, and in view of (\[lambda\_e\_bound\_4\]) we obtain $\lambda_{\mu_j} = O_p(n^{-1/8})$. Therefore, $\lambda_{\mu_j} = O_p(n^{-1/8})$ holds for any $j$. Combining this with (\[lambda\_e\_bound\_2\]) and part (A), we obtain ${\lambda}_{{\mu}_{i}} {\lambda}_{{v}_{ij}} = O_p(n^{-1/2}\log n)$. Hence, noting that $|\lambda_{\mu_i}| \geq n^{-1/8} (\log n)^{-1}$ gives part (B). 
For part (C), reversing the role of $i$ and $j$ in (\[lambda\_e\_bound\_2\]) gives ${\lambda}_{{\mu}_{j}} {\lambda}_{{v}_{ij}} + {\lambda}_{{\mu}_{i}}{\lambda}_{{v}_{jj}} = O_p(n^{-1/2})$. In conjunction with $\lambda_{\mu_j} = O_p(n^{-1/8})$ and part (B), we obtain ${\lambda}_{{\mu}_{i}}{\lambda}_{{v}_{jj}} = O_p(n^{-1/2}(\log n)^2)$. Then, part (C) follows from $|\lambda_{\mu_i}| \geq n^{-1/8} (\log n)^{-1}$. For part (D), we have already shown that $\lambda_{\mu_j}, \lambda_{\mu_k} = O_p(n^{-1/8})$, and part (B) implies ${\lambda}_{{v}_{ij}}, {\lambda}_{{v}_{ik}}= O_p(n^{-3/8} (\log n)^2)$. Substituting this into (\[lambda\_e\_bound\_3\]) gives ${\lambda}_{{\mu}_{i}} {\lambda}_{{v}_{jk}} = O_p(n^{-1/2} (\log n)^2)$, and part (D) follows from $|\lambda_{\mu_i}| \geq n^{-1/8} (\log n)^{-1}$. \[tau\_update\] Suppose that Assumptions \[assn\_consis\] and \[A-vec-2\] hold. If $\bs{\vartheta}_{M_0+1}^{m(k)}(\tau_0) - \bs{\vartheta}_{M_0+1}^{m*}(\tau_0)= o_p(1)$ and $\tau^{(k)} - \tau_0 = o_p(1)$, then (a) $\alpha_m^{(k+1)}/[\alpha_m^{(k+1)}+\alpha_{m+1}^{(k+1)}] - \tau_0 = o_p(1)$ and (b) $\tau^{(k+1)} - \tau_0 = o_p(1)$. We suppress $(\tau_0)$ from $\bs{\vartheta}_{M_0+1}^{m(k)}(\tau_0)$ and $\bs{\vartheta}_{M_0+1}^{m*}(\tau_0)$. The proof is similar to the proof of Lemma 3 of @lichen10jasa. We suppress $\bs{Z}$ for brevity. Let $f_i( \bs{\mu}, \bs{\Sigma})$ and $f_i(\bs{\vartheta}_{M_0+1})$ denote $f(\bs{X}_i; \bs{\mu}, \bs{\Sigma})$ and $f_{M_0+1}(\bs{X}_i; \bs{\vartheta}_{M_0+1})$, respectively.
Applying a Taylor expansion to $\alpha_m^{(k+1)}= n^{-1}\sum_{i = 1}^n w_{im}^{(k)}$ and using $\bs{\vartheta}_{M_0+1}^{m(k)} - \bs{\vartheta}_{M_0+1}^{m*} = o_p(1)$, we obtain $$\begin{aligned} \alpha_m^{(k+1)}& = \frac{1}{n} \sum_{i = 1}^n \frac{\tau^{(k)}(\alpha_{m}^{(k)}+\alpha_{m+1}^{(k)})f_i(\bs{\mu}_m^{(k)},\bs{\Sigma}_m^{(k)})}{f_i(\bs{\vartheta}_{M_0+1}^{m(k)})} \\ & = \frac{1}{n} \sum_{i = 1}^n \frac{\tau_0 \alpha_{m}^*f_i(\bs{\mu}_{m}^{*},\bs{\Sigma}_{m}^{*})}{f_i(\bs{\vartheta}_{M_0+1}^{m*})} + o_p(1) = \tau_0 \alpha_{m}^*+ o_p(1), \end{aligned}$$ where the last equality follows from $E[f_i(\bs{\mu}_{m}^{*},\bs{\Sigma}_{m}^{*})/f_i(\bs{\vartheta}_{M_0+1}^{m*}) ] = 1$ and the law of large numbers. A similar argument gives $\alpha_{m+1}^{(k+1)} = (1 - \tau_0)\alpha_{m}^* + o_p(1)$, and part (a) follows. For part (b), define $H(\tau) := \sum_{i=1}^n w_{im}^{(k)} \log(\tau) + \sum_{i=1}^n w_{i,m+1}^{(k)} \log(1-\tau) = n \alpha_m^{(k+1)} \log(\tau) + n \alpha_{m+1}^{(k+1)} \log(1-\tau)$. Then $\tau^{(k+1)}$ maximizes $H(\tau) + p(\tau)$, and $H(\tau)$ is maximized at $\tilde \tau = \alpha_m^{(k+1)}/[\alpha_m^{(k+1)}+\alpha_{m+1}^{(k+1)}] = \tau_0 + o_p(1)$. A second-order Taylor expansion of $H(\tau)$ around $\tilde \tau$ gives $H(\tilde\tau) - H(\tau) \geq (\epsilon+o_p(1)) \alpha_m^* n (\tau - \tilde \tau)^2$ for some $\epsilon>0$. In conjunction with $H(\tau^{(k+1)}) + p(\tau^{(k+1)}) - H(\tilde \tau) - p(\tilde \tau) \geq 0$, we obtain $p(\tau^{(k+1)}) - p(\tilde \tau) \geq (\epsilon+o_p(1)) \alpha_m^* n (\tau^{(k+1)} - \tilde \tau)^2$. Because $p(\tau) \leq 0$ and $p(\tilde \tau) = O_p(1)$, we have $n (\tau^{(k+1)} - \tilde \tau)^2 = O_p(1)$, and part (b) follows. The following lemma follows from Le Cam’s first and third lemmas and facilitates the derivation of the asymptotic distribution of the LRTS under $\mathbb{P}_{\bs{\vartheta}_n}^n$. \[P-LAN\] Suppose that the assumptions of Lemma \[P-quadratic\] hold and $\bs{\vartheta}_n$ is given by (\[local-alternative\]).
Then, (a) $\mathbb{P}_{\bs{\vartheta}_n}^n$ and $\mathbb{P}_{\bs{\vartheta}^*}^n$ are mutually contiguous, and (b) under $\mathbb{P}_{\bs{\vartheta}_n}^n$, we have $\log (d\mathbb{P}_{\bs{\vartheta}_n}^n/d \mathbb{P}_{\bs{\vartheta}^*}^n) = \bs{h}\t \nu_n(\bs{s}(\bs{x},\bs{z})) - \bs{h}\t \bs{\mathcal{I}} \bs{h}/2+ o_{p}(1)$ with $\nu_n(\bs{s}(\bs{x},\bs{z})) \overset{d}{\rightarrow} N(\bs{\mathcal{I}} \bs{h}, \bs{\mathcal{I}})$, where $\bs{s}(\bs{x},\bs{z})$ is defined in (\[score\_defn\]) and $\bs{\mathcal{I}}:=E[\bs{s}(\bs{X},\bs{Z})\bs{s}(\bs{X},\bs{Z})\t]$. Observe that, under the assumptions of Lemma \[P-quadratic\], Lemma \[Ln\_thm1\] holds under $\mathbb{P}_{\bs\vartheta^*}^n$. Because $\bs{\vartheta}_{n} = (\bs{\eta}_n\t,\bs\lambda_n\t,\alpha_n)\t\in \mathcal{N}_{c/\sqrt{n}}$ by choosing $c> |\bs h|$, it follows from Lemma \[Ln\_thm1\] that $$\label{expansion} \left| \log \frac{d\mathbb{P}_{\bs{\vartheta}_n}^n}{d \mathbb{P}_{\bs{\vartheta}^*}^n}- \bs{h}\t \nu_n(\bs{s}(\bs{x},\bs{z})) + \bs{h}\t \bs{\mathcal{I}} \bs{h}/2 \right|=o_{\mathbb{P}_{\bs\vartheta^*}^n}(1).$$ Furthermore, $\nu_n(\bs{s}(\bs{x},\bs{z})) \rightarrow_d \bs{G} \sim N(\bs{0},\bs{\mathcal{I}})$ under $\mathbb{P}_{\bs\vartheta^*}^n$. Therefore, $d\mathbb{P}_{\bs\vartheta_n}^n / d \mathbb{P}_{\bs\vartheta^*}^n$ converges in distribution under $\mathbb{P}_{\bs{\vartheta}^*}^n$ to $\exp\left( N( \mu,\sigma^2) \right)$ with $\mu=-(1/2) \bs{h}\t \bs{\mathcal{I}} \bs{h}$ and $\sigma^2= \bs{h}\t \bs{\mathcal{I}} \bs{h}$, so that $E(\exp\left( N( \mu,\sigma^2) \right))=1$. Consequently, part (a) follows from Le Cam’s first lemma (see, e.g., Corollary 12.3.1 of @lehmannromano05book).
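The unit-mean property invoked for Le Cam’s first lemma is the lognormal moment formula:

```latex
$$
E\left[\exp\left( N(\mu,\sigma^2) \right)\right] = \exp\left( \mu + \sigma^2/2 \right)
= \exp\left( -\tfrac{1}{2}\bs{h}\t \bs{\mathcal{I}} \bs{h} + \tfrac{1}{2}\bs{h}\t \bs{\mathcal{I}} \bs{h} \right) = 1 .
$$
```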
Part (b) follows from Le Cam’s third lemma (see, e.g., Corollary 12.3.2 of @lehmannromano05book) because part (a) and (\[expansion\]) imply that $$\begin{pmatrix} \nu_n(\bs{s}(\bs{x},\bs{z})) \\ \log\frac{d\mathbb{P}_{\bs{\vartheta}_n}^n}{d \mathbb{P}_{\bs{\vartheta}^*}^n} \end{pmatrix} \overset{d}{\rightarrow} N\left( \begin{pmatrix} 0\\ -\frac{1}{2} \bs{h}\t \bs{\mathcal{I}} \bs{h} \end{pmatrix}, \begin{pmatrix} \bs{\mathcal{I}}& \bs{\mathcal{I}}\bs{h}\\ \bs{h}\t \bs{\mathcal{I}}&\bs{h}\t \bs{\mathcal{I}} \bs{h} \end{pmatrix} \right)\quad\text{under $\mathbb{P}_{\bs{\vartheta}^*}^n$.}$$ \[lemma\_btsp\] Suppose that the assumptions of Proposition \[P-LR-N1\] hold. Let $\bf{C}_{\bs\eta}$ be a set of sequences $\{\bs\eta_n\}$ satisfying $\sqrt{n}(\bs\eta_n - \bs\eta^*) \to \bs h_{\bs\eta}$ for some finite $\bs h_{\bs\eta}$. Let $\mathbb{P}^n_{\bs{\eta}_n} := \prod_{i=1}^n g(X_i|Z_i;\bs{\eta}_n,\bs{0},\alpha)$ denote the probability measure under $\bs{\eta}_n$ with $\bs{\lambda}_n=\bs{0}$. Then, for every sequence $\{\bs\eta_n\} \in \bf{C}_{\bs\eta}$, the LRTS under $\{\mathbb{P}^n_{\bs\eta_n}\}$ converges in distribution to $\max_{ j \in \{1,2\}} \left( (\widehat{\bs{t}}_{\bs{\lambda}}^{j})\t \bs{\mathcal{I}}_{\bs{\lambda}.\bs{\eta}} \widehat{\bs{t}}_{\bs{\lambda}}^{j} \right)$ given in Proposition \[P-LR-N1\]. Observe that $\bs{\vartheta}_n:=(\bs\eta_n\t,\bs\lambda_n\t,\alpha_n)\t = ( (\bs\eta^*+\bs{h}_{\bs\eta}/\sqrt{n})\t,\bs{0}\t,\alpha)$ satisfies the assumptions of Lemma \[P-LAN\]. Therefore, Lemma \[P-LAN\] holds under $\bs{\vartheta}_n$, and $\nu_n(\bs{s}(\bs{x},\bs{z})) \overset{d}{\rightarrow} N(\bs{\mathcal{I}} \bs{h}, \bs{\mathcal{I}})$ with $\bs{h}=(\bs{h}_{\bs\eta}\t,\bs{0}\t)\t$ under $\mathbb{P}_{\bs\vartheta_n}^n$.
Furthermore, the log-likelihood function of the one-component model admits a similar expansion, and $\log (d \mathbb{P}_{\bs\eta_n}^n / d \mathbb{P}_{\bs\eta^*}^n ) = \bs{h}_{\bs\eta}\t \nu_n (\bs{s}_{\bs\eta}(\bs{x},\bs{z})) - (1/2)\bs{h}_{\bs\eta}\t \bs{\mathcal{I}}_{\bs \eta} \bs{h}_{\bs\eta} + o_p(1)$ holds under $\mathbb{P}_{\bs\eta_n}^n$. Therefore, the proof of Proposition \[P-LR-N1\] goes through by replacing $\bs{G}_{n}$ with $\bs{G}_{n}^{\bs h} = \left[\begin{smallmatrix} \bs{G}_{\bs\eta n}^{\bs h} \\ \bs{G}_{\bs\lambda n}^{\bs h} \end{smallmatrix} \right] := \bs G_{ n} + \bs{\mathcal{I}}\bs h$. In view of $\bs{G}_{\bs \eta n}^{\bs h} = \bs G_{\bs \eta n} + \bs{\mathcal{I}}_{\bs\eta}\bs{h}_{\bs \eta}$ and $\bs{G}_{\bs\lambda n}^{\bs h} = \bs{G}_{\bs\lambda n} + \bs{\mathcal{I}}_{\bs\lambda \bs \eta } \bs h_{\bs\eta}$, we have $\bs G_{\bs \lambda.\bs \eta n}^{\bs h} := \bs G_{\bs \lambda n}^{\bs h} - \bs{\mathcal{I}}_{\bs \lambda \bs \eta}\bs{\mathcal{I}}_{\bs \eta}^{-1} \bs G_{\bs \eta n}^{\bs h} =\bs G_{\bs \lambda n} - \bs{\mathcal{I}}_{\bs \lambda \bs \eta }\bs{\mathcal{I}}_{\bs \eta}^{-1}\bs G_{\bs \eta n} =\bs G_{\bs \lambda.\bs \eta n}$. Therefore, the asymptotic distribution of the LRTS under $\mathbb{P}_{\bs \eta_n}^n$ is the same as that under $\mathbb{P}_{\bs \eta^*}^n$, and the stated result follows.
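The cancellation used above (the shift of the score by $\bs{\mathcal{I}}\bs h$ with $\bs h = (\bs h_{\bs\eta}\t,\bs 0\t)\t$ drops out of $\bs G_{\bs\lambda n} - \bs{\mathcal{I}}_{\bs\lambda\bs\eta}\bs{\mathcal{I}}_{\bs\eta}^{-1}\bs G_{\bs\eta n}$) can be verified with random matrices; all dimensions and inputs below are arbitrary illustrative choices.

```python
import numpy as np

# Sanity check with random inputs: shifting G by I*h, where the lambda
# block of h is zero, leaves the residual G_lambda - I_le inv(I_e) G_eta
# unchanged, because the h_eta shift is projected out exactly.
rng = np.random.default_rng(1)
d_e, d_l = 3, 2                              # dims of eta and lambda blocks
A = rng.normal(size=(d_e + d_l, d_e + d_l))
I = A @ A.T + (d_e + d_l) * np.eye(d_e + d_l)   # random SPD information matrix
I_e, I_le = I[:d_e, :d_e], I[d_e:, :d_e]

G = rng.normal(size=d_e + d_l)
h = np.concatenate([rng.normal(size=d_e), np.zeros(d_l)])  # h = (h_eta, 0)
G_h = G + I @ h                                            # shifted score

resid   = G[d_e:]   - I_le @ np.linalg.solve(I_e, G[:d_e])
resid_h = G_h[d_e:] - I_le @ np.linalg.solve(I_e, G_h[:d_e])
print(np.allclose(resid, resid_h))           # the shift cancels
```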
\[s\_der\_alpha\] For $h(\bs{x}|\bs{z};\bs{\phi},\bs{\lambda})$ and $\bs{t}(\bs{\phi},\bs{\lambda})$ defined in (\[loglike-homo-2\]) and (\[tpsi\_defn-homo-2\]), the following holds: $$\begin{aligned} \text{(A)} \quad & \nabla_{\alpha} h(\bs{y};\bs{\phi}^*,\bs{\lambda}) = f_v^*(\bs{\lambda}) - f_v^* - \nabla_{ \bs{\mu}\t} f_v^* \bs{\lambda} - \nabla_{ \bs{v}\t} f_v^* \bs{\lambda}_{\bs{\mu}^2},\\ \text{(B)} \quad & \nabla_{\alpha^2} h(\bs{y};\bs{\phi},\bs{\lambda}) = \bs{\xi}(\bs{y};\bs{\vartheta}_2) O(|\bs{\lambda}|^3), \end{aligned}$$ where $\sup_{\bs{\vartheta}_2 } |\bs{\xi}(\bs{y};\bs{\vartheta}_2)| \leq \sup_{\bs{\vartheta}_2 } |\bs{v}(\bs{y};\bs{\vartheta}_2)|$ with $\bs{v}(\bs{y};\bs{\vartheta}_2)$ defined in (\[v\_defn-homo-2\]) in the proof of Lemma \[P-quadratic-homo-2\], and the domain of $\bs{\vartheta}_2$ is such that $\bs{\phi} \in \Theta_{\bs{\eta}} \times [0, 3/4]$. Define $$\begin{aligned} f_v^1 &:=f_v\left(\bs{x}\middle|\bs{z};\bs{\gamma}, \bs{\nu}_{\bs\mu}+(1-\alpha)\bs{\lambda}, \bs{\nu}_{\bs{v}} - \alpha(1-\alpha) \bs{w}(\bs{\lambda}\bs{\lambda}\t)\right) , \\ f_v^2 &:= f_v \left(\bs{x}\middle|\bs{z};\bs{\gamma}, \bs{\nu}_{\bs\mu} -\alpha\bs{\lambda},\bs{\nu}_{\bs{v}} - \alpha(1-\alpha) \bs{w}(\bs{\lambda}\bs{\lambda}\t) \right),\end{aligned}$$ and define $\nabla f_v^1$ and $\nabla f_v^2$ analogously. With this definition, we have $h(\bs{y};\bs{\phi},\bs{\lambda})= \alpha ( f_v^1 - f_v^2 ) + f_v ^2$. First, we collect the derivatives of $f_v^1$ and $f_v^2$. 
Noting that $\nabla_\alpha ( - \alpha(1-\alpha) ) = 2\alpha -1$, we obtain, for $j=1,2$, $$\label{g_der_alpha} \begin{aligned} \nabla_\alpha f_v^j & = - \nabla_{\bs{\mu}\t} f_v^j \bs{\lambda} + \nabla_{\bs{v}\t} f_v^j (2\alpha-1) \bs{w}(\bs{\lambda}\bs{\lambda}\t), \\ \nabla_{\alpha^2} f_v^j & = \nabla_{(\bs{\mu}^{\otimes 2})\t} f_v^j \bs{\lambda}^{\otimes 2} - 2 \nabla_{(\bs{\mu} \otimes \bs{v})\t} f_v^j (2\alpha -1) (\bs{\lambda} \otimes \bs{w}(\bs{\lambda}\bs{\lambda}\t)) \\ & \quad + \nabla_{(\bs{v}^{\otimes 2})\t} f_v^j (2\alpha -1)^2 \bs{w}(\bs{\lambda}\bs{\lambda}\t)^{\otimes 2} + \nabla_{\bs{v}\t} f_v^j 2 \bs{w}(\bs{\lambda}\bs{\lambda}\t). \end{aligned}$$ Part (A) follows from differentiating $h(\bs{y};\bs{\phi},\bs{\lambda})= \alpha ( f_v^1 - f_v^2 ) + f_v ^2$ with respect to $\alpha$, applying (\[g\_der\_alpha\]), evaluating it at $(\bs{\eta} = \bs{\eta}^*, \alpha=0)$, and noting that $\bs{w}(\bs{\lambda}\bs{\lambda}\t) = \bs{\lambda}_{\bs{\mu}^2}$. For part (B), expanding $\nabla_{\alpha^2} h(\bs{y};\bs{\phi},\bs{\lambda})$ around $\alpha=0$ gives $$\label{h_del_alpha_3} \nabla_{\alpha^2} h(\bs{y};\bs{\phi},\bs{\lambda}) = \nabla_{\alpha^2} h(\bs{y};(\bs{\eta}\t,0)\t,\bs{\lambda}) + \nabla_{\alpha^3} h(\bs{y};(\bs{\eta}\t,\bar{\alpha})\t,\bs{\lambda}) \alpha.$$ Define $f_v(\bs{\lambda}):=f_v \left(\bs{x}\middle|\bs{z};\bs{\gamma}, \bs{\nu}_{\bs{\mu}} + \bs{\lambda},\bs{\nu}_{\bs{v}} \right)$. 
For the first term on the right hand side of (\[h\_del\_alpha\_3\]), a direct calculation and $2 \nabla_{\bs{v}\t} f_v \left( \bs{\lambda} \right) \bs{w}(\bs{\lambda}\bs{\lambda}\t) = \nabla_{(\bs{\mu}^{\otimes 2})\t} f_v \left( \bs{\lambda} \right) \bs{\lambda}^{\otimes 2}$ gives $$\begin{aligned} & \nabla_{\alpha^2} h(\bs{y};(\bs{\eta}\t,0)\t,\bs{\lambda}) \\ & = - 2 \left[ \nabla_{\bs{\mu}\t} f_v \left(\bs{\lambda} \right) \bs{\lambda} - \nabla_{\bs{\mu}\t} f_v \left(\bs{0}\right) \bs{\lambda} -\nabla_{(\bs{\mu}^{\otimes 2})\t} f_v \left( \bs{0} \right) \bs{\lambda}^{\otimes 2} \right] - 2 \left[ \nabla_{\bs{v}\t} f_v \left(\bs{\lambda} \right) \bs{\lambda}_{\bs{\mu}^2} - \nabla_{\bs{v}\t} f_v \left(\bs{0} \right) \bs{\lambda}_{\bs{\mu}^2} \right] \\ & \quad - 2 \nabla_{(\bs{\mu} \otimes \bs{v})\t} f_v \left(\bs{0}\right) (\bs{\lambda} \otimes \bs{w}(\bs{\lambda}\bs{\lambda}\t)) + \nabla_{(\bs{v}^{\otimes 2})\t} f_v \left(\bs{0}\right) \bs{w}(\bs{\lambda}\bs{\lambda}\t)^{\otimes 2}.\end{aligned}$$ Applying a Taylor expansion to the terms in the brackets, the right hand side is written as $\nabla_{(\bs{\mu}^{\otimes 3})\t} f_v(\overline{\bs{\lambda}})O(|\bs{\lambda}|^3) + \nabla_{(\bs{\mu}^{\otimes 3})\t} f_v(\bs{\lambda})O(|\bs{\lambda}|^3)+ \nabla_{(\bs{\mu}^{\otimes 4})\t} f_v(\bs{\lambda})O(|\bs{\lambda}|^4)$ with $\overline{\bs{\lambda}} \in (\bs{0},\bs{\lambda})$. Finally, it follows from a direct calculation in conjunction with (\[g\_der\_alpha\]) that $\nabla_{\alpha^3} h(\bs{y};\bs{\psi},\bs{\lambda})$ is bounded by the product of the derivatives of $f_v \left(\bs{x}\middle|\bs{z};\bs{\gamma}, \bs{\mu},\bs{v} \right)$ and an $O(|\bs{\lambda}|^3)$ term, and the required result follows. \ Notes: Based on 2000 replications with 399 bootstrapped samples. Model 1 is $\bs{\mu} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, $\bs{\Sigma}=\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$. 
Model 2 is $\bs{\mu} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, $\bs{\Sigma}=\begin{pmatrix} 1 & 0.5 \\ 0.5 & 1 \end{pmatrix}$. \ Notes: Based on 2000 replications with 399 bootstrapped samples. We set $a_n = 1$. Models 1, 2, and 3 are given in Table \[table2\]. \ Notes: Based on 1000 replications with 199 bootstrapped samples.\ Model 1: $\bs{\alpha} = \begin{pmatrix} 0.7 \\ 0.3 \end{pmatrix}$, $\bs{\mu}_1 = \begin{pmatrix} -1 \\ -1 \end{pmatrix}$, $\bs{\mu}_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$,  $\bs{\Sigma}_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$,  $\bs{\Sigma}_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$.\ Model 2: $\bs{\alpha} = \begin{pmatrix} 0.7 \\ 0.3 \end{pmatrix}$, $\bs{\mu}_1 = \begin{pmatrix} -2 \\ -2 \end{pmatrix}$, $\bs{\mu}_2 = \begin{pmatrix} 2 \\ 2 \end{pmatrix}$,  $\bs{\Sigma}_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$,  $\bs{\Sigma}_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$. \ Notes: Based on 1000 replications with 199 bootstrapped samples. Models 1 and 2 are given in Table \[table5\]. We set $a_n=1$.\ ![ Scatter plot of two physical measurements of flea beetles[]{data-label="figure1"}](flea.pdf){width="80.00000%"} \ Notes: Based on 199 bootstrapped samples.\ Notes: $\widehat{\bs{\mu}}_{j}$ and $\widehat{\bs{\Sigma}}_{j}$ for $j\in \{\text{Hep}, \text{Con}, \text{Hei}\}$ report the mean and the variance estimated from the subsample of observations that belong to “Heptapotamica,” “Concinna,” and “Heikertingeri,” respectively. \ Notes: Based on 199 bootstrapped samples.\ ![ Scatter plot of gene expression levels for the rats with and without middle-ear infection[]{data-label="figure2"}](rat.pdf){width="80.00000%"} [^1]: Address for correspondence: Hiroyuki Kasahara, Vancouver School of Economics, University of British Columbia, 997-1873 East Mall, Vancouver, BC V6T 1Z1, Canada.
The authors thank Pengfei Li and participants at the conference on Advances in Finite Mixture and other Non-regular Models, Guilin, China in 2018 for helpful comments and the Institute of Statistical Mathematics for the facilities and the use of SGI ICE X. This research is supported by the Natural Sciences and Engineering Research Council of Canada and JSPS Grant-in-Aid for Scientific Research (C) No. 26380267. [^2]: For the penalty function in our empirical applications, we set $a_n=n^{-1/2}$ for the null model and $a_n=1$ for the alternative model. [^3]: The data are originally from @lubischew62bio.
--- abstract: 'The sum of the $CP$-violating asymmetries $A_\Omega^{}$ and $A_\Lambda^{}$ in the decay sequence $\Omega\to\Lambda K,\,$ $\Lambda\to p\pi$ is presently being measured by the E871 experiment. We evaluate contributions to $A_\Omega^{}$ from the standard model and from possible new physics, and find them to be smaller than the corresponding contributions to $A_\Lambda^{}$, although not negligibly so. We also show that the partial-rate asymmetry in $\Omega\to\Lambda K$ is nonvanishing due to final-state interactions. Taking into account constraints from kaon data, we discuss how the upcoming result of E871 and future measurements may probe the various contributions to the observables.' author: - Jusak Tandean title: 'Probing $\bm{CP}$ Violation in $\bm{\Omega\to\Lambda K\to p\pi K}$ Decay' --- SMU-HEP-04-06 Introduction\[intro\] ===================== The question of the origin of $CP$ violation remains one of the outstanding puzzles in particle physics. Although $CP$ violation has now been seen in a number of processes in the kaon and $B$-meson systems [@cpx], it is still far from clear whether its explanation lies exclusively within the picture provided by the standard model [@buras1]. To pin down the sources of $CP$ violation, it is essential to observe it in many other processes. Hyperon nonleptonic decays provide an environment where it is possible to make additional observations of $CP$ violation [@hcpvt; @hypercp]. Currently, there are $CP$-violation searches in such processes being conducted by the HyperCP (E871) Collaboration at Fermilab. Its main reactions of interest are the decay chain $\,\Xi^-\to\Lambda\pi^-,\,$ $\,\Lambda\to p\pi^-\,$ and its antiparticle counterpart [@hypercp]. A different, but related, system also being studied by HyperCP involves the spin-$\frac{3}{2}$ hyperon $\Omega^-$, namely the sequence $\,\Omega^-\to\Lambda K^-,\,$ $\,\Lambda\to p\pi^-\,$ and its antiparticle process [@lu]. 
For each of these decays, the decay distribution in the rest frame of the parent hyperon with known polarization $\bm{w}$ has the form $$\frac{{\rm d}\Gamma}{{\rm d}\Omega} \,\sim\, 1 +\alpha\, \bm{w}\cdot\hat{\bm{p}} \,\,,$$ where ${\rm d}\Omega$ is the final-state solid angle, $\hat{\bm{p}}$ is the unit vector of the daughter-baryon momentum, and $\alpha$ is the parameter relevant to the $CP$ violation of interest. In the case of $\,\Omega\to\Lambda K\to p\pi K,\,$ the HyperCP experiment is sensitive to the [*sum*]{} of $CP$ violation in the $\Omega$ decay and $CP$ violation in the $\Lambda$ decay, measuring [@lu] $$\label{A_OL} A_{\Omega\Lambda}^{} \,=\, \frac{\alpha_\Omega^{} \alpha_\Lambda^{} - \alpha_{\overline{\Omega}} \alpha_{\overline{\Lambda}}} {\alpha_\Omega^{} \alpha_\Lambda^{} + \alpha_{\overline{\Omega}} \alpha_{\overline{\Lambda}}} \,\simeq\, A_\Omega^{} + A_\Lambda^{} \,\,,$$ where $$A_\Omega^{} \,\equiv\, \frac{\alpha_\Omega^{}+\alpha_{\overline{\Omega}}} {\alpha_\Omega^{}-\alpha_{\overline{\Omega}}} \,\,, \hspace{3em} A_\Lambda^{} \,\equiv\, \frac{\alpha_{\Lambda}^{}+\alpha_{\overline{\Lambda}}} {\alpha_{\Lambda}^{}-\alpha_{\overline{\Lambda}}}$$ are the $CP$-violating asymmetries in $\,\Omega\to\Lambda K\,$ and $\,\Lambda\to p\pi,\,$ respectively. Similarly, the observable it measures in $\,\Xi\to\Lambda\pi\to p\pi\pi\,$ is $\,A_{\Xi\Lambda}^{}\simeq A_\Lambda^{} + A_\Xi^{}\,$ [@hypercp]. On the theoretical side, $CP$ violation in $\,\Lambda\to p\pi\,$ and $\,\Xi\to\Lambda\pi\,$ has been extensively studied [@hcpvt; @dhp; @hcpvt'; @NP; @tv4; @t6]. In contrast, the literature on $CP$ violation in $\Omega$ decays is minimal, perhaps the only study being Ref. [@tv2] which deals with the partial-rate asymmetry in $\,\Omega\to\Xi\pi.\,$ There is presently no data available or experiment being done on this rate asymmetry. 
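The first-order relation $\,A_{\Omega\Lambda}\simeq A_\Omega + A_\Lambda\,$ in Eq. (\[A\_OL\]) is easy to check numerically; the sketch below uses toy values of the decay parameters and asymmetries, not measured ones (CP conservation corresponds to $\,\alpha_{\overline{\Omega}}=-\alpha_\Omega^{}$).

```python
# Illustrative check of A_{Omega Lambda} ~ A_Omega + A_Lambda for small
# asymmetries.  All alpha values and asymmetries are toy inputs; the
# antiparticle parameters are built to realize given A's exactly.
def alpha_bar(alpha, A):
    # invert A = (alpha + alpha_bar) / (alpha - alpha_bar)
    return alpha * (A - 1.0) / (A + 1.0)

a_O, a_L = 0.02, 0.64            # toy alpha_Omega, alpha_Lambda
A_O, A_L = 1e-3, 2e-3            # toy CP-violating asymmetries
ab_O, ab_L = alpha_bar(a_O, A_O), alpha_bar(a_L, A_L)

A_OL = (a_O * a_L - ab_O * ab_L) / (a_O * a_L + ab_O * ab_L)
print(A_OL, A_O + A_L)           # agree up to O(A^3) corrections
```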
In view of the upcoming measurement of $A_{\Omega\Lambda}^{}$ by HyperCP, it is important to have theoretical expectations of this observable. Clearly, the information to be gained from $A_{\Omega\Lambda}^{}$ will complement that from $A_{\Xi\Lambda}^{}$. Since the estimates of $A_\Lambda^{}$ and $A_\Xi^{}$ within and beyond the standard model (SM) have been updated very recently in Refs. [@tv4; @t6], in this paper we focus on $A_\Omega^{}$. We begin in Sec. \[observables\] by relating the observables of interest in $\,\Omega\to\Lambda K\,$ to the strong and $CP$-violating weak phases in the decay amplitudes. We discuss the role played by final-state interactions in this decay, which not only affect $A_\Omega^{}$, but also cause its partial-rate asymmetry to be nonvanishing, thereby providing another $CP$-violating observable. In Sec. \[strong\_phases\], we employ heavy-baryon chiral perturbation theory ($\chi$PT) to calculate $P$- and $D$-wave amplitudes for baryon-meson scattering in channels with isospin $\,I=\frac{1}{2}\,$ and strangeness $\,S=-2.\,$ We use the derived amplitudes in a coupled-channel $K$-matrix formalism to determine the strong parameters needed in evaluating the $CP$-violating asymmetries. In Sec. \[A\_sm\], we estimate the asymmetries within the standard model. Working in the framework of $\chi$PT, we calculate the weak phases by considering factorizable and nonfactorizable contributions to the matrix elements of the leading penguin operator. Subsequently, we compare the resulting $A_\Omega^{}$ with $A_\Lambda^{}$, which was previously evaluated, as both asymmetries appear in $A_{\Omega\Lambda}^{}$. In Sec. \[A\_np\], we address contributions to the $CP$-violating asymmetries from possible new physics, taking into account constraints from $CP$ violation in the kaon system. Specifically, we consider contributions induced by chromomagnetic-penguin operators, which in certain models can be enhanced compared to the SM effects. Sec. 
\[conclusion\] contains our conclusions. Observables and phases\[observables\] ===================================== The amplitudes for $\,\Omega^-\to\Lambda K^-\,$ and $\,\bar{\Omega}{}^+\to\bar{\Lambda}K^+\,$ each contain parity-conserving $P$-wave and parity-violating $D$-wave components, with the former being empirically known to be dominant [@pdb]. They are related to the parameters $\alpha_\Omega^{}$ and $\alpha_{\overline{\Omega}}$ by $$\begin{aligned} \label{alpha} \alpha_\Omega^{} \,=\, \frac{2\,{\rm Re}\bigl(p^*d\bigr)}{|p|^2+|d|^2} \,\,, \hspace{3em} \alpha_{\overline{\Omega}} \,=\, \frac{2\,{\rm Re}\bigl(\bar{p}{}^*\bar{d}\bigr)} {\bigl|\bar{p}\bigr|^2+\bigl|\bar{d}\bigr|^2} \,\,,\end{aligned}$$ where $p$ and $d$ $\bigl(\bar{p}$ and $\bar{d}\bigr)$ are the $P$- and $D$-wave components, respectively, for the $\Omega^-$ $\bigl(\bar{\Omega}{}^+\bigr)$ decay. Since both $\Omega$ and $\Lambda$ have $\,I=0,\,$ each of these decays is an exclusively $\,|\Delta I|=\frac{1}{2}\,$ transition. Before writing down the amplitudes in terms of their phases, we note that the strong phases in $\,\Omega\to\Lambda\bar{K}\,$ are not generated by the strong rescattering of $\Lambda\bar{K}$ alone. Watson’s theorem for elastic unitarity [@watson] does not apply here, though it does in the cases of $\,\Lambda\to p\pi\,$ and $\,\Xi\to\Lambda\pi.\,$ Final-state interactions also allow $\,\Omega\to\Xi\pi\to\Lambda\bar{K}\,$ to contribute, yielding additional strong phases as well as weak ones, because the channel $\,\Xi\pi\leftrightarrow\Lambda\bar{K}\,$ is open at the scattering energy $\,\sqrt{s}=m_\Omega.\,$ Since the $\,\Omega\to\Xi\pi,\Lambda\bar{K}\,$ rates overwhelmingly dominate the $\Omega$ width [@pdb], we expect other contributions via final-state rescattering to be negligible. The requirements of $CPT$ invariance and unitarity provide us with a relationship between the amplitudes for $\,\Omega\to B\phi\,$ and its antiparticle counterpart.
Thus, with ${\cal M}_{\Omega\to B\phi}^{(L)}$ denoting the amplitude corresponding to $B\phi$ being in a state with orbital angular momentum $L$, we have $$\begin{aligned} \label{M_Bphi} (-1)^{L+1}\, {\cal M}_{\overline{\Omega}\to\overline{\Lambda}K}^{(L)} \,=\, {\cal S}_{\Lambda\Lambda}^{(L)}\, {\cal M}_{\Omega\to\Lambda\bar{K}}^{(L)*} + {\cal S}_{\Lambda\Xi}^{(L)}\, {\cal M}_{\Omega\to\Xi\pi}^{(L)*} \,\,,\end{aligned}$$ where ${\cal S}_{BB'}^{(L)}$ is the element of the strong $S$-matrix associated with the $L$ partial-wave of $\,B\phi\to B'\phi',\,$ and only the $\,I=\frac{1}{2}\,$ component of the $\Xi\pi$ state is involved in the second term. Assuming that the $\Xi\pi$ and $\Lambda\bar{K}$ channels are the only ones open, we can express the $S$-matrix as [@pilkuhn] $$\begin{aligned} \label{S} {\cal S} \,=\, \left( \begin{array}{cc} \displaystyle {\cal S}_{\Xi\Xi}^{} & {\cal S}_{\Xi\Lambda}^{} \vspace{1ex} \\ \displaystyle {\cal S}_{\Lambda\Xi}^{} & {\cal S}_{\Lambda\Lambda}^{} \end{array} \right) \,=\, \left( \begin{array}{cc} \displaystyle \hat{\eta}\, {\rm e}^{2{\rm i}\delta_{\Xi\pi}} & {\rm i}\sqrt{1-\hat{\eta}^2}\; {\rm e}^{{\rm i}(\delta_{\Xi\pi}+\delta_{\Lambda K})} \vspace{1ex} \\ \displaystyle {\rm i}\sqrt{1-\hat{\eta}^2}\; {\rm e}^{{\rm i}(\delta_{\Xi\pi}+\delta_{\Lambda K})} & \hat{\eta}\, {\rm e}^{2{\rm i}\delta_{\Lambda K}} \end{array} \right) \,\,,\end{aligned}$$ where $\hat{\eta}$ is the inelasticity factor and $\delta_{B\phi}^{}$ denotes the phase shift in $\,B\phi\to B\phi.\,$ Clearly $\cal S$ is unitary, and each partial-wave has its own $\cal S$. Now, since $\hat{\eta}$ is expected to be close to and smaller than 1, it is convenient to introduce a parameter $\varepsilon$ defined by $$\begin{aligned} \hat{\eta} \,=\, 1-2\varepsilon \,\,, \end{aligned}$$ and so $\varepsilon$ is positive and small. 
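As a quick consistency check, the parametrization in Eq. (\[S\]) yields a unitary $\cal S$ for any real phase shifts and $\,0\le\hat{\eta}\le1$; the numerical values below are illustrative only.

```python
import numpy as np

# Unitarity check of the 2x2 strong S-matrix of Eq. (S).  The
# inelasticity and phase shifts are arbitrary toy values, not fits.
eta_hat = 0.95
d_Xi, d_Lam = np.deg2rad(4.0), np.deg2rad(-0.65)

off = 1j * np.sqrt(1 - eta_hat**2) * np.exp(1j * (d_Xi + d_Lam))
S = np.array([[eta_hat * np.exp(2j * d_Xi), off],
              [off, eta_hat * np.exp(2j * d_Lam)]])
print(np.allclose(S.conj().T @ S, np.eye(2)))    # S is unitary
```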
Consequently, for $\,L=1$ and $2$, to first order in $\sqrt{\varepsilon}$ we have [@wolfenstein'] $$\begin{aligned} \label{pd} \begin{array}{c} \displaystyle p \,=\, {\rm e}^{{\rm i}\delta_{\Lambda K}^P}\, \Bigl( p_\Lambda^{}\, {\rm e}^{{\rm i}\phi_\Lambda^P} + {\rm i}\sqrt{\varepsilon_P^{}}\,\, p_\Xi^{}\, {\rm e}^{{\rm i}\phi_\Xi^P} \Bigr) \,\,, \hspace{3em} d \,=\, {\rm e}^{{\rm i}\delta_{\Lambda K}^D}\, \Bigl( d_\Lambda^{}\, {\rm e}^{{\rm i}\phi_\Lambda^D} + {\rm i}\sqrt{\varepsilon_D^{}}\,\, d_\Xi^{}\, {\rm e}^{{\rm i}\phi_\Xi^D} \Bigr) \,\,, % \vspace{2ex} \\ \displaystyle % \bar{p} \,=\, {\rm e}^{{\rm i}\delta_{\Lambda K}^P}\, \Bigl( p_\Lambda^{}\, {\rm e}^{-{\rm i}\phi_\Lambda^P} + {\rm i}\sqrt{\varepsilon_P^{}}\,\, p_\Xi^{}\, {\rm e}^{-{\rm i}\phi_\Xi^P} \Bigr) \,\,, \hspace{3em} \bar{d} \,=\, -{\rm e}^{{\rm i}\delta_{\Lambda K}^D} \Bigl( d_\Lambda^{}\, {\rm e}^{-{\rm i}\phi_\Lambda^D} + {\rm i}\sqrt{\varepsilon_D^{}}\,\, d_\Xi^{}\, {\rm e}^{-{\rm i}\phi_\Xi^D} \Bigr) \,\,, \end{array}\end{aligned}$$ where $p_B^{}$ and $d_B^{}$ are real, associated with $\,\Omega\to B\phi,\,$ and $\phi_B^{P,D}$ denote the corresponding weak phases in the $\,|\Delta I|=\frac{1}{2}\,$ amplitudes. Putting together the results above, and keeping only the terms at lowest order in small quantities, we obtain $$\label{AO} A_\Omega^{} \,=\, - \tan \bigl( \delta_{\Lambda K}^P-\delta_{\Lambda K}^D \bigr) \, \sin \bigl( \phi_\Lambda^P-\phi_\Lambda^D \bigr) - \frac{p_\Xi^{}}{p_\Lambda^{}}\, \sqrt{\varepsilon_P^{}}\, \sin \bigl( 2\phi_\Lambda^P-\phi_\Xi^P-\phi_\Lambda^D \bigr) + \frac{d_\Xi^{}}{d_\Lambda^{}}\, \sqrt{\varepsilon_D^{}}\, \sin \bigl( \phi_\Lambda^P-\phi_\Xi^D \bigr) \,\,,$$ where we have made use of the expectation that $\delta_{\Lambda K}^{P,D}$, $\phi_{\Lambda,\Xi}^{P,D}$, and $d_B^{}/p_B^{}$ are also small. Unlike the strong phases in $\Lambda$ and $\Xi$ decays, there are no data currently available for $\delta_{\Lambda K}^{}$, and so we will calculate them here. 
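Equation (\[AO\]) can be checked numerically against the exact asymmetry built from Eqs. (\[alpha\]) and (\[pd\]); all magnitudes, strong phases, weak phases, and inelasticities in the sketch below are small toy inputs, not fitted values.

```python
import numpy as np

# Toy inputs (pL = p_Lambda, pX = p_Xi, etc.; phases in radians).
pL, dL, pX, dX = 3.7, 0.04, 2.0, 0.08
deltaP, deltaD = -0.011, 0.001          # strong phases delta_{Lambda K}^{P,D}
sqeP, sqeD = 0.013, 0.0009              # sqrt(eps_P), sqrt(eps_D)
fLP, fLD, fXP, fXD = 1e-4, -2e-4, 3e-4, -1e-4   # weak phases phi_{Lambda,Xi}^{P,D}

# Exact amplitudes of Eq. (pd) and asymmetry from Eq. (alpha).
p  = np.exp(1j*deltaP) * (pL*np.exp(1j*fLP) + 1j*sqeP*pX*np.exp(1j*fXP))
d  = np.exp(1j*deltaD) * (dL*np.exp(1j*fLD) + 1j*sqeD*dX*np.exp(1j*fXD))
pb = np.exp(1j*deltaP) * (pL*np.exp(-1j*fLP) + 1j*sqeP*pX*np.exp(-1j*fXP))
db = -np.exp(1j*deltaD) * (dL*np.exp(-1j*fLD) + 1j*sqeD*dX*np.exp(-1j*fXD))

alpha  = 2*np.real(np.conj(p)*d)   / (abs(p)**2 + abs(d)**2)
alphab = 2*np.real(np.conj(pb)*db) / (abs(pb)**2 + abs(db)**2)
A_exact = (alpha + alphab) / (alpha - alphab)

# Leading-order expression, Eq. (AO).
A_LO = (-np.tan(deltaP - deltaD)*np.sin(fLP - fLD)
        - (pX/pL)*sqeP*np.sin(2*fLP - fXP - fLD)
        + (dX/dL)*sqeD*np.sin(fLP - fXD))
print(A_exact, A_LO)   # agree up to higher order in the small quantities
```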
To estimate the weak phases $\phi_{\Lambda,\Xi}^{}$, we will consider contributions coming from the SM as well as from possible new physics. As for $p_B^{}$ and $d_B^{}$, we will extract their approximate values from data shortly, under the assumption of no final-state interactions and no $CP$ violation. Now, the presence of the $\sqrt{\varepsilon}$ terms with additional weak and strong phases in the decay amplitudes in Eq. (\[pd\]) implies that the rate of $\,\Omega\to\Lambda\bar{K},\,$ $$\begin{aligned} \label{width} \Gamma_{\Omega\to\Lambda\bar{K}}^{} \,=\, \frac{ \bigl|\bm{k}_\Lambda^{}\bigr| \, \bigl(E_\Lambda^{}+m_\Lambda^{}\bigr) }{12\pi\, m_\Omega^{}} \bigl( |p|^2 + |d|^2 \bigr) \,\,,\end{aligned}$$ evaluated in the rest frame of $\Omega$, is no longer identical to that of $\,\bar{\Omega}\to\bar{\Lambda}K.\,$ Hence these decays yield another $CP$-violating observable, namely the partial-rate asymmetry $$\begin{aligned} \label{deltao} \Delta_\Omega^{} \,=\, \frac{ \Gamma_{\Omega\to\Lambda\bar{K}}^{} - \Gamma_{\overline{\Omega}\to\overline{\Lambda}K}^{} } { \Gamma_{\Omega\to\Lambda\bar{K}}^{} + \Gamma_{\overline{\Omega}\to\overline{\Lambda}K}^{} } \,\,.\end{aligned}$$ It follows that to leading order $$\begin{aligned} \label{DeltaO} \Delta_\Omega^{} \,=\, \frac{2\,p_\Xi^{}}{p_\Lambda^{}}\, \sqrt{\varepsilon_P^{}}\,\, \sin \bigl( \phi_\Lambda^P-\phi_\Xi^P \bigr) \,\,.\end{aligned}$$ We will also estimate this asymmetry below.[^1] Since $\Delta_\Omega^{}$ results from the interference of $P$-wave amplitudes, a future measurement of it will probe $CP$ violation in the underlying parity-conserving interactions. We note that the strong parameters entering Eq. (\[DeltaO\]), and the second and third terms in Eq. (\[AO\]), are not the strong phases, but $\varepsilon_{P,D}^{}$. Before ending this section, we determine the values of $p_B^{}$ and $d_B^{}$ which are needed in Eqs. (\[AO\]) and (\[DeltaO\]), and also in evaluating the weak phases. 
To do so, we apply the measured values of $\alpha$ and $\Gamma$, as well as of the masses involved, in the corresponding formulas, as those in Eqs. (\[alpha\]) and (\[width\]), assuming that the strong and weak phases are zero. The experimental values of $\Gamma$ for $\,\Omega\to\Lambda\bar{K},\Xi\pi\,$ are well determined, but those of $\alpha$ are not [@pdb]. HyperCP is currently also measuring $\alpha_\Omega^{}$, in $\,\Omega\to\Lambda\bar{K},\,$ with much better precision, and has reported [@lu] preliminary results of $\,\alpha_\Omega^{}=(1.84\pm0.46\pm0.04)\times10^{-2}\,$ and $\,\alpha_\Omega^{}=(2.01\pm0.17\pm0.04)\times10^{-2}.\,$ Applying the PDG averaging procedure [@pdb] to all the experimental results, including the preliminary ones from HyperCP, yields the average $\,\alpha_\Omega^{}=0.020\pm0.002,\,$ which we adopt in the following. In the case of $\,\Omega\to\Xi\pi,\,$ we use the data given by the PDG [@pdb], and also $$\begin{aligned} \label{|Xp>} |\Xi\pi\rangle \,=\, \sqrt{\mbox{$\frac{2}{3}$}}\, \bigl| \Xi^0\pi^- \bigr\rangle + \mbox{$\frac{1}{\sqrt3}$}\, \bigl| \Xi^-\pi^0 \bigr\rangle\end{aligned}$$ to project out the $\,|\Delta I|=\frac{1}{2}\,$ amplitudes. Thus we extract $$\begin{aligned} \label{px,dx} \begin{array}{c} \displaystyle p_\Lambda^{} \,=\, 3.73 \pm 0.03 \,\,, \hspace{3em} d_\Lambda^{} \,=\, 0.037 \pm 0.004 \,\,, % \vspace{1ex} \\ \displaystyle % p_\Xi^{} \,=\, 2.00 \pm 0.03 \,\,, \hspace{3em} d_\Xi^{} \,=\, 0.08 \pm 0.12 \,\,, \end{array}\end{aligned}$$ all in units of $G_{\rm F}^{}m_{\pi^+}^2$, with $G_{\rm F}^{}$ being the Fermi coupling constant. Strong phases and inelasticity factors\[strong\_phases\] ======================================================== To calculate the strong parameters needed in Eq. (\[AO\]), we take a $K$-matrix approach [@pilkuhn]. 
Furthermore, we include the contributions of other $B\phi$ states with $\,I=\frac{1}{2}\,$ and $\,S=-2,\,$ namely $\Sigma\bar{K}$ and $\Xi\eta$, which are coupled to $\Lambda\bar{K}$ and $\Xi\pi$ through unitarity constraints. Although at $\,\sqrt{s}=m_\Omega^{}\,$ the $\Sigma\bar{K}$ and $\Xi\eta$ channels are below their thresholds, it is important to incorporate their contributions to the open ones. Such kinematically closed channels have been shown to have sizable influence on the open ones in some other cases [@unitary; @ttv]. The $K$ matrix for the four coupled channels can be written as $$\begin{aligned} K \,=\, K^{\rm T} \,=\, \left( \begin{array}{cccc} \displaystyle K_{\rm oo}^{} & K_{\rm oc}^{} \vspace{1ex} \\ \displaystyle K_{\rm co}^{} & K_{\rm cc}^{} \end{array} \right) \,\,,\end{aligned}$$ where the subscripts “o” and “c” refer to open and closed channels, respectively, at $\,\sqrt{s}=m_\Omega^{}.\,$ Thus $K_{\rm oo,oc,co,cc}$ are all 2$\times$2 matrices in this case and $\,K_{\rm co}^{}=K_{\rm oc}^{\rm T}.\,$ Now, it is convenient to introduce the matrix $$\begin{aligned} K_{\rm r}^{} \,=\, K_{\rm oo}^{} + {\rm i}K_{\rm oc}^{} \bigl( \openone-{\rm i}q_{\rm c}^{} K_{\rm cc}^{} \bigr)^{-1} q_{\rm c}^{} K_{\rm co}^{} \,\,,\end{aligned}$$ where $\openone$ is the 2$\times$2 unit matrix and $\,q_{\rm c}^{}={\rm diag} \bigl(k_{\Sigma\bar{K}},k_{\Xi\eta}^{}\bigr),\,$ with $k_{B\phi}^{}$ being the magnitude of the CM three-momentum in $B\phi$ scattering, implying that $k_{\Sigma\bar{K}}$ and $k_{\Xi\eta}^{}$ are purely imaginary at $\,\sqrt{s}=m_\Omega^{}.\,$ The elements of $\cal S$ in Eq. 
(\[S\]) can then be evaluated using [@pilkuhn] $$\begin{aligned} {\cal S} \,=\, \openone + 2{\rm i}\, q_{\rm o}^{1/2}\, K_{\rm r}^{} \bigl( \openone-{\rm i}q_{\rm o}^{} K_{\rm r}^{} \bigr)^{-1}\, q_{\rm o}^{1/2} \,\,,\end{aligned}$$ where $\,q_{\rm o}^{} = q_{\rm o}^{1/2} q_{\rm o}^{1/2} = {\rm diag} \bigl( k_{\Xi\pi}^{}, k_{\Lambda\bar{K}} \bigr) .\,$ For the $K$-matrix elements, we make the simplest approximation by adopting the partial-wave amplitudes $f_{B\phi\to B'\phi'}$ at leading order in chiral perturbation theory, namely $$\begin{aligned} \begin{array}{c} \displaystyle K_{\rm oo}^{} \,=\, \left( \begin{array}{cccc} \displaystyle f_{\Xi\pi\to\Xi\pi}^{} & f_{\Xi\pi\to\Lambda\bar{K}} \vspace{1ex} \\ \displaystyle f_{\Lambda\bar{K}\to\Xi\pi} & f_{\Lambda\bar{K}\to\Lambda\bar{K}} \end{array} \right) \,\,, % \vspace{2ex} \\ \displaystyle % K_{\rm oc}^{} \,=\, K_{\rm co}^{\rm T} \,=\, \left( \begin{array}{cccc} \displaystyle f_{\Xi\pi\to\Sigma\bar{K}} & f_{\Xi\pi\to\Xi\eta}^{} \vspace{1ex} \\ \displaystyle f_{\Lambda\bar{K}\to\Sigma\bar{K}} & f_{\Lambda\bar{K}\to\Xi\eta} \end{array} \right) \,\,, % \vspace{2ex} \\ \displaystyle % K_{\rm cc}^{} \,=\, \left( \begin{array}{cccc} \displaystyle f_{\Sigma\bar{K}\to\Sigma\bar{K}} & f_{\Sigma\bar{K}\to\Xi\eta} \vspace{1ex} \\ \displaystyle f_{\Xi\eta\to\Sigma\bar{K}} & f_{\Xi\eta\to\Xi\eta}^{} \end{array} \right) \,\,. \end{array}\end{aligned}$$ Before deriving them, we remark that time-reversal invariance of the strong interaction implies $\,f_{B\phi\to B'\phi'}=f_{B'\phi'\to B\phi}.\,$ The chiral Lagrangian that describes the interactions of the lowest-lying mesons and baryons is written down in terms of the lightest meson-octet, baryon-octet, and baryon-decuplet fields [@bsw; @JenMan]. 
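The closed-channel reduction and the reconstruction of $\cal S$ in the last two displayed equations can be sketched numerically; the $K$-matrix entries and momenta below are arbitrary toy numbers rather than the $\chi$PT values derived in this section.

```python
import numpy as np

# Toy sketch of the coupled-channel K-matrix reduction.  With real,
# symmetric K blocks and purely imaginary closed-channel momenta
# (q_c = i*kappa, kappa > 0), the reduced K_r stays real and symmetric,
# and the open-channel S-matrix built from it is unitary.
K_oo = np.array([[0.30, 0.10],
                 [0.10, 0.20]])
K_oc = np.array([[0.05, 0.02],
                 [0.04, 0.03]])
K_cc = np.array([[0.25, 0.06],
                 [0.06, 0.15]])
q_o = np.diag([1.0, 0.8])           # open-channel momenta (Xi pi, Lambda Kbar)
q_c = 1j * np.diag([0.5, 0.9])      # closed channels: purely imaginary momenta

one = np.eye(2)
K_r = K_oo + 1j * K_oc @ np.linalg.inv(one - 1j * q_c @ K_cc) @ q_c @ K_oc.T
sq = np.sqrt(q_o)
S = one + 2j * sq @ K_r @ np.linalg.inv(one - 1j * q_o @ K_r) @ sq

# Extract the parameters of Eq. (S) from the Lambda-Kbar diagonal element.
eta_hat = abs(S[1, 1])                      # inelasticity factor
delta_LK = 0.5 * np.angle(S[1, 1])          # phase shift delta_{Lambda K}
sqrt_eps = np.sqrt((1.0 - eta_hat) / 2.0)   # sqrt(epsilon)
print(np.allclose(S.conj().T @ S, one), eta_hat, delta_LK, sqrt_eps)
```

In the text, the same extraction is applied to the $P$- and $D$-wave $\cal S$-matrices obtained from the chiral amplitudes.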
The meson and baryon octets are collected into $3\times3$ matrices $\phi$ and $B$, respectively, and the decuplet fields are represented by the Rarita-Schwinger tensor $T_{abc}^\mu$, which is completely symmetric in its SU(3) indices ($a,b,c$). The octet mesons enter through the exponential $\,\Sigma=\xi^2=\exp({\rm i}\phi/f),\,$ where $f$ is the pion-decay constant. In the heavy-baryon formalism [@JenMan], the baryons in the chiral Lagrangian are described by velocity-dependent fields, $B_v^{}$ and $T_v^\mu$. For the strong interactions, the Lagrangian at lowest order in the derivative and $m_s^{}$ expansions is given by [@JenMan; @L2refs] $$\begin{aligned} \label{Ls} {\cal L}_{\rm s}^{} &=& \left\langle \bar{B}_v^{}\, {\rm i}v\cdot{\cal D} B_v^{} \right\rangle + 2 D \left\langle \bar{B}_v^{} S_v^\mu \left\{ {\cal A}_\mu^{}, B_v^{} \right\} \right\rangle + 2 F \left\langle \bar{B}_v^{} S_v^\mu \left[ {\cal A}_\mu^{}, B_v^{} \right] \right\rangle \nonumber \\ && \!\! -\,\, \bar{T}_v^\mu\, {\rm i}v\cdot{\cal D} T_{v\mu}^{} + \Delta m\, \bar{T}_v^\mu T_{v\mu}^{} + {\cal C} \left( \bar{T}_v^\mu {\cal A}_\mu^{} B_v^{} + \bar{B}_v^{} {\cal A}_\mu^{} T_v^\mu \right) \nonumber \\ && \!\! +\,\, \frac{b_D^{}}{2 B_0^{}} \left\langle \bar B_v^{} \left\{ \chi_+^{}, B_v^{} \right\} \right\rangle + \frac{b_F^{}}{2 B_0^{}} \left\langle \bar B_v^{} \left[ \chi_+^{}, B_v^{} \right] \right\rangle + \frac{b_0^{}}{2 B_0^{}} \left\langle \chi_+^{} \right\rangle \left\langle \bar B_v^{} B_v^{} \right\rangle \nonumber \\ && \!\! +\,\, \frac{c}{2 B_0^{}}\, \bar T_v^\mu \chi_+^{} T_{v\mu}^{} - \frac{c_0^{}}{2 B_0^{}} \left\langle \chi_+^{} \right\rangle \bar T_v^\mu T_{v\mu}^{} \,\,+\,\, \mbox{$\frac{1}{4}$} f^2 \left\langle \chi_+^{} \right\rangle \,\,+\,\, \cdots \,\,,\end{aligned}$$ where $\,\langle\cdots\rangle\,$ denotes $\,{\rm Tr}(\cdots)\,$ in flavor-SU(3) space, and we have shown only the relevant terms. 
In the first two lines, $S_v^{}$ is the spin operator and $\,{\cal A}_\mu^{}=\frac{\rm i}{2} \left( \xi\, \partial_\mu^{}\xi^\dagger - \xi^\dagger\, \partial_\mu^{}\xi \right),\,$ with further details given in Ref. [@atv]. The last two lines of ${\cal L}_{\rm s}^{}$ contain $\,\chi_+^{}=\xi^\dagger\chi\xi^\dagger+\xi\chi^\dagger\xi,\,$ with $\,\chi=2B_0^{} M=2B_0^{}\,{\rm diag}\bigl(m_u^{},m_d^{},m_s^{}\bigr),\,$ which explicitly breaks chiral symmetry. We will take the isospin limit $\,m_u^{}=m_d^{}\equiv\hat{m}\,$ and consequently $\,\chi={\rm diag}\bigl(m_\pi^2,m_\pi^2,2 m_K^2-m_\pi^2\bigr).\,$ The constants $D$, $F$, $\cal C$, $B_0^{}$, $b_{D,F,0}^{}$, $c$, $c_0^{}$ are free parameters which can be fixed from data. In the center-of-mass (CM) frame, the $P$-wave amplitude for $\,B\phi\to B'\phi'\,$ with total angular-momentum $J$ has the form $$\begin{aligned} \label{M(BphiB'phi')} {\cal M}_{B\phi\to B'\phi'}^{} &=& -8\pi\sqrt{s}\,\, \chi_{B'}^\dagger \left\{ \left[ f_{B\phi\to B'\phi'}^{(P,J=\frac{1}{2})} + 2 f_{B\phi\to B'\phi'}^{(P,J=\frac{3}{2})} \right] \hat{k}{}'\cdot\hat{k} + \left[ f_{B\phi\to B'\phi'}^{(P,J=\frac{1}{2})} - f_{B\phi\to B'\phi'}^{(P,J=\frac{3}{2})} \right] {\rm i}\bm{\sigma}\cdot\hat{k}{}'\times\hat{k} \right\} \chi_{B}^{} \,\,, \nonumber \\\end{aligned}$$ where $\sqrt{s}$ is the CM energy, $\chi_B^{}$ and $\chi_{B'}^{}$ are the Pauli spinors of the baryons, $\hat{k}$ and $\hat{k}{}'$ denote the unit vectors of the momenta of $B$ and $B'$, respectively, and $f_{B\phi\to B'\phi'}^{(P,J)}$ are the partial-wave amplitudes. At lowest order in $\chi$PT, the $\,J=\frac{3}{2}\,$ amplitude arises from the Lagrangian in Eq. (\[Ls\]), and the pertinent diagrams are displayed in Fig. \[Pwave\]. The amplitudes in the $\,I=\frac{1}{2}\,$ channels are then extracted using the $\,I=\frac{1}{2}\,$ states in Eq. 
(\[|Xp&gt;\]) and $$\begin{aligned} \begin{array}{c} \displaystyle \bigl| \Lambda\bar{K} \bigr\rangle \,=\, \bigl| \Lambda K^- \bigr\rangle \,\,, % \hspace{3em} % \bigl| \Sigma\bar{K} \bigr\rangle \,=\, \sqrt{\mbox{$\frac{2}{3}$}}\, \bigl| \Sigma^-\bar{K}{}^0 \bigr\rangle + \mbox{$\frac{1}{\sqrt3}$}\, \bigl| \Sigma^0 K^- \bigr\rangle \,\,, % \hspace{3em} % |\Xi\eta\rangle \,=\, \bigl| \Xi^-\eta \bigr\rangle \,\,, \end{array}\end{aligned}$$ which follow a phase convention consistent with the structure of the $\phi$ and $B_v^{}$ matrices. We write the results as $$\begin{aligned} \label{f(P)} f_{B\phi\to B'\phi'}^{(P,J=\frac{3}{2})} \,=\, -{\cal P}_{B\phi,B'\phi'}^{}\, \frac{k_{B\phi}^{} k_{B'\phi'}^{}\,\sqrt{m_B^{}m_{B'}^{}}} {4\pi\,f^2\,\sqrt{s}} \,\,,\end{aligned}$$ where the expressions for ${\cal P}_{B\phi,B'\phi}^{}$ corresponding to the four channels have been collected in Appendix \[PD\]. ![\[Pwave\]Diagrams contributing to the $P$-wave $\,J=\frac{3}{2}\,$ amplitude for $\,B\phi\to B'\phi'\,$ at leading order in $\chi$PT. In all figures, a dashed line denotes a meson field, a single (double) solid-line denotes an octet-baryon (decuplet-baryon) field, and each solid vertex is generated by ${\cal L}_{\rm s}^{}$ in Eq. (\[Ls\]).](T7fig1.eps) Since a $D$-wave amplitude has to be at least of second order in momentum, ${\cal O}\bigl(k^2\bigr)$, it cannot arise from the Lagrangian in Eq. (\[Ls\]) alone. Also required is the Lagrangian involving baryons at second order in the derivative expansion, namely $$\begin{aligned} \label{Ls'} {\cal L}_{\rm s}' \,=\, {-1\over 2 m_0^{}}\, \bar{B}_v^{}\, \bigl[ {\cal D}^2-(v\cdot{\cal D})^2 \bigr] B_v^{} \,+\, \frac{1}{2 m_0^{}}\, \bar{T}{}_v^\mu\, \bigl[{\cal D}^2-(v\cdot{\cal D})^2 \bigr] T_{v\mu}^{} \,\,+\,\, \cdots \,\,,\end{aligned}$$ where $m_0^{}$ is the octet-baryon mass in the chiral limit, and we have shown only the relevant terms. 
These are two of the relativistic-correction terms in the ${\cal O}\bigl(k^2\bigr)$ Lagrangian, and so their coefficients are fixed. In the CM frame, the $D$-wave amplitude for $\,B\phi\to B'\phi'\,$ has the form $$\begin{aligned} \label{M'(BphiB'phi')} {\cal M}_{B\phi\to B'\phi'}' &=& -8\pi\sqrt{s}\,\, \chi_{B'}^\dagger \left\{ \left[ 2\, f_{B\phi\to B'\phi'}^{(D,J=\frac{3}{2})} + 3\, f_{B\phi\to B'\phi'}^{(D,J=\frac{5}{2})} \right] \Bigl[ \mbox{$\frac{3}{2}$} \bigl( \hat{k}{}'\cdot \hat{k}\bigr)^2 - \mbox{$\frac{1}{2}$} \Bigr] \right. \nonumber\\ && \hspace*{6em} + \left. \left[ f_{B\phi\to B'\phi'}^{(D,J=\frac{3}{2})} - f_{B\phi\to B'\phi'}^{(D,J=\frac{5}{2})} \right] \bigl( 3\hat{k}{}'\cdot\hat{k} \bigr)\, {\rm i}\bm{\sigma}\cdot\hat{k}{}'\times\hat{k} \right\} \chi_B^{} \,\,.\end{aligned}$$ The leading nonzero contribution to this amplitude for $\,J=\frac{3}{2}\,$ comes from the diagrams shown in Fig. \[Dwave\]. The resulting $\,I=\frac{1}{2}\,$ partial-wave amplitudes are given by $$\begin{aligned} \label{f(D)} f_{B\phi\to B'\phi'}^{(D,J=\frac{3}{2})} \,=\, -{\cal D}_{B\phi,B'\phi'}^{}\, \frac{k_{B\phi}^2 k_{B'\phi'}^2\, \sqrt{m_B^{}m_{B'}^{}}}{ 4\pi\,f^2\, m_0^{}\, \sqrt{s}} \;,\end{aligned}$$ where the expressions for ${\cal D}_{B\phi,B'\phi'}^{}$ corresponding to the four channels have also been collected in Appendix \[PD\].

![\[Dwave\]Diagrams for the leading nonzero contribution to the $D$-wave $\,J=\frac{3}{2}\,$ amplitude for $\,B\phi\to B'\phi'.\,$ Each hollow vertex is generated by ${\cal L}_{\rm s}'$ in Eq.
(\[Ls'\]).](T7fig2.eps)

Numerically, we adopt the tree-level values $\,D=0.80\,$ and $\,F=0.50,\,$ extracted from hyperon semileptonic decays [@JenMan], as well as $\,{\cal C}=-1.7,\,$ from the strong decays $\,T\to B\phi.$[^2] We also employ $\,f=f_\pi^{}=92.4\,{\rm MeV},\,$ $\,m_0^{}=0.7\,\rm GeV,$[^3] and the isospin-averaged masses $$\begin{aligned} \label{masses} \begin{array}{c} \displaystyle m_\pi^{} \,=\, 137.3 \,\,, \hspace{2em} m_K^{} \,=\, 495.7 \,\,, \hspace{2em} m_\eta^{} \,=\, 547.3 \,\,, \hspace{2em} % \vspace{1ex} \\ \displaystyle % m_N^{} \,=\, 938.9 \,\,, \hspace{2em} m_\Lambda^{} \,=\, 1115.7 \,\,, \hspace{2em} m_\Sigma^{} \,=\, 1193.2 \,\,, \hspace{2em} m_\Xi^{} \,=\, 1318.1 \,\,, \hspace{2em} % \vspace{1ex} \\ \displaystyle % m_\Delta^{} \,=\, 1232.0 \,\,, \hspace{2em} m_{\Sigma^*}^{} \,=\, 1384.6 \,\,, \hspace{2em} m_{\Xi^*}^{} \,=\, 1533.4 \,\,, \hspace{2em} m_\Omega^{} \,=\, 1672.5 \,\,, \end{array}\end{aligned}$$ all in units of MeV. Thus, putting together all the results above and setting $\,\sqrt{s}=m_\Omega^{},\,$ from the $P$- and $D$-wave $\cal S$-matrices we obtain $$\begin{aligned} \label{deltaPD} \begin{array}{c} \displaystyle \delta_{\Lambda K}^{P} \,=\, -0.65^\circ \,\,, \hspace{3em} \sqrt{\varepsilon_P^{}} \,=\, 0.013 \,\,, % \hspace{3em} % \delta_{\Lambda K}^{D} \,=\, +0.05^\circ \,\,, \hspace{3em} \sqrt{\varepsilon_D^{}} \,=\, 0.0009 \,\,, \end{array} \end{aligned}$$ which are pertinent to Eqs. (\[AO\]) and (\[DeltaO\]). The closed channels turn out to have significant effects on $\delta_{\Lambda K}^{P}$ and $\varepsilon_P^{}$. Excluding the $\Sigma\bar{K}$ and $\Xi\eta$ channels would lead to $\,\delta_{\Lambda K}^{P}=-2.7^\circ\,$ and $\,\sqrt{\varepsilon_P^{}}=0.065.\,$ The closed channels have minor effects on the $D$-wave parameters. Since the numbers in Eq.
(\[deltaPD\]) proceed from the leading nonzero amplitudes in $\chi$PT, part of the uncertainties in these predictions comes from our lack of knowledge about the higher-order contributions, which are presently incalculable. To get an idea of how they might affect our results, we redo the calculation using the one-loop values $\,D=0.61,\,$ $\,F=0.40,\,$ and $\,{\cal C}=-1.2\,$ [@JenMan; @bss], finding $\,\delta_{\Lambda K}^{P}=-0.47^\circ,\,$ $\,\sqrt{\varepsilon_P^{}}=0.010,\,$ $\,\delta_{\Lambda K}^{D}=+0.03^\circ,\,$ and $\,\sqrt{\varepsilon_D^{}}=0.0003.\,$ The differences between the two sets of results then provide an indication of the size of this part of the uncertainties. Another part is due to our lack of knowledge about the reliability of our $K$-matrix approximation. A comparison of $K$-matrix results in $\Lambda\pi$ scattering with experiment suggests that this approach gives results of the correct order of magnitude and sign [@ttv; @dukes]. For these reasons, we may conclude that $$\begin{aligned} \label{DeltaPD} -0.9^\circ \,\le\, \delta_{\Lambda K}^P-\delta_{\Lambda K}^D \,\le\, -0.5^\circ \,\,, \hspace{2em} 0.01 \,\le\, \sqrt{\varepsilon_P^{}} \,\le\, 0.02 \,\,, \hspace{2em} 0.0003 \,\le\, \sqrt{\varepsilon_D^{}} \,\le\, 0.002 \,\,. \hspace*{2em}\end{aligned}$$ We will employ these numbers in evaluating the asymmetries.

$\bm{CP}$-violating asymmetries within the standard model\[A\_sm\]
==================================================================

To calculate the $CP$-violating phases, we will work in the framework of heavy-baryon $\chi$PT.
The amplitude for the weak decay $\,\Omega\to B\phi\,$ in the heavy-baryon approach has the general form $$\begin{aligned} \label{iM} {\rm i} {\cal M}_{\Omega\to B\phi}^{} \,=\, -{\rm i} \bigl\langle B\phi \bigr| {\cal L} \bigl| \Omega \bigr\rangle \;=\; \bar{u}_{B}^{} \left( \vphantom{|_|^|} {\cal A}_{B\phi}^{(P)}+2S_v^{}\cdot k_\phi^{}\,{\cal A}_{B\phi}^{(D)}\, \right) k_\phi^{}\cdot{u}_\Omega^{} \;,\end{aligned}$$ where $k_\phi^{}$ is the four-momentum of $\phi$, and the superscripts refer to the $P$- and $D$-wave components of the amplitude. In the rest frame of $\Omega$, these components are related to the $p$ and $d$ amplitudes by $$\begin{aligned} \label{p,d} p \;=\; \bigl| \bm{k}_\phi^{} \bigr| \, {\cal A}^{(P)} \;, \hspace{3em} d \;=\; \bm{k}_\phi^2\, {\cal A}^{(D)} \;.\end{aligned}$$ We will follow the usual prescription for estimating a weak phase [@dhp; @hcpvt'; @tv4], namely, first calculating the imaginary part of the amplitude and then dividing it by the real part of the amplitude extracted from experiment under the assumption of no strong phases and no $CP$ violation. Within the SM, the weak interactions responsible for hyperon nonleptonic decays are described by the short-distance effective $\,|\Delta S|=1\,$ Hamiltonian [@BucBL] $$\begin{aligned} \label{Hw_sm} {\cal H}_{\rm w}^{} \,=\, \frac{G_{\rm F}^{}}{\sqrt2}\, V_{ud}^* V_{us}^{}\, \sum_{i=1}^{10} C_i^{}\, Q_i^{} \,\,+\,\, {\rm H.c.} \,\,,\end{aligned}$$ where $V_{kl}^{}$ are the elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix [@ckm], $$\begin{aligned} C_i^{} \,\equiv\, z_i^{} + \tau y_i^{} \,\equiv\, z_i^{} \,-\, \frac{V_{td}^* V_{ts}^{}}{V_{ud}^* V_{us}^{}}\,\, y_i^{}\end{aligned}$$ are the Wilson coefficients, and $Q_i^{}$ are four-quark operators whose expressions can be found in Ref. [@BucBL]. In this case, the weak phases $\phi^{P,D}$ of Eq.
(\[AO\]) proceed from the $CP$-violating phase residing in the CKM matrix, and its elements appearing in $C_i^{}$ above can be expressed in the Wolfenstein parametrization [@wolfenstein] as $$\begin{aligned} V_{ud}^* V_{us}^{} \,=\, \lambda \,\,, \hspace{3em} V_{td}^* V_{ts}^{} \,=\, -A^2 \lambda^5\, (1-\rho+{\rm i}\eta)\end{aligned}$$ at lowest order in $\lambda$. As is well known, ${\cal H}_{\rm w}^{}$ transforms mainly as $\bigl(8_{\rm L}^{},1_{\rm R}^{}\bigr)\oplus \bigl(27_{\rm L}^{},1_{\rm R}^{}\bigr)$ under SU(3$)_{\rm L}^{}$$\times$SU(3$)_{\rm R}^{}$ rotations. It is also known from experiment that the octet term dominates the 27-plet term [@dgh]. We, therefore, assume in what follows that within the SM the decays of interest are completely characterized by the $(8_{\rm L}^{},1_{\rm R}^{})$, $\,|\Delta I|=\frac{1}{2}\,$ interactions. The leading-order chiral Lagrangian for such interactions is [@bsw; @jenkins2] $$\begin{aligned} \label{Lw_sm} {\cal L}_{\rm w}^{} &=& h_D^{} \left\langle \bar B_v^{} \left\{ \xi^\dagger h \xi\,,\,B_v^{} \right\} \right\rangle + h_F^{} \left\langle \bar B_v^{} \left[ \xi^\dagger h \xi\,,\,B_v^{} \right] \right\rangle + h_C^{}\, \bar T_v^\mu\, \xi^\dagger h \xi\, T_{v\mu}^{} \,\,+\,\, {\rm H.c.} \,\,,\end{aligned}$$ where the 3$\times$3-matrix $h$ selects out $\,s\to d\,$ transitions, having elements $\,h_{kl}^{}=\delta_{k2}^{}\delta_{3l}^{},\,$ and the parameters $h_{D,F,C}^{}$ contain the weak phases of interest. These phases are induced primarily by the imaginary part of $C_6^{}$ associated with the penguin operator $Q_6^{}$, and this is due to its chiral structure and the relative size of ${\rm Im}\,C_6^{}$. In order to relate the imaginary part of $h_{D,F,C}^{}$ to ${\rm Im}\,C_6^{}$, we use the results of Ref. [@tv4], obtained from factorizable and nonfactorizable contributions. 
Accordingly, we have $$\begin{aligned} \label{Imh} {\rm Im}\, h_D^{} \,=\, 5.14 \,\, y_6^{} \,\,, % \hspace{2em} % {\rm Im}\, h_F^{} \,=\, -14.3 \,\, y_6^{} \,\,, % \hspace{2em} % {\rm Im}\, h_C^{} \,=\, 32.5 \,\, y_6^{} \,\,,\end{aligned}$$ all in units of $\,\sqrt2\,f_\pi^{} G_{\rm F}^{} m_{\pi^+}^2\,A^2\lambda^5\eta.\,$ From ${\cal L}_{\rm w}^{}$ together with ${\cal L}_{\rm s}^{}$, we can derive the diagrams displayed in Fig. \[Pwave\_sm\], which represent the leading-order contributions to the $P$-wave transitions in $\,\Omega^-\to\Lambda\bar{K},\Xi\pi\,$ and yield the amplitudes $$\begin{aligned} \label{A_P} \begin{array}{c} \displaystyle {\cal A}_{\Lambda\bar{K}}^{(P)} \,=\, \frac{{\cal C}\, \bigl( h_D^{}-3h_F^{} \bigr) } {2\sqrt{3}\, f\, \bigl( m_\Xi^{}-E_{\Lambda}^{} \bigr) } - \frac{{\cal C}\, h_C^{}} {2\sqrt{3}\,f\,\bigl(m_{\Omega}^{}-m_{\Xi^*}^{}\bigr)} \,\,, % \vspace{2ex} \\ \displaystyle % {\cal A}_{\Xi\pi}^{(P)} \,=\, \sqrt{\mbox{$\frac{2}{3}$}}\, {\cal A}_{\Xi^0\pi^-}^{(P)} + \mbox{$\frac{1}{\sqrt3}$}\, {\cal A}_{\Xi^-\pi^0}^{(P)} \,=\, \frac{-{\cal C}\, h_C^{}} {2\sqrt3\, f\, \bigl( m_{\Omega}^{}-m_{\Xi^*}^{} \bigr) } \,\,. \end{array}\end{aligned}$$ ![\[Pwave\_sm\]Diagrams representing standard-model contributions to the leading-order $P$-wave amplitude for $\,\Omega^-\to B\phi.\,$ Each square represents a weak vertex generated by ${\cal L}_{\rm w}^{}$ in Eq. (\[Lw\_sm\]).](T7fig3.eps) Applying Eq. (\[Imh\]) in $\,p_{B\phi}^{}=\bigl|\bm{k}_\phi^{}\bigr|\,{\cal A}_{B\phi}^{(P)}\,$ then leads to $$\begin{aligned} \label{Imp_sm/px} \frac{{\rm Im}\,p_{\Lambda\bar{K}}^{}}{p_\Lambda^{\rm expt}} \,=\, -1.15\,\, A^2 \lambda^5\eta\,\, y_6^{} \,\,, \hspace{3em} \frac{{\rm Im}\,p_{\Xi\pi}^{}}{p_\Xi^{\rm expt}} \,=\, +23.6\,\, A^2 \lambda^5\eta\,\, y_6^{} \,\,,\end{aligned}$$ where $p_{\Lambda,\Xi}^{\rm expt}$ are the central values of $p_{\Lambda,\Xi}^{}$ in Eq. (\[px,dx\]). 
The uncertainties in these predictions are due to our neglect of higher-order terms that are presently incalculable and to our lack of knowledge on the reliability of the matrix-element calculation. Therefore, we assign an error of 100$\%$ to these ratios, as was similarly done in Ref. [@tv4] for the weak phases in $\,\Lambda\to p\pi\,$ and $\,\Xi\to\Lambda\pi.\,$ Thus, using $\,A^2 \lambda^5\eta = 1.26\times10^{-4}\,$ and $\,y_6^{} = -0.096,\,$ as in Ref. [@tv4], we obtain $$\begin{aligned} \label{phiP_sm} \phi_\Lambda^P \,=\, (1.4\pm 1.4)\times 10^{-5} \,\,, \hspace{3em} \phi_\Xi^P \,=\, (-2.9\pm 2.9)\times 10^{-4} \,\,.\end{aligned}$$ The $\phi_\Xi^P$ result is comparable in size to that estimated in Ref. [@tv2] using the vacuum-saturation method.[^4] Turning now to the $D$-wave phases, we note that the expression for the ${\cal A}^{(D)}$ term in Eq. (\[iM\]) implies that ${\cal L}_{\rm w}^{}$, in conjunction with ${\cal L}_{\rm s}^{}$ and ${\cal L}_{\rm s}'$, cannot solely give rise to diagrams for the $D$-wave components. Rather, the weak Lagrangian that can generate the leading nonzero contributions to this term must have the Dirac structure $\,\bar{B}{}_v^{}S_v^\mu\, \partial_\mu^{}{\cal A}_\alpha^{}\, T_v^\alpha,\,$ which is of ${\cal O}\bigl(k^2\bigr)$. The $D$-wave amplitude at ${\cal O}\bigl(k^2\bigr)$ can also receive contributions from so-called tadpole diagrams, each being a combination of a strong $\,\Omega B\phi\bar{K}$ vertex, generated by a Lagrangian having the structure $\,\bar{B}{}_v^{}S_v^\mu{\cal A}_\mu^{}{\cal A}_\alpha^{}T_v^\alpha,\,$ and a $\bar{K}$-vacuum vertex coming from a weak Lagrangian of ${\cal O}\bigl(m_s^{}\bigr)$. Unfortunately, at present the parameters of these strong and weak Lagrangians of ${\cal O}\bigl(k^2\bigr)$ are incalculable. The best that we can do is to make a crude estimate based on naive dimensional analysis [@nda]. 
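The standard-model $P$-wave phases quoted in Eq. (\[phiP\_sm\]) above follow from folding the ratios in Eq. (\[Imp\_sm/px\]) with the CKM inputs; the following is a minimal numerical sketch of that arithmetic (variable names are ours):

```python
# CKM and Wilson-coefficient inputs quoted in the text (values as in Ref. [tv4])
A2_lam5_eta = 1.26e-4    # A^2 lambda^5 eta
y6 = -0.096              # y_6, governing Im C_6

# Central values of the ratios Im p / p^expt from Eq. (Imp_sm/px)
phi_Lam_P = -1.15 * A2_lam5_eta * y6    # phi_Lambda^P, ~ +1.4e-5
phi_Xi_P = 23.6 * A2_lam5_eta * y6      # phi_Xi^P,     ~ -2.9e-4
print(f"phi_Lambda^P = {phi_Lam_P:.1e}, phi_Xi^P = {phi_Xi_P:.1e}")
```

Rounding these central values to two digits reproduces the numbers quoted in Eq. (\[phiP\_sm\]); the $\pm100\%$ errors are then attached as discussed above.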
Thus, since the lowest-order chiral Lagrangian yielding $p_{B\phi}^{}$ is of ${\cal O}(1)$, whereas that yielding $d_{B\phi}^{}$ is of ${\cal O}\bigl(k^2\bigr)$, and since $\,k\sim m_s^{}\,$ in hyperon nonleptonic decays, we expect that $$\begin{aligned} \label{d/p} \frac{d_{B\phi}^{}}{p_{B\phi}^{}} \,\sim\, \frac{m_s^2}{\Lambda_\chi^2} \,\,,\end{aligned}$$ where $\,\Lambda_\chi\sim4\pi f\,$ is the chiral-symmetry breaking scale. It is worth remarking here that for $\,m_s^{}\sim 0.12\,\rm GeV\,$ [@buras2] this naive expectation is compatible with the value of $d_\Lambda^{}/p_\Lambda^{}$ from Eq. (\[px,dx\]), in which the $d_\Lambda^{}$ number is determined largely by the preliminary data from HyperCP [@lu]. For these reasons, we make the approximation $$\begin{aligned} \phi^D \,=\, \frac{{\rm Im}\,d}{d^{\rm expt}} \,=\, \frac{m_s^2}{\Lambda_\chi^2}\, \frac{p^{\rm expt}}{d^{\rm expt}}\, \phi^P\end{aligned}$$ for the magnitude of the phase, where $\phi^P$ comes from Eq. (\[phiP\_sm\]). Since $d_\Xi^{}$ as quoted in Eq. (\[px,dx\]) is poorly determined, we take the further approximation $\,d_\Xi^{}=p_\Xi^{} d_\Lambda^{}/p_\Lambda\,$ for its magnitude in order to estimate $\phi_\Xi^D$. All this leads to $$\begin{aligned} \label{phiD_sm} \phi_\Lambda^D \,=\, (0\pm3)\times 10^{-5} \,\,, \hspace{3em} \phi_\Xi^D \,=\, (0\pm6)\times 10^{-4} \,\,.\end{aligned}$$ The errors that we quote in $\phi_B^{P,D}$ are obviously not Gaussian and simply indicate the ranges resulting from our calculation. Putting together the numbers from Eqs. (\[px,dx\]), (\[DeltaPD\]), (\[phiP\_sm\]), and (\[phiD\_sm\]) in Eq. (\[AO\]) yields $$\begin{aligned} -9\times10^{-6} \,\le\, A_\Omega^{} \,\le\, +2\times10^{-6} \,\,.\end{aligned}$$ We note that the second term on the right-hand side of Eq. (\[AO\]), which would vanish if the $\,\Xi\pi\leftrightarrow\Lambda\bar{K}\,$ rescattering were ignored, has turned out to be the largest one. 
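The dimensional estimate in Eq. (\[d/p\]) is easily made concrete; this sketch uses the inputs quoted above, with the usual convention $\Lambda_\chi = 4\pi f$:

```python
from math import pi

f_pi = 0.0924                 # f = f_pi in GeV
m_s = 0.12                    # strange-quark mass in GeV, as in Ref. [buras2]
Lambda_chi = 4 * pi * f_pi    # chiral-symmetry breaking scale, ~1.16 GeV

# Naive expectation for d/p from Eq. (d/p): m_s^2 / Lambda_chi^2 ~ 0.011
ratio = (m_s / Lambda_chi) ** 2
print(f"Lambda_chi = {Lambda_chi:.3f} GeV, d/p ~ {ratio:.4f}")
```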
This is due to $\phi_\Xi^P$ and $\varepsilon_P^{}$ being much larger than $\phi_\Lambda^{P,D}$ and $\varepsilon_D^{}$, respectively, as well as to $\delta_{\Lambda K}^{P,D}$ being small. For the partial-rate asymmetry in Eq. (\[DeltaO\]), we find $$\begin{aligned} \label{Delta_O^sm} 0 \,\le\, \Delta_\Omega^{} \,\le\, 13\times10^{-6} \,\,.\end{aligned}$$ This is comparable to the corresponding asymmetry in $\,\Omega\to\Xi\pi\,$ [@tv2], but larger than those in octet-hyperon decays [@dhp]. Since the asymmetry measured by HyperCP is the sum $\,A_{\Omega\Lambda}^{}=A_\Omega^{}+A_\Lambda^{},\,$ it is important to know how $A_\Omega^{}$ compares with $A_\Lambda^{}$. The SM contribution to $A_\Lambda^{}$ has been evaluated most recently to be $\,-3\times10^{-5}\le A_\Lambda^{}\le 4\times10^{-5}\,$ [@tv4]. Thus within the standard model $A_\Omega^{}$ is smaller than $A_\Lambda^{}$, but not negligibly so, and the resulting $A_{\Omega\Lambda}^{}$ has a value within the range $$\begin{aligned} \label{A_OL^sm} -4\times10^{-5} \,\le\, A_{\Omega\Lambda}^{} \,\le\, 4\times10^{-5} \,\,.\end{aligned}$$ For this observable, HyperCP expects to have a statistical precision of $\,9\times10^{-2}\,$ [@lu], and so its measurement is unlikely to be sensitive to the SM effects.

$\bm{CP}$-violating asymmetries due to new physics\[A\_np\]
===========================================================

Here we evaluate $A_\Omega^{}$ and $\Delta_\Omega^{}$ arising from possible physics beyond the standard model. In particular, we consider contributions generated by the chromomagnetic-penguin operators (CMO), which in some new-physics models could be significantly larger than their SM counterparts [@NP; @sdg; @buras3].
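The standard-model range quoted in Eq. (\[A\_OL\^sm\]) above is simply interval arithmetic on the two input ranges; a sketch of that bookkeeping (rounding outward to one digit reproduces the quoted bounds):

```python
# Ranges quoted in the text for the individual asymmetries
A_Omega = (-9e-6, 2e-6)    # SM range for A_Omega found above
A_Lambda = (-3e-5, 4e-5)   # SM range for A_Lambda from Ref. [tv4]

# Endpoint sums give the combined range for A_OmegaLambda = A_Omega + A_Lambda
A_OL_lo = A_Omega[0] + A_Lambda[0]   # -3.9e-5, rounded outward to -4e-5
A_OL_hi = A_Omega[1] + A_Lambda[1]   # +4.2e-5, rounded outward to +4e-5
print(f"A_OmegaLambda in [{A_OL_lo:.1e}, {A_OL_hi:.1e}]")
```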
The relevant effective Hamiltonian can be written as [@buras3] $$\begin{aligned} \label{Hw_np} {\cal H}_{{\rm w},g}^{} \,=\, C_{g}^{}\, Q_g^{} \,+\, \tilde{C}_{g}^{}\, \tilde{Q}{}_g^{} \,\,+\,\, {\rm H.c.} \,\,,\end{aligned}$$ where $C_{g}^{}$ and $\tilde{C}_{g}^{}$ are the Wilson coefficients, and $$\begin{aligned} Q_{g}^{} \,=\, \frac{g_{\rm s}^{}}{16\pi^2}\, \bar{d}\, \sigma^{\mu\nu} t^a\, \bigl(1+\gamma_5^{}\bigr) s\, G_{\!\mu\nu}^{a} \,\,, \hspace{3em} \tilde{Q}{}_{g}^{} \,=\, \frac{g_{\rm s}^{}}{16\pi^2}\, \bar{d}\, \sigma^{\mu\nu} t^a\, \bigl(1-\gamma_5^{}\bigr) s\, G_{\!\mu\nu}^{a}\end{aligned}$$ are the CMO, with $G_a^{\!\mu\nu}$ being the gluon field-strength tensor, $g_{\rm s}^{}$ the gluon coupling constant, and $\,{\rm Tr}\bigl(t^a t^b\bigr)=\frac{1}{2}\delta^{ab}.\,$ Since various new-physics scenarios may contribute differently to the coefficients of the operators, we will not focus on specific models, but will instead adopt a model-independent approach, only assuming that the contributions are potentially sizable, in order to estimate bounds on the resulting asymmetries as allowed by constraints from kaon measurements. The chiral Lagrangian proceeding from the CMO has to respect their symmetry properties. Under $\,\rm SU(3)_{L}^{}$$\times$$\rm SU(3)_{R}^{}\,$ rotations $Q_g^{}$ and $\tilde{Q}{}_g^{}$ transform as $\,\bigl(\bar{3}{}_{\rm L}^{},3_{\rm R}^{}\bigr)\,$ and $\,\bigl(3_{\rm L}^{},\bar{3}{}_{\rm R}^{}\bigr),\,$ respectively. Moreover, under a $CPS$ transformation (a $CP$ operation followed by interchanging the $s$ and $d$ quarks) $Q_g^{}$ and $\tilde{Q}{}_g^{}$ change into each other. These symmetry properties are also those of the quark densities $\,\bar{d}(1\pm\gamma_5^{})s,\,$ of which the lowest-order chiral realization has been derived in Ref. [@tv4]. 
From this realization, we can infer the leading-order chiral Lagrangian induced by the CMO, namely $$\begin{aligned} \label{Lw_np} {\cal L}_{{\rm w},g}^{} &=& \beta_D^{} \left\langle \bar{B}{}_v^{} \left\{ \xi^\dagger h\xi^\dagger, B_v^{} \right\} \right\rangle + \beta_F^{} \left\langle \bar{B}{}_v^{} \left[ \xi^\dagger h\xi^\dagger, B_v^{} \right] \right\rangle + \beta_0^{} \left\langle h\Sigma^\dagger \right\rangle \left\langle \bar{B}{}_v^{} B_v^{} \right\rangle \nonumber \\ && \!\! +\,\, \tilde\beta_D^{} \left\langle \bar{B}{}_v^{} \left\{ \xi h\xi,B_v^{} \right\} \right\rangle + \tilde\beta_F^{} \left\langle \bar{B}{}_v^{} \left[ \xi h\xi,B_v^{} \right] \right\rangle + \tilde\beta_0^{} \left\langle h\Sigma \right\rangle \left\langle \bar{B}{}_v^{} B_v^{} \right\rangle \nonumber \\ && \!\! +\,\, \beta_C^{}\, \bar{T}{}_v^\alpha\, \xi^\dagger h\xi^\dagger\, T_{v\alpha}^{} - \beta_0' \left\langle h\Sigma^\dagger \right\rangle \bar{T}{}_v^\alpha T_{v\alpha}^{} + \tilde\beta_C^{}\, \bar{T}{}_v^\alpha\, \xi h\xi\, T_{v\alpha}^{} - \tilde\beta_0' \left\langle h\Sigma \right\rangle \bar{T}{}_v^\alpha T_{v\alpha}^{} \nonumber \\ && \!\! +\,\, \beta_\varphi^{}\, f^2 B_0^{} \left\langle h\Sigma^\dagger \right\rangle \,\,+\,\, \tilde\beta_\varphi^{}\, f^2 B_0^{} \left\langle h\Sigma \right\rangle \,\,+\,\, {\rm H.c.} \,\,,\end{aligned}$$ where $\beta_i^{}$ $\bigl(\tilde\beta_i^{}\bigr)$ are parameters containing the coefficient $C_g^{}$ $\bigl(\tilde{C}_g^{}\bigr)$. The part of this Lagrangian without the decuplet-baryon fields was first written down in Ref. [@t6]. From ${\cal L}_{{\rm w},g}$ along with ${\cal L}_{\rm s}$, we derive the diagrams shown in Fig. \[Pwave\_np\], which represent the lowest-order contributions induced by the CMO to the $P$-wave transitions in $\,\Omega\to\Lambda\bar{K},\Xi\pi.\,$ We remark that each of the three diagrams in the figure is of ${\cal O}(1)$ in the $m_s^{}$ expansion, and that Fig. 
\[Pwave\_sm\] does not include the meson-pole diagram because within the SM it contributes only at next-to-leading order. The amplitudes following from Fig. \[Pwave\_np\] are $$\begin{aligned} \label{AP_np} \begin{array}{c} \displaystyle {\cal A}_{\Lambda\bar{K}}^{(P)g} \,=\, \frac{{\cal C}\, \bigl( \beta_D^{+}-3\beta_F^{+} \bigr) } {2\sqrt{3}\, f\, \bigl( m_\Xi^{}-E_\Lambda^{} \bigr) } - \frac{{\cal C}\, \beta_C^{+}} {2\sqrt{3}\, f\, \bigl(m_{\Omega}^{}-m_{\Xi^*}^{}\bigr)} \,\,, % \vspace{2ex} \\ \displaystyle % {\cal A}_{\Xi\pi}^{(P)g} \,=\, \frac{-{\cal C}\, \beta_C^{+}} {2\sqrt3\, f\, \bigl( m_{\Omega}^{}-m_{\Xi^*}^{} \bigr) } + \frac{\sqrt3\, {\cal C}\, \beta_\varphi^+} {2f\, \bigl(m_s^{}-\hat{m}\bigr)} \,\,, \end{array}\end{aligned}$$ where $\,\beta_i^{+}\equiv\beta_i^{}+\tilde{\beta}_i^{}\,$ and we have used $\,m_K^2-m_\pi^2=B_0^{}\, (m_s^{}-\hat m),\,$ derived from Eq. (\[Ls'\]).[^5]

![\[Pwave\_np\]Diagrams representing chromomagnetic-penguin contributions to the leading-order $P$-wave amplitude for $\,\Omega^-\to B\phi.\,$ Each square represents a weak vertex generated by ${\cal L}_{{\rm w},g}^{}$ in Eq. (\[Lw\_np\]).](T7fig4.eps)

In order to estimate the weak phases in $A_\Omega^{}$, we need to determine the parameters $\beta_i^{+}$ in terms of the underlying coefficient $\,C_g^{+}\equiv C_g^{}+\tilde{C}_g^{},\,$ which is the combination corresponding to parity-conserving transitions. From the effective Hamiltonian in Eq. (\[Hw\_np\]) and the chiral Lagrangian in Eq.
(\[Lw\_np\]), we can derive the one-particle matrix elements $$\begin{aligned} \label{<B'|H|B>} \begin{array}{c} \displaystyle \bigl\langle n\bigr|{\cal H}_{{\rm w},g}^{}\bigl|\Lambda\bigr\rangle \,=\, \frac{\beta_D^{+}+3\beta_F^{+}}{\sqrt6}\, \bar{u}_n^{}u_\Lambda^{} \,\,, % \hspace{3em} % \bigl\langle\Lambda\bigr|{\cal H}_{{\rm w},g}^{}\bigl|\Xi^0\bigr\rangle \,=\, \frac{\beta_D^{+}-3\beta_F^{+}}{\sqrt6}\, \bar{u}_\Lambda^{}u_\Xi^{} \,\,, % \vspace{2ex} \\ \displaystyle % \bigl\langle\Xi^{*-}\bigr|{\cal H}_{{\rm w},g}^{}\bigl|\Omega^-\bigr\rangle \,=\, \frac{-\beta_C^{+}}{\sqrt3}\, \bar{u}{}_{\Xi^*}^{}\cdot u_\Omega^{} \,\,, % \hspace{3em} % \bigl\langle\pi^-\bigr|{\cal H}_{{\rm w},g}^{}\bigl|K^-\bigr\rangle \,=\, \beta_\varphi^{+}\, B_0^{} \,\,. \end{array}\end{aligned}$$ Since there is presently no reliable way to determine these matrix elements from first principles, we employ the MIT bag model to estimate them. The results for $\beta_{D,F,\varphi}^{+}$ have already been derived in Ref. [@t6] using the bag-model calculations of Ref. [@dghp] and are given by $$\begin{aligned} \label{bdbf} \begin{array}{c} \displaystyle \beta_D^+ \,=\, -\mbox{$\frac{3}{7}$}\, \beta_F^+ \,=\, \frac{2\, I_M^{}N^4}{\pi\, R^2}\, C_g^+ \,\,, % \hspace{3em} % \beta_\varphi^{+} \,=\, \frac{-8\, I_M^{}N^4\, \sqrt{2m_K^2}}{\pi B_0^{}\, R^2}\, C_g^+ \,\,, \end{array}\end{aligned}$$ where $N$, $R$, and $I_M^{}$ are bag parameters. For $\beta_C^+$, extending the work of Ref. [@dghp] we find $$\begin{aligned} \label{bc} \beta_C^+ \,=\, \frac{-8\, I_M^{}N^4}{\pi\, R^2}\, C_g^+ \,\,.\end{aligned}$$ Numerically, we take $\,R=5.0\,{\rm GeV}^{-1}\,$ for the octet baryons, $\,R=5.4\,{\rm GeV}^{-1}\,$ for the decuplet baryons, and $\,R=3.3\,{\rm GeV}^{-1}\,$ for the mesons, after Refs. [@dghp; @bagmodel]. In addition, as in Ref. [@t6], we have $\,N=2.27\,$ and $\,I_M^{}=1.63\times10^{-3}\,$ for both the baryons and mesons. 
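With these inputs, the bag-model expressions in Eqs. (\[bdbf\]) and (\[bc\]) can be evaluated directly. The sketch below assumes, consistently with the transitions involved, that each matrix element uses the bag radius of the corresponding states (octet, decuplet, or meson); the results are given in units of $C_g^+$:

```python
from math import pi, sqrt

# Bag-model inputs quoted in the text (GeV units throughout)
N, I_M = 2.27, 1.63e-3
R_octet, R_decuplet, R_meson = 5.0, 5.4, 3.3   # bag radii in GeV^-1
m_K = 0.4957                                    # isospin-averaged kaon mass in GeV

# Eq. (bdbf): beta_D^+ = 2 I_M N^4 / (pi R^2), octet-baryon radius
beta_D = 2 * I_M * N**4 / (pi * R_octet**2)            # ~ +1.10e-3 GeV^2
# Eq. (bc): beta_C^+ = -8 I_M N^4 / (pi R^2), decuplet-baryon radius
beta_C = -8 * I_M * N**4 / (pi * R_decuplet**2)        # ~ -3.78e-3 GeV^2
# Eq. (bdbf): beta_phi^+ B_0 = -8 I_M N^4 sqrt(2 m_K^2) / (pi R^2), meson radius
beta_phi_B0 = -8 * I_M * N**4 * sqrt(2 * m_K**2) / (pi * R_meson**2)  # ~ -7.09e-3 GeV^3
print(beta_D, beta_C, beta_phi_B0)
```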
It follows that $$\begin{aligned} \label{beta_i} \begin{array}{c} \displaystyle \beta_D^+ \,=\, -\mbox{$\frac{3}{7}$} \beta_F^+ \,=\, 1.10\times10^{-3}\,\, C_g^+\,\,{\rm GeV}^2 \,\,, % \hspace{3em} %\vspace{2ex} \\ \displaystyle % \beta_C^+ \,=\, -3.78\times10^{-3}\,\, C_g^+\,\,{\rm GeV}^2 \,\,, % \vspace{2ex} \\ \displaystyle % \beta_\varphi^+\, B_0^{} \,=\, -7.09\times10^{-3}\,\, C_g^+\,\, {\rm GeV}^3 \,\,. \end{array}\end{aligned}$$ We note that $C_g^{+}$ here is the Wilson coefficient at the low scale $\,\mu={\cal O}(1\,\rm GeV)\,$ and hence already contains the QCD running from the new-physics scales. We also note that the bag-model numbers in Eq. (\[beta\_i\]) are comparable in magnitude to the natural values of the parameters as obtained from naive dimensional analysis [@nda], $$\begin{aligned} \label{beta_i^nda} \beta_{D,F,C}^{\rm NDA} \,=\, \frac{C_g^{}\, g_{\rm s}^{}}{16\pi^2}\, \frac{\Lambda_\chi^2}{4\pi} \,\sim\, 0.0024\,C_g^{}\,\, {\rm GeV}^2 \,\,, \hspace{3em} \beta_\varphi^{\rm NDA}\, B_0^{} \,=\, {C_g^{}\, g_{\rm s}^{}\over 16\pi^2}\, \frac{\Lambda_\chi^3}{4\pi} \,\sim\, 0.0028\,C_g^{}\,\, {\rm GeV}^3 \,\,,\end{aligned}$$ where we have chosen $\,g_{\rm s}^{}=\sqrt{4\pi}.\,$ The differences between the two sets of numbers provide an indication of the level of uncertainty in estimating the matrix elements. This will be taken into account in our results below. Applying Eq.
(\[beta\_i\]) in $\,p_{B\phi}^{}=\bigl|\bm{k}_\phi^{}\bigr|\,{\cal A}_{B\phi}^{(P)}\,$ then leads to the CMO contributions $$\begin{aligned} \label{phiP_np} \bigl( \phi_\Lambda^P \bigr)_g \,=\, (-1.0\pm2.0)\times10^5\,{\rm GeV}\,\, {\rm Im}\, C_g^+ \,\,, \hspace{3em} \bigl( \phi_\Xi^P \bigr)_g \,=\, (2.3\pm4.6)\times10^5\,{\rm GeV}\,\, {\rm Im}\, C_g^+ \,\,,\end{aligned}$$ where, as in the $\,\Lambda\to p\pi\,$ and $\,\Xi\to\Lambda\pi\,$ cases [@t6], we have assigned an error of 200$\%$ to each of these numbers to reflect the uncertainty due to our neglect of higher-order terms that are presently incalculable and the uncertainty in estimating the matrix elements above. For the $D$-wave phases, we have here the same problem in estimating them as in the standard-model case, and so we have to resort again to dimensional arguments. Thus, since the $D$-wave amplitude is parity violating, we have $$\begin{aligned} \label{phiD_np} \bigl( \phi_\Lambda^D \bigr)_g \,=\, (0\pm 3)\times10^5\,{\rm GeV}\,\, {\rm Im}\, C_g^- \,\,, \hspace{3em} \bigl( \phi_\Xi^D \bigr)_g \,=\, (0\pm 8)\times10^5\,{\rm GeV}\,\, {\rm Im}\, C_g^- \,\,, \end{aligned}$$ where $\,C_g^{-}\equiv C_g^{}-\tilde{C}_g^{}\,$ is the combination corresponding to parity-violating transitions. Putting together the numbers from Eqs. (\[px,dx\]), (\[DeltaPD\]), (\[phiP\_np\]), and (\[phiD\_np\]) in Eq. (\[AO\]), we find $$\begin{aligned} 10^{-4}\, {\rm GeV}^{-1}\, \bigl( A_\Omega^{} \bigr)_g \,=\, (0.3\pm 1.3)\, {\rm Im}\, C_g^+ + (0\pm 1)\, {\rm Im}\, C_g^- \,\,.\end{aligned}$$ As in the SM result, the second term in $A_\Omega^{}$ dominates these numbers. For the partial-rate asymmetry, we obtain $$\begin{aligned} \label{Delta_O^np} \bigl( \Delta_\Omega^{} \bigr)_g^{} \,=\, (-0.7\pm 1.4)\times10^4\, {\rm GeV}\,\, {\rm Im}\, C_g^+ \,\,.\end{aligned}$$ We can now write down the contribution of the CMO to the sum of asymmetries $\,A_{\Omega\Lambda}^{}=A_\Omega^{}+A_\Lambda^{}\,$ being measured by HyperCP.
The most recent evaluation of their contribution to $A_\Lambda^{}$ has been done in Ref. [@t6], the result being $\,10^{-4}\, {\rm GeV}^{-1}\, (A_\Lambda^{})_g^{} = (-4.2\pm 8.3)\, {\rm Im}\,C_g^{+} + (3.5\pm 7.0)\, {\rm Im}\,C_g^{-}.\,$ Evidently, $\bigl(A_\Omega^{}\bigr){}_g$ is much smaller than, though still not negligible compared to, $\bigl(A_\Lambda^{}\bigr){}_g$. Summing the two asymmetries yields $$\begin{aligned} \label{A_OL^np} 10^{-4}\, {\rm GeV}^{-1}\, \bigl(A_{\Omega\Lambda}^{}\bigr)_g \,=\, (-4\pm 10)\, {\rm Im}\, C_g^{+} + (4\pm 8)\, {\rm Im}\,C_g^{-} \,\,.\end{aligned}$$ Since the CMO also contribute to the $CP$-violating parameters $\epsilon$ in kaon mixing and $\epsilon'$ in kaon decay, which are now well measured, it is possible to obtain bounds on $\bigl(A_{\Omega\Lambda}^{}\bigr){}_g^{}$ and $\bigl(\Delta_\Omega^{}\bigr){}_g^{}$ using the $\epsilon$ and $\epsilon'$ data. As discussed in Ref. [@t6], the experimental values $\,|\epsilon|=(22.80\pm 0.13)\times10^{-4}\,$ and $\,{\rm Re}(\epsilon'/\epsilon)=(16.6\pm1.6)\times10^{-4}\,$ [@buras2; @pdb] imply that $$\begin{aligned} \bigl|{\rm Im}C_g^+\bigr| \,<\, 5.0\times10^{-8}\,{\rm GeV}^{-1} \,\,, \hspace{3em} \bigl|{\rm Im}C_g^-\bigr| \,<\, 7.4 \times10^{-9}\,{\rm GeV}^{-1} \,\,.\end{aligned}$$ Then, from Eqs. (\[Delta\_O\^np\]) and (\[A\_OL\^np\]), it follows that $$\begin{aligned} \label{(AOL,DeltaO)g} \bigl|A_{\Omega\Lambda}^{}\bigr|_g \,<\, 8\times10^{-3} \,\,, \hspace{3em} \bigl|\Delta_\Omega^{}\bigr|_g \,<\, 1\times10^{-3} \,\,.\end{aligned}$$ The upper limits of these ranges well exceed those within the SM in Eqs. (\[Delta\_O\^sm\]) and (\[A\_OL\^sm\]), but the largest size of $\bigl(A_{\Omega\Lambda}^{}\bigr){}_g^{}$ is still an order of magnitude below the expected sensitivity of HyperCP [@lu]. This, nevertheless, implies that a nonzero measurement by HyperCP would be an unmistakable signal of new physics. 
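The limits in Eq. (\[(AOL,DeltaO)g\]) follow by inserting the kaon-data bounds into the extreme values of the coefficient ranges in Eqs. (\[Delta\_O\^np\]) and (\[A\_OL\^np\]); a sketch of the arithmetic:

```python
# Bounds on the imaginary parts implied by epsilon and epsilon' data (GeV^-1)
Im_Cg_plus = 5.0e-8
Im_Cg_minus = 7.4e-9

# Eq. (A_OL^np): 1e-4 GeV^-1 * A = (-4 +/- 10) Im Cg+ + (4 +/- 8) Im Cg-
# Extreme coefficient magnitudes: |-4| + 10 = 14 and |4| + 8 = 12
A_OL_max = 1e4 * (14 * Im_Cg_plus + 12 * Im_Cg_minus)   # ~ 8e-3

# Eq. (Delta_O^np): Delta = (-0.7 +/- 1.4) * 1e4 GeV * Im Cg+, extreme |coeff| = 2.1
Delta_max = 2.1e4 * Im_Cg_plus                           # ~ 1e-3
print(f"|A_OmegaLambda|_g < {A_OL_max:.1e}, |Delta_Omega|_g < {Delta_max:.1e}")
```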
Conclusion\[conclusion\]
========================

We have evaluated the sum of the $CP$-violating asymmetries $A_\Omega^{}$ and $A_\Lambda^{}$ occurring in the decay chain $\,\Omega\to\Lambda K\to p\pi K,\,$ which is currently being studied by the HyperCP experiment. The dominant contribution to $A_\Omega^{}$ has turned out to be due to final-state interactions via $\,\Omega\to\Xi\pi\to\Lambda K.\,$ We have found that both within and beyond the standard model $A_\Omega^{}$ is smaller than $A_\Lambda^{}$, but not negligibly so. Taking a model-independent approach, we have also found that contributions to $\,A_{\Omega\Lambda}^{}=A_\Omega^{}+A_\Lambda^{}$ from possible new physics through the chromomagnetic-penguin operators are allowed by constraints from kaon data to exceed the SM effects by up to two orders of magnitude. In summary, $$\begin{aligned} \bigl| A_{\Omega\Lambda}^{} \bigr|_{\rm SM} \,\le\, 4\times10^{-5} \,\,, \hspace{3em} \bigl| A_{\Omega\Lambda}^{} \bigr|_g \,<\, 8\times10^{-3} \,\,. \end{aligned}$$ Since the SM contribution is well beyond the expected reach of HyperCP, a finding of a nonzero asymmetry would definitely indicate the presence of new physics. In any case, the upcoming data on $A_{\Omega\Lambda}^{}$ will yield information which complements that to be gained from the measurement of $A_{\Xi\Lambda}^{}$ in $\,\Xi\to\Lambda\pi\to p\pi\pi.\,$ Finally, we have shown that the contribution of $\,\Omega\to\Xi\pi\to\Lambda K\,$ also causes the partial-rate asymmetry $\Delta_\Omega^{}$ in $\,\Omega\to\Lambda K\,$ to be nonvanishing, thereby providing another means to observe $CP$ violation in this decay. This asymmetry and that in $\,\Omega\to\Xi\pi\,$ tend to be larger than the corresponding asymmetries in octet-hyperon decays and hence are potentially useful probes of $CP$ violation in future experiments.
Since $\Delta_\Omega^{}$ results from the interference of $P$-wave amplitudes, a measurement of it will probe the underlying parity-conserving interactions. Numerically, we have found $$\begin{aligned} 0 \,\le\, \bigl( \Delta_\Omega^{} \bigr)_{\rm SM} \,\le\, 1\times10^{-5} \,\,, \hspace{3em} \bigl| \Delta_\Omega^{} \bigr|_g \,<\, 1\times10^{-3} \,\,,\end{aligned}$$ where the bound on the contribution of the CMO arises from the constraint imposed by $\epsilon$ data.

I would like to thank G. Valencia for helpful discussions and comments. I am also grateful to E.C. Dukes and L.-C. Lu for experimental information. This work was supported in part by the Lightner-Sams Foundation.

$\bm{{\cal P}}$ and $\bm{{\cal D}}$ factors in $\bm{P}$-wave and $\bm{D}$-wave $\,\bm{J=\frac{3}{2}}\,$ amplitudes for $\,\bm{B\phi\to B'\phi'}\,$ in $\,\bm{I=\frac{1}{2}}$, $\bm{S=-2}\,$ channels \[PD\]
===========================================================================================================================================================================================================

For the four coupled channels, the ${\cal P}$ factors are $$\begin{aligned} \begin{array}{c} \displaystyle {\cal P}_{\Xi\pi,\Xi\pi}^{} \,=\, \frac{-\frac{1}{6}(D-F)^2}{E_\Xi^{}-E_\pi^\prime-m_\Xi^{}} + \frac{\frac{1}{12}\, {\cal C}^2}{\sqrt{s}-m_{\Xi^*}^{}} + \frac{-\frac{1}{108}\, {\cal C}^2}{ E_\Xi^{}-E_\pi^\prime-m_{\Xi^*}^{}} \,\,, % \vspace{2ex} \\ \displaystyle % {\cal P}_{\Xi\pi,\Lambda\bar{K}} \,=\, \frac{\frac{1}{3}D(D+F)}{E_\Xi^{}-E_K^\prime-m_\Sigma^{}} + \frac{\frac{1}{12}\, {\cal C}^2}{\sqrt{s}-m_{\Xi^*}^{}} + \frac{\frac{1}{36}\, {\cal C}^2} {E_\Xi^{}-E_K^\prime-m_{\Sigma^*}^{}} \,\,, % \vspace{2ex} \\ \displaystyle % {\cal P}_{\Xi\pi,\Sigma\bar{K}} \,=\, \frac{-\frac{1}{9}D(D-3 F)}{E_\Xi^{}-E_K^\prime-m_\Lambda^{}} + \frac{-\frac{2}{3}(D+F)F}{E_\Xi^{}-E_K^\prime-m_\Sigma^{}} + \frac{-\frac{1}{12}\, {\cal C}^2}{\sqrt{s}-m_{\Xi^*}^{}} + \frac{\frac{1}{54}\, {\cal C}^2}
{E_\Xi^{}-E_K^\prime-m_{\Sigma^*}^{}} \,\,, % \vspace{2ex} \\ \displaystyle % {\cal P}_{\Xi\pi,\Xi\eta}^{} \,=\, \frac{-\frac{1}{6}(D-F)(D+3 F)}{E_\Xi^{}-E_\eta^\prime-m_\Xi^{}} + \frac{-\frac{1}{12}\, {\cal C}^2}{\sqrt{s}-m_{\Xi^*}^{}} + \frac{-\frac{1}{36}\, {\cal C}^2}{ E_\Xi^{}-E_\eta^\prime-m_{\Xi^*}^{}} \,\,, \end{array}\end{aligned}$$ $$\begin{aligned} \begin{array}{c} \displaystyle {\cal P}_{\Lambda\bar{K},\Lambda\bar{K}} \,=\, \frac{\frac{1}{18}(D+3 F)^2}{E_\Lambda^{}-E_K^\prime-m_N^{}} + \frac{\frac{1}{12}\, {\cal C}^2}{\sqrt{s}-m_{\Xi^*}^{}} \,\,, % \vspace{2ex} \\ \displaystyle % {\cal P}_{\Lambda\bar{K},\Sigma\bar{K}} \,=\, \frac{-\frac{1}{6}(D-F)(D+3 F)}{E_\Lambda^{}-E_K^\prime-m_N^{}} + \frac{-\frac{1}{12}\, {\cal C}^2}{\sqrt{s}-m_{\Xi^*}^{}} \,\,, % \vspace{2ex} \\ \displaystyle % {\cal P}_{\Lambda\bar{K},\Xi\eta} \,=\, \frac{\frac{1}{9}D(D-3 F)}{E_\Lambda^{}-E_\eta^\prime-m_\Lambda^{}} + \frac{-\frac{1}{12}\, {\cal C}^2}{\sqrt{s}-m_{\Xi^*}^{}} \,\,, \end{array}\end{aligned}$$ $$\begin{aligned} \begin{array}{c} \displaystyle {\cal P}_{\Sigma\bar{K},\Sigma\bar{K}} \,=\, \frac{-\frac{1}{6}(D-F)^2}{E_\Sigma^{}-E_K^\prime-m_N^{}} + \frac{\frac{1}{12}\, {\cal C}^2}{\sqrt{s}-m_{\Xi^*}^{}} + \frac{\frac{2}{27}\, {\cal C}^2} {E_\Sigma^{}-E_K^\prime-m_\Delta^{}} \,\,, % \vspace{2ex} \\ \displaystyle % {\cal P}_{\Sigma\bar{K},\Xi\eta} \,=\, \frac{\frac{1}{3}D(D+F)}{E_\Sigma^{}-E_\eta^\prime-m_\Sigma^{}} + \frac{\frac{1}{12}\, {\cal C}^2}{\sqrt{s}-m_{\Xi^*}^{}} + \frac{-\frac{1}{36}\, {\cal C}^2}{ E_\Sigma^{}-E_\eta^\prime-m_{\Sigma^*}^{}} \,\,, \end{array}\end{aligned}$$ $$\begin{aligned} \begin{array}{c} \displaystyle {\cal P}_{\Xi\eta,\Xi\eta}^{} \,=\, \frac{\frac{1}{18}(D+3 F)^2}{E_\Xi^{}-E_\eta^\prime-m_\Xi^{}} + \frac{\frac{1}{12}\, {\cal C}^2}{\sqrt{s}-m_{\Xi^*}^{}} + \frac{\frac{1}{36}\, {\cal C}^2} {E_\Xi^{}-E_\eta^\prime-m_{\Xi^*}^{}} \;, \end{array}\end{aligned}$$ and the ${\cal D}$ factors $$\begin{aligned} \begin{array}{c} \displaystyle {\cal 
D}_{\Xi\pi,\Xi\pi}^{} \,=\, \frac{(D-F)^2}{60 \bigl( E_\Xi^{}-E_\pi^\prime-m_\Xi^{} \bigr)^2} - \frac{7\, {\cal C}^2}{ 540 \bigl( E_\Xi^{}-E_\pi^\prime-m_{\Xi^*}^{} \bigr)^2} \,\,, % \vspace{2ex} \\ \displaystyle % {\cal D}_{\Xi\pi,\Lambda\bar{K}} \,=\, \frac{-D(D+F)}{30 \bigl( E_\Xi^{}-E_K^\prime-m_\Sigma^{} \bigr)^2} + \frac{7\, {\cal C}^2}{ 180 \bigl( E_\Xi^{}-E_K^\prime-m_{\Sigma^*}^{} \bigr)^2} \,\,, % \vspace{2ex} \\ \displaystyle % {\cal D}_{\Xi\pi,\Sigma\bar{K}} \,=\, \frac{D(D-3 F)}{90 \bigl( E_\Xi^{}-E_K^\prime-m_\Lambda^{} \bigr)^2} + \frac{(D+F)F}{15 \bigl( E_\Xi^{}-E_K^\prime-m_\Sigma^{} \bigr)^2} + \frac{7\, {\cal C}^2}{ 270 \bigl( E_\Xi^{}-E_K^\prime-m_{\Sigma^*}^{} \bigr)^2} \,\,, % \vspace{2ex} \\ \displaystyle % {\cal D}_{\Xi\pi,\Xi\eta}^{} \,=\, \frac{(D-F)(D+3 F)}{60 \bigl( E_\Xi^{}-E_\eta^\prime-m_\Xi^{} \bigr)^2} + \frac{-7\, {\cal C}^2}{ 180 \bigl( E_\Xi^{}-E_\eta^\prime-m_{\Xi^*}^{} \bigr)^2} \,\,, \end{array}\end{aligned}$$ $$\begin{aligned} \begin{array}{c} \displaystyle {\cal D}_{\Lambda\bar{K},\Lambda\bar{K}} \,=\, \frac{-(D+3 F)^2}{180 \bigl(E_\Lambda^{}-E_K^\prime-m_N^{}\bigr)^2} \,\,, % \hspace{3em} % {\cal D}_{\Lambda\bar{K},\Sigma\bar{K}} \,=\, \frac{(D-F)(D+3 F)}{60 \bigl( E_\Lambda^{}-E_K^\prime-m_N^{} \bigr)^2} \,\,, % \vspace{2ex} \\ \displaystyle % {\cal D}_{\Lambda\bar{K},\Xi\eta} \,=\, \frac{-D(D-3 F)}{ 90 \bigl( E_\Lambda^{}-E_\eta^\prime-m_\Lambda^{} \bigr)^2} \,\,, \end{array}\end{aligned}$$ $$\begin{aligned} \begin{array}{c} \displaystyle {\cal D}_{\Sigma\bar{K},\Sigma\bar{K}} \,=\, \frac{(D-F)^2}{60 \bigl( E_\Sigma^{}-E_K^\prime-m_N^{} \bigr)^2} + \frac{14\, {\cal C}^2}{ 135 \bigl( E_\Sigma^{}-E_K^\prime-m_\Delta^{} \bigr)^2} \,\,, % \vspace{2ex} \\ \displaystyle % {\cal D}_{\Sigma\bar{K},\Xi\eta} \,=\, \frac{-D(D+F)}{30 \bigl( E_\Sigma^{}-E_\eta^\prime-m_\Sigma^{} \bigr)^2} + \frac{-7\, {\cal C}^2}{ 180 \bigl(E_\Sigma^{}-E_\eta^\prime-m_{\Sigma^*}^{}\bigr)^2} \,\,, \end{array}\end{aligned}$$ $$\begin{aligned} {\cal 
D}_{\Xi\eta,\Xi\eta}^{} \,=\, \frac{-(D+3 F)^2}{180 \bigl( E_\Xi^{}-E_\eta^\prime-m_\Xi^{} \bigr)^2} + \frac{7\, {\cal C}^2}{ 180 \bigl( E_\Xi^{}-E_\eta^\prime-m_{\Xi^*}^{} \bigr)^2} \;,\end{aligned}$$ where $E_\phi'$ is the energy of $\phi$ in the final state. We note that contributions to the propagators from the $\Delta m$ and quark-mass terms in Eq. (\[Ls\]) have been implicitly included in these results. [999]{} J.H. Christenson [*et al.*]{}, Phys. Rev. Lett.  [**13**]{}, 138 (1964); A. Alavi-Harati [*et al.*]{} \[KTeV Collaboration\], [*ibid.*]{} [**83**]{}, 22 (1999); V. Fanti [*et al.*]{} \[NA48 Collaboration\], Phys. Lett. B [**465**]{}, 335 (1999); B. Aubert [*et al.*]{} \[BABAR Collaboration\], Phys. Rev. Lett.  [**87**]{}, 091801 (2001); K. Abe [*et al.*]{} \[Belle Collaboration\], [*ibid.*]{} [**87**]{}, 091802 (2001). See, e.g., A. J. Buras, arXiv:hep-ph/0402191 and references therein. S. Okubo, Phys. Rev. [**109**]{}, 984 (1958); A. Pais, Phys. Rev. Lett. [**3**]{}, 242 (1959); T.Brown, S.F. Tuan, and S. Pakvasa, [*ibid.*]{} [**51**]{}, 1823 (1983); L.L. Chau and H.Y. Cheng, Phys. Lett. B [**131**]{}, 202 (1983); J.F. Donoghue and S. Pakvasa, Phys. Rev. Lett.  [**55**]{}, 162 (1985). K.B. Luk, arXiv:hep-ex/9803002. K.B. Luk [*et al.*]{} \[E756 and HyperCP Collaborations\], arXiv:hep-ex/0005004. L.-C. Lu \[HyperCP Collaboration\], AIP Conf. Proc.  [**675**]{} (2003) 251; Talk given at the Meeting of the Division of Particles and Fields of the American Physical Society, Philadelphia, Pennsylvania, 5-8 April 2003. J.F. Donoghue, X.-G. He, and S. Pakvasa, Phys. Rev. D [**34**]{}, 833 (1986). M.J. Iqbal and G.A. Miller, Phys. Rev. D [**41**]{}, 2817 (1990); X.-G. He, H. Steger, and G. Valencia, Phys. Lett. B [**272**]{}, 411 (1991); N.G. Deshpande, X.-G. He, and S. Pakvasa, [*ibid.*]{} [**326**]{}, 307 (1994). D. Chang, X.-G. He, and S. Pakvasa, Phys. Rev. Lett.  [**74**]{}, 3927 (1995); X.-G. He and G. Valencia, Phys. Rev. 
D [**52**]{}, 5257 (1995); X.-G. He, H. Murayama, S. Pakvasa, and G. Valencia, [*ibid.*]{} [**61**]{}, 071701 (2000); C.-H. Chen, Phys. Lett. B [**521**]{}, 315 (2001); J.-H. Jiang and M.-L. Yan, J. Phys. G [**30**]{}, B1 (2004). J. Tandean and G. Valencia, Phys. Rev. D [**67**]{}, 056001 (2003). J. Tandean, Phys. Rev. D [**69**]{}, 076008 (2004). J. Tandean and G. Valencia, Phys. Lett. B [**451**]{}, 382 (1999). K. Hagiwara [*et al.*]{} \[Particle Data Group Collaboration\], Phys. Rev. D [**66**]{}, 010001 (2002). K.M. Watson, Phys. Rev.  [**95**]{}, 228 (1954). See, e.g., H. Pilkuhn, [*The Interactions of Hadrons*]{} (Wiley, New York, 1967). L. Wolfenstein, Phys. Rev. D [**43**]{}, 151 (1991). J.A. Oller, E. Oset, and A. Ramos, Prog. Part. Nucl. Phys.  [**45**]{}, 157 (2000). J. Tandean, A.W. Thomas, and G. Valencia, Phys. Rev. D [**64**]{}, 014005 (2001); references therein. J. Bijnens, H. Sonoda, and M.B. Wise, Nucl. Phys. [**B261**]{}, 185 (1985). E. Jenkins and A.V. Manohar, Phys. Lett. B [**255**]{}, 558 (1991); [*ibid.*]{} [**259**]{}, 353 (1991). in [*Effective Field Theories of the Standard Model*]{}, edited by U.-G. Meissner (World Scientific, Singapore, 1992). E. Jenkins, Nucl. Phys. [**B368**]{}, 190 (1992); N. Kaiser, P.B. Siegel, and W. Weise, [*ibid.*]{} [**A594**]{}, 325 (1995); N. Kaiser, T. Waas, and W. Weise, [*ibid.*]{} [**A612**]{}, 297 (1997); J. Caro Ramon, N. Kaiser, S. Wetzel, and W. Weise, [*ibid.*]{} [**A672**]{}, 249 (2000); C.H. Lee, G.E. Brown, D.-P. Min, and M. Rho, [*ibid.*]{} [**A585**]{}, 401 (1995); J.W. Bos [*et al.*]{}, Phys. Rev. D [**51**]{}, 6308 (1995); [*ibid.*]{} [**57**]{}, 4101 (1998); G. Müller and U.-G. Meissner, Nucl. Phys. [**B492**]{}, 379 (1997). A. Abd El-Hady, J. Tandean, and G. Valencia, Nucl. Phys. A [**651**]{}, 71 (1999). J. Gasser, H. Leutwyler, and M.E. Sainio, Phys. Lett. B [**253**]{}, 252 (1991); [**253**]{}, 260 (1991). M.N. Butler, M.J. Savage, and R.P. Springer, Nucl. Phys. B [**399**]{}, 69 (1993). 
C. Dukes \[HyperCP Collaboration\], Talk given at DA$\Phi$NE 2004: Physics at Meson Factories, Laboratori Nazionali di Frascati, Italy, 7-11 June 2004. G. Buchalla, A.J. Buras, and M.E. Lautenbacher, Rev. Mod. Phys.  [**68**]{}, 1125 (1996); references therein. N. Cabibbo, Phys. Rev. Lett.  [**10**]{}, 531 (1963); M. Kobayashi and T. Maskawa, Prog. Theor. Phys.  [**49**]{}, 652 (1973). L. Wolfenstein, Phys. Rev. Lett.  [**51**]{}, 1945 (1983). See, e.g., J.F. Donoghue, E. Golowich, and B.R. Holstein, [*Dynamics of the Standard Model*]{} (Cambridge University Press, Cambridge, 1992). E. Jenkins, Nucl. Phys. [**B375**]{}, 561 (1992). A. Manohar and H. Georgi, Nucl. Phys. [**B234**]{} 189 (1984); H. Georgi and L. Randall, [*ibid.*]{} [**B276**]{} 241 (1986); S. Weinberg, Phys. Rev. Lett. [**63**]{}, 2333 (1989). A.J. Buras, arXiv:hep-ph/0307203. F. Gabbiani, E. Gabrielli, A. Masiero, and L. Silvestrini, Nucl. Phys. B [**477**]{}, 321 (1996); A. Masiero and H. Murayama, Phys. Rev. Lett.  [**83**]{}, 907 (1999); X.-G. He and G. Valencia, Phys. Rev. D [**61**]{}, 075003 (2000); G. Colangelo, G. Isidori, and J. Portoles, Phys. Lett. B [**470**]{}, 134 (1999); J. Tandean, Phys. Rev. D [**61**]{}, 114022 (2000); J. Tandean and G. Valencia, [*ibid.*]{} [**62**]{}, 116007 (2000). A.J. Buras [*et al.*]{}, Nucl. Phys. B [**566**]{}, 3 (2000). G. Feinberg, P. Kabir, and S. Weinberg, Phys. Rev. Lett. [**3**]{}, 527 (1959). J.F. Donoghue and B.R. Holstein, Phys. Rev. D [**33**]{}, 2717 (1986); J.F. Donoghue, E. Golowich, and B.R. Holstein, Phys. Rept.  [**131**]{}, 319 (1986). J.F. Donoghue, E. Golowich, and B.R. Holstein, Phys. Rev. D [**15**]{}, 1341 (1977); J.F. Donoghue, E. Golowich, B.R. Holstein, and W.A. Ponce, [*ibid.*]{} [**23**]{}, 1213 (1981). T. DeGrand, R.L. Jaffe, K. Johnson, and J.E. Kiskis, Phys. Rev. D [**12**]{}, 2060 (1975). [^1]: In Ref. 
[@tv2] the partial-rate asymmetry in $\,\Omega\to\Xi\pi\,$ was evaluated under the assumption that $\,\varepsilon=0.\,$ [^2]: We have chosen the sign of $\cal C$ after nonrelativistic quark models [@JenMan], which predict $\,3F=2D\,$ and $\,{\cal C}=-2D,\,$ both well satisfied by the adopted $D$, $F$, and $\cal C$ values. [^3]: This $m_0^{}$ value comes from simultaneously fitting the tree-level formulas for the octet-baryon masses and the sigma term, $\,\sigma_{\pi N}^{}=-2 \bigl(b_D^{}+b_F^{}+2 b_0^{}\bigr) \hat{m},\,$ all derived from Eq. (\[Ls\]), to the measured masses and the empirical value [@sigma] $\,\sigma_{\pi N}^{}\simeq45\,\rm MeV.\,$ [^4]: The numerical differences between the estimates also arise from the use of a positive $\cal C$ value in Ref. [@tv2]. [^5]: It is worth noting here that, as in the $\,\Lambda\to p\pi\,$ and $\,\Xi\to\Lambda\pi\,$ cases [@t6], each of the two amplitudes in Eq. (\[AP\_np\]) vanishes if we set $\,\beta_{D,F}^{+}=\kappa^{+} b_{D,F}^{},\,$ $\,\beta_C^{+}=\kappa^{+} c,\,$ and $\,\beta_\varphi^{+}=\kappa^{+}/2,\,$ with $\kappa^+$ being a constant, take the limit $\,E_\Lambda^{}=m_\Lambda^{},\,$ and use the relations $\,m_\Xi^{}-m_\Lambda^{}=\frac{2}{3} \bigl( b_D^{}-3 b_F^{} \bigr) \bigl( m_s^{}-\hat m \bigr) \,$ and $\,m_\Omega^{}-m_{\Xi^*}^{}=\frac{2}{3}\,c\,\bigl( m_s^{}-\hat m \bigr),\,$ both derived from Eq. (\[Ls\]). This satisfies the requirement implied by the Feinberg-Kabir-Weinberg theorem [@fkw] that the operator $\,\bar{d}s\,$ cannot contribute to physical decay amplitudes [@DonGH1], and thus serves as a check for the formulas in Eq. (\[AP\_np\]).
Abhijit Kar Gupta [*Physics Department, Panskura Banamali College\ Panskura R.S., East Midnapore, WB, India, Pin-721 152*]{}\ [*e-mail:*]{} [kg.abhi@gmail.com]{} [**Introducing the Models**]{} The wealth exchange models are many-agent models for wealth distributions in which a randomly chosen pair of agents interact and exchange wealth between them in a certain fashion. We here deal with the models that are primarily based on the kinetic theory of gases in statistical physics. The interactions among agents can be thought of as elastic collisions: the total wealth of the interacting pair is kept constant, which in turn ensures that the total wealth of all the agents remains conserved. There has been a great amount of study of this kind of wealth exchange model in econophysics in recent times [@eco]. The time evolution of wealth ($w$) in the wealth exchange models can be summarised as follows: $$w_i(t+1)=w_i(t)+\Delta w$$ $$w_j(t+1)=w_j(t)-\Delta w.$$ The above is a zero-sum process in which the wealth of an agent evolves through such a simple rule and a distribution of wealth emerges after a certain 'time' $t$. It is possible to obtain a wide variety of distributions within this framework, namely the exponential Boltzmann-Gibbs type distributions, the gamma-type distributions and, more interestingly, the power laws. Power laws obtained by fitting the tail ends of real wealth-distribution data have been known as [*Pareto's law*]{} in the economic literature for over a hundred years, and this law is known to possess some universality. The emergence of a particular distribution depends on the expression for the exchange amount $\Delta w$, which stems from the description of the model. The models developed out of the above principle have been quite useful in understanding the wealth distributions of individuals (or of companies or societies) in an economy (see the recent reviews [@abhi-rev; @yako-rev]). 
In the following, we write down the expressions for the exchange amount $\Delta w$ for the different models: $$\begin{aligned} \Delta w & = &\epsilon w_j - \bar\epsilon w_i \label{gambling}\\ &=&(1-\lambda)(\epsilon w_j - \bar\epsilon w_i) \label{fixlam}\\ &=&(1-\lambda_j)\epsilon w_j - (1-\lambda_i)\bar\epsilon w_i \label{randomlam}.\end{aligned}$$ The first line above \[eqn.(\[gambling\])\] is for the pure gambling model [@puregamb], where the evolution of the wealth of the $i$-th agent is given by $w_i(t+1)=\epsilon(w_i(t)+w_j(t))$, $\epsilon$ being a random number drawn uniformly between 0 and 1. Note that here, $\bar\epsilon = 1-\epsilon$. The exponential Boltzmann-Gibbs type distribution results from this model. In the next model \[eqn.(\[fixlam\])\], the agents have a fixed saving propensity [@constlam] introduced through a parameter $\lambda$, where the evolution of the $i$-th agent is given by $w_i(t+1)=\lambda w_i(t)+\epsilon(1-\lambda)[w_i(t)+w_j(t)]$. In this model, each agent saves a $\lambda$-fraction of his/her wealth and puts up the rest for gambling. It has been shown that the distributions found from this model are of the gamma type. The last line \[eqn.(\[randomlam\])\] corresponds to the model where the saving propensity is characteristic of an agent, [*i.e.*]{}, the parameter $\lambda$ is assigned a distribution (in our case a uniform distribution in $\lambda$). Power laws in wealth distributions are obtained from this random or distributed saving model [@varlam], and hence this model attracts particular attention. As the system of many agents evolves in time (as suitably defined), it relaxes towards a steady state equilibrium so that the distribution of wealth assumes a definite shape. 
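The three update rules can be sketched in a few lines of Python. This is our illustration only (the function name `exchange` and the "pot" bookkeeping are ours); it implements the per-model evolution equations quoted above, and total wealth is conserved by construction:

```python
import random

def exchange(wi, wj, lam_i=0.0, lam_j=0.0, eps=None):
    """One pairwise interaction.  lam_i = lam_j = 0 recovers the pure
    gambling model; equal nonzero lam_i = lam_j the fixed-saving model;
    agent-specific lam_i, lam_j the random (distributed) saving model."""
    if eps is None:
        eps = random.random()                       # uniform on (0, 1)
    pot = (1.0 - lam_i) * wi + (1.0 - lam_j) * wj   # wealth put up for gambling
    return lam_i * wi + eps * pot, lam_j * wj + (1.0 - eps) * pot
```

Since the saved fractions are kept back and the pot is split between the pair, $w_i + w_j$ is unchanged by every call, mirroring the elastic-collision picture.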
In the pure gambling model and in the model with fixed saving propensity, the idea of relaxation can hardly be of interest as the agents do not possess any characteristic feature; they have equal opportunities (either no saving or equal saving) through random interactions. We shall, therefore, concentrate on the random saving model out of the three, as the meaning of relaxation can be rightfully associated with it. A relaxation study has been made earlier [@prevrelax] on this model, but its concept and purpose were different from those of the present study. [**To Observe Relaxation**]{} The question is how we may characterise the 'relaxation' of an evolving system. If a system is allowed to go towards a steady or fixed state, it will relax (usually exponentially) after the external disturbance is withdrawn (as is well known in the context of a model spin system in statistical physics). As the system approaches a steady state, the successive values (in time) of a measurable quantity for the whole system are supposed to come closer and closer to each other. From this idea we examine relaxation in the wealth exchange models in the following way. The following quantity may be defined as a measure of relaxation: $$X(t) = {1\over N}\sum_i|{w_i(t)-w_i(t-1)}|.$$ The sum in $X(t)$ runs over all $N$ agents. Further, $X(t)$ is averaged over a number of initial configurations. When the configuration-averaged value of $X(t)$ is plotted against time $t$, we expect a graph decaying with time for a system which is supposed to relax to equilibrium. Here we define one 'time step' to be equal to $N$ interactions on average, where $N$ is the number of agents in the system: $t=T/N$, $T$ being the total number of interactions. [**Numerical Results**]{} From the numerical study of the above models it is seen that the system of many agents relaxes towards a steady state equilibrium and the wealth of each individual attains a specific time-independent distribution. 
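The measure $X(t)$ and the time-step convention ($N$ interactions per step) can be realized in a short Monte Carlo sketch. The parameter values, the equal-wealth initial condition, the fixed seed, and the function name `relax_measure` are our illustrative choices:

```python
import random

def relax_measure(N=100, t_max=50, eps_fixed=None, seed=1):
    """Evolve N agents under the random-saving rule and record
    X(t) = (1/N) * sum_i |w_i(t) - w_i(t-1)|,
    with one 'time step' taken as N pairwise interactions."""
    rng = random.Random(seed)
    lam = [rng.random() for _ in range(N)]   # quenched saving propensities
    w = [1.0] * N                            # equal initial wealth (arbitrary)
    X = []
    for _ in range(t_max):
        w_prev = w[:]
        for _ in range(N):
            i, j = rng.sample(range(N), 2)   # pick a distinct pair of agents
            eps = rng.random() if eps_fixed is None else eps_fixed
            pot = (1 - lam[i]) * w[i] + (1 - lam[j]) * w[j]
            w[i] = lam[i] * w[i] + eps * pot
            w[j] = lam[j] * w[j] + (1 - eps) * pot
        X.append(sum(abs(a - b) for a, b in zip(w, w_prev)) / N)
    return X, w
```

Averaging the returned $X(t)$ series over many seeds gives the configuration average discussed in the text.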
It will be of interest to see how a system of many agents, as a whole, relaxes as it evolves from arbitrary initial distributions. The system obviously does not head towards a fixed stationary state. Rather, it fluctuates around some average value (in equilibrium) in all cases, as the exchange of wealth is allowed to happen all the time. It may be emphasised here that the steady-state equilibrium is not affected by the choice of initial configurations. Also, the initial configurations do not have any bearing on the overall process of relaxation ([*i.e.*]{} the shape of the decay curves). We check all these numerically (the data are not presented here). In the numerical simulation, we take $N=100$ in most of the cases as this is sufficient for our purpose; only in some cases we take $N=1000$ to demonstrate the size dependence. The averaging, in all cases, has been done over $10^4$ initial configurations. The appearance of exponential decay in the relaxation of the random saving model [@varlam] may be anticipated, since the agents possess a characteristic feature (a random distribution of the saving parameter $\lambda$ in our case). The system is expected to be driven towards a steady state equilibrium through a possible exponential decay in such a case. In fig.1, we plot the averaged value of $X(t)$ against time $t$, where the relaxation seems to be of the form $X(t)=X_0-A\exp(-t/{\tau})$. Thus, to demonstrate the exponential character, we determine $X_0$ and then subtract it from $X(t)$ to plot in the semi-log scale, as shown in the inset of fig.1. The relaxation time $\tau$ can be identified as the inverse of the slope of the straight line (inset in fig.1). The relaxation is seen to depend on the system size (number of agents $N$), which can be understood intuitively. Exactly how it depends on the system size can be studied later. 
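The extraction of $\tau$ just described (determine $X_0$, subtract it, fit the straight line in semi-log scale) can be sketched as follows; the helper `relaxation_time`, its plain least-squares fit, and the fitting window are our illustration:

```python
import math

def relaxation_time(X, x0, t_skip=1, t_fit=20):
    """Estimate tau from X(t) = x0 - A*exp(-t/tau): subtract the saturation
    value x0, take logs, and fit a straight line by least squares; tau is
    the negative inverse slope (the procedure described for fig. 1)."""
    ts, ys = [], []
    for t in range(t_skip, t_fit):
        d = x0 - X[t]                 # valid while X approaches x0 from below
        if d > 0:
            ts.append(t)
            ys.append(math.log(d))
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
            sum((t - tbar) ** 2 for t in ts)
    return -1.0 / slope
```

On synthetic data of exactly this form the fit recovers the input $\tau$; on simulation data the window `t_skip:t_fit` should be restricted to the straight-line portion of the semi-log plot.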
The random saving model has two parameters, namely $\lambda$ and $\epsilon$, both taken from a uniform distribution between 0 and 1. It is checked through numerical simulation that, as long as the parameter $\lambda$ is taken to be random (as per the requirement of the model), the other parameter $\epsilon$ may be held constant without changing the ultimate wealth distribution. Therefore, we examine the relaxation in this model for different fixed values of $\epsilon$, and out of those we find that the relaxation is purely exponential (up to a certain time) for $\epsilon = {1\over 2}$. In fig.2, we demonstrate this and plot the graphs for different widths of the random distribution in $\lambda$. Note that the plots are made here without the subtraction of the saturation value $X_0$ (unlike in fig.1). The subtraction is not required, as the straight-line portion in the semi-log plot shows a pure exponential decay of the type $X(t)=A\exp(-t/\tau)$. The system stabilises only after a spell of pure exponential decay. The relaxation time $\tau$ (inverse of the slope of the straight line) seems to depend on the width of the distribution in $\lambda$ in a definite way. As the mean of the distribution in $\lambda$ increases, the relaxation time increases (the slope decreases), which is evident from the graphs. We have not investigated here how $\tau$ may depend on the width or the mean of the distribution in $\lambda$. This can be interesting and will be studied later. In passing, a similarity of this kind of computer model with the well-known [*random resistor network*]{} (rrn) model [@kirk] may be noted. The potential of a node in a resistor network can correspond to the wealth of an agent. In rrn, the voltage is updated by Kirchhoff's law: $V_o^{\prime}=V_o+\Delta V$, where $\Delta V=\lambda\sum(V_i-V_o)g_i$, $\lambda=1/\sum g_i$, the $g_i$'s being the conductances of the connecting resistors. 
The above updating rule is very similar to that in the wealth exchange models. Note that $\Delta V$ can be positive or negative. The quantity that remains conserved here is the total current through the connecting resistors in and out of a node. Interestingly, here too the relaxation is observed to be exponential. To check this, we do a simulation on a random resistor network over a $100\times 100$ square lattice. We calculate a quantity $X(t)$ similar to the one defined before. Here we define $$X(t) = {1\over N}\sum_{i,j}|{V_{i,j}(t)-V_{i,j}(t-1)}|,$$ where $V_{i,j}$ is the potential at a node ($i$,$j$). The resistors are assigned conductance values (inverse of resistance) taken from a uniformly random distribution. It is also seen that the relaxation time depends on the width of the randomness in the distribution of resistances/conductances in the rrn, quite similar to what happens in the wealth distribution model with random saving (the results of which are presented in fig.2). Therefore, a similarity (between the models in the two areas) may be drawn in many ways, but we should emphasise here that this is only tentative. However, the random resistor network model is obviously defined on a regular lattice, whereas the wealth exchange models are usually not put on any lattice. But it can also be checked numerically that in the wealth exchange models the wealth distribution does not change if the agents are taken to sit on a regular lattice. Numerical investigations nevertheless suggest that the relaxation time in the present wealth exchange model changes due to the presence of a lattice. We observe that the exponential decay can be most prominently demonstrated for the case of $\epsilon = {1\over 2}$. In this case the system is seen to settle to the lowest possible equilibrium value of $X(t)$. So this is a special case. We also examined cases with $\epsilon$ very close to ${1\over 2}$ (but not exactly equal to it), which are demonstrated in fig.3. 
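A scaled-down version of this resistor-network check can be sketched as follows. The grid size, the fixed top/bottom boundary potentials, the conductance range, and the reading of the prefactor $\lambda$ as the inverse total conductance $1/\sum g_i$ (which makes each update a conductance-weighted average over the neighbours) are all our assumptions:

```python
import random

def rrn_relax(L=16, t_max=40, seed=2):
    """Relaxation of node potentials on an L x L grid of random conductances.
    Top row is held at V = 1, bottom row at V = 0 (our boundary choice).
    Each sweep applies V_o' = V_o + (1/sum g) * sum g_i (V_i - V_o), i.e. it
    replaces V_o by the conductance-weighted average of its neighbours, and
    X(t) is the mean absolute change of the potentials per sweep."""
    rng = random.Random(seed)
    V = [[rng.random() for _ in range(L)] for _ in range(L)]
    for j in range(L):
        V[0][j], V[L - 1][j] = 1.0, 0.0
    # gv[i][j] links nodes (i, j)-(i+1, j); gh[i][j] links (i, j)-(i, j+1)
    gv = [[rng.uniform(0.1, 1.0) for _ in range(L)] for _ in range(L - 1)]
    gh = [[rng.uniform(0.1, 1.0) for _ in range(L - 1)] for _ in range(L)]
    X = []
    for _ in range(t_max):
        newV = [row[:] for row in V]
        for i in range(1, L - 1):          # interior rows; boundary rows fixed
            for j in range(L):
                nbrs = [(i - 1, j, gv[i - 1][j]), (i + 1, j, gv[i][j])]
                if j > 0:
                    nbrs.append((i, j - 1, gh[i][j - 1]))
                if j < L - 1:
                    nbrs.append((i, j + 1, gh[i][j]))
                num = sum(g * V[ni][nj] for ni, nj, g in nbrs)
                den = sum(g for _, _, g in nbrs)
                newV[i][j] = num / den
        X.append(sum(abs(newV[i][j] - V[i][j])
                     for i in range(L) for j in range(L)) / (L * L))
        V = newV
    return X
```

As in the wealth model, the per-sweep change $X(t)$ shrinks as the potentials relax towards the Kirchhoff solution.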
As the value of $\epsilon$ is taken slightly away from 0.5, the relaxation curve stabilises far away from that for $\epsilon={1\over 2}$ (fig.3). The natural question now is how the wealth of an individual approaches a value (for some it grows and for some it decays) on average with time. This question is addressed in the work in [@prevrelax]. In the random gambling model and in the model with constant $\lambda$, an agent ends up with the same value of wealth on average. However, the model with random or characteristic saving [@varlam] that we have dealt with here is quite different. The agents with higher $\lambda$ (higher saving propensity) end up with more wealth than those with smaller $\lambda$ values. This can be understood from common knowledge: agents with higher saving tendencies end up accumulating more wealth than those with smaller saving tendencies. As we start from any arbitrary initial configuration, the wealths of some agents thus grow towards higher values while those of others decay towards lower values. This decay or growth is exponential (as is also checked numerically). The growth or decay of all the agents is reflected in the relaxation of the entire system. The heuristic argument that follows may be helpful in understanding the general nature of relaxation and that for the special case of $\epsilon={1\over 2}$. [**Heuristic Arguments on the Nature of Relaxation**]{} Let us consider a general model which can correspond to any of the models stated earlier as we tune its parameters appropriately. A discussion of this, based on the transfer matrix approach, is given in [@abhi-gen]. 
The algorithm of wealth exchange in the general model is the following: $$\begin{aligned} w_i(t+1) & = & \epsilon_1 w_i(t)+\epsilon_2 w_j(t) \label{algo}\\ w_j(t+1) & = & (1-\epsilon_1)w_i(t)+(1-\epsilon_2)w_j(t) \label{algo1},\end{aligned}$$ where the parameters $\epsilon_1$ and $\epsilon_2$ can be positive or negative and can be related to the actual parameters in the models under consideration. Note that the values of the above parameters are supposed to be uniformly distributed in some range. For example, if $\epsilon_1$ and $\epsilon_2$ are both uniform random numbers between 0 and 1, a little analysis suggests that $w_i(t+1)$ in eqn.(\[algo\]) will be distributed in the range between $0$ and $(w_i+w_j)$ (equivalently, the change $\Delta w_i$ lies between $-w_i$ and $w_j$). Thus the $w_i$'s can be thought of as continuous variables in a certain range. The $w_i$'s, however, can take only positive values in the models by design. Therefore, we attempt to construct differential equations out of the above-mentioned coupled equations \[eqn.(\[algo\]) & (\[algo1\])\], assuming that the change in the wealth, $\Delta w_i = w_i(t+1)-w_i(t)$, occurs in time $\Delta t=1$. The time steps $\Delta t$ can be adjusted according to our convenience. We arrive at the following differential equation: $${d^2w_i\over dt^2} + (1+\epsilon_2-\epsilon_1){dw_i\over dt} = 0. \label{diffeqn}$$ The solution of the above homogeneous differential equation is of the following form: $$w_i(t) = a + b \exp(-kt), \label{sol}$$ where $k = (1+\epsilon_2-\epsilon_1)$; $k$ can be positive or negative depending on the choice of the parameters $\epsilon_1$ and $\epsilon_2$. Therefore, the above solution can be seen to be either growing or decaying exponentially from or to a certain value. Now we consider the wealth exchange model with random saving as a special case. 
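The step from the coupled difference equations to eqn.(\[diffeqn\]) can be made explicit (a sketch, with $\Delta t=1$). The two update rules give $$\dot w_i = (\epsilon_1-1)\,w_i + \epsilon_2\,w_j \,, \hspace{3em} \dot w_j = (1-\epsilon_1)\,w_i - \epsilon_2\,w_j \,,$$ so that $\dot w_i + \dot w_j = 0$, reflecting the conservation of total wealth. Differentiating the first equation once more and using $\dot w_j = -\dot w_i$ eliminates $w_j$: $$\ddot w_i = (\epsilon_1-1)\,\dot w_i + \epsilon_2\,\dot w_j = (\epsilon_1-1-\epsilon_2)\,\dot w_i \,,$$ which is eqn.(\[diffeqn\]) with decay constant $k=1+\epsilon_2-\epsilon_1$.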
The wealth of the $i$-th agent evolves in the following way: $$w_i(t+1)=\lambda_i w_i(t)+\epsilon[(1-\lambda_i)w_i(t)+(1-\lambda_j)w_j(t)], \label{randomsav}$$ where the $i$-th and $j$-th agents save the fractions $\lambda_i$ and $\lambda_j$ of their wealth, respectively, at time $t$. In view of the general equation mentioned above \[eqn.(\[algo\])\], we have $\epsilon_1=\lambda_i+\epsilon(1-\lambda_i)$ and $\epsilon_2=\epsilon(1-\lambda_j)$. Thus for the random saving model, we obtain the following expression for $k$ for the choice $\epsilon = {1\over 2}$: $$k=1+\epsilon_2-\epsilon_1=1-{1\over 2}(\lambda_i+\lambda_j).$$ As we consider distributions in $\lambda$ such that $0 < \lambda_i,\lambda_j < 1$, the value of $k$ must always be positive ($k > 0$). Hence the solution \[eqn.(\[sol\])\] always decays exponentially in this special case for any values of the $\lambda$'s, as long as they are bounded between 0 and 1. In fact, we may now intuitively understand how a pure exponential relaxation can appear in the case of $\epsilon = {1\over 2}$, which is shown in fig.2. This fact is even more evident from the demonstration in fig.3. In conclusion, we have tried to establish the nature of relaxation in wealth exchange models of a certain class by studying the relaxation in the random saving model. The nature of the relaxation is found to be exponential (followed by a saturation), and this is possibly true for other models (not discussed here) based on similar principles. The relaxation process (and the relaxation time) can be greatly manipulated by tuning the parameters $\lambda$ and $\epsilon$ involved in the random saving model. The relaxation is shown to be purely exponential for the special choice of $\epsilon = {1\over 2}$ in this model. The claims are made from numerical simulation results and are supported by heuristic arguments. [**Acknowledgement**]{} The author wants to thank [*D. Stauffer*]{} and [*B.K. 
Chakrabarti*]{} for their valuable comments and critical remarks on the work and on the manuscript. [9]{} Articles in [*Econophysics of Wealth Distributions*]{}, A. Chatterjee, S. Yarlagadda and B.K. Chakrabarti ([*eds.*]{}), Springer (2005). \[See the Chapters by T. Lux (p.51); by A. Chatterjee and B.K. Chakrabarti (p.79); by K. Bhattacharya, G. Mukherjee and S.S. Manna (p.111); by P. Richmond, P. Repetowicz and S. Hutzler (p.120); by S. Sinha (p.177)\]. A. Kar Gupta in [*Econophysics and Sociophysics: Trends and Perspectives*]{} (p.161), B.K. Chakrabarti, A.K. Chakraborty, A.K. Chatterjee ([*eds.*]{}), Wiley-VCH, Verlag GmbH & Co. KGaA (2006); Also see [*arXiv.org*]{}, physics/0604161. Also see the chapters by P. Richmond [*et al.*]{} (p.131) and by Y. Wang [*et al.*]{} (p.191) in the same book. Victor M. Yakovenko, [*Econophysics, Statistical Mechanics Approach to*]{}, review article for "Encyclopedia of Complexity and System Science" to be published by Springer; arXiv:0709.3662v1 \[physics.soc-ph\]. A.A. Drăgulescu and V.M. Yakovenko, [*Eur. Phys. J. B*]{} [**17**]{}, 723 (2000). A. Chakraborti and B.K. Chakrabarti, [*Eur. Phys. J. B*]{} [**17**]{}, 167 (2000). A. Chatterjee, B.K. Chakrabarti and S.S. Manna, [*Physica A*]{} [**335**]{}, 155 (2004). A. Kar Gupta, [*Physica A*]{} [**359**]{}, 634 (2004). M. Patriarca, A. Chakraborti, E. Heinsalu and G. Germano, [*Eur. Phys. J. B*]{} [**57**]{}, 219 (2007). S. Kirkpatrick, [*Rev. Mod. Phys.*]{} [**45**]{}, 574 (1973); A. Kar Gupta and A.K. Sen, [*Physica A*]{} [**215**]{}, 1 (1995).
--- abstract: | We study contributions to the nucleon strange quark vector current form factors from intermediate states containing $K^{*}$ mesons. We show how these contributions may be comparable in magnitude to those made by $K$ mesons, using methods complementary to those employed in quark model studies. We also analyze the degree of theoretical uncertainty associated with $K^{*}$ contributions.\ PACS numbers: 14.20.Dh, 12.40.-y\ author: - | L.L. Barz$^{1,2}$, H. Forkel$^{3,4}$, H.-W. Hammer$^{5,6}$, F.S. Navarra$^1$, M. Nielsen$^1$, and M.J. Ramsey-Musolf$^{6,7}$[^1]\ \[0.5cm\] [*$^1$Instituto de Física, Universidade de São Paulo*]{}\ [*C.P. 66318, 05315-970 São Paulo, SP, Brazil*]{}\ [*$^2$Faculdade de Engenharia de Joinville, Universidade Estadual de Santa Catarina*]{}\ [*82223-100 Joinville, SC, Brazil*]{}\ [*$^3$European Centre for Theoretical Studies in Nuclear Physics and Related Areas*]{},\ [*Villa Tambosi, Strada delle Tabarelle 286, I-38050 Villazzano, Italy*]{}\ [*$^4$Institut f[ü]{}r Theoretische Physik, Universit[ä]{}t Heidelberg*]{},\ [*Philosophenweg 19, D-69120 Heidelberg, Germany*]{}\ [*$^5$ TRIUMF, 4004 Wesbrook Mall, Vancouver, B.C., Canada V6T 2A3*]{}\ [*$^6$ Institute for Nuclear Theory, University of Washington, Seattle, WA 98195, USA*]{}\ [*$^7$ Department of Physics, University of Connecticut, Storrs, CT 08629 USA*]{} title: '**$K^*$ Mesons and Nucleon Strangeness** ' ---
Introduction {#intro} ============ The role played by virtual $q\bar{q}$ pairs in the low-energy structure of hadrons remains one of the outstanding questions for hadron structure physics. Despite the evidence for important $q\bar{q}$ sea effects obtained with deep inelastic scattering, the experimental manifestations of explicit sea-quark effects at low energies are minimal. Partial explanations for this absence have been given using a non-relativistic quark model framework by the authors of Ref. [@Gei90], who noted that in the adiabatic approximation, virtual $q\bar{q}$ pairs renormalize the string tension and, therefore, do not have any discernable impact on the low-lying spectrum of hadronic states. Similarly, virtual $q\bar{q}$ effects – in the guise of virtual mesonic loops – which could conceivably lead to large $\rho-\omega$ and $\phi-\omega$ mixing were shown to cancel at second order in strong couplings when a sum is performed over a tower of virtual hadronic states [@Gei91]. The latter result provides insight into the applicability of the OZI Rule to $V$-$V'$ mixing despite the naïve scale of $q\bar{q}$ effects expected at one-loop order. Nevertheless, several mysteries involving $q\bar{q}$ pairs remain to be solved. Of particular interest are those involving nucleon matrix elements of strange quark operators, $\bra{N}\bar{s}\Gamma s\ket{N}$. The latter explicitly probe properties of the $q\bar{q}$ sea at low energies, since the nucleon contains no valence strange quarks. Moreover, the mass scale associated with $s\bar{s}$ pairs – $m_s\sim\Lambda_{QCD}$ – implies that such pairs live for sufficiently long times and propagate over sufficiently large distances to produce observable effects when probed explicitly. In this respect, $s\bar{s}$ pairs stand in contrast with, [*e.g.*]{}, $c\bar{c}$ pairs, whose effects one expects to be suppressed by powers of $\Lambda_{QCD}/m_c\sim 0.1$ [@Kap88]. 
Some support for these simple-minded expectations is provided by determinations of $\bra{N}\bar{s} s\ket{N}$ from the $\pi N$ sigma term [@Che76] and of $\bra{N}\bar{s}\gamma_\mu\gamma_5 s\ket{N}$ from polarized deep inelastic scattering [@EMC] and neutrino-nucleus quasi-elastic scattering [@Ahr87]. The former suggests that roughly 15% of the nucleon mass is generated by $s\bar{s}$ pairs, while the latter implies that strange quarks contribute about 30% of the total quark contribution to the nucleon spin[^2]. Measurements of $\bra{N}\bar{s}\gamma_\mu s\ket{N}$, which would provide information about the strange quark contribution to the nucleon magnetic moment and rms radius, are presently underway at MIT-Bates [@MIT], Mainz [@Mai], and the Jefferson Laboratory [@TJN]. The first results for the strangeness magnetic form factor have been reported in Ref. [@MIT]. One expects this set of $\bra{N}\bar{s}\Gamma s\ket{N}$ determinations to provide a clearer picture of the $q\bar{q}$ sea than obtained from existing spectroscopic data alone. Despite over a decade of theoretical efforts to study nucleon strangeness, the theoretical understanding of s-quark matrix elements remains in its infancy. In the case of $\bra{N}\bar{s}\gamma_\mu s\ket{N}$, a plethora of predictions have been reported in the literature [@Lat; @Mod; @Jaf89; @Had; @Gei97; @Mrm97a; @Mus97b; @Mrm97c; @HWH97]. While a few lattice results have been obtained by different groups [@Lat], they are not entirely consistent with each other nor with the recent first results for the “strange magnetic moment” obtained by the SAMPLE collaboration [@MIT]. The remaining predictions – based generally on QCD-inspired nucleon models [@Mod; @Gei97] or low-energy truncations of QCD in a hadronic basis [@Jaf89; @Had; @Mrm97a] – display a broad range in magnitude and sign.
Recently, it has been shown why such truncations – either in the strong coupling constant ($g$) expansion (loop order) [@Mus97b; @Mrm97c] or hadronic excitation energy ($\Delta E$) [@Gei97] – are untrustworthy and may produce misleading results. The implication of these studies is that the intuitively appealing picture of a kaon cloud around the nucleon does not suffice to describe $s\bar{s}$ fluctuations in the nucleon. It appears that one must include both the full set of virtual hadronic intermediate states [@Gei97] and the full set of higher-order rescattering effects for a given state [@Mus97b; @Mrm97c] in order to obtain a physically realistic prediction. In principle, corrections to the leading order truncations in $\Delta E$ and $g$ could be accounted for by the appropriate low-energy constants in chiral perturbation theory (CHPT); however, chiral symmetry does not afford a determination of the low-energy constant relevant to nucleon vector current strangeness [@Mrm97a]. Hence, one must understand in some detail the short-distance strong interaction mechanisms responsible for the low-energy structure of the strange quark sea. In the present study, we amplify on the themes of Refs. [@Gei97; @Mrm97a; @Mus97b; @Mrm97c] by studying the $K^{\ast}$ contribution to $\bra{N}\bar{s}\gamma_\mu s\ket{N}$. Our objective is two-fold: (i) to illustrate, using an alternative framework to that of Ref. [@Gei97], how inclusion of higher-lying intermediate states may alter conclusions obtained when only the lightest “OZI-allowed” fluctuation is included, and (ii) to demonstrate the theoretical uncertainty associated with computing higher-lying contributions. For these purposes, we restrict ourselves to second order in the strong meson-baryon coupling, $g$, when treating hadronic amplitudes $N\to YK^*$ [*etc.*]{}, fully cognizant of the shortcomings such a truncation entails. In fact, the kind of analysis of higher-order effects reported in Ref.
[@Mrm97c] for the $K\bar{K}$ intermediate state does not appear feasible at present for higher lying states. Consequently, some form of model-dependent truncation is necessary when treating these states, and we do not, therefore, pretend to make any reliable numerical predictions. Rather, we use the ${\cal O}(g^2)$ (one-loop) truncation to illustrate the two main points stated above. In this respect, our study is similar in spirit to that of Ref. [@HWH97], where a comparison at one-loop order was made to show that contributions from intermediate states containing no valence strangeness ($3\pi$) and those containing valence s-quarks ($K\bar{K}$) may be comparable in magnitude. In order to estimate the degree of theoretical uncertainty one has in the numerical prediction for the $K^{\ast}$ contribution, we use two approaches to carry out the calculation: (a) an explicit one-loop calculation, where form factors are included at hadronic vertices and the intermediate state $\bar{s}\gamma_\mu s$ matrix elements are taken to be point-like, and (b) a computation using dispersion relations, in which the $N\bar{N}\to KK,\ KK^{\ast},\ K^{\ast}K^{\ast}$ amplitudes are computed in the Born approximation but form factors are included at the $\bar{s}\gamma_\mu s$ insertions. These computations are outlined, respectively, in Sections II and III. In Section IV, we discuss the results of the calculations and compare with the conclusions drawn in Ref. [@Gei97]. One-loop calculation {#ext} ==================== The first “kaon cloud” estimates of $\bra{N}\bar{s}\gamma_\mu s\ket{N}$ were obtained from the amplitudes associated with the diagrams of Fig. 1, where only the contributions for $B=B'=\Lambda, \Sigma$ and $M=M'=K$ were included [@Had].
Here we consider the next heaviest contributions by including the octet of spin-one mesons as well as the pseudoscalars, and compute the following amplitudes where, in each case, $B=B'=\Lambda$ or $\Sigma$: (1a) for $M=K^{\ast}$; (1b) for $M=M'=K^{\ast}$; (1b) for $M=K$, $M'=K^{\ast}$; (1c) for $M=K^{\ast}$. As we discuss below, the diagrams (1c) are required for consistency with the Ward-Takahashi identities. The resulting contributions to the strange-quark vector current matrix element are embodied in the Dirac and Pauli form factors defined via $$\langle N(p') |\bar{s}\gamma_\mu s| N(p)\rangle = \bar{U}(p')\left[F_1^{(s)}(q^2)\,\gamma_\mu + {i\sigma_{\mu\nu}q^\nu\over 2m_N}\,F_2^{(s)}(q^2)\right] U(p)\ \ \ , \label{ma}$$ where $U(p)$ denotes the nucleon spinor. Recall that $F_1^{(s)}(0)=0$, due to the zero strangeness charge of the nucleon. The leading nonvanishing moments of the corresponding Sachs form factors $$\begin{aligned} G_E^{(s)}(q^2) &=& F_1^{(s)}(q^2)+ {q^2\over 4m_N^2}\,F_2^{(s)}(q^2)\ ,\\ G_M^{(s)}(q^2) &=& F_1^{(s)}(q^2)+F_2^{(s)}(q^2)\end{aligned}$$ are the strangeness radius $$\langle r_s^2 \rangle_S = 6\,{d\over dq^2}\, G_E^{(s)}(q^2)\Big|_{q^2=0}\ , \label{rs}$$ and the strangeness magnetic moment $$\mu_s=G_M^{(s)}(0)=F_2^{(s)}(0)\ . \label{mus}$$ For future reference, we note that the Sachs radius $\langle r_s^2 \rangle_S$ is related to the corresponding Dirac radius as $$\langle r^2_s \rangle_S= \langle r^2_s \rangle_D + {3\over 2m_N^2}\,\mu_s\ . \label{sachsdirac}$$ In order to extend the $K - \Lambda$ loop framework to include $K^*$-meson contributions, we start from the meson baryon effective lagrangians $$\begin{aligned} \label{1aa} {\cal L}_{MB} & = &-ig_{ps} \bar{B} \gamma_5 B K\ \ \ , \\ {\cal L}_{VB} & = & -g_v(\bar{B} \gamma_\alpha B V^\alpha + {\kappa\over 2m_N}\bar{B} \sigma_{\alpha\beta} B \partial^\alpha V^\beta)\; , \label{la}\end{aligned}$$ where $B$, $K$, and $V^\alpha$ are the baryon, kaon, and $K^*$ vector-meson fields respectively, $m_N=939\MeV$ is the nucleon mass and $\kappa$ is the ratio of tensor to vector coupling, $\kappa=g_t/g_v=3.26$, with $g_v/\sqrt{4\pi}= -1.588$ [@hol89].
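The relation between the Sachs and Dirac radii quoted in Eq. (\[sachsdirac\]) follows directly from differentiating $G_E^{(s)}$; a minimal numerical sketch (the polynomial form factors below are purely illustrative, not fits):

```python
# Check of <r_s^2>_S = <r_s^2>_D + (3/2 m_N^2) mu_s, using hypothetical
# toy form factors (F1 with F1(0)=0, as required by zero net strangeness)
# and central finite differences for the slopes at q^2 = 0.

m_N = 0.939  # GeV

def F1(q2):                      # toy Dirac strangeness form factor
    return 0.37 * q2 - 0.11 * q2**2

def F2(q2):                      # toy Pauli strangeness form factor
    return -0.24 + 0.05 * q2

def GE(q2):                      # Sachs electric combination
    return F1(q2) + q2 / (4 * m_N**2) * F2(q2)

h = 1e-6
r2_sachs = 6 * (GE(h) - GE(-h)) / (2 * h)   # <r^2>_S = 6 dG_E/dq^2 at 0
r2_dirac = 6 * (F1(h) - F1(-h)) / (2 * h)   # <r^2>_D = 6 dF_1/dq^2 at 0
mu_s = F2(0.0)                              # strangeness magnetic moment

assert abs(r2_sachs - (r2_dirac + 1.5 * mu_s / m_N**2)) < 1e-8
```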
The strength of the pseudoscalar coupling is $g_{ps}/\sqrt{4\pi} = -3.944$ [@hol89]. In order to account in some way for the finite extent of the hadrons appearing in the loops of Fig. 1, we include form factors at the hadronic vertices. For simplicity, we adopt a monopole form $$F(k^2)={m^2-\Lambda^2\over k^2-\Lambda^2}\ \ \ . \label{ff}$$ Although there is no rigorous justification for this choice, form factors of this type for the $KN\Lambda$ and $K^* N \Lambda$ vertices are used in the Bonn potential. Their cut-off parameters are determined from hyperon-nucleon scattering data [@hol89]: $\Lambda_{K^*}=2.2\ (2.1)\GeV$ and $\Lambda_K=1.2\ (1.4) \GeV$, with masses $m_K= 495$ MeV and $m_{K^*}=895\MeV$ [@PDG]. The numbers in parentheses denote values obtained in an alternate model for the baryon-baryon interaction. The momentum of the $K^*$ is $k$. These form factors render all the following loop integrals finite and reproduce the on-shell values of the mesonic couplings (since $F(m^2)=1$). In the presence of electroweak fields the non-local meson-baryon interaction of Eqs. (\[1aa\]-\[ff\]) gives rise to vertex currents. In order to maintain gauge invariance we introduce the photon field by minimal substitution of the momentum variable in the form factors[^3]. This procedure generates the nonlocal seagull vertex [@ohta; @wan96; @Mrm97a] $$i\Gamma^{(s)}_\mu(k,q)=ig_v\,Q_{K^*}\, (q\mp 2k)_\mu\, {F((k\mp q)^2)-F(k^2)\over (k\mp q)^2-k^2}\ , \label{seag}$$ where the upper/lower signs correspond to an incoming/outgoing vector meson (with index $\alpha$), $Q_{K^*}=-1$ is the $K^*$ strangeness charge, and $q$ is the photon momentum. Due to the derivative in eq. (\[la\]), the minimal substitution also generates an additional seagull vertex (even in the absence of meson-nucleon form factors) $$i\Gamma^{(v)}_{\alpha\mu}(k)= \mp\, {g_v\,\kappa\over 2m_N}\, F((q\mp k)^2)\, \sigma_{\alpha\mu}\ , \label{ver}$$ where the sign convention is the same as above. The diagonal matrix elements of $\bar{s}\gamma_\mu s$ for strange mesons and baryons are straightforwardly determined by current conservation and the net strangeness charge of each hadron.
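The two properties claimed for the monopole form factor, on-shell normalization $F(m^2)=1$ and suppression of large momenta, are easy to verify numerically; a short sketch using the central Bonn cutoff values quoted above:

```python
# Sanity check of the monopole form factor F(k^2) = (m^2 - L^2)/(k^2 - L^2)
# with the Bonn cutoffs quoted in the text (central values only).

def monopole(k2, m, lam):
    return (m**2 - lam**2) / (k2 - lam**2)

m_K, lam_K = 0.495, 1.2      # GeV, kaon mass and K N Lambda cutoff
m_Ks, lam_Ks = 0.895, 2.2    # GeV, K* mass and K* N Lambda cutoff

# on-shell normalization: the physical couplings are reproduced
assert abs(monopole(m_K**2, m_K, lam_K) - 1.0) < 1e-12
assert abs(monopole(m_Ks**2, m_Ks, lam_Ks) - 1.0) < 1e-12

# spacelike k^2 = -1 GeV^2: both vertices are damped; the kaon vertex
# more strongly, because of its smaller cutoff
fK = monopole(-1.0, m_K, lam_K)
fKs = monopole(-1.0, m_Ks, lam_Ks)
assert 0.0 < fK < fKs < 1.0
```

The hierarchy `fK < fKs` is the numerical face of the observation made later in the Discussion: the harder $K^*$ cutoff lets larger loop momenta through.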
The structure of the s-quark current spin-flip transition from $K$ to $K^*$ is $$\langle K^*_a(k_1,\varepsilon)|\bar{s}\gamma_\mu s|K_b(k_2)\rangle= {\ffkks(q^2)\over m_{K^*}}\,\epsilon_{\mu\nu\alpha\beta}\,k_1^\nu k_2^\alpha \varepsilon^{*\beta}\,\delta_{ab}\ , $$ where $a$ and $b$ are isospin indices, $\varepsilon^\beta$ is the polarization vector of the $K^*$, and $k_1, k_2$ are the meson momenta. In a loop calculation, $\ffkks(q^2)$ is taken to be a constant equal to its value at the photon point. In order to estimate this constant, we follow Ref. [@gm] and assume $\ffkks(q^2)$ to be dominated at low-$q^2$ by the lightest $I^G(J^{PC})=0^-(1^{--})$ vector mesons[^4]: $${\ffkks(q^2)\over m_{K^*}}=-\sum_{V=\omega,\phi} {G_{K^*VK}\,S_V\over q^2-m_V^2}\ , \label{omegaphi}$$ where $G_{K^*VK}$ are the couplings of the vector meson $V$ to $K$ and $K^*$. $S_V$ determines the strength of the strange-current conversion into $V$: $$\langle 0|\bar{s}\gamma_\mu s|V\rangle=S_V\,\varepsilon_\mu={m_V^2\over f_V}\,{f_V\over f_V^{(s)}}\,\varepsilon_\mu\ . $$ From the known isoscalar electromagnetic couplings $f_{\omega,\phi}$ one can delineate the corresponding strange-current couplings with the help of a simple quark counting prescription based on flavor symmetry [@Jaf89]: $${f_\omega\over f_\omega^{(s)}}=-\sqrt{6}\,{\sin\epsilon\over\sin(\epsilon+\theta_0)}\ ,\qquad {f_\phi\over f_\phi^{(s)}}=-\sqrt{6}\,{\cos\epsilon\over\cos(\epsilon+\theta_0)}\ . \label{fvrel}$$ Here $\epsilon=0.053$ [@jain] is the mixing angle between the pure $\overline{u}u+ \overline{d}d$ and $\overline{s}s$ states and the physical vector mesons $\omega$ and $\phi$, and $\theta_0$ is the “magic angle” defined by $\sin^2\theta_0 =1/3$. From the above we find $f_\omega / f_\omega^{(s)} = -0.21$ and $f_\phi/ f_\phi^{(s)} = -3.11$. Combined with the strong couplings $G_{K^*\phi K} = -8.94\GeV^{-1}$ and $G_{K^*\omega K} = 6.84 \GeV^{-1}$ estimated in Ref. [@gm] we finally obtain $\ffkks(0)=1.84$. After these preparations[^5], we can evaluate the $K^*$ loop contributions to the nucleon’s strangeness radius and magnetic moment. Explicit expressions for the loop amplitudes are given in Appendix A. The results for the different diagrams are listed in Table I. The implications of these results are discussed in Section IV.
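The quoted ratios $f_\omega/f_\omega^{(s)}=-0.21$ and $f_\phi/f_\phi^{(s)}=-3.11$ depend only on $\epsilon$ and $\theta_0$. A short numerical sketch, assuming the flavor-rotation forms $-\sqrt{6}\sin\epsilon/\sin(\epsilon+\theta_0)$ and $-\sqrt{6}\cos\epsilon/\cos(\epsilon+\theta_0)$ (our reading of Eq. (\[fvrel\]); both quoted values are reproduced to better than 1%):

```python
import math

# Reproduce f_omega/f_omega^(s) = -0.21 and f_phi/f_phi^(s) = -3.11
# from eps = 0.053 and the magic angle sin^2(theta0) = 1/3, assuming
# the flavor-rotation forms stated in the lead-in above.

eps = 0.053                              # omega-phi mixing angle (rad)
theta0 = math.asin(1.0 / math.sqrt(3.0))  # magic angle, sin^2 = 1/3

r_omega = -math.sqrt(6.0) * math.sin(eps) / math.sin(eps + theta0)
r_phi = -math.sqrt(6.0) * math.cos(eps) / math.cos(eps + theta0)

assert abs(r_omega - (-0.21)) < 0.005    # f_omega / f_omega^(s)
assert abs(r_phi - (-3.11)) < 0.02       # f_phi / f_phi^(s)
```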
Dispersion relation calculation {#dispa} =============================== An alternative approach to computing virtual hadronic contributions to strange quark form factors is the use of dispersion relations (DR’s). In principle, DR’s provide a method for including information beyond second order in $g$, both via the strong amplitudes $N\to YK^{*}\to N$ and through the form factors $F_n^{(s)}$ describing the intermediate state matrix elements $\bra{Y}\bar{s} \gamma_\mu s\ket{Y}$, $\bra{K^{*}}\bar{s}\gamma_\mu s\ket{K}$, $\ldots$. The one-loop calculation of Section II is equivalent to the use of a DR in which the strong amplitudes $N\to YK^{*}\to N$ are computed in the Born approximation and the form factors assumed to be point-like: $F_n^{(s)}(q^2)=F_n^{(s)}(0)= {\hbox{const}}$[^6]. The inclusion of rescattering and resonance effects in the $N\to YK^{*}\to N$ amplitude would require the existence of sufficient data for $KN\to NK\pi,\ldots$ or $N\bar{N}\to KK\pi, KK\pi\pi$ [*etc.*]{} to permit analytic continuation of these amplitudes to the unphysical regime as needed for the dispersion relation. Although such a program is feasible to some degree for the $K\bar{K}$ intermediate state [@Mrm97c], it does not appear practical at present for the case of higher mass strange mesons of interest here. Consequently, we include the amplitude for $N\to YK^{*}\to N$ at the level of the Born approximation. In the case of the $F_n^{(s)}(q^2)$, however, it is possible to introduce some structure beyond the point-like approximation, albeit in a model-dependent way. Our strategy for doing so is discussed below. First, we review the formalism for treating strangeness form factors with DR’s. 
We write an unsubtracted dispersion relation for the Pauli form factor $F^{(s)}_2$ and subtract the one for the Dirac form factor $F^{(s)}_1$ once at $t=0$ (where $F^{(s)}_1$ vanishes, see above): $$\begin{aligned} \label{disp1} F^{(s)}_1(t) &=& \frac{t}{\pi}\int\limits_{t_0}^\infty dt' \frac{\hbox{Im}\ F^{(s)}_1(t')}{t'(t' -t)}\ \ \ , \\ F^{(s)}_2(t) &=& \frac{1}{\pi}\int\limits_{t_0}^\infty dt' \frac{\hbox{Im}\ F^{(s)}_2(t')}{t' -t} \ \ \ , \nonumber\end{aligned}$$ where $t\equiv q^2$. The cut along the real $t$-axis starts at the threshold $t_0$ of a given multi-particle intermediate state, as [*e.g.*]{} $t_0 =4\mks$ for the $K\bar{K}$ state. From Eqs. (\[disp1\]) one expects that contributions from the lightest intermediate states will mainly determine the behavior of the form factors at $t=0$. The imaginary part of the form factors is readily obtained by means of a spectral decomposition. Since the matrix elements $\bra{N(p)}\bar{s}\gamma_\mu s \ket{N(p')}$ and $\bra{N(p);\bar{N}(\pbar)}\bar{s}\gamma_\mu s\ket{0}$ are simply related by crossing symmetry, we write the spectral decomposition for the latter one as [@Mrm97c], $$\begin{aligned} \label{spec_t} & &{\hbox{Im}}\ \bra{N(p);\bar{N}(\pbar)}\bar{s}\gamma_\mu s\ket{0} = {\hbox{Im}}\ \bar{U}(p)\left[F_1^{(s)}(t)\gamma_\mu + i{\sigma_{\mu\nu}(p+ \pbar)^\nu \over 2\mn} F_2^{(s)}(t)\right]V(\pbar)\\ & &\qquad\rightarrow {\pi\over\sqrt{Z}}(2\pi)^{3/2}{\cal N}\sum_{n} \bra{N(p)}\bar{J_N}(0)\ket{n}\bra{n}\bar{s}\gamma_\mu s\ket{0} V(\pbar) \delta^4(p+\pbar-p_n)\, , \nonumber\end{aligned}$$ where ${\cal N}$ is a spinor normalization factor, $Z$ is the nucleon’s wave function renormalization constant, and $J_N(x)$ is a nucleon source. Nonzero contributions arise only from physical states $\ket{n}$ with the same quantum numbers as the current $\bar{s}\gamma_\mu s$, [*i.e.*]{} $I^G(J^{PC})=0^-(1^{--})$ and zero baryon number. These asymptotic states $\ket{n}$ in the above sum do not explicitly contain resonances. 
Resonance contributions arise via the matrix elements $\bra{N(p)}\bar{J_N}(0)\ket{n}$ and $\bra{n}\bar{s}\gamma_\mu s\ket{0}$. In the vector meson dominance approximation, one assumes the product of the two matrix elements in Eq. (\[spec\_t\]) to be strongly peaked near vector meson masses. This approximation has been used in several pole analyses of the strange vector form factors [@Jaf89]. The lightest contributing intermediate states are purely mesonic: $3\pi$, $5\pi$, $7\pi$, $K\bar{K}$, $K\bar{K}\pi$, $9\pi$, $K\bar{K}\pi\pi$, $\ldots$, in order. Intermediate baryon states $N\bar{N}$, $\Lambda\bar{\Lambda},\ldots$ appear with significantly higher thresholds, $t_0$. In the present study, we restrict ourselves to the strange states and consider corrections to the $K\bar{K}$ state. The first such corrections (in order of threshold) are those involving the $K\bar{K}\pi$ and $K\bar{K}\pi\pi$ intermediate states. In the previous section, these states were included using the narrow resonance approximation: $K\bar{K}\pi \to K^{*}\bar{K}$ and $K\bar{K}\pi\pi\to K^{*}\bar{K}^{*}$. In order to make contact with the loop results of Section II as well as with the calculation of Ref. [@Gei97] where in effect the same approximation was made, we adopt the narrow resonance approximation here. We also include the $\Lambda\bar{\Lambda}$ and $\Sigma\bar{\Sigma}$ intermediate states, even though they are not among the lightest in the series, in order to compare the DR results with those of the loop and quark model calculations, which contain these states. As noted earlier, we also include the strong amplitudes $\bra{N}\bar{J}_N(0)\ket{n}$ at the level of the Born approximation. For the matrix elements $\bra{n}\bar{s}\gamma_\mu s\ket{0}$, parameterized by form factors $F_n^{(s)}(t)$, we go beyond the point-like approximation, $$F_n^{(s)}(t)\equiv F_n^{(s)}(0)\equiv F_n^0\ , $$ of the one-loop and quark model calculations by allowing for some structure in the form factors.
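Before introducing resonance structure, it is instructive to see how the unsubtracted DR of Eq. (\[disp1\]) weights a resonant spectral function. The toy sketch below feeds a unit-weight Lorentzian (a hypothetical input, with $\phi'(1680)$-like mass and width and the $KK^*$ threshold) into $F(0)=\pi^{-1}\int dt'\,{\rm Im}\,F(t')/t'$; in the narrow-width limit the answer would be exactly 1, and the finite width and threshold cut reduce it by roughly 10%:

```python
import math

# Toy unsubtracted dispersion relation F(0) = (1/pi) Int Im F(t')/t' dt'
# with a Lorentzian spectral function of weight pi*m^2,
#   Im F(t') = m^3 G / ((t' - m^2)^2 + (m G)^2),
# using phi'(1680)-like parameters and the K K* threshold (toy inputs).

m, width = 1.68, 0.15                  # GeV
t0 = (0.495 + 0.895) ** 2              # K K* threshold (GeV^2)

def im_f(tp):
    return m**3 * width / ((tp - m**2) ** 2 + (m * width) ** 2)

# trapezoidal quadrature on t' in [t0, T]
T, n = 500.0, 400_000
h = (T - t0) / n
s = 0.5 * (im_f(t0) / t0 + im_f(T) / T)
for i in range(1, n):
    tp = t0 + i * h
    s += im_f(tp) / tp
f0 = h * s / math.pi

assert 0.8 < f0 < 0.95   # narrow-width limit would give exactly 1
```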
For the mesonic intermediate states, we make a simple vector meson dominance (VMD) [*ansatz*]{}. This [*ansatz*]{} is well justified for the $K\bar{K}$ state, following from $e^+e^-\to K\bar{K}$ cross section data [@Del81] and simple flavor rotation arguments [@Jaf89]. The $e^+ e^-\to K\bar{K}$ data indicate a strong peak in the vicinity of the $\phi$ resonance, with a subsequent rapid fall-off as $q^2$ (time-like) increases away from $m_\phi^2$. Inclusion of a VMD-type form factor peaked near the $\phi$-resonance significantly affects the $K\bar{K}$ component of the spectral functions \[Eqs. (\[spec\_t\])\] and the resulting contribution to the strangeness moments as compared with the use of a point-like form factor. In the case of the $KK\pi\sim KK^{*}$ and $KK\pi\pi\sim K^{*}K^{*}$ states, we take the $\fns(t)$ to be dominated by either the $\phi(1020)$ or the $\phi'(1680)$. Following Ref. [@Mus97b] we write $$\label{fk_vdm} | \fns(t)_{VDM}| =F_n^0 \left\{ {(\xi^2)^2+M^2\Gamma^2\over [(\xi^2-t)^2 +M^2\Gamma^2]}\right\}^{1/2}\,,$$ where $M=m_{\phi}=1020$ MeV or $m_{\phi'}=1680\pm 20$ MeV, $\Gamma= \Gamma_{\phi}=4.43\pm 0.05$ MeV or $\Gamma_{\phi'}= 150\pm 50$ MeV are the total widths of the $\phi$ or $\phi'$ [@PDG], and $\xi^2\equiv M^2-\Gamma^2 /4$. As we note below, we need only the magnitude of the form factor in the present calculation, as the $n\to N\bar{N}$ amplitudes are real in the Born approximation. Because the states $KK\pi\sim KK^{*}$ and $KK\pi\pi\sim K^{*}K^{*}$ contribute to the DR of Eq. (\[disp1\]) for $t_0> m_\phi^2$, we expect higher mass vector mesons to play a significant role in the $\fns(t)$ in the region of integration. The case for $\phi'$ dominance is most convincing for the $K\bar{K}\pi$ intermediate state. Data for $\sigma(e^+e^-\to K_S^0 K^\pm \pi^\mp)$ in the range $1.4 \leq \sqrt{s} \leq 2.18$ GeV display a pronounced peak near $\sqrt{s}=1.680$ GeV [@Man82].
Furthermore, Dalitz plot analyses imply that the final state is dominated by a $K^{\ast}K\leftrightarrow KK\pi$ resonance. The OZI rule implies that the $\phi'$ is nearly a pure $s\bar{s}$ state, while SU(3) relations and data for $\sigma(e^+e^-\to \rho\pi;\sqrt{s}\approx 1.65)$ constrain the $\omega'-\phi'$ mixing angle to deviate by less than $10^\circ$ from ideal mixing [@Buo82]. While the tails of the $\rho(770)$, $\omega(780)$, and $\phi(1020)$ affect details of the peak structure, the dominant effect is that of the $\phi'$ [@Buo82]. In the absence of any other structure in $\sigma(e^+e^-\to KK\pi)$ in the region $t>t_0$, we conclude that $\ffkks(t)$ should also be dominated by the $\phi'(1680)$. Indeed, the $\omega\phi$ model of Eq. (\[omegaphi\]), which is credible for low-$t$, is inconsistent with annihilation data for $t> t_0$. Using it in this region would generate an artificial suppression of the $KK\pi$ spectral function. With these considerations in mind, it is straightforward to determine the normalization $F_{KK^*}^0$ appearing in Eq. (\[fk\_vdm\]). Following the notation of Ref. [@gm], we obtain $$F_{KK^*}^0= G_{KK^*\phi'}\, m_{K^*}/f_{\phi'}^{(s)}\ \ , \label{fkks_zero}$$ where $1/f_{\phi'}^{(s)}\approx -3/f_{\phi'}$, and $G_{KK^*\phi'}$ is the strong $\phi'\to KK^*$ coupling. The latter may be obtained from $\Gamma(\phi'\to KK^*)$ which, for a single final charge state, is [@gm] $$\Gamma(\phi'\to KK^*) = {|G_{KK^*\phi'}|^2\over 12\pi}\, |k_F|^3\ \ , $$ where $k_F=463$ MeV is the $K$ or $K^*$ CM momentum. Assuming $\Gamma(\phi'\to {\hbox{all}})$ is dominated by $\Gamma(\phi'\to KK^*)$ [@PDG], we obtain $|G_{KK^*\phi'}|\approx 3.8$ GeV$^{-1}$. Similarly, the $\phi'$ electronic width determines $f_{\phi'}$: $$\Gamma(\phi'\to e^+e^-) = {4\pi\over 3}\, \alpha^2\, {M_{\phi'}\over f_{\phi'}^2}\ \ \ . $$ Analyses of $e^+e^-$ data yield $\Gamma(\phi'\to e^+e^-)=0.7$ keV [@Buo82; @Bis91], from which we obtain $f_{\phi'}\approx 23$.
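The coupling extraction above is elementary arithmetic and can be reproduced directly from the quoted inputs; the sketch below assumes, as a simplification, that the total width $\Gamma_{\phi'}\approx 150$ MeV is shared equally among the four $KK^*$ charge states:

```python
import math

# Reproduce |G_{KK* phi'}| ~ 3.8 GeV^-1, f_phi' ~ 23 and |F^0_{KK*}| ~ 0.43
# from the width formulas quoted in the text, assuming the total phi'(1680)
# width is saturated by K K* with four equal charge-state contributions.

alpha = 1.0 / 137.036
M_phip = 1.68          # GeV, phi'(1680) mass
m_Ks = 0.895           # GeV, K* mass
k_F = 0.463            # GeV, K/K* CM momentum
Gamma_tot = 0.150      # GeV, total phi' width (assumed all K K*)
Gamma_ee = 0.7e-6      # GeV, phi' electronic width

# Gamma(single charge state) = |G|^2 k_F^3 / (12 pi)
G = math.sqrt(12 * math.pi * (Gamma_tot / 4) / k_F**3)   # GeV^-1
# Gamma_ee = (4 pi / 3) alpha^2 M / f^2
f = math.sqrt((4 * math.pi / 3) * alpha**2 * M_phip / Gamma_ee)
# F^0 = G m_{K*} / f^(s), with 1/f^(s) ~ -3/f (ideal mixing)
F0 = G * m_Ks / (-f / 3)

assert abs(G - 3.8) < 0.1
assert abs(f - 23.0) < 0.5
assert abs(abs(F0) - 0.43) < 0.02
```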
The factor of $-3$ appearing in the relation between $f_{\phi'}$ and $f_{\phi'}^{(s)}$ assumes ideal mixing \[see Eq. (\[fvrel\])\]. Allowing for a small deviation $|\epsilon|< 10^\circ$ does not change our results appreciably, especially since the $\omega'$ is not observed to decay to $KK\pi$. Substituting these results into Eq. (\[fkks\_zero\]) yields $F_{KK^*}^0=\ffkks(0)=0.43$, to be compared with the value $\ffkks(0)=1.84$ used in the loop calculation. We emphasize that the latter value results from assuming only the $\omega$ and $\phi$ contribute to $\ffkks(t)$, whereas the former is obtained when [*only*]{} the $\phi'$ is included. Depending on the relative phase of the $(\rho\omega\phi)$ and $(\rho\omega\phi)'$ contributions in $e^+e^-\to KK\pi$, the $\phi'$ will either increase or decrease the point-like value (1.84) for this form factor by about 25%. At the $KK^*$ threshold, the $\phi'$ contribution to $\ffkks$ is about half as large as that from the $\phi$, but becomes nearly five times larger in the vicinity of $t=m_{\phi'}^2$. For purposes of estimating the $t$-dependence of $\ffkks$ in the region $t>t_0$, then, inclusion of only the $\phi'$ appears to be a reasonable approximation[^7]. We note in passing that our estimate of the $\phi'$ contribution carries an uncertainty of 25% or more, as the experimental values for $\Gamma(\phi'\to KK\pi, e^+e^-)$ carry experimental errors of $\geq 25\%$ [@Cle94]. The implications of $e^+e^-$ data for $F_{K^*}^{(s)}(t)$ are less clear. To our knowledge, there exist no annihilation data giving $K^*K^*$ branching ratios. In $e^+e^-\to KK\pi\pi$ ($1.4\leq \sqrt{s}\leq 2.18$), for example, the $K\pi$ invariant mass distribution is consistent with production of only one $K^*$ per event [@Cor82]. Consequently, the data cannot be used to infer a $K^*$ EM or strangeness form factor for $t>t_0$, and we must rely on a model.
Given the evidence for $\phi'$ dominance of $\ffkks(t)$ and for $\phi$ dominance of $F_K^{(s)}(t)$ as well as the absence of experimental observation of any $0^-(1^{--})$ $s\bar{s}$ mesons with mass $\geq 2 m_{K^*}$, it is natural to assume that the $t$-dependence of $F_{K^*}^{(s)}(t)$ is governed by the tails of the known $s\bar{s}$ vector mesons. For simplicity, we include only one $s\bar{s}$ resonance – either the $\phi$ or the $\phi'$ – using the form of Eq. (\[fk\_vdm\]). The normalization $F_{K^*}^0=|Q_{K^*}|$. In the DR results displayed in Table I, we quote a range of values, the limits of which correspond to using either the $\phi$ or $\phi'$. A more realistic parameterization of $F_{K^*}^{(s)}(t)$ is likely to include some linear combination of $\phi$ and $\phi'$ poles, as well as small contributions from the $\omega$ and $\omega'$. Existing information does not permit us to determine this linear combination. Consequently, we use the ranges appearing in Table I to estimate the uncertainty in the $K^*K^*$ contribution associated with lack of knowledge of the $K^*$ strangeness form factor. For the intermediate hyperon form factors, we are aware of no electromagnetic data to provide guidance for the choice of $F_n^{(s)}(t)$. We therefore work in analogy with the proton EM form factors, since both $F_B^{(s)}(t)$ ($B=\Lambda,\ \Sigma$) and $F_\sst{PROTON}^\sst{EM}(t)$ involve matrix elements of vector currents having unit conserved charge in the states of interest. Consequently, we adopt the standard dipole form factor for the Dirac strangeness form factors of the intermediate hyperons. Since the corresponding strange magnetic couplings are unknown, we omit magnetic form factors altogether. Because the resulting contributions to the strangeness moments are generally small compared to the mesonic contributions, we do not expect the uncertainty associated with $F_B^{(s)}(t)$ to be problematic. Under these assumptions, our calculation proceeds as follows. 
The spectral functions entering Eqs. (\[disp1\]) have the general form $${\hbox{Im}}\, F(t) = |A_{J=1}^n(t)|\ \big|\fns(t)\big|\ (1+\gamma_n)\ , \label{genform}$$ where $A_{J=1}^n$ is the appropriate combination of $J=1$ partial waves for the process $n\to N\bar{N}$ and $\gamma_n$ is a correction arising from the difference in phases between the amplitude $A_{J=1}^n$ and $\fns$ [@Mus97b]. This correction can vary between $-2$ and 0 and depends on $t$. At present, we are unable to determine $\gamma_n$ for the intermediate states considered here, and set $\gamma_n=0$ to obtain an upper bound. To compute the $A_{J=1}^n(t)$ in Born approximation, we calculate the imaginary parts of the diagrams (a) and (b) in Fig. 1 assuming point-like strangeness form factors, $\fns(t)\equiv 1$. We neglect the hyperon-nucleon mass difference and take $m_\sst{Y}=\mn$. The seagull diagrams do not have an imaginary part, so we obtain no contributions from diagrams 1c. Furthermore, from Eq. (\[spec\_t\]) the individual contributions are manifestly gauge invariant in this approach. We calculate the imaginary parts of the corresponding diagrams with cutting rules [@cutru] and insert them into the dispersion relations Eqs. (\[disp1\]). To obtain the imaginary parts it is convenient to consider the crossed $t$-channel matrix element $\bra{N(p);\bar{N}(\pbar)}\bar{s} \gamma_\mu s\ket{0}$. The generic form of such a diagram is shown in Fig. 2. The different choices for the internal lines I, II, and III are shown in Table II. The equivalent of the previous kaon loop result is recovered if the internal lines are chosen as in cases 1 and 2. In the following, we outline our calculation for the cases 3 - 5. In cases 3 and 4, both kaons have been replaced by $K^*$ vector mesons, while one kaon and one $K^*$ contribute in case 5. We choose to work in the center-of-momentum (CM) frame of the nucleon-antinucleon pair, where $q=(\omega,\vec{0})$.
The loop diagrams lead to a physical reaction for $t \geq 4 \mns$, which is the minimal energy required for the creation of a $\bar{N}N$-pair, and we have $p'=(\omega/2,\vec{p'})$ and $p=(\omega/2,-\vec{p'})$ with $p_t=|\vec{p'}|=\sqrt{t/4-\mns}$. We define the contribution of a particular Feynman diagram with vertex function $\Gamma_\mu$ as $$\label{vert} {\cal M}^{(i)}_\mu= -i\, \bar{u}(p') \Gamma^{(i)}_\mu v(p)\,.$$ These vertex functions are then multiplied by the strangeness form factor $|F^{(s)}(t)_{VDM}|$ from above as indicated by Eq. (\[genform\]). Our choice for the momenta of the internal lines is indicated in Fig. 2. For the cases 3 - 5 we obtain the vertex functions shown in Appendix B. The imaginary part of $\Gamma_\mu^{(i)}$ is always finite; hence, the divergences of the $d^4k$ integrals are of no consequence. The vertex functions $\Gamma^\mu$ have branch cuts on the real axis for $ t \geq (m_I+m_{II})^2$. Their real part is continuous, such that the discontinuity associated with the cut is reflected only in the imaginary part. In the CM-frame of the nucleon and antinucleon, we have to calculate $$\label{discon} {\hbox{Im}}\,\Gamma^\mu = \frac{1}{2\,i}\Delta\Gamma^\mu = \frac{1}{2\,i}\lim_{\delta \to 0} \left(\Gamma^\mu(\omega+i\delta)-\Gamma^\mu(\omega- i\delta)\right) \,.$$ In particular, we obtain the discontinuity $\Delta\Gamma^\mu$ using the Cutkosky rules [@cutru] by cutting the lines I and II, i.e. by replacing the propagators of these lines by $\delta$ functions, $$\label{cuma} \frac{1}{p^2 - m^2 + i\varepsilon} \longrightarrow -2\,\pi\,i\,\theta(p_0)\, \delta(p^2 - m^2)\; .$$ As a consequence, the discontinuity arises when the particles I and II in Fig. 2 are on-shell. Due to the delta functions, the $d^4 k$ integration covers only a finite part of the $k$ space, leading to a finite value of the integral. Next we write $d^4k$ as $dk_0\, k^2 dk \,d\Omega_k$ and use the delta functions to carry out the $dk_0$ and $dk$ integrations.
Moreover, the $d\Omega_k$ integration involves only $x$, the cosine of the angle between $\vec{k}$ and $\vec{p'}$. The denominator of the remaining propagator acquires the structure $z + x$, where $z$ depends on the particles internal to the loop: with the on-shell energy and momentum of the cut meson line I, $k_0=(t+m_I^2-m_{II}^2)/2\sqrt{t}$ and $q_t=\sqrt{k_0^2-m_I^2}$, the propagator of the exchanged line III gives $$z = {m_N^2+m_I^2-m_{III}^2-\sqrt{t}\,k_0\over 2\,p_t\,q_t}\ ,$$ evaluated with the masses appropriate to cases 3 - 5; for equal meson masses (cases 3 and 4) this reduces to $k_0=\sqrt{t}/2$ and $q_t=\sqrt{t/4-m_{K^*}^2}$. Finally, ${\hbox{Im}}\,\Gamma_\mu$ can be expressed through Legendre functions of the second kind, and, using the relation $$\label{f_zer} {\hbox{Im}} \,\Gamma_\mu = \gamma_\mu {\hbox{Im}}\,F_1 +i \frac{\sigma_{\mu\nu}}{2m} q^\nu {\hbox{Im}}\, F_2 \; ,$$ the contributions to the imaginary parts of the Dirac and Pauli form factors for $t \geq 4\,\mns$, respectively, can be identified. The emerging spectral functions are valid for $t \geq 4\,\mns$. The dispersion integrals, however, start at $t_0=(m_I + m_{II})^2$, with $m_I$ and $m_{II}$ the masses of the loop particles I and II, respectively. Consequently, the imaginary parts of the diagrams with two internal meson lines have to be analytically continued into the unphysical region $(m_I + m_{II})^2 \leq t < 4\,\mns$, by replacing the momentum $p_t = \sqrt{t/4 -\mns}$ by $i\,p_{-} = i\sqrt{\mns -t/4}$. Similarly, the variables $z$ become complex ($z \to i \xi$), and the Legendre functions of the second kind must be analytically continued as well. Inserting now the imaginary parts and their analytical continuations in the unphysical region into the dispersion relations of Eq. (\[disp1\]), we obtain the $KK^*$ and $K^*K^*$ contributions to the strangeness form factors of the nucleon. In particular, the dispersion relations for the $K^*$ loop contributions to the strangeness radius and magnetic moment read $$\begin{aligned} \label{rhosi} \langle r^2_s \rangle_D &=&{6\over\pi}\int_{t_0}^\infty dt {{\hbox{Im}}\, \FOS(t)\over t^2} \\ \label{musi} \mu^{(s)}&=& {1\over\pi} \int_{t_0}^\infty dt {{\hbox{Im}}\, \FTS(t)\over t}\,,\end{aligned}$$ where $\langle r^2_s\rangle_D$ is related to the Sachs radius via Eq.
(\[sachsdirac\]). For most of the intermediate states considered here, the dispersion integrals in Eqs. (\[rhosi\], \[musi\]) converge when a non-pointlike form for the $F_n^{(s)}(t)$ is employed. However, the tensor $K^*NB$ ($B=\Lambda, \Sigma$) coupling renders the $K^{*}K^*$ dispersion integral divergent even when the VDM form factor is included. To regulate this integral, we note that the unitarity of the S-matrix implies that the $N\bar{N}\to K^{*}K^{*}$ amplitude is bounded in magnitude for scattering in the physical region, $t> 4m_N^2$. The Born approximation for this amplitude does not respect this boundedness property, signalling the importance of higher-order rescattering corrections [@Mus97b]. At present, since we wish only to obtain an estimate for the $K^{*}$ contributions, we replace the $A_{J=1}^n(t>4 m_N^2)$ by its value at the physical threshold, $A_{J=1}^n(t=4 m_N^2)$. We make the same replacement in the integrals for the $KK^{*}$ intermediate state. This procedure leads to a crude upper bound on the contribution to the integrals from the integration region $t> 4 m_N^2$. The results of the DR estimates of the various contributions are quoted in Table I. The DR results for the $K\bar{K}$ contribution given in Table I were obtained using the rigorous unitarity bound. We stress that the $K^*$ results give rough upper bounds on the various contributions, not only because of the boundedness of the strong amplitudes but also because the phase difference correction, $\gamma_n$, is not known. We also do not compute the total contributions from the various states, as we cannot presently determine their relative phases. Only in the one-loop calculation of the previous section are the relative phases fixed by the model.
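The cutting-rule plus dispersion-relation procedure used in this section can be validated in miniature on a scalar bubble, for which both the direct Feynman-parameter integral and the dispersion integral over the Cutkosky discontinuity ${\rm Im}\,B(t')=\pi\sqrt{1-4m^2/t'}$ are elementary (a toy with unit masses, not the physical amplitudes):

```python
import math

# Scalar-bubble toy: the subtracted amplitude (unit internal masses)
#   B(t) = -Int_0^1 dx ln(1 - t x(1-x)),   B(0) = 0,
# has the cut discontinuity Im B(t') = pi sqrt(1 - 4/t') for t' > 4,
# exactly what the Cutkosky replacement gives. The once-subtracted DR
# must then reproduce B(t) at a spacelike point.

t = -2.0          # spacelike test point
n = 100_000
h = 1.0 / n

# direct Feynman-parameter evaluation (midpoint rule)
direct = -h * sum(math.log(1.0 - t * (i + 0.5) * h * (1.0 - (i + 0.5) * h))
                  for i in range(n))

# dispersion integral (t/pi) Int_4^inf pi sqrt(1-4/t')/(t'(t'-t)) dt';
# the substitution t' = 4/s maps [4, inf) onto (0, 1] and gives, for t < 0,
#   t * Int_0^1 sqrt(1-s) / (4 + |t| s) ds
disp = t * h * sum(math.sqrt(1.0 - (i + 0.5) * h) / (4.0 + abs(t) * (i + 0.5) * h)
                   for i in range(n))

assert abs(direct - disp) < 1e-4
```

Both evaluations agree (analytically, $B(-2)=2-\sqrt{3}\,\ln(2+\sqrt{3})\approx-0.281$), which is the one-loop/DR equivalence invoked in the text, here in a setting with no tensor couplings and hence no convergence problems.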
Discussion and Conclusions {#disc}
===========================

The results shown in Table I illustrate the two primary conclusions of our analysis: (i) contributions from higher mass intermediate states to the strangeness moments are not necessarily small compared with those from the lightest “OZI allowed" state $K\bar{K}$; (ii) estimating these higher mass contributions can entail a significant degree of theoretical uncertainty. In the one-loop model, the $K^*$ contributions can be as much as an order of magnitude larger than those from the kaon loop. The origin of this result can be traced to two factors: the tensor coupling of the $N\Lambda K^*$ vertex is much larger than the $N \Lambda K$ coupling, and the cut-off of the Bonn form factor involving the $K^*$ is about twice as large as that involving the kaon ($\Lambda_K = 1.2$ GeV). In the case of the former, omitting the tensor coupling reduces the contribution to the strangeness radius by a factor of five to ten and yields a near exact cancellation between the $KK$, $KK^{*}$, and $K^{*}K^{*}$ contributions. In the case of the strange magnetic moment, the large $K^{*}K^{*}$ and $KK^{*}$ contributions drop by two orders of magnitude when $\kappa$ is set to zero. The effect of the larger cut-off is particularly emphasized in graphs which contain derivative ([*i.e.*]{} tensor) couplings of the $K^*$. These couplings bring in additional powers of the loop momentum $k$ and the corresponding loop integrals therefore receive larger contributions from $k$ of the order of the cut-off. However, the importance of loop momenta above $\sim$2 GeV points to weaknesses of the one-loop approximation. As we discuss in more detail below, the large $K^*K\Lambda$ and $K^*K^*\Lambda$ contributions (1b) appear to result from unphysical, unrealistically large values of the integrand for large loop momenta. Physically realistic contributions from these intermediate states are likely to be much smaller.
In fact, the DR contributions from the $KK^*$ and $K^*K^*$ states are significantly smaller in magnitude than those generated in the loop model, though they are still comparable to, or larger than, the $K\bar{K}$ contribution. The reduction in the magnitude of these contributions from the loop model estimate reflects two factors: the boundedness of the $n\to N\bar{N}$ scattering amplitude in the physical region and the presence of more realistic, non-pointlike $\fns(t)$. Although we have only implemented the boundedness crudely for the $KK^{*}$ and $K^{*}K^{*}$ states, the requirement that the partial waves are bounded in the physical region ($t>4\mns$) is a rigorous one, following from the unitarity of the S-matrix. Since a one-loop calculation is equivalent to a DR in which the $\fns(t)$ are taken to be pointlike and the $A_{J=1}^n$ computed in the Born approximation, the one-loop results do not respect the boundedness requirement. The presence of hadronic form factors \[Eqs. (\[ff\])\] does not remedy this violation since they preserve the on-shell form for the $n\to N\bar{N}$ amplitudes. In the $K\bar{K}$ case, the unitarity violation of the one-loop calculation was shown to be a serious one [@Mus97b]. For the intermediate states containing a $K^*$, this violation appears to be all the more serious, as a comparison of the DR and loop results suggests. The tensor coupling of the $K^*$ to baryons weights the $K^*K\to N\bar{N}$ and $K^*K^*\to N\bar{N}$ amplitudes more strongly in the physical region, relative to the un-physical region ($t_0\leq t\leq 4\mns$), than in the $K\bar{K}\to N\bar{N}$ case. Consequently, the physical region contributes a substantial fraction of the entries (1b) for the $K^*K$ and $K^*K^*$ states (80% of the total in the $K^*K^*$ case) – even after the imposition of a crude bound on the $A_{J=1}^n$ and inclusion of non-pointlike $F_K^{(s)}(t)$. 
Had we not imposed even our rough bound, the $K^*K^*$ contribution to $\langle r_s^2\rangle_D$, for example, would have been five times larger. We conclude that the large contributions to the strangeness moments resulting from the one-loop model are not physically realistic. We emphasize that the DR calculation given here – though containing more physical information than the one-loop model – remains incomplete. A rigorous unitarity bound for the $K^*K$ and $K^*K^*$ amplitudes remains to be implemented, as has been done in the $K\bar{K}$ case. More importantly, the impact of higher order (in $g$) rescattering corrections and possible resonance effects in the $A_{J=1}^n(t_0\leq t\leq 4\mns)$ must also be estimated. In the $K\bar{K}$ case, these effects significantly enhance the $\langle r_s^2\rangle$ contribution over the entry $KK\Lambda$ (1b) in Table I [@Mrm97c]. This enhancement arises primarily from a near threshold $\phi(1020)$-resonance in the $K\bar{K}\to N\bar{N}$ amplitude. Similarly, we expect inclusion of $K^*K$ and $K^*K^*$ rescattering and $\phi'$ resonance effects in the $A_{J=1}^n$ to modify the $K^*K$ and $K^*K^*$ entries in Table I. Unfortunately, sufficient $KK\pi\to N\bar{N}$ (or $KN\to K\pi N$) and $KK\pi\pi\to N\bar{N}$ ($KN\to KN\pi\pi$ [*etc.*]{}) data do not presently exist to afford a model-independent determination of these effects. Given that higher mass contributions to the strangeness moments need not be small compared to that from the $K\bar{K}$, it is desirable to reduce the theoretical uncertainty in the former as much as possible. The $K^*K^*\Lambda$ (1b) entry hints at the level of this uncertainty. Our “reasonable range” for this contribution allows for about a factor of four to seven variation, which follows from the choice of different, but reasonable, $K^*$ strangeness form factors.
Based on our previous study of the $K\bar{K}$ contribution, as well as the behavior of the scattering amplitudes in the physical region, we may reasonably expect a similar level of uncertainty associated with the presently unknown rescattering and resonance effects in the $A_{J=1}^{K^*K,\ K^*K^*}$. To summarize, we have estimated $K^*K$ and $K^*K^*$ contributions to the nucleon strangeness moments, using two approaches which complement the quark model calculation of Ref. [@Gei97]. Our results confirm the conclusions reached in that work that higher mass hadronic states can be as important as the $K\bar{K}$ state and that a calculation of the strangeness moments based on a truncation in $\Delta E$ is not reliable. Similarly, we illustrate the significant theoretical ambiguities involved in estimating these higher mass contributions – particularly those associated with effects going beyond ${\cal O}(g^2)$ and with the intermediate state strangeness form factors. In this study, we have taken the first steps toward including the latter in a realistic way. We find that inclusion of physically reasonable parameterizations of the $\fns(t)$ can appreciably affect the $K^*K$ and $K^*K^*$ contributions. Even here, however, our efforts are limited by a lack of existing EM data. In the case of higher-order and resonance effects in the strong amplitudes, it should be evident that simple models which do not account for them can produce physically unrealistic estimates of the higher mass intermediate state contributions. Clearly, more sophisticated approaches are needed in order to understand how $s\bar{s}$ pairs live as virtual hadronic states. We would like to thank D. Drechsel and N. Isgur for useful discussions. This work has been supported in part by FAPESP and CNPq. M.N. would like to thank the Institute for Nuclear Theory at the University of Washington for its hospitality and H.F. acknowledges an HCM grant from the European Union and a DFG habilitation fellowship. 
MJR-M has been supported in part under U.S. Department of Energy contracts \# DE-FG06-90ER40561 and \# DE-AC05-84ER40150 and under a National Science Foundation Young Investigator Award. HWH has been supported by the Deutsche Forschungsgemeinschaft (SFB 201) and the German Academic Exchange Service (Doktorandenstipendium HSP III/ AUFE). Vertex Functions: Loops ======================= In the following appendices we list the explicit expressions for the one-loop diagrams considered in Section \[ext\]. They are numbered as in the figures: (1a) for $M=K^{\ast}$ and $B=B'=\Lambda,\Sigma$; (1b) for $M=M'=K^{\ast}$ and $B=B'=\Lambda,\Sigma$; (1b) for $M=K$, $M'=K^{\ast}$, and $B=B'=\Lambda,\Sigma$; (1c) for $M=K^{\ast}$ and $B=B'=\Lambda,\Sigma$. $$\begin{aligned} \Gamma^{(1a)}_\mu(p^\prime,p)& =& ig^2_v Q_B \int \frac{d^4k} {(2\pi)^4} (F(k^2))^2 D^{\alpha\beta}(k)\left(\gamma_\alpha +i{\kappa\over2m_N} \sigma_{\alpha\nu}k^\nu\right) S(p^\prime-k) \gamma_\mu\times \nonumber\\*[7.2pt] &&S(p-k) \left(\gamma_\beta-i{\kappa\over2m_N}\sigma_{\beta\gamma}k^\gamma \right) \; , \label{1a}\end{aligned}$$ $$\begin{aligned} \Gamma^{(1b)}_\mu(p^\prime,p)& =&- ig^2_v Q_{K^*} \int \frac{d^4k}{(2\pi)^4} F((k+q)^2)F(k^2) D^{\alpha\lambda}(k+q) D^{\sigma\beta}(k)\left(\gamma_\alpha + \right. \nonumber\\*[7.2pt] &+&\left.i{\kappa\over2m_N} \sigma_{\alpha\nu}(k+q)^\nu\right) [(2k+q)_\mu \, g_{\sigma\lambda}-(k+q)_\sigma g_{\lambda\mu}-k_\lambda g_{\sigma\mu}]\times \nonumber\\*[7.2pt] &&S(p-k)\left(\gamma_\beta-i{\kappa\over2m_N}\sigma_{\beta\gamma} k^\gamma\right) \; ,\;{\mbox{for $M=M^\prime=K^*$}} \nonumber\\*[7.2pt] & =&-{g_vg_{ps}F_{K^*K}^{(s)}(0)\over m_{K^*}} \epsilon_{\mu\nu\lambda\alpha}\int \frac{d^4k}{(2\pi)^4}\left\{F((k+q)^2)F_K (k^2)D^{\alpha\beta}(k+q)\times\right. 
\nonumber\\*[7.2pt] &&\Delta(k^2)(k+q)^\nu k^\lambda \left(\gamma_\beta +i{\kappa\over2m_N} \sigma_{\beta\delta}(k+q)^\delta\right)S(p-k)\gamma_5+ \nonumber\\*[7.2pt] &+&F(k^2)F_K((k+q)^2)D^{\alpha\beta}(k)\Delta((k+q)^2)k^\nu (k+q)^\lambda \gamma_5 \times \nonumber\\*[7.2pt] &&\left.S(p-k)\left(\gamma_\beta -i{\kappa\over2m_N}\sigma_{\beta\delta} k^\delta\right)\right\}\; ,\;{\mbox{for $M=K\;,M^\prime=K^*$}} \label{1b}\end{aligned}$$ $$\begin{aligned} \Gamma^{(1c)}_\mu(p^\prime,p)& =& g^2_v Q_{K^*} \int \frac{d^4k} {(2\pi)^4} F(k^2) D^{\alpha\beta}(k) \left\{i \left[\frac{ (q+2k)_\mu}{ (q+k)^2-k^2} \left(F(k^2)\, - F((k+q)^2)\right) \times \right.\right. \nonumber\\*[7.2pt] & & \left(\gamma_\alpha +i{\kappa\over2m_N}\sigma_{\alpha\nu}k^\nu\right) S(p-k)\left(\gamma_\beta-i{\kappa\over2m_N}\sigma_{\beta\gamma}k^\gamma\right) - \frac{ (q-2k)_\mu}{ (q-k)^2-k^2} (F(k^2)+ \nonumber\\*[7.2pt] &&\left.-F((k-q)^2))\left(\gamma_\alpha +i{\kappa\over2m_N}\sigma_{\alpha\nu} k^\nu\right) S(p^\prime-k)\left(\gamma_\beta-i{\kappa\over2m_N}\sigma_{\beta\gamma}k^\gamma \right) \right] \; + \nonumber\\*[7.2pt] &+&\;{\kappa\over2m_N}\left[F((k+q)^2)\sigma_{\alpha\mu} S(p-k)\left(\gamma_\beta-i{\kappa\over2m_N}\sigma_{\beta\gamma}k^\gamma \right)\right. + \nonumber\\*[7.2pt] &-&\left.\left.F((k-q)^2)\left(\gamma_\alpha +i{\kappa\over2m_N} \sigma_{\alpha\nu}k^\nu \right)S(p^\prime-k)\sigma_{\beta\mu}\right]\right\} \; , \label{1c}\end{aligned}$$ In the above equations we define $p^\prime=p+q$ and use the notation $D_{\alpha\beta}(k)=(-g_{\alpha\beta} + k_\alpha k_\beta/m_{K^*}^2)(k^2-m_{K^*}^2+i\epsilon)^{-1}$ for the $K^*$ propagator, $\Delta(k^2)=(k^2-m_K^2+i\epsilon)^{-1}$ for the kaon propagator, $S(p-k) = (p\kern-.5em\slash- k\kern- .5em\slash-m_B+ i\epsilon)^{-1}$ for the hyperon, $B$, propagator with mass $m_\Lambda=1116\MeV$, $m_\Sigma=1193\MeV$ and strangeness charge $Q_B =1$. 
Vertex Functions: Dispersion Calculation
========================================

Here, we display the vertex functions for the dispersion relation calculation of Section III. We require the product of propagator denominators and $|F^{(s)}(t)_{VDM}|$ for the cases 3-5. This product is abbreviated by $$\label{denab} {\cal D}_3 = \left\{[(k-q/2)^2 - m_{I}^2 - i\epsilon]\,[(k+q/2)^2 - m_{II}^2 - i\epsilon]\,[(p'-k-q/2)^2 - m_{III}^2 - i\epsilon]\right\}^{-1} |F^{(s)}(t)_{VDM}|\,,$$ for case 3 and accordingly for cases 4 and 5. The vertex functions are labelled as in section III (Table I). We obtain: - Case 3 ($K^*K^*B$ 1a) : $$\begin{aligned} \Gamma_\mu^{(3)} &=& -iQ_B g_v^2\int\frac{d^4 k}{(2\pi)^4}\, (\gamma_\alpha +\frac{i \kappa }{2\mn}\sigma_{\alpha\nu} (p'-k-q/2)^\nu)\\ &{\hphantom{-}}&({/ \!\!\! k}+{/ \!\!\! q}/2+\mn) \gamma_\mu({/ \!\!\! k}-{/ \!\!\! q}/2+\mn) \nonumber\\ &{\hphantom{-}}&(\gamma_{\alpha'}-\frac{i \kappa}{2\mn} \sigma_{\alpha'\nu'}(p'-k-q/2)^{\nu'}) \nonumber \\ &{\hphantom{-}}& (g^{\alpha\alpha'}-(p'-k-q/2)^\alpha (p'-k-q/2)^{\alpha'}/m_\sst{K^\ast}^2)\, {\cal D}_3\; \nonumber\end{aligned}$$ - Case 4 ($K^*K^*B$ 1b) : $$\begin{aligned} \Gamma_\mu^{(4)} &=& -iQ_{K^*} g_v^2 \int\frac{d^4 k}{(2\pi)^4}\, (\gamma_{\beta'}+\frac{i \kappa}{2\mn}\sigma_{\beta'\nu} (k+q/2)^\nu)\\ &{\hphantom{-}}& (g^{\beta'\beta}-(k+q/2)^{\beta'}(k+q/2)^{\beta}/ m_\sst{K^*}^2) \nonumber \\ &{\hphantom{-}}& (g^{\alpha\alpha'}-(k-q/2)^\alpha (k-q/2)^{\alpha'}/m_\sst{K^*}^2) \,({/ \!\!\! p'}-{/ \!\!\! k}-{/ \!\!\!
q}/2+\mn) \nonumber \\ &{\hphantom{-}}&(2k_\mu g_{\beta\alpha} -g_{\beta\mu}(k+q/2)_\alpha-g_{\alpha\mu}(k-q/2)_\beta) \nonumber\\ &{\hphantom{-}}&(\gamma_{\alpha'}-\frac{i \kappa}{2\mn}\sigma_{\alpha'\nu'} (k-q/2)^{\nu'}){\cal D}_4\; \nonumber\end{aligned}$$ - Case 5 ($KK^*B$ 1b) : $$\begin{aligned} \Gamma_\mu^{(5)} &=&-2 g_{ps}g_v \frac{F_{K^*K}^{(s)}(0)}{m_\sst{K^*}} \int\frac{d^4 k}{(2\pi)^4}\, (\gamma_{\beta'}+\frac{i\kappa}{2\mn}\sigma_{\beta'\nu} (k+q/2)^\nu)\\ & & (g^{\beta'\beta}-(k+q/2)^{\beta'}(k+q/2)^{\beta}/m_\sst{K^*}^2) \nonumber \\ & &\epsilon_{\sigma\beta\rho\mu} (k+q/2)^\sigma q^\rho ({/ \!\!\! p'}-{/ \!\!\! k}-{/ \!\!\! q}/2+\mn) \gamma_5 {\cal D}_5\; \nonumber\end{aligned}$$ [99]{} P. Geiger and N. Isgur, (1990) 1595. P. Geiger and N. Isgur, (1991) 799; (1991) 1066; (1993) 5050; P. Geiger, (1994) 6003. D. B. Kaplan and A. Manohar, (1988) 527. T. P. Cheng, (1976) 2161; J. Gasser, H. Leutwyler, and M. E. Sainio, (1986) 1051, (1991) 252. EMC Collaboration, J. Ashman [*et al.*]{}, (1989)1; E142 Collaboration, P. L. Anthony [*et al.*]{}, (1993) 959; SMC Collaboration, B. Adeva [*et al.*]{}, (1993) 53; SMC Collaboration, D. Adams [*et al.*]{}, (1994) 399; E143 Collaboration, K. Abe [*et al.*]{}, (1995) 346. L. A. Ahrens [*et al.*]{}, (1987) 785. B. Mueller [*et al.*]{}, [*Phys Rev. Lett.*]{} [**78**]{} (1997) 3824; MIT-Bates Report No. 94-11, M. Pitt and E.J. Beise, spokespersons. Mainz-MAMI Report No. A4/1-93, D. von Harrach, spokesperson. TJNAF Report No. PR-91-017, D.H. Beck, spokesperson; TJNAF Report No. PR-91-004, E.J. Beise, spokesperson; TJNAF Report No. PR-91-010, J.M. Finn and P.A. Souder, spokespersons. D. B. Leinweber, (1996) 5115; K.-F. Liu, U. of Kentucky preprint UK/95-11, 1995; S.J. Dong, K.F. Liu, and A.G. Williams, \[hep-ph/9712483\]. N. W. Park, J. Schechter, and H. Weigel, (1991) 869; S.-T. Hong and B.-Y. Park, (1993) 525; S. C. Phatak and S. Sahu, (1994) 11; W. Melnitchouk and M. Malheiro, [*Phys. Rev.*]{} [**C55**]{} (1997) 431. 
R. L. Jaffe, [*Phys. Lett.*]{} [**B229**]{} (1989) 275; H.-W. Hammer, Ulf-G. Mei[ß]{}ner and D. Drechsel, [*Phys. Lett.*]{} [**B367**]{} (1996) 323; H. Forkel, [*Prog. Part. Nucl. Phys.*]{} [**36**]{} (1996) 229; [*Phys. Rev.*]{} [**C56**]{} (1997) 510; M. J. Musolf, Eleventh Student Workshop on Electromagnetic Interactions, Bosen, Germany, 1994 (unpublished). W. Koepf and E.M. Henley, [*Phys.Rev.*]{} [**C49**]{} (1994) 2219; W. Koepf, S.J. Pollock and E.M. Henley, [*Phys. Lett.*]{} [**B288**]{} (1992) 11; M.J. Musolf and M. Burkardt, [*Z. Phys.*]{} [**C61**]{} (1994) 433; T.D. Cohen, H. Forkel and M. Nielsen, [*Phys. Lett.*]{} [**B316**]{} (1993) 1; H. Forkel, M. Nielsen, X. Jin and T.D. Cohen, [*Phys. Rev.*]{} [**C50**]{} (1994) 3108. P. Geiger and N. Isgur, [*Phys. Rev.*]{} [**D55**]{} (1997) 299. M. J. Ramsey-Musolf and H. Ito, (1997) 3066. M.J. Musolf, H.-W. Hammer, and D. Drechsel, [*Phys. Rev.*]{} [**D55**]{} (1997) 2741. M. J. Ramsey-Musolf and H.-W. Hammer, INT Report No. DOE/ER/40561-323-INT97-00-170 \[hep-ph/9705409\], to appear in . H.-W. Hammer and M.J. Ramsey-Musolf, (1998) 5. B. Holzenkamp, K. Holinde and J. Speth, [*Nucl. Phys.*]{} [**A500**]{} (1989) 485. Particle Data Group, Review of Particle Physics, (1996) 1. K. Ohta, [*Phys. Rev.*]{} [**D35**]{} (1987) 785. S. Wang and M.K. Banerjee, [*Phys. Rev.*]{} [**C54**]{} (1996) 2883. J.L. Goity, M.J. Musolf, [*Phys. Rev.*]{} [**C53**]{} (1996) 399. P. Jain et al., [*Phys. Rev.*]{} [**D37**]{} (1988) 3252. B. Delcourt [*et al.*]{}, (1981) 257; F. Mane [*et al.*]{}, (1981) 261; F. Felicetti and Y. Srivastava, (1981) 227. F. Mane [*et al.*]{}, (1982) 178. J. Buon [*et al.*]{}, (1982) 221. A. B. Clegg and A. Donnachie, (1994) 455. A. Cordier [*et al.*]{}, (1982) 335. D. Bisello [*et al.*]{}, (1991) 227. R.E. Cutkosky, [*J. Math. Phys.*]{} [**1**]{} (1960) 429; see also C. Itzykson and J.B. Zuber, Quantum Field Theory, McGraw-Hill, New York, 1980.
  Contribution     $\langle r_s^2 \rangle_D$ (fm$^2$) loop   $|\langle r_s^2 \rangle_D|$ (fm$^2$) DR   $\mu_s$ loop   $|\mu_s|$ DR
  --------------- ----------------------------------------- ----------------------------------------- -------------- ------------------
  $KKB$ 1a         0.006                                     0.001                                     $-0.107$       0.023
  $KKB$ 1b         $-0.009$                                  0.036                                     $-0.078$       0.143
  $KKB$ 1c         $-0.004$                                  0                                         $-0.069$       0
  $KKB$ tot        $-0.007$                                                                            $-0.24$
  $K^*K^*B$ 1a     0.075                                     0.001                                     $-2.283$       0.053
  $K^*K^*B$ 1b     $-0.038$                                  $0.003\to 0.012$                          $-2.343$       $0.059\to 0.408$
  $K^*K^*B$ 1c     $-0.007$                                  0                                         0.499          0
  $K^*K^*B$ tot    0.030                                                                               $-4.127$
  $KK^*B$ 1b       0.078                                     0.035                                     1.015          0.425
  total            0.101                                                                               $-3.352$

  : \[kstartab4\] Intermediate state contributions to the strange magnetic moment $\mu_s$ and the electric strangeness radius $\langle r_s^2 \rangle_D $. The contributions are labelled according to the diagrams in Fig. 1 and the intermediate state particles.

  Case   I                    II                   III
  ------ -------------------- -------------------- --------------------
  1      $K$                  $K$                  $\Lambda,\;\Sigma$
  2      $\Lambda,\;\Sigma$   $\Lambda,\;\Sigma$   $K$
  3      $\Lambda,\;\Sigma$   $\Lambda,\;\Sigma$   $K^*$
  4      $K^*$                $K^*$                $\Lambda,\;\Sigma$
  5      $K$                  $K^*$                $\Lambda,\;\Sigma$

  : \[kstartab1\] Particles assigned to the internal lines in the loop diagram of Fig. 2.

[^1]: National Science Foundation Young Investigator

[^2]: Theoretical uncertainties associated with SU(3) breaking qualify the conclusions drawn from deep inelastic scattering experiments, however.

[^3]: As noted in [@ohta; @Mrm97a] and elsewhere this procedure is not unique since the Ward-Takahashi identity does not restrict the transverse part of the vertex.

[^4]: The validity of this assumption is discussed in more detail in the following section.

[^5]: Note also that the small SU(3) values for the $\Sigma K N$ couplings [@hol89] lead to a strong suppression of the contributions from $\Sigma K$ intermediate states [@Had].
This argument does not affect, however, the $\Sigma^* K$ and $\Sigma^* K^*$ contributions. [^6]: The equivalence holds only when the hadronic form factors of Eq. (\[ff\]) are set to unity. [^7]: A more sophisticated treatment, including the tails of the $\phi$ and $\omega$, would – as in the purely EM case – affect the shape of the form factor near the $\phi'$ peak and the resultant $KK^*$ spectral function.
---
abstract: 'We investigate the intrinsic alignments of dark halo substructures with their host halo major-axis orientations both analytically and numerically. Analytically, we derive the probability density distribution of the angles between the minor axes of the substructures and the major axes of their host halos from physical principles, under the assumption that the substructure alignment on galaxy scale is a consequence of the tidal fields of the host halo gravitational potential. Numerically, we use a sample of four cluster-scale halos and their galaxy-scale substructures from recent high-resolution N-body simulations to measure the probability density distribution. We compare the numerical distribution with the analytic prediction, and find that the two results agree with each other very well. We conclude that our analytic model provides a quantitative physical explanation for the intrinsic alignment of dark halo substructures. We also discuss the possibility of discriminating our model from the anisotropic infall scenario by testing it against very large N-body simulations in the future.'
author:
- 'Jounghun Lee, Xi Kang, and Yipeng Jing'
title: The Intrinsic Alignment of Dark Halo Substructures
---

INTRODUCTION
============

The dark halo substructure has recently become one of the liveliest topics in cosmology. Although the standard cosmological paradigm based on the cold dark matter (CDM) concept generically predicts the presence of the substructure inside the dark matter halos, many questions associated with dark halo substructures remain to be answered. The intrinsic alignment effect of the dark halo substructure is one of those questions. There is plenty of observational evidence that the major axes of the brightest cluster galaxies (BCGs) have a strong tendency to be aligned with those of their host clusters [@sas68; @car-met80; @bin82; @str-pee85; @rhe-kat87; @wes89; @wes94; @pli94; @ful-etal99; @kim-etal02].
The most popular theory for the BCG alignment is the anisotropic infall scenario based on the standard hierarchical clustering model [@wes89]: The initial density field of CDM is web-like, interconnected by the primordial filaments [@bon87; @bon-etal96]. The gravitational collapse and merging to form structures occurs not in an isotropic way but in an anisotropic way along the large-scale filaments. Accordingly, the infall of materials into a cluster also occurs along the primordial filament, which will induce the alignment between the orientation of a host cluster and that of the BCG embedded in it. There are several reasons that the anisotropic infall theory became so popular: Being simple and intuitive, it fits very well into the cold dark matter paradigm. In addition, it has been supported by several numerical simulations [e.g., @wes-etal91; @van-van93; @dub98; @fal-etal02] which demonstrated that the gravitational infall and merging of materials indeed occurs along the filaments. Nevertheless, the theory is only qualitative and still incomplete. Recent observations indicate that not only the BCGs but also the less dominant cluster galaxies exhibit the alignment effect to a non-negligible degree [@pli-bas02; @pli-etal03; @per-kuh04]. In the anisotropic infall model, the substructure alignment is a primordial effect, and would get damped away quickly by the subsequent nonlinear processes such as the violent relaxation, the secondary infall, and so on [@qui-bin92; @cou96]. Therefore, it is very unlikely that the cluster galaxies other than the BCGs keep the primordial alignment effect till the present epoch [@pli-etal03]. Here, we propose that the initial tidal interaction between the subhalos and the host halo is responsible for the observed intrinsic alignment of the cluster galaxies.
ANALYTICAL PREDICTIONS
======================

When a subhalo forms inside a host halo, it acquires the angular momentum ${\bf L}=(L_{i})$ due to the tidal shear field ${\bf T}=(T_{ij})$ generated by the gravitational potential of the host halo $(\Psi)$: $T_{ij} \equiv \partial_{i}\partial_{j}\Psi$. @lee-pen00 [@lee-pen01] proposed the following formula to quantify the mutual correlations between ${\bf T}$ and ${\bf L}$: $$\label{eqn:spin} \langle L_{i}L_{j} | \hat{\bf T}\rangle = \frac{1+c}{3}\delta_{ij} - c\hat{T}_{ik}\hat{T}_{kj},$$ where $c \in [0,1]$ is a correlation parameter to quantify the strength of the correlation between $\hat{\bf T}$ and ${\bf L}$, and $\hat{\bf T} =(\hat{T_{ij}})$ is a unit traceless tidal shear tensor defined as $\hat{T}_{ij} \equiv \tilde{T}_{ij}/\vert\tilde{\bf T}\vert$ with $ \tilde{T}_{ij} \equiv T_{ij} - {\rm Tr}({\bf T})\delta_{ij}/3$, and ${\bf L} = (L_{i})$ is a rescaled but not a unit angular momentum. If we replace the rescaled angular momentum by the unit angular momentum, $\hat{\bf L} \equiv {\bf L}/|{\bf L}|$, in equation (\[eqn:spin\]), then the correlation parameter $c$ is reduced by a factor of $3/5$. Note here that the LHS of equation (\[eqn:spin\]) represents a [*conditional*]{} ensemble average of $L_{i}L_{j}$ provided that the unit traceless tidal shear tensor is given as $\hat{T}_{ij}$. For the detailed explanations of equation (\[eqn:spin\]), see Appendix A in @lee-pen01. It is naturally expected that $c$ decreases with time as the correlation between $\hat{\bf T}$ and ${\bf L}$ must decrease after the moment of the turn-around due to the subsequent nonlinear process. @lee-pen02 found $c \sim 0.3$ at present epoch by analyzing the data from the Tully Galaxy Catalog and the Point Source Catalog Redshift Survey (in their original work, they used a reduced correlation parameter $a \equiv 3c/5$ and found $a =0.18$).
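As a mechanical check of equation (\[eqn:spin\]), the short sketch below (an illustrative aside, not part of the original analysis; the rotation angles are arbitrary assumptions) constructs $M_{ij}\equiv\langle L_iL_j|\hat{\bf T}\rangle$ for a unit traceless shear tensor and verifies that ${\rm Tr}\,M = (1+c) - c\,{\rm Tr}(\hat{\bf T}\hat{\bf T}) = 1$ for any $c$, with the isotropic limit $M_{ij}=\delta_{ij}/3$ at $c=0$.

```python
import numpy as np

def unit_traceless_shear(angles=(0.7, 0.4)):
    """A unit traceless tidal tensor T_hat with eigenvalues
    (1/sqrt 2, 0, -1/sqrt 2), rotated by arbitrary (assumed) angles."""
    a, b = angles
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(b), -np.sin(b)],
                   [0.0, np.sin(b),  np.cos(b)]])
    R = Rx @ Rz
    lam = np.diag([1.0 / np.sqrt(2.0), 0.0, -1.0 / np.sqrt(2.0)])
    return R @ lam @ R.T

def covariance(T_hat, c):
    """M_ij = <L_i L_j | T_hat> = (1+c)/3 delta_ij - c (T_hat T_hat)_ij."""
    return (1.0 + c) / 3.0 * np.eye(3) - c * T_hat @ T_hat

T_hat = unit_traceless_shear()
assert abs(np.trace(T_hat)) < 1e-12            # traceless
assert abs(np.sum(T_hat ** 2) - 1.0) < 1e-12   # unit norm: sum lambda_i^2 = 1

for c in (0.0, 0.3, 0.9):
    M = covariance(T_hat, c)
    assert abs(np.trace(M) - 1.0) < 1e-12      # Tr M = (1+c) - c = 1
np.testing.assert_allclose(covariance(T_hat, 0.0), np.eye(3) / 3.0)
```

In the principal-axis frame of $\hat{\bf T}$ the covariance is diagonal with entries $(1+c-3c\hat{\lambda}_i^2)/3$, which is exactly where the denominators in the angular distribution below come from.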
Strictly speaking, equation (\[eqn:spin\]) holds only if $\hat{\bf T}$ and ${\bf L}$ are defined at the same positions [@lee-pen00; @lee-pen01]. Here, they are not: $\hat{\bf T}$ and ${\bf L}$ are defined at the centers of mass of the host halo and the subhalo, respectively. For simplicity, here we just assume that equation (\[eqn:spin\]) still holds, ignoring the separation between the centers of mass of the subhalos and that of the host halo. The distribution of ${\bf L}$ under the influence of the tidal field is often regarded as Gaussian [@cat-the96; @lee-pen01]: $$\label{eqn:ldis} P({\bf L}) = \frac{1}{[(2\pi)^3 {\rm det}(M)]^{1/2}} \exp\left[-\frac{L_{i}(M^{-1})_{ij}L_{j}}{2}\right],$$ where the covariance matrix $M_{ij} \equiv \langle L_{i}L_{j}| \hat{\bf T}\rangle$ is related to the tidal shear field $\hat{\bf T}$ by equation (\[eqn:spin\]). In the principal axis frame of $\hat{\bf T}$, let us express ${\bf L}$ in terms of the spherical coordinates: ${\bf L} = (L\sin\theta\cos\phi,L\sin\theta\sin\phi,L\cos\theta)$ where $L \equiv |{\bf L}|$, and $\theta$ and $\phi$ are the polar and the azimuthal angles of ${\bf L}$. Note that the polar angle $\theta$ represents the angle between the direction of ${\bf L}$ of the subhalo and the minor principal axis of $\hat{\bf T}$ of its host. The probability density distribution of the cosines of the polar angle $\theta$ can be obtained by integrating out equation (\[eqn:ldis\]) over $L$ and $\phi$ [@lee04]: $$\begin{aligned} p(\cos\theta) &=& \frac{1}{2\pi}\prod_{i=1}^{3} \left(1+c-3c\hat{\lambda}^{2}_{i}\right)^{-\frac{1}{2}}\times \nonumber \\ &&\int_{0}^{2\pi} \left(\frac{\sin^{2}\theta\cos^{2}\phi}{1+c-3c\hat{\lambda}^{2}_{1}} + \frac{\sin^{2}\theta\sin^{2}\phi}{1+c-3c\hat{\lambda}^{2}_{2}} + \frac{\cos^{2}\theta} {1+c-3c\hat{\lambda}^{2}_{3}}\right)^{-\frac{3}{2}}d\phi.
\label{eqn:vtdis}\end{aligned}$$ Here the polar angle $\theta$ is forced to be in the range of $[0,\pi/2]$ satisfying $\int_{0}^{\pi/2}p(\theta)\sin\theta d\theta = 1$, since we care about the relative spatial orientation of the subhalo axis, but not its sign. Here the three $\hat{\lambda}_{i}$’s ($i=1,2,3$) are the eigenvalues of $\hat{\bf T}$ in a decreasing order satisfying the following two conditions: (i) $\sum_{i}\hat{\lambda}_{i}=0$; (ii) $\sum_{i}\hat{\lambda}^{2}_{i}=1$. If ${\bf T}$ is a Gaussian random field which is true in the linear regime, one can show that $\hat{\lambda}_{1} \approx -\hat{\lambda}_{3} \approx 1/\sqrt{2}$ and $\hat{\lambda}_{2} \approx 0$ [@lee-pen01]. We adopt the following two assumptions: (i) On cluster scale, the principal axes of the inertia shape tensor $I_{ij}$ of a host halo are aligned with those of its tidal shear tensor $T_{ij}$ with the eigenvalues being in an opposite order. In other words, the major principal axis of $I_{ij}$ is the minor principal axis of $T_{ij}$, and vice versa. Note that in @lee04, it was erroneously stated that the major axis of $I_{ij}$ is the major axis of $T_{ij}$ (Trujillo, Carretero, & Juncosa 2004 in private communication); (ii) The minor axis of a subhalo is in the direction of its angular momentum. A justification of the first assumption is given by the Zel’dovich approximation [@zel70] which predicts a perfect alignment between the principal axis of $I_{ij}$ and that of $T_{ij}$. Since the cluster-size halos are believed to be in the quasi-linear regime where the Zel’dovich approximation is valid, the first assumption should provide a good approximation to the reality. Moreover, recent N-body simulations indeed demonstrated that $I_{ij}$ and $T_{ij}$ are quite strongly correlated [@lee-pen00; @por-etal02].
Regarding the second assumption, there is an established theory that the spin axis of an ellipsoidal object in the gravitational tidal field is well correlated with its minor axis [@bin-tre87], which was also confirmed by several N-body simulations [e.g., @fal-etal02]. Now that the minor principal axis of $\hat{\bf T}$ is the major axis of the host halo, and ${\bf L}$ is aligned with the minor axis of the subhalo, the polar angle $\theta$ in equation (\[eqn:vtdis\]) actually equals [*the angle between the minor axis of the subhalo and the major axis of the host halo*]{}. Putting $\hat{\lambda}_{1}=1/\sqrt{2}$, $\hat{\lambda}_{2}=0$, and $\hat{\lambda}_{3}=-1/\sqrt{2}$, we simplify equation (\[eqn:vtdis\]) into $$\label{eqn:tdis} p(\cos\theta) = \frac{1}{2\pi}(1+c)\sqrt{1 - \frac{c}{2}} \int_{0}^{2\pi}\left[1 + c\left(1 - \frac{3}{2}\sin^{2}\theta\sin^{2}\phi \right)\right]^{-3/2}d\phi.$$ In the asymptotic limit of $c \ll 1$, equation (\[eqn:tdis\]) can be further simplified into the following closed form: $$\label{eqn:tdisa} p(\cos\theta) = \left(1 - \frac{3c}{4}\right) + \frac{9c}{8}\sin^{2}\theta\,.$$ Equations (\[eqn:tdis\]) and (\[eqn:tdisa\]) imply that $p(\theta)$ increases as $\theta$ increases. That is, the minor axis of a subhalo has a strong propensity to be [*anti-aligned*]{} with the major axis of its host halo. Hence, it explains the observed alignment effect between the subhalo and the host halo major axes as a consequence of the intrinsic anti-alignment between the subhalo minor axis and the host halo major axis. The value of $c$ in equations (\[eqn:tdis\]) and (\[eqn:tdisa\]) should depend on the distance from the host halo center ($r$), the subhalo mass ($m$) and redshift ($z$): $c = c(r,m,z)$. What one can naturally expect is that $c$ should decrease with $r$ since the tidal interaction must be strongest in the inner part of the host halo, and that $c$ should increase with $m$ and $z$ since the alignment effect gets reduced in the nonlinear regime.
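Equations (\[eqn:tdis\]) and (\[eqn:tdisa\]) are easy to evaluate numerically. The sketch below (an illustrative cross-check, not the analysis code of this Letter) computes $p(\cos\theta)$ by a midpoint quadrature over $\phi$, and checks the normalization $\int_0^{1}p(\cos\theta)\,d\cos\theta=1$, the small-$c$ formula, and the anti-alignment trend $p(0)>p(1)$.

```python
import math

def p_full(u, c, n_phi=512):
    """p(cos theta) of Eq. (eqn:tdis); u = cos(theta), midpoint rule in phi."""
    s2 = 1.0 - u * u            # sin^2(theta)
    pref = (1.0 + c) * math.sqrt(1.0 - 0.5 * c) / (2.0 * math.pi)
    h = 2.0 * math.pi / n_phi
    tot = sum((1.0 + c * (1.0 - 1.5 * s2 * math.sin((i + 0.5) * h) ** 2)) ** -1.5
              for i in range(n_phi))
    return pref * tot * h

def p_small_c(u, c):
    """Small-c approximation of Eq. (eqn:tdisa)."""
    return (1.0 - 0.75 * c) + 1.125 * c * (1.0 - u * u)

c = 0.3
# normalization over cos(theta) in [0, 1]
n = 400
norm = sum(p_full((i + 0.5) / n, c) for i in range(n)) / n
assert abs(norm - 1.0) < 1e-3

# anti-alignment: the density is largest at theta = pi/2 (cos theta = 0)
assert p_full(0.0, c) > p_full(1.0, c)

# at cos(theta) = 1 the integral closes to sqrt((1 - c/2)/(1 + c))
assert abs(p_full(1.0, c) - math.sqrt((1.0 - 0.5 * c) / (1.0 + c))) < 1e-9

# for small c the approximation (eqn:tdisa) is accurate
for u in (0.0, 0.3, 0.6, 0.9, 1.0):
    assert abs(p_full(u, 0.05) - p_small_c(u, 0.05)) < 0.01
```

Note that $p(\cos\theta=1)=\sqrt{(1-c/2)/(1+c)}<1$, so the density at perfect alignment drops below the uniform value as $c$ grows.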
Unfortunately, it will be very difficult to find the functional form of $c(r,z,m)$ as $c$ contains all the nonlinear information of galaxy evolution. We do not attempt to find $c(r, z,m)$ here since it is beyond the scope of this Letter. Instead, we simply assume that $c$ is a constant, and determine the value of $c$ empirically by fitting equation (\[eqn:tdis\]) to the numerical results in $\S 3$.

NUMERICAL EVIDENCES
===================

The data we use in this Letter are the high-resolution halo simulations of @jin-sut00. First they selected dark matter halos from their previous cosmological P$^{3}$M N-body simulations with $256^{3}$ particles in a $100h^{-1}$Mpc cube [@jin-sut98]. The halos were identified using the standard friends-of-friends (FOF) algorithm, among which four halos on cluster-mass scales (with mass around $5-10 \times 10^{14}h^{-1}M_{\odot}$) were then re-simulated using the nested-grid P$^{3}$M code which was designed to simulate high-resolution halos. The force resolution is typically $0.4\%$ of the virial radius, and each halo is represented by about $2 \times 10^{6}$ particles within the virial radius. We then use the [SUBFIND]{} routine of @spr-etal01 to identify the disjoint self-bound subhalos within these halos, and include those subhalos containing more than 10 particles in the analysis. These simulations adopted the “concordance" $\Lambda$CDM cosmology with $\Omega_{0}=0.3$, $\Omega_{\Lambda,0}=0.7$, and $h=0.7$. Using this numerical data, we first compute the inertia tensors as $I_{ij} \equiv \Sigma_{\alpha} m_{\alpha}x_{\alpha,i}x_{\alpha,j}$ for each host halo and its subhalos in their respective center-of-mass frames. Then, we find the directions of the eigenvectors corresponding to the largest and the smallest eigenvalues of the host halo and the subhalo inertia tensors, respectively, by rotating the inertia tensors into the principal axes frame, and determine the major axis of each host halo and the minor axes of its subhalos.
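The axis determination described above can be sketched in a few lines. The example below is a toy illustration (the synthetic particle cloud and its orientation are assumptions, not simulation data): it builds $I_{ij}=\sum_\alpha m_\alpha x_{\alpha,i}x_{\alpha,j}$ in the center-of-mass frame and recovers the major axis as the eigenvector of the largest eigenvalue.

```python
import numpy as np

def inertia_tensor(pos, mass=None):
    """I_ij = sum_a m_a x_{a,i} x_{a,j} in the center-of-mass frame."""
    if mass is None:
        mass = np.ones(len(pos))
    x = pos - np.average(pos, axis=0, weights=mass)
    return (mass[:, None] * x).T @ x

def major_minor_axes(I):
    """Unit eigenvectors of the largest / smallest eigenvalue of I."""
    vals, vecs = np.linalg.eigh(I)     # eigenvalues in ascending order
    return vecs[:, -1], vecs[:, 0]     # (major axis, minor axis)

# synthetic "halo": a particle cloud stretched along an assumed direction e
e = np.array([1.0, 2.0, 2.0]) / 3.0                  # unit vector
t = np.linspace(-1.0, 1.0, 201)
perp = np.array([2.0, -1.0, 0.0]) / np.sqrt(5.0)     # perpendicular unit vector
pos = np.outer(t, e) + 0.05 * np.outer(np.cos(37.0 * t), perp)

major, minor = major_minor_axes(inertia_tensor(pos))
cos_theta = abs(np.dot(major, e))    # the alignment statistic used in the text
assert cos_theta > 0.99
```

The absolute value mirrors the restriction of $\theta$ to $[0,\pi/2]$, since only the relative spatial orientation of an axis is meaningful, not its sign.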
Then, we measure the cosines of the angles, $\theta$, between the major axis of the host halo and the minor axes of its subhalos by computing $\cos\theta \equiv \hat{\bf e}_{h}\cdot \hat{\bf e}_{s}$ where $\hat{\bf e}_{h}$ and $\hat{\bf e}_{s}$ represent the major axis of the host halo and the minor axes of its subhalos, respectively. Finally, we find the probability density distribution, $p(\cos\theta)$, by counting the number density of the subhalos. When computing the probability density distribution, we use all the subhalos in the host halo linked by the FOF algorithm. We perform the above procedure at four different redshifts: $z=0,0.5,1$ and $1.5$. The total number of the subhalos $N_{tot}$ at each redshift is $8963$, $5686$, $2766$, and $1469$, respectively. Figure \[fig:dis1\] plots the final numerical distributions (solid dots) with the error bars. The error bar at each bin is the Poisson error expected for the case of no alignment, given as $1/\sqrt{N_{bin}-1}$ where $N_{bin}$ is the number of the subhalos in each bin. As one can see, the numerical distribution $p(\cos\theta)$ increases as $\theta$ increases, revealing that the minor axes of substructures indeed tend to be anti-aligned with the major axes of their host halos, as predicted by the analytic model (eq.\[\[eqn:tdis\]\]) of $\S 2$. Figure \[fig:dis1\] also plots the analytic predictions (solid line) and the approximation formula (dashed line) derived in $\S 2$. The horizontal dotted line represents the uniform distribution of $\cos\theta$ for the case of no alignment. We fit the analytic distributions to the numerical data points to determine the best-fit values of the correlation parameter $c$. We find $c = 0.28\pm 0.01, 0.36\pm 0.02, 0.41\pm 0.02, 0.45\pm 0.03$ at $z=0,0.5,1,1.5$, respectively.
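The axis measurement just described can be sketched in a few lines. The snippet below is our own toy illustration on synthetic particle clouds (not the simulation data), using the definition $I_{ij} \equiv \Sigma_{\alpha} m_{\alpha}x_{\alpha,i}x_{\alpha,j}$ from the text; all names are ours.

```python
import numpy as np

def principal_axis(positions, masses, which="major"):
    """Principal axis of I_ij = sum_a m_a x_{a,i} x_{a,j} in the
    center-of-mass frame: 'major' is the eigenvector of the largest
    eigenvalue, 'minor' that of the smallest."""
    x = positions - np.average(positions, axis=0, weights=masses)
    inertia = np.einsum("a,ai,aj->ij", masses, x, x)
    vals, vecs = np.linalg.eigh(inertia)      # eigenvalues in ascending order
    return vecs[:, -1] if which == "major" else vecs[:, 0]

# toy "host": a cloud elongated along x; toy "subhalo": a cloud whose
# short axis is also x, mimicking the anti-alignment discussed above
rng = np.random.default_rng(0)
host = rng.standard_normal((5000, 3)) * np.array([3.0, 1.0, 0.5])
sub = rng.standard_normal((1000, 3)) * np.array([0.3, 1.0, 2.0])

e_h = principal_axis(host, np.ones(len(host)), "major")  # host major axis
e_s = principal_axis(sub, np.ones(len(sub)), "minor")    # subhalo minor axis
cos_theta = abs(np.dot(e_h, e_s))
print(cos_theta)  # close to 1 for this construction
```

The absolute value reflects the fact that a principal axis is only defined up to a sign.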
The errors involved in the determination of $c$ are given as the standard deviation of $c$ for the case of no alignment effect, $\epsilon_{c} \equiv \sqrt{c^{2}/(N_{tot}-1)}$, where $N_{tot}$ is the number of all subhalos used to compute $c$. The value of $c$ increases with redshift $z$, as expected. In fact the value of $c=0.3$ gives quite a good fit, if not the best, not only at the present epoch of $z=0$ but also at all earlier epochs of $z=0.5,1,1.5$, which implies that the initially induced anti-alignment effect is more or less conserved, reflecting the fact that the directions of the subhalo angular momentum are fairly well conserved. To understand the dependence of the alignment effect on the subhalo mass, we derive the same probability distribution but using only the $30$ most massive subhalos in each cluster ($N_{tot}=120$ for each redshift), and find the corresponding best-fit values of $c$. We find $c=0.8\pm 0.11, 0.85\pm 0.11,0.9\pm 0.11,0.95\pm 0.11$ at $z=0,0.5,1,1.5$, respectively. Figure \[fig:dis2\] plots the results. The approximation formula (eq.\[\[eqn:tdisa\]\]) is excluded from this figure since the best-fit values of $c$ for this case are close to unity. Although the large error bars prevent us from making a quantitative statement, it is obvious that the anti-alignment effect is stronger for massive subhalos. SUMMARY AND DISCUSSION ====================== Although the currently popular anisotropic merging and infall scenario has provided a qualitative explanation for the BCG-cluster and cluster-cluster alignments [e.g., @hop-etal05], no previous approach based on this scenario was capable of making a quantitative prediction for the alignment effect of cluster galaxies other than BCGs with their host clusters.
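Schematically, the fit for $c$ amounts to a one-parameter least-squares problem. In the sketch below (our own illustration; `p_model` re-evaluates eq.\[\[eqn:tdis\]\] numerically) we generate a noise-free mock curve at $c=0.30$ and recover it with a simple grid scan, a stand-in for the actual fit to the binned simulation data.

```python
import math

def p_model(mu, c, nphi=200):
    """Equation ([eqn:tdis]), evaluated by the midpoint rule in phi."""
    s2 = 1.0 - mu * mu
    pref = (1.0 + c) * math.sqrt(1.0 - c / 2.0) / (2.0 * math.pi)
    h = 2.0 * math.pi / nphi
    return pref * h * sum(
        (1.0 + c * (1.0 - 1.5 * s2 * math.sin((k + 0.5) * h) ** 2)) ** -1.5
        for k in range(nphi))

def fit_c(mu_bins, p_data):
    """Best-fit c minimizing the summed squared residuals on a grid scan."""
    grid = [i / 200.0 for i in range(160)]   # c in [0, 0.795]
    return min(grid, key=lambda c: sum(
        (p_model(m, c) - p) ** 2 for m, p in zip(mu_bins, p_data)))

# noise-free mock "measurement" generated from the model itself at c = 0.30
mu_bins = [(k + 0.5) / 10 for k in range(10)]
p_data = [p_model(m, 0.30) for m in mu_bins]
print(fit_c(mu_bins, p_data))  # recovers 0.3
```

With real binned data one would weight each residual by the Poisson error of the bin; the noise-free version above only illustrates the mechanics of the scan.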
We constructed an analytic model in which the substructure alignment is a consequence of the tidal field of the host halo, at least on the scale of cluster galaxies, and predicted quantitatively the strength of the alignment effect by comparing the model with the results from recent high-resolution N-body simulations. However, it is worth noting that we have yet to completely rule out the anisotropic infall model. An ideal way to discriminate our analytic model from the anisotropic infall scenario would be to measure directly the correlation of the directions of the subhalo angular momentum with the host halo orientations. Unfortunately, it is very difficult to determine the directions of the subhalo angular momentum vectors in current simulations. Because the rotation speed of a dark halo in simulations is only a few percent of the virial motion of the particles, one needs more than $10^4$ particles to determine accurately the direction of the angular momentum vector of a subhalo with an average rotation speed. In current simulations the subhalos have far fewer particles than $10^4$. This is why we used the minor axes of the subhalos rather than the directions of the subhalo angular momentum vectors to investigate the intrinsic alignments of substructures. Nevertheless, the strong alignment between the halo minor axes and angular momentum vectors [@bin-tre87] indicates indirectly that the anti-alignments between the subhalo minor axes and the host halo major axes observed in our simulations are likely to be caused by the host halo tidal field, as our model predicts. Many N-body simulations [@bar-efs87; @dub92; @war-etal92; @bai-etal05] have already shown that dark matter halos rotate and that their angular momentum vectors are aligned with their minor axes. Furthermore, @bai-ste04 demonstrated clearly that there are good internal alignments between the halo minor axes and angular momentum vectors measured at different radii.
Therefore, the alignments between the halo angular momentum vectors and the minor axes are expected to hold even when the outer parts of the halos get disrupted as they fall into larger halos as substructures. Indeed, we ourselves check this effect in our simulations: we measure the angular momentum vectors of several very massive subhalos with more than $10^{4}$ particles, and find that the subhalos have [*non-zero*]{} angular momentum and that the subhalo minor axes are strongly aligned with the directions of their angular momentum: the cosines of all alignment angles turn out to be bigger than $0.6$. In any case, it will definitely be necessary to investigate directly the correlations of the directions of the subhalo angular momentum with the orientations of their host halos in the future with larger simulation data, where the number of particles belonging to subhalos should be large enough. Using larger simulation data, it will also be possible to determine the functional form of the correlation parameter $c(r,m,z)$. Our future work is in this direction. We thank the anonymous referee who helped us improve the original manuscript. J.L. wishes to thank the Shanghai Astronomical Observatory for warm hospitality during the workshop on Cosmology and Galaxy Formation where this collaboration began. J.L. is supported by the research grant of the Korea Institute for Advanced Study. X.K. and Y.P.J. are partly supported by NKBRSF (G19990754), by NSFC (Nos. 10125314, 10373012), and by Shanghai Key Projects in Basic Research (No. 04jc14079)
Adams, M. T., Strom, K. M., & Strom, S. F. 1980, , 238, 445
Bailin, J., & Steinmetz, M. 2004, , in press (astro-ph/0408163)
Bailin, J., et al. 2005, , in press (astro-ph/0505523)
Bardeen, J. M., Bond, J. R., Kaiser, N., & Szalay, A. S. 1986, , 304, 15
Barnes, J., & Efstathiou, G. 1987, , 319, 575
Binggeli, B. 1982, , 107, 338
Binney, J., & Tremaine, S. 1987, Galactic Dynamics (Princeton: Princeton Univ. Press)
Bond, J. R. 1987, in Nearly Normal Galaxies, ed. S. Faber (New York: Springer), 388
Bond, J. R., Kofman, L., & Pogosyan, D. 1996, Nature, 380, 603
Carter, D., & Metcalfe, J. 1980, , 191, 325
Catelan, P., & Theuns, T. 1996, , 282, 436
Coutts, A. 1996, , 278, 87
Dubinski, J. 1992, , 401, 441
Dubinski, J. 1998, , 502, 141
Faltenbacher, A., Kerscher, M., Gottlöber, S., & Müller, M. 2002, , 395, 1
Fuller, T. M., West, M. J., & Bridges, T. J. 1999, , 519, 22
Hopkins, P. F., Bahcall, N., & Bode, P. 2005, , in press
Jing, Y. P., & Suto, Y. 1998, ApJ, 503, L9
Jing, Y. P., & Suto, Y. 2000, ApJ, 529, L69
Kim, R. S. J., Annis, J., Strauss, M. A., Lupton, R. H., Bahcall, N. A., Gunn, J. E., Kepner, J. V., & Postman, M. 2002, ASP Conf. Ser., 268, 393
Klypin, A., Gottlöber, S., Kravtsov, A. V., & Khokhlov, A. M. 1999, , 516, 530
Lee, J., & Pen, U. L. 2000, , 532, 5
Lee, J., & Pen, U. L. 2001, , 555, 106
Lee, J., & Pen, U. L. 2002, , 567, L111
Lee, J. 2004, , 614, L1
Pen, U. L., Lee, J., & Seljak, U. 2000, , 543, L107
Pereira, M. J., & Kuhn, J. R. 2004, preprint \[astro-ph/0411710\]
Plionis, M., Barrow, J. D., & Frenk, C. S. 1991, , 249, 662
Plionis, M. 1994, ApJS, 95, 401
Plionis, M., & Basilakos, S. 2002, , 329, L47
Plionis, M., Benoist, C., Maurogordato, S., Ferrari, C., & Basilakos, S. 2003, , 594, 153
Porciani, C., Dekel, A., & Hoffman, Y. 2002, , 332, 325
Quinn, T., & Binney, J. 1992, , 255, 729
Rhee, G. F. R. N., & Katgert, P. 1987, , 183, 217
Sastry, G. N. 1968, PASP, 83, 313
Springel, V., White, S. D. M., Tormen, G., & Kauffmann, G. 2001, , 328, 726
Struble, M. F., & Peebles, P. J. E. 1985, , 99, 743
van Haarlem, M., & van de Weygaert, R. 1993, , 418, 544
van Kampen, E., & Rhee, G. F. R. N. 1990, , 237, 283
Warren, M. S., Quinn, P. J., Salmon, J. K., & Zurek, W. H. 1992, , 399, 405
West, M. J. 1989, , 347, 610
West, M. J. 1994, , 268, 79
West, M. J., Villumsen, C., & Dekel, A. 1991, , 369, 287
West, M. J., Jones, C., & Forman, W. 1995, , 451, L5
Zel’dovich, Y. B. 1970, A&A, 5, 84
--- abstract: 'We consider the $(1+3)-$dimensional Einstein equations with negative cosmological constant coupled to a spherically-symmetric, massless scalar field and study perturbations around the Anti-de-Sitter solution. We derive the resonant systems, pick out vanishing secular terms and discuss issues related to small divisors. Most importantly, we rigorously establish (sharp, in most of the cases) asymptotic behaviour for all the interaction coefficients. The latter is based on uniform estimates for the eigenfunctions associated to the linearized operator and their first order derivatives as well as on oscillating integrals.' address: 'Laboratory Jacques-Louis Lions (LJLL), University Pierre and Marie Curie (Paris 6), 4 place Jussieu, 75252 Paris, France' author: - Athanasios Chatzikaleas nocite: '[@*]' title: 'On the Fourier analysis of the Einstein-Klein-Gordon system: Growth and Decay of the Fourier constants' --- Introduction ============ Einstein-Klein-Gordon equation ------------------------------ In this note, we are interested in the Einstein-Klein-Gordon equation. This model consists of the Einstein equations in vacuum coupled to the scalar wave equation, $$\begin{aligned} \label{arxiko} \begin{dcases} R_{ \alpha \beta} -\frac{1}{2} g _{\alpha \beta} R + \Lambda g_{\alpha \beta} = 0,\\ \Box_{g} \phi = 0, \end{dcases}\end{aligned}$$ where $\Box_{g}:=\nabla ^{\alpha} \nabla _{\alpha}$ is the wave operator and $(\mathcal{M},g)$ stands for the underlying Lorentzian manifold. In particular, we consider the Anti-de-Sitter (AdS) spacetime $\left( \mathcal{M},g_{\text{AdS}} \right)$ which is the unique maximally symmetric solution to the Einstein equations in vacuum with negative cosmological constant $\Lambda$.
In local coordinates, $$\begin{aligned} (\tau,r,\omega) \in \mathcal{M}:=\mathbb{R} \times [0,\infty) \times \mathbb{S}^{d-1} \end{aligned}$$ this solution reads $$\begin{aligned} g_{\text{AdS}} (\tau,r,\omega) = -\left( 1+\left( \frac{r}{l} \right)^2 \right) d \tau ^2 + \frac{dr^2}{1+ \left( \frac{r}{l} \right)^2 } + r^2 d \omega ^2,\end{aligned}$$ where $$\begin{aligned} l^2 = -\frac{d(d-1)}{2\Lambda}.\end{aligned}$$ We can compactify the spacetime and introduce a new set of coordinates $$\begin{aligned} (\tau,r) \longmapsto \left( t , x \right) = \left( \frac{\tau}{l}, \arctan \left( \frac{r}{l} \right)\right)\end{aligned}$$ which now vary within a compact region $$\begin{aligned} (t,x,\omega) \in \mathcal{\widetilde{M}}:= \mathbb{R} \times \left[0,\frac{\pi}{2} \right) \times \mathbb{S}^{d-1}.\end{aligned}$$ Now, the AdS solution takes the form $$\begin{aligned} g_{\text{AdS}}(t,x,\omega) = \frac{l^2}{\cos ^2 (x)} \left( - dt^2 + dx^2 + \sin ^2 (x) d \omega ^2 \right) \end{aligned}$$ implying that it is conformal to half of the Einstein static universe. One can see that null geodesics reach the conformal spatial infinity $\mathcal{I}:=\{x=\frac{\pi}{2}\}$ in finite time even though the spatial distance from any point $(t,x)$ with $0 \leq x < \frac{\pi}{2}$ to $\mathcal{I}$ is infinite. A particular characteristic of the AdS solution, as well as of all asymptotically AdS (aAdS) spacetimes (that is, spacetimes that approach the AdS solution at infinity fast enough and share the same conformal boundary), is that the conformal spatial infinity is a time-like cylinder $\mathbb{R} \times \mathbb{S}^{d-1}$.
Consequently, the AdS metric is not globally hyperbolic and in order to study the evolution of the field $\phi$ on the underlying manifold $(\mathcal{\widetilde{M}},g_{AdS})$ one has to prescribe boundary conditions also on $\mathcal{I}$ in addition to the initial data on the $\{t=0\}$ slice.\ \ It is well known that the Minkowski space is a ground state among asymptotically flat spacetimes [@MR612249]. The AdS spacetime also enjoys a similar variational characterization due to the positive energy theorem which states that for solutions to Einstein’s equations with matter $$\begin{aligned} \begin{dcases} R_{ \alpha \beta} -\frac{1}{2} g _{\alpha \beta} R + \Lambda g_{\alpha \beta} = 8 \pi \left ( \partial _{\alpha} \phi \partial _{\beta} \phi - \frac{1}{2} g_{\alpha \beta} (\partial \phi)^2 \right ),\\ \Box_{g} \phi = 0 \end{dcases}\end{aligned}$$ which are globally regular and satisfy a reasonable energy condition, the AdS space is a ground state among asymptotically AdS spacetimes [@MR701918; @MR626707]. As far as the initial-boundary value problem is concerned, Smulevici-Holzegel [@MR2913628] (for Dirichlet boundary conditions) and Warnick-Holzegel [@MR3369103] (for more general boundary conditions) proved its local well-posedness.\ \ Once the local well-posedness is established, an important question for the AdS solution (as for any ground state) is whether it is stable or not, meaning whether small perturbations of the solution on the $\{t = 0\}$ slice remain small for all future times or not. For the Minkowski spacetime such a question has been answered by Christodoulou-Klainerman [@MR1316662] and for the de-Sitter spacetime by Friedrich [@MR868737], who proved its stability.\ \ The main mechanism responsible for the stability of the Minkowski spacetime is the dissipation of energy by dispersion. In the case of the AdS solution, such a mechanism is no longer present.
For “reflective” boundary conditions on the conformal infinity, waves which start at any point inside the region $\{0\leq x < \frac{\pi}{2} \}$ and propagate outwards are reflected on $\mathcal{I}$ and return back into the region from which they started [@MR3205859]. Such boundary conditions are confining enough, forcing the AdS solution to act as a closed universe (in terms of its fields inside). Horowitz [@horowitz] relates this fact to the singularity theorem of Hawking-Penrose [@MR264959] (which states that closed universes are generically singular), suggesting that the AdS solution should be singular.\ \ Although the conjecture on the instability of the AdS spacetime was first announced by Dafermos [@DafermosTalk] and Dafermos-Holzegel [@DafermosHolzegel] in 2006, the first work in this direction was a numerical study of Bizoń-Rostworowski [@11043702]. In particular, Bizoń-Rostworowski [@11043702] considered the spherically symmetric Einstein massless scalar field equations with negative cosmological constant in $(1+3)-$dimensions and established strong numerical (as well as analytical) results which show that the AdS solution to the Einstein equations (although linearly stable) is nonlinearly unstable against the formation of a black hole under arbitrarily small and generic perturbations. In their work [@11043702], Bizoń-Rostworowski used specific Gaussian-type initial data and concluded that such initial data evolve to a wave which, as it propagates in time, quickly collapses and an apparent horizon appears. Furthermore, Dias-Horowitz-Santos [@MR2978943] considered pure gravity with a negative cosmological constant and provided additional support strengthening the evidence that the AdS spacetime might be nonlinearly unstable. Similar results have been obtained by Ja[ł]{}mu[ż]{}na-Rostworowski-Bizoń [@11084539] and Buchel-Lehner-Liebling [@12100890] for higher dimensions.
Furthermore, Choptuik [@Choptuik] also studied the mechanism of the spherically symmetric collapse of a scalar field with a general time- and radially-dependent metric and for several families of initial data.\ \ In addition, Bizoń-Rostworowski [@11043702] also conjectured that there may exist specific initial data (islands of stability) for which the evolution of small perturbations around the AdS solution remains globally regular in time. Furthermore, Maliborski-Rostworowski [@13033186] considered the spherically symmetric Einstein-massless scalar field equations with negative cosmological constant in $d + 1$ dimensions with $d\geq 2$ and provided reliable numerical evidence indicating that in fact time-periodic solutions may exist for non-generic initial data. They were able to construct these solutions using both nonlinear perturbative expansions and fully nonlinear numerical methods. Similar conjectures were made by Dias-Horowitz-Marolf-Santos [@MR3002881] who argued that many aAdS solutions are nonlinearly stable (including geons, boson stars, and black holes) and by Buchel-Liebling-Lehner [@13044166] who considered boson stars in global AdS spacetime and studied their stability. Furthermore, a rigorous proof of the instability of the AdS solution was given by Moschidis, who considered the Einstein-null dust system [@170408681] and the Einstein massless Vlasov system [@181204268].\ \ Finally, the AdS spacetime as well as aAdS spacetimes play an important role in theoretical physics due to the celebrated AdS/CFT correspondence [@0501128] which was brought to light by Maldacena [@MR1705508; @MR1633016]. Such a duality relates events that occur within a universe with a negative cosmological constant (AdS) to events in conformal field theories (CFT) and has important applications [@MR2551709; @09090518; @12100890; @0501128]. Spherical symmetric ansatz -------------------------- To make the problem tractable, we assume spherically symmetric metrics.
However, by Birkhoff’s theorem, spherically symmetric solutions to the Einstein equations in vacuum are static and therefore we add matter to generate dynamics. We consider the Einstein-Klein-Gordon equation for a self-gravitating massless scalar field, that is the wave equation coupled to the Einstein equations with matter, $$\begin{aligned} \begin{dcases} R_{ \alpha \beta} -\frac{1}{2} g _{\alpha \beta} R + \Lambda g_{\alpha \beta} = 8 \pi \left ( \partial _{\alpha} \phi \partial _{\beta} \phi - \frac{1}{2} g_{\alpha \beta} (\partial \phi)^2 \right ),\\ \Box_{g} \phi = 0. \end{dcases}\end{aligned}$$ For simplicity, we fix the spatial dimension $d=3$. Following the work of Bizoń-Rostworowski [@11043702] we parametrize the spacetime metric $g$ by the spherical symmetric ansatz $$\begin{aligned} \label{symmetricansatz} g(t,x,\omega) = \frac{l^2}{\cos ^2 (x)} \left( - \frac{ A(t,x)}{ e^{2\delta(t,x)}} dt^2 + \frac{1}{A(t,x)} dx^2+ \sin ^2 (x) d \omega ^2 \right),\end{aligned}$$ for $(t,x,\omega) \in \mathcal{\widetilde{M}}$. Under this ansatz the wave equation becomes $$\begin{aligned} \label{originalwaveequation} \partial_{t} \left( \frac{1}{A(t,x)e^{-\delta(t,x)}} \partial_{t} \phi(t,x) \right) = \frac{1}{\tan^2(x)} \partial_{x} \left( \tan^2(x) A(t,x)e^{-\delta(t,x)} \partial_{x} \phi(t,x) \right). 
\end{aligned}$$ We transform the second order partial differential equation for $\phi$ into a first order system by setting $$\begin{aligned} \Phi (t,x) = \partial_{x} \phi (t,x),~~~\Pi (t,x) = \frac{1}{A(t,x)e^{-\delta (t,x)} } \partial_{t} \phi(t,x).\end{aligned}$$ Then, (\[originalwaveequation\]) reads $$\begin{aligned} \begin{dcases} \partial_{t} \Phi(t,x) = \partial_{x} \left( A(t,x)e^{-\delta(t,x)} \Pi(t,x) \right), \\ \partial_{t} \Pi(t,x) = - { \savestack{\tmpbox}{\stretchto{ \scaleto{ \scalerel*[\widthof{\ensuremath{L}}]{\kern.1pt\mathchar"0362\kern.1pt} {\rule{0ex}{\textheight}} }{\textheight} }{2.4ex}} \stackon[-6.9pt]{L}{\tmpbox} } \left( A(t,x)e^{-\delta (t,x)} \Phi (t,x) \right), \end{dcases}\end{aligned}$$ where $$\begin{aligned} -{ \savestack{\tmpbox}{\stretchto{ \scaleto{ \scalerel*[\widthof{\ensuremath{L}}]{\kern.1pt\mathchar"0362\kern.1pt} {\rule{0ex}{\textheight}} }{\textheight} }{2.4ex}} \stackon[-6.9pt]{L}{\tmpbox} }[g](x):= \frac{1}{\tan^2(x)} \partial_{x} ( \tan^2(x) g(x)),\end{aligned}$$ coupled to the Einstein equations $$\begin{aligned} \begin{dcases} (1-A(t,x))e^{-\delta(t,x)}= \frac{\cos ^3 (x)}{\sin(x)} \int_{0}^{x} e^{-\delta(t,y)} \left(\Phi^2(t,y) + \Pi^2(t,y) \right) (\tan(y))^2 dy, \\ -\delta(t,x) = \int_{0}^{x} \left(\Phi^2(t,y) + \Pi^2(t,y) \right) \sin(y) \cos(y) dy.
\end{dcases}\end{aligned}$$ The linearized operator ----------------------- From the Einstein equations, one can derive an additional equation, namely the momentum constraint $$\begin{aligned} \partial_{t} A(t,x) = - 2 \sin(x)\cos(x) A(t,x) \partial_{x}\phi(t,x) \partial_{t}\phi(t,x)\end{aligned}$$ and now can be written as $$\begin{aligned} \partial_{t}^2 \phi(t,x) + L\left[\phi (t,x)\right] &= \frac{1}{2} \partial_{x} \left( \left( A(t,x)e^{-\delta(t,x)} \right)^2 \right) \partial_{x} \phi(t,x) \\ &- 2 \sin(x) \cos(x) \partial_{x} \phi(t,x) \left(\partial_{t} \phi(t,x) \right)^2 - \partial_{t} \delta(t,x) \partial_{t} \phi(t,x)\\ &+ \left(1- \left(A(t,x)e^{-\delta(t,x)} \right)^2 \right) L\left[\phi(t,x) \right]\end{aligned}$$ where $$\begin{aligned} -L[f](x):= \frac{1}{\tan^2(x)} \partial_{x} ( \tan^2(x) \partial_{x} f(x))\end{aligned}$$ is the operator which governs linearized perturbations of AdS solution. The solutions to the eigenvalue problem $L[f] = \omega^2 f$ subject to Dirichlet boundary conditions on the conformal boundary $\mathcal{I}=\{x=\frac{\pi}{2}\}$ fall into the hypergeometric class and hence can be found explicitly. For a rigorous definition of the spectrum, see the Appendix in the work of Bachelot [@MR2430631]. Specifically, the eigenvalues read $$\begin{aligned} \omega^2_{j}:=(3+2j)^2,~j=1,2,\dots\end{aligned}$$ and eigenfunctions are weighted Jacobi polynomials, $$\begin{aligned} e_{j}(x):=2\frac{\sqrt{j! (j+2)!}}{\Gamma (j+\frac{3}{2})} \cos^3(x) P^{\frac{1}{2},\frac{3}{2}}_{j}(\cos(2x)),\quad x \in \left[0,\frac{\pi}{2} \right], \quad j=0,1,\dots.\end{aligned}$$ For the definition, basic properties and an introduction to the Jacobi polynomials $P^{\alpha,\beta}_{j}$, see Chapter 4, page 48 in Szegö’s book [@MR0106295]. 
In addition, the linearized operator $L$ is self-adjoint with respect to the weighted inner product $$\begin{aligned} \label{innerproduct} (f|g):=\int _{0}^{\frac{\pi}{2}} f(x)g(x) \tan^{2}(x) dx.\end{aligned}$$ For the definition of the domain in which the linearized operator is self-adjoint, see also the Appendix in the work of Bachelot [@MR2430631]. Finally, note that the eigenvalues are strictly positive and hence the linear problem is stable. Main result and preliminaries ============================= We consider the spherically symmetric Einstein-massless scalar field equations with negative cosmological constant under the spherically symmetric ansatz , $$\begin{aligned} & \partial_{t} \Phi(t,x) = \partial_{x} \left( A(t,x)e^{-\delta(t,x)} \Pi(t,x) \right), \label{EKG1} \\ & \partial_{t} \Pi(t,x) = - { \savestack{\tmpbox}{\stretchto{ \scaleto{ \scalerel*[\widthof{\ensuremath{L}}]{\kern.1pt\mathchar"0362\kern.1pt} {\rule{0ex}{\textheight}} }{\textheight} }{2.4ex}} \stackon[-6.9pt]{L}{\tmpbox} } \left( A(t,x)e^{-\delta (t,x)} \Phi (t,x) \right), \label{EKG2}\\ & (1-A(t,x))e^{-\delta(t,x)}= \frac{\cos ^3 (x)}{\sin(x)} \int_{0}^{x} e^{-\delta(t,y)} \left(\Phi^2(t,y) + \Pi^2(t,y) \right) (\tan(y))^2 dy,\label{EKG3} \\ & \delta(t,x) = -\int_{0}^{x} \left(\Phi^2(t,y) + \Pi^2(t,y) \right) \sin(y) \cos(y) dy, \label{EKG4}\end{aligned}$$ and we are mainly interested in the asymptotic behaviour of the Fourier constants which appear in the analysis of perturbations around the AdS solution $(\Phi, \Pi, A, \delta)=(0,0,1,0)$. Statement of the main result ---------------------------- Specifically, we consider two types of perturbations. 
On the one hand, in light of recent work Maliborski-Rostworowski [@13033186], although the series may not converge, we seek a solution of the form $$\begin{aligned} &\Phi(t,x) =\sum_{\lambda=0}^{\infty} \psi_{2\lambda+1} (\tau,x)\epsilon^{2\lambda+1} = \psi_{1} (\tau,x)\epsilon + \psi_{3} (\tau,x)\epsilon^{3}+\psi_{5} (\tau,x)\epsilon^{5}+\dots,\label{series1} \\ & \Pi(t,x)=\sum_{\lambda=0}^{\infty} \sigma_{2\lambda+1} (\tau,x)\epsilon^{2\lambda+1}=\sigma_{1} (\tau,x)\epsilon + \sigma_{3} (\tau,x)\epsilon^{3}+\sigma_{5} (\tau,x)\epsilon^{5}+\dots, \label{series2} \\ & A(t,x) e^{-\delta(t,x)} =\sum_{\lambda=0}^{\infty} \xi_{2\lambda} (\tau,x)\epsilon^{2\lambda}= 1 +\xi_{2}(\tau,x) \epsilon^2+\xi_{4}(\tau,x) \epsilon^4+\dots,\label{series3} \\ & e^{-\delta(t,x)}=\sum_{\lambda=0}^{\infty} \zeta_{2\lambda} (\tau,x)\epsilon^{2\lambda}=1 +\zeta_{2}(\tau,x) \epsilon^2+\zeta_{4}(\tau,x) \epsilon^4+\dots \label{series4} \\ & \tau = \Omega_{\gamma} t, \quad \Omega_{\gamma} = \sum_{\lambda=0}^{\infty} \omega_{\gamma,2\lambda} \epsilon^{2\lambda} = \omega_{\gamma,0}+ \omega_{\gamma,2} \epsilon^{2}+\cdots,\label{series5}\end{aligned}$$ where $\psi_{2\lambda+1}$,$\sigma_{2\lambda+1}$,$\xi_{2\lambda}$ and $\zeta_{2\lambda}$ are all periodic in time. Here, $\psi_{1},\sigma_{1},\xi_{2},\zeta_{2},\omega_{\gamma,0},\gamma$ will be chosen later. Furthermore, with a slight abuse of notation, we use the same letters to denote the variables with respect to the $(\tau,x)$ and $(t,x)$. 
On the other hand, we still assume that $(\Phi,\Pi,A,\delta)$ are all close to the AdS solution $(0,0,1,0)$ but expand them using a finite sum $$\begin{aligned} & \Phi(t,x) = \Phi_{1}(\tau,x) \epsilon + \Psi(\tau,x) \epsilon ^3 \label{finitesum1} \\ &\Pi(t,x) = \Pi_{1}(\tau,x) \epsilon + \Sigma(\tau,x) \epsilon ^3 \label{finitesum2} \\ &A(t,x) = 1-A_{2}(\tau,x) \epsilon^2 - B(\tau,x) \epsilon^4 \label{finitesum3} \\ & e^{-\delta (t,x)} = 1 - \delta_{2} (\tau,x) \epsilon^2 - \Theta (\tau,x) \epsilon^4, \label{finitesum4} \\ & \tau = (\omega_{\gamma} + \epsilon^2 \theta_{\gamma} +\epsilon^4 \eta_{\gamma})t, \label{finitesum5}\end{aligned}$$ for some error terms $\Psi, \Sigma, B,\Theta,\eta_{\gamma}$ where $\Phi_{1},\Pi_{1},A_{2},\delta_{2}$ are all explicit periodic expressions in time and will be chosen later together with $\gamma$. We formulate our main result.

\[maintheorem\] We consider the perturbations around the AdS solution of the form

- Perturbation 1: the series –
- Perturbation 2: the finite sum –

and establish the asymptotic behaviour

- Perturbation 1: Proposition \[Result1\]
- Perturbation 2: Propositions \[A1\] – \[B2\]

for all the interaction coefficients (Fourier constants) which appear. Our results are sharp in most of the cases.

Our work is motivated by the work of Maliborski-Rostworowski [@13033186] who studied the existence of time-periodic solutions to the Einstein-Klein-Gordon equation as well as by the work of Hunik-Kostyra-Rostworowski [@200208393] who established interesting recurrence relations for the interaction coefficients for the $5-$dimensional Einstein equations in vacuum with negative cosmological constant within the cohomogeneity-two biaxial Bianchi IX ansatz. To place our results in the context of the physics literature, we refer the reader to [@Balasubramanian:2014cja; @Biasi:2018eaa; @Bizon:2015pfa; @Craps:2014vaa; @Craps:2014jwa; @Craps:2015iia; @Green:2015dsa].
Acknowledgments --------------- The author would like to express his sincere gratitude to Professor Jacques Smulevici for very useful communications, comments and insights. Also, the author gratefully acknowledges the support of the ERC grant 714408 GEOWAKI, under the European Union’s Horizon 2020 research and innovation program. Preliminaries ------------- As we will see, there are subspaces on which these Fourier constants decay and subspaces on which they grow. On the one hand, in order to establish the decay estimates, we will use

- the leading order terms of the eigenfunctions and their weighted derivatives (Lemma \[ClosedformulasFore\] and Remark \[lot\])
- the asymptotic behaviour of specific oscillatory integrals (Lemma \[OscillatoryIntegrals\])
- integration by parts (Lemma \[byparts1\] and Lemma \[byparts2\])

The second of these is based on

- Dirichlet-Kernel-type identities (Lemma \[Dirichlet\])
- carefully chosen anti-derivatives

On the other hand, for the growth estimates, we will only use

- Hölder’s inequality
- $L^{\infty}-$bounds for quantities related to the eigenfunctions (Lemma \[Linftyboundse\])

To begin with, we prove the first auxiliary result.
\[ClosedformulasFore\] For all $x \in [0,\frac{\pi}{2}]$ and $i=0,1,\dots$, we have $$\begin{aligned} & e_{i}(x) = \frac{2}{\sqrt{\pi}} \frac{ 1}{\sqrt{\omega_{i}^2-1}} \left( \omega_{i} \frac{\sin\left( \omega_{i} x \right)}{\tan(x)} - \cos \left( \omega_{i}x \right) \right), \\ & \frac{e_{i}^{\prime}(x)}{\omega_{i}} = \frac{2}{\sqrt{\pi}} \frac{ 1}{\sqrt{\omega_{i}^2-1}} \left( \omega_{i}\frac{\cos\left( \omega_{i} x \right)}{\tan(x)} - \frac{\sin \left( \omega_{i}x \right)}{\tan^2(x)} \right).\end{aligned}$$ Furthermore, both $\{e_{i}\}_{i=0}^{\infty}$ and $\{\frac{e_{i}^{\prime}}{\omega_{i}}\}_{i=0}^{\infty}$ form an orthogonal basis for $L^{2}\left( \left[0,\frac{\pi}{2} \right] \right)$ with respect to the weighted inner product (\[innerproduct\]), namely $$\begin{aligned} (e_{i}|e_{j}) = \delta_{i,j},\quad (e_{i}^{\prime}|e_{j}^{\prime}) = \omega_{i}^2 \delta_{i,j},\end{aligned}$$ for all $i,j=0,1,\dots$. Here, $\delta_{i,j}$ stands for the Kronecker delta. For the first part, we make use of the facts $$\begin{aligned} & P^{\frac{1}{2},\frac{3}{2}}_{j} (z) = \frac{2\Gamma(j+2)}{\Gamma(j+3)} \frac{d}{d z} P^{-\frac{1}{2},\frac{1}{2}}_{j+1} (z), \quad P^{-\frac{1}{2},\frac{1}{2}}_{j+1} (z) = \frac{1}{\sqrt{\pi}} \frac{\Gamma(j+\frac{3}{2})}{\Gamma(j+2) } \frac{\cos(\omega_{j} x)}{\cos(x)}\end{aligned}$$ where $x=x(z)=\frac{1}{2} \arccos(z)$ and $j=0,1,\dots$. These identities can be found in Chapter 4, page 60, equation (4.1.8) and Chapter 4, page 63, equation (4.21.7) in Szegö’s book [@MR0106295]. Then, the closed formula for $e_{j}$ follows by the chain rule.
Indeed, we define $z=\cos(2x)$ and compute $$\begin{aligned} P^{\frac{1}{2},\frac{3}{2}}_{j} (z) & = \frac{2\Gamma(j+2)}{\Gamma(j+3)} \frac{1}{\sqrt{\pi}} \frac{\Gamma(j+\frac{3}{2})}{\Gamma(j+2) } \frac{d}{d z} \left( \frac{\cos(\omega_{j} x)}{\cos(x)}\right) \\ & =\frac{2}{\sqrt{\pi}} \frac{\Gamma(j+\frac{3}{2})}{\Gamma(j+3)} \frac{dx}{d z} \frac{d}{d x} \left( \frac{\cos(\omega_{j} x)}{\cos(x)} \right) \\ & =\frac{2}{\sqrt{\pi}} \frac{\Gamma(j+\frac{3}{2})}{\Gamma(j+3)} \frac{-1}{2 \sin(2x)} \frac{-\omega_{j} \cos(x) \sin(\omega_{j}x)+\cos(\omega_{j}x) \sin(x)}{\cos^2(x)} \\ & =\frac{1}{\sqrt{\pi}} \frac{\Gamma(j+\frac{3}{2})}{\Gamma(j+3)} \left( \frac{\omega_{j}}{2\sin(x) \cos^2(x)} \sin(\omega_{j}x) -\frac{1}{2 \cos^3(x)} \cos(\omega_{j}x) \right)\end{aligned}$$ and so $$\begin{aligned} e_{j}(x) & :=2\frac{\sqrt{j! (j+2)!}}{\Gamma (j+\frac{3}{2})} \cos^3(x) P^{\frac{1}{2},\frac{3}{2}}_{j}(\cos(2x)) \\ & =\frac{\sqrt{j! (j+2)!}}{\Gamma (j+\frac{3}{2})} \frac{1}{\sqrt{\pi}} \frac{\Gamma(j+\frac{3}{2})}{\Gamma(j+3)} \left(\omega_{j} \frac{\cos(x)}{\sin(x)} \sin(\omega_{j}x) - \cos(\omega_{j}x) \right) \\ & = \frac{1}{\sqrt{\pi}} \frac{\sqrt{j! (j+2)!}}{\Gamma(j+3) } \left(\omega_{j} \frac{\sin(\omega_{j}x) }{\tan(x)} - \cos(\omega_{j}x) \right).\end{aligned}$$ Finally, since $j$ is an integer, we conclude $$\begin{aligned} \frac{\sqrt{j! (j+2)!}}{\Gamma(j+3) } = \frac{\sqrt{j! (j+2)!}}{ (j+2)! } = \sqrt{\frac{j!}{(j+2)!}} = \sqrt{\frac{1}{(j+1)(j+2)}} = \frac{2}{\sqrt{\omega_{j}^2-1}}.\end{aligned}$$ The closed formula for $e_{j}^{\prime}$ follows by differentiating the closed formula for $e_{j}$. Using these formulas, the orthogonality properties are straightforward. For the fact that the set $\{e_{j}:j=0,1,2,\dots \}$ forms a basis for $L^{2}\left( \left[0,\frac{\pi}{2} \right] \right)$ with respect to the weighted inner product (\[innerproduct\]), see the Appendix in the work of Bachelot [@MR2430631].
In order to show that $\{ \frac{e^{\prime}_{j}}{\omega_{j}}:j=0,1,2,\dots \}$ also forms a basis for the same function space, one has to prove that $$\begin{aligned} \left(f \Big| \frac{e^{\prime}_{j}}{\omega_{j}} \right) = 0,\quad \forall j=0,1,2,\dots \Longrightarrow f=0.\end{aligned}$$ To this end, we define $$\begin{aligned} F(x):= \int_{x}^{\frac{\pi}{2}} f(y) dy\end{aligned}$$ and use the fact that $$\begin{aligned} - (\tan^2(x) e_{j}^{\prime}(x))^{\prime} = \omega_{j}^2 \tan^2(x) e_{j}(x)\end{aligned}$$ which follows from $L[e_{j}]=\omega_{j}^2 e_{j}$ together with integration by parts to compute $$\begin{aligned} 0=\left(f \Big| \frac{e^{\prime}_{j}}{\omega_{j}} \right) &= \int_{0}^{\frac{\pi}{2}} f(x) \frac{e^{\prime}_{j}(x)}{\omega_{j}} \tan^2(x) dx \\ &=-\int_{0}^{\frac{\pi}{2}} F^{\prime}(x) \frac{e^{\prime}_{j}(x)}{\omega_{j}} \tan^2(x) dx \\ &=\int_{0}^{\frac{\pi}{2}} F(x) \left( \frac{e^{\prime}_{j}(x)}{\omega_{j}} \tan^2(x) \right)^{\prime} dx \\ &=-\omega_{j} \int_{0}^{\frac{\pi}{2}} F(x) e_{j}(x) \tan^2(x) dx\end{aligned}$$ for all $j=0,1,2,\dots$; the boundary terms vanish since $F\left(\frac{\pi}{2}\right)=0$ and $\tan^2(0)=0$. Now, we use the fact that $\{e_{j}:j=0,1,2,\dots \}$ forms a basis to get $F = 0$ which in turn implies $f=0$.
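The differential identity used in this argument can be spot-checked numerically from the closed formulas (a sketch, not part of the argument; $\omega_j = 2j+3$ as above):

```python
import numpy as np

# Spot-check (a numerical sketch, not part of the argument) of the identity
# -(tan^2(x) e_j'(x))' = omega_j^2 tan^2(x) e_j(x), using the closed formulas
# of Lemma [ClosedformulasFore] with omega_j = 2j + 3 and a central difference
# for the outer derivative.

def omega(j):
    return 2 * j + 3

def e(j, x):
    w = omega(j)
    return 2 / np.sqrt(np.pi) / np.sqrt(w**2 - 1) * (
        w * np.sin(w * x) / np.tan(x) - np.cos(w * x))

def de(j, x):
    w = omega(j)
    return 2 * w / np.sqrt(np.pi) / np.sqrt(w**2 - 1) * (
        w * np.cos(w * x) / np.tan(x) - np.sin(w * x) / np.tan(x)**2)

h = 1e-5
max_resid = 0.0
for j in (0, 1, 2):
    for x0 in (0.4, 0.8, 1.2):
        g = lambda t: np.tan(t)**2 * de(j, t)
        lhs = -(g(x0 + h) - g(x0 - h)) / (2 * h)      # -(tan^2(x) e_j'(x))'
        rhs = omega(j)**2 * np.tan(x0)**2 * e(j, x0)
        max_resid = max(max_resid, abs(lhs - rhs) / (1 + abs(rhs)))
```

The relative residual is at the level of the finite-difference error, confirming the identity at the sampled points.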
\[lot\] We find the leading order terms $$\begin{aligned} & e_{i}(x) = \frac{2}{\sqrt{\pi}} \left( \frac{ \omega_{i}}{\sqrt{\omega_{i}^2-1}} \frac{\sin\left( \omega_{i} x \right)}{\tan(x)} -\frac{ 1}{\sqrt{\omega_{i}^2-1}} \cos \left( \omega_{i}x \right) \right) \simeq \frac{2}{\sqrt{\pi}} \frac{\sin\left( \omega_{i} x \right)}{\tan(x)},\\ & \frac{e_{i}^{\prime}(x)}{\omega_{i}} = \frac{2}{\sqrt{\pi}} \left( \frac{ \omega_{i}}{\sqrt{\omega_{i}^2-1}} \frac{\cos\left( \omega_{i} x \right)}{\tan(x)} - \frac{1}{\sqrt{\omega_{i}^2-1}} \frac{\sin \left( \omega_{i}x \right)}{\tan^2(x)} \right) \simeq \frac{2}{\sqrt{\pi}} \frac{\cos \left( \omega_{i} x \right)}{\tan(x)},\end{aligned}$$ as $i \longrightarrow \infty$, where $\simeq$ is understood with respect to the weighted $L^2-$norm $$\begin{aligned} \| f \| :=\left( \int_{0}^{\frac{\pi}{2}} f^2(x) \tan^2 (x) dx \right)^{\frac{1}{2}}.\end{aligned}$$ Indeed, for large $i$, we estimate $$\begin{aligned} \left \| e_{i} - \frac{2}{\sqrt{\pi}} \frac{\sin\left( \omega_{i} \cdot \right)}{\tan} \right \| &\leq \frac{2}{\sqrt{\pi}} \left( \frac{ \omega_{i}}{\sqrt{\omega_{i}^2-1}} -1 \right) \left \| \frac{\sin(\omega_{i} \cdot)}{\tan} \right \| +\frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{i}^2-1}} \left \| \cos(\omega_{i} \cdot) \right \| \\ & \lesssim \frac{ \omega_{i}}{\sqrt{\omega_{i}^2-1}} -1 +\sqrt{ \frac{\omega_{i}}{\omega_{i}^2-1} } \lesssim \frac{1}{\sqrt{\omega_{i}}}, \\ \left \| \frac{e_{i}^{\prime}}{\omega_{i}} - \frac{2}{\sqrt{\pi}} \frac{\cos\left( \omega_{i} \cdot \right)}{\tan} \right \| & \leq \frac{2}{\sqrt{\pi}} \left(\frac{ \omega_{i}}{\sqrt{\omega_{i}^2-1}} -1 \right) \left \| \frac{\cos(\omega_{i} \cdot)}{\tan} \right \| + \frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{i}^2-1}} \left \| \frac{\sin(\omega_{i} \cdot)}{\tan^2} \right \| \\ & \lesssim \frac{ \omega_{i}}{\sqrt{\omega_{i}^2-1}} -1 +\sqrt{ \frac{\omega_{i}}{\omega_{i}^2-1} } \lesssim
\frac{1}{\sqrt{\omega_{i}}},\end{aligned}$$ since $$\begin{aligned} & \left \| \frac{\sin(\omega_{i} \cdot)}{\tan} \right \|^2 = \int_{0}^{\frac{\pi}{2}} \sin^2 (\omega_{i}x) dx = \frac{\pi}{4},\nonumber \\ & \left \| \cos(\omega_{i} \cdot) \right \|^2 = \int_{0}^{\frac{\pi}{2}} \cos^2 (\omega_{i}x) \tan^2(x)dx \lesssim \omega_{i}, \label{App1} \\ & \left \| \frac{\cos(\omega_{i} \cdot)}{\tan} \right \|^2 = \int_{0}^{\frac{\pi}{2}} \cos^2 (\omega_{i}x) dx = \frac{\pi}{4},\nonumber \\ & \left \| \frac{\sin(\omega_{i} \cdot)}{\tan^2} \right \|^2 = \int_{0}^{\frac{\pi}{2}} \frac{\sin^2 (\omega_{i}x)}{\tan^2(x)} dx \lesssim \omega_{i}. \label{App2}\end{aligned}$$ The proofs of and are given in the Appendix, Lemma \[lemmaAppendix\]. Next, we prove $L^{\infty}-$bounds for quantities related to the eigenfunctions. \[Linftyboundse\] For all $j=0,1,2,\dots$, we have $$\begin{aligned} & \sup_{x \in \left[0,\frac{\pi}{2} \right]} \left| e_{j}(x) \right| \leq \frac{2}{\sqrt{\pi}} ~\omega_{j}, \\ & \sup_{x \in \left[0,\frac{\pi}{2} \right]} \left| \tan(x) \frac{e_{j}^{\prime}(x)}{\omega_{j}} \right| \leq \frac{4}{\sqrt{\pi}},\\ & \sup_{x \in \left[0,\frac{\pi}{2} \right]} \left| \int_{x}^{\frac{\pi}{2}} e_{j}(y) \sin(y) \cos(y) dy \right| \leq \frac{2}{\omega_{j}}.\end{aligned}$$ For the first estimate, we define the oscillating part of $e_{j}(x)$, namely $$\begin{aligned} f_{j}(x):= \omega_{j} \frac{\sin(\omega_{j} x)}{\tan(x)} - \cos(\omega_{j} x),\end{aligned}$$ for all $x\in [0,\frac{\pi}{2}]$ and $j=0,1,2,\dots$ and compute its derivative $$\begin{aligned} f_{j}^{\prime}(x) = \frac{\omega_{j}}{\tan(x)} \left( \omega_{j} \cos(\omega_{j} x) - \frac{\sin(\omega_{j}x)}{\tan(x)} \right) = \omega_{j} \frac{\cos(\omega_{j}x)}{\tan(x)} \left( \omega_{j} - \frac{\tan(\omega_{j}x)}{\tan(x)} \right).\end{aligned}$$ Hence, for all $x\in (0,\frac{\pi}{2})$ and $j=0,1,2,\dots$, the equation $$\begin{aligned} f_{j}^{\prime}(x) = 0 \Longleftrightarrow \tan(\omega_{j} x) = \omega_{j} 
\tan(x)\end{aligned}$$ has countably many solutions, say $x=x_{j}^{\star}\in (0,\frac{\pi}{2})$. Then, $$\begin{aligned} f_{j}(x_{j}^{\star}) &= \cos(\omega_{j} x_{j}^{\star}) \Bigg( \omega_{j} \frac{\tan(\omega_{j} x_{j}^{\star}) }{\tan(x_{j}^{\star}) } -1 \Bigg) = \cos(\omega_{j} x_{j}^{\star}) \Bigg( \omega_{j} \frac{\omega_{j} \tan( x_{j}^{\star}) }{\tan(x_{j}^{\star}) } -1 \Bigg) = \cos(\omega_{j} x_{j}^{\star}) \big( \omega_{j}^2-1 \big).\end{aligned}$$ Now, since $$\begin{aligned} f_{j}(0) = \lim_{x \rightarrow 0} f_{j}(x) = \omega_{j}^2-1,\quad \quad f_{j}\left(\frac{\pi}{2} \right) = \lim_{x \rightarrow \frac{\pi}{2}} f_{j}(x) = 0,\end{aligned}$$ we get $$\begin{aligned} \sup_{x \in \left[0,\frac{\pi}{2} \right]} \left| f_{j}(x) \right| = \max \left\{ \left| f_{j}(0) \right|,\left| f_{j}(x_{j}^{\star}) \right|,\left| f_{j} \left( \frac{\pi}{2} \right) \right| \right\} \leq \omega_{j}^2-1\end{aligned}$$ and finally $$\begin{aligned} \sup_{x \in \left[0,\frac{\pi}{2} \right]} \left| e_{j}(x) \right| = \frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{j}^2-1}} \sup_{x \in \left[0,\frac{\pi}{2} \right]} \left| f_{j}(x) \right| = \frac{2}{\sqrt{\pi}} \sqrt{\omega_{j}^2-1} \leq \frac{2}{\sqrt{\pi}}\omega_{j}.\end{aligned}$$ For the second estimate, we observe that $$\begin{aligned} \tan(x) \frac{e_{j}^{\prime}(x)}{\omega_{j}} & =\frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{j}^2-1}} \left( \omega_{j} \cos(\omega_{j} x) - \frac{\sin(\omega_{j}x)}{\tan(x)} \right) \\ & =\frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{j}^2-1}} \left( \omega_{j} \cos(\omega_{j} x) - \frac{1}{\omega_{j}}\omega_{j} \frac{\sin(\omega_{j}x)}{\tan(x)} + \frac{1}{\omega_{j}} \cos(\omega_{j}x) -\frac{1}{\omega_{j}} \cos(\omega_{j}x) \right)\\ & =\frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{j}^2-1}} \left( \omega_{j} \cos(\omega_{j} x) - \frac{1}{\omega_{j}} \left( \omega_{j} \frac{\sin(\omega_{j}x)}{\tan(x)} - \cos(\omega_{j}x) \right) -\frac{1}{\omega_{j}} \cos(\omega_{j}x) \right)\\ &
=\frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{j}^2-1}} \left( \left( \omega_{j} -\frac{1}{\omega_{j}} \right) \cos(\omega_{j} x) - \frac{1}{\omega_{j}} f_{j}(x) \right).\end{aligned}$$ Hence, by the triangle inequality, $$\begin{aligned} \sup_{x \in \left[0,\frac{\pi}{2} \right]} \left| \tan(x) \frac{e_{j}^{\prime}(x)}{\omega_{j}} \right| & \leq \frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{j}^2-1}} \left( \left( \omega_{j} -\frac{1}{\omega_{j}} \right) + \frac{1}{\omega_{j}} \sup_{x \in \left[0,\frac{\pi}{2} \right]} \left| f_{j}(x) \right| \right) \\ & \leq \frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{j}^2-1}} \left( \left( \omega_{j} -\frac{1}{\omega_{j}} \right) + \frac{\omega_{j}^2-1}{\omega_{j}} \right) \\ & \leq \frac{4}{\sqrt{\pi}},\end{aligned}$$ for all $j=0,1,2,\dots$. For the last estimate, we define $$\begin{aligned} g_{j}(x):=\int_{x}^{\frac{\pi}{2}} e_{j}(y) \sin(y) \cos(y) dy\end{aligned}$$ for all $x\in \left[0,\frac{\pi}{2} \right]$ and $j=0,1,2,\dots$ and compute its derivative $$\begin{aligned} g_{j}^{\prime}(x) = -e_{j}(x)\sin(x) \cos(x). \end{aligned}$$ As above, for all $x\in (0,\frac{\pi}{2})$ and $j=0,1,2,\dots$, the equation $$\begin{aligned} g_{j}^{\prime}(x) = 0 \Longleftrightarrow \omega_{j} \sin(\omega_{j}x) = \cos(\omega_{j}x) \tan(x)\end{aligned}$$ has countably many solutions, say $x=x_{j}^{\prime}\in (0,\frac{\pi}{2})$.
Then, $$\begin{aligned} g_{j}(x_{j}^{\prime}) &= \int_{x_{j}^{\prime}}^{\frac{\pi}{2}} e_{j}(y) \sin(y)\cos(y) dy \\ &= \frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{j}^2-1}} \left( \omega_{j} \int_{x_{j}^{\prime}}^{\frac{\pi}{2}} \sin(\omega_{j}y) \cos^2(y) dy - \int_{x_{j}^{\prime}}^{\frac{\pi}{2}} \cos(\omega_{j}y) \sin(y)\cos(y) dy \right) \\ & = \frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{j}^2-1}} \frac{1}{\omega_{j}^2-4} \Big( \cos(\omega_{j}x_{j}^{\prime}) \left( -2 + \cos(2x_{j}^{\prime}) + \omega_{j}^2 \cos^2(x_{j}^{\prime})\right) + 3 \sin(x_{j}^{\prime}) \cos(x_{j}^{\prime}) \omega_{j} \sin(\omega_{j}x_{j}^{\prime}) \Big) \\ & = \frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{j}^2-1}} \frac{\cos(\omega_{j}x_{j}^{\prime})}{\omega_{j}^2-4} \Big( -2 + \cos(2x_{j}^{\prime}) + \omega_{j}^2 \cos^2(x_{j}^{\prime}) + 3 \sin(x_{j}^{\prime}) \cos(x_{j}^{\prime}) \tan(x_{j}^{\prime}) \Big) \\ & = \frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{j}^2-1}} \frac{\cos(\omega_{j}x_{j}^{\prime})}{\omega_{j}^2-4} \Big( - 1-2 \sin^2(x_{j}^{\prime}) + \omega_{j}^2 \cos^2(x_{j}^{\prime}) + 3 \sin^2(x_{j}^{\prime}) \Big) \\ & = \frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{j}^2-1}} \frac{\cos(\omega_{j}x_{j}^{\prime})}{\omega_{j}^2-4} (\omega_{j}^2-1) \cos^2(x_{j}^{\prime}) \\ & = \frac{2}{\sqrt{\pi}} \frac{\sqrt{\omega_{j}^2-1} }{\omega_{j}^2-4} \cos(\omega_{j}x_{j}^{\prime}) \cos^2(x_{j}^{\prime}) \end{aligned}$$ and since $$\begin{aligned} g_{j}(0) = \lim_{x \rightarrow 0} g_{j}(x) = \frac{2}{\sqrt{\pi}} \frac{\sqrt{\omega_{j}^2-1} }{\omega_{j}^2-4} ,\quad \quad g_{j}\left(\frac{\pi}{2} \right) = \lim_{x \rightarrow \frac{\pi}{2}} g_{j}(x) = 0,\end{aligned}$$ we get $$\begin{aligned} \sup_{x \in \left[0,\frac{\pi}{2} \right]} \left| g_{j}(x) \right| = \max \left\{ \left| g_{j}(0) \right|,\left| g_{j}(x_{j}^{\prime}) \right|,\left| g_{j} \left( \frac{\pi}{2} \right) \right| \right\} \leq \frac{2}{\sqrt{\pi}} \frac{\sqrt{\omega_{j}^2-1} }{\omega_{j}^2-4} \leq
\frac{2}{\omega_{j}},\end{aligned}$$ valid for all $j=0,1,2,\dots$, which concludes the proof. For future reference, we also prove the following Dirichlet-Kernel-type identities. \[Dirichlet\] For any $n \in \mathbb{N}$ and $x\in\mathbb{R}$, we have $$\begin{aligned} & \frac{\sin((2n+1)x)}{\sin(x)} = 1+ 2\sum_{\mu=1}^{n} \cos(2 \mu x), \\ & \frac{\cos((2n+1)x)}{\cos(x)} =(-1)^{n}\left( 1+ 2\sum_{\mu=1}^{n} (-1)^{\mu} \cos(2 \mu x) \right), \\ & \frac{\sin(2 n x)}{\tan(x)} = 1+ \cos(2nx) + 2\sum_{\mu=1}^{n-1} \cos(2 \mu x).\end{aligned}$$ The first result is well-known (Dirichlet Kernel). For the second, we use the first one and just replace $x$ by $x+\frac{\pi}{2}$, $$\begin{aligned} 1+ 2\sum_{\mu=1}^{n} (-1)^{\mu} \cos(2 \mu x) & = 1+ 2\sum_{\mu=1}^{n} \cos(2 \mu x+\pi \mu) = 1+ 2\sum_{\mu=1}^{n} \cos \left(2 \mu \left( x+ \frac{\pi}{2} \right)\right ) \\ &= \frac{\sin \left( (2n+1) \left(x+\frac{\pi}{2} \right) \right)}{\sin \left(x+\frac{\pi}{2} \right)} \\ & = \frac{\sin((2n+1)x)\cos \left( (2n+1) \frac{\pi}{2} \right)+\cos((2n+1)x)\sin \left( (2n+1) \frac{\pi}{2} \right)}{\cos(x)} \\ &= (-1)^n \frac{\cos((2n+1)x)}{\cos(x)}. \end{aligned}$$ For the third, we observe that $$\begin{aligned} \frac{\sin(2nx)}{\tan(x)} +\cos(2nx) & = \frac{\sin(2nx) \cos(x) + \cos(2nx) \sin(x)}{\sin(x)} \\ & = \frac{\sin((2n+1)x)}{\sin(x)} = 1+ 2\sum_{\mu=1}^{n} \cos(2 \mu x) \\ & = 1+ 2\sum_{\mu=1}^{n-1} \cos(2 \mu x) + 2 \cos(2 n x),\end{aligned}$$ from which the result follows. Finally, we establish the asymptotic behaviour of specific oscillatory integrals which will appear later.
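Before doing so, we note that the identities of Lemma \[Dirichlet\] can be confirmed numerically; the following sketch evaluates both sides on a grid:

```python
import numpy as np

# Numerical confirmation (a sketch) of the three identities of Lemma
# [Dirichlet], evaluated on a grid avoiding the zeros of sin, cos and tan.

x = np.linspace(0.05, np.pi / 2 - 0.05, 1000)
max_err = 0.0
for n in (1, 2, 5, 8):
    mu = np.arange(1, n + 1)[:, None]        # 1, ..., n as a column vector
    nu = np.arange(1, n)[:, None]            # 1, ..., n-1 (empty for n = 1)
    pairs = (
        (np.sin((2 * n + 1) * x) / np.sin(x),
         1 + 2 * np.cos(2 * mu * x).sum(axis=0)),
        (np.cos((2 * n + 1) * x) / np.cos(x),
         (-1) ** n * (1 + 2 * ((-1.0) ** mu * np.cos(2 * mu * x)).sum(axis=0))),
        (np.sin(2 * n * x) / np.tan(x),
         1 + np.cos(2 * n * x) + 2 * np.cos(2 * nu * x).sum(axis=0)),
    )
    for lhs, rhs in pairs:
        max_err = max(max_err, float(np.abs(lhs - rhs).max()))
```

The maximal pointwise discrepancy stays at the level of machine precision, as expected for exact identities.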
\[OscillatoryIntegrals\] For any $N \in \mathbb{N}$, we have $$\begin{aligned} & \int_{0}^{\frac{\pi}{2}} \frac{\sin((2a+1)x)}{\tan(x)} dx = \frac{\pi}{2} + \mathcal{O} \left( \frac{1}{a^{2}} \right),\\ & \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{\sin(2 a x )}{\tan(x)} dx = \frac{\pi}{2} + \mathcal{O} \left( \frac{1}{a^{N}} \right), \\ & \int_{0}^{\frac{\pi}{2}} \cos^3(x) \frac{\sin((2a+1)x)}{\tan(x)} dx = \frac{\pi}{2} + \mathcal{O} \left( \frac{1}{a^{N}} \right),\\ & \int_{\frac{\pi}{a}}^{\frac{\pi}{2}} \frac{ \cos(2 a x)}{\tan^2(x)} dx = c a + \mathcal{O} \left( \frac{1}{a^{3}} \right), \end{aligned}$$ as $a \longrightarrow \infty$ with $a \in \mathbb{N}$. Here, $$\begin{aligned} c:= \frac{1}{\pi} - \pi + 2 \int_{0}^{1} \frac{\sin(2 \pi y )}{y}dy \simeq 0.01302.\end{aligned}$$ For the first integral, we use Lemma \[Dirichlet\] to infer $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} \frac{\sin((2a+1)x)}{\tan(x)} dx & = \int_{0}^{\frac{\pi}{2}} \cos(x) \frac{\sin((2a+1)x)}{\sin(x)} dx = \int_{0}^{\frac{\pi}{2}} \cos(x) \left( 1+ 2 \sum_{\mu=1}^{a} \cos(2 \mu x) \right) dx \\ & = \int_{0}^{\frac{\pi}{2}} \cos(x) dx + 2 \sum_{\mu=1}^{a} \int_{0}^{\frac{\pi}{2}} \cos(x) \cos(2 \mu x) dx \\ & = 1 + 2 \sum_{\mu=1}^{a} \frac{(-1)^{\mu}}{1-4 \mu^2} = 1 + 2 \sum_{\mu=1}^{\infty} \frac{(-1)^{\mu}}{1-4 \mu^2} - 2 \sum_{\mu=a+1}^{\infty} \frac{(-1)^{\mu}}{1-4 \mu^2} \\ & = 1 + 2 \frac{\pi-2}{4} - 2 \sum_{\mu=a+1}^{\infty} \frac{(-1)^{\mu}}{1-4 \mu^2} \\ & = \frac{\pi}{2} + 2 \sum_{\mu=a+1}^{\infty} \frac{(-1)^{\mu}}{4 \mu^2-1} = \frac{\pi}{2} + \mathcal{O} \left( \frac{1}{a^{2}} \right). $$ The second integral follows similarly. 
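The rate in the first integral is easy to observe numerically; in the sketch below, the error is controlled by the first neglected term of the alternating series, $2/(4(a+1)^2-1) < 1/a^2$:

```python
import numpy as np

# Numerical illustration (a sketch) of
# int_0^{pi/2} sin((2a+1)x)/tan(x) dx = pi/2 + O(1/a^2);
# the error is bounded by the first neglected term 2/(4(a+1)^2 - 1) < 1/a^2.

def trap(y, x):
    # composite trapezoidal rule (endpoints avoided; the integrand extends
    # continuously, with value 2a+1 at x = 0 and 0 at x = pi/2)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(1e-8, np.pi / 2 - 1e-8, 400001)
errs = {}
for a in (5, 10, 20):
    I = trap(np.sin((2 * a + 1) * x) / np.tan(x), x)
    errs[a] = abs(I - np.pi / 2)

ok = all(errs[a] < 1.0 / a**2 for a in errs)
```

The errors decrease with $a$ at the stated quadratic rate.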
Lemma \[Dirichlet\] implies $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{\sin(2 a x)}{\tan(x)} dx & = \int_{0}^{\frac{\pi}{2}} \cos^2(x) \left( 1+ \cos(2 a x) + 2 \sum_{\mu =1}^{a-1} \cos(2 \mu x) \right) dx \\ & = \int_{0}^{\frac{\pi}{2}} \cos^2(x) dx + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \cos(2 a x) dx + 2 \sum_{\mu=1}^{a-1} \int_{0}^{\frac{\pi}{2}} \cos^2(x)\cos(2 \mu x) dx \\ & = \frac{\pi}{4}+0+2 \frac{\pi}{8} = \frac{\pi}{2}, \end{aligned}$$ for all $a \geq 3$, since $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} \cos^2(x) dx = \frac{\pi}{4},\quad \int_{0}^{\frac{\pi}{2}} \cos^2(x) \cos(2 \lambda x) dx = \begin{dcases} \frac{\pi}{8}, & \lambda = 1 \\ 0, & \lambda \geq 2. \end{dcases} \end{aligned}$$ Next, for the third integral, Lemma \[Dirichlet\] also yields $$\begin{aligned} & \int_{0}^{\frac{\pi}{2}} \cos^3(x) \frac{\sin((2a+1)x)}{\tan(x)} dx = \int_{0}^{\frac{\pi}{2}} \cos^4(x) \frac{\sin((2a+1)x)}{\sin(x)} dx = \int_{0}^{\frac{\pi}{2}} \cos^4(x) \Bigg( 1+ 2\sum_{\mu=1}^{a} \cos(2 \mu x)\Bigg) dx \\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad = \int_{0}^{\frac{\pi}{2}} \cos^4(x) dx + 2\sum_{\mu=1}^{a} \int_{0}^{\frac{\pi}{2}} \cos^4(x) \cos(2 \mu x) dx = \frac{3 \pi }{16} + 2 \frac{\pi }{8} + 2 \frac{ \pi }{32} = \frac{ \pi }{2},\end{aligned}$$ for all $a \geq 3$, since $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} \cos^4(x) dx = \frac{3\pi}{16},\quad \int_{0}^{\frac{\pi}{2}} \cos^4(x) \cos(2 \lambda x) dx = \begin{dcases} \frac{\pi}{8}, & \lambda = 1 \\ \frac{\pi}{32}, & \lambda = 2 \\ 0, & \lambda \geq 3. \end{dcases} \end{aligned}$$ Finally, we conclude with the fourth integral.
First, observe that $$\begin{aligned} 2 \frac{\cos(2 a x)}{\tan^2(x)} + 4 a \frac{\sin(2 a x)}{\tan(x)} + \Bigg( 2 \frac{\cos(2ax)}{\tan(x)} + \frac{\sin(2ax)}{a} \Bigg)^{\prime} = 0\end{aligned}$$ and therefore, for any $\epsilon >0$, $$\begin{aligned} \int_{\epsilon}^{\frac{\pi}{2}} 2 \frac{ \cos(2 a x)}{\tan^2(x)} dx & = \int_{\epsilon}^{\frac{\pi}{2}} \left( -\Bigg( 2 \frac{\cos(2ax)}{\tan(x)} + \frac{\sin(2ax)}{a} \Bigg)^{\prime} - 4 a \frac{\sin(2 a x)}{\tan(x)} \right) dx \nonumber \\ & = 2 \frac{\cos(2a \epsilon)}{\tan(\epsilon)} + \frac{\sin(2a \epsilon)}{a} - 4a \int_{\epsilon}^{\frac{\pi}{2}} \frac{\sin(2 a x)}{\tan(x)} dx.\end{aligned}$$ Second, we set $\epsilon = \frac{\pi}{a}$ and get $\cos(2a \epsilon) = 1, \sin(2a \epsilon) = 0$. Now, Lemma \[Dirichlet\] shows that $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} \frac{\sin(2 a x)}{\tan(x)} dx = \frac{\pi}{2}+ \int_{0}^{\frac{\pi}{2}} \cos(2ax) dx + 2\sum_{\mu=1}^{a-1} \int_{0}^{\frac{\pi}{2}} \cos(2 \mu x) dx = \frac{\pi}{2}+0+0 = \frac{\pi}{2}, \end{aligned}$$ valid for all $a \geq 1$, and we change variables $y=\frac{a}{\pi}x$ to infer $$\begin{aligned} \int_{\frac{\pi}{a}}^{\frac{\pi}{2}} \frac{ \cos(2 a x)}{\tan^2(x)} dx &= \frac{1}{\tan(\frac{\pi}{a})} -2 a \int_{\frac{\pi}{a}}^{\frac{\pi}{2}} \frac{\sin(2 a x)}{\tan(x)} dx \\ &= \frac{1}{\tan(\frac{\pi}{a})} -2 a \left( \frac{\pi}{2} - \int_{0}^{\frac{\pi}{a}} \frac{\sin(2 a x)}{\tan(x)} dx \right)\\ &= \frac{1}{\tan(\frac{\pi}{a})} - a \pi + 2\pi \int_{0}^{1} \frac{\sin(2 \pi y )}{\tan(\frac{\pi y }{a})} dy, \end{aligned}$$ for all $a \geq 2$. 
Now, for all $y \in [0,1]$ and $a \longrightarrow \infty$, $$\begin{aligned} \frac{1}{\tan(\frac{\pi y }{a})} = \frac{a}{\pi} \frac{1}{y} - \frac{\pi}{3 a} y + \mathcal{O} \left( \left( \frac{y}{a} \right)^{3} \right) \end{aligned}$$ and hence $$\begin{aligned} \int_{0}^{1} \frac{\sin(2 \pi y )}{\tan(\frac{\pi y }{a})} dy & = \frac{a}{\pi} \int_{0}^{1} \frac{\sin(2 \pi y )}{y}dy -\frac{\pi}{3 a}\int_{0}^{1} y\sin(2 \pi y ) dy +\int_{0}^{1} \sin(2 \pi y ) \mathcal{O}\left( \left( \frac{y}{a} \right)^{3} \right) dy \\ & = \frac{a}{\pi} \int_{0}^{1} \frac{\sin(2 \pi y )}{y}dy -\frac{\pi}{3 a} \frac{-1}{2 \pi} +\mathcal{O} \left( \frac{1}{a^{3}} \right) \\ & = \frac{a}{\pi} \int_{0}^{1} \frac{\sin(2 \pi y )}{y}dy +\frac{1}{6 a} +\mathcal{O} \left( \frac{1}{a^{3}} \right) \end{aligned}$$ from which we conclude $$\begin{aligned} \int_{\frac{\pi}{a}}^{\frac{\pi}{2}} \frac{ \cos(2 a x)}{\tan^2(x)} dx &= \frac{1}{\tan(\frac{\pi}{a})} - a \pi + 2\pi \int_{0}^{1} \frac{\sin(2 \pi y )}{\tan(\frac{\pi y }{a})} dy \\ &=\frac{1}{\tan(\frac{\pi}{a})} - a \pi+ 2a \int_{0}^{1} \frac{\sin(2 \pi y )}{y}dy +\frac{\pi}{3 a} +\mathcal{O} \left( \frac{1}{a^{3}} \right) \\ & =\frac{a}{\pi}-\frac{\pi}{3 a}+\mathcal{O} \left( \frac{1}{a^{3}} \right) - a \pi+ 2a \int_{0}^{1} \frac{\sin(2 \pi y )}{y}dy +\frac{\pi}{3 a} +\mathcal{O} \left( \frac{1}{a^{3}} \right) \\ & =a \left( \frac{1}{\pi} - \pi + 2 \int_{0}^{1} \frac{\sin(2 \pi y )}{y}dy \right) +\mathcal{O} \left( \frac{1}{a^{3}} \right). \end{aligned}$$ \[e0oscilating\] A straightforward computation yields $$\begin{aligned} e_{0}(x) = 4 \sqrt{\frac{2}{\pi}} \cos^3(x)\end{aligned}$$ and hence the third integral of Lemma \[OscillatoryIntegrals\] also gives $$\begin{aligned} & \int_{0}^{\frac{\pi}{2}} e_{0}(x) \frac{\sin((2a+1)x)}{\tan(x)} dx = 2 \sqrt{2 \pi} + \mathcal{O} \left( \frac{1}{a^{N}} \right), \\ \end{aligned}$$ for $a \longrightarrow \infty$. Finally, we will also use standard integration by parts. 
\[byparts1\] Let $a \in \mathbb{N}$ and $N \in \mathbb{N}$. Assume that $F$ is smooth in $\left (0,\frac{\pi}{2} \right)$ and continuous in $\left [0,\frac{\pi}{2} \right]$. Then, $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} F(x) \cos( a x ) dx = & \sum_{k=0}^{N} \frac{(-1)^{k}}{a^{2k+1}} \left( F^{(2k)}(x) \sin(a x) \right) \Big|_{x=0}^{x=\frac{\pi}{2}} + \sum_{k=0}^{N} \frac{(-1)^{k}}{a^{2k+2}} \left( F^{(2k+1)}(x) \cos(a x) \right) \Big|_{x=0}^{x=\frac{\pi}{2}} \\ & + \frac{(-1)^{N-1}}{a^{2N+2}} \int_{0}^{\frac{\pi}{2}} F^{(2N+2)}(x) \cos(a x) dx, \\ \int_{0}^{\frac{\pi}{2}} F(x) \sin( a x ) dx = & \sum_{k=0}^{N} \frac{(-1)^{k+1}}{a^{2k+1}} \left( F^{(2k)}(x) \cos(a x) \right) \Big|_{x=0}^{x=\frac{\pi}{2}} + \sum_{k=0}^{N} \frac{(-1)^{k}}{a^{2k+2}} \left( F^{(2k+1)}(x) \sin(a x) \right) \Big|_{x=0}^{x=\frac{\pi}{2}} \\ & + \frac{(-1)^{N-1}}{a^{2N+2}} \int_{0}^{\frac{\pi}{2}} F^{(2N+2)}(x) \sin(a x) dx.\end{aligned}$$ The proof is a straightforward application of integration by parts. The following result is a direct consequence of Lemma \[byparts1\]. \[byparts2\] Let $b \in \mathbb{N}$ and $N \in \mathbb{N}$. Assume that $F$ is smooth in $\left (0,\frac{\pi}{2} \right)$, continuous in $\left [0,\frac{\pi}{2} \right]$ with uniformly bounded derivatives in $\left [0,\frac{\pi}{2} \right]$.
Then, $$\begin{aligned} & \int_{0}^{\frac{\pi}{2}} F(x) \cos( 2 b x ) dx = \sum_{k=0}^{N} \frac{(-1)^{k}}{(2b)^{2k+2}} \left( (-1)^{b} F^{(2k+1)} \left( \frac{\pi}{2} \right) -F^{(2k+1)} \left( 0 \right) \right) + \mathcal{O} \left( \frac{1}{b^{2N+2}} \right), \\ & \int_{0}^{\frac{\pi}{2}} F(x) \sin( 2 b x ) dx = \sum_{k=0}^{N} \frac{(-1)^{k+1}}{(2b)^{2k+1}} \left( (-1)^{b} F^{(2k)} \left( \frac{\pi}{2} \right) -F^{(2k)} \left( 0 \right) \right) + \mathcal{O} \left( \frac{1}{b^{2N+2}} \right), \\ & \int_{0}^{\frac{\pi}{2}} F(x) \cos( (2 b+1) x ) dx = \sum_{k=0}^{N} \frac{(-1)^{k+b}}{(2b+1)^{2k+1}} F^{(2k)} \left( \frac{\pi}{2} \right) + \sum_{k=0}^{N} \frac{(-1)^{k+1}}{(2b+1)^{2k+2}} F^{(2k+1)} \left( 0 \right) + \mathcal{O} \left( \frac{1}{b^{2N+2}} \right), \\ & \int_{0}^{\frac{\pi}{2}} F(x) \sin( (2 b+1) x ) dx = \sum_{k=0}^{N} \frac{(-1)^{k}}{(2 b+1)^{2k+1}} F^{(2k)} \left( 0 \right) + \sum_{k=0}^{N} \frac{(-1)^{k+b}}{(2 b+1)^{2k+2}} F^{(2k+1)} \left( \frac{\pi}{2} \right) + \mathcal{O} \left( \frac{1}{b^{2N+2}} \right),\end{aligned}$$ as $b \longrightarrow \infty$. All these asymptotic expansions follow from Lemma \[byparts1\] just by computing the boundary terms. Note that if $a$ is even, namely $a = 2b$ for some $b \in \mathbb{N}$, then $\sin(a \frac{\pi}{2}) = 0$ and $ \cos(a \frac{\pi}{2}) = (-1)^{b}$ whereas if $a$ is odd, namely $a = 2b+1$ for some $b \in \mathbb{N}$, then $\sin(a \frac{\pi}{2}) = (-1)^b$ and $ \cos(a \frac{\pi}{2}) = 0$. For example, Lemma \[byparts1\] yields $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} F(x) \cos( 2 b x ) dx & = \sum_{k=0}^{N} \frac{(-1)^{k}}{(2b)^{2k+2}} \left( (-1)^{b} F^{(2k+1)} \left( \frac{\pi}{2} \right) -F^{(2k+1)} \left( 0 \right) \right) \\ & + \frac{(-1)^{N-1}}{(2b)^{2N+2}} \int_{0}^{\frac{\pi}{2}} F^{(2N+2)}(x) \cos(2 b x) dx.\end{aligned}$$ The other asymptotic expansions follow similarly. With these auxiliary results at hand we proceed to the main analysis of the Fourier constants.
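The leading term of these expansions can be illustrated with the (hypothetical) test function $F(x)=e^{x}$, for which every derivative is $e^{x}$; the following sketch compares the $N=0$ truncation of the first expansion against numerical quadrature:

```python
import numpy as np

# Illustration (a sketch) of the N = 0 truncation of the first expansion for
# the hypothetical test function F(x) = exp(x), whose derivatives are all
# exp(x): int_0^{pi/2} F(x) cos(2bx) dx should equal
# ((-1)^b F'(pi/2) - F'(0)) / (2b)^2 up to an O(1/b^4) remainder.

def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(0.0, np.pi / 2, 400001)
F = np.exp(x)
ok = True
for b in (5, 10, 20):
    I = trap(F * np.cos(2 * b * x), x)
    leading = ((-1) ** b * np.exp(np.pi / 2) - 1.0) / (2 * b) ** 2
    ok = ok and abs(I - leading) < 5.0 / b**4   # generous O(1/b^4) bound
```

For this choice of $F$ the integral can even be computed in closed form, $\big((-1)^{b}e^{\pi/2}-1\big)/(1+4b^{2})$, which makes the $\mathcal{O}(1/b^{4})$ discrepancy explicit.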
Perturbations 1: Series ======================= As mentioned above, Maliborski-Rostworowski [@13033186] considered --- and provided strong numerical evidence indicating that time-periodic solutions exist for non-generic initial data. To explain their approach and see how the Fourier constants appear, first assume that $(\Phi, \Pi, A, \delta)$ are all close to $(0,0,1,0)$ and expand $(\Phi, \Pi, A, \delta)$ in terms of powers of $\epsilon$ using the series ,,, and . Definition of the Fourier constants ----------------------------------- First, we compute the density $$\begin{aligned} \Phi^2(t,x)+\Pi^2(t,x) = \sum_{\lambda=1}^{\infty} r_{2\lambda} \epsilon^{2\lambda}\end{aligned}$$ where, for all $\lambda = 0,1,2,\dots$, $$\begin{aligned} r_{2(\lambda+1)}(\tau,x) = \sum_{\substack{\mu,\nu=0 \\ \mu+\nu=\lambda }}^{\lambda} \left( \psi_{2\mu+1}(\tau,x)\psi_{2\nu+1}(\tau,x) + \sigma_{2\mu+1}(\tau,x)\sigma_{2\nu+1} (\tau,x) \right).\end{aligned}$$ Next, we substitute these expressions into ---, collect terms of the same order in $\epsilon$ and obtain a hierarchy of equations $$\begin{aligned} \begin{dcases} \omega_{\gamma} \partial_{\tau} \psi_{2\lambda+1} (\tau,x) - \partial_{x} \sigma_{2\lambda+1} (\tau,x)= \sum_{\substack{\mu,\nu=0 \\ \mu+\nu=\lambda \\ (\mu,\nu) \neq (\lambda,0) }}^{\lambda} \Big ( - \omega_{\gamma,2\nu} \partial_{\tau} \psi_{2\mu+1} (\tau,x) + \partial_{x} \left( \xi_{2\nu}(\tau,x) \sigma_{2\mu+1}(\tau,x) \right) \Big ), \\ \omega_{\gamma} \partial_{\tau} \sigma_{2\lambda+1} (\tau,x) + { \savestack{\tmpbox}{\stretchto{ \scaleto{ \scalerel*[\widthof{\ensuremath{L}}]{\kern.1pt\mathchar"0362\kern.1pt} {\rule{0ex}{\textheight}} }{\textheight} }{2.4ex}} \stackon[-6.9pt]{L}{\tmpbox} } \left[ \psi_{2\lambda+1}(\tau,x) \right] = - \sum_{\substack{\mu,\nu=0 \\ \mu+\nu=\lambda \\ (\mu,\nu) \neq (\lambda,0) }}^{\lambda} \Big( \omega_{\gamma,2\nu}\partial_{\tau} \sigma_{2\mu+1}(\tau,x)+ { \savestack{\tmpbox}{\stretchto{ \scaleto{ 
\scalerel*[\widthof{\ensuremath{L}}]{\kern.1pt\mathchar"0362\kern.1pt} {\rule{0ex}{\textheight}} }{\textheight} }{2.4ex}} \stackon[-6.9pt]{L}{\tmpbox} } \left[ \xi_{2\nu}(\tau,x) \psi_{2\mu+1}(\tau,x) \right] \Big),\\ \xi_{2(\lambda+1)} (\tau,x) = \zeta_{2(\lambda+1)} (\tau,x) - \frac{\cos^3(x)}{\sin(x)} \sum_{\substack{\mu,\nu=0 \\ \mu+\nu=\lambda }}^{\lambda} \int_{0}^{x} r_{2(\mu+1)}(\tau,y) \zeta_{2\nu}(\tau,y) \tan^2(y) dy,\\ \zeta_{2(\lambda+1)} (\tau,x) = \sum_{\substack{\mu,\nu=0 \\ \mu+\nu=\lambda }}^{\lambda} \int_{0}^{x} r_{2(\mu+1)}(\tau,y) \zeta_{2\nu}(\tau,y)\sin(y)\cos(y)dy \end{dcases}\end{aligned}$$ for all $\lambda=0,1,2,\dots$, together with $$\begin{aligned} \omega_{\gamma,0} = \omega_{\gamma}, \quad \xi_{0}(\tau,x) = 1,\quad \zeta_{0}(\tau,x) = 1, \end{aligned}$$ and $$\begin{aligned} \begin{cases} \omega_{\gamma} \partial_{\tau} \psi_{1}(\tau,x) - \partial_{x} \sigma_{1} (\tau,x) = 0, \\ \omega_{\gamma} \partial_{\tau} \sigma_{1}(\tau,x)+ { \savestack{\tmpbox}{\stretchto{ \scaleto{ \scalerel*[\widthof{\ensuremath{L}}]{\kern.1pt\mathchar"0362\kern.1pt} {\rule{0ex}{\textheight}} }{\textheight} }{2.4ex}} \stackon[-6.9pt]{L}{\tmpbox} } \left[ \psi_{1}(\tau,x) \right]=0. \end{cases}\end{aligned}$$ From the set of all eigenfunctions $\{e_{i}\}_{i=0}^{\infty}$ of the linearized operator, we choose a dominant mode $e_{\gamma}$ for some $\gamma \in \{0,1,2,\dots \}$ and pick $$\begin{aligned} \begin{cases} \psi_{1} (\tau,x) = \cos(\tau) e^{\prime}_{\gamma}(x),\\ \sigma_{1} (\tau,x) = -\omega_{\gamma} \sin(\tau) e_{\gamma}(x).
\end{cases}\end{aligned}$$ Now, for each $\lambda=0,1,2,\dots$, we expand the coefficients $\psi_{2\lambda+1},\sigma_{2\lambda+1},\xi_{2\lambda},\zeta_{2\lambda}$ in terms of the eigenfunctions of the linearized operator, namely $$\begin{aligned} & \psi_{2\lambda+1}(\tau,x)=\sum_{i=0}^{\infty} f_{2\lambda+1}^{(i)}(\tau) \frac{e^{\prime}_{i}(x)}{\omega_{i}}, \quad \sigma_{2\lambda+1}(\tau,x)=\sum_{i=0}^{\infty} g_{2\lambda+1}^{(i)} (\tau) e_{i}(x), \\ & \xi_{2\lambda}(\tau,x)=\sum_{i=0}^{\infty} p_{2\lambda}^{(i)}(\tau) e_{i}(x), \quad \zeta_{2\lambda}(\tau,x)=\sum_{i=0}^{\infty} q_{2\lambda}^{(i)} (\tau) e_{i}(x),\end{aligned}$$ substitute these expressions into the recurrence relations above, take the inner product $(\cdot|e^{\prime}_{m})$ of the first equation and $(\cdot|e_{m})$ of all the other equations, and use the orthogonality properties $(e^{\prime}_{n}|e^{\prime}_{m})=\omega_{n}^2 \delta_{nm}$, $(e_{n}|e_{m})=\delta_{nm}$. Using the notation $$\begin{aligned} { \accentset{\mbox{\large\bfseries .}}{f}}(\tau) = \frac{d f(\tau)}{d \tau},\quad { \accentset{\mbox{\large\bfseries .\hspace{-0.25ex}.}}{f}} (\tau) = \frac{d^2 f(\tau)}{d \tau^2},\end{aligned}$$ we find $$\begin{aligned} & \omega_{\gamma} { \accentset{\mbox{\large\bfseries .}}{f}}_{2\lambda+1}^{(m)} (\tau) = \omega_{m} g_{2\lambda+1}^{(m)} (\tau) +\sum_{\substack{\mu,\nu=0 \\ \mu+\nu=\lambda \\ (\mu,\nu) \neq (\lambda,0) }}^{\lambda} \left( -\omega_{\gamma,2\nu} { \accentset{\mbox{\large\bfseries .}}{f}}_{2\mu+1}^{(m)}(\tau)+ \omega_{m} \sum_{i,j=0}^{\infty} C_{ij}^{(m)} p_{2\nu}^{(i)}(\tau) g_{2\mu+1}^{(j)}(\tau) \right), \\ & \omega_{\gamma} { \accentset{\mbox{\large\bfseries .}}{g}}_{2\lambda+1}^{(m)} (\tau) = - \omega_{m} f_{2\lambda+1}^{(m)} (\tau) - \sum_{\substack{\mu,\nu=0 \\ \mu+\nu=\lambda \\ (\mu,\nu) \neq (\lambda,0) }}^{\lambda} \left( \omega_{\gamma,2\nu} { \accentset{\mbox{\large\bfseries .}}{g}}_{2\mu+1}^{(m)}(\tau) + \omega_{m} \sum_{i,j=0}^{\infty} \overline{C}_{ij}^{(m)}
p_{2\nu}^{(i)}(\tau) f_{2\mu+1}^{(j)}(\tau) \right), \\ & p_{2(\lambda+1)}^{(m)}(\tau) = \sum_{\substack{\rho,k,\nu=0 \\ \rho+k+\nu=\lambda }}^{\lambda} \sum_{i,j,l=0}^{\infty} \Bigg( \widetilde{A}_{ijl}^{(m)} f_{2\rho+1}^{(i)}(\tau)f_{2k+1}^{(j)}(\tau)+\widetilde{B}_{ijl}^{(m)} g_{2\rho+1}^{(i)}(\tau)g_{2k+1}^{(j)}(\tau) \Bigg) q_{2\nu}^{(l)}(\tau),\nonumber \\ & q_{2(\lambda+1)}^{(m)}(\tau) = \sum_{\substack{\rho,k,\nu=0 \\ \rho+k+\nu=\lambda }}^{\lambda} \sum_{i,j,l=0}^{\infty} \Bigg( \frac{\overline{A}_{ijl}^{(m)}}{\omega_{m}} f_{2\rho+1}^{(i)}(\tau)f_{2k+1}^{(j)}(\tau)+ \frac{\overline{B}_{ijl}^{(m)}}{\omega_{m}} g_{2\rho+1}^{(i)}(\tau)g_{2k+1}^{(j)}(\tau) \Bigg) q_{2\nu}^{(l)}(\tau),\nonumber\end{aligned}$$ where all the interactions with respect to the spatial variable $x \in \left[0,\frac{\pi}{2} \right]$ are contained in the following Fourier constants $$\begin{aligned} C_{ij}^{(m)} & := \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{m}(x) \tan^2(x) dx, \\ \overline{C}_{ij}^{(m)} &:= \int_{0}^{\frac{\pi}{2}} e_{i}(x) \frac{e_{j}^{\prime}(x)}{\omega_{j}} \frac{e_{m}^{\prime}(x)}{\omega_{m}} \tan^2(x) dx, \\ \overline{A}_{ijl}^{(m)} &:= \int_{0}^{\frac{\pi}{2}} \frac{ e_{i}^{\prime}(x)}{\omega_{i}} \frac{ e_{j}^{\prime}(x)}{\omega_{j}} e_{l}(x) \frac{ e_{m}^{\prime}(x)}{\omega_{m}} \frac{\sin^3(x)}{\cos(x)} dx, \\ \overline{B}_{ijl}^{(m)} &:= \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{l}(x) \frac{ e_{m}^{\prime}(x)}{\omega_{m}} \frac{\sin^3(x)}{\cos(x)} dx, \\ \widetilde{A}_{ijl}^{(m)} &:= \frac{\overline{A}_{ijl}^{(m)}}{\omega_{m}} - \int_{0}^{\frac{\pi}{2}} \frac{ e_{i}^{\prime}(x)}{\omega_{i}} \frac{e_{j}^{\prime}(x)}{\omega_{j}} e_{l}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{m}(y) \sin(y)\cos(y) dy \right) \tan^2(x)dx, \\ \widetilde{B}_{ijl}^{(m)} &:= \frac{\overline{B}_{ijl}^{(m)}}{\omega_{m}} - \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{l}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{m}(y) \sin(y)\cos(y) dy \right) \tan^2(x)dx.\end{aligned}$$ We also
find $$\begin{aligned} f_{1}^{(m)} (\tau) =\omega_{\gamma} \cos(\tau) \delta_{\gamma}^{m}, \quad g_{1}^{(m)} (\tau) = - \omega_{\gamma} \sin(\tau) \delta_{\gamma}^{m}, \quad p_{0}^{(m)} (\tau) = q_{0}^{(m)} (\tau) = (1|e_{m})\end{aligned}$$ and use Lemma \[ClosedformulasFore\] to compute $$\begin{aligned} (1|e_{m}):= \int_{0}^{\frac{\pi}{2}} e_{m}(x) \tan^2(x) dx = \frac{2}{\sqrt{\pi}} \frac{(-1)^m}{\omega_{m}} \sqrt{\omega_{m}^2-1},\end{aligned}$$ for all $m=0,1,2,\dots$. In addition, we differentiate the first equation with respect to $\tau$ and use the second to obtain the harmonic oscillator equation $$\begin{aligned} \label{HarmonicOscilator} { \accentset{\mbox{\large\bfseries .\hspace{-0.25ex}.}}{f}}_{2\lambda+1}^{(m)} (\tau) + \left(\frac{\omega_{m}}{\omega_{\gamma}} \right)^2 f_{2\lambda+1}^{(m)} (\tau) = S_{2\lambda+1}^{(m)}(\tau)\end{aligned}$$ where the source term is given by $$\begin{aligned} S_{2\lambda+1}^{(m)}(\tau) &:=\frac{ \omega_{m} }{ \omega_{\gamma}} \sum_{\substack{\mu,\nu=0 \\ \mu+\nu=\lambda \\ (\mu,\nu) \neq (\lambda,0) }}^{\lambda} \Bigg[ -\frac{\omega_{\gamma,2\nu}}{\omega_{\gamma}} \left( { \accentset{\mbox{\large\bfseries .}}{g}}_{2\mu+1}^{(m)}(\tau) +\frac{ \omega_{\gamma} }{ \omega_{m}} { \accentset{\mbox{\large\bfseries .\hspace{-0.25ex}.}}{f}}_{2\mu+1}^{(m)}(\tau) \right) \\ & + \sum_{i,j=0}^{\infty} \left( C_{ij}^{(m)} \frac{d}{d \tau} \left(p_{2\nu}^{(i)}(\tau) g_{2\mu+1}^{(j)}(\tau) \right) - \frac{\omega_{m}}{\omega_{\gamma}} \overline{C}_{ij}^{(m)} p_{2\nu}^{(i)}(\tau) f_{2\mu+1}^{(j)}(\tau) \right) \Bigg].\end{aligned}$$ Finally, we make use of the variation of constants formula to solve and find $$\begin{aligned} f_{2\lambda+1}^{(m)} (\tau) &= f_{2\lambda+1}^{(m)} (0) \cos \left( \frac{\omega_{m}}{\omega_{\gamma}} \tau \right) + \frac{\omega_{\gamma}}{\omega_{m}} { \accentset{\mbox{\large\bfseries .}}{f}}_{2\lambda+1}^{(m)} (0) \sin \left( \frac{\omega_{m}}{\omega_{\gamma}}\tau \right) + \frac{\omega_{\gamma}}{\omega_{m}}
\int_{0}^{\tau} \sin \left( \frac{\omega_{m}}{\omega_{\gamma}} (\tau-s) \right) S_{2\lambda+1}^{(m)}(s) ds.\end{aligned}$$ In conclusion, we get for all $m=0,1,2,\dots$ the following recurrence relations. For all $\lambda=1,2,3,\dots$, $$\begin{aligned} & f_{1}^{(m)} (\tau) =\omega_{\gamma} \cos(\tau) \delta_{\gamma}^{m}, \\ & f_{2\lambda+1}^{(m)} (\tau) = f_{2\lambda+1}^{(m)} (0) \cos \left( \frac{\omega_{m}}{\omega_{\gamma}} \tau \right) + \frac{\omega_{\gamma}}{\omega_{m}} { \accentset{\mbox{\large\bfseries .}}{f}}_{2\lambda+1}^{(m)} (0) \sin \left( \frac{\omega_{m}}{\omega_{\gamma}}\tau \right) + \frac{\omega_{\gamma}}{\omega_{m}} \int_{0}^{\tau} \sin \left( \frac{\omega_{m}}{\omega_{\gamma}} (\tau-s) \right) S_{2\lambda+1}^{(m)}(s) ds, \\ \\ & S_{2\lambda+1}^{(m)}(\tau) =\frac{ \omega_{m} }{ \omega_{\gamma}} \sum_{\substack{\mu,\nu=0 \\ \mu+\nu=\lambda \\ (\mu,\nu) \neq (\lambda,0) }}^{\lambda} \Bigg[ -\frac{\omega_{\gamma,2\nu}}{\omega_{\gamma}} \left( { \accentset{\mbox{\large\bfseries .}}{g}}_{2\mu+1}^{(m)}(\tau) +\frac{ \omega_{\gamma} }{ \omega_{m}} { \accentset{\mbox{\large\bfseries .\hspace{-0.25ex}.}}{f}}_{2\mu+1}^{(m)}(\tau) \right) \\ &\qquad \qquad \qquad \qquad + \sum_{i,j=0}^{\infty} \left( C_{ij}^{(m)} \frac{d}{d \tau} \left(p_{2\nu}^{(i)}(\tau) g_{2\mu+1}^{(j)}(\tau) \right) - \frac{\omega_{m}}{\omega_{\gamma}} \overline{C}_{ij}^{(m)} p_{2\nu}^{(i)}(\tau) f_{2\mu+1}^{(j)}(\tau) \right) \Bigg] \\ \\ &g_{1}^{(m)} (\tau) = - \omega_{\gamma} \sin(\tau) \delta_{\gamma}^{m}, \\ &g_{2\lambda+1}^{(m)} (\tau) = \frac{\omega_{\gamma}}{\omega_{m}} { \accentset{\mbox{\large\bfseries .}}{f}}_{2\lambda+1}^{(m)} (\tau) +\sum_{\substack{\mu,\nu=0 \\ \mu+\nu=\lambda \\ (\mu,\nu) \neq (\lambda,0)}}^{\lambda} \Bigg[ \frac{ \omega_{\gamma,2\nu}}{\omega_{m}} { \accentset{\mbox{\large\bfseries .}}{f}}_{2\mu+1}^{(m)}(\tau) - \sum_{i,j=0}^{\infty} C_{ij}^{(m)} p_{2\nu}^{(i)}(\tau) g_{2\mu+1}^{(j)}(\tau)\Bigg], \\ \\ & p_{0}^{(m)}(\tau) = \frac{2}{\sqrt{\pi}} 
\frac{(-1)^m}{\omega_{m}} \sqrt{\omega_{m}^2-1}, \\ & p_{2(\lambda+1)}^{(m)}(\tau) = \sum_{\substack{\rho,k,\nu=0 \\ \rho+k+\nu=\lambda \\ }}^{\lambda} \sum_{i,j,l=0}^{\infty} \Bigg[ \widetilde{A}_{ijl}^{(m)} f_{2\rho+1}^{(i)}(\tau)f_{2k+1}^{(j)}(\tau)+ \widetilde{B}_{ijl}^{(m)} g_{2\rho+1}^{(i)}(\tau)g_{2k+1}^{(j)}(\tau) \Bigg] q_{2\nu}^{(l)}(\tau), \\ \\ &q_{0}^{(m)}(\tau) = \frac{2}{\sqrt{\pi}} \frac{(-1)^m}{\omega_{m}} \sqrt{\omega_{m}^2-1}, \\ &q_{2(\lambda+1)}^{(m)}(\tau) = \sum_{\substack{\rho,k,\nu=0 \\ \rho+k+\nu=\lambda }}^{\lambda} \sum_{i,j,l=0}^{\infty} \Bigg[ \frac{\overline{A}_{ijl}^{(m)}}{\omega_{m}} f_{2\rho+1}^{(i)}(\tau)f_{2k+1}^{(j)}(\tau)+ \frac{\overline{B}_{ijl}^{(m)}}{\omega_{m}} g_{2\rho+1}^{(i)}(\tau)g_{2k+1}^{(j)}(\tau) \Bigg] q_{2\nu}^{(l)}(\tau). \end{aligned}$$ Time periodic solutions and secular terms ----------------------------------------- As pointed out in [@13033186], non-periodic terms appear naturally when the source term $S_{2\lambda+1}^{(m)}(\tau)$ has terms of the form $\cos ( \frac{\omega_{m}}{\omega_{\gamma}} \tau )$ or $\sin ( \frac{\omega_{m}}{\omega_{\gamma}} \tau )$ in its Fourier expansion. Indeed, we assume that, for some $\lambda=1,2,3,\dots$, $$\begin{aligned} S_{2\lambda+1}^{(m)}(\tau) = \sum_{a \in I_{\lambda}} S_{1,2\lambda+1,a}^{(m)} \cos(a \tau)+\sum_{b \in J_{\lambda}} S_{2,2\lambda+1,b}^{(m)} \sin(b \tau),\end{aligned}$$ and in addition there exists an index $m =0,1,2,\dots$ such that $$\begin{aligned} \frac{\omega_{m}}{\omega_{\gamma}}:=a \in I_{\lambda}.
\end{aligned}$$ Then, the integral $$\begin{aligned} \int_{0}^{\tau} \sin \left( \frac{\omega_{m}}{\omega_{\gamma}} (\tau-s) \right) S_{2\lambda+1}^{(m)}(s) ds\end{aligned}$$ produces a non-periodic term since $$\begin{aligned} \int_{0}^{\tau} \sin \left( \frac{\omega_{m}}{\omega_{\gamma}} (\tau-s) \right) \cos \left( a s \right) ds & = \int_{0}^{\tau}\sin \left( \frac{\omega_{m}}{\omega_{\gamma}} (\tau-s) \right) \cos \left( \frac{\omega_{m}}{\omega_{\gamma}} s \right) ds = \frac{1}{2} \tau \sin \left( \frac{\omega_{m}}{\omega_{\gamma}} \tau \right).\end{aligned}$$ Such secular terms are also produced when there exists an $m =0,1,2,\dots$ such that $$\begin{aligned} \frac{\omega_{m}}{\omega_{\gamma}}:=b \in J_{\lambda}. \end{aligned}$$ In other words, $$\begin{aligned} \forall \lambda =0,1,2,\dots,~ \exists \text{~a set~} \mathcal{N}_{\lambda}:~\forall m \in \mathcal{N}_{\lambda}, ~f_{2\lambda+1}^{(m)}(\tau) \text{~contains non-periodic terms}. \end{aligned}$$ Maliborski and Rostworowski [@13033186] were able to numerically cancel these secular terms by prescribing the initial data $(f_{2\lambda+1}^{(m)}(0),{ \accentset{\mbox{\large\bfseries .}}{f}}_{2\lambda+1}^{(m)}(0))$. To explain their approach, we take $f_{1}^{(\gamma)}(0)=1$ and $f_{2\lambda+1}^{(\gamma)}(0)=0$ for $\lambda=1,2,3,\dots$. First, they choose ${ \accentset{\mbox{\large\bfseries .}}{f}}_{2\lambda+1}^{(m)}(0)=0$ for all $\lambda=0,1,2,\dots$ and all $m=0,1,2,\dots$ to ensure that the source term $ S_{2\lambda+1}^{(m)}(\tau)$ is a series only of cosines. Then, they observed that the fixed index $\gamma$ belongs to $ \mathcal{N}_{\lambda}$ for all $\lambda=0,1,2,\dots$ and there is only one secular term in $f_{2\lambda+1}^{(\gamma)}(\tau)$, which can be removed by choosing the frequency shift $\omega_{\gamma,2(\lambda-1)}$.
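The resonant integral above can be checked numerically. The following is a minimal sketch (the quadrature routine, helper names, and the sampled values of $a=\omega_{m}/\omega_{\gamma}$ and $\tau$ are our own illustrative choices):

```python
import math

def simpson(f, a, b, n=4000):
    # Composite Simpson rule on [a, b] with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def resonant_integral(a, tau):
    # I(tau) = int_0^tau sin(a*(tau - s)) * cos(a*s) ds
    return simpson(lambda s: math.sin(a * (tau - s)) * math.cos(a * s), 0.0, tau)

# The integral grows linearly in tau: I(tau) = (tau/2) * sin(a*tau).
for a in (1.0, 3.0, 7.0):          # illustrative resonant frequencies
    for tau in (0.5, 2.0, 10.0):
        secular = 0.5 * tau * math.sin(a * tau)
        assert abs(resonant_integral(a, tau) - secular) < 1e-6
```

The linear factor $\tau$ in front of $\sin(\frac{\omega_{m}}{\omega_{\gamma}}\tau)$ is exactly the secular growth discussed in the text.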
Furthermore, for all $m \in \mathcal{N}_{\lambda} \setminus \{\gamma\}$, some secular terms cancel by the structure of the equations, others are canceled by choosing some of the initial data, and the remaining initial data stay free variables at this stage. These free variables, together with $\omega_{\gamma,2\lambda}$, are chosen to cancel the secular terms in $f_{2(\lambda+1)+1}^{(m)}(\tau)$. For more details see [@13033186]. However, there is no proof based on rigorous arguments ensuring that this procedure works for all $\lambda$. Choice of the initial data -------------------------- For example, we fix $\gamma=0$ and choose $$\begin{aligned} { \accentset{\mbox{\large\bfseries .}}{f}}_{2\lambda+1}^{(m)}(0) = 0,\quad \forall \lambda \geq 0, \quad \forall m \geq 0.\end{aligned}$$ First, we use the recurrence relation above and find periodic expressions for $p_{2}^{(m)}(\tau)$ and $q_{2}^{(m)}(\tau)$ due to the periodicity of $f_{1}^{(m)}(\tau)$ and $g_{1}^{(m)}(\tau)$. Second, we compute $$\begin{aligned} S_{3}^{(m)}(\tau)= A_{3}^{(m)} \cos(\tau)+B_{3}^{(m)} \cos(3\tau),\end{aligned}$$ for some sequences $\{A_{3}^{(m)}\}_{m=0,1,\dots}$ and $\{B_{3}^{(m)}\}_{m=0,1,\dots}$. Then, the resonance condition $$\begin{aligned} \frac{\omega_{m}}{\omega_{0}}=\frac{3+2m}{3}=1+\frac{2}{3} m \in \{1,3 \}\end{aligned}$$ has two solutions $$\begin{aligned} m \in \mathcal{N}_{1}:= \{0,3\}. \end{aligned}$$ Based on the discussion above, we get two secular terms in the list $\{f_{3}^{(m)}(\tau): m=0,1,2,\dots\}$, one for $m=0$ and one for $m=3$. We see that the secular term for $m=0$ can be canceled by choosing the frequency shift $\theta_{2}$, whereas the secular term for $m=3$ cancels by the structure of the equations, meaning $B_{3}^{(3)}=0$ without any choice of the initial data $\{f_{3}^{(m)}(0): m=0,1,2,\dots\}$. Hence, all these initial data are free variables at this stage (i.e. for $\lambda=1$) and will be chosen later to cancel all the secular terms in $f_{5}^{(m)}(\tau)$.
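The resonant indices above can be enumerated mechanically. The sketch below uses the eigenfrequencies $\omega_{m}=2m+3$ (so that $\omega_{m}/\omega_{0}=1+\tfrac{2}{3}m$, as in the text) and the cosine frequencies $\{1,3\}$ of $S_{3}^{(m)}$; the function names are ours:

```python
from fractions import Fraction

# Eigenfrequencies omega_m = 2m + 3, so omega_0 = 3.
def omega(m):
    return 2 * m + 3

# S_3^{(m)} contains cos(tau) and cos(3*tau); resonance occurs whenever
# omega_m / omega_0 equals one of the source frequencies {1, 3}.
source_freqs = {Fraction(1), Fraction(3)}
resonant = [m for m in range(50)
            if Fraction(omega(m), omega(0)) in source_freqs]
assert resonant == [0, 3]
```

Exact rational arithmetic (`Fraction`) avoids spurious matches that floating-point comparison of $\omega_{m}/\omega_{0}$ could produce.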
Specifically, we get $$\begin{aligned} & f_{3}^{(0)}(\tau)= \left( \frac{765}{128 \pi}+f_{3}^{(0)}(0) \right) \cos(\tau) - \frac{765}{128 \pi} \cos(3\tau) + \left(\theta_{2} - \frac{153}{4 \pi} \right) \tau \sin(\tau), \\ & f_{3}^{(1)}(\tau)= \frac{6183}{256 \pi}\sqrt{3} \cos(\tau) + \frac{765}{256 \pi}\sqrt{3} \cos(3\tau) + \left(f_{3}^{(1)}(0) - \frac{1737}{64 \pi}\sqrt{3} \right) \cos \left(\frac{5}{3}\tau \right), \\ & f_{3}^{(2)}(\tau)= \frac{3717}{3200 \pi}\sqrt{\frac{3}{2}} \cos(\tau) + \frac{441}{128 \pi}\sqrt{\frac{3}{2}} \cos(3\tau) + \left(f_{3}^{(2)}(0) - \frac{7371}{1600 \pi}\sqrt{\frac{3}{2}} \right) \cos \left(\frac{7}{3}\tau \right), \\ & f_{3}^{(3)}(\tau)= -\frac{14607}{4480 \pi}\sqrt{\frac{1}{10}} \cos(\tau) + \frac{14607}{4480 \pi}\sqrt{\frac{1}{10}} \cos(3\tau) + f_{3}^{(3)}(0) \cos \left(3\tau \right), \\ & f_{3}^{(4)}(\tau)= \frac{9999}{62720 \pi}\sqrt{\frac{3}{5}} \cos(\tau) + \frac{99}{256 \pi}\sqrt{\frac{3}{5}} \cos(3\tau) + \left(f_{3}^{(4)}(0) - \frac{17127}{31360 \pi}\sqrt{\frac{3}{5}} \right) \cos \left(\frac{11}{3}\tau \right), \\ & f_{3}^{(5)}(\tau)= -\frac{507}{11200 \pi}\sqrt{\frac{3}{7}} \cos(\tau) + \left(f_{3}^{(5)}(0) + \frac{507}{11200 \pi}\sqrt{\frac{3}{7}} \right) \cos \left(\frac{13}{3}\tau \right), \\ & f_{3}^{(6)}(\tau)= \frac{31}{896 \pi}\sqrt{\frac{1}{7}} \cos(\tau) + \left(f_{3}^{(6)}(0) - \frac{31}{896 \pi}\sqrt{\frac{1}{7}} \right) \cos \left(5\tau \right), \\ & f_{3}^{(7)}(\tau)= -\frac{11271}{1724800 \pi} \cos(\tau) + \left(f_{3}^{(7)}(0) + \frac{11271}{1724800 \pi}\right) \cos \left(\frac{17}{3}\tau \right), \\ & f_{3}^{(8)}(\tau)= \frac{1083}{135520 \pi}\sqrt{\frac{1}{5}} \cos(\tau) + \left(f_{3}^{(8)}(0) - \frac{1083}{135520 \pi}\sqrt{\frac{1}{5}} \right) \cos \left(\frac{19}{3}\tau \right), \\ & f_{3}^{(9)}(\tau)= -\frac{1421}{91520 \pi}\sqrt{\frac{1}{55}} \cos(\tau) + \left(f_{3}^{(9)}(0) + \frac{1421}{91520 \pi}\sqrt{\frac{1}{55}}\right) \cos \left(7\tau \right).\end{aligned}$$ We choose $$\begin{aligned}
\theta_{2}=\frac{153}{4\pi}\end{aligned}$$ to ensure the periodicity of $f_{3}^{(0)}(\tau)$. Once all secular terms in $f_{3}^{(m)}(\tau)$ are removed, the periodic expression for $f_{3}^{(m)}(\tau)$ implies a periodic expression also for $g_{3}^{(m)}(\tau)$. Growth and decay of the Fourier constants ----------------------------------------- In this section, we focus on the asymptotic behaviour of all the Fourier constants that appear in this approach. We shall use the notation $$\begin{aligned} \sum_{\pm} f(a\pm b \pm c) = f(a + b + c) + f(a + b - c) + f(a - b + c) + f(a - b - c),\end{aligned}$$ that is, summation with respect to all possible combinations of plus and minus. Furthermore, expressions like $\omega_{i} \pm \omega_{j} \pm \omega_{m}$ stand not only for $\omega_{i} + \omega_{j} + \omega_{m}$ and $\omega_{i} - \omega_{j} - \omega_{m}$ but also for $\omega_{i} + \omega_{j} - \omega_{m}$ and $\omega_{i} - \omega_{j} + \omega_{m}$; that is, all possible combinations of plus and minus are considered. Specifically, we prove the following result, whose proof is based on the leading-order terms (Remark \[lot\]) together with the asymptotic behaviour of the oscillatory integrals (Lemma \[OscillatoryIntegrals\]), the orthogonality properties (Lemma \[ClosedformulasFore\]) and the $L^{\infty}$-bounds (Lemma \[Linftyboundse\]). \[Result1\] Let $N\in \mathbb{N}$. The following growth and decay estimates hold.
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Constant & $\exists ~\omega_{i} \pm \omega_{j} \pm \omega_{m} \longarrownot\longrightarrow \infty $ & $\forall~ \omega_{i} \pm \omega_{j} \pm \omega_{m} \longrightarrow \infty $ \\
\hline
$C_{ij}^{(m)}$ & $ \mathcal{O} \left( \omega_{i} \right)$ & $\displaystyle \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{m})^{2}} \right) $ \\
\hline
$\overline{C}_{ij}^{(m)}$ & $\mathcal{O} \left( \omega_{i} \right)$ & $\displaystyle \frac{4}{\sqrt{\pi}} + \sum_{\pm}\mathcal{O} \left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{m} )^2} \right) $ \\
\hline
\end{tabular}
\end{center}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Constant & $\exists ~\omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \longarrownot\longrightarrow \infty $ & $ \forall~ \omega_{i} \pm \omega_{j} \pm \omega_{l}\pm \omega_{m} \longrightarrow \infty $ \\
\hline
$\overline{A}_{ijl}^{(m)},~\overline{B}_{ijl}^{(m)}$ & $\displaystyle \mathcal{O} \left(\omega_{l}\right) $ & $\displaystyle \sum_{\pm} \mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{l}\pm \omega_{m})^{N}} \right) $ \\
\hline
$A_{ijl}^{(m)},~B_{ijl}^{(m)}$ & $\displaystyle \mathcal{O} \left( \frac{\omega_{l}}{ \omega_{m}} \right)$ & $\displaystyle \frac{1}{\omega_{m}} \sum_{\pm} \mathcal{O} \left( \frac{1}{\left( \omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \right)^{N}} \right) $ \\
\hline
\end{tabular}
\end{center}
First, observe that $\omega_{i} \pm \omega_{j} \pm \omega_{m}$ are all odd, namely $$\begin{aligned} \omega_{i} - \omega_{j} + \omega_{m} & = 2(i-j+m+1)+1, \\ \omega_{i} - \omega_{j} - \omega_{m} & = 2(i-j-m-2)+1, \\ \omega_{i} + \omega_{j} + \omega_{m} & = 2(i+j+m+4)+1, \\ \omega_{i} + \omega_{j} - \omega_{m} & = 2(i+j-m+1)+1.\end{aligned}$$ For large values of $i,j,m$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{m} \longrightarrow \infty$, we obtain $$\begin{aligned} C_{ij}^{(m)} & := \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{m}(x) \tan^2(x) dx \\ & \simeq \left( \frac{2}{\sqrt{\pi}} \right)^3 \int_{0}^{\frac{\pi}{2}} \frac{\sin(\omega_{i}x)\sin(\omega_{j}x)\sin(\omega_{m}x)}{\tan(x)} dx \\ & =\frac{1}{4} \left( \frac{2}{\sqrt{\pi}} \right)^3 \Bigg( \int_{0}^{\frac{\pi}{2}}
\frac{\sin((\omega_{i}-\omega_{j}+\omega_{m})x)}{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \frac{\sin((\omega_{i}-\omega_{j}-\omega_{m})x)}{\tan(x)} dx \\ &\quad \quad \quad\quad \quad\quad - \int_{0}^{\frac{\pi}{2}} \frac{\sin((\omega_{i}+\omega_{j}+\omega_{m})x)}{\tan(x)} dx + \int_{0}^{\frac{\pi}{2}} \frac{\sin((\omega_{i}+\omega_{j}-\omega_{m})x)}{\tan(x)} dx \Bigg) \\ & =\frac{1}{4} \left( \frac{2}{\sqrt{\pi}} \right)^3 \left( \frac{\pi}{2}-\frac{\pi}{2}-\frac{\pi}{2}+\frac{\pi}{2} + \sum_{\pm} \mathcal{O} \left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{m})^2} \right) \right) \\ & =\sum_{\pm} \mathcal{O} \left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{m})^2} \right). \end{aligned}$$ On the other hand, for large values of $i,j,m$ such that some $\omega_{i} \pm \omega_{j} \pm \omega_{m} \longarrownot\longrightarrow \infty$, Hölder’s inequality implies $$\begin{aligned} \left| C_{ij}^{(m)} \right| & =\left| \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{m}(x) \tan^2(x) dx\right| \\ & \leq \left\| e_{i} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| e_{j}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| e_{m}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim \omega_{i}.\end{aligned}$$ Similarly, for large values of $i,j,m$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{m} \longrightarrow \infty$, we obtain $$\begin{aligned} \overline{C}_{ij}^{(m)} & := \int_{0}^{\frac{\pi}{2}} e_{i}(x) \frac{e_{j}^{\prime}(x)}{\omega_{j}} \frac{e_{m}^{\prime}(x)}{\omega_{m}}\tan^2(x) dx \\ & \simeq \left( \frac{2}{\sqrt{\pi}} \right)^3 \int_{0}^{\frac{\pi}{2}} \frac{\sin(\omega_{i}x)\cos(\omega_{j}x)\cos(\omega_{m}x)}{\tan(x)} dx \\ & =\frac{1}{4} \left( \frac{2}{\sqrt{\pi}} \right)^3 \Bigg( \int_{0}^{\frac{\pi}{2}} \frac{\sin((\omega_{i}+\omega_{j}+\omega_{m})x)}{\tan(x)} dx + \int_{0}^{\frac{\pi}{2}} \frac{\sin((\omega_{i}+\omega_{j}-\omega_{m})x)}{\tan(x)} dx \\ &\quad \quad \quad\quad \quad\quad +
\int_{0}^{\frac{\pi}{2}} \frac{\sin((\omega_{i}-\omega_{j}+\omega_{m})x)}{\tan(x)} dx + \int_{0}^{\frac{\pi}{2}} \frac{\sin((\omega_{i}-\omega_{j}-\omega_{m})x)}{\tan(x)} dx \Bigg) \\ & =\frac{1}{4} \left( \frac{2}{\sqrt{\pi}} \right)^3 \left( \frac{\pi}{2}+\frac{\pi}{2}+\frac{\pi}{2}+\frac{\pi}{2} + \sum_{\pm} \mathcal{O} \left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{m})^2} \right) \right) \\ & =\frac{4}{\sqrt{\pi}} + \sum_{\pm} \mathcal{O} \left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{m})^2} \right). \end{aligned}$$ On the other hand, for large values of $i,j,m$ such that some $\omega_{i} \pm \omega_{j} \pm \omega_{m} \longarrownot\longrightarrow \infty$, Hölder’s inequality implies $$\begin{aligned} \left|\overline{C}_{ij}^{(m)}\right| & =\left| \int_{0}^{\frac{\pi}{2}} e_{i}(x) \frac{e_{j}^{\prime}(x)}{\omega_{j}} \frac{e_{m}^{\prime}(x)}{\omega_{m}}\tan^2(x) dx \right| \\ & \leq \left\| e_{i} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e_{j}^{\prime}(x)}{\omega_{j}}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e_{m}^{\prime}(x)}{\omega_{m}}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim \omega_{i}.\end{aligned}$$ Next, observe that $\omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m}$ are all even, $$\begin{aligned} \omega_{i} - \omega_{j} + \omega_{l} - \omega_{m} & = 2 (i - j + l - m),\\ \omega_{i} + \omega_{j} - \omega_{l} - \omega_{m} & = 2 (i + j - l - m), \\ \omega_{i} - \omega_{j} - \omega_{l} - \omega_{m}& = 2 (-3 + i - j - l - m), \\ \omega_{i} + \omega_{j} + \omega_{l} - \omega_{m} & = 2 (3 + i + j + l - m),\\ \omega_{i} - \omega_{j} - \omega_{l} + \omega_{m} & = 2 (i - j - l + m), \\ \omega_{i} + \omega_{j} + \omega_{l} + \omega_{m} & = 2 (6 + i + j + l + m), \\ \omega_{i} - \omega_{j} + \omega_{l} + \omega_{m} & = 2 (3 + i - j + l + m), \\ \omega_{i} + \omega_{j} - \omega_{l} + \omega_{m} & = 2 (3 + i + j - l + m).\end{aligned}$$ Now, for large values of $i,j,l,m$ and
in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \longrightarrow \infty$, we obtain $$\begin{aligned} \overline{A}_{ijl}^{(m)} &:= \int_{0}^{\frac{\pi}{2}} \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{e_{j}^{\prime}(x)}{\omega_{j}} e_{l}(x) \frac{e_{m}^{\prime}(x)}{\omega_{m}} \frac{\sin^3(x)}{\cos(x)} dx \\ & \simeq \left(\frac{2}{\sqrt{\pi}} \right)^4 \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \cos(\omega_{i}x) \cos(\omega_{j}x) \sin(\omega_{l}x) \cos(\omega_{m}x) }{\tan(x)} dx \\ & = \frac{1}{8}\left(\frac{2}{\sqrt{\pi}} \right)^4\Bigg( \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} + \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx \\ & + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} + \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \\ & - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} - \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} - \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \\ & + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} + \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} + \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \\ & - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} - \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} - \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \Bigg) \\ &=\frac{1}{8} \left(\frac{2}{\sqrt{\pi}} \right)^4 \left( 4 \left( \frac{\pi}{2} - \frac{\pi}{2} \right) + \sum_{\pm} \mathcal{O} \left( \frac{1}{\left( \omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \right)^{N}} \right) \right) 
\\ & = \sum_{\pm} \mathcal{O} \left( \frac{1}{\left( \omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \right)^{N}} \right), \end{aligned}$$ whereas, for large values of $i,j,l,m$ such that some $\omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \longarrownot\longrightarrow \infty$, Hölder’s inequality implies $$\begin{aligned} \left| \overline{A}_{ijl}^{(m)} \right| &= \left| \int_{0}^{\frac{\pi}{2}} \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{e_{j}^{\prime}(x)}{\omega_{j}} e_{l}(x) \frac{e_{m}^{\prime}(x)}{\omega_{m}} \frac{\sin^3(x)}{\cos(x)} dx\right| \\ &=\left| \int_{0}^{\frac{\pi}{2}} \frac{e_{i}^{\prime}(x)}{\omega_{i}}\tan(x) \cdot \frac{e_{j}^{\prime}(x)}{\omega_{j}}\tan(x)\cdot e_{l}(x)\cos^2(x) \cdot \frac{e_{m}^{\prime}(x)}{\omega_{m}}\tan(x) dx\right| \\ & \leq \left \| \frac{e_{i}^{\prime}}{\omega_{i}}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e_{j}^{\prime}}{\omega_{j}}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| e_{l} \cos^2 \right\|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e_{m}^{\prime}}{\omega_{m}} \tan \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim \omega_{l}.\end{aligned}$$ Furthermore, for large values of $i,j,l,m$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{l}\pm \omega_{m} \longrightarrow \infty$, $$\begin{aligned} \overline{B}_{ijl}^{(m)} &:= \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{l}(x) \frac{e_{m}^{\prime}(x)}{\omega_{m}} \frac{\sin^3(x)}{\cos(x)} dx \\ & \simeq \left(\frac{2}{\sqrt{\pi}} \right)^4 \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin(\omega_{i}x) \sin(\omega_{j}x) \sin(\omega_{l}x) \cos(\omega_{m}x) }{\tan(x)} dx \\ & =\frac{1}{8} \left(\frac{2}{\sqrt{\pi}} \right)^4\Bigg( \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} + \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx \\ &+ \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} + \omega_{l} -
\omega_{m} \right) x \right) }{\tan(x)} dx \\ & - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} - \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} - \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \\ & - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} + \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} + \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \\ & + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} - \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} - \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \Bigg) \\ &=\frac{1}{8} \left(\frac{2}{\sqrt{\pi}} \right)^4 \left( 4 \left( \frac{\pi}{2} - \frac{\pi}{2} \right) + \sum_{\pm} \mathcal{O} \left( \frac{1}{\left( \omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \right)^{N}} \right) \right) \\ & = \sum_{\pm} \mathcal{O} \left( \frac{1}{\left( \omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \right)^{N}} \right), \end{aligned}$$ whereas, for large values of $i,j,l,m$ such that some $\omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \longarrownot\longrightarrow \infty$, Hölder’s inequality implies $$\begin{aligned} \left| \overline{B}_{ijl}^{(m)} \right| &= \left| \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{l}(x) \frac{e_{m}^{\prime}(x)}{\omega_{m}} \frac{\sin^3(x)}{\cos(x)} dx\right| \\ &=\left| \int_{0}^{\frac{\pi}{2}} e_{i}(x)\tan(x) \cdot e_{j}(x) \tan(x)\cdot e_{l}(x)\cos^2(x) \cdot \frac{e_{m}^{\prime}(x)}{\omega_{m}}\tan(x) dx\right| \\ & \leq \left \| e_{i}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| e_{j}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left
\| e_{l} \cos^2 \right\|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e_{m}^{\prime}}{\omega_{m}} \tan \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim \omega_{l}.\end{aligned}$$ Next, we use once more Remark \[lot\] to compute $$\begin{aligned} & \int_{x}^{\frac{\pi}{2}} e_{m}(y) \cos(y) \sin(y) dy \simeq \frac{2}{\sqrt{\pi}} \int_{x}^{\frac{\pi}{2}} \sin(\omega_{m}y) \cos^2(y) dy \\ &= \frac{2}{\sqrt{\pi}} \left( \frac{\sin(\omega_{m}x)\sin(2x)}{\omega_{m}^2-4} -\frac{2\cos(\omega_{m}x)}{\omega_{m}(\omega_{m}^2-4)} + \frac{\omega_{m}\cos^2(x)\cos(\omega_{m}x)}{\omega_{m}^2-4} \right) \\ & \simeq \frac{2}{\sqrt{\pi}} \frac{1}{\omega_{m}} \cos^2(x)\cos(\omega_{m}x),\end{aligned}$$ for $m \longrightarrow \infty$. Hence, for large values of $i,j,l,m$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \longrightarrow \infty$, $$\begin{aligned} A_{ijl}^{(m)} &:= \int_{0}^{\frac{\pi}{2}} \frac{ e_{i}^{\prime}(x)}{\omega_{i}} \frac{e_{j}^{\prime}(x)}{\omega_{j}} e_{l}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{m}(y) \sin(y)\cos(y) dy \right) \tan^2(x)dx\\ & \simeq \left(\frac{2}{\sqrt{\pi}} \right)^4 \frac{1}{\omega_{m}} \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \cos(\omega_{i}x) \cos(\omega_{j}x) \sin(\omega_{l}x) \cos(\omega_{m}x) }{\tan(x)} dx \\ & = \frac{1}{8}\left(\frac{2}{\sqrt{\pi}} \right)^4 \frac{1}{\omega_{m}} \Bigg( \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} + \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx \\ &+ \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} + \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \\ & - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} - \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} - \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \\ & + 
\int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} + \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} + \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \\ & - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} - \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} - \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \Bigg) \\ &=\frac{1}{8\omega_{m}} \left(\frac{2}{\sqrt{\pi}} \right)^4 \left( 4 \left( \frac{\pi}{2} - \frac{\pi}{2} \right) + \sum_{\pm} \mathcal{O} \left( \frac{1}{\left( \omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \right)^{N}} \right) \right) \\ & =\frac{1}{\omega_{m}} \sum_{\pm} \mathcal{O} \left( \frac{1}{\left( \omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \right)^{N}} \right), \end{aligned}$$ whereas, for large values of $i,j,l,m$ such that some $\omega_{i} \pm \omega_{j}\pm \omega_{l} \pm \omega_{m} \longarrownot\longrightarrow \infty$, Hölder’s inequality implies $$\begin{aligned} \left| A_{ijl}^{(m)} \right| &= \left| \int_{0}^{\frac{\pi}{2}} \frac{ e_{i}^{\prime}(x)}{\omega_{i}} \frac{e_{j}^{\prime}(x)}{\omega_{j}} e_{l}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{m}(y) \sin(y)\cos(y) dy \right) \tan^2(x)dx\right| \\ &=\left| \int_{0}^{\frac{\pi}{2}} \frac{ e_{i}^{\prime}(x)}{\omega_{i}} \tan(x) \frac{e_{j}^{\prime}(x)}{\omega_{j}} \tan(x) e_{l}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{m}(y) \sin(y)\cos(y) dy \right) dx\right|\\ & \leq \left \| \frac{e_{i}^{\prime}}{\omega_{i}} \tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e_{j}^{\prime}}{\omega_{j}} \tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| e_{l} \right\|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| \int_{\cdot}^{\frac{\pi}{2}} e_{m}(y) \sin(y)\cos(y) dy \right
\|_{L^{\infty}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim \frac{\omega_{l}}{\omega_{m}}.\end{aligned}$$ Finally, for large values of $i,j,l,m$ and in the case where all $\omega_{i} \pm \omega_{j}\pm \omega_{l} \pm \omega_{m} \longrightarrow \infty$, $$\begin{aligned} B_{ijl}^{(m)} &:= \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{l}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{m}(y) \sin(y)\cos(y) dy \right) \tan^2(x)dx\\ & \simeq \left(\frac{2}{\sqrt{\pi}} \right)^4 \frac{1}{\omega_{m}} \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin(\omega_{i}x) \sin(\omega_{j}x) \sin(\omega_{l}x) \cos(\omega_{m}x) }{\tan(x)} dx \\ & = \frac{1}{8}\left(\frac{2}{\sqrt{\pi}} \right)^4 \frac{1}{\omega_{m}} \Bigg( \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} + \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx \\ &+ \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} + \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \\ & - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} - \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} - \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \\ & - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} + \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} + \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \\ & + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} - \omega_{l} + \omega_{m} \right) x \right) }{\tan(x)} dx + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} - \omega_{l} - \omega_{m} \right) x \right) }{\tan(x)} dx \Bigg) \\ &=\frac{1}{8\omega_{m}} \left(\frac{2}{\sqrt{\pi}} \right)^4 \left( 4 \left( \frac{\pi}{2} - \frac{\pi}{2} \right) +
\sum_{\pm} \mathcal{O} \left( \frac{1}{\left( \omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \right)^{N}} \right) \right) \\ & =\frac{1}{\omega_{m}} \sum_{\pm} \mathcal{O} \left( \frac{1}{\left( \omega_{i} \pm \omega_{j} \pm \omega_{l} \pm \omega_{m} \right)^{N}} \right), \end{aligned}$$ whereas, for large values of $i,j,l,m$ such that some $\omega_{i} \pm \omega_{j}\pm \omega_{l} \pm \omega_{m} \longarrownot\longrightarrow \infty$, Hölder’s inequality implies $$\begin{aligned} \left| B_{ijl}^{(m)} \right| &= \left| \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{l}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{m}(y) \sin(y)\cos(y) dy \right) \tan^2(x)dx\right| \\ &=\left| \int_{0}^{\frac{\pi}{2}} e_{i}(x) \tan(x) e_{j}(x) \tan(x) e_{l}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{m}(y) \sin(y)\cos(y) dy \right) dx\right|\\ & \leq \left \| e_{i} \tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| e_{j} \tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| e_{l} \right\|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| \int_{\cdot}^{\frac{\pi}{2}} e_{m}(y) \sin(y)\cos(y) dy \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim \frac{\omega_{l}}{\omega_{m}},\end{aligned}$$ which completes the proof. Perturbations 2: Finite sum with an error term ============================================== Since the series --- may not converge, we still assume that $(\Phi,\Pi,A,\delta)$ are all close to the AdS solution $(0,0,1,0)$ and expand $(\Phi,\Pi,A,\delta)$ using ----.
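The basic oscillatory-integral asymptotic invoked repeatedly in the proof above, $\int_{0}^{\pi/2} \sin(\Omega x)/\tan(x)\, dx = \tfrac{\pi}{2} + \mathcal{O}(\Omega^{-2})$ for odd $\Omega$, can also be spot-checked numerically. The following is a minimal sketch (the quadrature routine, the constant $1.2$ in the tested bound, and the sampled values of $\Omega$ are our own illustrative choices):

```python
import math

def simpson(f, a, b, n=4000):
    # Composite Simpson rule with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def oscillatory_integral(omega):
    # I(omega) = int_0^{pi/2} sin(omega*x)/tan(x) dx, written as
    # sin(omega*x)*cos(x)/sin(x) with its limiting value omega at x = 0.
    def f(x):
        if x == 0.0:
            return float(omega)
        return math.sin(omega * x) * math.cos(x) / math.sin(x)
    return simpson(f, 0.0, math.pi / 2)

assert abs(oscillatory_integral(1) - 1.0) < 1e-6   # exact value for omega = 1
for omega in (3, 5, 7, 21):
    # For odd omega the integral is pi/2 up to an O(1/omega^2) correction.
    assert abs(oscillatory_integral(omega) - math.pi / 2) < 1.2 / omega**2
```

The observed error decays like $\Omega^{-2}$ with alternating sign, consistent with the $\sum_{\pm}\mathcal{O}((\omega_{i}\pm\omega_{j}\pm\omega_{m})^{-2})$ remainders in the estimates for $C_{ij}^{(m)}$ and $\overline{C}_{ij}^{(m)}$.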
First, we solve at the linear level and obtain the periodic expressions $$\begin{aligned} \begin{dcases} \Phi_{1}(\tau,x):= \cos(\tau) e^{\prime}_{\gamma}(x), \\ \Pi_{1}(\tau,x) := - \omega_{\gamma} \sin(\tau) e_{\gamma}(x), \\ A_{2}(\tau,x) := \cos^2(\tau) \Gamma_{1}(x)+ \sin^2(\tau) \Gamma_{2}(x),\\ \delta_{2}(\tau,x):= -\cos^2(\tau) \Gamma_{3}(x) - \sin^2(\tau) \Gamma_{4}(x), \end{dcases}\end{aligned}$$ where $$\begin{aligned} & \Gamma_{1}(x):=\frac{\cos^3(x)}{\sin(x)}\int_{0}^{x} \left( e^{\prime}_{0}(y)\right)^2 \tan^2(y) dy, \quad \Gamma_{2}(x):= \omega_{0}^2 \frac{\cos^3(x)}{\sin(x)}\int_{0}^{x} \left( e_{0}(y)\right)^2 \tan^2(y) dy, \\ & \Gamma_{3}(x):= \int_{0}^{x} \left( e^{\prime}_{0}(y)\right)^2 \sin(y)\cos(y) dy, \quad \Gamma_{4}(x):=\omega_{0}^2 \int_{0}^{x} \left( e_{0}(y)\right)^2 \sin(y)\cos(y) dy. \end{aligned}$$ For simplicity, we have fixed $\gamma = 0$; using Lemma \[ClosedformulasFore\], we compute $$\begin{aligned} & \Gamma_{1}(x) = \frac{3 \cos^2(x)}{2 \pi \tan(x)} \Big( 12 x -3\sin(2x)-3\sin(4x)+\sin(6x) \Big), \\ & \Gamma_{2}(x)= - \frac{3 \cos^2(x)}{2 \pi \tan(x)} \Big( -12 x -3\sin(2x)+3\sin(4x)+\sin(6x) \Big), \\ & \Gamma_{3}(x) = \frac{3 \sin^4(x)}{2\pi} \Big( 25 +20\cos(2x) +3\cos(4x) \Big), \\ & \Gamma_{4}(x) = - \frac{9}{32\pi} \Big( -93 +56 \cos(2x) + 28 \cos(4x) +8 \cos(6x) +\cos(8x) \Big).
\end{aligned}$$ For future reference, observe that $$\begin{aligned} \label{LinftyGamma} \left \| \Gamma_{a} \right \|_{L^{\infty}\left[0,\frac{\pi}{2} \right]} \lesssim 1, \quad \forall a \in \{1,2,3,4 \}.\end{aligned}$$ Second, we get the following non-linear system for the error terms $(\Psi,\Sigma,B,\Theta)$, $$\begin{aligned} (\omega_{0} + \epsilon^{2} \theta_{0}+\epsilon^{4} \eta_{0})\partial_{\tau}\Psi & = ( \theta_{0}+\epsilon^{2} \eta_{0}) \sin(\tau) e_{0}^{\prime}(x) + \partial_{x} \Bigg( \Sigma +\omega_{0}\sin(\tau)e_{0}(x)(A_{2}+\delta_{2}) \Bigg) \\ & + \epsilon^{2} \partial_{x} \Bigg( -\Sigma(A_{2}+\delta_{2}) -\omega_{0} \sin(\tau) e_{0}(x) ( -B + \Theta + A_{2}\delta_{2} ) \Bigg) \\ & + \epsilon^{4} \partial_{x} \Bigg( \Sigma (-B+\Theta+A_{2} \delta_{2}) -\omega_{0}\sin(\tau)e_{0} (-\Theta A_{2} +B \delta_{2} ) \Bigg) \\ & + \epsilon^{6} \partial_{x} \Bigg( \Sigma (-\Theta A_{2} +B \delta_{2}) + \omega_{0}\sin(\tau)e_{0} B \Theta \Bigg) \\ & - \epsilon^{8} \partial_{x} \Big( \Sigma \Theta B \Big),\end{aligned}$$ $$\begin{aligned} (\omega_{0} + \epsilon^{2}& \theta_{0}+\epsilon^{4} \eta_{0})\partial_{\tau}\Sigma = \omega_{0} (\theta_{0}+\epsilon^{2} \eta_{0}) \cos(\tau) e_{0} \\ &+ \Bigg( \frac{4}{\sin(2x)} \left( \Psi - \cos(\tau) e_{0}^{\prime}(A_{2}+\delta_{2}) \right) -\cos(\tau) \left( e_{0}^{\prime} (A_{2}+\delta_{2}) \right)^{\prime} + \partial_{x}\Psi \Bigg) \\ & + \epsilon^{2} \Bigg( \frac{4}{\sin(2x)} \left( -\Psi (A_{2}+\delta_{2}) +\cos(\tau) e_{0}^{\prime} (-B+\Theta + A_{2} \delta_{2}) \right) -\left( \Psi (A_{2}+\delta_{2}) \right)^{\prime} \\ & + \cos(\tau) \left( e_{0}^{\prime} (-B+\Theta+A_{2}\delta_{2}) \right)^{\prime} \Bigg) \\ & + \epsilon^{4} \Bigg( \frac{4}{\sin(2x)} \left( \Psi (-B+\Theta +A_{2}\delta_{2}) + \cos(\tau) e_{0}^{\prime} (-A_{2}\Theta + \delta_{2}B) \right) \\ & + \cos(\tau) \left( e_{0}^{\prime}(\delta_{2}B-A_{2}\Theta) \right)^{\prime} + \left( \Psi(-B+\Theta+A_{2}\delta_{2}) \right)^{\prime} \Bigg) \\ & + 
\epsilon^{6} \Bigg( \frac{4}{\sin(2x)} \left( \Psi (-\Theta A_{2} + \delta_{2}B) - \cos(\tau) e_{0}^{\prime} B \Theta \right) - \cos(\tau) \left( e_{0}^{\prime} B \Theta \right)^{\prime} + \left(\delta_{2} \Psi B \right)^{\prime} - \left(A_{2} \Psi \Theta \right)^{\prime} \Bigg) \\ & - \epsilon^{8} \Bigg( \frac{4}{\sin(2x)} B \Theta \Psi + \left( B \Theta \Psi \right)^{\prime} \Bigg),\end{aligned}$$ $$\begin{aligned} \partial_{x} B & + \frac{1+2 \sin^2(x)}{\sin(x)\cos(x)} B = \frac{\sin(2x)}{2} \Bigg( 2 \cos(\tau) e_{0}^{\prime} \Psi - 2 \omega_{0} \sin(\tau) e_{0} \Sigma - A_{2} \cos^2(\tau) (e_{0}^{\prime})^2 - \omega_{0}^2 A_{2} \sin^2(\tau) e_{0}^2 \Bigg) \\ & +\frac{\sin(2x)}{2} \epsilon^2 \Bigg( \Psi^2 + \Sigma^2 - 2 A_{2} \cos(\tau) e_{0}^{\prime} \Psi + 2 \omega_{0} A_{2} \sin(\tau)e_{0} \Sigma - \cos^2(\tau) (e_{0}^{\prime})^2 B - \omega_{0}^2 \sin^2(\tau) (e_{0} )^2 B \Bigg) \\ & +\frac{\sin(2x)}{2} \epsilon^4 \Bigg( -A_{2} \Psi^2 -A_{2} \Sigma^2 -2 \cos(\tau) e_{0}^{\prime} B \Psi + 2 \omega_{0} \sin(\tau) e_{0} B \Sigma \Bigg) \\ & -\frac{\sin(2x)}{2} \epsilon^6 \Big( B \left( \Psi^2 + \Sigma^2 \right) \Big), \end{aligned}$$ $$\begin{aligned} \partial_{x}\Theta & = \frac{\sin(2x)}{2} \Bigg( 2 \cos(\tau) e_{0}^{\prime} \Psi - 2 \omega_{0}\sin(\tau) e_{0} \Sigma - \delta_{2} \cos^{2}(\tau) (e_{0}^{\prime})^2 - \omega_{0}^2 \delta_{2} \sin^{2} (\tau) e_{0}^2 \Bigg) \\ & + \frac{\sin(2x)}{2} \epsilon^{2} \Bigg( \Psi^2+\Sigma^2 - 2\delta_{2} \cos(\tau) e_{0}^{\prime} \Psi + 2 \omega_{0} \sin(\tau) e_{0} \delta_{2} \Sigma + \cos^2(\tau) (e_{0}^{\prime})^2 \Theta + \omega_{0}^2 \sin^2(\tau) e_{0}^2 \Theta \Bigg) \\ & + \frac{\sin(2x)}{2} \epsilon^{4} \Bigg( -\delta_{2} \Psi^2 - \delta_{2} \Sigma^2 + 2 \cos(\tau ) e_{0}^{\prime} \Theta \Psi - 2 \omega_{0} \sin(\tau) e_{0} \Theta \Sigma \Bigg) \\ & +\frac{\sin(2x)}{2} \epsilon^{6} \Big( \Theta \left( \Psi^2 + \Sigma^2 \right) \Big).\end{aligned}$$ Definition of the Fourier constants -----------------------------------
As before, we expand the error terms $(\Psi, \Sigma, B, \Theta)$ in terms of the eigenfunctions of the linearized operator as follows $$\begin{aligned} \Psi(\tau,x) = \sum_{i=0}^{\infty} \psi_{i}(\tau) \frac{e_{i}^{\prime}(x)}{\omega_{i}},~~~ \Sigma(\tau,x) = \sum_{i=0}^{\infty} \sigma_{i}(\tau) e_{i}(x), \\ B(\tau,x) = \sum_{i=0}^{\infty} b_{i}(\tau) e_{i}(x),~~~ \Theta(\tau,x) = \sum_{i=0}^{\infty} \xi_{i}(\tau) e_{i}(x).\end{aligned}$$ After substituting these expressions into the equations above, we take the inner product $(\cdot | e_{j}^{\prime})$ (for the equation for $\Psi$) and $(\cdot | e_{j})$ (for the equations for $\Sigma,B$ and $\Theta$) on both sides. A long but straightforward computation shows that all the interactions with respect to the space variable $x$ are encoded in the following Fourier constants: $$\begin{aligned} \mathbb{C}_{abij}& := \int_{0}^{\frac{\pi}{2}} \left( \Gamma_{a}(x) - \Gamma_{b}(x) \right) e_{i}(x) e_{j}(x) \tan^2(x) dx \\ \mathbb{D}_{abij}& := \int_{0}^{\frac{\pi}{2}} \Gamma_{a}(x) \Gamma_{b}(x) e_{i}(x) e_{j}(x) \tan^2(x) dx \\ \mathbb{E}_{1423ij}& := \int_{0}^{\frac{\pi}{2}} \left( \Gamma_{1}(x)\Gamma_{4}(x) - \Gamma_{2}(x)\Gamma_{3}(x) \right) e_{i}(x) e_{j}(x) \tan^2(x) dx \\ \mathbb{G}_{jki}& := \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{k}(x) \tan^2(x) dx \\ \mathbb{F}_{ajki}& := \int_{0}^{\frac{\pi}{2}} \Gamma_{a}(x) e_{i}(x) e_{j}(x) e_{k}(x) \tan^2(x) dx \\ \mathbb{H}_{ijkl}& := \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{k}(x) e_{l}(x) \tan^2(x) dx,\end{aligned}$$ for the equation for $\Psi$, $$\begin{aligned} \overline{\mathbb{C}}_{abij}& := \int_{0}^{\frac{\pi}{2}} \left( \Gamma_{a}(x) - \Gamma_{b}(x) \right) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{e_{j}^{\prime}(x)}{\omega_{j}} \tan^2(x) dx \\ \overline{\mathbb{D}}_{abij} &:= \int_{0}^{\frac{\pi}{2}} \Gamma_{a}(x) \Gamma_{b}(x) \frac{ e_{i}^{\prime}(x)}{\omega_{i}} \frac{e_{j}^{\prime}(x)}{\omega_{j}} \tan^2(x) dx \\ \overline{\mathbb{E}}_{1423ij} &:=
\int_{0}^{\frac{\pi}{2}} \left( \Gamma_{1}(x)\Gamma_{4}(x) - \Gamma_{2}(x)\Gamma_{3}(x) \right) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{e_{j}^{\prime}(x)}{\omega_{j}} \tan^2(x) dx \\ \overline{\mathbb{G}}_{kji} &:= \int_{0}^{\frac{\pi}{2}} e_{j}(x) \frac{ e_{i}^{\prime}(x)}{\omega_{i}} \frac{e_{k}^{\prime}(x)}{\omega_{k}} \tan^2(x) dx \\ \overline{\mathbb{F}}_{akji} &:= \int_{0}^{\frac{\pi}{2}} \Gamma_{a}(x) e_{j}(x) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{e_{k}^{\prime}(x)}{\omega_{k}} \tan^2(x) dx \\ \overline{\mathbb{H}}_{kjli} &:= \int_{0}^{\frac{\pi}{2}} e_{l}(x) e_{j}(x) \frac{ e_{i}^{\prime}(x)}{\omega_{i}} \frac{e_{k}^{\prime}(x)}{\omega_{k}} \tan^2(x) dx, \end{aligned}$$ for the equation for $\Sigma$, $$\begin{aligned} \mathbb{J}_{jki} & :=\int_{0}^{\frac{\pi}{2}} \frac{e_{j}^{\prime}(x)}{\omega_{j}} \frac{e_{k}^{\prime}(x)}{\omega_{k}} \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{\sin^3(x)}{\cos(x)} dx \\ \mathbb{I}_{jki} & :=\int_{0}^{\frac{\pi}{2}} e_{j}(x) e_{k}(x) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{\sin^3(x)}{\cos(x)} dx \\ \mathbb{P}_{ajki} & :=\int_{0}^{\frac{\pi}{2}} \Gamma_{a}(x) \frac{e_{j}^{\prime}(x)}{\omega_{j}} \frac{e_{k}^{\prime}(x)}{\omega_{k}} \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{\sin^3(x)}{\cos(x)} dx \\ \mathbb{Q}_{ajki} & :=\int_{0}^{\frac{\pi}{2}} \Gamma_{a}(x) e_{j}(x) e_{k}(x) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{\sin^3(x)}{\cos(x)} dx \\ \mathbb{R}_{klji} & :=\int_{0}^{\frac{\pi}{2}} e_{j}(x) \frac{e_{k}^{\prime}(x)}{\omega_{k}} \frac{ e_{l}^{\prime}(x)}{\omega_{l}} \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{\sin^3(x)}{\cos(x)} dx \\ \mathbb{S}_{jkli} & :=\int_{0}^{\frac{\pi}{2}} e_{j}(x) e_{k}(x) e_{l}(x) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{\sin^3(x)}{\cos(x)} dx,\end{aligned}$$ for the equation for $\Theta$, and finally $$\begin{aligned} \overline{\mathbb{J}}_{jki} & :=\int_{0}^{\frac{\pi}{2}} \frac{e_{j}^{\prime}(x)}{\omega_{j}} \frac{e_{k}^{\prime}(x)}{\omega_{k}} \left( \int_{x}^{\frac{\pi}{2}} 
e_{i}(y) \sin(y) \cos(y) dy \right) \tan^2(x) dx \\ \overline{\mathbb{I}}_{jki} & :=\int_{0}^{\frac{\pi}{2}} e_{j}(x) e_{k}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) dy \right) \tan^2(x) dx \\ \overline{\mathbb{P}}_{ajki} & :=\int_{0}^{\frac{\pi}{2}} \Gamma_{a} (x) \frac{ e_{j}^{\prime}(x)}{\omega_{j}} \frac{e_{k}^{\prime}(x)}{\omega_{k}} \left( \int_{x}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) dy \right) \tan^2(x) dx \\ \overline{\mathbb{Q}}_{ajki} & :=\int_{0}^{\frac{\pi}{2}} \Gamma_{a} (x) e_{j}(x) e_{k}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) dy \right) \tan^2(x) dx \\ \overline{\mathbb{R}}_{klji} & :=\int_{0}^{\frac{\pi}{2}} e_{j}(x) \frac{ e_{k}^{\prime} (x)}{\omega_{k}} \frac{e_{l}^{\prime} (x)}{\omega_{l}} \left( \int_{x}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) dy \right) \tan^2(x) dx \\ \overline{\mathbb{S}}_{jkli} & :=\int_{0}^{\frac{\pi}{2}} e_{j}(x) e_{k} (x) e_{l} (x) \left( \int_{x}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) dy \right) \tan^2(x) dx,\end{aligned}$$ for the equation for $B$. 
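All of these constants are weighted integrals of products of eigenfunctions over $(0,\pi/2)$, so in practice they are evaluated by quadrature. The sketch below only illustrates that pattern and is not tied to the actual spectrum: the profile `e` is a hypothetical stand-in (the eigenfunctions $e_{i}$ are not restated in this excerpt), chosen to decay like $\cos^3(x)$ at $x=\pi/2$ so that the product with the $\tan^2(x)$ weight stays bounded, and so that the two-factor integral has the closed value $\int_0^{\pi/2}\cos^6(x)\tan^2(x)\,dx=\pi/32$ to check against.

```python
import numpy as np

def weighted_integral(factors, weight, a=0.0, b=np.pi / 2, n=200):
    """Gauss-Legendre approximation of int_a^b (prod factors)(x) * weight(x) dx."""
    t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (b + a)       # map nodes to [a, b]
    vals = weight(x)
    for f in factors:
        vals = vals * f(x)
    return 0.5 * (b - a) * np.sum(w * vals)

# Hypothetical stand-in for an eigenfunction; its cos^3 decay at x = pi/2
# tames the tan^2 weight (the true e_i are not given in this excerpt).
e = lambda x: np.cos(x) ** 3

# A G-type constant with two identical stand-in factors:
# int_0^{pi/2} cos^6(x) tan^2(x) dx = int sin^2(x) cos^4(x) dx = pi/32.
G = weighted_integral([e, e], lambda x: np.tan(x) ** 2)
print(G)  # ≈ pi/32 ≈ 0.0981748
```

The barred constants such as $\overline{\mathbb{J}}_{jki}$ additionally carry the inner integral $\int_{x}^{\pi/2} e_{i}(y)\sin(y)\cos(y)\,dy$, which can be tabulated on the same quadrature nodes before the outer integration.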
In addition, the non-linear system for the error terms boils down to $$\begin{aligned} \frac{d}{d \tau} \psi_{i}(\tau) - \frac{\omega_{i}}{\omega_{0}+\theta_{0} \epsilon^2 +\eta_{0} \epsilon^4} \sigma_{i}(\tau) & = \frac{1}{\omega_{0}+\theta_{0} \epsilon^2+\eta_{0} \epsilon^4} \frac{ \prescript{\psi}{}{F}_{i}(\tau) }{\omega_{i}}\\ & + \frac{1}{\omega_{0}+\theta_{0} \epsilon^2+\eta_{0} \epsilon^4} \frac{\prescript{\psi}{}{N}_{i}(\psi(\tau),\sigma(\tau),b(\tau),\xi(\tau))}{\omega_{i}}, \\ \frac{d}{d \tau} \sigma_{i}(\tau) + \frac{\omega_{i}}{\omega_{0}+\theta_{0} \epsilon^2+\eta_{0} \epsilon^4} \psi_{i}(\tau) & = \frac{1}{\omega_{0}+\theta_{0} \epsilon^2+\eta_{0} \epsilon^4} \prescript{\sigma}{}{F}_{i}(\tau) \\ & + \frac{1}{\omega_{0}+\theta_{0} \epsilon^2+\eta_{0} \epsilon^4}\prescript{\sigma}{}{N}_{i}(\psi(\tau),\sigma(\tau),b(\tau),\xi(\tau)), \end{aligned}$$ subject to the constraints $$\begin{aligned} b_{i} (\tau)& = \prescript{b}{}{S}_{i}(\tau) + \prescript{b}{}{N}_{i}(\psi(\tau),\sigma(\tau),b(\tau),\xi(\tau)) ,\\ \xi_{i} (\tau)& = \prescript{\xi}{}{S}_{i}(\tau) + \prescript{\xi}{}{N}_{i}(\psi(\tau),\sigma(\tau),b(\tau),\xi(\tau)).\end{aligned}$$ Here, the source terms are given explicitly in terms of the Fourier constants by $$\begin{aligned} \frac{\prescript{\psi}{}{F}_{i}(\tau)}{\omega_{i}} & = \omega_{i} \Big( \frac{\omega_{0}^2\theta_{0}\delta_{0,i}}{\omega_{i}^2} \sin(\tau) + \omega_{0} \mathbb{C}_{13 0 i} \sin(\tau) \cos^2(\tau) + \omega_{0} \mathbb{C}_{240 i} \sin^3(\tau) \Big) \\ &+ \omega_{i} \epsilon^2 \Big( \frac{\omega_{0}^2\eta_{0}\delta_{0,i}}{\omega_{i}^2} \sin(\tau)+ \omega_{0} \mathbb{D}_{130 i }\sin(\tau) \cos^{4}(\tau) + \omega_{0} \mathbb{D}_{24 0 i} \sin^{5} (\tau) + \omega_{0} \mathbb{E}_{1423 0 i} \sin^3(\tau) \cos^2(\tau) \Big),\\ \prescript{\sigma}{}{F}_{i}(\tau) & = \omega_{0} \omega_{i}\Big( \frac{\theta_{0}\delta_{0,i}}{\omega_{i}} \cos(\tau) + \overline{\mathbb{C}}_{13 0 i} \cos^3(\tau) + \overline{\mathbb{C}}_{240 i} \cos(\tau)
\sin^2(\tau) \Big) \\ &+\omega_{0} \omega_{i} \epsilon^2 \Big( \frac{\eta_{0}\delta_{0,i}}{\omega_{i}} \cos(\tau)+ \overline{\mathbb{D}}_{130 i }\cos^{5}(\tau) + \overline{\mathbb{D}}_{24 0 i} \cos(\tau) \sin^{4} (\tau) + \overline{\mathbb{E}}_{1423 0 i} \cos^3(\tau) \sin^2(\tau) \Big),\\ \prescript{b}{}{S}_{i}(\tau) & = \omega_{0}^2 \Big( - \cos^4(\tau) \overline{\mathbb{P}}_{100i} - \sin^2(\tau) \cos^2(\tau) \overline{\mathbb{P}}_{200i} - \sin^2(\tau) \cos^2(\tau) \overline{\mathbb{Q}}_{100i} - \sin^4(\tau) \overline{\mathbb{Q}}_{200i} \Big), \\ \prescript{\xi}{}{S}_{i}(\tau) & = \frac{\omega_{0}^2}{\omega_{i}} \Big( \cos^4(\tau) \mathbb{P}_{300i} + \cos^2(\tau) \sin^2(\tau) \mathbb{P}_{400i} + \sin^4(\tau) \mathbb{Q}_{400i} + \cos^2(\tau) \sin^2(\tau) \mathbb{Q}_{300i} \Big), \end{aligned}$$ where $\delta_{0,i}$ stands for the Kronecker delta, whereas the linear and non-linear terms are given by $$\begin{aligned} &\prescript{\psi}{}{N}_{i}(\psi(\tau),\sigma(\tau),b(\tau),\xi(\tau)) = \epsilon^2 \Bigg( -\cos^2(\tau) \sum _{j=0}^{\infty} \omega_{i}^2 \mathbb{C}_{13ji} \sigma_{j}(\tau) -\sin^2(\tau) \sum _{j=0}^{\infty} \omega_{i}^2 \mathbb{C}_{24ji} \sigma_{j}(\tau) \\ & -\omega_{0} \sin(\tau) \sum _{j=0}^{\infty} \omega_{i}^2 \mathbb{G}_{0 ji} ( \xi_{j}(\tau)-b_{j}(\tau)) \Bigg) \\ & + \epsilon^4 \Bigg( -\cos^{4}(\tau) \sum _{j=0}^{\infty} \omega_{i}^2 \mathbb{D}_{13ji} \sigma_{j}(\tau) - \cos^{2}(\tau) \sin^{2}(\tau) \sum _{j=0}^{\infty} \omega_{i}^2 \mathbb{E}_{1423ji} \sigma_{j}(\tau)+ \sin^{4}(\tau) \sum _{j=0}^{\infty} \omega_{i}^2 \mathbb{D}_{24ji} \sigma_{j}(\tau) \\ & + \omega_{0} \sin(\tau) \cos^{2}(\tau) \sum _{j=0}^{\infty} \omega_{i}^2 \mathbb{F}_{10 ji} \xi_{j}(\tau) + \omega_{0} \sin^{3}(\tau) \sum _{j=0}^{\infty} \omega_{i}^2 \mathbb{F}_{20 ji} \xi_{j}(\tau) + \omega_{0} \sin(\tau) \cos^{2}(\tau) \sum _{j=0}^{\infty} \omega_{i}^2 \mathbb{F}_{3 0 ji} b_{j}(\tau) \\ & + \omega_{0} \sin^{3}(\tau) \sum _{j=0}^{\infty} \omega_{i}^2 \mathbb{F}_{40
ji} b_{j}(\tau) + \sum _{j,k=0}^{\infty} \omega_{i}^2 \mathbb{G}_{jki} \sigma_{j}(\tau)(-b_{k}(\tau)+\xi_{k}(\tau)) \Bigg) \\ & + \epsilon^6 \Bigg( \omega_{0} \sin(\tau) \sum _{j,k=0}^{\infty} \omega_{i}^2 \mathbb{H}_{0 jki} b_{j}(\tau) \xi_{k}(\tau) - \sin^{2}(\tau) \sum _{j,k=0}^{\infty} \omega_{i}^2 \mathbb{F}_{2jki} \xi_{j}(\tau)\sigma_{k}(\tau) - \sin^{2}(\tau) \sum _{j,k=0}^{\infty} \omega_{i}^2 \mathbb{F}_{4jki} \sigma_{j}(\tau)b_{k}(\tau) \\ & - \cos^{2}(\tau) \sum _{j,k=0}^{\infty} \omega_{i}^2 \mathbb{F}_{1jki} \sigma_{j}(\tau)\xi_{k}(\tau) - \cos^{2}(\tau) \sum _{j,k=0}^{\infty} \omega_{i}^2 \mathbb{F}_{3jki} \sigma_{j}(\tau)b_{k}(\tau) \Bigg) \\ & + \epsilon^8 \Bigg( - \sum_{j,k,l=0}^{\infty} \omega_{i}^2 \mathbb{H}_{jkli} \xi_{j}(\tau) \sigma_{k}(\tau) b_{l}(\tau) \Bigg),\end{aligned}$$ $$\begin{aligned} & \prescript{\sigma}{}{N}_{i}(\psi(\tau),\sigma(\tau),b(\tau),\xi(\tau)) = \epsilon^2 \Bigg( \cos^2(\tau) \sum _{j=0}^{\infty} \omega_{i} \overline{\mathbb{C}}_{13ji} \psi_{j}(\tau) +\sin^2(\tau) \sum _{j=0}^{\infty} \omega_{i} \overline{\mathbb{C}}_{24ji} \psi_{j}(\tau) \\ & - \cos(\tau) \sum _{j=0}^{\infty} \omega_{0} \omega_{i} \overline{\mathbb{G}}_{0 ji} (\xi_{j}(\tau)-b_{j}(\tau)) \Bigg) \\ & + \epsilon^4 \Bigg( \cos^{4}(\tau) \sum _{j=0}^{\infty} \omega_{i} \overline{\mathbb{D}}_{13ji} \psi_{j}(\tau) + \cos^{2}(\tau) \sin^{2}(\tau) \sum _{j=0}^{\infty} \omega_{i} \overline{\mathbb{E}}_{1423ji} \psi_{j}(\tau) + \sin^{4}(\tau) \sum _{j=0}^{\infty} \omega_{i} \overline{\mathbb{D}}_{24ji} \psi_{j}(\tau) \\ & + \sin^{2}(\tau) \cos(\tau) \sum _{j=0}^{\infty} \omega_{0} \omega_{i}\overline{\mathbb{F}}_{20 ji} \xi_{j}(\tau) + \cos^{3}(\tau) \sum _{j=0}^{\infty} \omega_{0} \omega_{i}\overline{\mathbb{F}}_{10 ji} \xi_{j}(\tau) + \cos^{3}(\tau) \sum _{j=0}^{\infty} \omega_{0} \omega_{i} \overline{\mathbb{F}}_{3 0 ji} b_{j}(\tau) \\ & + \cos(\tau) \sin^{2}(\tau) \sum _{j=0}^{\infty}\omega_{0} \omega_{i} \overline{\mathbb{F}}_{40 ji} b_{j}(\tau) + \sum 
_{j,k=0}^{\infty} \omega_{i} \overline{\mathbb{G}}_{jki} \psi_{j}(\tau)(b_{k}(\tau)-\xi_{k}(\tau)) \Bigg) \\ & + \epsilon^6 \Bigg( \cos(\tau) \sum _{j,k=0}^{\infty} \omega_{0} \omega_{i} \overline{\mathbb{H}}_{0 jki} b_{j}(\tau) \xi_{k}(\tau) + \cos^{2}(\tau) \sum _{j,k=0}^{\infty} \omega_{i}\overline{\mathbb{F}}_{1jki} \xi_{k}(\tau) \psi_{j}(\tau) + \cos^{2}(\tau) \sum _{j,k=0}^{\infty} \omega_{i} \overline{\mathbb{F}}_{3jki} \psi_{j}(\tau) b_{k}(\tau) \\ & + \sin^{2}(\tau) \sum _{j,k=0}^{\infty} \omega_{i}\overline{\mathbb{F}}_{2jki} \psi_{j}(\tau) \xi_{k}(\tau) + \sin^{2}(\tau) \sum _{j,k=0}^{\infty} \omega_{i} \overline{\mathbb{F}}_{4jki} \psi_{j}(\tau) b_{k}(\tau) \Bigg) \\ & + \epsilon^8 \Bigg( \sum_{j,k,l=0}^{\infty} \omega_{i} \overline{\mathbb{H}}_{ljki} \xi_{k}(\tau) \psi_{l}(\tau) b_{j}(\tau) \Bigg),\end{aligned}$$ $$\begin{aligned} & \prescript{b}{}{N}_{i}(\psi(\tau),\sigma(\tau),b(\tau),\xi(\tau)) = \Bigg( 2 \cos(\tau) \sum _{j=0}^{\infty} \omega_{0}\overline{\mathbb{J}}_{0ji} \psi_{j}(\tau) - 2 \omega_{0} \sin(\tau) \sum _{j=0}^{\infty} \overline{\mathbb{I}}_{0ji} \sigma_{j}(\tau) \Bigg) \\ & + \epsilon^2 \Bigg( - 2 \cos^3(\tau) \sum _{j=0}^{\infty} \omega_{0}\overline{\mathbb{P}}_{10ji} \psi_{j}(\tau) - 2 \cos(\tau) \sin^2(\tau) \sum _{j=0}^{\infty} \omega_{0} \overline{\mathbb{P}}_{20ji} \psi_{j}(\tau) \\ & + 2 \omega_{0} \cos^2(\tau) \sin(\tau) \sum _{j=0}^{\infty} \overline{\mathbb{Q}}_{10ji} \sigma_{j}(\tau) + 2 \omega_{0} \sin^3(\tau) \sum _{j=0}^{\infty} \overline{\mathbb{Q}}_{20ji} \sigma_{j}(\tau) - \cos^2(\tau) \sum _{j=0}^{\infty} \omega_{0}^2 \overline{\mathbb{R}}_{00ji} b_{j}(\tau) \\ & - \sin^2(\tau) \sum _{j=0}^{\infty} \omega_{0}^2 \overline{\mathbb{S}}_{00ji} b_{j}(\tau) + \sum _{j,k=0}^{\infty} \overline{\mathbb{J}}_{jki} \psi_{j}(\tau) \psi_{k}(\tau) + \sum _{j,k=0}^{\infty} \overline{\mathbb{I}}_{jki} \sigma_{j}(\tau)\sigma_{k}(\tau) \Bigg) \\ & + \epsilon^4 \Bigg( -\cos^2(\tau) \sum _{j,k=0}^{\infty} \overline{\mathbb{P}}_{1jki} 
\psi_{j}(\tau) \psi_{k}(\tau) -\sin^2(\tau) \sum _{j,k=0}^{\infty} \overline{\mathbb{P}}_{2jki} \psi_{j}(\tau) \psi_{k}(\tau) \\ & - 2 \cos (\tau) \sum _{j,k=0}^{\infty} \omega_{0} \overline{\mathbb{R}}_{0kji} \psi_{k}(\tau) b_{j}(\tau) -\cos^2(\tau) \sum _{j,k=0}^{\infty} \overline{\mathbb{Q}}_{1jki} \sigma_{j}(\tau) \sigma_{k}(\tau) \\ & -\sin^2(\tau) \sum _{j,k=0}^{\infty} \overline{\mathbb{Q}}_{2jki} \sigma_{j}(\tau) \sigma_{k}(\tau) +2 \sin(\tau) \sum _{j,k=0}^{\infty}\omega_{0} \overline{\mathbb{S}}_{0jki} \sigma_{k}(\tau) b_{j}(\tau) \Bigg) \\ & + \epsilon^6 \Bigg( - \sum _{j,k,l=0}^{\infty} \overline{\mathbb{R}}_{klji} \psi_{k}(\tau) \psi_{l}(\tau) b_{j} (\tau) - \sum _{j,k,l=0}^{\infty} \overline{\mathbb{S}}_{jkli} \sigma_{k}(\tau) \sigma_{l}(\tau) b_{j} (\tau) \Bigg),\end{aligned}$$ $$\begin{aligned} & \prescript{\xi}{}{N}_{i}(\psi(\tau),\sigma(\tau),b(\tau),\xi(\tau)) = \Bigg( 2 \cos(\tau) \sum _{j=0}^{\infty} \omega_{0} \frac{\mathbb{J}_{0ji}}{\omega_{i}} \psi_{j}(\tau) - 2 \omega_{0} \sin(\tau) \sum _{j=0}^{\infty} \frac{\mathbb{I}_{0ji}}{\omega_{i}} \sigma_{j}(\tau) \Bigg) \\ & + \epsilon^2 \Bigg( 2 \cos^3(\tau) \sum _{j=0}^{\infty} \omega_{0} \frac{\mathbb{P}_{30ji}}{\omega_{i}} \psi_{j}(\tau) + 2 \cos(\tau) \sin^2(\tau) \sum _{j=0}^{\infty} \omega_{0} \frac{\mathbb{P}_{40ji}}{\omega_{i}} \psi_{j}(\tau) -2 \omega_{0} \sin^3(\tau) \sum _{j=0}^{\infty} \frac{\mathbb{Q}_{40ji}}{\omega_{i}} \sigma_{j}(\tau) \\ &- 2 \omega_{0} \cos^2(\tau )\sin(\tau) \sum _{j=0}^{\infty} \frac{\mathbb{Q}_{30ji}}{\omega_{i}} \sigma_{j}(\tau) + \cos^2(\tau) \sum _{j=0}^{\infty} \omega_{0}^2 \frac{\mathbb{R}_{00ji}}{\omega_{i}} \xi_{j}(\tau) + \omega_{0}^2 \sin^2(\tau) \sum _{j=0}^{\infty} \omega_{i}\frac{\mathbb{S}_{00ji}}{\omega_{i}^2} \xi_{j}(\tau) \\ & + \sum _{j,k=0}^{\infty} \frac{\mathbb{J}_{jki}}{\omega_{i}} \psi_{j}(\tau) \psi_{k}(\tau) + \sum _{j,k=0}^{\infty} \frac{\mathbb{I}_{jki}}{\omega_{i}} \sigma_{j}(\tau)\sigma_{k}(\tau) \Bigg) \\ & + \epsilon^4 \Bigg( 
\cos^2(\tau) \sum _{j,k=0}^{\infty} \frac{\mathbb{P}_{3jki}}{\omega_{i}} \psi_{j}(\tau) \psi_{k}(\tau) + \sin^2(\tau) \sum _{j,k=0}^{\infty} \frac{\mathbb{P}_{4jki}}{\omega_{i}} \psi_{j}(\tau) \psi_{k}(\tau) + 2 \cos (\tau) \sum _{j,k=0}^{\infty} \omega_{0} \frac{\mathbb{R}_{0kji}}{\omega_{i}} \psi_{k}(\tau) \xi_{j}(\tau)\\ & +\cos^2(\tau) \sum _{j,k=0}^{\infty} \frac{\mathbb{Q}_{3jki}}{\omega_{i}} \sigma_{j}(\tau) \sigma_{k}(\tau) + \sin^2(\tau) \sum _{j,k=0}^{\infty} \frac{\mathbb{Q}_{4jki}}{\omega_{i}} \sigma_{j}(\tau) \sigma_{k}(\tau) - 2 \omega_{0} \sin(\tau) \sum _{j,k=0}^{\infty} \frac{\mathbb{S}_{0jki}}{\omega_{i}} \sigma_{k}(\tau) \xi_{j}(\tau) \Bigg) \\ & + \epsilon^6 \Bigg( \sum _{j,k,l=0}^{\infty} \frac{\mathbb{R}_{klji}}{\omega_{i}} \psi_{k}(\tau)\psi_{l}(\tau) \xi_{j} (\tau) + \sum _{j,k,l=0}^{\infty} \frac{\mathbb{S}_{jkli}}{\omega_{i}} \sigma_{k}(\tau) \sigma_{l}(\tau) \xi_{j} (\tau) \Bigg).\end{aligned}$$ Approximate periodic solution and small divisors {#giaAppendix} ------------------------------------------------ As in the first approach with the infinite sum, the linear and homogeneous part of the ordinary differential equation for $(\psi_{i},\sigma_{i})$ is simply the equation for the harmonic oscillator and hence we can use the variation of constants formula to solve it.
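The variation of constants (Duhamel) representation used here can be sanity-checked numerically on a toy instance of the same oscillator system $\psi' - a\sigma = F$, $\sigma' + a\psi = G$; the frequency `a` and the sources `F`, `G` below are arbitrary stand-ins rather than the actual terms of the system.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Toy data: a plays the role of omega_i / (omega_0 + theta_0 eps^2 + eta_0 eps^4);
# F, G play the role of the (rescaled) source and non-linear terms.
a = 2.5
F = lambda s: np.sin(s)
G = lambda s: np.cos(3 * s)
psi0, sigma0 = 0.3, -0.1

def duhamel(tau):
    """Variation-of-constants solution of psi' - a*sigma = F, sigma' + a*psi = G."""
    psi = np.cos(a * tau) * psi0 + np.sin(a * tau) * sigma0
    psi += quad(lambda s: np.cos(a * (tau - s)) * F(s)
                + np.sin(a * (tau - s)) * G(s), 0, tau)[0]
    sigma = -np.sin(a * tau) * psi0 + np.cos(a * tau) * sigma0
    sigma += quad(lambda s: -np.sin(a * (tau - s)) * F(s)
                  + np.cos(a * (tau - s)) * G(s), 0, tau)[0]
    return psi, sigma

# Cross-check against a direct high-accuracy integration of the ODE system.
sol = solve_ivp(lambda t, y: [a * y[1] + F(t), -a * y[0] + G(t)],
                (0.0, 2.0), [psi0, sigma0], rtol=1e-10, atol=1e-12)
print(duhamel(2.0), sol.y[:, -1])  # the two pairs agree to high accuracy
```

Differentiating the Duhamel formulas reproduces the system directly, which is the computation behind the fixed-point formulation displayed next.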
We find the fixed-point formulation $$\begin{aligned} \psi_{i}(\tau) = \prescript{\psi}{}{S}_{i}(\tau) + \int_{0}^{\tau} \Bigg( & \frac{\cos\left( \frac{\omega_{i} (\tau-s)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \right)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \frac{\prescript{\psi}{}{N}_{i}(\psi(s),\sigma(s),b(s),\xi(s))}{\omega_{i}} \\ & + \frac{\sin\left( \frac{\omega_{i} (\tau-s)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \right)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \prescript{\sigma}{}{N}_{i}(\psi(s),\sigma(s),b(s),\xi(s)) \Bigg) ds, \\ \sigma_{i}(\tau) = \prescript{\sigma}{}{S}_{i}(\tau) + \int_{0}^{\tau} \Bigg( & \frac{-\sin\left( \frac{\omega_{i} (\tau-s)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \right)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \frac{\prescript{\psi}{}{N}_{i}(\psi(s),\sigma(s),b(s),\xi(s))}{\omega_{i}} \\ & + \frac{\cos\left( \frac{\omega_{i} (\tau-s)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \right)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \prescript{\sigma}{}{N}_{i}(\psi(s),\sigma(s),b(s),\xi(s)) \Bigg) ds, \end{aligned}$$ subject to the constraint equations $$\begin{aligned} b_{i} (\tau)& = \prescript{b}{}{S}_{i}(\tau) + \prescript{b}{}{N}_{i}(\psi(\tau),\sigma(\tau),b(\tau),\xi(\tau)) ,\\ \xi_{i} (\tau)& = \prescript{\xi}{}{S}_{i}(\tau) + \prescript{\xi}{}{N}_{i}(\psi(\tau),\sigma(\tau),b(\tau),\xi(\tau)),\end{aligned}$$ where $$\begin{aligned} \prescript{\psi}{}{S}_{i}(\tau) & = \cos\left( \frac{\omega_{i} \tau}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \right) \psi_{i}(0) + \sin\left( \frac{\omega_{i} \tau}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \right) \sigma_{i}(0) \\ & + \int_{0}^{\tau} \Bigg( \frac{\cos\left( \frac{\omega_{i} (\tau-s)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \right)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4}
\frac{\prescript{\psi}{}{F}_{i}(s)}{\omega_{i}} + \frac{\sin\left( \frac{\omega_{i} (\tau-s)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \right)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \prescript{\sigma}{}{F}_{i}(s) \Bigg) ds, \\ \prescript{\sigma}{}{S}_{i}(\tau) & = -\sin\left( \frac{\omega_{i} \tau}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \right) \psi_{i}(0) + \cos \left( \frac{\omega_{i} \tau}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \right) \sigma_{i}(0) \\ & + \int_{0}^{\tau} \Bigg( \frac{-\sin\left( \frac{\omega_{i} (\tau-s)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \right)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \frac{\prescript{\psi}{}{F}_{i}(s)}{\omega_{i}} + \frac{\cos\left( \frac{\omega_{i} (\tau-s)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \right)}{\omega_{0} +\theta_{0}\epsilon^2+ \eta_{0}\epsilon^4} \prescript{\sigma}{}{F}_{i}(s) \Bigg) ds. \end{aligned}$$ Furthermore, one can use the trigonometric identities $$\begin{aligned} & \cos^2(\tau) = 1 - \sin^2(\tau), \quad \cos^4(\tau) = \frac{3}{8} + \frac{1}{2} \cos(2\tau) + \frac{1}{8} \cos(4\tau), \\ & \sin^3(\tau) = \frac{3}{4} \sin(\tau) - \frac{1}{4} \sin(3 \tau),\quad \sin^5(\tau) = \frac{5}{8} \sin(\tau) - \frac{5}{16} \sin(3\tau) + \frac{1}{16} \sin(5\tau), \\ & \cos(2\tau)\sin(\tau) = \frac{1}{2} \sin(3\tau) - \frac{1}{2}\sin(\tau), \quad \cos(4\tau)\sin(\tau) = \frac{1}{2} \sin(5\tau) - \frac{1}{2}\sin(3\tau)\end{aligned}$$ to write $$\begin{aligned} \prescript{\psi}{}{F}_{i}(s) & = K_{i}(\epsilon ^{2}) \sin(s) + \Lambda_{i}(\epsilon^{2}) \sin(3s) + M_{i}(\epsilon^{2}) \sin(5s), \\ \prescript{\sigma}{}{F}_{i}(s) & = N_{i}(\epsilon^{2}) \cos(s) + \Xi_{i}(\epsilon^{2}) \cos(3s) + T_{i}(\epsilon^{2}) \cos(5s)\end{aligned}$$ where $$\begin{aligned} K_{i}(\epsilon^{2}) &:= \omega_{0} \omega_{i}^2 \Bigg(\frac{\delta_{i,0} \omega_{0} \theta_{0}}{\omega_{i}^2} + \frac{1}{4} \mathbb{C}_{130i} +
\frac{3}{4}\mathbb{C}_{240i} + \epsilon^2 \left(\frac{\delta_{i,0} \omega_{0} \eta_{0}}{\omega_{i}^2}+ \frac{1}{8} \mathbb{ D}_{130i} + \frac{5}{8} \mathbb{ D}_{240i} + \frac{1}{8} \mathbb{E}_{14230i} \right) \Bigg), \\ \Lambda_{i}(\epsilon^{2}) &:= \omega_{0}\omega_{i}^2 \Bigg( \frac{1}{4} \mathbb{C}_{130i} -\frac{1}{4} \mathbb{C}_{240i} + \epsilon^2 \left( \frac{3}{16} \mathbb{D}_{130i} -\frac{5}{16} \mathbb{D}_{240i} +\frac{1}{16} \mathbb{E}_{14230i} \right) \Bigg), \\ M_{i}(\epsilon^{2}) &:= \omega_{0}\omega_{i}^2 \epsilon^2 \Bigg( \frac{1}{16} \mathbb{D}_{130i} +\frac{1}{16} \mathbb{D}_{240i} - \frac{1}{16} \mathbb{E}_{14230i} \Bigg), \\ N_{i}(\epsilon^{2}) &:= \omega_{0} \omega_{i} \left(\frac{ \delta_{i,0} \theta_{0}}{\omega_{i}} + \frac{3}{4} \overline{\mathbb{C}}_{130i} + \frac{1}{4} \overline{\mathbb{C}}_{240i} + \epsilon^2 \left( \frac{\delta_{i,0} \eta_{0}}{\omega_{i}}+ \frac{5}{32} \overline{\mathbb{D}}_{130i} + \frac{63}{64} \overline{\mathbb{D}}_{240i} + \frac{19}{32} \overline{\mathbb{E}}_{14230i} \right) \right), \\ \Xi_{i}(\epsilon^{2}) &:= \omega_{0} \omega_{i} \left( \frac{1}{4} \overline{\mathbb{C}}_{130i} -\frac{1}{4} \overline{\mathbb{C}}_{240i} + \epsilon^2 \left( \frac{5}{64} \overline{\mathbb{D}}_{130i} + \frac{27}{64} \overline{\mathbb{D}}_{240i} +\frac{11}{64} \overline{\mathbb{E}}_{14230i} \right) \right), \\ T_{i}(\epsilon^{2}) &:=\omega_{0} \omega_{i} \epsilon^2 \Big( \frac{1}{16} \overline{\mathbb{D}}_{130i} +\frac{5}{64} \overline{\mathbb{D}}_{240i} - \frac{1}{64} \overline{\mathbb{E}}_{14230i} \Big).\end{aligned}$$ Now, computing the integrals above we obtain $$\begin{aligned} \prescript{\psi}{}{S}_{i}(\tau) & = \prescript{\psi}{}{\mathcal{K}}_{i}(\epsilon^{2}) \cos(\tau) + \prescript{\psi}{}{\mathcal{R}}_{i}(\epsilon^{2}) \cos(3 \tau) + \prescript{\psi}{}{\mathcal{M}}_{i}(\epsilon^{2}) \cos(5\tau) \\ &+ \Big( \psi_{i}(0) + \prescript{\psi}{}{\mathcal{N}}_{i}(\epsilon^{2}) \Big) \cos\left( \frac{\omega_{i} \tau}{\omega_{0} 
+\theta_{0}\epsilon^2+\eta_{0}\epsilon^4} \right) + \sigma_{i}(0) \sin\left( \frac{\omega_{i} \tau}{\omega_{0} +\theta_{0}\epsilon^2+\eta_{0}\epsilon^4} \right), \\ \prescript{\sigma}{}{S}_{i}(\tau) & = \prescript{\sigma}{}{\mathcal{K}}_{i}(\epsilon^{2}) \sin(\tau) + \prescript{\sigma}{}{\mathcal{R}}_{i}(\epsilon^{2}) \sin(3 \tau) + \prescript{\sigma}{}{\mathcal{M}}_{i}(\epsilon^{2}) \sin(5\tau) \\ &+ \sigma_{i}(0) \cos\left( \frac{\omega_{i} \tau}{\omega_{0} +\theta_{0}\epsilon^2+\eta_{0}\epsilon^4} \right) - \Big( \psi_{i}(0) - \prescript{\sigma}{}{\mathcal{N}}_{i}(\epsilon^{2}) \Big) \sin\left( \frac{\omega_{i} \tau}{\omega_{0} +\theta_{0}\epsilon^2+\eta_{0}\epsilon^4} \right),\end{aligned}$$ where $$\begin{aligned} \prescript{\psi}{}{\mathcal{K}}_{i}(\epsilon^{2}) & = \frac{1}{\omega_{i}} \frac{ (\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0}) K_{i}(\epsilon^2) + \omega_{i}^2 N_{i}(\epsilon^2) }{ (\omega_{i}-\omega_{0}-\epsilon^2 \theta_{0} -\epsilon^4 \eta_{0}) (\omega_{i}+\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0} ) }, \\ \prescript{\psi}{}{\mathcal{R}}_{i}(\epsilon^{2}) & = \frac{1}{\omega_{i}} \frac{ 3(\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0}) \Lambda_{i}(\epsilon^2) + \omega_{i}^2 \Xi_{i}(\epsilon^2) }{ (\omega_{i}-3\omega_{0}-3\epsilon^2 \theta_{0}-3\epsilon^4 \eta_{0} ) (\omega_{i}+3\omega_{0}+3\epsilon^2 \theta_{0} +3\epsilon^4 \eta_{0}) }, \\ \prescript{\psi}{}{\mathcal{M}}_{i}(\epsilon^{2}) & = \frac{1}{\omega_{i}} \frac{ 5(\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0}) M_{i}(\epsilon^2) + \omega_{i}^2 T_{i}(\epsilon^2) }{ (\omega_{i}-5\omega_{0}-5\epsilon^2 \theta_{0} -5\epsilon^4 \eta_{0}) (\omega_{i}+5\omega_{0}+5\epsilon^2 \theta_{0} +5\epsilon^4 \eta_{0}) }, \\ \prescript{\psi}{}{\mathcal{N}}_{i}(\epsilon^{2}) & = -\frac{1}{\omega_{i}} \frac{ (\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0}) K_{i}(\epsilon^2) + \omega_{i}^2 N_{i}(\epsilon^2) }{ (\omega_{i}-\omega_{0}-\epsilon^2 \theta_{0} -\epsilon^4 
\eta_{0}) (\omega_{i}+\omega_{0}+\epsilon^2 \theta_{0} +\epsilon^4 \eta_{0}) } \\ &\quad -\frac{1}{\omega_{i}} \frac{ 3(\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0}) \Lambda_{i}(\epsilon^2) + \omega_{i}^2 \Xi_{i}(\epsilon^2) }{ (\omega_{i}-3\omega_{0}-3\epsilon^2 \theta_{0}-3\epsilon^4 \eta_{0} ) (\omega_{i}+3\omega_{0}+3\epsilon^2 \theta_{0}+3\epsilon^4 \eta_{0} ) } \\ &\quad -\frac{1}{\omega_{i}} \frac{ 5(\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0}) M_{i}(\epsilon^2) + \omega_{i}^2 T_{i}(\epsilon^2) }{ (\omega_{i}-5\omega_{0}-5\epsilon^2 \theta_{0}-5\epsilon^4 \eta_{0} ) (\omega_{i}+5\omega_{0}+5\epsilon^2 \theta_{0}+5\epsilon^4 \eta_{0} ) }\end{aligned}$$ and $$\begin{aligned} \prescript{\sigma}{}{\mathcal{K}}_{i}(\epsilon^{2}) & = - \frac{ (\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0}) N_{i}(\epsilon^2) + K_{i}(\epsilon^2) }{ (\omega_{i}-\omega_{0}-\epsilon^2 \theta_{0} -\epsilon^4 \eta_{0}) (\omega_{i}+\omega_{0}+\epsilon^2 \theta_{0} +\epsilon^4 \eta_{0}) }, \\ \prescript{\sigma}{}{\mathcal{R}}_{i}(\epsilon^{2}) & = - \frac{ 3(\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0}) \Xi_{i}(\epsilon^2) + \Lambda_{i}(\epsilon^2) }{ (\omega_{i}-3\omega_{0}-3\epsilon^2 \theta_{0} -3\epsilon^4 \eta_{0}) (\omega_{i}+3\omega_{0}+3\epsilon^2 \theta_{0}+3\epsilon^4 \eta_{0} ) }, \\ \prescript{\sigma}{}{\mathcal{M}}_{i}(\epsilon^{2}) & = - \frac{ 5(\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0}) T_{i}(\epsilon^2) + M_{i}(\epsilon^2) }{ (\omega_{i}-5\omega_{0}-5\epsilon^2 \theta_{0} -5\epsilon^4 \eta_{0}) (\omega_{i}+5\omega_{0}+5\epsilon^2 \theta_{0}+5\epsilon^4 \eta_{0} ) }, \\ \prescript{\sigma}{}{\mathcal{N}}_{i}(\epsilon^{2}) & = \frac{1}{\omega_{i}} \frac{ (\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0}) K_{i}(\epsilon^2) + \omega_{i}^2 N_{i}(\epsilon^2) }{ (\omega_{i}-\omega_{0}-\epsilon^2 \theta_{0} -\epsilon^4 \eta_{0}) (\omega_{i}+\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0} ) } \\ & + \frac{1}{\omega_{i}} \frac{ 
3(\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0}) \Lambda_{i}(\epsilon^2) + \omega_{i}^2 \Xi_{i}(\epsilon^2) }{ (\omega_{i}-3\omega_{0}-3\epsilon^2 \theta_{0}-3\epsilon^4 \eta_{0} ) (\omega_{i}+3\omega_{0}+3\epsilon^2 \theta_{0} +3\epsilon^4 \eta_{0}) } \\ & +\frac{1}{\omega_{i}} \frac{ 5(\omega_{0}+\epsilon^2 \theta_{0}+\epsilon^4 \eta_{0}) M_{i}(\epsilon^2) + \omega_{i}^2 T_{i}(\epsilon^2) }{ (\omega_{i}-5\omega_{0}-5\epsilon^2 \theta_{0}-5\epsilon^4 \eta_{0} ) (\omega_{i}+5\omega_{0}+5\epsilon^2 \theta_{0} +5\epsilon^4 \eta_{0}) },\end{aligned}$$ provided that we choose $\eta_{0}$ so that $$\begin{aligned} & \omega_{i}\pm \omega_{0} \pm \epsilon^2 \theta_{0} \pm \epsilon^4 \eta_{0} \neq 0, \\ &\omega_{i}\pm 3\omega_{0} \pm 3\epsilon^2 \theta_{0} \pm 3\epsilon^4 \eta_{0} \neq 0, \\ & \omega_{i}\pm 5\omega_{0} \pm 5\epsilon^2 \theta_{0} \pm 5\epsilon^4 \eta_{0} \neq 0,\end{aligned}$$ for all $i = 0,1,2,\dots$ and all $0< \epsilon \leq \epsilon_{0}$ for sufficiently small $\epsilon_{0}>0$. Such a condition for the choice of $\eta_{0}$ is closely related to small divisors which play an important role in KAM theory [@MR2345400; @MR3097022; @MR4062430; @MR3867631; @MR3569244; @MR3502158]. Observe that the identity $$\begin{aligned} \prescript{\psi}{}{\mathcal{N}}_{i}(\epsilon^{2}) +\prescript{\sigma}{}{\mathcal{N}}_{i}(\epsilon^{2}) = 0\end{aligned}$$ holds true for all $i=0,1,\dots$ and all $\epsilon >0$. 
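These non-resonance conditions can be made concrete numerically. Assuming the linear spectrum $\omega_{i} = 3 + 2i$ (not restated in this excerpt, but consistent with the special indices $i = 0, 3, 6$ singled out in the small-$\epsilon$ expansions below), the unperturbed divisors $\omega_{i} - k\omega_{0}$ vanish exactly at $i = 3(k-1)/2$ for the harmonics $k = 1, 3, 5$, and the $\epsilon^{2}\theta_{0}$ correction moves every divisor away from zero:

```python
import numpy as np

omega = lambda i: 3 + 2 * i         # assumed spectrum omega_i = 3 + 2i (hypothesis)
theta0 = 153 / (4 * np.pi)          # the value of theta_0 chosen in this section
eta0 = 1.0                          # generic stand-in for eta_0
eps = 1e-2

# Unperturbed resonances omega_i = k * omega_0 for the harmonics k = 1, 3, 5.
resonant = [(k, i) for k in (1, 3, 5) for i in range(200)
            if omega(i) - k * omega(0) == 0]
print(resonant)  # [(1, 0), (3, 3), (5, 6)]

# With the epsilon-corrections the (minus-branch) divisors stay nonzero.
divisors = [abs(omega(i) - k * (omega(0) + theta0 * eps**2 + eta0 * eps**4))
            for k in (1, 3, 5) for i in range(200)]
print(min(divisors) > 0)  # True
```

The plus-branch divisors $\omega_{i} + k(\omega_{0} + \epsilon^{2}\theta_{0} + \epsilon^{4}\eta_{0})$ are trivially nonzero for a positive spectrum, so only the minus branch is checked.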
Here, all the constants involved $$\begin{aligned} \prescript{\psi}{}{\mathcal{K}}_{i}(\epsilon^{2}),\prescript{\psi}{}{\mathcal{R}}_{i}(\epsilon^{2}),\prescript{\psi}{}{\mathcal{M}}_{i}(\epsilon^{2}),\prescript{\psi}{}{\mathcal{N}}_{i}(\epsilon^{2}), \prescript{\sigma}{}{\mathcal{K}}_{i}(\epsilon^{2}),\prescript{\sigma}{}{\mathcal{R}}_{i}(\epsilon^{2}),\prescript{\sigma}{}{\mathcal{M}}_{i}(\epsilon^{2}),\prescript{\sigma}{}{\mathcal{N}}_{i}(\epsilon^{2})\end{aligned}$$ depend explicitly on the Fourier constants defined above and, most importantly, depend only on one index. Due to this fact, we can compute them (see Lemma \[LemmaAppendixB\] in Appendix B). The fact that these constants are given in closed form has numerous advantages. First, we get the asymptotic behaviour for large $i$ and fixed $\epsilon>0$, $$\begin{aligned} & \prescript{\psi}{}{\mathcal{K}}_{i}(\epsilon^{2}) \simeq \frac{1}{\omega_{i}^5},\prescript{\psi}{}{\mathcal{R}}_{i}(\epsilon^{2}) \simeq \frac{\epsilon^2}{\omega_{i}^5},\prescript{\psi}{}{\mathcal{M}}_{i}(\epsilon^{2})\simeq \frac{\epsilon^2}{\omega_{i}^5},\prescript{\psi}{}{\mathcal{N}}_{i}(\epsilon^{2})\simeq \frac{1}{\omega_{i}^5}, \text{~for~} i \longrightarrow \infty \\ & \prescript{\sigma}{}{\mathcal{K}}_{i}(\epsilon^{2}) \simeq \frac{1}{\omega_{i}^6},\prescript{\sigma}{}{\mathcal{R}}_{i}(\epsilon^{2}) \simeq \frac{\epsilon^2}{\omega_{i}^6},\prescript{\sigma}{}{\mathcal{M}}_{i}(\epsilon^{2})\simeq \frac{\epsilon^2}{\omega_{i}^6},\prescript{\sigma}{}{\mathcal{N}}_{i}(\epsilon^{2})\simeq \frac{1}{\omega_{i}^5}, \text{~for~} i \longrightarrow \infty.\end{aligned}$$ Second, we get their asymptotic behaviour for sufficiently small $\epsilon$ and fixed $i=0,1,\dots$, $$\begin{aligned} \prescript{\psi}{}{\mathcal{K}}_{i}(\epsilon^{2}) &= \left\{\begin{array}{lr} \left(-3+\frac{459}{4 \pi \theta_{0}} \right) \epsilon^{-2} + (...)1+(...)\epsilon^2, & \text{for } i = 0 \\ [10pt] (...)1+(...)\epsilon^2, & \text{for } i \neq 0
\end{array}\right., \\ \prescript{\psi}{}{\mathcal{R}}_{i}(\epsilon^{2}) &= \left\{\begin{array}{lr} \frac{\omega_{3}}{72 \theta_{0}} \left( \omega_{3} \mathbb{C}_{1303}-\omega_{3} \mathbb{C}_{2403}+3 \omega_{0} \overline{\mathbb{C}}_{1303}-3 \omega_{0} \overline{\mathbb{C}}_{2403} \right)\epsilon^{-2}+ (...)1+(...)\epsilon^2, & \text{for } i = 3 \\ [10pt] (...)1+(...)\epsilon^2, & \text{for } i \neq 3 \end{array}\right.,\\ \prescript{\psi}{}{\mathcal{M}}_{i}(\epsilon^{2}) &= \left\{\begin{array}{lr} (...)1+(...)\epsilon^2, & \text{for } i = 6 \\ [10pt] (...)\epsilon^2, & \text{for } i \neq 6 \end{array}\right., \\ \prescript{\psi}{}{\mathcal{N}}_{i}(\epsilon^{2}) &= \left\{\begin{array}{lr} \left(-3+\frac{459}{4 \pi \theta_{0}} \right)\epsilon^{-2}+ (...)1+(...)\epsilon^2, & \text{for } i = 0 \\ [10pt] \frac{\omega_{3}}{72 \theta_{0}} \left( \omega_{3} \mathbb{C}_{1303}-\omega_{3} \mathbb{C}_{2403}+3 \omega_{0} \overline{\mathbb{C}}_{1303}-3 \omega_{0} \overline{\mathbb{C}}_{2403} \right)\epsilon^{-2}+(...)1+(...)\epsilon^2, & \text{for } i = 3 \\ [10pt] (...)1+(...)\epsilon^2, & \text{for } i = 6 \\ [10pt] (...)1+(...)\epsilon^2, & \text{for } i \neq 0,3,6 \end{array}\right. 
\end{aligned}$$ $$\begin{aligned} \prescript{\sigma}{}{\mathcal{K}}_{i}(\epsilon^{2}) &= \left\{\begin{array}{lr} \left(-3+\frac{459}{4 \pi \theta_{0}} \right) \epsilon^{-2} + (...)1+(...)\epsilon^2, & \text{for } i = 0 \\ [10pt] (...)1+(...)\epsilon^2, & \text{for } i \neq 0 \end{array}\right., \\ \prescript{\sigma}{}{\mathcal{R}}_{i}(\epsilon^{2}) &= \left\{\begin{array}{lr} \frac{\omega_{3}}{72 \theta_{0}} \left( \omega_{3} \mathbb{C}_{1303}-\omega_{3} \mathbb{C}_{2403}+3 \omega_{0} \overline{\mathbb{C}}_{1303}-3 \omega_{0} \overline{\mathbb{C}}_{2403} \right)\epsilon^{-2}+ (...)1+(...)\epsilon^2, & \text{for } i = 3 \\ [10pt] (...)1+(...)\epsilon^2, & \text{for } i \neq 3 \end{array}\right.,\\ \prescript{\sigma}{}{\mathcal{M}}_{i}(\epsilon^{2}) &= \left\{\begin{array}{lr} (...)1+(...)\epsilon^2, & \text{for } i = 6 \\ [10pt] (...)\epsilon^2, & \text{for } i \neq 6 \end{array}\right., \\ \prescript{\sigma}{}{\mathcal{N}}_{i}(\epsilon^{2}) &= \left\{\begin{array}{lr} \left(-3+\frac{459}{4 \pi \theta_{0}} \right)\epsilon^{-2}+ (...)1+(...)\epsilon^2, & \text{for } i = 0 \\ [10pt] \frac{\omega_{3}}{72 \theta_{0}} \left( \omega_{3} \mathbb{C}_{1303}-\omega_{3} \mathbb{C}_{2403}+3 \omega_{0} \overline{\mathbb{C}}_{1303}-3 \omega_{0} \overline{\mathbb{C}}_{2403} \right)\epsilon^{-2}+(...)1+(...)\epsilon^2, & \text{for } i = 3 \\ [10pt] (...)1+(...)\epsilon^2, & \text{for } i = 6 \\ [10pt] (...)1+(...)\epsilon^2, & \text{for } i \neq 0,3,6 \end{array}\right.\end{aligned}$$ We compute $$\begin{aligned} \mathbb{C}_{1303} = - \frac{291}{560\sqrt{10}\pi}, ~\mathbb{C}_{2403} = - \frac{99}{140\sqrt{10}\pi}, ~\overline{\mathbb{C}}_{1303} = - \frac{111}{140\sqrt{10}\pi}, ~\overline{\mathbb{C}}_{2403} = - \frac{339}{560\sqrt{10}\pi}\end{aligned}$$ and hence $$\begin{aligned} \omega_{3} \mathbb{C}_{1303}-\omega_{3} \mathbb{C}_{2403}+3 \omega_{0} \overline{\mathbb{C}}_{1303}-3 \omega_{0} \overline{\mathbb{C}}_{2403} = 0.\end{aligned}$$ Consequently, by the structure of the 
equations, $\prescript{\psi}{}{\mathcal{R}}_{3},\prescript{\psi}{}{\mathcal{N}}_{3},\prescript{\sigma}{}{\mathcal{R}}_{3}$ and $\prescript{\sigma}{}{\mathcal{N}}_{3}$ cannot blow up as $\epsilon$ goes to zero. However, we choose $$\begin{aligned} \theta_{0}:= \frac{153}{4\pi}\end{aligned}$$ to ensure that every component of the periodic parts $\prescript{\psi}{}{S}_{i}(\tau)$ and $\prescript{\sigma}{}{S}_{i}(\tau)$ of $\psi_{i}$ and $\sigma_{i}$, respectively, is bounded as $\epsilon$ goes to zero. This choice coincides with the choice of $\theta_{2}$ from the first approach as well as with the numerical computations of Rostworowski-Maliborski [@13033186]. Similarly, using the trigonometric identities $$\begin{aligned} & \cos^4(\tau) = \frac{3}{8} + \frac{1}{2} \cos(2 \tau) + \frac{1}{8} \cos(4\tau),\\ &\sin^4(\tau) = \frac{3}{8} - \frac{1}{2} \cos(2 \tau) + \frac{1}{8} \cos(4\tau),\\ &\cos^2(\tau)\sin^2(\tau) = \frac{1}{8} - \frac{1}{8} \cos(4\tau)\end{aligned}$$ we get $$\begin{aligned} \prescript{b}{}{S}_{i}(\tau) & = \prescript{b}{}{\mathcal{K}}_{i} + \prescript{b}{}{\mathcal{R}}_{i} \cos(2 \tau)+ \prescript{b}{}{\mathcal{M}}_{i}\cos(4 \tau), \\ \prescript{\xi}{}{S}_{i}(\tau) & = \prescript{\xi}{}{\mathcal{K}}_{i} + \prescript{\xi}{}{\mathcal{R}}_{i} \cos(2 \tau)+ \prescript{\xi}{}{\mathcal{M}}_{i}\cos(4 \tau),\end{aligned}$$ where $$\begin{aligned} \prescript{b}{}{\mathcal{K}}_{i} & =- \frac{\omega_{0}^2}{8} \Big( 3 \overline{\mathbb{P}}_{100i} + \overline{\mathbb{P}}_{200i} + \overline{\mathbb{Q}}_{100i} + 3 \overline{\mathbb{Q}}_{200i} \Big),\\ \prescript{b}{}{\mathcal{R}}_{i} &= \frac{\omega_{0}^2 }{2} \Big( -\overline{\mathbb{P}}_{100i} + \overline{\mathbb{Q}}_{200i} \Big),\\ \prescript{b}{}{\mathcal{M}}_{i} &= \frac{\omega_{0}^2}{8} \Big(- \overline{\mathbb{P}}_{100i} + \overline{\mathbb{P}}_{200i} + \overline{\mathbb{Q}}_{100i} - \overline{\mathbb{Q}}_{200i}\Big),\\ \prescript{\xi}{}{\mathcal{K}}_{i} &= \frac{\omega_{0}^2}{8\omega_{i}^2} \Big( 3
\mathbb{P}_{300i} + \mathbb{P}_{400i} + \mathbb{Q}_{300i} + 3 \mathbb{Q}_{400i} \Big),\\ \prescript{\xi}{}{\mathcal{R}}_{i}& =\frac{\omega_{0}^2}{2 \omega_{i}^2} \Big( \mathbb{P}_{300i} -\mathbb{Q}_{400i} \Big),\\ \prescript{\xi}{}{\mathcal{M}}_{i} &= \frac{\omega_{0}^2}{8\omega_{i}^2} \Big( \mathbb{P}_{300i} - \mathbb{P}_{400i} - \mathbb{Q}_{300i} + \mathbb{Q}_{400i} \Big).\end{aligned}$$ As before, all the constants involved $$\begin{aligned} \prescript{b}{}{\mathcal{K}}_{i},\prescript{b}{}{\mathcal{R}}_{i},\prescript{b}{}{\mathcal{M}}_{i}, \prescript{\xi}{}{\mathcal{K}}_{i},\prescript{\xi}{}{\mathcal{R}}_{i},\prescript{\xi}{}{\mathcal{M}}_{i}\end{aligned}$$ depend explicitly on the Fourier constants defined above and most importantly depend only on one index. Due to this fact, we can compute them (see Lemma \[LemmaAppendixB\] in Appendix B). We immediately get their asymptotic behaviour for large $i$, $$\begin{aligned} & \prescript{b}{}{\mathcal{K}}_{i}\simeq \frac{1}{\omega_{i}^{12}},\prescript{b}{}{\mathcal{R}}_{i}\simeq \frac{1}{\omega_{i}^{12}},\prescript{b}{}{\mathcal{M}}_{i} \simeq \frac{1}{\omega_{i}^{12}}, \text{~for~} i \longrightarrow \infty \\ & \prescript{\xi}{}{\mathcal{K}}_{i} \simeq \frac{1}{\omega_{i}^{6}},\prescript{\xi}{}{\mathcal{R}}_{i}\simeq \frac{1}{\omega_{i}^{6}},\prescript{\xi}{}{\mathcal{M}}_{i} \simeq \frac{1}{\omega_{i}^{6}}, \text{~for~} i \longrightarrow \infty.\end{aligned}$$ Choice of the initial data -------------------------- We choose $$\begin{aligned} \psi_{i}(0):= - \prescript{\psi}{}{\mathcal{N}}_{i}(\epsilon^{2}) = \prescript{\sigma}{}{\mathcal{N}}_{i}(\epsilon^{2}), \quad \sigma_{i}(0):=0\end{aligned}$$ so that $$\begin{aligned} \prescript{\psi}{}{S}_{i}(\tau) & = \prescript{\psi}{}{\mathcal{K}}_{i}(\epsilon^{2}) \cos(\tau) + \prescript{\psi}{}{\mathcal{R}}_{i}(\epsilon^{2}) \cos(3 \tau) + \prescript{\psi}{}{\mathcal{M}}_{i}(\epsilon^{2}) \cos(5\tau), \\ \prescript{\sigma}{}{S}_{i}(\tau) & = 
\prescript{\sigma}{}{\mathcal{K}}_{i}(\epsilon^{2}) \sin(\tau) + \prescript{\sigma}{}{\mathcal{R}}_{i}(\epsilon^{2}) \sin(3 \tau) + \prescript{\sigma}{}{\mathcal{M}}_{i}(\epsilon^{2}) \sin(5\tau).\end{aligned}$$ This choice is motivated by the fact that the source terms $\prescript{\psi}{}{S}_{i}(\tau)$ and $\prescript{\sigma}{}{S}_{i}(\tau)$ of the solutions $\psi_{i}(\tau)$ and $\sigma_{i}(\tau)$ would give rise to a periodic term. Indeed, $$\begin{aligned} \Psi(\tau,x) &= \sum_{i=0}^{\infty} \psi_{i}(\tau) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \\ &= \sum_{i=0}^{\infty} \prescript{\psi}{}{S}_{i}(\tau) \frac{e_{i}^{\prime}(x)}{\omega_{i}} + \mathcal{O} \left(\epsilon^2 \right) \\ & = \Bigg( \sum_{i=0}^{\infty} \prescript{\psi}{}{\mathcal{K}}_{i}(\epsilon^{2}) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \Bigg) \cos(\tau)+ \Bigg( \sum_{i=0}^{\infty} \prescript{\psi}{}{\mathcal{R}}_{i}(\epsilon^{2}) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \Bigg)\cos(3 \tau) \\ & + \Bigg( \sum_{i=0}^{\infty} \prescript{\psi}{}{\mathcal{M}}_{i}(\epsilon^{2}) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \Bigg) \cos(5\tau) + \mathcal{O} \left(\epsilon^2 \right)\end{aligned}$$ and similarly for $\Sigma, B$ and $\Theta$. Growth and decay of the Fourier constants ----------------------------------------- We are interested in the asymptotic behaviour of all the Fourier constants which appear using this approach. 
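Before doing so, we record a quick numerical sanity check of the two cancellations obtained above: the vanishing of $\omega_{3} \mathbb{C}_{1303}-\omega_{3} \mathbb{C}_{2403}+3 \omega_{0} \overline{\mathbb{C}}_{1303}-3 \omega_{0} \overline{\mathbb{C}}_{2403}$ and the vanishing of $-3+\frac{459}{4 \pi \theta_{0}}$ at $\theta_{0}=\frac{153}{4\pi}$. The short script below is only an illustrative check; it assumes the eigenfrequencies $\omega_{i}=3+2i$, as the parity relations $\omega_{i}+\omega_{j}=2(3+i+j)$ used later in this section indicate.

```python
from fractions import Fraction
from math import pi

# Assumed eigenfrequencies omega_i = 3 + 2i, consistent with the parity
# relations omega_i + omega_j = 2(3 + i + j) used later in this section.
omega0, omega3 = 3, 9

# The Fourier constants computed above, with the common factor
# 1/(sqrt(10)*pi) stripped so that exact rational arithmetic applies.
C_1303 = Fraction(-291, 560)
C_2403 = Fraction(-99, 140)
Cbar_1303 = Fraction(-111, 140)
Cbar_2403 = Fraction(-339, 560)

# The combination multiplying eps^{-2} in the i = 3 components vanishes:
combo = omega3 * C_1303 - omega3 * C_2403 + 3 * omega0 * Cbar_1303 - 3 * omega0 * Cbar_2403
assert combo == 0

# The choice theta_0 = 153/(4*pi) kills the eps^{-2} coefficient for i = 0:
theta0 = 153 / (4 * pi)
assert abs(-3 + 459 / (4 * pi * theta0)) < 1e-12
print("both eps^{-2} coefficients vanish")
```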
To begin with, we split them into five groups as follows $$\begin{aligned} \mathcal{A}_{1}& := \Bigg \{ \omega_{i}\mathbb{C}_{13ji}, \omega_{i}\mathbb{C}_{24ji}, \omega_{i}\mathbb{D}_{13ji},\omega_{i}\mathbb{D}_{24ji}, \omega_{i}\mathbb{E}_{1423ji}, \omega_{i}\mathbb{\overline{C}}_{13ji}, \omega_{i}\mathbb{\overline{C}}_{24ji}, \omega_{i}\mathbb{\overline{D}}_{13ji}, \omega_{i}\mathbb{\overline{D}}_{24ji}, \\ & \quad \quad \quad \omega_{i}\mathbb{\overline{E}}_{1423ji}, \omega_{i}\mathbb{F}_{10ji}, \omega_{i}\mathbb{F}_{20ji},\omega_{i}\mathbb{F}_{30ji}, \omega_{i}\mathbb{F}_{40ji},\omega_{i}\mathbb{F}_{1jki},\omega_{i}\mathbb{F}_{2jki},\omega_{i}\mathbb{F}_{3jki}, \omega_{i}\mathbb{F}_{4jki},\omega_{i}\mathbb{G}_{0ji}, \\ &\quad \quad \quad \frac{ \omega_{0} \mathbb{P}_{10ji} }{\omega_{i}}, \frac{\omega_{0} \mathbb{P}_{20ji} }{\omega_{i}}, \frac{\omega_{0} \mathbb{P}_{30ji} }{\omega_{i}},\frac{\omega_{0} \mathbb{P}_{40ji}}{\omega_{i}}, \frac{\omega_{0} \mathbb{J}_{0ji}}{\omega_{i}}, \omega_{0} \mathbb{ \overline{P} }_{10ji}, \omega_{0} \mathbb{\overline{P}}_{20ji}, \omega_{0} \mathbb{\overline{P}}_{30ji},\omega_{0} \mathbb{\overline{P}}_{40ji}, \omega_{0} \mathbb{\overline{J}}_{0ji} \Bigg \}, \\ \mathcal{A}_{2}& := \Bigg \{ \omega_{i} \mathbb{F}_{1jki},\omega_{i} \mathbb{F}_{2jki},\omega_{i} \mathbb{F}_{3jki},\omega_{i} \mathbb{F}_{4jki},\frac{\omega_{0} \mathbb{R}_{0kji}}{\omega_{i}},\omega_{i} \mathbb{\overline{F}}_{1jki},\omega_{i} \mathbb{\overline{F}}_{2jki},\omega_{i} \mathbb{\overline{F}}_{3jki},\omega_{i} \mathbb{\overline{F}}_{4jki},\omega_{0} \mathbb{\overline{R}}_{0kji} \Bigg \}, \\ \mathcal{A}_{3}& := \Bigg \{ \frac{\mathbb{ Q}_{10ji}}{\omega_{i}},\frac{\mathbb{ Q}_{20ji}}{\omega_{i}},\frac{\mathbb{ Q}_{30ji}}{\omega_{i}},\frac{\mathbb{ Q}_{40ji}}{\omega_{i}},\frac{\mathbb{ I}_{0ji}}{\omega_{i}}, \frac{\omega_{0}^2 \mathbb{ R}_{00ji}}{\omega_{i}}, \frac{\mathbb{ S}_{00ji}}{\omega_{i}}, \omega_{0} \omega_{i}\mathbb{ \overline{F}}_{10ji},\omega_{0} 
\omega_{i}\mathbb{\overline{F}}_{20ji},\omega_{0} \omega_{i}\mathbb{\overline{F}}_{30ji},\omega_{0} \omega_{i}\mathbb{\overline{F}}_{40ji}, \\ &\quad \quad \quad \omega_{0} \omega_{i}\mathbb{\overline{G}}_{0ji}, \mathbb{\overline{Q}}_{10ji},\mathbb{\overline{Q}}_{20ji},\mathbb{\overline{Q}}_{30ji},\mathbb{\overline{Q}}_{40ji},\mathbb{\overline{I}}_{0ji},\omega_{0}^2\mathbb{\overline{R}}_{00ji},\mathbb{\overline{S}}_{00ji} \Bigg \}, \\ \mathcal{A}_{4}& := \Bigg \{ \frac{\mathbb{I}_{jki}}{\omega_{i}},\frac{\mathbb{J}_{jki}}{\omega_{i}}, \frac{\mathbb{P}_{1jki}}{\omega_{i}},\frac{\mathbb{P}_{2jki}}{\omega_{i}},\frac{\mathbb{P}_{3jki}}{\omega_{i}},\frac{\mathbb{P}_{4jki}}{\omega_{i}}, \frac{\mathbb{Q}_{1jki}}{\omega_{i}},\frac{\mathbb{Q}_{2jki}}{\omega_{i}},\frac{\mathbb{Q}_{3jki}}{\omega_{i}},\frac{\mathbb{Q}_{4jki}}{\omega_{i}}, \frac{\mathbb{S}_{0jki}}{\omega_{i}} \\ &\quad \quad \quad \overline{\mathbb{I}}_{jki}, \overline{\mathbb{J}}_{jki},\overline{\mathbb{P}}_{1jki},\overline{\mathbb{P}}_{2jki},\overline{\mathbb{P}}_{3jki}, \overline{\mathbb{P}}_{4jki}, \overline{\mathbb{Q}}_{1jki},\overline{\mathbb{Q}}_{2jki}, \overline{\mathbb{Q}}_{3jki}, \overline{\mathbb{Q}}_{4jki}, \overline{\mathbb{S}}_{0jki}, \omega_{0} \omega_{i}\overline{\mathbb{H}}_{0jki} \Bigg \}, \\ \mathcal{B}& := \Bigg \{ \omega_{i} \mathbb{H}_{0jki}, \omega_{i}\mathbb{G}_{jki},\frac{ \mathbb{R}_{klji}}{\omega_{i}},\frac{ \mathbb{S}_{jkli}}{\omega_{i} }, \omega_{i}\mathbb{\overline{G}}_{jki}, \omega_{i}\mathbb{H}_{klji}, \omega_{i}\mathbb{\overline{H}}_{ikjl}, \mathbb{\overline{R}}_{klji}, \overline{ \mathbb{S} }_{jkli} \Bigg \}. 
\end{aligned}$$ As before, we shall use the notation $$\begin{aligned} \sum_{\pm} f(a\pm b \pm c) = f(a + b + c) + f(a + b - c) + f(a - b + c) + f(a - b - c),\end{aligned}$$ that is, summation over all possible combinations of plus and minus; accordingly, an expression like $\omega_{i} \pm \omega_{j} \pm \omega_{m}$ stands for each of the four combinations $\omega_{i} + \omega_{j} + \omega_{m}$, $\omega_{i} + \omega_{j} - \omega_{m}$, $\omega_{i} - \omega_{j} + \omega_{m}$ and $\omega_{i} - \omega_{j} - \omega_{m}$. We will use the leading order terms (Remark \[lot\]) together with the asymptotic behaviour of the oscillatory integrals (Lemma \[OscillatoryIntegrals\]), the orthogonality properties (Lemma \[ClosedformulasFore\]), the $L^{\infty}-$bounds for quantities related to the eigenfunctions (Lemma \[Linftyboundse\]) and the $L^{\infty}-$bounds of the weights $\Gamma_{a}$ (estimate ). ### Fourier constants in $\mathcal{A}_{1}$, $\mathcal{A}_{2}$, $\mathcal{A}_{3}$ and $\mathcal{A}_{4}$ First, we focus on the elements of $\mathcal{A}_{1}$. \[A1\] The following growth and decay estimates hold.
[ |l|l|l|l|l| ]{}\ Constant & $F$ & 1st derivative $\neq 0$ & $ \omega_{i} - \omega_{j} \longarrownot\longrightarrow \infty $ & $ \omega_{i} - \omega_{j} \longrightarrow \infty $\ & $\displaystyle \quad \Gamma_{1}-\Gamma_{3} $ & $\displaystyle \quad F^{(3)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}\right)$ & $\displaystyle \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^4} \right) $\ & $\displaystyle \quad \Gamma_{2}-\Gamma_{4} $ & $\displaystyle \quad F^{(3)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}\right)$ & $\displaystyle \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^4}\right) $\ & $\displaystyle \quad \Gamma_{1} \Gamma_{3} $ & $\displaystyle \quad F^{(3)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}\right)$ & $\displaystyle \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^4} \right) $\ & $\displaystyle \quad \Gamma_{2} \Gamma_{4} $ & $\displaystyle \quad F^{(3)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}\right)$ & $\displaystyle \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^4}\right) $\ & $\displaystyle \Gamma_{1} \Gamma_{4}+ \Gamma_{2} \Gamma_{3} $ & $\displaystyle \quad F^{(3)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}\right)$ & $\displaystyle \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^4}\right) $\ & $\displaystyle \quad\quad e_{0} $ & $\displaystyle \quad F^{(3)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i} \right)$ & $\displaystyle \sum_{\pm}\mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^4}\right) $\ & $\displaystyle \quad\quad \Gamma_{1} e_{0} $ & $\displaystyle \quad F^{(9)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}\right)$ & $\displaystyle \sum_{\pm}\mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} 
\pm \omega_{j})^{10}} \right)$\ & $\displaystyle \quad\quad \Gamma_{2} e_{0} $ & $\displaystyle \quad F^{(11)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}\right)$ & $\displaystyle \sum_{\pm}\mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^{12}}\right) $\ & $\displaystyle \quad\quad \Gamma_{3} e_{0} $ & $\displaystyle \quad F^{(3)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}\right)$ & $\displaystyle \sum_{\pm}\mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^{4}}\right) $\ & $\displaystyle \quad\quad \Gamma_{4} e_{0} $ & $\displaystyle \quad F^{(3)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}\right)$ & $\displaystyle \sum_{\pm}\mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^{4}}\right) $\ & $\displaystyle \Gamma_{1} e_{0}^{\prime}\sin \cos $ & $\displaystyle \quad F^{(9)} \left( \frac{\pi}{2} \right) \neq 0 $ &$\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j})^{10}}\right) $\ & $\displaystyle \Gamma_{2} e_{0}^{\prime}\sin \cos $ & $\displaystyle \quad F^{(11)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j})^{12}}\right) $\ & $\displaystyle \Gamma_{3} e_{0}^{\prime}\sin \cos $ & $\displaystyle \quad F^{(3)} \left( \frac{\pi}{2} \right) \neq 0 $ &$\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j})^{4}}\right) $\ & $\displaystyle \Gamma_{4} e_{0}^{\prime}\sin \cos $ & $\displaystyle \quad F^{(3)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j})^{4}}\right) $\ & $\displaystyle 
\quad e_{0}^{\prime}\sin \cos $ & $\displaystyle \quad F^{(3)} \left( \frac{\pi}{2} \right) \neq 0 $ &$\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j})^{4}} \right)$\ All these estimates follow directly from Lemma \[byparts2\] and in particular from $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} F(x) \cos( 2 b x ) dx = \sum_{k=0}^{N} \frac{(-1)^{k}}{a^{2k+2}} \left( (-1)^{b} F^{(2k+1)} \left( \frac{\pi}{2} \right) -F^{(2k+1)} \left( 0 \right) \right) + \mathcal{O} \left( \frac{1}{a^{2N+2}} \right),\end{aligned}$$ as $b \longrightarrow \infty$. However, we illustrate the proof only for the first constant, namely $\omega_{i}\mathbb{C}_{13ji}$. For large values of $i,j$ and in the case where both $\omega_{i} \pm \omega_{j} \longrightarrow \infty$ (equivalently when $\omega_{i} - \omega_{j} \longrightarrow \infty$), $$\begin{aligned} \mathbb{C}_{13ji} &:= \int_{0}^{\frac{\pi}{2}} \left( \Gamma_{1}(x) - \Gamma_{3}(x) \right) e_{i}(x)e_{j}(x)\tan^2(x) dx \\ & \simeq \int_{0}^{\frac{\pi}{2}} \left( \Gamma_{1}(x) - \Gamma_{3}(x) \right) \sin(\omega_{i}x) \sin(\omega_{j}x) dx \\ & = \frac{1}{2} \int_{0}^{\frac{\pi}{2}} \left( \Gamma_{1}(x) - \Gamma_{3}(x) \right) \cos((\omega_{i}-\omega_{j})x) dx - \frac{1}{2} \int_{0}^{\frac{\pi}{2}} \left( \Gamma_{1}(x) - \Gamma_{3}(x) \right) \cos((\omega_{i}+\omega_{j})x) dx. \end{aligned}$$ Observe that both $\omega_{i} + \omega_{j}$ and $\omega_{i} - \omega_{j}$ are even, $$\begin{aligned} \omega_{i} + \omega_{j} = 2(3 +i+j), \quad \omega_{i} - \omega_{j} = 2(i-j). \end{aligned}$$ We define $F(x):=\Gamma_{1}(x) - \Gamma_{3}(x)$. 
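The decay mechanism behind the displayed expansion can be checked on a model function before applying the lemma to $F$ itself. The script below does not use the weights $\Gamma_{a}$ (which are not reproduced here); instead it takes $F(x)=\sin^3(x)$, which also satisfies $F^{\prime}(0)=F^{\prime}(\pi/2)=0$ but has $F^{\prime\prime\prime}(0)=6$, so, with the identification $a=2b$ (our reading of the error term), the expansion predicts $\int_{0}^{\pi/2}F(x)\cos(2bx)\,dx=6/a^{4}+\mathcal{O}(a^{-6})$. Writing $\sin^3(x)=(3\sin x-\sin 3x)/4$ and integrating term by term gives the exact value $6/((1-a^{2})(9-a^{2}))$, which confirms the prediction.

```python
from fractions import Fraction

# Model check of the expansion from Lemma [byparts2], with a = 2b.
# F(x) = sin^3(x) has F'(0) = F'(pi/2) = 0 and F'''(0) = 6, so the lemma
# predicts I(b) := int_0^{pi/2} F(x) cos(2bx) dx = 6/a^4 + O(a^{-6}).
# Term-by-term integration of (3 sin x - sin 3x)/4 gives the exact value:
def I_exact(b):
    a = 2 * b
    return Fraction(6, (1 - a * a) * (9 - a * a))

for b in (10, 20, 40):
    a = 2 * b
    # the deviation from the leading term 6/a^4 is of order a^{-6}
    assert abs(I_exact(b) - Fraction(6, a ** 4)) < Fraction(66, a ** 6)
print("O(a^{-4}) decay confirmed for the model weight")
```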
If $\omega_{i} - \omega_{j} \longrightarrow \infty$, then Lemma \[byparts2\] applies and since $$\begin{aligned} F^{\prime} \left( \frac{\pi}{2}\right) = F^{\prime} \left( 0 \right) =0,~~~F^{\prime \prime \prime} \left(\frac{\pi}{2} \right) = -54 \neq 0\end{aligned}$$ we infer $$\begin{aligned} \mathbb{C}_{13ji}=\mathcal{O}\left( \frac{1}{(\omega_{i} - \omega_{j})^4} + \frac{1}{(\omega_{i}+ \omega_{j})^4} \right),\end{aligned}$$ as $i,j \longrightarrow \infty$. On the other hand, if $\omega_{i}-\omega_{j} \longarrownot\longrightarrow \infty$, then we see that $$\begin{aligned} \left | \mathbb{C}_{13ji} \right| &:=\left| \int_{0}^{\frac{\pi}{2}} \left( \Gamma_{1}(x) - \Gamma_{3}(x) \right) e_{i}(x)e_{j}(x)\tan^2(x) dx \right| \\ & \leq \left\| \Gamma_{1} - \Gamma_{3} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| e_{i}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| e_{j}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim 1.\end{aligned}$$ In conclusion, $$\begin{aligned} \omega_{i} \mathbb{C}_{13ji} = \Bigg \{ \begin{array}{lr} \sum_{\pm}\mathcal{O}\left( \frac{\omega_{i}}{(\omega_{i} \pm \omega_{j})^4} \right), & \text{if } \omega_{i}-\omega_{j} \longrightarrow \infty\\ \mathcal{O}\left(\omega_{i}\right), & \text{if } \omega_{i}-\omega_{j}\longarrownot\longrightarrow \infty, \end{array}\end{aligned}$$ as $i,j \longrightarrow \infty$. Second, we focus on the elements of $\mathcal{A}_{2}$. \[A2\] Let $N\in \mathbb{N}$. The following growth and decay estimates hold. 
[ |l|l|l|l|l| ]{}\ Constant & $F$ & 1st derivative $\neq 0$& $\exists~ \omega_{i} \pm \omega_{j} \pm \omega_{k} \longarrownot\longrightarrow \infty$ & $\forall ~\omega_{i} \pm \omega_{j} \pm \omega_{k} \longrightarrow \infty$\ & $\displaystyle \frac{\Gamma_{1}}{\tan} $ & $\displaystyle \quad F^{(7)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O} \left(\omega_{i} \omega_{k} \right)$ & $\displaystyle \quad \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j}\pm \omega_{k})^{8}} \right)$\ & $\displaystyle \frac{\Gamma_{2}}{\tan} $ & $\displaystyle \quad F^{(9)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O} \left(\omega_{i} \omega_{k} \right)$ & $\displaystyle \quad \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j}\pm \omega_{k})^{10}}\right) $\ & $\displaystyle \frac{\Gamma_{3}}{\tan} $ & $\displaystyle \quad F^{\prime} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O} \left(\omega_{i} \omega_{k} \right)$ & $\displaystyle \quad \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j}\pm \omega_{k})^{2}}\right) $\ & $\displaystyle \frac{\Gamma_{4}}{\tan} $ & $\displaystyle \quad F^{\prime} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O} \left(\omega_{i} \omega_{k} \right)$ & $\displaystyle \quad \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j}\pm \omega_{k})^{2}}\right) $\ & $\displaystyle e_{0}^{\prime} \cos^2 $ & & $\mathcal{O} \left( \omega_{j}\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j}\pm \omega_{k})^{N}}\right) $\ All these estimates follow directly from Lemma \[byparts2\] and in particular from $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} F(x) \sin( (2 b+1) x ) dx = \sum_{k=0}^{N} \frac{(-1)^{k}}{a^{2k+1}} F^{(2k)} \left( 0 \right) + (-1)^{b} \sum_{k=0}^{N} \frac{(-1)^{k}}{a^{2k+2}} F^{(2k+1)} \left( \frac{\pi}{2} \right) + \mathcal{O} \left( \frac{1}{a^{2N+2}} 
\right),\end{aligned}$$ as $b \longrightarrow \infty$. However, we illustrate the proof only for the first constant, namely $\omega_{i}\mathbb{F}_{1jki}$. For large values of $i,j,k$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{k} \longrightarrow \infty$, $$\begin{aligned} \mathbb{F}_{1jki} &:= \int_{0}^{\frac{\pi}{2}} \Gamma_{1}(x) e_{i}(x)e_{j}(x)e_{k}(x) \tan^2(x) dx \\ & \simeq \int_{0}^{\frac{\pi}{2}} \frac{\Gamma_{1}(x)}{\tan(x)} \sin(\omega_{i}x) \sin(\omega_{j}x) \sin(\omega_{k}x) dx \\ & = \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \frac{\Gamma_{1}(x)}{\tan(x)} \sin \left( \left(\omega_{i}-\omega_{j}+\omega_{k} \right) x\right) dx - \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \frac{\Gamma_{1}(x)}{\tan(x)} \sin \left( \left(\omega_{i}-\omega_{j}-\omega_{k} \right) x\right) dx \\ & - \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \frac{\Gamma_{1}(x)}{\tan(x)} \sin \left( \left(\omega_{i}+\omega_{j}+\omega_{k} \right) x\right) dx + \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \frac{\Gamma_{1}(x)}{\tan(x)} \sin \left( \left(\omega_{i}+\omega_{j}-\omega_{k} \right) x\right) dx. \end{aligned}$$ Observe that all $\omega_{i} \pm \omega_{j} \pm \omega_{k}$ are odd, $$\begin{aligned} \omega_{i} - \omega_{j} + \omega_{k} & = 2(i-j+k+1)+1, \\ \omega_{i} - \omega_{j} - \omega_{k} & = 2(i-j-k-2)+1, \\ \omega_{i} + \omega_{j} + \omega_{k} & = 2(i+j+k+4)+1, \\ \omega_{i} + \omega_{j} - \omega_{k} & = 2(i+j-k+1)+1.\end{aligned}$$ We define $F(x):=\frac{\Gamma_{1}(x) }{\tan(x)}$. 
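Before applying the lemma to $F$ itself, the sine expansion quoted before this proof can be confirmed in closed form on a model weight. The script below does not use $\Gamma_{1}/\tan$; it takes $F(x)=\cos(x)$, for which $F^{(2k)}(0)=(-1)^{k}$ and $F^{(2k+1)}(\pi/2)=(-1)^{k+1}$, so that, with the identification $a=2b+1$ (our reading of the error term), the expansion reads $\sum_{k}a^{-(2k+1)}-(-1)^{b}\sum_{k}a^{-(2k+2)}$, while product-to-sum formulas give the exact value $\int_{0}^{\pi/2}\cos(x)\sin((2b+1)x)\,dx=1/(a+(-1)^{b})$; the two agree.

```python
from fractions import Fraction

# Model check of the sine expansion, with a = 2b + 1 and F(x) = cos(x).
def exact(b):
    a = 2 * b + 1
    # product-to-sum: int_0^{pi/2} cos(x) sin((2b+1)x) dx = 1/(a + (-1)^b)
    return Fraction(1, a + (-1) ** b)

def partial_sum(b, N):
    a = 2 * b + 1
    # expansion with F^{(2k)}(0) = (-1)^k, F^{(2k+1)}(pi/2) = (-1)^{k+1}
    s = sum(Fraction(1, a ** (2 * k + 1)) for k in range(N + 1))
    return s - (-1) ** b * sum(Fraction(1, a ** (2 * k + 2)) for k in range(N + 1))

for b in (5, 12, 25):
    a = 2 * b + 1
    # truncating at k = N leaves an error of order a^{-(2N+3)}
    assert abs(exact(b) - partial_sum(b, 3)) < Fraction(2, a ** 9)
print("expansion matches the exact integral")
```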
Lemma \[byparts2\] applies and since $$\begin{aligned} F \left(0 \right) = F^{\prime} \left( \frac{\pi}{2} \right) =0, F^{\prime \prime} \left(0 \right) = F^{\prime \prime \prime} \left( \frac{\pi}{2} \right) =0, F^{(4)} \left(0 \right) = F^{(5)} \left( \frac{\pi}{2} \right) =0, ~~~F^{(7)} \left( \frac{\pi}{2} \right) = \frac{483840}{\pi} \neq 0\end{aligned}$$ we infer $$\begin{aligned} \mathbb{F}_{1jki} = \sum_{\pm} \mathcal{O}\left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{k})^8} \right),\end{aligned}$$ as $i,j,k \longrightarrow \infty$. Finally, for large values of $i,j,k$ such that some $\omega_{i} \pm \omega_{j} \pm \omega_{k} \longarrownot\longrightarrow \infty$, Hölder’s inequality implies $$\begin{aligned} \left| \mathbb{F}_{1jki} \right|& :=\left| \int_{0}^{\frac{\pi}{2}} \Gamma_{1}(x) e_{i}(x)e_{j}(x)e_{k}(x) \tan^2(x) dx \right| \\ & \leq \left\| \Gamma_{1} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| e_{k} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| e_{i}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| e_{j}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim \omega_{k},\end{aligned}$$ as $i,j,k \longrightarrow \infty$. In conclusion, $$\begin{aligned} \omega_{i}\mathbb{F}_{1jki} = \Bigg \{ \begin{array}{lr} \sum_{\pm}\mathcal{O} \left( \frac{\omega_{i}}{(\omega_{i} \pm \omega_{j}\pm \omega_{k})^{8}} \right), & \text{if all }\omega_{i} \pm \omega_{j} \pm \omega_{k} \longrightarrow \infty \\ \mathcal{O} \left(\omega_{i}\omega_{k} \right), & \text{if some } \omega_{i} \pm \omega_{j} \pm \omega_{k} \longarrownot\longrightarrow \infty. \end{array}\end{aligned}$$ Now, we focus on the elements of $\mathcal{A}_{3}$. \[A3\] Let $N\in \mathbb{N}$. The following growth and decay estimates hold.
[ |l|l|l|l|l| ]{}\ Constant & $F$ & 1st derivative $\neq 0$& $ \omega_{i} - \omega_{j} \longarrownot\longrightarrow \infty $ & $ \omega_{i} - \omega_{j} \longrightarrow \infty $\ & $\displaystyle \quad \Gamma_{1} e_{0}^{\prime} $ & $\displaystyle \quad F^{(8)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i} \right)$ & $\displaystyle \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^{9}}\right) $\ & $\displaystyle \quad \Gamma_{2} e_{0}^{\prime} $ & $\displaystyle \quad F^{(10)} \left( \frac{\pi}{2} \right) \neq 0 $ &$\mathcal{O}\left(\omega_{i} \right)$ & $\displaystyle \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^{11}} \right)$\ & $\displaystyle \quad \Gamma_{3} e_{0}^{\prime} $ & $\displaystyle \quad F^{(2)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i} \right)$ & $\displaystyle \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^{3}}\right) $\ & $\displaystyle \quad \Gamma_{4} e_{0}^{\prime} $ & $\displaystyle \quad F^{(2)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i} \right)$ & $\displaystyle \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^{3}}\right) $\ & $\displaystyle \quad\quad e_{0}^{\prime} $ & $\displaystyle \quad F^{(2)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}\right)$ & $\displaystyle \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j})^{3}} \right)$\ & $\displaystyle \Gamma_{1} e_{0} \sin \cos $ & $\displaystyle \quad F^{(10)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left( \omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j})^{11}}\right) $\ & $\displaystyle \Gamma_{2} e_{0} \sin \cos $ & $\displaystyle \quad F^{(12)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle 
\frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O} \left( \frac{1}{ (\omega_{i} \pm \omega_{j})^{13}} \right)$\ & $\displaystyle \Gamma_{3} e_{0} \sin \cos $ & $\displaystyle \quad F^{(4)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O} \left(\frac{1 }{ (\omega_{i} \pm \omega_{j})^{5}} \right)$\ & $\displaystyle \Gamma_{4} e_{0} \sin \cos $ & $\displaystyle \quad F^{(4)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O} \left(\frac{1 }{ (\omega_{i} \pm \omega_{j})^{5}}\right) $\ & $\displaystyle e_{0} \sin \cos $ & $\displaystyle \quad F^{(4)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j})^{5}}\right) $\ & $\displaystyle (e_{0}^{\prime})^2 \sin \cos $ & & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O} \left(\frac{1 }{ (\omega_{i} \pm \omega_{j})^{N}}\right) $\ & $\displaystyle ~~ e_{0}^2 \sin \cos $ & &$\mathcal{O}\left(\omega_{i}^{-1}\right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j})^{N}}\right) $\ All these estimates follow directly from Lemma \[byparts2\] and in particular from $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} F(x) \sin( 2 b x ) dx = \sum_{k=0}^{N} \frac{(-1)^{k+1}}{a^{2k+1}} \left( (-1)^{b} F^{(2k)} \left( \frac{\pi}{2} \right) -F^{(2k)} \left( 0 \right) \right) + \mathcal{O} \left( \frac{1}{a^{2N+2}} \right),\end{aligned}$$ as $b \longrightarrow \infty$. However, we illustrate the proof only for the first constant, namely $\omega_{0}\omega_{i}\mathbb{\overline{F}}_{10ji}$. 
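First, the sine expansion just displayed can be confirmed in closed form on a model weight. The script below does not use the actual weight $\Gamma_{1}e_{0}^{\prime}$; it takes $F(x)=\cos(x)$, for which $F^{(2k)}(\pi/2)=0$ and $F^{(2k)}(0)=(-1)^{k}$, so that, with the identification $a=2b$ (our reading of the error term), the expansion reduces to the geometric series $\sum_{k}a^{-(2k+1)}=a/(a^{2}-1)$, while product-to-sum formulas give exactly $\int_{0}^{\pi/2}\cos(x)\sin(2bx)\,dx=a/(a^{2}-1)$; the two agree term by term.

```python
from fractions import Fraction

# Model check of the sine expansion, with a = 2b and F(x) = cos(x).
def exact(b):
    a = 2 * b
    # product-to-sum: int_0^{pi/2} cos(x) sin(2bx) dx = a/(a^2 - 1)
    return Fraction(a, a * a - 1)

def expansion(b, N):
    a = 2 * b
    # F^{(2k)}(pi/2) = 0 and F^{(2k)}(0) = (-1)^k reduce the expansion
    # to the geometric series sum_k 1/a^{2k+1}
    return sum(Fraction(1, a ** (2 * k + 1)) for k in range(N + 1))

for b in (3, 8, 20):
    a = 2 * b
    # the tail after k = N is 1/(a^{2N+1} (a^2 - 1)) < 2/a^{2N+3}
    assert abs(exact(b) - expansion(b, 4)) < Fraction(2, a ** 11)
print("expansion matches the exact integral")
```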
For large values of $i,j$ and in the case where $\omega_{i} - \omega_{j} \longrightarrow \infty$, $$\begin{aligned} \omega_{0}\mathbb{\overline{F}}_{10ji} &:= \int_{0}^{\frac{\pi}{2}} \Gamma_{1}(x) e_{j} (x)e_{0}^{\prime}(x) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \tan^2(x) dx \\ & \simeq \int_{0}^{\frac{\pi}{2}} \Gamma_{1}(x) e_{0}^{\prime}(x) \cos(\omega_{i}x) \sin(\omega_{j}x) dx \\ & = \frac{1}{2} \int_{0}^{\frac{\pi}{2}} \Gamma_{1}(x) e_{0}^{\prime}(x) \sin \left( \left(\omega_{i}+\omega_{j} \right) x\right) dx - \frac{1}{2} \int_{0}^{\frac{\pi}{2}} \Gamma_{1}(x) e_{0}^{\prime}(x) \sin \left( \left(\omega_{i}-\omega_{j} \right) x\right) dx. \end{aligned}$$ Observe that both $\omega_{i} + \omega_{j}$ and $\omega_{i} - \omega_{j}$ are even, $$\begin{aligned} \omega_{i} + \omega_{j} = 2(3 +i+j), \quad \omega_{i} - \omega_{j} = 2(i-j). \end{aligned}$$ We define $F(x):= \Gamma_{1}(x) e_{0}^{\prime}(x) $ and compute $$\begin{aligned} F \left( \frac{\pi}{2} \right) = F \left( 0 \right) =0, \dots, F^{(6)} \left( \frac{\pi}{2} \right) = F^{(6) } \left( 0 \right) =0, ~~~F^{(8)} \left( \frac{\pi}{2} \right) = \sqrt{\frac{2}{\pi}} \frac{46448640}{\pi^3} \neq 0.\end{aligned}$$ Now, Lemma \[byparts2\] yields $$\begin{aligned} \omega_{0}\mathbb{\overline{F}}_{10ji} = \sum_{\pm}\mathcal{O}\left( \frac{1}{(\omega_{i} \pm \omega_{j})^9} \right),\end{aligned}$$ as $i,j \longrightarrow \infty$. 
On the other hand, for large values of $i,j$ such that $\omega_{i} - \omega_{j} \longarrownot\longrightarrow \infty$, Hölder’s inequality implies $$\begin{aligned} \left|\omega_{0}\mathbb{\overline{F}}_{10ji} \right|& :=\left| \int_{0}^{\frac{\pi}{2}} \Gamma_{1}(x) e_{j} (x)e_{0}^{\prime}(x) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \tan^2(x) dx \right| \\ & \leq \left\| \Gamma_{1} e^{\prime}_{0} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| e_{j}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e^{\prime}_i}{\omega_{i}}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim 1,\end{aligned}$$ as $i,j \longrightarrow \infty$. In conclusion, $$\begin{aligned} \omega_{0}\omega_{i} \mathbb{\overline{F}}_{10ji} = \Bigg \{ \begin{array}{lr} \sum_{\pm}\mathcal{O} \left( \frac{\omega_{i}}{(\omega_{i} \pm \omega_{j})^{9}} \right), & \text{if }\omega_{i} - \omega_{j} \longrightarrow \infty \\ \mathcal{O} \left(\omega_{i} \right), & \text{if } \omega_{i} - \omega_{j} \longarrownot\longrightarrow \infty. \end{array}\end{aligned}$$ Finally, we focus on the elements of $\mathcal{A}_{4}$. \[A4\] Let $N\in \mathbb{N}$. The following growth and decay estimates hold.
[ |l|l|l|l|l| ]{}\ Constant & $F$ & 1st derivative $\neq 0$& $ \exists~\omega_{i} \pm \omega_{j} \pm \omega_{k} \longarrownot\longrightarrow \infty $ & $ \forall~\omega_{i} \pm \omega_{j} \pm \omega_{k} \longrightarrow \infty $\ & $\displaystyle ~~ \cos^2 $ & $\displaystyle \quad F^{(2)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k})^{3}} \right) $\ & $\displaystyle ~~ \cos^2 $ & $\displaystyle \quad F^{(2)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k})^{3}}\right) $\ & $\displaystyle \Gamma_{1} \cos^2 $ & $\displaystyle \quad F^{(8)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k})^{9}}\right) $\ & $\displaystyle \Gamma_{2} \cos^2 $ & $\displaystyle \quad F^{(10)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1}\right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k})^{11}} \right)$\ & $\displaystyle \Gamma_{3} \cos^2 $ & $\displaystyle \quad F^{(2)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k})^{3}}\right) $\ & $\displaystyle \Gamma_{4} \cos^2 $ & $\displaystyle \quad F^{(2)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O} \left(\frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k})^{3}}\right) $\ & $\displaystyle \Gamma_{1} \cos^2 $ & $\displaystyle \quad 
F^{(8)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k})^{9}}\right) $\ & $\displaystyle \Gamma_{2} \cos^2 $ & $\displaystyle \quad F^{(10)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k})^{11}}\right) $\ & $\displaystyle \Gamma_{3} \cos^2 $ & $\displaystyle \quad F^{(2)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k})^{3}} \right) $\ & $\displaystyle \Gamma_{4} \cos^2 $ & $\displaystyle \quad F^{(2)} \left( \frac{\pi}{2} \right) \neq 0 $ & $\mathcal{O}\left(\omega_{i}^{-1}\right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k})^{3}}\right) $\ & $\displaystyle e_{0} \cos^2 $ & & $\mathcal{O}\left(\omega_{i}^{-1} \right)$ & $\displaystyle \frac{1}{\omega_{i}} \sum_{\pm}\mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k})^{N}} \right) $\ & $\displaystyle ~~ \frac{e_{0}^{\prime}}{\tan} $ & & $\mathcal{O}\left(\omega_{i}\omega_{k} \right)$ & $\displaystyle \sum_{\pm}\mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k})^{N}}\right) $\ All these estimates follow directly from Lemma \[byparts2\] and in particular from $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} F(x) \cos( (2 b+1) x ) dx = \sum_{k=0}^{N} \frac{(-1)^{k+b}}{a^{2k+1}} F^{(2k)} \left( \frac{\pi}{2} \right) + \sum_{k=0}^{N} \frac{(-1)^{k+1}}{a^{2k+2}} F^{(2k+1)} \left( 0 \right) + \mathcal{O} \left( \frac{1}{a^{2N+2}} \right), \end{aligned}$$ as $b \longrightarrow \infty$. 
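The displayed cosine expansion can be cross-checked in closed form for $F(x)=\cos^2(x)$, the very weight that appears in the treatment of $\overline{\mathbb{I}}_{jki}$ below. With the identification $a=2b+1$ (our reading of the error term), product-to-sum formulas give the exact value $\int_{0}^{\pi/2}\cos^2(x)\cos((2b+1)x)\,dx=(-1)^{b+1}\,2/(a(a^{2}-4))$, and since $F(\pi/2)=0$, $F^{(2k+1)}(0)=0$ and $F^{(2k)}(\pi/2)=(-1)^{k+1}2^{2k-1}$ for $k\geq 1$, the expansion sums to the same value; in particular the leading behaviour is $\mathcal{O}(a^{-3})$, as used below.

```python
from fractions import Fraction

# Cross-check of the cosine expansion for F(x) = cos^2(x), with a = 2b + 1.
def exact(b):
    a = 2 * b + 1
    # product-to-sum: int_0^{pi/2} cos^2(x) cos((2b+1)x) dx
    return (-1) ** (b + 1) * Fraction(2, a * (a * a - 4))

def expansion(b, N):
    a = 2 * b + 1
    # F(pi/2) = 0, F^{(2k+1)}(0) = 0, F^{(2k)}(pi/2) = (-1)^{k+1} 2^{2k-1}
    return (-1) ** (b + 1) * sum(
        Fraction(2 ** (2 * k - 1), a ** (2 * k + 1)) for k in range(1, N + 1)
    )

for b in (4, 9, 15):
    a = 2 * b + 1
    # the geometric tail after k = N is of order (2/a)^{2N} / a^3
    assert abs(exact(b) - expansion(b, 5)) < Fraction(2 ** 12, a ** 13)
print("leading O(a^{-3}) behaviour confirmed for F = cos^2")
```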
However, we illustrate the proof only for the first constant, namely $\overline{\mathbb{I}}_{jki}$. For large $i$, we have $$\begin{aligned} \int_{x}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) d y & \simeq \int_{x}^{\frac{\pi}{2}} \sin(\omega_{i} y) \cos^2(y) d y \\ & = -\frac{2}{\omega_{i}(\omega_{i}^2-4)} \cos(\omega_{i}x) +\frac{\omega_{i}\cos^2(x)}{\omega_{i}^2-4} \cos(\omega_{i}x) + \frac{\sin(2x)}{\omega_{i}^2-4} \sin(\omega_{i}x) \\ & \simeq \frac{1}{\omega_{i}} \cos^2(x) \cos(\omega_{i}x).\end{aligned}$$ Now, for large values of $i,j,k$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{k} \longrightarrow \infty$, $$\begin{aligned} \overline{\mathbb{I}}_{jki} &:= \int_{0}^{\frac{\pi}{2}} e_{j} (x) e_{k}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) d y \right) \tan^2(x) dx \\ & \simeq \frac{1}{\omega_{i}} \int_{0}^{\frac{\pi}{2}} \cos^2(x) \cos(\omega_{i}x)\sin(\omega_{j}x)\sin(\omega_{k}x) dx \\ & = \frac{1}{\omega_{i}} \Bigg( \int_{0}^{\frac{\pi}{2}} \cos^2(x) \cos \left( \left( \omega_{i} +\omega_{j} - \omega_{k} \right) x \right) dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \cos \left( \left( \omega_{i} + \omega_{j} + \omega_{k} \right) x \right) dx \\ & + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \cos \left( \left( \omega_{i} - \omega_{j} - \omega_{k} \right) x \right) dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \cos \left( \left( \omega_{i} - \omega_{j} + \omega_{k} \right) x \right) dx \Bigg). 
\end{aligned}$$ Observe that all the combinations $\omega_{i} \pm \omega_{j} \pm \omega_{k}$ are odd: $$\begin{aligned} \omega_{i} - \omega_{j} + \omega_{k} & = 2(i-j+k+1)+1, \\ \omega_{i} - \omega_{j} - \omega_{k} & = 2(i-j-k-2)+1, \\ \omega_{i} + \omega_{j} + \omega_{k} & = 2(i+j+k+4)+1, \\ \omega_{i} + \omega_{j} - \omega_{k} & = 2(i+j-k+1)+1.\end{aligned}$$ We define $F(x):= \cos^2(x) $ and compute $$\begin{aligned} F \left( \frac{\pi}{2} \right) = F^{\prime} \left( 0 \right) =0, ~~~F^{\prime \prime} \left( \frac{\pi}{2} \right) = 2 \neq 0.\end{aligned}$$ Lemma \[byparts2\] yields $$\begin{aligned} \overline{\mathbb{I}}_{jki} = \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O}\left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{k})^3} \right),\end{aligned}$$ as $i,j,k \longrightarrow \infty$, whereas, for large values of $i,j,k$ such that some $\omega_{i} \pm \omega_{j} \pm \omega_{k} \longarrownot\longrightarrow \infty$, Hölder’s inequality implies $$\begin{aligned} \left| \overline{\mathbb{I}}_{jki} \right|& :=\left| \int_{0}^{\frac{\pi}{2}} e_{j} (x) e_{k}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) d y \right) \tan^2(x) dx \right| \\ & \leq \left\| \int_{\cdot}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) d y \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| e_{j}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| e_{k}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim \frac{1}{\omega_{i}}.\end{aligned}$$ In conclusion, $$\begin{aligned} \overline{\mathbb{I}}_{jki} = \Bigg \{ \begin{array}{lr} \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O}\left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{k})^3} \right), & \text{if all } \omega_{i} \pm \omega_{j} \pm \omega_{k} \longrightarrow \infty \\ \mathcal{O} \left(\frac{1}{\omega_{i}} \right), & \text{if some }\omega_{i} \pm \omega_{j} \pm \omega_{k} \longarrownot\longrightarrow \infty.
\end{array}\end{aligned}$$ ### Fourier constants in $\mathcal{B}$ We write $$\begin{aligned} \mathcal{B} = \mathcal{B}_{1} \cup \mathcal{B}_{2}\end{aligned}$$ where $$\begin{aligned} & \mathcal{B}_{1}:=\left \{ \omega_{i}\mathbb{H}_{0jki},\omega_{i}\mathbb{G}_{jki}, \omega_{i}\overline{\mathbb{G}}_{jki}\right \} \\ & \mathcal{B}_{2}:=\left \{\frac{\mathbb{R}_{klji}}{\omega_{i}},\frac{\mathbb{S}_{jkli}}{\omega_{i}},\omega_{i}\mathbb{H}_{klji},\omega_{i}\overline{\mathbb{H}}_{ijkl}, \overline{\mathbb{R}}_{klji},\overline{\mathbb{S}}_{jkli} \right \}\end{aligned}$$ \[B1\] Let $N\in \mathbb{N}$. The following growth and decay estimates hold. [ |l|l|l|]{}\ Constant & $\exists~ \omega_{i} \pm \omega_{j} \pm \omega_{k} \longarrownot\longrightarrow \infty $ & $\forall~ \omega_{i} \pm \omega_{j} \pm \omega_{k} \longrightarrow \infty $\ & $\mathcal{O}\left(\omega_{i}\omega_{k} \right) $ &$\displaystyle \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k})^{N}}\right) $\ & $\mathcal{O}\left(\omega_{i}\omega_{k} \right) $ &$\displaystyle \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i}}{(\omega_{i} \pm \omega_{j} \pm \omega_{k} )^2} \right) $\ First, observe that $$\begin{aligned} \omega_{i} - \omega_{j} + \omega_{k} & = 2(i-j+k+1)+1, \\ \omega_{i} - \omega_{j} - \omega_{k} & = 2(i-j-k-2)+1, \\ \omega_{i} + \omega_{j} + \omega_{k} & = 2(i+j+k+4)+1, \\ \omega_{i} + \omega_{j} - \omega_{k} & = 2(i+j-k+1)+1,\end{aligned}$$ are all odd. All results here follow from Lemma \[OscillatoryIntegrals\] and Remark \[e0oscilating\]. 
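The parity bookkeeping above is mechanical but easy to get wrong; the sketch below double-checks it, assuming the frequencies have the form $\omega_{n}=2n+3$ (an assumption inferred from the displayed identities, not restated in this excerpt):

```python
# Assumption (inferred from the displayed identities, not stated here): omega_n = 2n + 3.
def omega(n):
    return 2 * n + 3

def check_parity(i, j, k):
    # The four identities displayed above.
    assert omega(i) - omega(j) + omega(k) == 2 * (i - j + k + 1) + 1
    assert omega(i) - omega(j) - omega(k) == 2 * (i - j - k - 2) + 1
    assert omega(i) + omega(j) + omega(k) == 2 * (i + j + k + 4) + 1
    assert omega(i) + omega(j) - omega(k) == 2 * (i + j - k + 1) + 1
    # In particular, every combination omega_i +/- omega_j +/- omega_k is odd.
    for s in (omega(i) - omega(j) + omega(k), omega(i) - omega(j) - omega(k),
              omega(i) + omega(j) + omega(k), omega(i) + omega(j) - omega(k)):
        assert s % 2 == 1

for i in range(6):
    for j in range(6):
        for k in range(6):
            check_parity(i, j, k)
print("parity identities verified")
```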
For large values of $i,j,k$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{k} \longrightarrow \infty$, $$\begin{aligned} \mathbb{H}_{0jki} &:= \int_{0}^{\frac{\pi}{2}} e_{0}(x) e_{i}(x) e_{j}(x) e_{k}(x) \tan^2(x) dx \\ & \simeq \int_{0}^{\frac{\pi}{2}} e_{0}(x) \frac{\sin(\omega_{i}x) \sin(\omega_{j}x) \sin(\omega_{k}x) }{\tan(x)} dx \\ & = \frac{1}{4} \int_{0}^{\frac{\pi}{2}} e_{0}(x)\frac{\sin\left( \left( \omega_{i} - \omega_{j} + \omega_{k} \right)x \right) }{\tan(x)} dx - \frac{1}{4} \int_{0}^{\frac{\pi}{2}}e_{0}(x) \frac{\sin\left( \left( \omega_{i} - \omega_{j} - \omega_{k} \right)x \right) }{\tan(x)} dx \\ & - \frac{1}{4} \int_{0}^{\frac{\pi}{2}} e_{0}(x)\frac{\sin\left( \left( \omega_{i} + \omega_{j} + \omega_{k} \right)x \right) }{\tan(x)} dx + \frac{1}{4} \int_{0}^{\frac{\pi}{2}} e_{0}(x) \frac{\sin\left( \left( \omega_{i} + \omega_{j} - \omega_{k} \right)x \right) }{\tan(x)} dx.\end{aligned}$$ By Remark \[e0oscilating\], we infer that in this case $$\begin{aligned} \mathbb{H}_{0jki} & = \frac{1}{4} \left( 2 \left(2\sqrt{2 \pi}-2\sqrt{2 \pi}\right) + \sum_{\pm } \mathcal{O} \left( \frac{1}{\left( \omega_{i} \pm \omega_{j} \pm \omega_{k} \right)^{N}} \right) \right) = \sum_{\pm } \mathcal{O} \left( \frac{1}{\left( \omega_{i} \pm \omega_{j} \pm \omega_{k} \right)^{N}} \right),\end{aligned}$$ as $i,j,k \longrightarrow \infty$, whereas, for large values of $i,j,k$ such that some $\omega_{i} \pm \omega_{j} \pm \omega_{k} \longarrownot\longrightarrow \infty$, Holder’s inequality implies $$\begin{aligned} \left| \mathbb{H}_{0jki} \right|& :=\left| \int_{0}^{\frac{\pi}{2}} e_{0}(x) e_{i}(x) e_{j}(x) e_{k}(x) \tan^2(x) dx \right| \\ & \leq \left\| e_{0} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| e_{k} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| e_{i}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| e_{j}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim \omega_{k}.\end{aligned}$$ Second, 
for large values of $i,j,k$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{k} \longrightarrow \infty$, we have $$\begin{aligned} \mathbb{G}_{jki} &:= \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{k}(x) \tan^2(x) dx \\ & \simeq \int_{0}^{\frac{\pi}{2}} \frac{\sin(\omega_{i}x) \sin(\omega_{j}x) \sin(\omega_{k}x) }{\tan(x)} dx \\ & = \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \frac{\sin\left( \left( \omega_{i} - \omega_{j} + \omega_{k} \right)x \right) }{\tan(x)} dx- \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \frac{\sin\left( \left( \omega_{i} - \omega_{j} - \omega_{k} \right)x \right) }{\tan(x)} dx \\ & - \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \frac{\sin\left( \left( \omega_{i} + \omega_{j} + \omega_{k} \right)x \right) }{\tan(x)} dx + \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \frac{\sin\left( \left( \omega_{i} + \omega_{j} - \omega_{k} \right)x \right) }{\tan(x)} dx.\end{aligned}$$ Hence, by Lemma \[OscillatoryIntegrals\], we get that in this case $$\begin{aligned} \mathbb{G}_{jki} = \frac{1}{4} \left( 2 \left( \frac{\pi}{2} - \frac{\pi}{2} \right)+ \sum_{\pm} \mathcal{O} \left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{k} )^2} \right) \right) = \sum_{\pm} \mathcal{O} \left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{k} )^2} \right),\end{aligned}$$ as $i,j,k \longrightarrow \infty$. 
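The cancellation $2\left(\frac{\pi}{2}-\frac{\pi}{2}\right)$ in $\mathbb{G}_{jki}$ rests on each integral $\int_0^{\pi/2} \sin(ax)/\tan(x)\,dx$ tending to $\frac{\pi}{2}$, with an error of order $a^{-2}$, as the odd frequency $a=\omega_{i}\pm\omega_{j}\pm\omega_{k}$ grows. A hedged numerical spot check (the quadrature routine and sample frequencies are illustrative):

```python
import math

def midpoint_quad(f, a, b, n=100_000):
    # Composite midpoint rule; the integrand below is bounded on (0, pi/2).
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def sine_over_tan(a):
    # int_0^{pi/2} sin(a x) / tan(x) dx; the integrand tends to a at x = 0 and to 0 at pi/2.
    return midpoint_quad(lambda x: math.sin(a * x) / math.tan(x), 0.0, math.pi / 2)

for a in (11, 21, 41):                        # odd, like omega_i +/- omega_j +/- omega_k
    print(a, sine_over_tan(a) - math.pi / 2)  # the gap shrinks roughly like 1/a^2
```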
On the other hand, for large values of $i,j,k$ such that some $\omega_{i} \pm \omega_{j} \pm \omega_{k} \longarrownot\longrightarrow \infty$, Holder’s inequality implies $$\begin{aligned} \left| \mathbb{G}_{jki} \right|& :=\left| \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{k}(x) \tan^2(x) dx \right| \\ & \leq \left \| e_{k} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| e_{i}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| e_{j}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim \omega_{k}.\end{aligned}$$ Similarly, for large values of $i,j,k$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{k} \longrightarrow \infty$, $$\begin{aligned} \overline{\mathbb{G}}_{jki} &:= \int_{0}^{\frac{\pi}{2}} e_{k}(x) \frac{e_{j}^{\prime}(x)}{\omega_{j}} \frac{e_{i}^{\prime}(x)}{\omega_{i}} \tan^2(x) dx \\ & \simeq \int_{0}^{\frac{\pi}{2}} \frac{\sin(\omega_{k}x) \cos(\omega_{j}x) \cos(\omega_{i}x) }{\tan(x)} dx \\ & = \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \frac{\sin\left( \left( \omega_{i} - \omega_{j} + \omega_{k} \right)x \right) }{\tan(x)} dx - \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \frac{\sin\left( \left( \omega_{i} - \omega_{j} - \omega_{k} \right)x \right) }{\tan(x)} dx \\ & + \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \frac{\sin\left( \left( \omega_{i} + \omega_{j} + \omega_{k} \right)x \right) }{\tan(x)} dx - \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \frac{\sin\left( \left( \omega_{i} + \omega_{j} - \omega_{k} \right)x \right) }{\tan(x)} dx\end{aligned}$$ and finally $$\begin{aligned} \overline{\mathbb{G}}_{jki} = \frac{1}{4} \left( 2 \left( \frac{\pi}{2} - \frac{\pi}{2} \right)+ \sum_{\pm} \mathcal{O} \left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{k} )^2} \right) \right) = \sum_{\pm} \mathcal{O} \left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{k} )^2} \right),\end{aligned}$$ as $i,j,k \longrightarrow \infty$. 
However, for large values of $i,j,k$ such that some $\omega_{i} \pm \omega_{j} \pm \omega_{k} \longarrownot\longrightarrow \infty$, Holder’s inequality implies $$\begin{aligned} \left| \overline{\mathbb{G}}_{jki} \right|& :=\left| \int_{0}^{\frac{\pi}{2}} e_{k}(x) \frac{e_{j}^{\prime}(x)}{\omega_{j}} \frac{e_{i}^{\prime}(x)}{\omega_{i}} \tan^2(x) dx \right| \\ & \leq \left \| e_{k} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e^{\prime}_{i}}{\omega_{i}}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e^{\prime}_{j}}{\omega_{j}}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim \omega_{k}.\end{aligned}$$ \[B2\] Let $N\in \mathbb{N}$. The following growth and decay estimates hold. [ |l|l|l|]{}\ Constant & $\exists~ \omega_{i} \pm \omega_{j} \pm \omega_{k} \pm \omega_{l} \longarrownot\longrightarrow \infty $ & $ \forall~\omega_{i} \pm \omega_{j} \pm \omega_{k} \pm \omega_{l} \longrightarrow \infty $\ & $\displaystyle \mathcal{O} \left( \omega_{j}\omega_{i}^{-1}\right) $ &$\displaystyle \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k} \pm \omega_{l})^{N}}\right) $\ & $\displaystyle \mathcal{O} \left(\omega_{i} \omega_{j} \omega_{k}\right) $ &$\displaystyle \quad \sum_{\pm} \mathcal{O} \left( \frac{\omega_{i} }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k} \pm \omega_{l})^{3}}\right) $\ & $\displaystyle \mathcal{O} \left( \omega_{j}\omega_{i}^{-1} \right) $ &$\displaystyle \frac{1}{\omega_{i}} \sum_{\pm} \mathcal{O} \left( \frac{1 }{ (\omega_{i} \pm \omega_{j} \pm \omega_{k} \pm \omega_{l})^{N}} \right)$\ First, observe that $$\begin{aligned} \omega_{i} - \omega_{j} + \omega_{k} - \omega_{l} & = 2 (i - j + k - l),\\ \omega_{i} + \omega_{j} - \omega_{k} - \omega_{l} & = 2 (i + j - k - l), \\ \omega_{i} - \omega_{j} - \omega_{k} - \omega_{l}& = 2 (-3 + i - j - k - l), \\ \omega_{i} + \omega_{j} + \omega_{k} - \omega_{l} & = 2 (3 + i + j + k - l),\\ \omega_{i} - 
\omega_{j} - \omega_{k} + \omega_{l} & = 2 (i - j - k + l), \\ \omega_{i} + \omega_{j} + \omega_{k} + \omega_{l} & = 2 (6 + i + j + k + l), \\ \omega_{i} - \omega_{j} + \omega_{k} + \omega_{l} & = 2 (3 + i - j + k + l), \\ \omega_{i} + \omega_{j} - \omega_{k} + \omega_{l} & = 2 (3 + i + j - k + l), \end{aligned}$$ are all even. All results here follow from Lemma \[OscillatoryIntegrals\]. For large values of $i,j,k,l$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{k}\pm \omega_{l} \longrightarrow \infty$, $$\begin{aligned} \mathbb{R}_{klji} &:= \int_{0}^{\frac{\pi}{2}} e_{j}(x) \frac{e_{k}^{\prime}(x)}{\omega_{k}} \frac{e_{l}^{\prime}(x)}{\omega_{l}} \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{\sin^3(x)}{\cos(x)} dx \\ & \simeq \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \cos(\omega_{i}x) \sin(\omega_{j}x) \cos(\omega_{k}x) \cos(\omega_{l}x) }{\tan(x)} dx \\ & = \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} + \omega_{k} + \omega_{l} \right) x \right) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} + \omega_{k} + \omega_{l} \right) x \right) }{\tan(x)} dx \\ & + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} + \omega_{k} - \omega_{l} \right) x \right) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} + \omega_{k} - \omega_{l} \right) x \right) }{\tan(x)} dx \\ & + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} - \omega_{k} + \omega_{l} \right) x \right) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} - \omega_{k} + \omega_{l} \right) x \right )}{\tan(x)} dx \\ & + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} - \omega_{k} - \omega_{l} \right) x \right)) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - 
\omega_{j} - \omega_{k} - \omega_{l} \right) x \right) }{\tan(x)} dx \\ &= \left( 4 \left( \frac{\pi}{2} - \frac{\pi}{2} \right) + \sum_{\pm} \mathcal{O} \left( \frac{1}{\left( \omega_{i} \pm \omega_{j} \pm \omega_{k} \pm \omega_{l} \right)^{N}} \right) \right) \\ & = \sum_{\pm} \mathcal{O} \left( \frac{1}{\left( \omega_{i} \pm \omega_{j} \pm \omega_{k} \pm \omega_{l} \right)^{N}} \right). \end{aligned}$$ On the other hand, in the case where some $\omega_{i} \pm \omega_{j} \pm \omega_{k} \pm \omega_{l} \longarrownot\longrightarrow \infty$, Hölder’s inequality implies $$\begin{aligned} \left| \frac{\mathbb{R}_{klji}}{\omega_{i}} \right| & = \frac{1}{\omega_{i}} \left| \int_{0}^{\frac{\pi}{2}} e_{j}(x) \cos(x) \sin(x)\frac{e_{k}^{\prime}(x)}{\omega_{k}} \frac{e_{l}^{\prime}(x)}{\omega_{l}} \frac{e_{i}^{\prime}(x)}{\omega_{i}} \tan^2(x) dx \right| \\ & \leq \frac{1}{\omega_{i}} \left \| e_{j} \right\|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| \cos \sin \frac{e_{k}^{\prime}}{\omega_{k}} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e_{l}^{\prime}}{\omega_{l}}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e_{i}^{\prime}}{\omega_{i}}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim \frac{\omega_{j}}{\omega_{i}},\end{aligned}$$ where we used the $L^{\infty}$ bounds as well as the orthogonality from Lemma \[Linftyboundse\].
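For the weighted integrals appearing in the expansion of $\mathbb{R}_{klji}$, the same cancellation mechanism applies: each $\int_0^{\pi/2}\cos^2(x)\sin(ax)/\tan(x)\,dx$ equals $\frac{\pi}{2}$ up to rapidly decaying corrections once the even frequency $a$ is large. A numerical sketch (illustrative frequencies, not from the text):

```python
import math

def midpoint_quad(f, a, b, n=100_000):
    # Composite midpoint rule; the integrand below is bounded on (0, pi/2).
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def weighted_sine_over_tan(a):
    # int_0^{pi/2} cos^2(x) sin(a x) / tan(x) dx, the building block of the R_{klji} expansion.
    return midpoint_quad(lambda x: math.cos(x) ** 2 * math.sin(a * x) / math.tan(x),
                         0.0, math.pi / 2)

for a in (12, 30, 60):  # even, like omega_i +/- omega_j +/- omega_k +/- omega_l
    print(a, weighted_sine_over_tan(a) - math.pi / 2)  # negligibly small
```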
Furthermore, for large values of $i,j,k,l$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{k}\pm \omega_{l} \longrightarrow \infty$, $$\begin{aligned} \mathbb{S}_{jkli} &:= \int_{0}^{\frac{\pi}{2}} e_{j}(x) e_{k}(x) e_{l} (x) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{\sin^3(x)}{\cos(x)} dx \\ & \simeq \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \cos(\omega_{i}x) \sin(\omega_{j}x) \sin(\omega_{k}x) \sin(\omega_{l}x) }{\tan(x)} dx \\ & = - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} + \omega_{k} + \omega_{l} \right) x \right) }{\tan(x)} dx + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} + \omega_{k} + \omega_{l} \right) x \right) }{\tan(x)} dx \\ & + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} + \omega_{k} - \omega_{l} \right) x \right) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} + \omega_{k} - \omega_{l} \right) x \right) }{\tan(x)} dx \\ & + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} - \omega_{k} + \omega_{l} \right) x \right) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} - \omega_{k} + \omega_{l} \right) x \right) }{\tan(x)} dx \\ & - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} + \omega_{j} - \omega_{k} - \omega_{l} \right) x \right) }{\tan(x)} dx + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin \left( \left( \omega_{i} - \omega_{j} - \omega_{k} - \omega_{l} \right) x \right) }{\tan(x)} dx \\ &= 4 \left( \frac{\pi}{2} - \frac{\pi}{2} \right) + \sum_{\pm} \mathcal{O} \left( \frac{1}{\left( \omega_{i} - \omega_{j} - \omega_{k} + \omega_{l} \right)^{N}} \right) \\ &= \sum_{\pm} \mathcal{O} \left( \frac{1}{\left( \omega_{i} - \omega_{j} - \omega_{k} + \omega_{l} \right)^{N}} \right).\end{aligned}$$ On the other hand, in the case where some $\omega_{i} \pm 
\omega_{j} \pm \omega_{k}\pm \omega_{l} \longarrownot\longrightarrow \infty$, Holder’s inequality implies $$\begin{aligned} \left| \frac{\mathbb{S}_{jkli}}{\omega_{i}} \right| & = \frac{1}{\omega_{i}} \left| \int_{0}^{\frac{\pi}{2}} e_{j}(x) \cos(x) \sin(x)\frac{e_{i}^{\prime}(x)}{\omega_{i}} e_{k}(x) e_{l}(x) \tan^2(x) dx \right| \\ & \leq \frac{1}{\omega_{i}} \left \| e_{j} \right\|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| \cos \sin \frac{e_{i}^{\prime}}{\omega_{i}} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| e_{k}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| e_{l}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \leq \frac{\omega_{j}}{\omega_{i}},\end{aligned}$$ where we used the $L^{\infty}$ bounds and the orthogonality from Lemma \[Linftyboundse\]. Similarly, for large values of $i,j,k,l$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{k}\pm \omega_{l} \longrightarrow \infty$, $$\begin{aligned} \mathbb{H}_{klji} & := \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{k}(x) e_{l}(x) \tan^{2}(x) dx \\ & \simeq \int_{0}^{\frac{\pi}{2}} \frac{\sin(\omega_{i}x) \sin(\omega_{j}x) \sin(\omega_{k}x) \sin(\omega_{l}x) }{\tan^2(x)} dx \\ & \simeq \frac{1}{8} \int_{ \frac{2 \pi}{ \omega_{i} -\omega_{j} + \omega_{k} - \omega_{l} } }^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} -\omega_{j} + \omega_{k} - \omega_{l} \right)x \right)}{\tan^2(x)} dx +\frac{1}{8} \int_{ \frac{2 \pi}{ \omega_{i} + \omega_{j} - \omega_{k} - \omega_{l} }}^{\frac{\pi}{2} } \frac{\cos\left( \left(\omega_{i} + \omega_{j} - \omega_{k} - \omega_{l} \right)x \right)}{\tan^2(x)} dx \\ & - \frac{1}{8} \int_{ \frac{2 \pi}{ \omega_{i} -\omega_{j} - \omega_{k} - \omega_{l}}}^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} -\omega_{j} - \omega_{k} - \omega_{l} \right)x \right)}{\tan^2(x)} dx -\frac{1}{8} \int_{\frac{2 \pi}{ \omega_{i} + \omega_{j} + \omega_{k} - \omega_{l}}}^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} + \omega_{j} + 
\omega_{k} - \omega_{l} \right)x \right)}{\tan^2(x)} dx \\ & +\frac{1}{8} \int_{\frac{2 \pi}{ \omega_{i} -\omega_{j} - \omega_{k} + \omega_{l}}}^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} -\omega_{j} - \omega_{k} + \omega_{l} \right)x \right)}{\tan^2(x)} dx +\frac{1}{8} \int_{\frac{2 \pi}{ \omega_{i} + \omega_{j} + \omega_{k} + \omega_{l}}}^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} + \omega_{j} + \omega_{k} + \omega_{l} \right)x \right)}{\tan^2(x)} dx \\ & -\frac{1}{8} \int_{\frac{2 \pi}{ \omega_{i} -\omega_{j} + \omega_{k} + \omega_{l}}}^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} -\omega_{j} + \omega_{k} + \omega_{l} \right)x \right)}{\tan^2(x)} dx -\frac{1}{8} \int_{\frac{2 \pi}{ \omega_{i} + \omega_{j} - \omega_{k} + \omega_{l}}}^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} + \omega_{j} - \omega_{k} + \omega_{l} \right)x \right)}{\tan^2(x)} dx \\ & = \frac{c}{16 \pi} \Big( +(\omega_{i} -\omega_{j} + \omega_{k} - \omega_{l}) +( \omega_{i} + \omega_{j} - \omega_{k} - \omega_{l} ) -( \omega_{i} -\omega_{j} - \omega_{k} - \omega_{l} ) \\ & -( \omega_{i} +\omega_{j} + \omega_{k} - \omega_{l} ) +( \omega_{i} -\omega_{j} - \omega_{k} + \omega_{l} ) +( \omega_{i} +\omega_{j} + \omega_{k} + \omega_{l} ) \\ & -( \omega_{i} -\omega_{j} + \omega_{k} + \omega_{l} ) -( \omega_{i} + \omega_{j} - \omega_{k} + \omega_{l} ) \Big) + \sum_{\pm} \mathcal{O} \left( \frac{1}{ ( \omega_{i} \pm \omega_{j} \pm \omega_{k} \pm \omega_{l} )^{3} } \right) \\ &= \sum_{\pm} \mathcal{O} \left( \frac{1}{ ( \omega_{i} \pm \omega_{j} \pm \omega_{k} \pm \omega_{l} )^{3} } \right).\end{aligned}$$ On the other hand, in the case where some $\omega_{i} \pm \omega_{j} \pm \omega_{k}\pm \omega_{l} \longarrownot\longrightarrow \infty$, Holder’s inequality implies $$\begin{aligned} \left| \mathbb{H}_{klji} \right| & = \left| \int_{0}^{\frac{\pi}{2}} e_{i}(x) e_{j}(x) e_{k}(x) e_{l}(x) \tan^2(x) dx \right| \\ & \leq \left \| e_{j} \right\|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} 
\left \| e_{k} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| e_{i}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| e_{l}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \leq \omega_{j}\omega_{k}.\end{aligned}$$ Furthermore, for large values of $i,j,k,l$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{k}\pm \omega_{l} \longrightarrow \infty$, $$\begin{aligned} \overline{\mathbb{H}}_{ijkl}&:=\int_{0}^{\frac{\pi}{2}} e_{j}(x) e_{k}(x) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{e_{l}^{\prime}(x)}{\omega_{l}}\tan^{2}(x) dx \\ & \simeq \int_{0}^{\frac{\pi}{2}} \frac{\sin(\omega_{j}x) \sin(\omega_{k}x) \cos(\omega_{i}x) \cos(\omega_{l}x) }{\tan^2(x)} dx \\ & \simeq \frac{1}{8} \int_{ \frac{2 \pi}{ \omega_{i} +\omega_{j} - \omega_{k} - \omega_{l} } }^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} +\omega_{j} - \omega_{k} - \omega_{l} \right)x \right)}{\tan^2(x)} dx -\frac{1}{8} \int_{ \frac{2 \pi}{ \omega_{i} - \omega_{j} - \omega_{k} - \omega_{l} }}^{\frac{\pi}{2} } \frac{\cos\left( \left(\omega_{i} - \omega_{j} - \omega_{k} - \omega_{l} \right)x \right)}{\tan^2(x)} dx \\ & + \frac{1}{8} \int_{ \frac{2 \pi}{ \omega_{i} +\omega_{j} - \omega_{k} +\omega_{l}}}^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} +\omega_{j} - \omega_{k} + \omega_{l} \right)x \right)}{\tan^2(x)} dx -\frac{1}{8} \int_{\frac{2 \pi}{ \omega_{i} - \omega_{j} - \omega_{k} + \omega_{l}}}^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} - \omega_{j} - \omega_{k} + \omega_{l} \right)x \right)}{\tan^2(x)} dx \\ & -\frac{1}{8} \int_{\frac{2 \pi}{ \omega_{i} +\omega_{j} + \omega_{k} - \omega_{l}}}^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} +\omega_{j} + \omega_{k} - \omega_{l} \right)x \right)}{\tan^2(x)} dx +\frac{1}{8} \int_{\frac{2 \pi}{ \omega_{i} - \omega_{j} + \omega_{k} - \omega_{l}}}^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} - \omega_{j} + \omega_{k} - \omega_{l} \right)x \right)}{\tan^2(x)} dx \\ & -\frac{1}{8} \int_{\frac{2 \pi}{ 
\omega_{i} +\omega_{j} + \omega_{k} + \omega_{l}}}^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} +\omega_{j} + \omega_{k} + \omega_{l} \right)x \right)}{\tan^2(x)} dx +\frac{1}{8} \int_{\frac{2 \pi}{ \omega_{i} - \omega_{j} + \omega_{k} + \omega_{l}}}^{\frac{\pi}{2}} \frac{\cos\left( \left(\omega_{i} - \omega_{j} + \omega_{k} + \omega_{l} \right)x \right)}{\tan^2(x)} dx\end{aligned}$$ and the rest of the proof coincides with the one above. On the other hand, if some $\omega_{i} \pm \omega_{j} \pm \omega_{k}\pm \omega_{l} \longarrownot\longrightarrow \infty$, Holder’s inequality implies $$\begin{aligned} \left| \overline{\mathbb{H}}_{ijkl} \right| & = \left| \int_{0}^{\frac{\pi}{2}} e_{j}(x) e_{k}(x) \frac{e_{i}^{\prime}(x)}{\omega_{i}} \frac{e_{l}^{\prime}(x)}{\omega_{l}} \tan^2(x) dx \right| \\ & \leq \left \| e_{j} \right\|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| e_{k} \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e_{i}^{\prime}}{\omega_{i}}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e_{l}^{\prime}}{\omega_{l}}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \leq \omega_{j}\omega_{k}.\end{aligned}$$ In addition, for large values of $i,j,k,l$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{k}\pm \omega_{l} \longrightarrow \infty$, $$\begin{aligned} \overline{\mathbb{R}}_{klji} & := \int_{0}^{\frac{\pi}{2}} e_{j}(x) \frac{e_{k}^{\prime}(x)}{\omega_{k}} \frac{e_{l}^{\prime}(x)}{\omega_{l}} \left( \int_{x}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) d y \right) \tan^2(x) dx \\ & \simeq \frac{1}{\omega_{i}} \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \cos(\omega_{i}x)\sin(\omega_{j}x)\cos(\omega_{k}x)\cos(\omega_{l}x) }{\tan(x)} dx \\ & \simeq \frac{1}{8 \omega_{i}} \Bigg( \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}+\omega_{j}+\omega_{k}+\omega_{l} )x) }{\tan(x)} dx -\int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}-\omega_{j}+\omega_{k}+\omega_{l} )x) }{\tan(x)} dx \\ 
& + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}+\omega_{j}+\omega_{k}-\omega_{l} )x) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}-\omega_{j}+\omega_{k}-\omega_{l} )x) }{\tan(x)} dx \\ & + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}+\omega_{j}-\omega_{k}+\omega_{l} )x) }{\tan(x)} dx - \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}-\omega_{j}-\omega_{k}+\omega_{l} )x) }{\tan(x)} dx \\ & + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}+\omega_{j}-\omega_{k}-\omega_{l} )x) }{\tan(x)} dx -\int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}-\omega_{j}-\omega_{k}-\omega_{l} )x) }{\tan(x)} dx \Bigg) \\ & = \frac{1}{8 \omega_{i}} \left( 4 \left( \frac{\pi}{2} -\frac{\pi}{2} \right) + \sum_{\pm} \mathcal{O}\left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{k} \pm \omega_{l})^{N}} \right) \right) \\ & = \frac{1}{ \omega_{i}}\sum_{\pm} \mathcal{O}\left( \frac{1}{(\omega_{i} \pm \omega_{j} \pm \omega_{k} \pm \omega_{l})^{N}} \right).
\end{aligned}$$ On the other hand, in the case where some $\omega_{i} \pm \omega_{j} \pm \omega_{k} \pm \omega_{l} \longarrownot\longrightarrow \infty$, Holder’s inequality implies $$\begin{aligned} \left| \overline{\mathbb{R}}_{klji} \right| & = \left| \int_{0}^{\frac{\pi}{2}} e_{j}(x) \frac{e_{k}^{\prime}(x)}{\omega_{k}} \frac{e_{l}^{\prime}(x)}{\omega_{l}} \left( \int_{x}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) d y \right) \tan^2(x) dx \right| \\ & \leq \left \| e_{j} \right\|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| \int_{\cdot}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) d y \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e_{k}^{\prime}}{\omega_{k}}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]} \left \| \frac{e_{l}^{\prime}}{\omega_{l}}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \leq \frac{\omega_{j}}{\omega_{i}}.\end{aligned}$$ Similarly, for large values of $i,j,k,l$ and in the case where all $\omega_{i} \pm \omega_{j} \pm \omega_{k}\pm \omega_{l} \longrightarrow \infty$, $$\begin{aligned} \overline{\mathbb{S}}_{jkli}& := \int_{0}^{\frac{\pi}{2}} e_{j}(x) e_{k}(x) e_{l}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) d y \right) \tan^2(x) dx \\ & \simeq \frac{1}{\omega_{i}} \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \cos(\omega_{i}x)\sin(\omega_{j}x)\sin(\omega_{k}x)\sin(\omega_{l}x) }{\tan(x)} dx \\ & \simeq \frac{1}{8 \omega_{i}} \Bigg( \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}+\omega_{j}-\omega_{k}+\omega_{l} )x) }{\tan(x)} dx -\int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}-\omega_{j}-\omega_{k}-\omega_{l} )x) }{\tan(x)} dx \\ &- \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}+\omega_{j}+\omega_{k}+\omega_{l} )x) }{\tan(x)} dx + \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}+\omega_{j}+\omega_{k}-\omega_{l} )x) }{\tan(x)} dx \\ &- \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}-\omega_{j}-\omega_{k}+\omega_{l} )x) }{\tan(x)} dx + 
\int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}-\omega_{j}-\omega_{k}-\omega_{l} )x) }{\tan(x)} dx \\ &+ \int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}-\omega_{j}+\omega_{k}+\omega_{l} )x) }{\tan(x)} dx -\int_{0}^{\frac{\pi}{2}} \cos^2(x) \frac{ \sin( (\omega_{i}-\omega_{j}+\omega_{k}-\omega_{l} )x) }{\tan(x)} dx \Bigg)\end{aligned}$$ and the rest of the proof coincides with the one above. On the other hand, if some $\omega_{i} \pm \omega_{j} \pm \omega_{k}\pm \omega_{l} \longarrownot\longrightarrow \infty$, Hölder’s inequality implies $$\begin{aligned} \left|\overline{\mathbb{S}}_{jkli} \right| & = \left| \int_{0}^{\frac{\pi}{2}} e_{j}(x) e_{k}(x) e_{l}(x) \left( \int_{x}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) d y \right) \tan^2(x) dx \right| \\ & \leq \left \| e_{j} \right\|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| \int_{\cdot}^{\frac{\pi}{2}} e_{i}(y) \sin(y) \cos(y) d y \right \|_{L^{\infty}\left[0,\frac{\pi}{2}\right]} \left \| e_{k}\tan \right \|_{L^{2} \left[0,\frac{\pi}{2}\right]} \left \| e_{l}\tan \right \|_{L^{2}\left[0,\frac{\pi}{2}\right]}\\ & \lesssim \frac{\omega_{j}}{\omega_{i}}.\end{aligned}$$ Proof of and ============= In this section we prove the estimates , . We will use the notation $$\begin{aligned} \mathbbm{1}\left( \text{condition} \right)= \begin{cases} 1,\quad \text{if the condition is satisfied} \\ 0, \quad \text{otherwise} \end{cases} \end{aligned}$$ \[lemmaAppendix\] For all $i=0,1,2,\dots$, we have $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} \cos^2 (\omega_{i}x) \tan^2(x)dx = \int_{0}^{\frac{\pi}{2}} \frac{\sin^2 (\omega_{i}x)}{\tan^2(x)} dx = \frac{\pi}{2} \left(\omega_{i}-\frac{1}{2} \right).\end{aligned}$$ Both results follow from Lemma \[Dirichlet\].
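Before carrying out the computation, the closed form in Lemma \[lemmaAppendix\] can be sanity-checked numerically; the sketch below assumes $\omega_{i}=2i+3$ (inferred from the parity identities earlier in the text) and uses an illustrative midpoint quadrature:

```python
import math

def midpoint_quad(f, a, b, n=100_000):
    # Composite midpoint rule; avoids the endpoints, where tan(x) is 0 or infinite.
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def lhs_cos(w):
    # int_0^{pi/2} cos^2(w x) tan^2(x) dx; bounded near pi/2 since w is odd.
    return midpoint_quad(lambda x: (math.cos(w * x) * math.tan(x)) ** 2, 0.0, math.pi / 2)

def lhs_sin(w):
    # int_0^{pi/2} sin^2(w x) / tan^2(x) dx; bounded near 0 (limit w^2).
    return midpoint_quad(lambda x: (math.sin(w * x) / math.tan(x)) ** 2, 0.0, math.pi / 2)

for i in (0, 1, 4):
    w = 2 * i + 3                        # assumed form of omega_i
    target = math.pi / 2 * (w - 0.5)     # the lemma's right-hand side
    print(i, lhs_cos(w) - target, lhs_sin(w) - target)  # both differences near 0
```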
For the first result, we get $$\begin{aligned} & \int_{0}^{\frac{\pi}{2}} \cos^2 (\omega_{i}x) \tan^2(x)dx = \int_{0}^{\frac{\pi}{2}}\left( \frac{\cos (\omega_{i}x)}{\cos(x)} \right)^2 \sin^2(x)dx \\ &\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad = \int_{0}^{\frac{\pi}{2}}\left(1+ 2 \sum_{\mu=1}^{i+1} (-1)^{\mu} \cos(2 \mu x) \right)^2 \sin^2(x)dx \\ & = \int_{0}^{\frac{\pi}{2}}\left(1+ 4 \sum_{\mu=1}^{i+1} (-1)^{\mu} \cos(2 \mu x) + 4 \sum_{\mu,\nu=1}^{i+1} (-1)^{\mu+\nu} \cos(2 \mu x)\cos(2 \nu x) \right) \sin^2(x)dx \\ & = \int_{0}^{\frac{\pi}{2}}\sin^2(x) dx + 4 \sum_{\mu=1}^{i+1} (-1)^{\mu} \int_{0}^{\frac{\pi}{2}}\cos(2 \mu x) \sin^2(x) dx + 4 \sum_{\mu,\nu=1}^{i+1} (-1)^{\mu+\nu}\int_{0}^{\frac{\pi}{2}} \cos(2 \mu x)\cos(2 \nu x) \sin^2(x)dx. \end{aligned}$$ Now, we use the trigonometric identities $$\begin{aligned} \sin^2(x) = \frac{1}{2} - \frac{1}{2} \cos(2x), \quad \quad \cos(a)\cos(b) = \frac{1}{2} \cos(a-b)+ \frac{1}{2} \cos(a+b)\end{aligned}$$ to compute $$\begin{aligned} & \int_{0}^{\frac{\pi}{2}} \sin^2(x) dx = \frac{\pi}{4}, \\ & \int_{0}^{\frac{\pi}{2}}\cos(2 \mu x) \sin^2(x) dx = \frac{1}{2} \int_{0}^{\frac{\pi}{2}}\cos(2 \mu x) dx -\frac{1}{4} \int_{0}^{\frac{\pi}{2}}\cos(2 (\mu-1) x) dx - \frac{1}{4} \int_{0}^{\frac{\pi}{2}}\cos(2 (\mu+1) x) dx \\ &\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad = - \frac{\pi}{8} \mathbbm{1}\left( \mu=1 \right), \\ & \int_{0}^{\frac{\pi}{2}}\cos(2 \mu x)\cos(2 \nu x) \sin^2(x) dx = -\frac{1}{8} \int_{0}^{\frac{\pi}{2}} \cos \left( 2(1-\mu -\nu)x\right)dx + \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \cos \left( 2(\mu -\nu)x\right)dx\\ &\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad -\frac{1}{8} \int_{0}^{\frac{\pi}{2}} \cos \left( 2(1+\mu -\nu)x\right) dx -\frac{1}{8} \int_{0}^{\frac{\pi}{2}} \cos \left( 2(1-\mu +\nu)x\right)dx \\ &\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \cos \left( 
2(\mu +\nu)x\right) dx - \frac{1}{8} \int_{0}^{\frac{\pi}{2}} \cos \left( 2(1+\mu +\nu)x\right) dx \\ & = - \frac{\pi}{16} \mathbbm{1}\left( 1 -\mu-\nu =0 \right) + \frac{\pi}{8} \mathbbm{1}\left( \mu-\nu =0 \right) -\frac{\pi}{16} \mathbbm{1}\left( 1+ \mu -\nu=0 \right) -\frac{\pi}{16} \mathbbm{1}\left( 1- \mu +\nu=0 \right). \end{aligned}$$ Hence, $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} \cos^2 (\omega_{i}x) \tan^2(x)dx & = \frac{\pi}{4} + \frac{\pi}{2} +\frac{\pi}{4} \sum_{\substack{\mu,\nu=1 \\ 1-\mu-\nu=0 }}^{i+1} 1 + \frac{\pi}{2} \sum_{\substack{\mu,\nu=1 \\ \mu-\nu=0 }}^{i+1} 1 + \frac{\pi}{4} \sum_{\substack{\mu,\nu=1 \\ 1+\mu-\nu=0 }}^{i+1} 1 + \frac{\pi}{4} \sum_{\substack{\mu,\nu=1 \\ 1-\mu+\nu=0 }}^{i+1} 1 \\ &= \frac{\pi}{4} + \frac{\pi}{2} +\frac{\pi}{4} \cdot 0 + \frac{\pi}{2} \cdot (1+i) + \frac{\pi}{4} \cdot i + \frac{\pi}{4} \cdot i\\ & = \frac{\pi}{2} \left(\omega_{i}-\frac{1}{2} \right). \end{aligned}$$ Similarly, for the second result, we get $$\begin{aligned} & \int_{0}^{\frac{\pi}{2}} \frac{\sin^2 (\omega_{i}x)}{\tan^2(x)} dx = \int_{0}^{\frac{\pi}{2}} \left( \frac{\sin (\omega_{i}x)}{\sin(x)} \right)^2 \cos^2(x) dx\\ &\quad \quad\quad\quad\quad\quad\quad = \int_{0}^{\frac{\pi}{2}}\left(1+ 2 \sum_{\mu=1}^{i+1} \cos(2 \mu x) \right)^2 \cos^2(x)dx \\ &\quad \quad\quad\quad\quad\quad\quad = \int_{0}^{\frac{\pi}{2}}\left(1+ 4 \sum_{\mu=1}^{i+1} \cos(2 \mu x) + 4 \sum_{\mu,\nu=1}^{i+1} \cos(2 \mu x)\cos(2 \nu x) \right) \cos^2(x)dx \\ & = \int_{0}^{\frac{\pi}{2}}\cos^2(x) dx + 4 \sum_{\mu=1}^{i+1} \int_{0}^{\frac{\pi}{2}}\cos(2 \mu x) \cos^2(x) dx + 4 \sum_{\mu,\nu=1}^{i+1} \int_{0}^{\frac{\pi}{2}} \cos(2 \mu x)\cos(2 \nu x) \cos^2(x)dx. 
\end{aligned}$$ Now, we use the trigonometric identities $$\begin{aligned} \cos^2(x) = \frac{1}{2} + \frac{1}{2} \cos(2x), \quad \quad \cos(a)\cos(b) = \frac{1}{2} \cos(a-b)+ \frac{1}{2} \cos(a+b)\end{aligned}$$ to compute $$\begin{aligned} & \int_{0}^{\frac{\pi}{2}} \cos^2(x) dx = \frac{\pi}{4}, \\ & \int_{0}^{\frac{\pi}{2}}\cos(2 \mu x) \cos^2(x) dx = \frac{1}{2} \int_{0}^{\frac{\pi}{2}}\cos(2 \mu x) dx +\frac{1}{4} \int_{0}^{\frac{\pi}{2}}\cos(2 (\mu-1) x) dx + \frac{1}{4} \int_{0}^{\frac{\pi}{2}}\cos(2 (\mu+1) x) dx \\ &\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad = \frac{\pi}{8} \mathbbm{1}\left( \mu=1 \right), \\ & \int_{0}^{\frac{\pi}{2}}\cos(2 \mu x)\cos(2 \nu x) \cos^2(x) dx = \frac{1}{8} \int_{0}^{\frac{\pi}{2}} \cos \left( 2(1-\mu -\nu)x\right)dx + \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \cos \left( 2(\mu -\nu)x\right)dx\\ &\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad +\frac{1}{8} \int_{0}^{\frac{\pi}{2}} \cos \left( 2(1+\mu -\nu)x\right) dx +\frac{1}{8} \int_{0}^{\frac{\pi}{2}} \cos \left( 2(1-\mu +\nu)x\right)dx \\ &\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + \frac{1}{4} \int_{0}^{\frac{\pi}{2}} \cos \left( 2(\mu +\nu)x\right) dx + \frac{1}{8} \int_{0}^{\frac{\pi}{2}} \cos \left( 2(1+\mu +\nu)x\right) dx \\ & = \frac{\pi}{16} \mathbbm{1}\left( 1 -\mu-\nu =0 \right) + \frac{\pi}{8} \mathbbm{1}\left( \mu-\nu =0 \right) +\frac{\pi}{16} \mathbbm{1}\left( 1+ \mu -\nu=0 \right) +\frac{\pi}{16} \mathbbm{1}\left( 1- \mu +\nu=0 \right). 
\end{aligned}$$ Hence, $$\begin{aligned} \int_{0}^{\frac{\pi}{2}} \frac{\sin^2 (\omega_{i}x)}{\tan^2(x)} dx & = \frac{\pi}{4} + \frac{\pi}{2} +\frac{\pi}{4} \sum_{\substack{\mu,\nu=1 \\ 1-\mu-\nu=0 }}^{i+1} 1 + \frac{\pi}{2} \sum_{\substack{\mu,\nu=1 \\ \mu-\nu=0 }}^{i+1} 1 + \frac{\pi}{4} \sum_{\substack{\mu,\nu=1 \\ 1+\mu-\nu=0 }}^{i+1} 1 + \frac{\pi}{4} \sum_{\substack{\mu,\nu=1 \\ 1-\mu+\nu=0 }}^{i+1} 1 \\ &= \frac{\pi}{4} + \frac{\pi}{2} +\frac{\pi}{4} \cdot 0 + \frac{\pi}{2} \cdot (1+i) + \frac{\pi}{4} \cdot i + \frac{\pi}{4} \cdot i\\ & = \frac{\pi}{2} \left(\omega_{i}-\frac{1}{2} \right). \end{aligned}$$ Closed formulas for the Fourier constants with one index ======================================================== In this section we list closed formulas for the Fourier constants with one index which are used in subsection \[giaAppendix\]. \[LemmaAppendixB\] For all $j=0,1,2,\dots$, we have $$\begin{aligned} \mathbb{C}_{130j} & = \begin{cases} -\frac{3}{\pi},\quad j=0 \\[10pt] \frac{513}{320\pi}\sqrt{3},\quad j=1 \\[10pt] -\frac{39}{80\pi}\sqrt{\frac{3}{2}},\quad j=2 \\[10pt] -\frac{291}{560\pi}\frac{1}{\sqrt{10}},\quad j=3 \\[10pt] \frac{69}{1120\pi}\sqrt{\frac{3}{5}},\quad j=4 \\[10pt] \frac{162}{\pi \sqrt{2}} \frac{(-1)^{j}}{\sqrt{(j+1)(j+2)}} \frac{1}{3+2j} \frac{9+12j+4j^2}{(j-1)j(j+1)(j+2)(j+3)(j+4)},\quad j \geq 5, \end{cases}\\[5pt] \mathbb{C}_{240j} &= \begin{cases} -\frac{279}{16\pi},\quad j=0 \\[10pt] \frac{1683}{320\pi}\sqrt{3},\quad j=1 \\[10pt] \frac{33}{40\pi}\sqrt{\frac{3}{2}},\quad j=2 \\[10pt] -\frac{99}{140\pi}\frac{1}{\sqrt{10}},\quad j=3 \\[10pt] -\frac{9}{280\pi}\sqrt{\frac{3}{5}},\quad j=4 \\[10pt] \frac{162}{\pi \sqrt{2}} \frac{(-1)^{j}}{\sqrt{(j+1)(j+2)}} \frac{1}{3+2j} \frac{9+12j+4j^2}{(j-1)j(j+1)(j+2)(j+3)(j+4)},\quad j \geq 5, \end{cases}\\[5pt] \mathbb{D}_{130j} & = \begin{cases} \frac{505521}{28672\pi^2},\quad j=0 \\[10pt] -\frac{981867}{143360\pi^2}\sqrt{3},\quad j=1 \\[10pt] -\frac{42183}{10240\pi^2}\sqrt{\frac{3}{2}},\quad 
j=2 \\[10pt] \frac{13783689}{788480\pi^2}\frac{1}{\sqrt{10}},\quad j=3 \\[10pt] \frac{1523079}{1576960\pi^2}\sqrt{\frac{3}{5}},\quad j=4 \\[10pt] -\frac{35626779}{20500480\pi^2}\sqrt{\frac{3}{7}},\quad j=5 \\[10pt] -\frac{5325129}{20500480\pi^2}\frac{1}{\sqrt{7}},\quad j=6 \\[10pt] \frac{46545}{585728\pi^2},\quad j=7 \\[10pt] \frac{201477}{3727360\pi^2}\frac{1}{\sqrt{5}},\quad j=8 \\[10pt] \frac{486\sqrt{2}}{\pi^2 } \frac{(-1)^{j}(3+2j)}{\sqrt{(j+1)(j+2)}} \frac{-17325-148200j+8272j^2 + 33264 j^3 +1386 j^4 -1512 j^5 -84 j^6 +24 j^7 +2 j^8}{(j-5)(j-4)(j-3)(j-2)(j-1)j(j+1)(j+2)(j+3)(j+4)(j+5)(j+6)(j+7)(j+8)},\quad j \geq 9, \end{cases}\\[5pt] \mathbb{D}_{240j} & = \begin{cases} \frac{1028457}{4096\pi^2},\quad j=0 \\[10pt] -\frac{236643}{20480\pi^2}\sqrt{3},\quad j=1 \\[10pt] -\frac{883833}{10240\pi^2}\sqrt{\frac{3}{2}},\quad j=2 \\[10pt] -\frac{53130249}{788480\pi^2}\frac{1}{\sqrt{10}},\quad j=3 \\[10pt] \frac{2723691}{143360\pi^2}\sqrt{\frac{3}{5}},\quad j=4 \\[10pt] \frac{266148291}{20500480\pi^2}\sqrt{\frac{3}{7}},\quad j=5 \\[10pt] \frac{11204163}{1863680\pi^2}\frac{1}{\sqrt{7}},\quad j=6 \\[10pt] \frac{5955813}{20500480\pi^2},\quad j=7 \\[10pt] \frac{2926809}{41000960\pi^2}\frac{1}{\sqrt{5}},\quad j=8 \\[10pt] \frac{1458\sqrt{2}}{\pi^2 } \frac{(-1)^{j}(3+2j)}{\sqrt{(j+1)(j+2)}} \frac{39375-261600j -29528 j^2 + 33264 j^3 +1386 j^4 -1512 j^5 -84 j^6 +24 j^7 +2j^8}{(j-5)(j-4)(j-3)(j-2)(j-1)j(j+1)(j+2)(j+3)(j+4)(j+5)(j+6)(j+7)(j+8)},\quad j \geq 9, \end{cases}\\[5pt] \mathbb{E}_{14230j} & = \begin{cases} \frac{225639}{1792\pi^2},\quad j=0 \\[10pt] -\frac{137889}{4480\pi^2}\sqrt{3},\quad j=1 \\[10pt] -\frac{13233}{320\pi^2}\sqrt{\frac{3}{2}},\quad j=2 \\[10pt] \frac{25083}{448\pi^2}\frac{1}{\sqrt{10}},\quad j=3 \\[10pt] \frac{160737}{9856\pi^2}\sqrt{\frac{3}{5}},\quad j=4 \\[10pt] -\frac{285177}{197120\pi^2}\sqrt{\frac{3}{7}},\quad j=5 \\[10pt] -\frac{9313947}{2562560\pi^2\sqrt{7}},\quad j=6 \\[10pt] -\frac{1123959}{2562560\pi^2},\quad j=7 \\[10pt]
-\frac{129009}{2562560\pi^2\sqrt{5}},\quad j=8 \\[10pt] \frac{3888\sqrt{2}}{\pi^2 } \frac{(-1)^{j}(3+2j)}{\sqrt{(j+1)(j+2)}} \frac{-315 + 2892 j + 460 j^2 - 309 j^3 - 29 j^4 + 9 j^5 + j^6}{(j-4)(j-3)(j-2)(j-1)j(j+1)(j+2)(j+3)(j+4)(j+5)(j+6)(j+7)},\quad j \geq 9, \end{cases}\\[5pt] \overline{\mathbb{C}}_{130j} &= \begin{cases} -\frac{105}{16\pi},\quad j=0 \\[10pt] \frac{627}{320\pi}\sqrt{3},\quad j=1 \\[10pt] -\frac{9}{40\pi}\sqrt{\frac{3}{2}},\quad j=2 \\[10pt] -\frac{111}{140\pi}\frac{1}{\sqrt{10}},\quad j=3 \\[10pt] \frac{3}{40\pi}\sqrt{\frac{3}{5}},\quad j=4 \\[10pt] \frac{3}{8\pi \sqrt{2}} \frac{(-1)^{j}}{\sqrt{(j+1)(j+2)}} \frac{18(56+48j+16j^2)}{(j-1)j(j+1)(j+2)(j+3)(j+4)},\quad j \geq 5, \end{cases}\\[5pt] \overline{\mathbb{C}}_{240j} & = \begin{cases} -\frac{27}{\pi},\quad j=0 \\[10pt] \frac{1377}{320\pi}\sqrt{3},\quad j=1 \\[10pt] \frac{87}{80\pi}\sqrt{\frac{3}{2}},\quad j=2 \\[10pt] -\frac{339}{560\pi}\frac{1}{\sqrt{10}},\quad j=3 \\[10pt] -\frac{3}{160\pi}\sqrt{\frac{3}{5}},\quad j=4 \\[10pt] \frac{3}{8\pi \sqrt{2}} \frac{(-1)^{j}}{\sqrt{(j+1)(j+2)}} \frac{18(56+48j+16j^2)}{(j-1)j(j+1)(j+2)(j+3)(j+4)},\quad j \geq 5, \end{cases}\\[5pt] \overline{\mathbb{D}}_{130j} & = \begin{cases} \frac{100521}{4096\pi^2},\quad j=0 \\[10pt] -\frac{8763}{143360\pi^2}\sqrt{3},\quad j=1 \\[10pt] -\frac{677823}{71680\pi^2}\sqrt{\frac{3}{2}},\quad j=2 \\[10pt] \frac{7256121}{788480\pi^2}\frac{1}{\sqrt{10}},\quad j=3 \\[10pt] \frac{714927}{225280\pi^2}\sqrt{\frac{3}{5}},\quad j=4 \\[10pt] -\frac{2521353}{1863680\pi^2}\sqrt{\frac{3}{7}},\quad j=5 \\[10pt] -\frac{10458663}{20500480\pi^2}\frac{1}{\sqrt{7}},\quad j=6 \\[10pt] \frac{81573}{2928640\pi^2},\quad j=7 \\[10pt] \frac{3074073}{41000960\pi^2}\frac{1}{\sqrt{5}},\quad j=8 \\[10pt] \frac{162\sqrt{2}}{\pi^2 } \frac{(-1)^{j}}{\sqrt{(j+1)(j+2)}}\cdot \\ \quad\quad \cdot \frac{407925 - 574350 j - 1017146 j^2 - 240720 j^3 + 188030 j^4 + 64260 j^5 - 6888 j^6 - 3360 j^7 - 10 j^8 + 60 j^9 + 4 
j^{10}}{(j-5)(j-4)(j-3)(j-2)(j-1)j(j+1)(j+2)(j+3)(j+4)(j+5)(j+6)(j+7)(j+8)},\quad j \geq 9, \end{cases}\\[5pt] \overline{\mathbb{D}}_{240j} & = \begin{cases} \frac{5907249}{28672\pi^2},\quad j=0 \\[10pt] \frac{8116251}{143360\pi^2}\sqrt{3},\quad j=1 \\[10pt] -\frac{3585921}{71680\pi^2}\sqrt{\frac{3}{2}},\quad j=2 \\[10pt] -\frac{89919801}{788480\pi^2}\frac{1}{\sqrt{10}},\quad j=3 \\[10pt] \frac{6828951}{1576960\pi^2}\sqrt{\frac{3}{5}},\quad j=4 \\[10pt] \frac{154847739}{20500480\pi^2}\sqrt{\frac{3}{7}},\quad j=5 \\[10pt] \frac{9032301}{1863680\pi^2}\frac{1}{\sqrt{7}},\quad j=6 \\[10pt] \frac{856593}{4100096\pi^2},\quad j=7 \\[10pt] \frac{5437431}{41000960\pi^2}\frac{1}{\sqrt{5}},\quad j=8 \\[10pt] \frac{486\sqrt{2}}{\pi^2 } \frac{(-1)^{j}}{\sqrt{(j+1)(j+2)}} \cdot \\ \quad\quad \cdot \frac{1107225 - 347550 j - 1281746 j^2 - 467520 j^3 + 150230 j^4 + 64260 j^5 - 6888 j^6 - 3360 j^7 - 10 j^8 + 60 j^9 + 4 j^{10}}{(j-5)(j-4)(j-3)(j-2)(j-1)j(j+1)(j+2)(j+3)(j+4)(j+5)(j+6)(j+7)(j+8)}, \quad j \geq 9, \end{cases}\\[5pt] \overline{\mathbb{E}}_{14230j} & = \begin{cases} \frac{253539}{1792\pi^2},\quad j=0 \\[10pt] \frac{68139}{4480\pi^2}\sqrt{3},\quad j=1 \\[10pt] -\frac{113397}{2240\pi^2}\sqrt{\frac{3}{2}},\quad j=2 \\[10pt] -\frac{4317}{448\pi^2}\frac{1}{\sqrt{10}},\quad j=3 \\[10pt] \frac{162663}{9856\pi^2}\sqrt{\frac{3}{5}},\quad j=4 \\[10pt] \frac{231249}{197120\pi^2}\sqrt{\frac{3}{7}},\quad j=5 \\[10pt] -\frac{4113129}{2562560\pi^2}\frac{1}{\sqrt{7}},\quad j=6 \\[10pt] -\frac{1104963}{2562560\pi^2},\quad j=7 \\[10pt] \frac{81519}{2562560\pi^2}\frac{1}{\sqrt{5}},\quad j=8 \\[10pt] \frac{1296\sqrt{2}}{\pi^2 } \frac{(-1)^{j}}{\sqrt{(j+1)(j+2)}} \frac{ -11655 + 4179 j + 15217 j^2 + 6381 j^3 - 1137 j^4 - 729 j^5 + 3 j^6 + 24 j^7 + 2 j^8}{(j-4)(j-3)(j-2)(j-1)j(j+1)(j+2)(j+3)(j+4)(j+5)(j+6)(j+7)},\quad j \geq 9, \end{cases}\\[10pt] \overline{\mathbb{Q}}_{100j} &:= \frac{214035333120}{\pi^{5/2}} (-1)^j \sqrt{(1 + j) (2 + j)} \frac{-3277699425 - 269297823960 j + 293711943816 
j^2}{(1 - 2 j)^2 (3 - 2 j)^2 (5 - 2 j)^2 (7 - 2 j)^2 } \cdots\\ & \cdots\frac{ + 77509866720 j^3 - 99784020400 j^4 - 14916314880 j^5 + 12003789568 j^6 + 1948032000 j^7}{(-13 + 2 j) (-11 + 2 j) (-9 + 2 j) (1 + 2 j)^2 (3 + 2 j) (5 + 2 j)^2}\cdots \\ &\cdots\frac{ - 570197760 j^8 - 120207360 j^9 + 6178816 j^{10} + 2580480 j^{11} + 143360 j^{12}}{(7 + 2 j)^2 (9 + 2 j)^2 (11 + 2 j)^2 (13 + 2 j)^2 (15 + 2 j) (17 + 2 j) (19 + 2 j) },\quad j \geq 0, \\[10pt] \overline{\mathbb{Q}}_{200j} &:= \frac{-113010655887360}{\pi^{5/2}} (-1)^j \sqrt{(1 + j) (2 + j)} \frac{33108075 + 380845380 j - 393913652 j^2 }{(1 - 2 j)^2 (3 - 2 j)^2 (5 - 2 j)^2 (7 - 2 j)^2 }\cdots \\ &\cdots\frac{- 136760640 j^3 + 119740640 j^4 + 28080000 j^5 - 11212416 j^6 - 2933760 j^7}{(-13 + 2 j) (-11 + 2 j) (-9 + 2 j) (1 + 2 j)^2 (3 + 2 j) (5 + 2 j)^2}\cdots \\ &\cdots\frac{+ 239360 j^8 + 107520 j^{9} + 7168 j^{10}}{(7 + 2 j)^2 (9 + 2 j)^2 (11 + 2 j)^2 (13 + 2 j)^2 (15 + 2 j) (17 + 2 j) (19 + 2 j)},\quad j \geq 0, \\[10pt] \overline{\mathbb{P}}_{100j} &:= \frac{-11890851840}{\pi^{5/2}}(-1)^j \sqrt{(1 + j) (2 + j)} \frac{35629018425 + 1577669058180 j - 2316779684100 j^2 }{(1 - 2 j)^2 (3 - 2 j)^2 (5 - 2 j)^2 (7 - 2 j)^2 }\cdots \\ &\cdots \frac{- 6803783904 j^3 + 981811322512 j^4 - 66644159040 j^5 - 178082423360 j^6 + 948476928 j^7 }{(-13 + 2 j) (-11 + 2 j) (-9 + 2 j) (1 + 2 j)^2 (3 + 2 j) (5 + 2 j)^2}\cdots \\ &\cdots \frac{+ 15882539776 j^8 + 1123875840 j^9 - 626191360 j^{10} - 87220224 j^{11} + 6336512 j^{12} }{(7 + 2 j)^2 (9 + 2 j)^2 (11 + 2 j)^2 (13 + 2 j)^2 (15 + 2 j) }\cdots\\ &\cdots\frac{+ 1720320 j^{13} + 81920 j^{14}}{(17 + 2 j) (19 + 2 j)},\quad j \geq 0, \\[10pt] \overline{\mathbb{P}}_{200j} &:= \frac{642105999360}{\pi^{5/2}}(-1)^j\sqrt{(1 + j) (2 + j)} \frac{-2350673325 - 17348688120 j + 22705082600 j^2}{(1 - 2 j)^2 (3 - 2 j)^2 (5 - 2 j)^2 (7 - 2 j)^2 }\cdots \\ &\cdots \frac{+ 2565707616 j^3 - 8972306928 j^4 - 257391360 j^5 + 1438027520 j^6 + 121433088 j^7 }{(-13 + 2 j) (-11 + 2 j) (-9 + 2 j) (1 + 2
j)^2 (3 + 2 j) (5 + 2 j)^2}\cdots \\ &\cdots \frac{97458944 j^8 - 15390720 j^9 + 1812480 j^{10} + 516096 j^{11} + 28672 j^{12}}{(7 + 2 j)^2 (9 + 2 j)^2 (11 + 2 j)^2 (13 + 2 j)^2 (15 + 2 j) (17 + 2 j) (19 + 2 j)},\quad j \geq 0, \\[10pt] \mathbb{Q}_{300j} &:= \frac{1105920}{\pi^{5/2}}(-1)^j\sqrt{(1 + j) (2 + j)} \frac{273378105 - 157311408 j }{(-13 + 2 j) (-11 + 2 j) (-9 + 2 j) (-7 + 2 j)}\cdots \\ &\cdots \frac{- 29892784 j^2 + 13889088 j^3 + 1385184 j^4}{(-5 + 2 j) (-3 + 2 j) (-1 + 2 j) (1 + 2 j) (5 + 2 j) (7 + 2 j)}\cdots \\ &\cdots \frac{- 352512 j^5 - 28416 j^6 + 3072 j^7 + 256 j^8}{(9 + 2 j) (11 + 2 j) (13 + 2 j) (15 + 2 j) (17 + 2 j) (19 + 2 j)},\quad j \geq 0, \\[10pt] \mathbb{Q}_{400j} &:= \frac{3317760}{\pi^{5/2}}(-1)^j\sqrt{(1 + j) (2 + j)} \frac{446350905 - 189244848 j }{(-13 + 2 j) (-11 + 2 j) (-9 + 2 j) (-7 + 2 j)}\cdots \\ &\cdots \frac{- 40537264 j^2 + 13889088 j^3 + 1385184 j^4 }{(-5 + 2 j) (-3 + 2 j) (-1 + 2 j) (1 + 2 j) (5 + 2 j) (7 + 2 j)}\cdots \\ &\cdots \frac{- 352512 j^5 - 28416 j^6 + 3072 j^7 + 256 j^8}{(9 + 2 j) (11 + 2 j) (13 + 2 j) (15 + 2 j) (17 + 2 j) (19 + 2 j)},\quad j \geq 0, \\[10pt] \mathbb{P}_{300j} &:= \frac{-36864}{\pi^{5/2}}(-1)^j \sqrt{(1 + j) (2 + j)} \frac{-9861476625 + 10067010780 j + 608214484 j^2}{(-13 + 2 j) (-11 + 2 j) (-9 + 2 j) (-7 + 2 j) }\cdots \\ &\cdots \frac{- 1532592960 j^3 - 20432800 j^4 + 80991360 j^5 + 2020992 j^6 - 1827840 j^7}{(-5 + 2 j) (-3 + 2 j) (-1 + 2 j) (1 + 2 j) (5 + 2 j) (7 + 2 j)}\cdots \\ &\cdots \frac{- 83200 j^8 + 15360 j^9 + 1024 j^{10}}{(9 + 2 j) (11 + 2 j) (13 + 2 j) (15 + 2 j) (17 + 2 j) (19 + 2 j) },\quad j \geq 0, \\[10pt] \mathbb{P}_{400j} &:= \frac{-110592}{\pi^{5/2}}(-1)^j\sqrt{(1 + j) (2 + j)} \frac{-12888500625 + 11300802780 j + 932387284 j^2}{(-13 + 2 j) (-11 + 2 j) (-9 + 2 j) (-7 + 2 j) }\cdots \\ &\cdots \frac{- 1590653760 j^3 - 30109600 j^4 + 80991360 j^5 + 2020992 j^6 - 1827840 j^7}{(-5 + 2 j) (-3 + 2 j) (-1 + 2 j) (1 + 2 j) (5 + 2 j) (7 + 2 j) }\cdots \\ &\cdots \frac{ - 83200 
j^8 + 15360 j^9 + 1024 j^{10}}{(9 + 2 j) (11 + 2 j) (13 + 2 j) (15 + 2 j) (17 + 2 j) (19 + 2 j) },\quad j \geq 0.\end{aligned}$$ All these closed formulas are based on Lemma \[ClosedformulasFore\] and follow similarly. Therefore, we illustrate the proof only for the first constant, namely for $\mathbb{C}_{130j}$. For all $j=0,1,2,\dots$, we have $$\begin{aligned} \mathbb{C}_{130j} &:= \int_{0}^{\frac{\pi}{2}} \left( \Gamma_{1}(x)-\Gamma_{3}(x) \right) e_{0}(x)e_{j}(x) \tan^{2}(x)dx\nonumber \\ &=\frac{2}{\sqrt{\pi}} \frac{1}{\sqrt{\omega_{j}^2-1}} \int_{0}^{\frac{\pi}{2}} \left( \Gamma_{1}(x)-\Gamma_{3}(x) \right) e_{0}(x) \left( \omega_{j} \frac{\sin(\omega_{j}x)}{\tan(x)} - \cos(\omega_{j}x) \right) \tan^{2}(x) dx\nonumber \\ &= \frac{\omega_{j}}{\sqrt{\omega_{j}^2-1}} \int_{0}^{\frac{\pi}{2}} w_{1}(x) \sin(\omega_{j}x) dx - \frac{1}{\sqrt{\omega_{j}^2-1}} \int_{0}^{\frac{\pi}{2}} w_{2}(x) \cos(\omega_{j}x) dx,\label{akuro} \end{aligned}$$ where $$\begin{aligned} w_{1}(x)&:=\frac{2}{\sqrt{\pi}} \left( \Gamma_{1}(x)-\Gamma_{3}(x) \right) e_{0}(x) \tan(x) = \frac{3}{4\sqrt{2}\pi^2} \cos^2(x) q(x), \\ w_{2}(x) &:=\frac{2}{\sqrt{\pi}} \left( \Gamma_{1}(x)-\Gamma_{3}(x) \right) e_{0}(x) \tan^2(x)= \frac{3}{8\sqrt{2}\pi^2} \sin(2x) q(x) \end{aligned}$$ and $$\begin{aligned} q(x):= 384 ~x \cos^3(x) - 254 \sin(x)-8 \sin(3x)-16 \sin(5x) - 5 \sin(7x)+ \sin(9x).\end{aligned}$$ We use trigonometric identities to write $$\begin{aligned} w_{1}(x) &=x \left( \frac{90\sqrt{2}}{\pi^2} \cos(x) +\frac{45\sqrt{2}}{\pi^2} \cos(3x) +\frac{9\sqrt{2}}{\pi^2} \cos(5x) \right)\\ &-\frac{393}{8\pi^2 \sqrt{2}} \sin(x) -\frac{429}{8\pi^2 \sqrt{2}} \sin(3x) -\frac{135}{16\pi^2 \sqrt{2}} \sin(5x) -\frac{75}{16\pi^2 \sqrt{2}} \sin(7x)\\ &-\frac{9}{16\pi^2 \sqrt{2}} \sin(9x) +\frac{3}{16\pi^2 \sqrt{2}} \sin(11x).\end{aligned}$$ For $j\geq 5$, we compute each integral in \[akuro\] separately and find a closed formula for the first one; the second follows similarly.
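The integral evaluations established in the previous appendix can be spot-checked numerically. The following is a small verification script (not part of the original text), assuming the frequency convention $\omega_i = 3+2i$ that the summation bounds $\mu=1,\dots,i+1$ imply; it confirms $\int_0^{\pi/2} \cos^2(\omega_i x)\tan^2(x)\,dx = \int_0^{\pi/2} \sin^2(\omega_i x)/\tan^2(x)\,dx = \frac{\pi}{2}(\omega_i-\frac{1}{2})$ by composite Simpson quadrature:

```python
import math

def omega(i):
    # frequency convention omega_i = 3 + 2i (odd), implied by the summation bounds
    return 3 + 2 * i

def simpson(g, a, b, n=4000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

def f_cos_tan(x, w):
    # cos^2(w x) tan^2(x); for odd w the limit at x = pi/2 is w^2
    if abs(x - math.pi / 2) < 1e-12:
        return w * w
    return (math.cos(w * x) * math.tan(x)) ** 2

def f_sin_cot(x, w):
    # sin^2(w x) / tan^2(x); the limit at x = 0 is w^2
    if abs(x) < 1e-12:
        return w * w
    return (math.sin(w * x) / math.tan(x)) ** 2

for i in range(4):
    w = omega(i)
    closed = math.pi / 2 * (w - 0.5)
    assert abs(simpson(lambda x: f_cos_tan(x, w), 0.0, math.pi / 2) - closed) < 1e-6
    assert abs(simpson(lambda x: f_sin_cot(x, w), 0.0, math.pi / 2) - closed) < 1e-6
```

Both integrands are smooth once the removable singularities at the endpoints are filled in, so plain Simpson quadrature converges rapidly.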
All the other values $\mathbb{C}_{130j}$, $j\in\{0,1,2,3,4\}$ are computed with Mathematica.
--- abstract: 'Acceleration of high-energy hadrons in GRB blast waves will be established if high-energy neutrinos are detected from GRBs. Recent calculations of photomeson neutrino production are reviewed, and new calculations of high-energy neutrinos and the accompanying hadronic cascade radiation are presented. If hadrons are injected in GRB blast waves with an energy corresponding to the measured hard X-ray/soft $\gamma$-ray emission, then only the most powerful bursts at fluence levels ${\lower.5ex\hbox{$\; \buildrel > \over\sim \;$}}3\times 10^{-4}\,\rm erg \, cm^{-2}$ offer a realistic prospect for detection of $\nu_\mu$. Detection of high-energy neutrinos is likely if GRB blast waves have large baryon loads and Doppler factors ${\lower.5ex\hbox{$\; \buildrel < \over\sim \;$}}200$. Significant limitations on the hadronic baryon loading and the number of expected neutrinos are imposed by the fluxes from pair-photon cascades initiated in the same processes that produce neutrinos.' author: - Armen Atoyan - 'Charles D. Dermer' title: Neutrinos and Gamma Rays from Photomeson Processes in Gamma Ray Bursts --- [ address=[Centre de Recherche Mathématiques, Université de Montréal, Montréal, Canada H3C 3J7]{} ]{} [ address=[Code 7653, Naval Research Laboratory, Washington, DC 20375-5352 USA]{} ]{} Introduction ============ In recent work [@da03], we considered neutrino production for two leading scenarios for the sources that power long-duration GRBs, namely the collapsar [@woo] and supranova (SA) [@vs] models. In the collapsar model, the core of a massive star collapses directly to a black hole, and the most important radiation field for photomeson neutrino production is the internal synchrotron radiation field [@wb]. In the supranova (SA) model, synchrotron radiation from the pulsar wind nebula within an expanding supernova remnant (SNR) shell [@kg02] provides an additional external photon target for photomeson interactions (see also Refs. [@rmw03; @gg03]).
We found that the presence of the external field in the SA model can increase the number of detectable neutrinos by an order of magnitude or more over the collapsar model when the Doppler factor $\delta {\lower.5ex\hbox{$\; \buildrel > \over\sim \;$}}200$. When $\delta {\lower.5ex\hbox{$\; \buildrel < \over\sim \;$}}200$, the internal synchrotron field can become effective for photomeson interactions. In our calculations, we assumed that the energy injected in protons is equal to the energy of electrons producing the photon fluence measured at X-ray and $\gamma$-ray energies. In both models, the likelihood of detecting a neutrino from a GRB with a km-scale detector such as IceCube is small except for rare GRBs with fluence $ {\lower.5ex\hbox{$\; \buildrel > \over\sim \;$}}3\times 10^{-4}\,\rm erg \, cm^{-2}$. If GRB blast waves are strongly baryon-loaded, however, as required in a GRB model for high-energy cosmic rays [@wda03], then we predict that 100 TeV – 100 PeV neutrinos will be detected several times per year with IceCube when $\delta {\lower.5ex\hbox{$\; \buildrel < \over\sim \;$}}200$. Here we summarize our calculations of photomeson neutrino production for the collapsar and SA models, and present new calculations for the hadronic cascade radiation from a GRB. Model ===== The GRB model is adapted from our photo-hadronic model for blazar jets [@ad01], and takes into account the injection of nonthermal protons which lose energy through photomeson interactions. Protons are injected with a number spectrum $\propto E_p^{-2}$ at comoving proton energies $E_p > \Gamma $ GeV up to a maximum proton energy determined by the condition that the particle Larmor radius is smaller than both the size scale of the emitting region and the photomeson energy-loss length. Here $\Gamma$ is the Lorentz factor of the blast wave. 
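The maximum-energy condition and the $E_p^{-2}$ normalization can be sketched numerically. The snippet below is illustrative only, not the paper's calculation: it ignores the photomeson loss-length condition, and the field strength and comoving width are the $\delta = 100$ numbers quoted later in the text ($B \approx 1.9$ kG, $\Delta R' \approx 1.5\times 10^{12}$ cm).

```python
import math

E_ESU = 4.803e-10      # electron charge in esu (Gaussian units)
ERG_PER_EV = 1.602e-12

def e_max_confinement(B_gauss, size_cm):
    """Confinement bound in eV: Larmor radius E/(eB) < region size.
    The photomeson energy-loss condition of the text is ignored here."""
    return E_ESU * B_gauss * size_cm / ERG_PER_EV

def e2_norm(E_total_erg, e_min, e_max):
    """For N(E) = N0 E^-2 between e_min and e_max, the carried energy
    is N0 ln(e_max/e_min); return N0 for a given total energy."""
    return E_total_erg / math.log(e_max / e_min)

# Illustrative comoving-frame maximum proton energy, ~ 1e18 eV scale
E_max = e_max_confinement(1.9e3, 1.5e12)
```

With these numbers the confinement bound alone gives a comoving proton energy of order $10^{18}$ eV; in the paper the loss-length condition can lower this further.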
The observed synchrotron spectral flux in the prompt phase of the burst is parameterized by the expression $F(\nu) \propto \nu^{-1} (\nu/\nu_{br})^{\alpha}$, where $h\nu_{br}=300$ keV, $\alpha = -0.5$ above $\nu_{br}$ and an exponential cutoff at 10 MeV, and $\alpha = 0.5$ when $10\, {\rm keV} \leq h\nu \leq h \nu_{br}$. At lower energies, $\alpha= 4/3$. The observed total hard X-ray/soft $\gamma$-ray photon fluence $ \Phi_{tot} \cong t_{dur}\int_0^\infty d\nu F(\nu )$, where $t_{dur}$ is the characteristic duration of the GRB. We consider a source at redshift $z=1$ and take the hard X-ray/soft $\gamma$-ray fluence $\Phi_{tot} {\lower.5ex\hbox{$\; \buildrel > \over\sim \;$}}3\times 10^{-5} \,\rm erg \; cm^{-2}$. Two or three GRBs should occur each month above this fluence level. A total amount of energy $E^\prime = 4\pi d_L^2\Phi_{tot} \delta^{-3} (1+z)^{-1}$ is injected in the form of accelerated proton energy into the comoving frame of the GRB blast wave. Here $z$ is the redshift and $d_L$ is the luminosity distance. The energy deposited into each of $N_{sp}$ light-curve pulses (or spikes) is therefore $E_{sp}^\prime = E^\prime/N_{sp}\,$ ergs. We assume that all the energy $E_{sp}^\prime$ is injected in the first half of the time interval of the pulse (to ensure variability in the GRB light curve), which effectively corresponds to a characteristic variability time scale $t_{var} = t_{dur}/2N_{sp}$. The proper width of the radiating region forming the pulse is $\Delta R^\prime \cong t_{var} c\delta/(1+z)$, from which the energy density of the synchrotron radiation can be determined [@ad01]. We set the GRB prompt duration $t_{dur} = 100\,$s, and let $N_{sp} = 50$, corresponding to $t_{var} = 1\,\rm s$. The magnetic field is determined by assuming equipartition between the energy densities of the magnetic field and the electron energy. 
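The bookkeeping of this paragraph is easy to reproduce. In the sketch below the luminosity distance at $z=1$ is an assumed, cosmology-dependent value, $d_L \approx 2\times 10^{28}$ cm, since no cosmology is specified in this excerpt; everything else follows the stated parameters.

```python
import math

c = 2.998e10                 # cm s^-1
z, delta = 1.0, 100.0
Phi_tot = 3e-5               # erg cm^-2, hard X-ray/soft gamma-ray fluence
t_dur, N_sp = 100.0, 50      # burst duration (s) and number of pulses
d_L = 2.0e28                 # cm at z = 1 (assumed value)

t_var = t_dur / (2 * N_sp)                        # effective variability time: 1 s
dR_prime = t_var * c * delta / (1 + z)            # proper width of emitting region
E_prime = 4 * math.pi * d_L**2 * Phi_tot * delta**-3 / (1 + z)  # comoving energy
E_spike = E_prime / N_sp                          # energy injected per pulse
```

With these inputs $\Delta R' \approx 1.5\times 10^{12}$ cm and $E' \approx 7.5\times 10^{46}$ erg in the comoving frame, split over 50 pulses.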
For the SA model, we assume the existence of an external radiation field given by the expression $\nu L_\nu \propto\nu^{1/2} \exp(-\nu/\nu_{ext})$, with $ h\nu_{ext} \approx 1$ keV [@kg02]. The intensity of this field is determined by the assumption that the integral power $L_{ext} =\int_0^\infty L_\nu \rm d \nu$ is equal to the power of the pulsar wind $L_{pw}\approx (10^{53}\,{\rm erg})/t_{delay}$, assuming that a total of $\approx 10^{53} \,\rm erg$ of pulsar rotation energy is radiated during the time $t_{delay}$ (which is here set equal to 0.1 yr) from the rotating supramassive neutron star before it collapses to a black hole. The energy $h\nu_{ext}\simeq 0.1 \,\rm keV$ is the characteristic energy of synchrotron radiation emitted by electrons (of the pulsar wind) with Lorentz factors $\gamma_{pw}\sim 3\times 10^4$ in a randomly ordered magnetic field of strength $\approx 10$ G. The radius $R = 0.05ct$ is determined by assuming that $0.05c$ is the mean speed of the SNR shell, and that the external photon energy density $\propto L/2\pi R^2$. Fig. 1 shows the total $\nu_\mu$ fluences expected from a model GRB with $N_{sp}=50$ pulses. The thin curves show collapsar model results at $\delta = 100,$ 200, and 300, with $\Gamma = \delta$. The expected numbers of $\nu_\mu$ that a km-scale detector such as IceCube would detect are $N_\nu = 3.2\times 10^{-3}$, $1.5\times 10^{-4}$, and $1.9\times 10^{-5}$, respectively. There is no prospect to detect $\nu_\mu$ from GRBs at these levels. The heavy solid and dashed curves in Fig. 1 give the SA model predictions of $N_\nu = 0.009$ for both $\delta = 100$ and $\delta = 300$. The equipartition magnetic fields are 1.9 kG and 0.25 kG, respectively. 
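The powers and length scales quoted above follow from simple arithmetic, sketched below. Assumptions of this sketch: $t = t_{delay}$ is used in $R = 0.05ct$, the constants are rounded cgs values, and the $1/c$ factor in the energy density (converting flux to energy density) is our addition to the text's $L/2\pi R^2$ scaling.

```python
import math

C_CM_S = 2.998e10      # speed of light [cm/s]
YR_S = 3.156e7         # seconds in one year

E_ROT = 1.0e53         # pulsar rotation energy radiated [erg]
t_delay = 0.1 * YR_S   # delay before collapse to a black hole [s]

# Power of the pulsar wind: L_pw ~ 10^53 erg / t_delay  (~3e46 erg/s)
L_pw = E_ROT / t_delay

# SNR shell radius, assuming a mean expansion speed 0.05c over t = t_delay
R = 0.05 * C_CM_S * t_delay   # ~5e15 cm

# External photon energy density ~ L / (2 pi R^2 c)
u_ext = L_pw / (2.0 * math.pi * R ** 2 * C_CM_S)
```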
The external radiation field in the SA model makes the neutrino detection rate insensitive to the value of $\delta$ (as well as to $t_{var} {\lower.5ex\hbox{$\; \buildrel > \over\sim \;$}}0.1\,\rm s$, as verified by calculations), but there is still little hope that a km$^3$ detector could detect such GRBs. Neutrino production efficiency would improve by at most a factor of 3 in the collapsar model if $t_{var} \sim 1\,$ms (and $N_{sp} = 5\times 10^4$ to provide the same total fluence). Such narrow spikes are, however, then nearly opaque to $\gamma$ rays with energies ${\lower.5ex\hbox{$\; \buildrel > \over\sim \;$}}100$ MeV [@da03]. Even in this case, there is little hope to detect neutrinos except from rare GRBs with fluence $ \Phi_{tot} {\lower.5ex\hbox{$\; \buildrel > \over\sim \;$}}3\times 10^{-4}\,\rm erg \, cm^{-2}$. ![Energy fluence of photomeson muon neutrinos ($\nu_\mu)$ for a model GRB with hard X-ray fluence $\Phi_{tot} = 3\times 10^{-5} \,\rm erg \; cm^{-2}$ for different Doppler factors $\delta$. The thin curves show collapsar model results where only the internal synchrotron radiation field provides a source of target photons. []{data-label="fig1"}](fig1.eps){width="10.cm"} Fig. 2 shows new calculations of photomeson neutrino production for a GRB with $ \Phi_{tot} = 3\times 10^{-4}\,\rm erg \, cm^{-2}$ and $\delta = 100$, as well as the accompanying electromagnetic radiation induced by pair-photon cascades from the secondary electrons and $\gamma$ rays from the same photomeson interactions. The total number of $\nu_\mu$ expected with IceCube is $\cong 0.1$. The total fluence of cascade photons shown here is contributed by lepton synchrotron (dot-dashed) and Compton (dashed) emissions. For comparison, the dotted curve shows the primary lepton synchrotron radiation spectrum assumed for the calculations. The level of the fluence of the cascade photons is $\approx 10$% of the primary synchrotron radiation. 
This means that the maximum allowed baryon loading for these parameters cannot exceed a factor of $\approx 30$ in order not to overproduce the primary synchrotron radiation fluence. This limits the maximum number of $\nu_\mu$ to $\approx 3$ even in the case of large baryon loading for rare, powerful GRBs. These numbers cannot be further increased in the SA model because of the efficient extraction that is already provided by internal photons alone for these parameters. ![Energy fluence of photons and photomeson muon neutrinos for a collapsar-model GRB with hard X-ray fluence $\Phi_{tot} = 3\times 10^{-4} \,\rm erg \; cm^{-2}$ and $\delta = 100$. The dotted curve shows the fluence of a GRB used for calculations, and the dashed and dot-dashed curves show the Compton and synchrotron contributions to the photon fluence from the electromagnetic cascade initiated by secondaries from photomeson processes, respectively. []{data-label="fig2"}](fig2.eps){width="11.0cm"} In conclusion, we predict that at most a few high-energy $\nu_\mu$ can be detected with IceCube even from a very bright GRB at the fluence level $\Phi_{tot} {\lower.5ex\hbox{$\; \buildrel > \over\sim \;$}}3\times 10^{-4} \,\rm erg \; cm^{-2}$, and only when the baryon loading is high [@da03; @wda03]. This is because the detection of a single $\nu_\mu$ requires a $\nu_\mu$ fluence $ {\lower.5ex\hbox{$\; \buildrel > \over\sim \;$}}10^{-4} \,\rm erg \; cm^{-2}$ above 1 TeV. Since the energy release in high-energy neutrinos and electromagnetic secondaries is about equal, this energy will be reprocessed in the pair-photon cascade and emerge in the form of observable radiation at hard X-ray/soft $\gamma$-ray energies, and this radiation cannot exceed the measured fluence in this regime. This imposes a robust limit on the maximum number of $\nu_\mu$ even from a GRB with very high baryon loading. AA thanks the NRL High Energy Space Environment Branch for support and hospitality during visits. 
The work of CD is supported by the Office of Naval Research and NASA GLAST science investigation grant DPR \# S-15634-Y.

- C. D. Dermer and A. M. Atoyan, [*Phys. Rev. Lett.*]{} [**91**]{}, 071102 (2003).
- C. L. Fryer, S. E. Woosley, and D. H. Hartmann, [[*Astrophys. J.*]{}]{} [**526**]{}, 152 (1999).
- M. Vietri and L. Stella, [[*Astrophys. J.*]{}]{} [**507**]{}, L45 (1998).
- E. Waxman and J. N. Bahcall, [*Phys. Rev. Lett.*]{} [**78**]{}, 2292 (1997).
- A. Königl and J. Granot, [[*Astrophys. J.*]{}]{} [**574**]{}, 134 (2002).
- S. Razzaque, P. Mészáros, and E. Waxman, [*Phys. Rev. Lett.*]{} [**90**]{}, 241103 (2003).
- D. Guetta and J. Granot, [*Phys. Rev. Lett.*]{} [**90**]{}, 201103 (2003).
- S. D. Wick, C. D. Dermer, and A. Atoyan, [*Astroparticle Physics*]{}, submitted (astro-ph/0310667).
- A. M. Atoyan and C. D. Dermer, [[*Astrophys. J.*]{}]{} [**586**]{}, 79 (2003).
--- abstract: 'There has been tremendous research progress in estimating the depth of a scene from a monocular camera image. Existing methods for single-image depth prediction are exclusively based on deep neural networks, and their training can be unsupervised using stereo image pairs, supervised using LiDAR point clouds, or semi-supervised using both stereo and LiDAR. In general, semi-supervised training is preferred as it does not suffer from the weaknesses of either supervised training, resulting from the difference in the camera’s and the LiDAR’s field of view, or unsupervised training, resulting from the poor depth accuracy that can be recovered from a stereo pair. In this paper, we present our research in single-image depth prediction using semi-supervised training that outperforms the state-of-the-art. We achieve this through a loss function that explicitly exploits left-right consistency in a stereo reconstruction, which has not been adopted in previous semi-supervised training. In addition, we describe the correct use of ground truth depth derived from LiDAR that can significantly reduce prediction error. The performance of our depth prediction model is evaluated on popular datasets, and the importance of each aspect of our semi-supervised training approach is demonstrated through experimental results. Our deep neural network model has been made publicly available.[^1]' author: - 'Ali Jahani Amiri, Shing Yan Loo, and Hong Zhang' bibliography: - 'refs.bib' title: '**Semi-Supervised Monocular Depth Estimation with Left-Right Consistency Using Deep Neural Network** ' --- INTRODUCTION ============ Single-image depth estimation is an important yet challenging task in the field of robotics and computer vision.
A solution to this task can be used in a broad range of applications such as localization of robot poses[@loo2018cnn; @Yang_2018_ECCV], 3D reconstruction in simultaneous localization and mapping[@tateno2017cnn], collision avoidance[@chakravarty2017cnn], and grasping[@rao2010grasping]. With the rise of deep learning, notable achievements in terms of accuracy and robustness have been obtained in the study of single-image depth estimation, and supervised, unsupervised, and semi-supervised methods have been proposed. Supervised methods in single-image depth estimation use ground truth derived from LiDAR data. It is time-consuming and expensive to obtain dense ground-truth depth, especially for outdoor scenes. LiDAR data is also sparse relative to the camera view, and in general it does not share the same field of view with the camera. Consequently, supervised methods are unable to produce meaningful depth estimates in regions of the image that do not overlap with the LiDAR’s field of view. In contrast, unsupervised methods learn dense depth prediction using the principle of reconstruction from stereo views; hence depth can be estimated for the entire image. However, the accuracy of unsupervised depth estimation is limited by that of stereo reconstruction.\
![Using stereo only (c) leads to a noisy depth map. Using LiDAR only (d) results in inaccurate depth for the top part of the image because no ground truth is available there. Our semi-supervised method (e) fuses both LiDAR and stereo and can predict depth more accurately. Ground truth LiDAR (b) has been interpolated for visualization purposes.[]{data-label="fig:loss"}](images/compare.jpg) In this paper, we present our research in single-image depth prediction using semi-supervised training that outperforms the state-of-the-art. We propose a novel semi-supervised loss that uses the left-right consistency term originally proposed in [@godard2017unsupervised].
Our network uses LiDAR data for supervised training and rectified stereo images for unsupervised training; in the testing phase, our network takes only one image to perform depth estimation. Another focus of our study is the impact of the ground truth depth information on the training of our model, when network training is performed with the projected raw LiDAR data and with the annotated depth map recently provided by KITTI [@Uhrig2017THREEDV], respectively. We discover that the commonly used projected raw LiDAR contains noisy artifacts due to the displacement between the LiDAR and the camera, leading to poor network performance. In contrast, we use the more reliable preprocessed annotated depth map for training, and we are able to achieve a significant reduction of prediction error. In summary, we propose in this paper a semi-supervised deep neural network for depth estimation from a single image, with state-of-the-art performance. Our work makes the following three main contributions. - We show the importance of including a left-right consistency term in the loss function for performance optimization in semi-supervised single-image prediction. - We provide empirical evidence that training with the annotated ground truth derived from LiDAR leads to better depth prediction accuracy than with the raw LiDAR data as ground truth. - We make our semi-supervised deep neural network, based on the popular Monodepth [@godard2017unsupervised] architecture, available to the community. The rest of this paper is organized as follows. In Section II, we review works related to our research, and in Section III we present our proposed neural network model for single-image depth estimation. Experimental evaluation of our proposed model is provided in Section IV, and we conclude our work in Section V. Related works ============= Over the past few years, numerous deep-learning-based methods have been proposed for the problem of single-image depth estimation.
We can roughly divide these deep methods into three categories: supervised, unsupervised, and semi-supervised. Supervised {#sec:supervised} ---------- Supervised methods use ground truth depth, usually from LiDAR in outdoor scenes, for training a network. Eigen et al.[@eigen2014depth] were among the first to train a convolutional neural network with such a method. First, they generate a coarse prediction and then use another network to refine the coarse output into a more accurate depth map. Following [@eigen2014depth], several techniques have been proposed to improve the accuracy of convolutional neural networks, such as CRFs [@li2015depth], the inverse Huber loss as a more robust loss function [@laina2016deeper], joint optimization of surface normals and depth in the loss function [@wang2015designing; @hu2018revisiting; @qi2018geonet], fusion of multiple depth maps using the Fourier transform[@lee2018single], and formulation of depth estimation as a classification problem[@fu2018deep]. Unsupervised {#sec:unsupervised} ------------ To avoid laborious ground truth depth construction, unsupervised methods based on stereo image pairs have been proposed[@xie2016deep3d]. Garg et al. [@garg2016unsupervised] demonstrated an unsupervised method in which the network is trained to minimize the stereo reconstruction loss; i.e., the loss is defined such that the reconstructed right image (obtained by warping the left image using the predicted disparity) matches the actual right image. Later on, Godard et al.[@godard2017unsupervised] extended the idea by enforcing a left-right consistency that makes the left-view disparity map consistent with the right-view disparity map. The unsupervised training of our model is based on [@godard2017unsupervised].
Given a left view as input, the model in [@godard2017unsupervised] outputs two disparity maps, one for the left view and one for the right view, while we output only one map per input image, in the form of inverse depth instead of disparity. As a result, we treat both left and right images equivalently, which allows us to eliminate the overhead of the post-processing step in [@godard2017unsupervised]. By making these changes, our unsupervised model outperforms [@godard2017unsupervised], as will be discussed in Section \[sec:Experiments\]. Semi-Supervised {#sec:monoslam} --------------- Unlike unsupervised methods, there has not been much work on semi-supervised learning of depth. Luo et al. [@luo2018single] and Guo et al.[@Guo_2018_ECCV] proposed methods that consist of multiple sequential unsupervised and supervised training stages; hence their methods could be categorized as semi-supervised, although they did not use LiDAR and stereo images at the same time in training. Closest to our work is Kuznietsov et al. [@kuznietsov2017semi], who proposed summing the supervised and unsupervised loss terms in the final loss, thereby using LiDAR and stereo at the same time in training. One of the main differences between [@kuznietsov2017semi] and our method is that we include the left-right consistency term first proposed by [@godard2017unsupervised]. This term makes the predictions for the left and right views consistent. Another difference is that their supervised loss term was defined directly on the depth values, whereas we define it on the inverse depth instead. As discussed in [@kuznietsov2017semi], a loss term on depth values makes the training unstable because of the high gradients in the early stages of training. To remedy the situation, Kuznietsov et al. proposed to gradually fade in the supervised loss to achieve convergence, whereas our method does not have this problem and does not need to fade in supervised or unsupervised loss terms.
In Section \[sec:result\], we show qualitatively and quantitatively that, as a result of the above considerations, we obtain better accuracy than [@kuznietsov2017semi], which is considered the state-of-the-art in semi-supervised single-image depth estimation. \[!h\] ![image](images/network3.jpg) Method ====== Our approach is based on Monodepth proposed by Godard et al.[@godard2017unsupervised]. Their work is unsupervised and only uses rectified stereo images in training. In this paper, we extend their work and add ground-truth depth data as additional supervision. To the best of our knowledge, we are the first to use the left-right consistency proposed by Godard et al. [@godard2017unsupervised] in a semi-supervised framework for single-image depth estimation. Fig. \[fig:loss\] shows the different loss terms we use in our training phase, to be described in detail in the next section. Loss Terms ---------- Similar to [@godard2017unsupervised], we define $L_{s}$ for each output scale $s$. Hence the total loss is defined as $L_{total}=\sum_{s=1}^{4}{L_{s}}$. $$L_{s} = \lambda_{1}E_{reconstruction} +\lambda_{2}E_{lr} +\lambda_{3}E_{supervised} +\lambda_{4}E_{smooth}$$ where the $\lambda_{i}$ are scalars and the $E$ terms are defined below: ### Unsupervised Loss $E_{reconstruction}$ We use a photometric reconstruction loss between the left and right images. Similar to other unsupervised methods, we assume photometric constancy between the left and right images. Inverse warping is used to obtain the estimated left/right image, and the estimated image is then compared with its corresponding real image. In the inverse warping, a bilinear sampler is used to make the pipeline differentiable. For the comparison, we use the combination of the structural similarity index (SSIM) and the L1 norm used by Godard et al.[@godard2017unsupervised], together with the ternary census transform used in [@meister2018unflow; @zabih1994non; @stein2004efficient].
SSIM and the ternary census transform can compensate for gamma and illumination changes to some extent, improving the satisfaction of the constancy assumption. Our unsupervised photometric image reconstruction loss term $E_{reconstruction}$ is defined as follows: $$\begin{split} E_{reconstruction}= &\sum_{k \in \{l,r\}}f(I^{k},\tilde{I}^{k})\\ f(I,\tilde{I})=&\frac{1}{N}\sum\limits_{i,j} \alpha_{1}*\frac{1- SSIM(I_{ij},\tilde{I}_{ij})}{2} + \\ &\alpha_{2} * ||I_{ij} - \tilde{I}_{ij} ||_{1}+ \\ &\alpha_{3} * census(I_{ij},\tilde{I}_{ij}) \end{split}$$ where $I^{l}$, $I^{r}$, $\tilde{I}^{l}$, and $\tilde{I}^{r}$ are the left image, the right image, and their reconstructed images, respectively. $N$ is the total number of pixels. $\alpha_{1}$, $\alpha_{2}$, and $\alpha_{3}$ are scalars that define the contribution of each term to the total reconstruction loss.\
\
### Left-Right Consistency Loss $E_{lr}$ To ensure an equal contribution of both left and right images to the network training, we feed the left and right images independently to the network, and then we jointly optimize the output of the network such that the predicted left and right depth maps are consistent. As explained in [@godard2017unsupervised], the left-right consistency loss attempts to make the inverse depth of the left (or right) view the same as the projected inverse depth of the right (or left) view. This type of loss is similar to the forward-backward consistency for optical flow estimation [@meister2018unflow]. We define our left-right consistency loss as follows:\
$$E_{lr}=\frac{1}{N}\sum\limits_{i,j} || \rho_{ij}^{l} - \rho_{ij+d_{ij}^{l}}^{r} ||_{1} + || \rho_{ij}^{r} - \rho_{ij+d_{ij}^{r}}^{l} ||_{1},$$ where $\rho^{l}$ and $\rho^{r}$ are the predicted inverse depths for the left and right images, respectively. $d^{l}$ and $d^{r}$ are the predicted disparities corresponding to the left and right images, respectively.
The conversion of inverse depth $\rho$ to disparity $d$ is calculated using: $$\label{eq:bfd} d = baseline \cdot f \cdot \rho ,$$ where $f$ is the focal length of the camera.\
\[t\] ![image](images/lidarInterpolation4.jpg) ### Supervised Loss $E_{supervised}$ The supervised loss term measures the difference between the ground truth inverse depth $Z^{-1}$ and the predicted inverse depth $\rho$ at the points $\Omega$ where the ground truth is available.\
$$E_{supervised}= \sum\limits_{k \in \{l,r\}}\frac{1}{M_{k}}\sum\limits_{i,j \in \Omega_{k}} {||\rho_{ij}^{k}- {Z^{-1}}_{ij}^{k} ||_{1}}$$ where $\Omega_{l}$ and $\Omega_{r}$ are the sets of points where the ground truth depths are available for the left and right images, respectively. $M_{l}$ and $M_{r}$ are the numbers of pixels for which ground truth is available in the left and right images, respectively.\
### Smoothness Loss $E_{smooth}$ As suggested in [@godard2017unsupervised; @kuznietsov2017semi], the smoothness loss term is a regularization term that encourages the inverse depth to be locally smooth, with an $L_{1}$ penalty on the inverse depth gradients. We define our smoothness regularization term as: $$E_{smooth}= \frac{1}{N}\sum_{k\in \{l,r\}} \sum\limits_{i,j}|\partial_{x}\rho_{ij}^k|e^{-|\partial_{x}I_{ij}^{k}|}+ |\partial_{y}\rho_{ij}^k|e^{-|\partial_{y}I_{ij}^{k}|}$$ Since the depth is not continuous across object boundaries, this term encourages neighbouring depth values to be similar in low-gradient image regions and allows them to differ otherwise.
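As an illustration of Eq. \[eq:bfd\] and the left-right consistency term, the following NumPy sketch applies both to toy inverse-depth maps. It is a simplification: nearest-neighbor sampling replaces the differentiable bilinear sampler, out-of-view pixels are simply masked out, and the function names are our own.

```python
import numpy as np

def inverse_depth_to_disparity(rho, baseline, focal):
    # d = baseline * f * rho (rho in 1/m, baseline in m, focal in pixels)
    return baseline * focal * rho

def lr_consistency_loss(rho_l, rho_r, baseline, focal):
    """Mean L1 left-right consistency between predicted inverse-depth maps,
    each map sampled at the columns shifted by the other view's disparity."""
    H, W = rho_l.shape
    d_l = inverse_depth_to_disparity(rho_l, baseline, focal)
    d_r = inverse_depth_to_disparity(rho_r, baseline, focal)
    cols = np.arange(W)
    total, count = 0.0, 0
    for i in range(H):
        jl = np.rint(cols + d_l[i]).astype(int)  # columns of rho_r to sample
        jr = np.rint(cols + d_r[i]).astype(int)  # columns of rho_l to sample
        ml = (jl >= 0) & (jl < W)                # keep in-view samples only
        mr = (jr >= 0) & (jr < W)
        total += np.abs(rho_l[i, ml] - rho_r[i, jl[ml]]).sum()
        total += np.abs(rho_r[i, mr] - rho_l[i, jr[mr]]).sum()
        count += int(ml.sum()) + int(mr.sum())
    return total / count
```

For a perfectly consistent pair of predictions (e.g. a fronto-parallel plane seen by both views) the loss is zero, and it grows as the two inverse-depth maps disagree.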
Experiments {#sec:Experiments}
===========

  Method                                          Sup.   Data      Abs Rel     Sq Rel      RMSE        RMSE$_{log}$   $\delta < 1.25$   $\delta<1.25^{2}$   $\delta <1.25^{3}$
  ----------------------------------------------- ------ --------- ----------- ----------- ----------- -------------- ----------------- ------------------- --------------------
  raw LiDAR                                       -      -         0.010       0.126       1.209       0.054          0.993             0.996               0.998
  DORN[@fu2018deep]                               S      *K*       **0.080**   **0.332**   **2.888**   **0.120**      **0.938**         **0.986**           **0.995**
  **SemiDepth(Ours)**                             S      *K*       0.096       0.552       3.995       0.152          0.892             0.972               0.992
  Monodepth [@godard2017unsupervised](Resnet50)   U      *C+K*     0.085       0.584       3.938       0.135          0.916             **0.980**           0.994
  MonoGAN [@aleotti2018generative](Resnet50)      U      *C+K*     0.096       0.699       4.236       0.150          0.899             0.974               0.992
  **SemiDepth(Ours)**                             U      *C+K*     **0.082**   **0.551**   **3.837**   **0.134**      **0.920**         **0.980**           **0.993**
  Kuznietsov et al. [@kuznietsov2017semi]         Semi   *I+K*     0.089       0.478       3.610       0.138          0.906             0.980               0.995
  SVSM FT [@luo2018single]                        Semi   *I+F+K*   **0.077**   **0.392**   3.569       0.127          0.919             0.983               0.995
  **SemiDepth (full) (Ours)**                     Semi   *C+K*     0.078       0.417       **3.464**   **0.126**      **0.923**         **0.984**           **0.995**

  Projected Raw LiDAR   Annotated Depth Map   Left-Right Consistency   Abs Rel     Sq Rel      RMSE        RMSE$_{log}$
  --------------------- --------------------- ------------------------ ----------- ----------- ----------- --------------
  ✓                                           ✓                        0.120       1.154       5.614       0.204
                        ✓                                              0.110       0.973       5.373       0.191
                        ✓                     ✓                        **0.108**   **0.949**   **5.369**   **0.190**

For comparison, we use the popular Eigen split[@eigen2014depth] of the KITTI dataset[@Uhrig2017THREEDV] that has been used by the previous methods. Using this split, we notice the same problem mentioned by Aleotti et al.[@aleotti2018generative]: when LiDAR points are projected into the camera space, an artifact results around objects that are occluded in the image but not from the LiDAR point of view.
This is due to the displacement between the LiDAR and the camera sensors. Recently, Uhrig et al. [@Uhrig2017THREEDV] provided annotated depth maps for KITTI, obtained by a preprocessing step on the projected raw LiDAR data. They used multiple sequences, left-right consistency checks, and untwisting methods to carefully filter out outliers and densify the projected raw LiDAR point clouds. Fig. \[fig:occlusion\] shows the occlusion artifact in the raw projected LiDAR and the corresponding annotated depth map dataset provided by [@Uhrig2017THREEDV]. Since the occlusion artifact is filtered out in the annotated depth ground truth, we train our model with this more accurate ground truth. The first and the third row of Table \[tab:changes\] show the effect of training the network with the projected raw LiDAR versus the annotated ground truth. In the rest of the experiments, we evaluate our method based on the official KITTI annotated depth map rather than the noisy projected raw LiDAR. Table \[tab:result\] contains the quantitative evaluation of the projected raw LiDAR against the provided annotated depth map ground truth, for pixels whose depth value exists in both the annotated depth map and the projected raw LiDAR (54.89% of the LiDAR points have been evaluated). The large error for the projected raw LiDAR suggests that raw LiDAR is not as accurate as the annotated depth maps. Evaluation Metrics ------------------ We use the standard metrics used by previous researchers [@eigen2014depth; @kuznietsov2017semi; @godard2017unsupervised]. Specifically, we use RMSE, $\text{RMSE}_{log}$, absolute relative difference (Abs Rel), squared relative difference (Sq Rel), and the percentage of depths ($\delta$) within a certain threshold distance to the ground truth. Implementation Details ---------------------- We train our network from scratch using TensorFlow [@abadi2016tensorflow].
Our network and training procedure are identical to the Resnet50 network used by Godard et al.[@godard2017unsupervised] except for the decoder part, in which we have one output instead of two for each scale. As in [@godard2017unsupervised], all inputs are resized to 256$\times$512. The output of the network, i.e., the inverse depth, is limited to the range 0 to 1.0 using the sigmoid function. We use the Adam optimiser [@kingma2014adam] with $\beta _{1} = 0.9$, $\beta _{2} = 0.999$, and $\epsilon = 10^{-8}$, with an initial learning rate of $\lambda = 10^{-4}$ that remains constant for the first 15 epochs and is halved every 5 epochs for the next 10 epochs, for a total of 25 epochs. The hyperparameters for the loss are chosen as $\lambda _{1}=1$, $\lambda _{2}= 1.0$, $\lambda _{3} = 150.0$, $\lambda _{4} = 0.1$, $\alpha _{1} = 0.85$, $\alpha _{2} = 0.15$, and $\alpha _{3}=0.08$. Results {#sec:result} ------- Table \[tab:result\] shows the quantitative comparison with the state-of-the-art methods on the Eigen split using the reliable annotated depth maps for training and testing. Although supervised methods, e.g., DORN[@fu2018deep], can achieve better quantitative performance than semi-supervised methods according to some metrics, they produce inaccurate predictions for the top portion of the image, where the LiDAR’s field of view differs from that of the camera, as can be seen in Fig. \[fig:result2\]. By treating the left and right images equivalently and defining our loss symmetrically, we eliminate the post-processing step needed in [@godard2017unsupervised]. As shown in Table \[tab:result\], our unsupervised model outperforms our baseline unsupervised model [@godard2017unsupervised]. In addition, Table \[tab:result\] shows that, among the evaluated semi-supervised methods, our method outperforms [@kuznietsov2017semi], considered the state-of-the-art, with respect to the majority of the performance metrics.
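The learning-rate schedule described above can be written as a small helper; this is a sketch, and the function name is ours.

```python
def learning_rate(epoch, base=1e-4):
    """Schedule from the text: constant for the first 15 epochs, then
    halved every 5 epochs for the remaining 10 (25 epochs in total)."""
    if epoch < 15:
        return base
    return base / 2 ** ((epoch - 15) // 5 + 1)
```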
To investigate in detail the effect of using the left-right consistency term in the loss function and that of using the annotated LiDAR ground truth, we performed a controlled experiment on the 200 images of the KITTI Stereo 2015 split[@Menze2018JPRS]; the results in Table \[tab:changes\] confirm the advantage of our method. \[!h\] ![image](images/result3.jpg) Conclusion ========== In this paper, we have presented our approach to semi-supervised training of a deep neural network for single-image depth prediction. Our network uses a novel loss function that includes the left-right consistency term, which has not been used before in the semi-supervised training of depth-prediction networks. In addition, we have explained and experimentally confirmed that, for optimal prediction results in either supervised or semi-supervised training, careful use of the LiDAR data as the ground truth is important. Extensive experiments have been conducted to evaluate our proposed training approach, and we are able to achieve state-of-the-art performance in depth prediction accuracy. Our network model, which is based on the Monodepth architecture popularly used within the robotics community, is available online for download. Acknowledgments {#acknowledgments .unnumbered} =============== This work has been supported by the NSERC Canadian Robotics Network (NCRN). We would like to thank Godard et al.[@godard2017unsupervised] for making their code publicly available. [^1]: Source code is available at [ ]{}
--- abstract: 'In my joint paper [@MR2132917] with Rohit Parikh we investigate a logic arising from finite information. Here we consider another kind of limited information, namely information with a small number of errors, and prove a related completeness theorem. We point out that this approach naturally leads to considering multi-teams in the team semantics that lies behind [@MR2132917].' author: - | Jouko Väänänen [^1]\ Department of Mathematics and Statistics\ University of Helsinki, Finland\ and\ Institute for Logic, Language and Computation\ University of Amsterdam, The Netherlands title: The Logic of Approximate Dependence --- The idea of [*finite information logic*]{} \[1\] is that when quantifiers, especially the existential quantifiers, express choices in a social context, the choices are based on [*finite*]{} information about the parameters present. In this paper we consider a different kind of restriction. We do not restrict the information available, but we allow a small number of errors. In [*social software*]{} a few errors can perhaps be allowed, especially if there is an agreement about it. Consider the sentences “On these flights I have an exit-row seat, apart from a few exceptions.” and “Apart from a few, the participants are logicians.” One way to handle such expressions is the introduction of generalized quantifiers, such as “few $x$”, “most $x$”, “all $x$ but a few”, etc. The approach of this paper is different, or at least on the surface it looks different. We use the [*team semantics*]{} of \[2\]. In team semantics the main tool for defining the meaning of formulas is not that of an assignment but that of a [*set*]{} of assignments. Such sets are called in \[2\] teams. Intuitively a team can represent (or manifest) different kinds of things, such as

- uncertainty
- belief
- plays of a game
- data about a scientific experiment
- possible voting profiles
- a database
- dependence
- independence
- inclusion
- exclusion
- etc.
The obvious advantage of considering meaning in terms of teams rather than single assignments is that teams can indeed manifest change and variation, unlike static single assignments. For example, Table 1 tells us, e.g., that $y=x^2$ apart from one exception, and that $z$ is a constant zero apart from one exception. Likewise, Table 2 tells us that an employee’s salary depends only on the department, except for one person. In medical data about the causes and effects of treatment there can often be exceptions, although there may be compelling evidence of a causal relationship otherwise.

  $x$   $y$   $z$
  ----- ----- -----
  2     4     0
  5     25    0
  3     9     1
  2     3     0

Table 1

  Employee   Department   Salary
  ---------- ------------ ---------
  John       I            120 000
  Mary       II           130 000
  Ann        I            120 000
  Paul       I            120 000
  Matt       II           130 000
  Julia      I            130 000

Table 2

While team semantics is suitable for many purposes, we focus here on the concept of dependence, the main concept of the paper \[1\], too. Dependence is used throughout the sciences and humanities. In particular, it appears in database theory in the form of functional dependence. In [@MR2351449] the following concept was introduced: A [*team*]{} is any set of assignments for a fixed set of variables. A team $X$ is said to satisfy the dependence atom $$\label{1} {{=\mkern-1.2mu}}(x, y),$$ where $x$ and $y$ are finite sequences of variables, if any two assignments $s$ and $s'$ in $X$ satisfy $$s(x)=s'(x) \rightarrow s(y)=s'(y). $$ Dependence logic (\[2\]) arises from first order logic by the addition of the dependence atoms (\[1\]). The logical operations $\neg$, $\land$, $\vee$, $\forall$, $\exists$ are defined in such a way that dependence logic is a conservative extension of classical first order logic. The exact expressive power of dependence logic is existential second order logic.
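The dependence atom is easy to check mechanically. A minimal sketch in Python, representing a team as a list of dicts (the function name and the representation are our own illustration):

```python
def satisfies_dependence(team, x, y):
    """X |= =(x, y): any two assignments that agree on x also agree on y."""
    seen = {}
    for s in team:
        key = tuple(s[v] for v in x)
        val = tuple(s[v] for v in y)
        # remember the first y-value seen for this x-value; any later
        # assignment with the same x but a different y is a violation
        if seen.setdefault(key, val) != val:
            return False
    return True
```

On the team of Table 2, the atom fails for (Department, Salary) because of Julia's row, and holds once that row is removed.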
With the purpose of capturing a concept of dependence logic which is more realistic, in the sense that a couple of errors are allowed, we now define[^2]: Suppose $p$ is a real number, $0\leq p \leq 1$. A finite team $X$ is said to satisfy the [*approximate dependence atom*]{} $${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x, y)$$ if there is $Y \subseteq X$, $|Y|\leq p \cdot |X|$, such that the team $X\setminus Y$ satisfies ${{=\mkern-1.2mu}}(x, y)$. We then write $X\models{{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x, y)$. For arbitrary teams (finite or infinite) $X$ we say that $X$ satisfies the atom ${{=\mkern-1.2mu}}(x, y)$ [*mod finite*]{} if there is a finite $Y$ such that $X\setminus{Y}$ satisfies ${{=\mkern-1.2mu}}(x, y)$. In symbols, $X\models {{=\mkern-1.2mu}}^*(x, y)$. In other words, a finite team of size $n$ satisfies ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y)$ if it satisfies ${{=\mkern-1.2mu}}(x,y)$ after we delete a portion, measured by the number $p$, of the assignments of $X$. More exactly, we delete up to $p\cdot n$ assignments from the team. Hence the word "approximate". The emphasis in approximate dependence ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x, y)$ is on small $p$, but the general concept is defined for all $p$. It is difficult to imagine any practical importance for, say, ${{=\mkern-1.2mu}_{.95}\mkern-1.4mu}(x, y)$. This is the proposition that the team has a 5% portion where $y$ is functionally determined by $x$. However, if we suppose that the relationship between $x$ and $y$ is totally random, then it may be significant, in a big dataset of millions of rows, to observe that ${{=\mkern-1.2mu}_{.95}\mkern-1.4mu}(x, y)$ holds, as this violates total randomness. For a trivial example, every finite team satisfies ${{=\mkern-1.2mu}_{1}\mkern-1.4mu}(x, y)$, because the empty team always satisfies ${{=\mkern-1.2mu}}(x, y)$. On the other hand, ${{=\mkern-1.2mu}_{0}\mkern-1.4mu}(x, y)$ is just the old ${{=\mkern-1.2mu}}(x, y)$.
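For finite teams the definition can be checked directly: group the assignments by their value on $x$ and, within each group, keep only the most frequent value of $y$; the atom ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y)$ then holds iff the number of deleted assignments is at most $p\cdot|X|$. A minimal Python sketch of this check (the function name and team encoding are ours, not from the paper):

```python
from collections import Counter

def satisfies_approx_dep(team, x, y, p):
    """Check X |= =_p(x, y) for a finite team.

    `team` is a list of distinct assignments (dicts); `x` and `y` are
    tuples of variable names.  We may delete at most p*|X| assignments
    so that the remainder satisfies the exact dependence atom =(x, y).
    """
    groups = {}
    for s in team:
        key = tuple(s[v] for v in x)
        val = tuple(s[v] for v in y)
        groups.setdefault(key, Counter())[val] += 1
    # Within each x-group, keeping the most common y-value is optimal.
    kept = sum(c.most_common(1)[0][1] for c in groups.values())
    deletions = len(team) - kept
    return deletions <= p * len(team)

# The team of Table 1: y = x^2 and z = 0, each with one exception.
table1 = [dict(x=2, y=4,  z=0),
          dict(x=5, y=25, z=0),
          dict(x=3, y=9,  z=1),
          dict(x=2, y=3,  z=0)]
assert satisfies_approx_dep(table1, ('x',), ('y',), 1/4)    # one row violates y = x^2
assert not satisfies_approx_dep(table1, ('x',), ('y',), 0)  # exact =(x, y) fails
assert satisfies_approx_dep(table1, (), ('z',), 1/4)        # z constant up to one row
```

The empty tuple for $x$ covers constancy atoms: every assignment falls into a single group, so the check reduces to "all but $p\cdot n$ rows agree on $y$".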
Since singleton teams always satisfy ${{=\mkern-1.2mu}}(x,y)$, a team of size $n$ always satisfies ${{=\mkern-1.2mu}_{1-\frac{1}{n}}\mkern-1.4mu}(x,y)$. A finite team trivially satisfies ${{=\mkern-1.2mu}}^*(x, y)$, whatever $x$ and $y$, so the "mod finite" dependence is only interesting in infinite teams. The team of Table 1 satisfies ${{=\mkern-1.2mu}_{\frac{1}{4}}\mkern-1.4mu}(x, y)$ and the team of Table 2 satisfies $${{=\mkern-1.2mu}_{\frac{1}{6}}\mkern-1.4mu}({\tt Department}, {\tt Salary}).$$ We claim that approximate dependence ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x, y)$ is a much more common phenomenon in science and humanities than full dependence ${{=\mkern-1.2mu}}(x, y)$. Any database of a significant size contains errors, for merely human reasons or because of errors in transmission. Any statistical data of medical, biological, social, or other information has exceptions, partly because of the nature of the data. One rarely, if ever, encounters absolute dependence of the kind ${{=\mkern-1.2mu}}(x, y)$ in practical examples. The dependencies we encounter in practical applications have exceptions: the bigger the data, the more exceptions there are. For the dependence ${{=\mkern-1.2mu}_{.1}\mkern-1.4mu}(x, y)$ we allow an error value in 10% of the cases. This may be unacceptable in some applications but constitute overwhelming evidence of functional dependence in others. A different kind of approximate functional dependence arises if we think of the individual [*values*]{} of variables as being slightly off. For example, we can consider a functional dependence in which values of $y$ are [*almost*]{} the same whenever the values of $x$ are almost the same. This direction is pursued in [@brvv]. We have emphasized the relevance of ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x, y)$ over and above ${{=\mkern-1.2mu}}(x, y)$.
So how does dependence logic change if we allow ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x, y)$ in addition to ${{=\mkern-1.2mu}}(x, y)$, that is, if we allow dependence with errors in addition to dependence without errors? One of the first results about database dependencies is the so-called Armstrong Completeness Theorem \[3\]. It has as its starting point a set of axioms for the dependence ${{=\mkern-1.2mu}}(x, y)$. We now adapt the axioms from \[3\] to the more general case of approximate dependence atoms. Concatenation of two finite sequences of variables, $x$ and $y$, is denoted $xy$. Such finite sequences can also be empty.

\[2\] The [*axioms*]{} of approximate dependence are:

A1 : ${{=\mkern-1.2mu}_{0}\mkern-1.4mu}(xy,x)$ (Reflexivity)

A2 : ${{=\mkern-1.2mu}_{1}\mkern-1.4mu}(x,y)$ (Totality)

The [*rules*]{} of approximate dependence are:

A3 : If ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,yv)$, then ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(xu,y)$ (Weakening)

A4 : If ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y)$, then ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(xu,yu)$ (Augmentation)

A5 : If ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(xu,yv)$, then ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(ux,yv)$ and ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(xu,vy)$ (Permutation)

A6 : If ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y)$ and ${{=\mkern-1.2mu}_{q}\mkern-1.4mu}(y,v)$, where $p+q\le 1$, then ${{=\mkern-1.2mu}_{p+q}\mkern-1.4mu}(x,v)$ (Transitivity)

A7 : If ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y)$ and $p\le q\le 1$, then ${{=\mkern-1.2mu}_{q}\mkern-1.4mu}(x,y)$ (Monotonicity)

These axioms are always satisfied in finite teams. As to (A1), we observe that every team satisfies ${{=\mkern-1.2mu}}(xy, x)$ and so we can take ${Y}=\emptyset$ in Definition \[2\]. As to (A2), we observe that the empty team $\emptyset$ satisfies ${{=\mkern-1.2mu}}(x, y)$ and hence we can take ${Y}=X$ in Definition \[2\]. The rule (A3) can be verified as follows.
Suppose $X\setminus{Y} \models {{=\mkern-1.2mu}}(x, yz)$, where $|{Y}|\leq p\cdot |X|$, and the domain of $X$ (and of ${Y}$) includes $xuyz$, so that both ${{=\mkern-1.2mu}}(x, yz)$ and ${{=\mkern-1.2mu}}(xu, y)$ can be meaningfully checked for satisfiability in $X$. Suppose $s, s'\in X\setminus{Y}$ are such that $s(xu)=s'(xu)$. Then $s(x)=s'(x)$. Hence $s(yz)=s'(yz)$, whence finally $s(y)=s'(y)$. Let us then verify the validity of (A6). Suppose $X\setminus{Y} \models {{=\mkern-1.2mu}}(x, y)$ and $X\setminus{Z} \models {{=\mkern-1.2mu}}(y, z)$, where $|{Y}|\leq p\cdot|X|$ and $|Z|\leq q\cdot|X|$. Then $|{Y}\cup Z|\leq|{Y}|+|Z|\leq (p+q)\cdot |X|$ and $X\setminus ({Y}\cup Z)\models {{=\mkern-1.2mu}}(x, z)$. Finally, (A7) is trivial. The above axioms and rules are designed with finite derivations in mind. With infinitely many numbers $p$ we can have infinitary logical consequences (in finite teams), such as $$\mbox{$\{{{=\mkern-1.2mu}_{\frac{1}{n}}\mkern-1.4mu}(x,y) : n=1,2,\ldots\}\models{{=\mkern-1.2mu}_{0}\mkern-1.4mu}(x,y)$},$$ which do not follow by the axioms and rules (A1)-(A6)[^3]. We now focus on finite derivations and finite sets of approximate dependences. We prove the following Completeness Theorem[^4]: \[main\] Suppose $\Sigma$ is a finite set of approximate dependence atoms. Then ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}{(x,y)}$ follows from $\Sigma$ by the above axioms and rules if and only if every finite team satisfying $\Sigma$ also satisfies ${{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y)$. We first develop some auxiliary concepts and observations for the proof. Let $\tau$ be a pair $(\Sigma,{{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y))$, where $\Sigma$ is a finite set of approximate dependencies. For such $\tau$ let $Z_\tau$ be the finite set of all variables in $\Sigma\cup\{{{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y)\}$. Let $C_\tau$ be the smallest set containing $\Sigma$ and closed under the rules $(A1)-(A6)$ (but not necessarily under (A7)) for variables in $Z_\tau$.
Note that $C_\tau$ is finite. \[3\] $\Sigma\vdash{{=\mkern-1.2mu}_{t}\mkern-1.4mu}(u,v)$ iff $\exists r\le t({{=\mkern-1.2mu}_{r}\mkern-1.4mu}(u,v)\in C_\tau)$. The implication from right to left is trivial. For the converse it suffices to show that the set $$\Sigma'=\{{{=\mkern-1.2mu}_{t}\mkern-1.4mu}(u,v) : \exists r\le t({{=\mkern-1.2mu}_{r}\mkern-1.4mu}(u,v)\in C_\tau)\}$$ is closed under (A1)-(A7). Suppose $\tau=(\Sigma,{{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y))$. For any variable $y$ let $$d_\tau(y)=\min\{r\in[0,1]:{{=\mkern-1.2mu}_{r}\mkern-1.4mu}(x,y)\in C_\tau\}.$$ This definition makes sense because there are only finitely many ${{=\mkern-1.2mu}_{r}\mkern-1.4mu}(u,v)$ in $C_\tau$. Note that $d_\tau(x)=0$ by axiom (A1). By Lemma \[3\], $$d_\tau(y)=\min\{r\in[0,1]:\Sigma\vdash{{=\mkern-1.2mu}_{r}\mkern-1.4mu}(x,y)\}.$$ If $\Sigma\vdash{{=\mkern-1.2mu}_{p}\mkern-1.4mu}(u,v)$, then $d_\tau(v)-d_\tau(u)\le p$. Suppose $d_\tau(u)=r$, $d_\tau(v)=t$, $\Sigma\vdash{{=\mkern-1.2mu}_{r}\mkern-1.4mu}(x,u)$ ($r$ minimal) and $\Sigma\vdash{{=\mkern-1.2mu}_{t}\mkern-1.4mu}(x,v)$ ($t$ minimal). Now $\Sigma \vdash {{=\mkern-1.2mu}_{r}\mkern-1.4mu}(x,u)$ and $\Sigma\vdash{{=\mkern-1.2mu}_{p}\mkern-1.4mu}(u,v)$. Hence $\Sigma\vdash{{=\mkern-1.2mu}_{r+p}\mkern-1.4mu}(x,v)$ by (A6). By the minimality of $t$, $t\le r+p$. Hence $t-r\le p$. For a given $\Sigma$ there are only finitely many numbers $d_\tau(u)$, $u\in Z_\tau$, because $C_\tau$ is finite. Let $A_\tau$ consist of $p$ together with the numbers $d_\tau(u)$ for $u\in Z_\tau$. Let $n=1+\max\{\lceil 2/(a-b)\rceil :a,b\in A_\tau, a\ne b\}$.
We define a team $X_\tau$ of size $n$ as follows: $$X_\tau=\{s_0,\ldots, s_{n-1}\},$$ where for $\frac{m}{n}\le d_\tau(u)<\frac{m+1}{n}$ we let $$s_i(u)=\left\{ \begin{array}{ll} i, &\mbox{if } {i}\le m \\ m, &\mbox{if }{i}> m\\ \end{array}\right.$$

             $x$   …   $u$   …
----------- ----- --- ----- ---
 $s_0$        0    …    0    …
 $s_1$        0    …    1    …
 $s_2$        0    …    2    …
 …
 $s_{m}$      0    …   $m$   …
 …
 $s_{n-1}$    0    …   $m$   …

\[fre\] Suppose $X_\tau\models {{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y)$. Then $\Sigma\vdash{{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y)$. Suppose $X_\tau\models {{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y)$ but $\Sigma\nvdash{{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y)$. Now $d_\tau(y)>p$. Let $\frac{m}{n}\le d_\tau(y)<\frac{m+1}{n}$. One has to take all the assignments $s_i$, $i\le m-1$, away from $X_\tau$ in order for the remainder to satisfy ${{=\mkern-1.2mu}}(x,y)$. Hence $p\cdot n\ge m$, i.e. $p\ge \frac{m}{n}$. But we have chosen $n$ so that $1/n<d_\tau(y)-p$. Hence $$p<d_\tau(y)-\frac{1}{n}\le \frac{m+1}{n}-\frac{1}{n}=\frac{m}{n},$$ a contradiction. \[fer\] Suppose $\Sigma\vdash{{=\mkern-1.2mu}_{q}\mkern-1.4mu}(u,v)$. Then $X_\tau\models {{=\mkern-1.2mu}_{q}\mkern-1.4mu}(u,v)$. We know already $d_\tau(v)-d_\tau(u)\le q$. If $d_\tau(v)\le d_\tau(u)$, then $X_\tau\models{{=\mkern-1.2mu}}(u,v)$, and hence all the more $X_\tau\models {{=\mkern-1.2mu}_{q}\mkern-1.4mu}(u,v)$. Let us therefore assume $d_\tau(v)>d_\tau(u)$. Since $2/n<d_\tau(v)-d_\tau(u)$, there are $m$ and $k$ such that $$\frac{m}{n}\le d_\tau(u)<\frac{m+1}{n}<\frac{k}{n}\le d_\tau(v)<\frac{k+1}{n}.$$ In order to satisfy ${{=\mkern-1.2mu}}(u,v)$ one has to delete $k-m$ assignments from $X_\tau$. But this is fine, as $qn\ge (d_\tau(v)-d_\tau(u))n\ge k-d_\tau(u)n\ge k-m$. Lemmas \[fre\] and \[fer\] finish the proof of Theorem \[main\].
A problematic feature of the approximate dependence atom is that it is [**not local**]{}, that is, the truth of $X\models{{=\mkern-1.2mu}_{p}\mkern-1.4mu}(x,y)$ may depend on the values of the assignments in $X$ on variables $u$ not occurring in $x$ or $y$. To see this, consider the team $Y$ of Figure \[nl\]. Now $Y$ satisfies ${{=\mkern-1.2mu}_{\frac{1}{3}}\mkern-1.4mu}(x,y)$. Let $Z$ be the team $Y\restriction xy$. Now $Z$ does not satisfy ${{=\mkern-1.2mu}_{\frac{1}{3}}\mkern-1.4mu}(x,y)$, as Figure \[nl\] shows.

 $x$   $y$   $z$        $x$   $y$
----- ----- -----      ----- -----
  0     0     0          0     0
  0     0     1          0     1
  0     1     1

Figure \[nl\]: the team $Y$ (left) and its restriction $Z=Y\restriction xy$ (right).

This problem can be overcome by the introduction of [**multi-teams**]{}: A [*multi-team*]{} is a pair $(X,\tau)$, where $X$ is a set and $\tau$ is a function such that

1. $\dom(\tau)=X$,

2. if $i\in X$, then $\tau(i)$ is an assignment for one and the same set of variables.

This set of variables is denoted by $\dom(X)$. An ordinary team $X$ can be thought of as the multi-team $(X,\tau)$, where $\tau(i)=i$ for all $i\in X$. When approximate dependence is developed for multi-teams the non-locality phenomenon disappears (see Figure \[nlv\]). Moreover, the above Theorem \[main\] still holds. The canonical example of a team in dependence logic is the set of plays where a player is using a fixed strategy. Such a team satisfies certain dependence atoms reflecting commitments the player has made concerning the information he or she is using. If such dependence atoms hold only approximately, the player is allowed to make a small number of deviations from his or her commitments. Let us suppose the player is committed to $y$ being a function of $x$ during the game. Typically $y$ is a move of this player and $x$ is the information set available for this move. When we look at a table of plays where the player is following his or her strategy, we may observe that indeed $y$ is functionally determined by $x$ except in a small number of plays.
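The non-locality phenomenon can also be checked mechanically. The small self-contained Python sketch below (with our own helper names, not from the paper) shows that the three-row team $Y$ satisfies ${{=\mkern-1.2mu}_{\frac{1}{3}}\mkern-1.4mu}(x,y)$, because one of its three assignments may be deleted, whereas restricting $Y$ to the variables $x,y$ collapses the two assignments that agree on $x,y$ into one, leaving a two-element team from which no assignment may be deleted when $p=1/3$:

```python
from collections import Counter

def satisfies_approx_dep(team, x, y, p):
    """team: a list of distinct assignments (dicts); checks X |= =_p(x, y)."""
    groups = {}
    for s in team:
        key = tuple(s[v] for v in x)
        groups.setdefault(key, Counter())[tuple(s[v] for v in y)] += 1
    kept = sum(c.most_common(1)[0][1] for c in groups.values())
    return len(team) - kept <= p * len(team)

Y = [dict(x=0, y=0, z=0), dict(x=0, y=0, z=1), dict(x=0, y=1, z=1)]
# The restriction to {x, y} is a *set* of assignments: duplicates collapse.
Z = [dict(t) for t in {(('x', s['x']), ('y', s['y'])) for s in Y}]

assert satisfies_approx_dep(Y, ('x',), ('y',), 1/3)      # may delete 1 of 3 rows
assert len(Z) == 2
assert not satisfies_approx_dep(Z, ('x',), ('y',), 1/3)  # (1/3)*2 < 1: nothing may go
```

Treating $Y$ as a multi-team would keep all three rows of the restriction, and the atom would again be satisfied, which is exactly the locality repair described above.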
To evaluate the amount of such exceptional plays we can look at the table of all possible plays where the said strategy is used and count the numerical proportion of plays that have to be omitted in order that the promised functional dependence holds. We have here merely scratched the surface of approximate dependence. When approximate dependence atoms are added to first order logic we can express propositions such as "the predicate $P$ consists of half of all elements, give or take 5%" or "the predicates $P$ and $Q$ have the same number of elements, with a 1% margin of error". To preserve locality we have to introduce multi-teams. On the other hand, that opens the door to probabilistic teams: teams where every assignment is associated with the probability with which a randomly chosen element of the team is that very assignment. We will not pursue this idea further here.

 $x$   $y$   $z$        $x$   $y$
----- ----- -----      ----- -----
  0     0     0          0     0
  0     0     1          0     0
  0     1     1          0     1

Figure \[nlv\]: as a multi-team, the restriction of $Y$ to $xy$ retains duplicate assignments.

[1]{} Radim Bělohlávek and Vilém Vychodil. Data tables with similarity relations: Functional dependencies, complete rules and non-redundant bases. In Mong Lee, Kian-Lee Tan, and Vilas Wuwongse, editors, [*Database Systems for Advanced Applications*]{}, volume 3882 of [*Lecture Notes in Computer Science*]{}, pages 644–658. Springer Berlin Heidelberg, 2006. Jyrki Kivinen and Heikki Mannila. Approximate inference of functional dependencies from relations. , 149(1):129–149, 1995. Fourth International Conference on Database Theory (ICDT '92). Rohit Parikh and Jouko Väänänen. Finite information logic. , 134(1):83–93, 2005. Jouko Väänänen. , volume 70 of [*London Mathematical Society Student Texts*]{}. Cambridge University Press, Cambridge, 2007. [^1]: This paper was written while the author was visiting the Computer Science Department of the University of California, Santa Cruz. The author is grateful to his host Phokion Kolaitis for the invitation and for the hospitality.
The author is grateful for helpful discussions on this topic with P. Galliani, L. Hella, and P. Kolaitis. The author also thanks J. Kivinen, Jixue Liu, H. Toivonen and M. Warmuth for helpful suggestions. Research partially supported by grant 40734 of the Academy of Finland. [^2]: An essentially identical notion, as well as related approximate functional dependences, was introduced already in [@Kivinen1995129]. [^3]: We can use this example to encode the Halting Problem into the question of whether a recursive set of approximate dependence atoms logically implies a given approximate dependence atom. [^4]: Proposition A.3 of [@Kivinen1995129] is a kind of completeness theorem, in the spirit of the theorem below, for one-step derivations involving approximate dependence atoms.
--- abstract: 'There is renewed interest in formulating integration as a statistical inference problem, motivated by obtaining a full distribution over numerical error that can be propagated through subsequent computation. Current methods, such as Bayesian Quadrature, demonstrate impressive empirical performance but lack theoretical analysis. An important challenge is therefore to reconcile these probabilistic integrators with rigorous convergence guarantees. In this paper, we present the first probabilistic integrator that admits such theoretical treatment, called Frank-Wolfe Bayesian Quadrature (FWBQ). Under FWBQ, convergence to the true value of the integral is shown to be up to exponential and posterior contraction rates are proven to be up to super-exponential. In simulations, FWBQ is competitive with state-of-the-art methods and out-performs alternatives based on Frank-Wolfe optimisation. Our approach is applied to successfully quantify numerical error in the solution to a challenging Bayesian model choice problem in cellular biology.' author: - | François-Xavier Briol\ Department of Statistics\ University of Warwick\ `f-x.briol@warwick.ac.uk`\ Chris J. Oates\ School of Mathematical and Physical Sciences\ University of Technology, Sydney\ `christopher.oates@uts.edu.au`\ Mark Girolami\ Department of Statistics\ University of Warwick\ & The Alan Turing Institute for Data Science\ `m.girolami@warwick.ac.uk`\ Michael A. Osborne\ Department of Engineering Science\ University of Oxford\ `mosb@robots.ox.ac.uk`\ bibliography: - 'NIPS\_bib.bib' title: 'Frank-Wolfe Bayesian Quadrature: Probabilistic Integration with Theoretical Guarantees' --- Introduction ============ Computing integrals is a core challenge in machine learning and numerical methods play a central role in this area. This can be problematic when a numerical integration routine is repeatedly called, maybe millions of times, within a larger computational pipeline. 
In such situations, the cumulative impact of numerical errors can be unclear, especially in cases where the error has a non-trivial structural component. One solution is to model the numerical error statistically and to propagate this source of uncertainty through subsequent computations. Conversely, an understanding of how errors arise and propagate can enable the efficient focusing of computational resources upon the most challenging numerical integrals in a pipeline. Classical numerical integration schemes do not account for prior information on the integrand and, as a consequence, can require an excessive number of function evaluations to obtain a prescribed level of accuracy [@OHagan1984]. Alternatives such as Quasi-Monte Carlo (QMC) can exploit knowledge on the smoothness of the integrand to obtain optimal convergence rates [@Dick2010]. However these optimal rates can only hold on sub-sequences of sample sizes $n$, a consequence of the fact that all function evaluations are weighted equally in the estimator [@Owen2014]. A modern approach that avoids this problem is to consider arbitrarily weighted combinations of function values; the so-called *quadrature rules* (also called cubature rules). Whilst quadrature rules with non-equal weights have received comparatively little theoretical attention, it is known that the extra flexibility given by arbitrary weights can lead to extremely accurate approximations in many settings (see applications to image de-noising [@Chen2015] and mental simulation in psychology [@Hamrick2013mental]). Probabilistic numerics, introduced in the seminal paper of [@Diaconis1988], aims at re-interpreting numerical tasks as inference tasks that are amenable to statistical analysis.[^1] Recent developments include probabilistic solvers for linear systems [@Hennig2015solvers] and differential equations [@Conrad2015; @Schober2014]. 
For the task of computing integrals, Bayesian Quadrature (BQ) [@OHagan1991] and more recent work by [@Oates2015] provide probabilistic numerics methods that produce a full posterior distribution on the output of numerical schemes. One advantage of this approach is that we can propagate uncertainty through all subsequent computations to explicitly model the impact of numerical error [@Hennig2015ProbNum]. Contrast this with chaining together classical error bounds; the result in such cases will typically be a weak bound that provides no insight into the error structure. At present, a significant shortcoming of these methods is the absence of theoretical results relating to rates of posterior contraction. This is unsatisfying and has likely hindered the adoption of probabilistic approaches to integration, since it is not clear that the induced posteriors represent a sensible quantification of the numerical error (by classical, frequentist standards). This paper establishes convergence rates for a new probabilistic approach to integration. Our results thus overcome a key perceived weakness associated with probabilistic numerics in the quadrature setting. Our starting point is recent work by [@Bach2012], who cast the design of quadrature rules as a problem in convex optimisation that can be solved using the Frank-Wolfe (FW) algorithm. We propose a hybrid approach of [@Bach2012] with BQ, taking the form of a quadrature rule, that (i) carries a full probabilistic interpretation, (ii) is amenable to rigorous theoretical analysis, and (iii) converges orders-of-magnitude faster, empirically, compared with the original approaches in [@Bach2012]. In particular, we prove that super-exponential rates hold for posterior contraction (concentration of the posterior probability mass on the true value of the integral), showing that the posterior distribution provides a sensible and effective quantification of the uncertainty arising from numerical error. 
The methodology is explored in simulations and also applied to a challenging model selection problem from cellular biology, where numerical error could lead to mis-allocation of expensive resources. Background {#section:sigmapoint} ========== Quadrature and Cubature Methods ------------------------------- Let $\mathcal{X} \subseteq \mathbb{R}^d$ be a measurable space such that $d \in \mathbb{N}_{+}$ and consider a probability density $p(x)$ defined with respect to the Lebesgue measure on $\mathcal{X}$. This paper focuses on computing integrals of the form $\int f(x) p(x) \mathrm{d}x$ for a test function $f:\mathcal{X} \rightarrow \mathbb{R}$ where, for simplicity, we assume $f$ is square-integrable with respect to $p(x)$. A *quadrature rule* approximates such integrals as a weighted sum of function values at some design points $\{x_i\}_{i=1}^n \subset \mathcal{X}$: $$\int_\mathcal{X} f(x) p(x) \mathrm{d}x \approx \sum_{i=1}^n w_i f(x_i).$$ Viewing integrals as projections, we write $p[f]$ for the left-hand side and $\hat{p}[f]$ for the right-hand side, where $\hat{p} = \sum_{i=1}^n w_i \delta(x_i)$ and $\delta(x_i)$ is a Dirac measure at $x_i$. Note that $\hat{p}$ may not be a probability distribution; in fact, weights $\{w_i\}_{i=1}^n$ do not have to sum to one or be non-negative. Quadrature rules can be extended to multivariate functions $f:\mathcal{X} \rightarrow \mathbb{R}^d$ by taking each component in turn. There are many ways of choosing combinations $\{x_i,w_i\}_{i=1}^n$ in the literature. For example, taking weights to be $w_i = 1/n$ with points $\{x_i\}_{i=1}^n$ drawn independently from the probability distribution $p(x)$ recovers basic Monte Carlo integration. The case with weights $w_i= 1/n$, but with points chosen with respect to some specific (possibly deterministic) schemes includes kernel herding [@Chen2010] and Quasi-Monte Carlo (QMC) [@Dick2010]. 
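For concreteness, basic Monte Carlo in this notation is the quadrature rule with weights $w_i = 1/n$ and random points. A small sketch (our own toy example, with a hypothetical integrand):

```python
import random

def quadrature(f, points, weights):
    """Generic quadrature rule: p[f] is approximated by sum_i w_i f(x_i)."""
    return sum(w * f(x) for x, w in zip(points, weights))

random.seed(0)
n = 100_000
# Take p(x) = N(0, 1) and f(x) = x^2, so the true integral is E[x^2] = 1.
pts = [random.gauss(0.0, 1.0) for _ in range(n)]
est = quadrature(lambda x: x * x, pts, [1.0 / n] * n)  # Monte Carlo: w_i = 1/n
```

The other schemes mentioned above differ only in how `points` and `weights` are chosen; the estimator itself is always this weighted sum.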
In Bayesian Quadrature, the points $\{x_i\}_{i=1}^n$ are chosen to minimise a posterior variance, with weights $\{w_i\}_{i=1}^n$ arising from a posterior probability distribution. Classical error analysis for quadrature rules is naturally couched in terms of minimising the worst-case estimation error. Let $\mathcal{H}$ be a Hilbert space of functions $f: \mathcal{X}\rightarrow \mathbb{R}$, equipped with the inner product $\langle \cdot,\cdot \rangle_{\mathcal{H}}$ and associated norm $\|\cdot\|_{\mathcal{H}}$. We define the *maximum mean discrepancy* (MMD) as: $$\text{MMD}\big(\{x_i,w_i\}_{i=1}^n \big) {\coloneqq}\sup_{\substack{f \in \mathcal{H}: \|f\|_{\mathcal{H}}=1}} \big|p[f] - \hat{p}[f] \big|.$$ The reader can refer to [@Sriperumbudur2009] for conditions on $\mathcal{H}$ that are needed for the existence of the MMD. The rate at which the MMD decreases with the number of samples $n$ is referred to as the ‘convergence rate’ of the quadrature rule. For Monte Carlo, the MMD decreases with the slow rate of $\mathcal{O}_P(n^{-1/2})$ (where the subscript $P$ specifies that the convergence is in probability). Let $\mathcal{H}$ be a RKHS with reproducing kernel $k: \mathcal{X}\times \mathcal{X} \rightarrow \mathbb{R}$ and denote the corresponding canonical feature map by $\Phi(x) = k(\cdot,x)$, so that the mean element is given by $\mu_p(x) = p[\Phi(x)] \in \mathcal{H}$. Then, following [@Sriperumbudur2009] $$\text{MMD}\big(\{x_i,w_i\}_{i=1}^n \big) = \| \mu_p - \mu_{\hat{p}} \|_{\mathcal{H}}.$$ This shows that to obtain low integration error in the RKHS $\mathcal{H}$, one only needs to obtain a good approximation of its mean element $\mu_p$ (as $\forall f \in \mathcal{H}$: $p[f] = \langle f , \mu_p \rangle_{\mathcal{H}}$). Establishing theoretical results for such quadrature rules is an active area of research [@Bach2015]. 
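For a concrete case where the mean element is available in closed form (an illustrative setup we chose, not from the paper), take $p = \mathcal{N}(0,1)$ on $\mathbb{R}$ and the Gaussian kernel $k(x,y) = \exp(-(x-y)^2/2\ell^2)$; then $\mu_p(x) = \ell(\ell^2+1)^{-1/2}\exp(-x^2/2(\ell^2+1))$ and $p[\mu_p] = \ell(\ell^2+2)^{-1/2}$. Expanding the squared norm gives $\mathrm{MMD}^2 = p[\mu_p] - 2\sum_i w_i\mu_p(x_i) + \sum_{i,j} w_iw_jk(x_i,x_j)$, which the following sketch evaluates:

```python
import math, random

ell = 1.0  # length-scale of the Gaussian kernel

def k(x, y):
    return math.exp(-(x - y) ** 2 / (2 * ell ** 2))

def mu_p(x):
    # Kernel mean embedding of p = N(0, 1): a Gaussian convolution in closed form.
    return ell / math.sqrt(ell ** 2 + 1) * math.exp(-x ** 2 / (2 * (ell ** 2 + 1)))

p_mu_p = ell / math.sqrt(ell ** 2 + 2)  # p[mu_p] = ||mu_p||_H^2

def mmd2(points, weights):
    """Squared worst-case error of the rule {x_i, w_i}_{i=1}^n in the RKHS of k."""
    cross = sum(w * mu_p(x) for x, w in zip(points, weights))
    gram = sum(wi * wj * k(xi, xj)
               for xi, wi in zip(points, weights)
               for xj, wj in zip(points, weights))
    return p_mu_p - 2.0 * cross + gram

random.seed(1)
errs = []
for n in (10, 100, 1000):
    pts = [random.gauss(0.0, 1.0) for _ in range(n)]
    errs.append(mmd2(pts, [1.0 / n] * n))  # Monte Carlo: MMD^2 decays roughly like 1/n
```

The decay of `errs` illustrates the $\mathcal{O}_P(n^{-1/2})$ Monte Carlo rate quoted above (the squared MMD shrinks roughly like $1/n$).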
Bayesian Quadrature {#subsec:BQ} ------------------- Bayesian Quadrature (BQ) was originally introduced in [@OHagan1991] and later revisited by [@Rasmussen2003; @Gunter2014] and [@Osborne2012]. The main idea is to place a functional prior on the integrand $f$, then update this prior through Bayes’ theorem by conditioning on both samples $\{x_i\}_{i=1}^n$ and function evaluations at those sample points $\{\mathrm{f}_i\}_{i=1}^n$ where $\mathrm{f}_i = f(x_i)$. This induces a full posterior distribution over functions $f$ and hence over the value of the integral $p[f]$. The most common implementation assumes a Gaussian Process (GP) prior $f \sim \mathcal{GP}(0,k)$. A useful property motivating the use of GPs is that linear projection preserves normality, so that the posterior distribution for the integral $p[f]$ is also a Gaussian, characterised by its mean and covariance. A natural estimate of the integral $p[f]$ is given by the mean of this posterior distribution, which can be compactly written as $$\hat{p}_{\text{BQ}}[f] = \mathrm{z}^T K^{-1} \mathrm{f}. \label{BQmeaneq}$$ where $\mathrm{z}_i = \mu_p(x_i)$ and $K_{ij} = k(x_i,x_j)$. Notice that this estimator takes the form of a quadrature rule with weights $\mathrm{w}^{\text{BQ}} =\mathrm{z}^T K^{-1}$. Recently, [@Sarkka2015] showed how specific choices of kernel and design points for BQ can recover classical quadrature rules. This begs the question of how to select design points $\{x_i\}_{i=1}^n$. A particularly natural approach aims to minimise the posterior uncertainty over the integral $p[f]$, which was shown in [@Huszar2012 Prop. 1] to equal: $$v_{\text{BQ}}\big(\{x_i\}_{i=1}^n \big) \; = \; p [\mu_p] - \mathrm{z}^T K^{-1} \mathrm{z} \; = \; \text{MMD}^2\big(\{x_i,w_i^{\text{BQ}}\}_{i=1}^n \big). \label{eq:variance}$$ Thus, in the RKHS setting, minimising the posterior variance corresponds to minimising the worst case error of the quadrature rule. 
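As a concrete illustration of the BQ weights $\mathrm{w}^{\text{BQ}} = \mathrm{z}^T K^{-1}$, consider (an assumed setup of ours, not from the paper) $p = \mathcal{N}(0,1)$ on $\mathbb{R}$ with the Gaussian kernel, for which $\mathrm{z}_i = \mu_p(x_i)$ has a closed form. The sketch solves $K\mathrm{w} = \mathrm{z}$ with a hand-rolled elimination to stay dependency-free:

```python
import math

ell = 1.0

def k(x, y):
    return math.exp(-(x - y) ** 2 / (2 * ell ** 2))

def mu_p(x):  # closed-form kernel mean z_i = mu_p(x_i) for p = N(0, 1)
    return ell / math.sqrt(ell ** 2 + 1) * math.exp(-x ** 2 / (2 * (ell ** 2 + 1)))

p_mu_p = ell / math.sqrt(ell ** 2 + 2)  # p[mu_p]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for the small system A w = b."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def bq_weights(points):
    K = [[k(xi, xj) for xj in points] for xi in points]
    z = [mu_p(x) for x in points]
    return solve(K, z)  # w_BQ = K^{-1} z

pts = [-2.0, -1.0, 0.0, 1.0, 2.0]
w = bq_weights(pts)
# Posterior variance p[mu_p] - z^T K^{-1} z, i.e. the squared MMD of the BQ rule.
post_var = p_mu_p - sum(wi * mu_p(x) for wi, x in zip(w, pts))
```

Note that, unlike Monte Carlo weights, the entries of `w` need not be non-negative nor sum to one.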
Below we refer to Optimal BQ (OBQ) as BQ coupled with design points $\{x_i^\text{OBQ}\}_{i=1}^n$ chosen to globally minimise the posterior variance \eqref{eq:variance}. We also call Sequential BQ (SBQ) the algorithm that greedily selects design points to give the greatest decrease in posterior variance at each iteration. OBQ will give improved results over SBQ, but cannot be implemented in general, whereas SBQ is comparatively straightforward to implement. There are currently no theoretical results establishing the convergence of either BQ, OBQ or SBQ. [*Remark:*]{} The posterior variance \eqref{eq:variance} is independent of the observed function values $\mathrm{f}$. As such, no active learning is possible in SBQ (i.e. surprising function values never cause a revision of a planned sampling schedule). This is not always the case: for example, [@Gunter2014] approximately encodes non-negativity of $f$ into BQ, which leads to a dependence on $\mathrm{f}$ in the posterior variance. In this case sequential selection becomes an [*active*]{} strategy that outperforms batch selection in general.
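Because the posterior variance does not involve $f$, SBQ reduces to a purely geometric greedy search: at each step, add the candidate point whose inclusion most reduces $p[\mu_p] - \mathrm{z}^TK^{-1}\mathrm{z}$. A naive, self-contained sketch in an illustrative Gaussian setup of our choosing ($p=\mathcal{N}(0,1)$, Gaussian kernel, finite candidate pool):

```python
import math, random

ell = 1.0
k = lambda x, y: math.exp(-(x - y) ** 2 / (2 * ell ** 2))
mu_p = lambda x: ell / math.sqrt(ell ** 2 + 1) * math.exp(-x ** 2 / (2 * (ell ** 2 + 1)))
p_mu_p = ell / math.sqrt(ell ** 2 + 2)

def posterior_variance(S):
    """p[mu_p] - z^T K^{-1} z for design points S, via a Gauss-Jordan solve."""
    n = len(S)
    # Augmented system [K | z], with a tiny diagonal jitter for conditioning.
    M = [[k(S[i], S[j]) + (1e-10 if i == j else 0.0) for j in range(n)] + [mu_p(S[i])]
         for i in range(n)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [u - f * v for u, v in zip(M[r], M[c])]
    w = [M[i][n] / M[i][i] for i in range(n)]
    return p_mu_p - sum(wi * mu_p(x) for wi, x in zip(w, S))

random.seed(2)
candidates = [random.gauss(0.0, 1.0) for _ in range(50)]
S = []
for _ in range(5):
    # Greedy step: add the candidate giving the greatest variance decrease.
    S.append(min((c for c in candidates if c not in S),
                 key=lambda c: posterior_variance(S + [c])))
```

Each greedy step here costs a fresh linear solve per candidate; practical implementations would instead update $K^{-1}$ incrementally.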
It considers problems of the form $\min_{g \in \mathcal{G}} J(g)$ where the function $J:\mathcal{G}\rightarrow\mathbb{R}$ is convex and continuously differentiable. A particular case of interest in this paper will be when the domain $\mathcal{G}$ is a compact and convex space of functions, as recently investigated in [@Jaggi2013]. These assumptions imply the existence of a solution to the optimization problem. At each iteration $i$, the FW algorithm computes a linearisation of the objective function $J$ at the previous state $g_{i-1} \in \mathcal{G}$ along its gradient $(DJ)(g_{i-1})$ and selects an 'atom' $\bar{g}_i \in \mathcal{G}$ that minimises the inner product between a state $g$ and the gradient $(DJ)(g_{i-1})$. The new state $g_{i} \in \mathcal{G}$ is then a convex combination of the previous state $g_{i-1}$ and of the atom $\bar{g}_{i}$. This convex combination depends on a step-size $\rho_i$, which is pre-determined; different versions of the algorithm may have different step-size sequences.

Input: function $J$, initial state $g_1=\bar{g}_1 \in \mathcal{G}$ (and, for FW only: step-size sequence $\{\rho_i\}_{i=1}^n$). At each iteration $i$:

1. Compute $\bar{g}_i = \text{argmin}_{g \in \mathcal{G}} \big\langle g , (DJ)(g_{i-1}) \big\rangle_{\times}$

2. Update $g_{i} = (1 - \rho_i) g_{i-1} + \rho_i \bar{g}_i$

Our goal in quadrature is to approximate the mean element $\mu_p$. Recently, [@Bach2012] proposed to frame integration as a FW optimisation problem. Here, the domain $\mathcal{G} \subseteq \mathcal{H}$ is a space of functions and the objective function is taken to be: $$J(g) = \frac{1}{2}\big\| g - \mu_p \big\|^2_{\mathcal{H}}.$$ Minimising $J$ gives an approximation of the mean element, and $J$ takes the form of half of the posterior variance (equivalently, half of the MMD$^2$). In this functional approximation setting, minimisation of $J$ is carried out over $\mathcal{G} = \mathcal{M}$, the marginal polytope of the RKHS $\mathcal{H}$.
The marginal polytope $\mathcal{M}$ is defined as the closure of the convex hull of $\Phi(\mathcal{X})$, so that in particular $\mu_p \in \mathcal{M}$. Assuming, as in [@Lacoste-Julien2015], that $\Phi(x)$ is uniformly bounded in feature space (i.e. $\exists R>0: \forall x \in \mathcal{X}$, $\|\Phi(x)\|_{\mathcal{H}} \leq R$), $\mathcal{M}$ is a closed and bounded set and can be optimised over. In order to define the algorithm rigorously in this case, we introduce the Fréchet derivative of $J$, denoted $DJ$: writing $\mathcal{H}^*$ for the dual space of $\mathcal{H}$, $DJ$ is the unique map $DJ:\mathcal{H} \rightarrow \mathcal{H}^*$ such that for each $g \in \mathcal{H}$, $(DJ)(g)$ is the function mapping $h \in \mathcal{H}$ to $(DJ)(g)(h) = \big\langle g - \mu_p, h \big\rangle_\mathcal{H}$. We also introduce the bilinear map $\langle \cdot, \cdot \rangle_{\times}: \mathcal{H} \times \mathcal{H}^* \rightarrow \mathbb{R}$ which, for $F \in \mathcal{H}^*$ given by $F(g) = \langle g, f \rangle_\mathcal{H}$, is defined by $\langle h, F \rangle_{\times} = \langle h, f \rangle_{\mathcal{H}}$. A particular advantage of this method is that it leads to 'sparse' solutions which are linear combinations of the atoms $\{\bar{g}_i\}_{i=1}^n$ [@Bach2012]. In particular this provides a weighted estimate for the mean element: $$\label{eq:FWsparse} \hat{\mu}_{\text{FW}} {\coloneqq}g_n = \sum_{i=1}^n \Big( \rho_{i-1} \prod_{j=i+1}^{n} \big( 1 - \rho_{j-1} \big) \Big) \bar{g}_i {\coloneqq}\sum_{i=1}^n w_i^{\text{FW}} \bar{g}_i ,$$ where by default $\rho_0 = 1$, which leads to all $w_i^{\text{FW}} \in [0,1]$ when $\rho_i = 1/(i+1)$. A typical sequence of approximations to the mean element is shown in Fig. \[fig:designpoints\] (left), demonstrating that the approximation quickly converges to the ground truth (in black).
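A quick sanity check of the weight formula: with $\rho_0 = 1$ and $\rho_i = 1/(i+1)$ the product telescopes, giving $w_i^{\text{FW}} = \frac{1}{i}\cdot\frac{i}{n} = \frac{1}{n}$, so FW with the default step sizes recovers the uniform weights of kernel herding. A small sketch (our own helper, not from the paper):

```python
from math import prod, isclose

def fw_weights(n, rho):
    """w_i = rho_{i-1} * prod_{j=i+1}^{n} (1 - rho_{j-1}),  i = 1, ..., n."""
    return [rho(i - 1) * prod(1 - rho(j - 1) for j in range(i + 1, n + 1))
            for i in range(1, n + 1)]

rho = lambda i: 1.0 / (i + 1)  # default FW step sizes; note rho_0 = 1
for n in (1, 5, 20):
    w = fw_weights(n, rho)
    assert all(isclose(wi, 1.0 / n) for wi in w)  # uniform weights: kernel herding
    assert isclose(sum(w), 1.0)                   # a convex combination
```

Other step-size sequences (e.g. those found by line search in FWLS) produce non-uniform, but still convex, weight vectors via the same formula.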
Since minimisation of a linear function can be restricted to extreme points of the domain, the atoms will be of the form $\bar{g}_i = \Phi(x_i^{\text{FW}}) = k(\cdot ,x_i^{\text{FW}})$ for some $x_i^{\text{FW}} \in \mathcal{X}$. The minimisation in $g$ over $\mathcal{G}$ from step 2 in Algorithm \[alg:FWalgorithm\] therefore becomes a minimisation in $x$ over $\mathcal{X}$, so the algorithm provides us with design points. In practice, at each iteration $i$, the FW algorithm selects a design point $x_i^{\text{FW}} \in \mathcal{X}$ which induces an atom $\bar{g}_i$ and gives us an approximation of the mean element $\mu_p$. We denote by $\hat{\mu}_{\text{FW}}$ this approximation after $n$ iterations. Using the reproducing property, we can show that the FW estimate is a quadrature rule: $$\hat{p}_{\text{FW}}[f] {\coloneqq}\big\langle f,\hat{\mu}_{\text{FW}} \big\rangle_{\mathcal{H}} = \Big\langle f , \sum_{i=1}^n w_i^{\text{FW}} \bar{g}_i \Big\rangle_{\mathcal{H}} = \sum_{i=1}^n w_i^{\text{FW}} \big\langle f , k(\cdot,x_i^{\text{FW}}) \big\rangle_{\mathcal{H}} = \sum_{i=1}^n w_i^{\text{FW}} f(x_i^{\text{FW}}).$$ The total computational cost for FW is $\mathcal{O}(n^2)$. An extension known as FW with Line Search (FWLS) uses a line-search method to find the optimal step-size $\rho_i$ at each iteration (see Alg. \[alg:FWalgorithm\]). Once again, the approximation obtained by FWLS has a sparse expression as a convex combination of all the previously visited states and we obtain an associated quadrature rule. FWLS has theoretical convergence rates that can be stronger than those of standard versions of FW, but has a computational cost of $\mathcal{O}(n^3)$. The authors in [@Garber2015] provide a survey of FW-based algorithms and their convergence rates under different regularity conditions on the objective function and domain of optimisation. [*Remark:*]{} The FW design points $\{x_i^{\text{FW}}\}_{i=1}^n$ are generally not available in closed form.
We follow mainstream literature by selecting, at each iteration, the point that minimises the MMD over a finite collection of $M$ points, drawn i.i.d. from $p(x)$. The authors in [@Lacoste-Julien2015] proved that this approximation adds an $\mathcal{O}(M^{-1/4})$ term to the MMD, so that theoretical results on FW convergence continue to apply provided that $M(n) \rightarrow \infty$ sufficiently quickly. Appendix A provides full details. In practice, one may also make use of a numerical optimisation scheme in order to select the points. A Hybrid Approach: Frank-Wolfe Bayesian Quadrature {#section:BayesianFW} ================================================== To combine the advantages of a probabilistic integrator with a formal convergence theory, we propose Frank-Wolfe Bayesian Quadrature (FWBQ). In FWBQ, we first select design points $\{x_i^{\text{FW}}\}_{i=1}^n$ using the FW algorithm. However, when computing the quadrature approximation, instead of using the usual FW weights $\{w_i^{\text{FW}}\}_{i=1}^n$, we use the weights $\{w_i^{\text{BQ}}\}_{i=1}^n$ provided by BQ. We denote this quadrature rule by $\hat{p}_{\text{FWBQ}}$ and also consider $\hat{p}_{\text{FWLSBQ}}$, which uses FWLS in place of FW. As we show below, these hybrid estimators (i) carry the Bayesian interpretation of Sec. \[subsec:BQ\], (ii) permit a rigorous theoretical analysis, and (iii) outperform existing FW quadrature rules by orders of magnitude in simulations. FWBQ is hence ideally suited to probabilistic numerics applications. For these theoretical results we assume that $f$ belongs to a finite-dimensional RKHS $\mathcal{H}$, in line with recent literature [@Bach2012; @Garber2015; @Jaggi2013; @Lacoste-Julien2015]. We further assume that $\mathcal{X}$ is a compact subset of $\mathbb{R}^d$, that $p(x) > 0$ $\forall x \in \mathcal{X}$ and that $k$ is continuous on $\mathcal{X} \times \mathcal{X}$.
Under these hypotheses, Theorem \[theorem:posteriormean\] establishes consistency of the posterior mean, while Theorem \[theo1\] establishes contraction for the posterior distribution. \[theorem:posteriormean\] The posterior mean $\hat{p}_{\emph{FWBQ}}[f]$ converges to the true integral $p[f]$ at the following rates: $$\Big|p[f] - \hat{p}_{\emph{FWBQ}}[f]\Big| \leq \text{MMD}\big(\{x_i,w_i\}_{i=1}^n\big) \leq \left\{ \begin{array}{cl} \frac{2 D^2}{R} n^{-1} & \text{for FWBQ}\\ \sqrt{2}D \exp(-\frac{R^2}{2 D^2} n) & \text{for FWLSBQ} \end{array} \right.$$ where the FWBQ uses step-size $\rho_i = 1/(i+1)$, $D \in (0,\infty)$ is the diameter of the marginal polytope $\mathcal{M}$ and $R \in (0,\infty)$ gives the radius of the smallest ball of center $\mu_p$ included in $\mathcal{M}$. Note that all the proofs of this paper can be found in Appendix B. An immediate corollary of Theorem \[theorem:posteriormean\] is that FWLSBQ has an asymptotic error which decays exponentially in $n$ and is therefore superior to that of any QMC estimator [@Dick2010]. This is not a contradiction: recall that QMC restricts attention to uniform weights, while FWLSBQ is able to propose arbitrary weightings. In addition we highlight a robustness property: even when the assumptions of this section do not hold, one still obtains at least a rate $\mathcal{O}_P (n^{-1/2})$ for the posterior mean using either FWBQ or FWLSBQ [@Dunn1980]. [*Remark*]{}: The choice of kernel affects the convergence of the FWBQ method [@Hennig2015ProbNum]. Clearly, we expect faster convergence if the function we are integrating is ‘close’ to the space of functions induced by our kernel. Indeed, the kernel specifies the geometry of the marginal polytope $\mathcal{M}$, which in turn directly influences the rate constants $R$ and $D$ associated with FW convex optimisation. Consistency is only a stepping stone towards our main contribution, which establishes posterior contraction rates for FWBQ.
Posterior contraction is important as these results justify, for the first time, the probabilistic numerics approach to integration; that is, we show that the [*full*]{} posterior distribution is a sensible quantification (at least asymptotically) of numerical error in the integration routine: \[theo1\] Let $S \subseteq \mathbb{R}$ be an open neighbourhood of the true integral $p[f]$ and let $\gamma = \inf_{r \in S^c} | r - p[f]| >0$. Then the posterior probability mass on $S^c = \mathbb{R} \setminus S$ vanishes at a rate: $$\emph{prob}(S^c) \leq \left\{ \begin{array}{cl} \frac{2\sqrt{2}D^2}{\sqrt{\pi}R \gamma} n^{-1} \exp \Big(- \frac{\gamma^2 R^2}{8 D^4} n^2 \Big) & \text{for FWBQ} \\ \frac{2 D}{\sqrt{\pi} \gamma} \exp\Big( - \frac{R^2}{2 D^2} n - \frac{\gamma^2}{2\sqrt{2}D} \exp\big( \frac{R^2}{2D^2} n\big)\Big) & \text{for FWLSBQ} \end{array} \right.$$ where the FWBQ uses step-size $\rho_i = 1/(i+1)$, $D \in (0,\infty)$ is the diameter of the marginal polytope $\mathcal{M}$ and $R \in (0,\infty)$ gives the radius of the smallest ball of center $\mu_p$ included in $\mathcal{M}$. The contraction rates are exponential for FWBQ and super-exponential for FWLSBQ, and thus the two algorithms enjoy both a probabilistic interpretation and rigorous theoretical guarantees. A notable corollary is that OBQ enjoys the same rates as FWLSBQ, resolving a conjecture by Tony O’Hagan that OBQ converges exponentially \[personal communication\]: \[theorem:OBQ\_rates\] The consistency and contraction rates obtained for FWLSBQ apply also to OBQ. Experimental Results {#section:Experimental_Results} ==================== Simulation Study ---------------- To facilitate the experiments in this paper we followed [@Bach2015; @Bach2012; @Rasmussen2003; @Lacoste-Julien2015] and employed an exponentiated-quadratic (EQ) kernel $k(x, x') {\coloneqq}\lambda^{2} \exp ( -\nicefrac{1}{2 \sigma^2} \|x-x'\|^2_2)$.
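As a concrete illustration (our own sketch; the hyper-parameter values $(\lambda, \sigma) = (1, 0.8)$ are those used in the simulation study), the EQ Gram matrix can be evaluated as:

```python
import numpy as np

def eq_kernel(X, Y, lam=1.0, sigma=0.8):
    """Exponentiated-quadratic kernel
    k(x, x') = lam^2 * exp(-||x - x'||^2 / (2 sigma^2)).

    X : (n, d) and Y : (m, d) arrays; returns the (n, m) Gram matrix."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return lam ** 2 * np.exp(-sq / (2.0 * sigma ** 2))
```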
This corresponds to an infinite-dimensional RKHS, not covered by our theory; nevertheless, we note that all simulations are practically finite-dimensional due to rounding at machine precision. See Appendix E for a finite-dimensional approximation using random Fourier features. EQ kernels are popular in the BQ literature as, when $p$ is a mixture of Gaussians, the mean element $\mu_p$ is analytically tractable (see Appendix C). Some other $(p,k)$ pairs that produce analytic mean elements are discussed in [@Bach2015]. For this simulation study, we took $p(x)$ to be a 20-component mixture of 2D-Gaussian distributions. Monte Carlo (MC) is often used for such distributions but has a slow convergence rate in $\mathcal{O}_P(n^{-1/2})$. FW and FWLS are known to converge more quickly and are in this sense preferable to MC [@Bach2012]. In our simulations (Fig. \[fig:sim study\], left), both our novel methods FWBQ and FWLSBQ decreased the MMD much faster than the FW/FWLS methods of [@Bach2012]; the same kernel hyper-parameters $(\lambda,\sigma) = (1,0.8)$ were employed for all methods to ensure a fair comparison. This suggests that the best quadrature rules correspond to elements [*outside*]{} the convex hull of $\{\Phi(x_i)\}_{i=1}^n$. Such rules, including BQ, often assign negative weights to features (Fig. S1 right, Appendix D). The principal advantage of our proposed methods is that they reconcile theoretical tractability with a fully probabilistic interpretation. For illustration, Fig. \[fig:sim study\] (right) plots the posterior uncertainty due to numerical error for a typical integration problem based on this $p(x)$. In-depth empirical studies of such posteriors exist already in the literature and the reader is referred to [@Chen2015; @Hamrick2013mental; @OHagan1991] for details. Beyond these theoretically tractable integrators, SBQ seems to give even better performance as $n$ increases.
An intuitive explanation is that SBQ picks $\{x_i\}_{i=1}^n$ to minimise the MMD whereas FWBQ and FWLSBQ only minimise an approximation of the MMD (its linearisation along $DJ$). In addition, the SBQ weights are optimal at each iteration, which is not true for FWBQ and FWLSBQ. We conjecture that Theorems \[theorem:posteriormean\] and \[theo1\] provide upper bounds on the rates of SBQ. This conjecture is partly supported by Fig. \[fig:designpoints\] (right), which shows that SBQ selects similar design points to FW/FWLS (but weights them optimally). Note also that FWBQ and FWLSBQ give very similar results. This is not surprising, as FWLS enjoys no stronger guarantees than FW in infinite-dimensional RKHSs [@Jaggi2013]. Quantifying Numerical Error in a Proteomic Model Selection Problem ------------------------------------------------------------------ A topical bioinformatics application that extends recent work by [@Oates2014] is presented. The objective is to select among a set of candidate models $\{M_i\}_{i=1}^m$ for protein regulation. This choice is based on a dataset $\mathcal{D}$ of protein expression levels, in order to determine a ‘most plausible’ biological hypothesis for further experimental investigation. Each $M_i$ is specified by a vector of kinetic parameters $\theta_i$ (full details in Appendix D). Bayesian model selection requires that these parameters are integrated out against a prior $p(\theta_i)$ to obtain marginal likelihood terms $L(M_i) = \int p(\mathcal{D}|\theta_i) p(\theta_i) \mathrm{d}\theta_i$. Our focus here is on obtaining the [*maximum a posteriori*]{} (MAP) model $M_j$, defined as the maximiser of the posterior model probability $L(M_j) / \sum_{i=1}^m L(M_i)$ (where we have assumed a uniform prior over model space). Numerical error in the computation of each term $L(M_i)$, if unaccounted for, could cause us to return a model $M_k$ that is different from the true MAP estimate $M_j$ and lead to the mis-allocation of valuable experimental resources.
The problem is quickly exacerbated when the number $m$ of models increases, as there are more opportunities for one of the $L(M_i)$ terms to be ‘too large’ due to numerical error. In [@Oates2014], the number $m$ of models was combinatorial in the number of protein kinases measured in a high-throughput assay (currently $\sim 10^2$ but in principle up to $\sim 10^4$). This led [@Oates2014] to deploy substantial computing resources to ensure that numerical error in each estimate of $L(M_i)$ was individually controlled. Probabilistic numerics provides a more elegant and efficient solution: at any given stage, we have a fully probabilistic quantification of our uncertainty in each of the integrals $L(M_i)$, shown to be sensible both theoretically and empirically. This induces a full posterior distribution over numerical uncertainty in the location of the MAP estimate (i.e. ‘Bayes all the way down’). As such we can determine, on-line, the precise point in the computational pipeline when numerical uncertainty near the MAP estimate becomes acceptably small, and cease further computation. The FWBQ methodology was applied to one of the model selection tasks in [@Oates2014]. In Fig. \[fig:model posteriors\] (left) we display posterior model probabilities for each of the $m = 352$ candidate models, where a low number ($n = 10$) of samples was used for each integral. (For display clarity, only the first 50 models are shown.) In this low-$n$ regime, numerical error introduces a second level of uncertainty that we quantify by combining the FWBQ error models for all integrals in the computational pipeline; this is summarised by a box plot (rather than a single point) for each of the models (obtained by sampling; details in Appendix D). These box plots reveal that our estimated posterior model probabilities are completely dominated by numerical error. In contrast, when $n$ is increased through 50, 100 and 200 (Fig. \[fig:model posteriors\], right and Fig.
S2), the uncertainty due to numerical error becomes negligible. At $n = 200$ we can conclude that model $26$ is the true MAP estimate and further computations can be halted. Correctness of this result was confirmed using the more computationally intensive methods in [@Oates2014]. In Appendix D we compared the relative performance of FWBQ, FWLSBQ and SBQ on this problem. Fig. S1 shows that the BQ weights reduced the MMD by orders of magnitude relative to FW and FWLS and that SBQ converged more quickly than both FWBQ and FWLSBQ. Conclusions =========== This paper provides the first theoretical results for probabilistic integration, in the form of posterior contraction rates for FWBQ and FWLSBQ. This is an important step in the probabilistic numerics research programme [@Hennig2015ProbNum] as it establishes a theoretical justification for using the posterior distribution as a model for the numerical integration error (which was previously assumed [@Rasmussen2003; @Gunter2014; @Oates2015; @Osborne2012; @Sarkka2015 e.g.]). The practical advantages conferred by a fully probabilistic error model were demonstrated on a model selection problem from proteomics, where sensitivity of an evaluation of the MAP estimate was modelled in terms of the error arising from repeated numerical integration. The strengths and weaknesses of BQ (notably, including scalability in the dimension $d$ of $\mathcal{X}$) are well-known and are inherited by our FWBQ methodology. We do not review these here but refer the reader to [@OHagan1991] for an extended discussion. Convergence, in the classical sense, was proven here to occur exponentially quickly for FWLSBQ, which partially explains the excellent performance of BQ and related methods seen in applications [@Gunter2014; @Osborne2012], as well as resolving an open conjecture. As a bonus, the hybrid quadrature rules that we developed turned out to converge much faster in simulations than those in [@Bach2012], which originally motivated our work. 
A key open problem for kernel methods in probabilistic numerics is to establish protocols for the practical elicitation of kernel hyper-parameters. This is important as hyper-parameters directly affect the scale of the posterior over numerical error that we ultimately aim to interpret. Note that this problem applies equally to BQ, as well as related quadrature methods [@Bach2012; @Rasmussen2003; @Gunter2014; @Oates2015] and more generally in probabilistic numerics [@Schober2014]. Previous work, such as [@Hamrick2013mental], optimised hyper-parameters on a per-application basis. Our ongoing research seeks automatic and general methods for hyper-parameter elicitation that provide good frequentist coverage properties for posterior credible intervals, but we reserve the details for a future publication. ### Acknowledgments {#acknowledgments .unnumbered} The authors are grateful for discussions with Simon Lacoste-Julien, Simo S[ä]{}rkk[ä]{}, Arno Solin, Dino Sejdinovic, Tom Gunter and Mathias Cronj[ä]{}ger. FXB was supported by EPSRC \[EP/L016710/1\]. CJO was supported by EPSRC \[EP/D002060/1\]. MG was supported by EPSRC \[EP/J016934/1\], an EPSRC Established Career Fellowship, the EU grant \[EU/259348\] and a Royal Society Wolfson Research Merit Award. Supplementary Material {#supplementary-material .unnumbered} ====================== Appendix A: Details for the FWBQ and FWLSBQ Algorithms {#appendix-a-details-for-the-fwbq-and-fwlsbq-algorithms .unnumbered} ------------------------------------------------------ A high-level pseudo-code description for the Frank-Wolfe Bayesian Quadrature (FWBQ) algorithm is provided below. [*Inputs:*]{} function $f$, reproducing kernel $k$, initial point $x_0 \in \mathcal{X}$. [*Step 1:*]{} Compute design points $\big\{x_i^{\text{FW}} \big\}_{i=1}^n$ using the FW algorithm (Alg. 1). [*Step 2:*]{} Compute associated weights $\big\{w_i^{\text{BQ}}\big\}_{i=1}^n$ using BQ (Eqn. 4). [*Step 3:*]{} Compute the posterior mean $\hat{p}_{\text{FWBQ}}[f]$, i.e.
the quadrature rule with $\big\{ x_i^{\text{FW}}, w_i^{\text{BQ}} \big\}_{i=1}^n$. Then compute the posterior variance $v_{\text{BQ}}\big(\{x_i^{\text{FW}}\}_{i=1}^n \big)$ using BQ (Eqn. 5) and return the full posterior $\mathcal{N} \big(\hat{p}_{\text{FWBQ}},v_{\text{BQ}}(\{x_i^{\text{FW}}\}_{i=1}^n)\big)$ for the integral $p[f]$. Frank-Wolfe Line-Search Bayesian Quadrature (FWLSBQ) is obtained simply by substituting the Frank-Wolfe algorithm with the Frank-Wolfe Line-Search algorithm. In this appendix, we derive all of the expressions necessary to implement both the FW and FWLS algorithms (for quadrature) in practice. All of the other steps can be derived from the relevant equations as highlighted in Algorithm \[alg:FWBQalgorithm\] above. The FW/FWLS algorithms are both initialised by the user choosing a design point $x_1^{\text{FW}}$. This can be done either at random or by choosing a location which is known to have high probability mass under $p(x)$. The first approximation to $\mu_p$ is therefore given by $g_1 = k(\cdot,x_1^{\text{FW}})$. The algorithm then loops over the next three steps to obtain new design points $\{x_i^{\text{FW}}\}_{i=2}^n$: [*Step 1) Obtaining the new Frank-Wolfe design point $x_{i}^{\text{FW}}$.*]{} At iteration $i$, this step consists of choosing the new point $x_i^{\text{FW}}$. Let $\{w_l^{(i-1)}\}_{l=1}^{i-1}$ denote the Frank-Wolfe weights assigned to each of the previous design points $\{x_l^{\text{FW}}\}_{l=1}^{i-1}$. The choice of new design point is made by computing the derivative of the objective function $J$ at $g_{i-1}$ and finding the point $x^*$ which minimises the inner product: $${\arg\min}_{g \in \mathcal{G}} \big\langle g , (DJ)(g_{i-1}) \big\rangle_{\times}$$ To do so, we need to obtain an equivalent expression for this minimisation of the linearisation of $J$ (denoted $DJ$) in terms of kernel values and evaluations of the mean element $\mu_p$.
Since minimisation of a linear function can be restricted to extreme points of the domain, we have that $${\arg\min}_{g \in \mathcal{G}} \big\langle g , (DJ)(g_{i-1}) \big\rangle_{\times} \; = \; {\arg\min}_{x \in \mathcal{X}} \big\langle \Phi(x) , (DJ)(g_{i-1}) \big\rangle_{\times}.$$ Then using the definition of $J$ we have: $${\arg\min}_{x \in \mathcal{X}} \big\langle \Phi(x) , (DJ)(g_{i-1}) \big\rangle_{\times} \; = \; {\arg\min}_{x \in \mathcal{X}} \big\langle \Phi(x) , g_{i-1} - \mu_p \big\rangle_{\mathcal{H}} ,$$ where $$\begin{split} \big\langle \Phi(x) , g_{i-1} - \mu_p \big\rangle_{\mathcal{H}} \quad & = \quad \Big\langle \Phi(x), \sum_{l=1}^{i-1} w_l^{(i-1)} \Phi(x_l) - \mu_p \Big\rangle_{\mathcal{H}} \\ & = \quad \sum_{l=1}^{i-1} w_l^{(i-1)} \big\langle \Phi(x) , \Phi(x_l) \big\rangle_{\mathcal{H}} - \big\langle \Phi(x) , \mu_p \big\rangle_{\mathcal{H}} \\ & = \quad \sum_{l=1}^{i-1} w_l^{(i-1)} k( x,x_l) - \mu_p(x). \end{split}$$ Our new design point $x_i^{\text{FW}}$ is therefore the point $x^*$ which minimises this expression. Note that this objective may not be convex and may require us to make use of approximate methods to find the minimum $x^*$. To do so, we sample $M$ points (where $M$ is large) independently from the distribution $p$ and pick the sample which minimises the expression above. From [@Lacoste-Julien2015] this introduces an additive error term of size $\mathcal{O}(M^{-1/4})$, which does not impact our convergence analysis provided that $M(n) \rightarrow \infty$ sufficiently quickly. In all experiments we took $M$ between $10,000$ and $50,000$ so that this error would be negligible. It is important to note that sampling from $p(x)$ is unlikely to be the best way to optimise this expression. One may, for example, be better off using another optimisation method which does not require convexity (for example, Bayesian optimisation).
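This sampling-based selection step can be sketched as follows (illustrative code of our own; `k`, `mu_p` and `sample_p` are assumed to be callables evaluating the Gram matrix, the mean element, and i.i.d. draws from $p$ respectively):

```python
import numpy as np

def next_fw_point(design, weights, k, mu_p, sample_p, M=10_000, rng=None):
    """Approximate argmin_x sum_l w_l k(x, x_l) - mu_p(x) by searching
    over M i.i.d. candidate draws from p, which adds an O(M^{-1/4})
    term to the MMD."""
    rng = np.random.default_rng() if rng is None else rng
    cand = sample_p(M, rng)                # (M, d) candidate points
    score = -mu_p(cand)                    # the -mu_p(x) term
    if len(design) > 0:                    # add sum_l w_l k(x, x_l)
        score = score + k(cand, np.asarray(design)) @ np.asarray(weights)
    return cand[np.argmin(score)]
```

As a usage example with $p = \mathcal{N}(0,1)$ in 1D and a unit-lengthscale EQ kernel, the mean element is $\mu_p(x) = \exp(-x^2/4)/\sqrt{2}$ and the first selected point lies close to the mode of $p$.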
However, we have used sampling because the result from [@Lacoste-Julien2015] discussed above provides a theoretical upper bound on the error introduced. [*Step 2) Computing the Step-Sizes and Weights for the Frank-Wolfe and Frank-Wolfe Line-Search Algorithms.*]{} The weights $\{w_l^{(i)}\}_{l=1}^i$ assigned by the FW/FWLS algorithms to each of the design points are obtained using the equation: $$w_l^{(i)}= \rho_{l-1} \prod_{j=l+1}^{i} \big( 1 - \rho_{j-1} \big)$$ Clearly, this expression depends on the choice of step-sizes $\{\rho_l\}_{l=1}^i$. In the case of the standard Frank-Wolfe algorithm, this step-size sequence is an input to the algorithm and so computing the weights is straightforward. However, in the case of the Frank-Wolfe Line-Search algorithm, the step-size is optimised at each iteration so that $g_i$ minimises $J$. In the case of computing integrals, this optimisation step can actually be performed analytically. The analytic expression will be given in terms of kernel values and evaluations of the mean element. First, from the definition of $J$ $$\begin{split} J \big( (1-\rho) g_{i-1} + \rho \Phi(x_{i}) \big) \quad & = \quad \frac{1}{2} \big\langle (1-\rho) g_{i-1} + \rho \Phi(x_{i}) - \mu_p , (1-\rho) g_{i-1} + \rho \Phi(x_{i}) - \mu_p \big\rangle_{\mathcal{H}} \\ & = \quad \frac{1}{2} \Big[ (1- \rho)^2 \big\langle g_{i-1},g_{i-1} \big\rangle_{\mathcal{H}} + 2 (1 - \rho) \rho \big\langle g_{i-1}, \Phi(x_{i}) \big\rangle_{\mathcal{H}} \\ & \qquad + \rho^2 \big\langle \Phi(x_{i}), \Phi(x_{i}) \big\rangle_{\mathcal{H}} - 2 (1 - \rho) \big\langle g_{i-1}, \mu_p \big\rangle_{\mathcal{H}} \\ & \qquad - 2\rho \big\langle \Phi(x_{i}), \mu_p \big\rangle_{\mathcal{H}} + \big\langle \mu_p,\mu_p \big\rangle_{\mathcal{H}} \Big].
\end{split}$$ Taking the derivative of this expression with respect to $\rho$, we get: $$\begin{split} \frac{\partial J \big( (1-\rho) g_{i-1} + \rho \Phi(x_{i}) \big) }{\partial \rho} \quad & = \quad \frac{1}{2} \Big[ - 2(1- \rho) \big\langle g_{i-1} , g_{i-1} \big\rangle_{\mathcal{H}} + 2 (1 - 2 \rho) \big\langle g_{i-1}, \Phi(x_{i})\big\rangle_{\mathcal{H}} \\ & \qquad + 2 \rho \big\langle \Phi(x_{i}), \Phi(x_{i}) \big\rangle_{\mathcal{H}} + 2 \big\langle g_{i-1} , \mu_p \big\rangle_{\mathcal{H}} - 2 \big\langle \Phi(x_{i}), \mu_p \big\rangle_{\mathcal{H}} \Big] \\ & = \quad \rho \Big[ \big\langle g_{i-1}, g_{i-1} \big\rangle_{\mathcal{H}} - 2 \big\langle g_{i-1} , \Phi(x_{i}) \big\rangle_{\mathcal{H}} + \big\langle \Phi(x_{i}) , \Phi(x_{i}) \big\rangle_{\mathcal{H}} \Big] - \big\langle g_{i-1} - \Phi(x_{i}) , g_{i-1} - \mu_p \big\rangle_{\mathcal{H}} \\ & = \quad \rho \big\| g_{i-1} - \Phi(x_{i}) \big\|^2_{\mathcal{H}} - \big\langle g_{i-1} - \Phi(x_{i}) , g_{i-1} - \mu_p \big\rangle_{\mathcal{H}} .\\ \end{split}$$ Setting this derivative to zero gives us the following optimum: $$\rho^* \quad = \quad \frac{\Big\langle g_{i-1} - \mu_p , g_{i-1} - \Phi(x_{i}) \Big\rangle_{\mathcal{H}}}{\Big\| g_{i-1} - \Phi(x_{i}) \Big\|^2_{\mathcal{H}} }.$$ Clearly, differentiating a second time with respect to $\rho$ gives $\| g_{i-1} - \Phi(x_{i}) \|_{\mathcal{H}}^2$, which is non-negative and so $\rho^*$ is a minimum. One can show using geometrical arguments about the marginal polytope $\mathcal{M}$ that $\rho^*$ will be in $[0,1]$ [@Jaggi2013]. The numerator of this line-search expression is $$\begin{split} \Big\langle g_{i-1} - \mu_p, g_{i-1} - \Phi(x_{i})\Big\rangle_{\mathcal{H}} \quad & = \quad \big\langle g_{i-1}, g_{i-1} \big\rangle_{\mathcal{H}} - \big\langle \mu_p, g_{i-1} \big\rangle_{\mathcal{H}} \\ & \qquad - \sum_{l=1}^{i-1} w_l^{(i-1)} k(x_{l},x_{i}) + \mu_p(x_{i}) \\ & = \quad \sum_{l=1}^{i-1} \sum_{m=1}^{i-1} w_l^{(i-1)} w_m^{(i-1)} k(x_l,x_m) \\ & \qquad - \sum_{l=1}^{i-1} w_l^{(i-1)} \Big[ k(x_l,x_{i}) + \mu_p(x_l) \Big] + \mu_p(x_{i}).
\\ \end{split}$$ Similarly the denominator is $$\begin{split} \big\| g_{i-1} - \Phi(x_{i})\big\|^2_{\mathcal{H}} \quad & = \quad \big\langle g_{i-1} - \Phi(x_{i}), g_{i-1} - \Phi(x_{i}) \big\rangle_{\mathcal{H}} \\ & = \quad \big\langle g_{i-1}, g_{i-1} \big\rangle_{\mathcal{H}} - 2 \big\langle g_{i-1} , \Phi(x_{i}) \big\rangle_{\mathcal{H}} + \big\langle \Phi(x_{i}), \Phi(x_{i}) \big\rangle_{\mathcal{H}} \\ & = \quad \sum_{l=1}^{i-1} \sum_{m=1}^{i-1} w_l^{(i-1)} w_m^{(i-1)} k(x_l,x_m) - 2 \sum_{l=1}^{i-1} w_l^{(i-1)} k(x_l,x_{i}) + k(x_{i},x_{i}).\\ \end{split}$$ Clearly all expressions provided here can be vectorised for efficient computational implementation. [*Step 3) Computing a new approximation of the mean element.*]{} The final step consists of updating the approximation of the mean element, which can be done directly by setting: $$g_{i} = (1 - \rho_i) g_{i-1} + \rho_i \bar{g}_i$$ Appendix B: Proofs of Theorems and Corollaries {#appendix-b-proofs-of-theorems-and-corollaries .unnumbered} ---------------------------------------------- The posterior mean $\hat{p}_{\emph{FWBQ}}[f]$ converges to the true integral $p[f]$ at the following rates: $$\Big|p[f] - \hat{p}_{\emph{FWBQ}}[f]\Big| \leq \text{MMD}\big(\{x_i,w_i\}_{i=1}^n\big) \leq \left\{ \begin{array}{cl} \frac{2 D^2}{R} n^{-1} & \text{for FWBQ}\\ \sqrt{2}D \exp(-\frac{R^2}{2 D^2} n) & \text{for FWLSBQ} \end{array} \right.$$ where the FWBQ uses step-size $\rho_i = 1/(i+1)$, $D \in (0,\infty)$ is the diameter of the marginal polytope $\mathcal{M}$ and $R \in (0,\infty)$ gives the radius of the smallest ball of center $\mu_p$ included in $\mathcal{M}$. The posterior mean in BQ is a Bayes estimator and so the MMD takes a minimax form [@Huszar2012]. 
In particular, the BQ weights perform no worse than the FW weights: $$\text{MMD} \Big( \big\{x_i^{\text{FW}} ,w_i^{\text{BQ}} \big\}_{i=1}^n \Big) \; = \; \inf_{\textrm{w} \in \mathbb{R}^n} \text{MMD} \Big( \big\{x_i^{\text{FW}},w_i \big\}_{i=1}^n \Big) \; \leq \; \text{MMD} \Big( \big\{x_i^{\text{FW}},w_i^{\text{FW}} \big\}_{i=1}^n \Big). \label{eq:minimax}$$ Now, the values attained by the objective function $J$ along the path $\{g_i\}_{i=1}^n$ determined by the FW(/FWLS) algorithm can be expressed in terms of the MMD as follows: $$J(g_n) = \frac{1}{2} \big\|\hat{\mu}_{\text{FW}} - \mu_p \big\|^2_{\mathcal{H}} = \frac{1}{2} \text{MMD}^2\Big( \big\{x_i^{\text{FW}},w_i^{\text{FW}} \big\}_{i=1}^n \Big). \label{FWobjvals}$$ Combining \[eq:minimax\] and \[FWobjvals\] gives $$\Big|p[f] - \hat{p}_{\text{FWBQ}}[f] \Big| \; \leq \; \text{MMD}\Big( \big\{x_i^{\text{FW}},w_i^{\text{BQ}} \big\}_{i=1}^n \Big) \big\|f \big\|_{\mathcal{H}} \; \leq \; 2^{1/2} J^{1/2}(g_n),$$ since $\|f\|_{\mathcal{H}} \leq 1$. To complete the proof we leverage recent analysis of the FW algorithm with step-size $\rho_i = 1/(i+1)$ and of the FWLS algorithm. Specifically, from [@Bach2012 Prop. 1] we have that: $$J(g_n) \leq \left\{ \begin{array}{cl} \frac{2D^4}{R^2} n^{-2} & \text{for FW with step size } \rho_i = 1/(i+1) \\ D^2 \exp(-R^2 n/D^2 ) & \text{for FWLS} \end{array} \right.$$ where $D$ is the diameter of the marginal polytope $\mathcal{M}$ and $R$ is the radius of the smallest ball centered at $\mu_p$ included in $\mathcal{M}$. Let $S \subseteq \mathbb{R}$ be an open neighbourhood of the true integral $p[f]$ and let $\gamma = \inf_{r \in S^c} | r - p[f]| >0$.
Then the posterior probability mass on $S^c = \mathbb{R} \setminus S$ vanishes at a rate: $$\emph{prob}(S^c) \leq \left\{ \begin{array}{cl} \frac{2\sqrt{2}D^2}{\sqrt{\pi}R \gamma} n^{-1} \exp \Big(- \frac{\gamma^2 R^2}{8 D^4} n^2 \Big) & \text{for FWBQ, } \rho_i = 1/(i+1) \\ \frac{2 D}{\sqrt{\pi} \gamma} \exp\Big( - \frac{R^2}{2 D^2} n - \frac{\gamma^2}{2\sqrt{2}D} \exp\big( \frac{R^2}{2D^2} n\big)\Big) & \text{for FWLSBQ} \end{array} \right.$$ where $D \in (0,\infty)$ is the diameter of the marginal polytope $\mathcal{M}$ and $R \in (0,\infty)$ gives the radius of the smallest ball of center $\mu_p$ included in $\mathcal{M}$. We will obtain the posterior contraction rates of interest using the bounds on the MMD provided in the proof of Theorem 1. Given an open neighbourhood $S \subseteq \mathbb{R}$ of $p[f]$, we have that the complement $S^c = \mathbb{R} \setminus S$ is closed in $\mathbb{R}$. We assume without loss of generality that $S^c \neq \emptyset$, since the posterior mass on $S^c$ is trivially zero when $S^c = \emptyset$. Since $S^c$ is closed, the distance $\gamma = \inf_{r \in S^c} \bigl|r - p[f]\bigr| > 0$ is strictly positive. Denote the posterior distribution by $\mathcal{N}(m_n,\sigma_n^2)$ where we have that $m_n {\coloneqq}\hat{p}_{\text{FWBQ}}[f]$ where $\hat{p}_{\text{FWBQ}} = \sum_{i=1}^n w_i^{\text{BQ}} \delta(x_i^{\text{FW}})$ and $\sigma_n {\coloneqq}\text{MMD}(\{x_i^{\text{FW}},w_i^{\text{BQ}}\}_{i=1}^n)$. Directly from the supremum definition of the MMD we have: $$\Big|p\big[f\big] - m_n \Big| \leq \sigma_n \big\|f\big\|_{\mathcal{H}}. \label{eq:CS}$$ Now the posterior probability mass on $S^c$ is given by $$M_n = \int_{S^c} \phi(r|m_n,\sigma_n) \mathrm{d}r,$$ where $\phi(r|m_n,\sigma_n)$ is the p.d.f. of the posterior normal distribution. 
By the definition of $\gamma$ we get the upper bound: $$\begin{aligned} M_n & \leq & \int_{-\infty}^{p[f] - \gamma} \phi(r|m_n,\sigma_n) \mathrm{d}r + \int_{p[f] + \gamma}^\infty \phi(r|m_n,\sigma_n) \mathrm{d}r \\ & = & 1 + \Phi\Big(\underbrace{\frac{p[f] - m_n}{\sigma_n}}_{(*)} - \frac{\gamma}{\sigma_n}\Big) - \Phi\Big(\underbrace{\frac{p[f] - m_n}{\sigma_n}}_{(*)} + \frac{\gamma}{\sigma_n}\Big).\end{aligned}$$ From \[eq:CS\] we have that the terms $(*)$ are bounded by $\|f\|_{\mathcal{H}} \leq 1 <\infty$ as $\sigma_n \rightarrow 0$, so that asymptotically we have: $$\begin{aligned} M_n & \lesssim & 1 + \Phi\big(- \gamma / \sigma_n\big) - \Phi\big(\gamma / \sigma_n \big) \\ & = & \text{erfc}\big(\gamma/\sqrt{2}\sigma_n\big) \sim \big(\sqrt{2}\sigma_n / \sqrt{\pi} \gamma \big) \exp\big(- \gamma^2 / 2 \sigma_n^2 \big). \label{eq:final}\end{aligned}$$ Finally we may substitute the asymptotic results derived in the proof of Theorem 1 for the MMD $\sigma_n$ into \[eq:final\] to complete the proof. The consistency and contraction rates obtained for FWLSBQ apply also to OBQ. By definition, OBQ chooses samples that globally minimise the MMD and we can hence bound this quantity from above by the MMD of FWLSBQ: $$\text{MMD}\Big(\big\{x_i^{\text{OBQ}},w_i^{\text{BQ}}\big\}_{i=1}^n \Big) = \inf_{\{x_i\}_{i=1}^n \subset \mathcal{X}} \text{MMD}\Big(\big\{x_i,w_i^{\text{BQ}}\big\}_{i=1}^n \Big) \leq \text{MMD}\Big( \big\{x_i^{\text{FW}},w_i^{\text{BQ}} \big\}_{i=1}^n \Big).$$ Consistency and contraction follow from inserting this inequality into the above proofs.
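The minimax property \[eq:minimax\] underpinning these proofs can be checked numerically in a toy setting where $p$ is uniform on a finite grid, so that $\mu_p$ and the MMD are exactly computable (illustrative code of our own, not part of the proofs):

```python
import numpy as np

# p uniform on a grid => mu_p(x) = mean_j k(x, t_j) and ||mu_p||^2 are exact.
rng = np.random.default_rng(0)
grid = np.linspace(-3.0, 3.0, 200)
k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

design = rng.choice(grid, size=5, replace=False)   # fixed design points
K = k(design, design)                              # Gram matrix
z = k(design, grid).mean(axis=1)                   # z_l = mu_p(x_l)
norm_mu = k(grid, grid).mean()                     # ||mu_p||_H^2

def mmd2(w):
    """MMD^2 for weights w at the fixed design points."""
    return w @ K @ w - 2.0 * w @ z + norm_mu

w_bq = np.linalg.solve(K, z)       # BQ weights minimise the MMD
w_fw = np.full(5, 0.2)             # uniform (FW-style) weights
assert mmd2(w_bq) <= mmd2(w_fw) + 1e-12
```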
Appendix C: Computing the Mean Element for the Simulation Study {#appendix-c-computing-the-mean-element-for-the-simulation-study .unnumbered} --------------------------------------------------------------- We compute an expression for $\mu_p(x) = \int_{- \infty}^{\infty} k(x,x') p(x') \mathrm{d}x'$ in the case where $k$ is an exponentiated-quadratic kernel with length scale hyper-parameter $\sigma$: $$k \big(x,x' \big) \; := \; \lambda^2 \exp \Big( \frac{- \sum_{i=1}^d (x_{i} - x_{i}')^2 }{2 \sigma^2 } \Big) \; = \; \lambda^2 (\sqrt{2 \pi} \sigma)^d \phi \big(x \big| x', \Sigma_{\sigma} \big),$$ where $\Sigma_\sigma$ is a d-dimensional diagonal matrix with entries $\sigma^2$, and where $p(x)$ is a mixture of d-dimensional Gaussian distributions: $$p(x) \quad = \quad \sum_{l=1}^L \rho_l \hspace{1mm} \phi \big(x\big|\mu_l,\Sigma_l \big).$$ (Note that, in this section only, $x_i$ denotes the $i$th component of the vector $x$.) Using properties of Gaussian distributions (see Appendix A.2 of [@Rasmussen2006]) we obtain $$\begin{split} \mu_p(x) \quad & = \quad \int_{- \infty}^{\infty} k(x,x') p(x') \mathrm{d}x' \\ & = \quad \int_{- \infty}^{\infty} \lambda^2 (\sqrt{2 \pi} \sigma)^d \phi\big(x' \big| x, \Sigma_{\sigma} \big) \times \Big( \sum_{l=1}^L \rho_l \hspace{1mm} \phi\big(x'\big|\mu_l,\Sigma_l \big)\Big) \mathrm{d}x' \\ & = \quad \lambda^2 (\sqrt{2 \pi} \sigma)^d \sum_{l=1}^L \rho_l \int_{- \infty}^{\infty} \phi\big(x' \big| x, \Sigma_{\sigma} \big) \times \phi\big(x'\big|\mu_l,\Sigma_l \big) \mathrm{d}x' \\ & = \quad \lambda^2 (\sqrt{2 \pi} \sigma)^d \sum_{l=1}^L \rho_l \int_{- \infty}^{\infty} a_l^{-1} \phi\big(x'\big|c_l,C_l \big) \mathrm{d}x' \\ & = \quad \lambda^2 (\sqrt{2 \pi} \sigma)^d \sum_{l=1}^L \rho_l a_l^{-1} .\\ \end{split}$$ where we have: $$a_l^{-1} \; = \; (2 \pi)^{-\frac{d}{2}} \big| \Sigma_{\sigma} + \Sigma_{l} \big|^{-\frac{1}{2}} \exp \big( -\frac{1}{2} \big(x - \mu_l \big)^T \big( \Sigma_{\sigma} + \Sigma_{l} \big)^{-1} \big(x - \mu_l \big) 
\big).$$ This last expression is in fact itself a Gaussian distribution with probability density function $\phi(x|\mu_l, \Sigma_l + \Sigma_\sigma)$ and we hence obtain: $$\mu_p(x) \quad := \quad \lambda^2 \big(\sqrt{2 \pi} \sigma \big)^d \sum_{l=1}^L \rho_l \text{ } \phi\big(x|\mu_l, \Sigma_l + \Sigma_\sigma\big).$$ Finally, we once again use properties of Gaussians to obtain $$\begin{split} \int_{- \infty}^{\infty} \mu_p(x) p(x) \mathrm{d}x \quad & = \quad \int_{- \infty}^{\infty} \Big[ \lambda^2 \big(\sqrt{2 \pi} \sigma \big)^d \sum_{l=1}^L \rho_l \text{ } \phi\big(x|\mu_l, \Sigma_l + \Sigma_\sigma\big) \Big] \\ & \quad \times \Big[ \sum_{m=1}^L \rho_m \hspace{1mm} \phi\big(x\big|\mu_m,\Sigma_m \big) \Big] \mathrm{d}x \\ & = \quad \lambda^2 \big(\sqrt{2 \pi} \sigma \big)^d \sum_{l=1}^L \sum_{m=1}^L \rho_l \rho_m \int_{- \infty}^{\infty} \phi\big(x|\mu_l, \Sigma_l + \Sigma_\sigma\big) \phi\big(x\big|\mu_m,\Sigma_m \big) \mathrm{d}x \\ & = \quad \lambda^2 \big(\sqrt{2 \pi} \sigma \big)^d \sum_{l=1}^L \sum_{m=1}^L \rho_l \rho_m a_{lm}^{-1} \\ & = \quad \lambda^2 \big(\sqrt{2 \pi} \sigma \big)^d \sum_{l=1}^L \sum_{m=1}^L \rho_l \rho_m \phi\big(\mu_l|\mu_m,\Sigma_l+\Sigma_m+\Sigma_{\sigma} \big). \end{split}$$ Other combinations of kernel $k$ and density $p$ that give rise to an analytic mean element can be found in the references of [@Bach2015]. Appendix D: Details of the Application to Proteomics Data {#appendix-d-details-of-the-application-to-proteomics-data .unnumbered} --------------------------------------------------------- [*Description of the Model Choice Problem*]{} The ‘CheMA’ methodology described in [@Oates2014] contains several elements that we do not attempt to reproduce in full here; in particular we do not attempt to provide a detailed motivation for the mathematical forms presented below, as this requires elements from molecular chemistry. 
For our present purposes it will be sufficient to define the statistical models $\{M_i\}_{i=1}^m$ and to clearly specify the integration problems that are to be solved. We refer the reader to [@Oates2014] and the accompanying supplementary materials for a full biological background. Denote by $\mathcal{D}$ the dataset containing normalised measured expression levels $y_S(t_j)$ and $y_S^*(t_j)$ for, respectively, the unphosphorylated and phosphorylated forms of a protein of interest (‘substrate’) in a longitudinal experiment at time $t_j$. In addition $\mathcal{D}$ contains normalised measured expression levels $y_{E_i}^*(t_j)$ for a set of possible regulator kinases (‘enzymes’, here phosphorylated proteins) that we denote by $\{E_i\}$. An important scientific goal is to identify the roles of enzymes (or ‘kinases’) in protein signaling; in this case the problem takes the form of variable selection and we are interested to discover which enzymes must be included in a model for regulation of the substrate $S$. Specifically, a candidate model $M_i$ specifies which enzymes in the set $\{E_i\}$ are regulators of the substrate $S$, for example $M_3 = \{E_2,E_4\}$. Following [@Oates2014] we consider models containing at most two enzymes, as well as the model containing no enzymes. Given a dataset $\mathcal{D}$ and model $M_i$, we can write down a likelihood function as follows: $$\begin{aligned} L(\theta_i,M_i) = \prod_{n=1}^N \phi\left( \frac{y_S^*(t_{n+1}) - y_S^*(t_n)}{t_{n+1} - t_n} \left| \frac{-V_0 y_S^*(t_n)}{y_S^*(t_n) + K_0} + \sum_{E_j \in M_i} \frac{V_j y_{E_j}^*(t_n) y_S(t_n)}{y_S(t_n) + K_j} , \sigma_{\text{err}}^2 \right. \right). \label{chemalike}\end{aligned}$$ Here the model parameters are $\theta_i = \{\mathrm{K} , \mathrm{V}, \sigma_{\text{err}} \}$, where $(\mathrm{K})_j = K_j$, $(\mathrm{V})_j = V_j$, $\phi$ is the normal p.d.f. and the mathematical forms arise from the Michaelis-Menten theory of enzyme kinetics. 
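To make the structure of the likelihood in Eqn. \[chemalike\] concrete, the sketch below evaluates the corresponding log-likelihood for a toy dataset. All numerical values, and the function name `log_likelihood`, are hypothetical illustration choices, not part of the original CheMA code:

```python
import math

def log_likelihood(t, yS, ySp, yE, K, V, sigma_err):
    """Gradient-matching log-likelihood of Eqn. [chemalike] (a sketch).

    t: measurement times; yS, ySp: substrate expression (unphosphorylated /
    phosphorylated); yE: one phosphorylated time course per enzyme in M_i;
    K = [K_0, K_1, ...], V = [V_0, V_1, ...] (index 0 = dephosphorylation term).
    """
    ll = 0.0
    for n in range(len(t) - 1):
        grad = (ySp[n + 1] - ySp[n]) / (t[n + 1] - t[n])  # finite-difference "response"
        mean = -V[0] * ySp[n] / (ySp[n] + K[0])           # background dephosphorylation
        for j, e in enumerate(yE, start=1):               # Michaelis-Menten enzyme terms
            mean += V[j] * e[n] * yS[n] / (yS[n] + K[j])
        ll += (-0.5 * math.log(2 * math.pi * sigma_err ** 2)
               - 0.5 * (grad - mean) ** 2 / sigma_err ** 2)
    return ll

# toy (hypothetical) data: 4 time points, one candidate enzyme
t = [0.0, 1.0, 2.0, 3.0]
yS = [1.0, 0.8, 0.7, 0.65]
ySp = [0.1, 0.3, 0.4, 0.45]
yE = [[0.5, 0.6, 0.55, 0.5]]
ll_value = log_likelihood(t, yS, ySp, yE, K=[1.0, 1.0], V=[0.2, 0.5], sigma_err=0.1)
print(ll_value)
```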
The $V_j$ are known as ‘maximum reaction rates’ and the $K_j$ are known as ‘Michaelis-Menten parameters’. This is classical chemical notation, not to be confused with the kernel matrix from the main text. The final parameter $\sigma_{\text{err}}$ defines the error magnitude for this ‘approximate gradient-matching’ statistical model. The prior specification proposed in [@Oates2014] and followed here is $$\begin{aligned} \mathrm{K} & \sim & \phi_T \big(K \big| 1, 2^{-1} \mathrm{I} \big), \\ \sigma_{\text{err}} | \mathrm{K} & \sim & p(\sigma_{\text{err}}) \propto 1/\sigma_{\text{err}}, \\ \mathrm{V} | \mathrm{K},\sigma_{\text{err}} & \sim & \phi_T \big(V \big| 1, N \sigma_{\text{err}}^2 \big(\mathrm{X}(\mathrm{K})^T\mathrm{X}(\mathrm{K})\big)^{-1} \big),\end{aligned}$$ where $\phi_T$ denotes a Gaussian distribution, truncated so that its support is $[0,\infty)$ (since kinetic parameters cannot be negative). Here $\mathrm{X}(\mathrm{K})$ is the design matrix associated with the linear regression that is obtained by treating the $\mathrm{K}$ as known constants; we refer to [@Oates2014] for further details. Due to its careful design, the likelihood in Eqn. \[chemalike\] is partially conjugate, so the following integral can be evaluated in closed form: $$L(\mathrm{K},M_i) = \int_{0}^{\infty} \int_{0}^{\infty} L(\theta_i,M_i) p(\mathrm{V},\sigma_{\text{err}} | \mathrm{K}) \mathrm{d}\mathrm{V} \mathrm{d}\sigma_{\text{err}}.$$ The numerical challenge is then to compute the integral $$L(M_i) = \int_{0}^{\infty} L(\mathrm{K},M_i) p(\mathrm{K}) \mathrm{d}\mathrm{K},$$ for each candidate model $M_i$. Depending on the number of enzymes in model $M_i$, this will either be a 1-, 2- or 3-dimensional numerical integral. 
Whilst such integrals are not challenging to compute on a per-individual basis, the nature of the application means that the values $L(M_i)$ will be similar for many candidate models and, when the number of models is large, this demands either a very precise calculation per model or a careful quantification of the impact of numerical error on the subsequent inferences (i.e. determining the MAP estimate). It is this particular issue that motivates the use of probabilistic numerical methods. [*Description of the Computational Problem*]{} We need to compute integrals of functions with domain $\mathcal{X} = [0,\infty)^d$ where $d \in \{1,2,3\}$ and the sampling distribution $p(x)$ takes the form $\phi_T(x | 1,2^{-1} \mathrm{I})$. The test function $f(x)$ corresponds to $L(\mathrm{K},M_i)$ with $x = \mathrm{K}$. This is given explicitly by the $g$-prior formulae as: $$\begin{aligned} L(\mathrm{K},M_i) & = & \frac{1}{(2 \pi)^{N/2}} \frac{1}{(N+1)^{d/2}} \Gamma\left(\frac{N}{2}\right) b_N^{-\frac{N}{2}}, \\ b_N & = & \frac{1}{2} \left( \mathrm{Y}^T\mathrm{Y} + \frac{1}{N} 1^T \mathrm{X}^T\mathrm{X} 1 - \mathrm{V}_N^T \mathrm{\Omega}_N \mathrm{V}_N \right), \\ \mathrm{V}_N & = & \mathrm{\Omega}_N^{-1} \left( \frac{1}{N} \mathrm{X}^T\mathrm{X} 1 + \mathrm{X}^T\mathrm{Y} \right), \\ \mathrm{\Omega}_N & = & \left(1 + \frac{1}{N} \right) \mathrm{X}^T \mathrm{X}, \\ (\mathrm{Y})_n & = & \frac{y_S^*(t_{n+1}) - y_S^*(t_n)}{t_{n+1} - t_n}, \\\end{aligned}$$ where for clarity we have suppressed the dependence of $\mathrm{X}$ on $\mathrm{K}$. 
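The closed-form expressions above can be evaluated directly. The sketch below implements the $g$-prior formulae for a synthetic design matrix and response; the toy inputs stand in for $\mathrm{X}(\mathrm{K})$ and $\mathrm{Y}$ and are illustrative assumptions, not the original CheMA implementation:

```python
import math
import numpy as np

def g_prior_marginal(X, Y):
    """Evaluate L(K, M_i) for fixed K via the g-prior formulae (a sketch).

    X: the N x d design matrix X(K); Y: the length-N vector of
    finite-difference responses (Y)_n."""
    N, d = X.shape
    one = np.ones(d)
    Omega_N = (1.0 + 1.0 / N) * X.T @ X
    V_N = np.linalg.solve(Omega_N, (1.0 / N) * X.T @ X @ one + X.T @ Y)
    b_N = 0.5 * (Y @ Y + (1.0 / N) * one @ X.T @ X @ one - V_N @ Omega_N @ V_N)
    return ((2.0 * math.pi) ** (-N / 2.0) * (N + 1.0) ** (-d / 2.0)
            * math.gamma(N / 2.0) * b_N ** (-N / 2.0))

# toy (synthetic) inputs standing in for X(K) and Y
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 2))
Y = X @ np.array([0.5, -0.3]) + 0.1 * rng.normal(size=10)
L_val = g_prior_marginal(X, Y)
print(L_val)  # a positive scalar
```

Note that $b_N$ is the minimum of a sum of squares (data misfit plus prior penalty) and is therefore non-negative by construction.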
For the Frank-Wolfe Bayesian Quadrature algorithm, we require that the mean element $\mu_p$ is analytically tractable and for this reason we employed the exponentiated-quadratic kernel with length scale $\lambda$ and width scale $\sigma$ parameters: $$k(x,x') = \lambda^2 \exp\left(- \frac{\sum_{i=1}^d (x_i - x_i')^2}{2 \sigma^2}\right).$$ For simplicity we focussed on the single hyper-parameter pair $\lambda = \sigma = 1$, which produces: $$\begin{aligned} \mu_p(x) & = & \int_{0}^{\infty} k(x,x') p(x') \mathrm{d}x' \\ & = & \int_{0}^{\infty} \exp\left(-\sum_{i=1}^d (x_i - x_i')^2\right) \phi_T \big(x' \big|1,2^{-1}\mathrm{I} \big) \mathrm{d}x' \\ & = & 2^{-d/2} \big(1 + \text{erf}(1) \big)^{-d} \prod_{i=1}^d \exp\left(-\frac{(x_i-1)^2}{2}\right) \left(1 + \text{erf}\left(\frac{x_i + 1}{\sqrt{2}}\right)\right),\end{aligned}$$ where $\phi_T$ is the p.d.f. of the truncated Gaussian distribution introduced above and $\text{erf}$ is the error function. To compute the posterior variance of the numerical error we also require the quantity: $$\int_{0}^{\infty} \int_{0}^{\infty} k(x,x') p(x) p(x') \mathrm{d}x \mathrm{d}x' = \int_{0}^{\infty} \mu_p(x) p(x) \mathrm{d}x = \left\{ \begin{array}{cl} 0.629907... & \text{for } d = 1 \\ 0.396783... & \text{for } d = 2 \\ 0.249937... & \text{for } d = 3 \end{array} \right. ,$$ which we have simply evaluated numerically. We emphasise that principled approaches to hyper-parameter elicitation are an important open research problem that we aim to address in a future publication (see discussion in the main text). The values used here are scientifically reasonable and serve to illustrate key aspects of our methodology. FWBQ provides posterior distributions over the numerical uncertainty in each of our estimates for the marginal likelihoods $L(M_i)$. In order to propagate this uncertainty forward into a posterior distribution over posterior model probabilities (see Figs. 
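As a sanity check on the closed form above, the sketch below compares it against brute-force midpoint quadrature in the $d=1$ case, using the kernel exactly as it appears in the integral, i.e. $\exp(-(x-x')^2)$. The grid limits and the test point $x=0.7$ are arbitrary illustration choices:

```python
import math

def phi_T(x):
    # truncated N(1, 1/2) density on [0, inf); normaliser is (1 + erf(1)) / 2
    if x < 0:
        return 0.0
    return 2.0 / (1.0 + math.erf(1.0)) * math.exp(-(x - 1.0) ** 2) / math.sqrt(math.pi)

def mu_p(x):
    # closed-form mean element for d = 1 (the erf formula above)
    return (2 ** -0.5 / (1.0 + math.erf(1.0))
            * math.exp(-(x - 1.0) ** 2 / 2.0)
            * (1.0 + math.erf((x + 1.0) / math.sqrt(2.0))))

h, N = 1e-3, 8000  # midpoint rule on [0, 8]; tail mass beyond 8 is negligible
xs = [(i + 0.5) * h for i in range(N)]

# check the closed form against direct quadrature at x = 0.7
direct = h * sum(math.exp(-(0.7 - xp) ** 2) * phi_T(xp) for xp in xs)
print(abs(direct - mu_p(0.7)))  # tiny

# the initial variance term: integral of mu_p against p, for d = 1
I = h * sum(mu_p(x) * phi_T(x) for x in xs)
print(I)  # approximately 0.6299
```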
3 in the main text and \[fig:model posteriors2\] below), we simply sampled values $\hat{L}(M_i)$ from each of the posterior distributions for $L(M_i)$ and used these sampled values to construct posterior model probabilities $\hat{L}(M_i) / \sum_j \hat{L}(M_j)$. Repeating this procedure many times enables us to sample from the posterior distribution over the posterior model probabilities (i.e. two levels of Bayes’ theorem). This provides a principled quantification of the uncertainty due to numerical error in the output of our primary Bayesian analysis. [*Description of the Data*]{} The proteomic dataset $\mathcal{D}$ that we considered here was a subset of the larger dataset provided in [@Oates2014]. Specifically, the substrate $S$ was the well-studied 4E-binding protein 1 (4EBP1) and the enzymes $E_j$ consisted of a collection of key proteins that are thought to be connected with 4EBP1 regulation, or at least involved in similar regulatory processes within cellular signalling. Full details, including experimental protocols, data normalisation and the specific choice of measurement time points are provided in the supplementary materials associated with [@Oates2014]. For this particular problem, biological interest arises because the data-generating system was provided by breast cancer cell lines. As such, the textbook description of 4EBP1 regulation may not be valid and indeed it is thought that 4EBP1 dysregulation is a major contributing factor to these complex diseases (see [@Weinberg2006]). We do not elaborate further on the scientific rationale for model-based proteomics in this work. ![ Comparison of quadrature methods on the proteomics dataset. *Left:* Value of the MMD$^2$ for FW (black), FWLS (red), FWBQ (green), FWLSBQ (orange) and SBQ (blue). 
Once again, we see the clear improvement of using Bayesian Quadrature weights and we see that Sequential Bayesian Quadrature improves on Frank-Wolfe Bayesian Quadrature and Frank-Wolfe Line-Search Bayesian Quadrature. *Right:* Empirical distribution of weights. The dotted line represents the weights of the Frank-Wolfe algorithm with line search, which has all weights $w_i=1/n$. Note that the distribution of Bayesian Quadrature weights ranges from $-17.39$ to $13.75$ whereas all versions of Frank-Wolfe have weights limited to $[0,1]$ which must sum to $1$.[]{data-label="fig:proteinsignalling"}](MMD2protein4.pdf){width="49.00000%"} Appendix E: FWBQ algorithms with Random Fourier Features {#appendix-e-fwbq-algorithms-with-random-fourier-features .unnumbered} -------------------------------------------------------- In this section, we investigate the use of random Fourier features (introduced in [@Rahimi2007]) for the FWLS and FWLSBQ algorithms. An advantage of this type of approximation is that the cost of manipulating the Gram matrix, and in particular of inverting it, decreases from $\mathcal{O}(n^3)$ to $\mathcal{O}(nD^2)$ for a user-defined constant $D$ which controls the quality of the approximation. This could make Bayesian Quadrature more competitive with other integration methods such as MCMC or QMC. Furthermore, the kernels obtained using this method lead to finite-dimensional RKHS, which therefore satisfy the assumptions required for the theory in this paper to hold. This is the aspect we focus on here. In particular, we will show empirically that exponential convergence may be possible even when the RKHS is infinite-dimensional. We re-use the $20$-component mixture of Gaussians example with $d=2$ from our simulation studies, but instead use a random Fourier approximation of the exponentiated-quadratic (EQ) kernel $k(x,x'):= \lambda^2\exp\big(-\|x-x'\|_2^2/(2\sigma^2)\big)$ with $(\lambda,\sigma)=(1,0.8)$ and $M=10000$. 
Following Bochner’s theorem, we can always express translation invariant kernels in Fourier space: $$k(x,x') = \int_\mathcal{W} g(w) \exp\big(jw^T(x-x')\big)\mathrm{d}w = \mathbb{E}\Big[\exp\big(jw^Tx \big)\exp\big(-jw^Tx' \big)\Big]$$ where $w \sim g(w)$ for $g(w)$ being the Fourier transform of the kernel. One can then use a Monte Carlo approximation of the kernel’s Fourier expression with $D$ samples whenever $g$ is a p.d.f. Our approximated kernel will then lead to a $D$-dimensional RKHS and will be given by: $$k(x,x') \approx \frac{1}{D} \sum_{j=1}^D z_{w_j,b_j}(x)z_{w_j,b_j}(x') = \hat{k}_D(x,x')$$ where $z_{w_j,b_j}(x)=\sqrt{2}\cos(w_j^Tx+b_j)$ and $b_j$ is drawn uniformly from $[0,2\pi]$. Random Fourier feature approximations are unbiased and, in the specific case of a $d$-dimensional EQ kernel with $\lambda=1$, we have to sample from the following Fourier transform: $$g(w) = \Big(\frac{2\pi}{\sigma^2}\Big)^{-\frac{d}{2}} \exp \Big(-\frac{\sigma^2\|w\|_2^2}{2} \Big)$$ which is a $d$-dimensional Gaussian distribution with zero mean and covariance matrix with all diagonal elements equal to $(1/\sigma^2)$. The impact on the MMD from the use of random Fourier features to approximate the kernel for both the FWLS and FWLSBQ algorithms is demonstrated in Figure \[fig:RFF\_MMD\]. In this example, the quadrature rule uses the kernel with random features but the MMD is calculated using the original $\mathcal{H}$-norm. The reason for using this $\mathcal{H}$-norm is to have a unique measure of distance between points which can be compared. Clearly, we once again have that the rate of convergence of FWLSBQ is much faster than that of FWLS when using the exact kernel. The same phenomenon is observed for the method with a high number of random features ($D=5000$). This suggests that both the choice of design points and the calculation of the BQ weights are not strongly influenced by the approximation. 
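For concreteness, here is a minimal stdlib-only sketch of this random Fourier feature construction for the EQ kernel (with $\lambda=1$, $\sigma=0.8$, $d=2$; the specific evaluation points are arbitrary illustration choices):

```python
import math
import random

random.seed(0)
d, D, sigma = 2, 5000, 0.8

# frequencies w_j ~ N(0, (1/sigma^2) I) and phases b_j ~ Uniform[0, 2*pi]
W = [[random.gauss(0.0, 1.0 / sigma) for _ in range(d)] for _ in range(D)]
b = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

def z(x):
    # feature map: z_j(x) = sqrt(2/D) * cos(w_j . x + b_j)
    return [math.sqrt(2.0 / D) * math.cos(sum(wi * xi for wi, xi in zip(w, x)) + bj)
            for w, bj in zip(W, b)]

def k_exact(x, y, lam=1.0):
    # exact EQ kernel with lambda = 1, sigma = 0.8
    return lam ** 2 * math.exp(-sum((xi - yi) ** 2 for xi, yi in zip(x, y))
                               / (2 * sigma ** 2))

x, y = [0.3, -0.5], [1.1, 0.2]
k_hat = sum(zx * zy for zx, zy in zip(z(x), z(y)))
print(abs(k_hat - k_exact(x, y)))  # Monte Carlo error, O(1/sqrt(D))
```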
It is also interesting to notice that the rates of convergence are very close for the exact and $D=5000$ methods (at least when $n$ is small), potentially suggesting that exponential convergence is possible for the exact method. This is not so surprising in itself since using a Gaussian kernel represents a prior belief that the integrand of interest is very smooth, and we can therefore expect fast convergence of the method. However, when a smaller number of random features is used ($D=1000$), we observe very poor performance of the method, mainly because the weights are no longer well approximated. In summary, the experiments in this section suggest that the use of random features is a potential alternative for scaling up Bayesian Quadrature, but that one needs to be careful to use a sufficiently high number of features. The experiments also give hope that very similar convergence behaviour may hold for infinite-dimensional and finite-dimensional spaces. [^1]: A detailed discussion on probabilistic numerics and an extensive up-to-date bibliography can be found at <http://www.probabilistic-numerics.org>.
--- abstract: 'In this paper, we develop an online active mapping system to enable a quadruped robot to autonomously survey large physical structures. We describe the perception, planning and control modules needed to scan and reconstruct an object of interest, without requiring a prior model. The system builds a voxel representation of the object, and iteratively determines the Next-Best-View (NBV) to extend the representation, according to both the reconstruction itself and to avoid collisions with the environment. By computing the expected information gain of a set of candidate scan locations sampled on the as-sensed terrain map, as well as the cost of reaching these candidates, the robot decides the NBV for further exploration. The robot plans an optimal path towards the NBV, avoiding obstacles and un-traversable terrain. Experimental results on both simulated and real-world environments show the capability and efficiency of our system. Finally we present a full system demonstration on the real robot, the ANYbotics ANYmal, autonomously reconstructing a building facade and an industrial structure.' author: - 'Yiduo Wang, Milad Ramezani and Maurice Fallon [^1] [^2]' bibliography: - 'library.bib' title: | **[Actively Mapping Industrial Structures\ with Information Gain-Based Planning on a Quadruped Robot]{}** --- Introduction ============ In the context of robotics, active perceptual planning refers to exploration by a mobile robot equipped with sensors to conduct a survey of an object or environment of interest. It can be of assistance for the regular inspection and monitoring of remote or dangerous facilities such as offshore platforms. 
Although active mapping has been investigated for many applications such as inspection [@Hollinger2013] and virtual modelling [@Kriegel2015], and on robotic platforms as varied as aerial [@Bircher2018; @Delmerico2018], wheeled [@Isler2016; @Vasquez-Gomez2014a] and underwater robots [@Ho2018; @Franz2016], the online deployment of such a system on a real robot is still a challenge, thus requiring further investigation. Advances in quadruped mobility and hardware reliability have been significant and the first industrial prototypes are being tested on live industrial facilities [@gehring2019fsr]. Quadrupeds can cover the same terrain as wheeled or tracked robots but can also cross mobility hazards and climb stairs. While UAVs are being actively deployed for these kinds of missions, it is difficult to operate aerial platforms within confined spaces and their sensing payload is limited. Our approach is to build and maintain an accurate 3D model of the object of interest as well as the local environment and to use it to plan actions of a quadruped robot to improve and extend the model. Our active mapping framework adapts the Information Gain (IG) approach originally formulated in [@Isler2016], which focused on the IG formulation with minimum attention to the problems that an actual robot faces in realistic inspections. We aim for a more realistic validation in an industrial setting (). The object of interest will be of unknown shape, surrounded by uneven terrain and mobility hazards and our solution will be embodied on the ANYmal quadruped robot. We present a complete active mapping system in the experimental section of this paper. The contributions of our research are as follows: - Implementation of an active mapping system on a quadruped, enabling the robot to not only traverse an unstructured environment, but also to scan an object of interest in an optimal manner using LiDAR. 
- Formulation of the approach as a Next-Best-View (NBV) problem, which determines the best pose for the robot to conduct the next scan on the basis of metrics drawn from the partial reconstruction (information gain) and the environment map (cost of mobility). - Evaluation of the system in simulated and real-world environments and the real-time deployment of our system on the ANYmal quadruped robot. The remainder of this paper is organised as follows. In Section \[sec:RelatedWork\] we discuss related works with our method detailed in Section \[sec:Method\]. Experimental results are given in Section \[sec:Experiments\] before discussing conclusions and future work in Section \[sec:FutureWork\]. Related Works {#sec:RelatedWork} ============= First we will review existing active mapping systems grouped according to two aspects: the prior assumptions made and the type of representation used. Prior Assumptions ----------------- Active mapping systems are typically divided between model-based and model-free approaches [@Karaszewski2016]. Model-based methods are essential for routine survey and inspection. They are typically applied in industrial scenarios because CAD models are often available [@Chen2011]. Blaer and Allen [@Blaer2007] designed a model-based system for 3D mapping of large environments by solving an Art Gallery Problem (AGP) and a Travelling Salesman Problem (TSP) for path optimisation. The system of Hollinger *et al.* [@Hollinger2013] was designed to decrease the uncertainty in a ship hull surface mesh reconstruction and improve its quality. Their system assumed the availability of an initial mesh reconstruction and planned viewpoints that could inspect high uncertainty areas of the mesh. Model-free active vision systems are more versatile and can be applied to a wider variety of objects and sites. 
For instance, the system of Bircher *et al.* [@Bircher2016; @Bircher2018] aimed to explore unknown spaces of different scales, and the approach of Kriegel *et al.* [@Kriegel2012; @Kriegel2015] was designed to reconstruct objects of arbitrary shape but confined size. Model-based approaches can also be adapted to incorporate uncertainties in the prior model and to improve the quality of reconstruction. In a work following up on [@Hollinger2013], Hover *et al.* [@Franz2016] addressed the potential lack of prior information by carrying out a coarse-to-fine multi-stage survey. Representations --------------- Many active mapping systems employ either surface mesh or voxel space representations. However, there are works such as [@Kriegel2012; @Kriegel2013] which benefited from both - surface mesh for reconstruction and voxel space for collision avoidance. Hollinger *et al.* [@Hollinger2013] and Hover *et al.* [@Franz2016] utilised a surface mesh to precisely reconstruct ship hulls with the mesh providing information about surface coverage, boundaries and holes. Schmid *et al.* [@schmid2012view] used a coarse Digital Surface Model (DSM) as a prior map to plan viewpoints for a UAV. Their algorithm’s complexity is linearly proportional to the size of the site, limiting the applicability of this system to smaller scale environments. In volumetric approaches, view selection metrics such as IG are normally used to determine an NBV. Bircher *et al.* [@Bircher2016; @Bircher2018] utilised a volumetric representation with their receding horizon planning strategy to progressively explore unknown environments. They grew a Rapidly-exploring Random Tree (RRT) [@LaValle1998] and selected the best branch based upon IG that evaluates the amount of observable unknown space. Isler *et al.* [@Isler2016] proposed a collection of Volumetric Information (VI) measures. *Occlusion Aware* computes the entropy of all observable voxels. 
*Unobserved Voxel* only counts unknown voxels, thereby inclining the system towards exploring void spaces. *Rear Side Voxel* and *Rear Side Entropy* focus the sensor on the object of interest. *Proximity Count* was shown to be advantageous in ensuring coverage of the object in their experiments but has the disadvantage of potentially pointing the sensor away from the object. The authors demonstrated these measures on a KUKA Youbot with a 5 Degree of Freedom (DoF) arm in an office room. Delmerico *et al.* [@Delmerico2018] then conducted more experiments, comparing their VI formulations with the approaches of [@Kriegel2013] and [@Vasquez-Gomez2014], to determine the best choice for the NBV selection in a volumetric representation. We aim for a model-free system able to fully scan an object of interest but also to explore in an unknown environment, therefore our system minimises the entropy/uncertainty in the environment while focusing on the object. We pair this with an octree as our robot’s volumetric representation, storing occupancy probability using OctoMap [@OctoMap13]. Localization and Mapping ------------------------ Accurate mapping is crucial for precise model reconstruction and active planning. When a prior model is available, it can be used for pure localization, however a model-free approach requires a complete Simultaneous Localisation and Mapping (SLAM) system - itself an active research problem. By its nature, a SLAM system will drift during exploration. Planning methods which use rigid representations such as a single OctoMap would struggle to respond to loop closures. The approach of [@Ho2018] is interesting: it uses a deformable reconstruction with *Virtual Occupancy Maps* attached to a pose graph to remain flexible to new loop closures. For our real world experiments, we used a rigid map representation but are interested in this approach for future work. Method {#sec:Method} ====== In this section we detail the modules of our active mapping system. 
illustrates a block diagram of the system architecture. The system is based on an iterative pipeline. At the start of each iteration, the robot executes a scanning action while standing (further described in ) to collect a sensor sweep. These measurements are incorporated into a map, a route to a new scan location is planned and the robot is requested to walk to the NBV for further exploration. This sequence is repeated, until a termination criterion (such as map completion) is met. The LiDAR measurements $\mathcal{L}_\mathcal{B}$ are sensed and then stored relative to the base frame $\{\mathcal{B}\}$. During a scanning action, $\mathcal{L}_\mathcal{B}$ is transformed into the map frame $\{\mathcal{M}\}$ based on the current pose of the robot $\bm{x}_\mathcal{M}\in\mathbb{R}^6$ and is accumulated into a larger point cloud. Our robot runs a localization system with little drift on the scale of our current experiments, allowing us to assume that the pose $\bm{x}_\mathcal{M}$ is accurate. This is discussed further in . The accumulated point cloud is denoted *sweep* $\mathcal{S}_\mathcal{M}$ in our system. $\mathcal{S}_\mathcal{M}$ is then downsampled for uniformity and filtered to remove outliers. Next, the system uses the processed *sweep* $\mathcal{S}_\mathcal{M}$ as well as the pose of the robot $\bm{x}_\mathcal{M}$ to update the occupancy probabilities of voxels in its OctoMap. We also use the LiDAR measurements to generate an elevation map of the environment. The elevation map has a useful range of about $10$ m, allowing only local planning. The path planning module evaluates terrain traversability subject to the elevation map and builds an RRT to generate a collection of scan candidates $\mathcal{C}$. A scan candidate $c\in\mathcal{C}$ is a pose where the robot could go to for the next scanning action. 
We use a utility function $\mathcal{U}_{c}$ to determine the best scan candidate (NBV), $c_{best}$, from the set of candidates $\mathcal{C}$: $$\begin{aligned} \label{equ:utility_function} \mathcal{U}_{c} = \mathcal{G}_{c}\times(1-\mathcal{P}_{c})\times(1-\mathcal{T}_{c}).\end{aligned}$$ This function combines contributions from - information gain $\mathcal{G}_{c}$: which measures the expected improvement of the model if given a sweep from that pose, - position cost $\mathcal{P}_{c}$: which penalises poses that have already been visited or are too close to the object, - traversal cost $\mathcal{T}_{c}$: which models the cost of travelling to a specific scan candidate pose. These measures are discussed in the following sections. Finally, our system replans an optimised path using RRT\* [@Karaman2011] before the robot takes the next mapping action. Volumetric Information (VI) Gain {#subsec:VIs} -------------------------------- Given a partial model of the object of interest, our system needs to determine the expected improvement in the model should a scan be made from a particular scan candidate pose. The approach is to trace a series of rays from a hypothetical pose and to estimate the expected information gain of observable voxels. Let $\mathcal{R}_c$ denote the set of rays cast by a scan candidate $c$. For each ray $r\in\mathcal{R}_c$, $\mathcal{V}_{r}$ is the set of voxels that the ray intersects with before reaching its endpoint. The information gain $\mathcal{G}_c$ at scan candidate $c$ is the sum of VIs, $\mathcal{I}$, in every voxel $v\in\mathcal{V}_{r}$ along each ray $r\in\mathcal{R}_c$: $$\begin{aligned} \label{equ:information_gain_sum} \mathcal{G}_c = \sum_{\forall r\in \mathcal{R}_c}^{}\sum_{\forall v\in \mathcal{V}_r}^{}\mathcal{I}.\end{aligned}$$ We implemented two formulations for $\mathcal{I}$ from Isler *et al.* [@Isler2016], namely *Occlusion Aware* $\mathcal{I}_{OA}$ and *Rear Side Entropy* $\mathcal{I}_{RSE}$ which we summarise here. 
Other proposed formulations are less relevant due to our sensor’s long range and $360^\circ$ Field of View (FoV). ### Occlusion Aware This measure determines how effectively uncertainty will be reduced by scanning at a certain pose considering voxel visibility. Given the occupancy probability $P_o(v)$ of voxel $v$, the entropy of the voxel is obtained from: $$\begin{aligned} \label{equ:entropy} H(v) = -P_o(v)\ln P_o(v) - \big(1-P_o(v)\big)\ln \big(1-P_o(v)\big). \end{aligned}$$ Then the *Occlusion Aware* VI of voxel $v$, $\mathcal{I}_{OA}(v)$, is: $$\begin{aligned} \label{equ:occlusion_aware} \mathcal{I}_{OA}(v) = P_v(v)H(v), \end{aligned}$$ where $P_v(v)$ is the visibility probability of voxel $v$, which is computed as: $$\begin{aligned} \label{equ:visibility_proability} P_v(v_n) = \prod_{i = 0}^{n-1} (1-P_o(v_i)), \end{aligned}$$ where $v_n$ is the $n$-th voxel along the ray $r$ and $v_i$, $i = 0...n-1$, is a voxel that $r$ intersects before reaching $v_n$. ### Rear Side Entropy This measure is based on the *Occlusion Aware* VI but focuses on voxels at the rear of observed surfaces. Rear Side Entropy is formulated as: $$\begin{aligned} \label{equ:rear_side_entropy} \mathcal{I}_{RSE}(v) = \begin{cases*} \mathcal{I}_{OA}(v) & \text{$v$ is a \textit{Rear Side Voxel}, }\\ 0 &\text{otherwise.} \end{cases*}\end{aligned}$$ The idea is that a *Rear Side Voxel* is also likely to be occupied by the object. Focusing exploration on these voxels concentrates scans on the object rather than on surrounding free space. While these metrics proposed by Isler *et al.* [@Isler2016] are useful, their experimental validation was limited to lab experiments with a stereo camera planning over a fixed set of poses. We are motivated to develop a more complete field system which operates in a large scale industrial site. Our system instead plans scan candidates progressively using an RRT which plans on the LiDAR elevation map. 
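The two measures can be sketched in a few lines of code; the toy occupancy values below are illustrative only:

```python
import math

def entropy(p):
    # binary entropy of a voxel's occupancy probability, Eqn. (equ:entropy)
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

def ray_gain(ray_occupancy, rear_side=None):
    """Sum of I_OA (or I_RSE) over the voxels traversed by one ray.

    ray_occupancy: occupancy probabilities ordered from the sensor outwards.
    rear_side: optional set of voxel indices flagged as Rear Side Voxels,
    in which case the sum is the Rear Side Entropy contribution."""
    gain, p_visible = 0.0, 1.0
    for n, p_occ in enumerate(ray_occupancy):
        if rear_side is None or n in rear_side:
            gain += p_visible * entropy(p_occ)  # I_OA(v) = P_v(v) * H(v)
        p_visible *= (1.0 - p_occ)              # Eqn. (equ:visibility_proability)
    return gain

# toy ray: free space, then a likely surface, then unknown voxels behind it
ray = [0.05, 0.05, 0.9, 0.5, 0.5]
oa = ray_gain(ray)                      # Occlusion Aware
rse = ray_gain(ray, rear_side={3, 4})   # Rear Side Entropy
print(oa, rse)
```

Note how the visibility factor discounts the high-entropy unknown voxels behind the surface, and how restricting the sum to rear-side indices focuses the gain on the object.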
Our robot explores using a Velodyne LiDAR, which has a $360^\circ$ horizontal FoV and a long sensing range, making it suitable for scanning large-scale objects or environments. In our experimental section () we compare *Rear Side Entropy* and *Occlusion Aware* in field experiments. Position and Traversal Cost {#subsec:Costs} --------------------------- In our path planning module (), the RRT grows only within the traversable area of the elevation map; therefore, the collection of scan candidates $\mathcal{C}$ does not contain invalid or unreachable poses. As a result, the utility value $\mathcal{U}_c$ of each scan candidate is penalised based on the characteristics of the ANYmal and the configuration of the LiDAR system. ### Position Cost The position cost $\mathcal{P}_c$ is defined as: $$\begin{aligned} \label{equ:position_cost} \mathcal{P}_c = \begin{cases} 1-d_{thres}^{-1}\times d_c & d_{thres} \geq d_c \geq 0, \\ 0 & d_c > d_{thres}, \end{cases}\end{aligned}$$ where $d_c$ is the distance to an already visited scanning pose or the object itself, and $d_{thres}$ is a user-defined threshold. $\mathcal{P}_c$ is used to avoid rescanning previously visited regions and to maintain a reasonable distance between robot and object. Using *Occlusion Aware* VI, the system plans NBVs in regions where the robot can observe more void space. The main contribution to the information gain $\mathcal{G}_c$ is from void rays $r_{void}$ — rays that do not hit any surfaces. Voxels $v_{void}\in\mathcal{V}_{r_{void}}$ are mainly unknown (occupancy probability $P_o(v_{void}) = 0.5$) and have high entropy. In our current system, $P_o(v)$ of a voxel $v\in\mathcal{V}_r$ is only updated when ray $r$ hits a surface, so $P_o(v_{void})$ does not change when observed by $r_{void}$. As a result, for *Occlusion Aware*, $\mathcal{I}_{OA}(v_{void})$ will not decrease, causing the robot to become stuck revisiting such high-gain poses rather than making exploration progress. By applying the position cost, our system can also avoid visiting fully scanned areas. 
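Under the definition above, the position cost reduces to a small linear ramp. A minimal sketch (argument names are ours; distances in metres):

```python
def position_cost(d_c, d_thres):
    """Position cost P_c: falls linearly from 1 (at d_c = 0) to 0
    (at d_c = d_thres); zero beyond the user-defined threshold."""
    if d_c > d_thres:
        return 0.0
    return 1.0 - d_c / d_thres
```

Because $\mathcal{U}_c$ is multiplied by $(1-\mathcal{P}_c)$, a candidate right on top of a visited pose ($d_c=0$, $\mathcal{P}_c=1$) receives zero utility, while candidates beyond $d_{thres}$ are unaffected.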
We plan to utilise $r_{void}$ to update voxel occupancy in a future version. In contrast, $r_{void}$ do not contribute to the *Rear Side Entropy* VI. Every scan decreases the entropy of observed voxels. In addition, the position cost $\mathcal{P}_c$ that applies to candidates $c$ close to the object encourages candidates farther away, resulting in our system observing a wider view. Conversely, if a scan candidate pose $c$ is farther away, fewer rays in $\mathcal{R}_c$ are able to observe the object, hence the information gain $\mathcal{G}_c$ of this pose would be lower compared to closer candidates. This discourages our system from selecting distant NBVs, ensuring a high-resolution scan. Isler *et al.* [@Isler2016] predefined a set of scan candidates in their system so that the distance of scan poses to the object surface was fixed. However, in our system, the distance between the robot and the scan surface is dynamic so that the robot can avoid obstacles in the environment. Furthermore, since the ANYmal operates on a 2.5D manifold, it is necessary for the quadruped to adjust the distance to the object surface so as to efficiently scan objects of different sizes. By combining IGs with a position cost, our system achieves a balance between coverage and resolution. ### Traversal Cost The traversal cost $\mathcal{T}_c$ represents the difficulty for the robot of executing a certain path to candidate $c$, owing to the roughness of the terrain and the distance. Currently, our approach classifies the elevation map discretely as either safe ($\mathcal{T}_c = 0$) or not traversable ($\mathcal{T}_c = 1$). In addition, a constant traversal cost penalises scan candidates that are behind the robot, because large turns are more difficult for the robot to execute. This policy also encourages the robot to explore forward rather than alternating direction, making the system more time- and energy-efficient. 
Path Planning {#subsec:PathPlanning} ------------- The path planning module in our system consists of two phases, as indicated in . Both phases rely on an elevation grid map generated from LiDAR measurements of the environment. We used the approach of [@Fankhauser2018ral] to compute the slope and normal of each cell and, in turn, a measure of the traversability of the terrain. The traversability is used to determine which states planned by the RRT and RRT\* are valid and reachable. In the first phase, the RRT grows into the traversable area without a goal until a user-defined number of nodes have been generated. These nodes form the set of scan candidates $\mathcal{C}$. We then compute the utility value $\mathcal{U}_c$ for each scan candidate $c\in\mathcal{C}$ independently and choose the NBV $c_{best}$ with the highest individual value. Following that, the second phase of our path planning module uses RRT\* to replan the route to the NBV, optimising travel distance. Termination Condition --------------------- In a model-free active mapping system, it is difficult to evaluate the completeness of reconstruction. We terminate operation using a user-defined threshold $u_{thres}$ on the utility value after a planning sequence. When the utility value of the NBV $\mathcal{U}_{c_{best}}$ falls below the threshold (), no new scan candidate has satisfactory quality, and the active mapping procedure terminates. $$\begin{aligned} \label{equ:termination} \mathcal{U}_{c_{best}} < u_{thres} \quad \forall c \in \mathcal{C}.\end{aligned}$$ Experiments and Evaluation {#sec:Experiments} ========================== To demonstrate our system’s functionality and to test the VI formulations, we carried out evaluations of increasing complexity—with the simple virtual models in and with Gazebo reconstructions of our envisaged test locations, used to verify our system’s ability to avoid collisions. Finally, we deployed our system on the real ANYmal robot in these environments. 
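Before turning to the results, the planning loop described above—score each RRT node, pick the NBV, and stop when even the best utility falls below $u_{thres}$—can be condensed into a short sketch. The candidate representation as $(\mathcal{G}_c, \mathcal{P}_c, \mathcal{T}_c)$ triples is our own illustrative choice:

```python
def utility(g_c, p_c, t_c):
    """U_c = G_c * (1 - P_c) * (1 - T_c)."""
    return g_c * (1.0 - p_c) * (1.0 - t_c)

def select_nbv(candidates, u_thres):
    """Return the highest-utility candidate, or None when even the best
    falls below u_thres (the termination condition).

    candidates: iterable of (G_c, P_c, T_c) triples, one per RRT node."""
    scored = [(utility(*c), c) for c in candidates]
    if not scored:
        return None
    best_u, best_c = max(scored, key=lambda s: s[0])
    return best_c if best_u >= u_thres else None
```

A `None` return corresponds to the termination condition firing, after which the mapping mission ends.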
The results are detailed in the following sections. The real experiments involved scanning a building facade at Green Templeton College (GTC) ($10\times35\times4$ m$^3$) in Oxford () and a mock-up helicopter on the oil rig training site at the Fire Service College (FSC) ($3\times8\times3$ m$^3$) in Gloucestershire (). In these experiments, we used a LiDAR localisation system running on the robot’s navigation PC. The system registered LiDAR clouds against a prior point cloud map using Iterative Closest Point (ICP) [@pomerleau13ar] seeded with legged odometry. At the scale of our experiments, a deformable map representation was not needed. Hardware {#sec:Hardware} -------- ### Platform The robot platform employed in this work is an ANYbotics ANYmal (version B) [@hutter2016anymal]. The robot has 12 actuated joints, as well as the 6 DoF floating base link. It is capable of trotting at a maximum speed of $1.0$ m/s and traversing complex terrain, e.g. stairs, kerbs and ramps. ### Sensor The primary sensor of our system is a Velodyne VLP-16 LiDAR which has 16 laser beams spread across a $\pm15.0^\circ$ vertical FoV and measures ranges with an accuracy of $\pm3$ cm across the full $360^\circ$ horizontal FoV. Utilising the robot’s wide range of motion, we designed a scanning action to roll the base from $40^\circ$ to $-40^\circ$ (while standing). The action improves the vertical FoV of our system to $\pm55^\circ$ and allows mapping objects much taller than the robot. Using this action, our system collects individual LiDAR sweeps. Simulated Experiments --------------------- We conducted experiments in simulation to map models of a car and a house ( (top)). We then used a Leica BLK360 laser scanner to create accurate reconstructions of our two test sites, the facade of a building and a helicopter deck ( (bottom)). We modelled the major surfaces of these sites to create Gazebo simulations of the test sites. 
In these experiments, the approximate location and size of the object of interest are known, which aids segmentation from the accumulated sweep. This informs our system about where the OctoMap should be constructed and where the VIs should be computed. We chose a $5$ cm resolution for our OctoMap octree, which suits the resolution of the Velodyne LiDAR. For path planning and NBV selection, our system grows an RRT of up to $150$ nodes every iteration, within a $12\times12$ m$^2$ elevation map centred around the robot. This allows the robot to plan and conduct mapping actions around the object. To quantify the mapping results, we use several criteria, including point cloud coverage ($c_p$), travel distance ($d_t$) and number of scan actions ($n_s$). In addition, we compute the overall task time ($t_{all}$) as well as the average time per scan spent computing information gains and determining the NBVs ($t_{nbv}$) to evaluate the system’s online feasibility. To compute point cloud coverage, we aligned the accumulated point cloud with the ground truth and determined the points in the accumulated cloud within $4.3$ cm of the nearest point in the ground truth, approximately the distance between the centre of our OctoMap voxel and its vertex ($\frac{\sqrt{3}}{2}\times5$ cm). These points are classified as observed. Point cloud coverage $c_p$ is then defined as: $$\begin{aligned} c_p &= \frac{N_{O}}{N_{GT}}\end{aligned}$$ where $N_{GT}$ and $N_{O}$ are the total number of points in the ground truth model and the number of points observed in the model so far, respectively. As summarised in , the point cloud coverage gained with *Occlusion Aware* IG is slightly higher than that with *Rear Side Entropy* IG. This can also be seen in , which demonstrates the point cloud coverage per scan. The maximum coverage never reaches $100\%$ in our system as the top surfaces of objects are higher than the robot and cannot be observed from the ground, as shown in . 
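The coverage metric can be sketched as follows. This uses a brute-force nearest-neighbour test for clarity (a real pipeline would presumably use a KD-tree), and the function name is ours:

```python
import math

def point_cloud_coverage(acc_points, gt_points, thres=0.0433):
    """c_p = N_O / N_GT: N_O counts accumulated-cloud points lying within
    `thres` metres (about sqrt(3)/2 * 5 cm) of a ground-truth point.

    acc_points, gt_points: sequences of (x, y, z) tuples in metres."""
    if not gt_points:
        return 0.0
    n_o = sum(1 for p in acc_points
              if any(math.dist(p, q) <= thres for q in gt_points))
    return n_o / len(gt_points)
```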
While there is on average an 8.5% reduction in travel distance when our system employs *Occlusion Aware* compared to *Rear Side Entropy*, our system is subject to the random scan candidate placement by the RRT. Hence the performance difference between the two VIs in simulation so far is not significant enough for us to make a conclusive decision on which is the better formulation. Both approaches allowed our system to accomplish the mapping task. In both cases, the travel distance, the overall run time and the NBV computation time are all feasible for real experiments. Real-World Experiments ---------------------- Based on the simulated results in the previous section, we used the *Occlusion Aware* VI gain metric in our real experiments. summarises the evaluation of the reconstruction results in both experiments. In these, because our elevation map is partly corrupted by odometry noise (see attached video) and the LiDAR sensor is just $70$ cm from the ground, we can only plan in a $7\times7$ m$^2$ area around the robot. We therefore decreased the number of RRT scan candidates from $150$ to $75$, consequently decreasing the NBV computation time $t_{nbv}$. Comparing with , one can see that the computation times for the real experiments at the facade and helicopter locations are on average half of the time taken in simulation. Our approach allows the robot to avoid the mobility hazards of the helicopter experiment in : stairwells, open edges on the deck and a skirt around the helicopter. demonstrates a comparison between the robot trajectories in the real FSC helicopter experiment (counter-clockwise) and in simulation (clockwise). The paths taken by our system in both scenarios are similar, demonstrating the practicality of our system in real scenarios. However, as noted before, because more unknown areas existed within the elevation map, the system had to plan around void cells. 
demonstrates the success of our system in reconstructing the helicopter body (in green), compared to the ground truth (in red). Due to limitations in the elevation map and, in turn, in our path planning module, our system planned more scans than in simulation. In the helicopter scenario, the number of scans required more than doubled and, as a result, the run time almost tripled. A major part of that difference is due to the time spent by the robot operator judging whether planned paths were safe. Conclusion and Future Work {#sec:FutureWork} ========================== In this work we presented a model-free active mapping system using a volumetric representation. The system allows a quadruped robot to explore and reconstruct both small- and large-scale objects, in particular industrial assets, with few assumptions about the test environment and requiring only high-level human supervision. We tested our system in fully realistic scenarios and our approach allowed the robot to accomplish mapping missions in a complicated environment, creating accurate reconstructions online. In the future, we plan to improve the quality of the elevation map and to incorporate full traversability estimation to allow our robot to navigate over rough terrain, such as kerbs and ramps, so as to fully utilise the dynamics of a legged robot. In addition, we plan to base localisation on a pose-graph SLAM system [@ramezani2020online] so that our active mapping approach can explore larger environments, with the benefit of loop closure. Hence, we would like to modify the reconstruction to be deformable, in the manner proposed by Ho *et al.* [@Ho2018]. [^1]: This research is supported by the EPSRC ORCA Robotics Hub (EP/R026173/1). M. Fallon is supported by a Royal Society University Research Fellowship. [^2]: The authors are with the Oxford Robotics Institute, University of Oxford, UK. [{ywang, milad, mfallon}@robots.ox.ac.uk]{}
--- abstract: | In this paper we consider the matching coefficients up to two loops between Quantum Chromodynamics (QCD) and Non-Relativistic QCD (NRQCD) for the vector, axial-vector, scalar and pseudo-scalar currents. The structure of the effective theory is discussed and analytical results are presented. Particular emphasis is put on the singlet diagrams. PACS numbers: author: - | B.A. Kniehl$^{(a)}$, A. Onishchenko$^{(a,b)}$, J.H. Piclum$^{(a,c)}$, M. Steinhauser$^{(c)}$\ [*(a) II. Institut für Theoretische Physik, Universität Hamburg*]{}\ [*22761 Hamburg, Germany*]{}\ [*(b) Theoretical Physics Department, Petersburg Nuclear Physics Institute, Orlova Roscha*]{}\ [*188300 Gatchina, Russia*]{}\ [*(c) Institut für Theoretische Teilchenphysik, Universität Karlsruhe*]{}\ [*76128 Karlsruhe, Germany*]{} title: | DESY 06-031\ SFB/CPP-06-12\ TTP06-11\ Two-Loop Matching Coefficients for Heavy Quark Currents --- In recent years, quite a lot of activity has been devoted to the treatment of bound states of two heavy particles, both in QED and QCD (for a recent review see, e.g., Ref. [@Brambilla:2004wf]). From the theory point of view the calculations have been put onto a solid basis due to the formulation of proper effective theories [@Caswell:1985ui; @Bodwin:1994jh], NRQED and NRQCD, respectively, which provide the possibility to systematically evaluate higher-order corrections. The construction of the effective theories consists of essentially two steps: First, the effective operators involving the light degrees of freedom have to be constructed, and second, the corresponding couplings, the so-called coefficient functions, have to be computed by comparing the full and the effective theories. The latter is also referred to as matching calculation. The framework which is considered in this letter consists of QCD accompanied by external currents, where we allow for vector, axial-vector, scalar and pseudo-scalar couplings. 
The main results of this letter are the two-loop matching coefficients. Thus, following the prescription outlined above we determine in a first step the effective currents and then perform a matching calculation. The matching coefficients provided in this paper constitute a building block in all calculations involving the corresponding external currents. This includes in particular production and decay processes of heavy quarkonia or the production of top quark pairs close to threshold. One could also think of the decay of a CP-even or CP-odd Higgs boson (with mass $M$) into two quarks with $2 m\approx M$. The basic idea behind the construction of the Lagrange density for NRQCD is to expand all terms of the QCD Lagrangian in the limit of a large quark mass. A similar procedure has to be applied to external currents which we define in coordinate space as $$\begin{aligned} j_v^\mu &=& \bar{\psi} \gamma^\mu \psi\,, \nonumber \\ j_a^\mu &=& \bar{\psi} \gamma^\mu\gamma_5 \psi\,, \nonumber \\ j_s &=& \bar{\psi} \psi\,, \nonumber \\ j_p &=& \bar{\psi} i\gamma_5 \psi\,. \label{eq::currents}\end{aligned}$$ Note that the anomalous dimension of $j_v^\mu$ and $j_a^\mu$ is zero whereas for the scalar and pseudo-scalar current it is obtained from the renormalization constant $Z_s = Z_p = Z_m$, which is given at the two-loop level in Ref. [@Gray:1990yh]. In order to perform the transition to the effective theory it is convenient to work in momentum space and to introduce the two-component Pauli-spinors in the form $$u(\vec{p}\,) = \sqrt{\frac{E+m}{2 E}} \left( \begin{array}{c} \chi \\ \frac{\vec{p}\cdot\vec{\sigma}}{E+m} \chi \end{array} \right)\,,\; v(-\vec{p}\,) = \sqrt{\frac{E+m}{2 E}} \left( \begin{array}{c} \frac{(-\vec{p}\,)\cdot\vec{\sigma}}{E+m} \phi \\ \phi \end{array} \right)\,, \label{eq::psi2phi}$$ where $m$ denotes the heavy quark mass. In Eq. 
(\[eq::psi2phi\]) $\chi$ is a spinor that annihilates a heavy quark and $\phi$ correspondingly creates a heavy anti-quark with momentum $\vec{p}$. In a first step we want to express the currents of Eq. (\[eq::currents\]) in terms of $\phi$ and $\chi$ and expand in the inverse heavy quark mass. This actually leads to the tree-level matching conditions. When inserting Eq. (\[eq::psi2phi\]) into (\[eq::currents\]) it turns out to be convenient to split the time-like and space-like coefficients of the vector and axial-vector currents. This leads to $$\begin{aligned} j_v^0 &=& 0 + {\cal O}\left(\frac{1}{m^2}\right)\,,\nonumber\\ j_v^k &=& \tilde{j}_v^k + {\cal O}\left(\frac{1}{m^2}\right)\,,\nonumber\\ j_a^0 &=& i\tilde{j}_p + {\cal O}\left(\frac{1}{m^2}\right)\,,\nonumber\\ j_a^k &=& \tilde{j}_a^k + {\cal O}\left(\frac{1}{m^3}\right)\,,\nonumber\\ j_s &=& \tilde{j}_s + {\cal O}\left(\frac{1}{m^3}\right)\,,\nonumber\\ j_p &=& \tilde{j}_p + {\cal O}\left(\frac{1}{m^2}\right)\,,\end{aligned}$$ where $k=1,2,3$ and the currents in the effective theory are given by $$\begin{aligned} \tilde{j}_v^k &=& \phi^\dagger \sigma^k \chi \,,\nonumber\\ \tilde{j}_a^k &=& \frac{1}{2 m} \phi^\dagger [\sigma^k,\vec{p}\cdot\vec{\sigma}] \chi \,,\nonumber\\ \tilde{j}_s &=& -\frac{1}{m} \phi^\dagger \vec{p}\cdot\vec{\sigma} \chi \,,\nonumber\\ \tilde{j}_p &=& -i\phi^\dagger \chi \,.\end{aligned}$$ Note that $\tilde{j}_p$ also appears in the expansion of $j_a^0$ which means that the corresponding matching coefficients are equal. This will be used as a check of our calculation. Due to the occurrence of the momentum $\vec{p}$ in $\tilde{j}_a^k$ and $\tilde{j}_s$ an expansion in the external momenta has to be performed in order to obtain the loop corrections to the corresponding matching coefficients. The basic idea to obtain the matching coefficients is to compute vertex corrections induced by the considered current both in the full and the effective theory. 
In practice it is convenient to consider the renormalized vertex function with two external on-shell quarks and to perform an asymptotic expansion about $s=4m^2$, where $s$ is the momentum squared of the external current, the so-called threshold expansion [@Beneke:1997zp; @Smirnov:2002pj]. Denoting by $\Gamma_x$ the proper structure of the genuine vertex corrections and by $Z_2$ and $Z_x$ the renormalization constants due to the quark wave function and the anomalous dimension of the current one obtains the equation $$\begin{aligned} Z_2 Z_x \Gamma_x(q_1,q_2) &=& c_x \tilde{Z}_2 \tilde{Z}_x^{-1} \tilde{\Gamma}_x + \ldots \label{eq::match_def} \,,\end{aligned}$$ where $x\in\{v,a,s,p\}$ with the understanding that the axial-vector part is split into time-like and space-like components. The ellipses denote terms suppressed by inverse powers of the heavy quark mass and the quantities in the effective theory are marked by a tilde. $c_x$ is the matching coefficient we are after. In our approximation $\tilde{Z}_2=1$. $Z_2$ to two loops has been computed in Ref. [@Broadhurst:1991fy]. As far as $\tilde{\Gamma}_x$ is concerned only the tree-level result determined by $\tilde{j}_x$ contributes to Eq. (\[eq::match\_def\]). The momenta $q_1$ and $q_2$ in Eq. (\[eq::match\_def\]) correspond to the outgoing momenta of the quark and anti-quark which are considered on-shell. Starting from order $\alpha_s^2$ the matching coefficients $c_x$ exhibit infra-red divergences which are compensated by ultra-violet divergences of the effective theory rendering physical quantities finite. In Eq. (\[eq::match\_def\]) the renormalization constant $\tilde{Z}_x$ which generates the anomalous dimension of $\tilde{j}_x$ takes over this part. The quantities $\Gamma_x$ are conveniently obtained with the help of projectors which are constructed in such a way that they project on the coefficients of $\tilde{\Gamma}_x$. 
For the vector case, the zeroth component of the axial-vector and the pseudo-scalar case we can simply identify $q_1^2=q_2^2=q^2/4=m^2$ and use $$\begin{aligned} \Gamma_v &=& \mbox{Tr}\left[ P^{(v)}_{\mu} \Gamma^{(v),\mu} \right]\,, \nonumber\\ \Gamma_p &=& \mbox{Tr}\left[ P^{(p)} \Gamma^{(p)} \right]\,, \nonumber\\ \Gamma_{a,0} &=& \mbox{Tr}\left[ P^{(a,0)}_{\mu} \Gamma^{(a),\mu} \right]\,, \label{eq::proj0}\end{aligned}$$ with $$\begin{aligned} P^{(v)}_{\mu} &=& \frac{1}{8 (D-1) m^2} \left( -\frac{\slashed{q}}{2} + m \right) \gamma_\mu \left( \frac{\slashed{q}}{2} + m \right)\,,\nonumber\\ P^{(p)} &=& \frac{1}{8 m^2} \left( -\frac{\slashed{q}}{2} + m \right) \gamma_5 \left( \frac{\slashed{q}}{2} + m \right)\,,\nonumber\\ P^{(a,0)}_{\mu} &=& -\frac{1}{8 m^2} \left( -\frac{\slashed{q}}{2} + m \right) \gamma_\mu \gamma_5 \left( \frac{\slashed{q}}{2} + m \right)\,. \label{eq::proj1}\end{aligned}$$ As already mentioned above the case $(a,0)$ is used as a check for the pseudo-scalar matching coefficient. For the axial-vector and scalar cases we have the equations analogous to Eq. (\[eq::proj0\]). However, since the corresponding effective currents have a suppression factor $|\vec{p}\,|/m$ it is necessary to choose $q_1=q/2+p$ and $q_2=q/2-p$, to expand up to linear order in $p$ and to set afterwards $p=0$ and $q^2=4m^2$. Note that we choose a reference frame where $q\cdot p = 0$ [@Beneke:1997zp; @Smirnov:2002pj]. Thus the projectors are more complicated and are given by $$\begin{aligned} P_{(a,i),\mu} &=& -\frac{1}{8 m^2} \left\{ \frac{1}{D-1} \left(-\frac{\slashed{q}}{2} + m \right) \gamma_\mu \gamma_5 \left(-\frac{\slashed{q}}{2} + m\right) \right. \nonumber\\ && \left. 
- \frac{1}{D-2} \left(-\frac{\slashed{q}}{2} + m\right) \frac{m}{p^2} \left( (D-3) p_\mu + \gamma_\mu \slashed{p} \right) \gamma_5 \left(\frac{\slashed{q}}{2} + m\right) \right\} \,,\nonumber\\ P_{(s)} &=& \frac{1}{8 m^2} \left\{ \left(-\frac{\slashed{q}}{2} + m \right) {\bf 1} \left(-\frac{\slashed{q}}{2} + m\right) + \left(-\frac{\slashed{q}}{2} + m\right) \frac{m}{p^2} \slashed{p} \left(\frac{\slashed{q}}{2} + m\right) \right\} \,. \label{eq::proj2}\end{aligned}$$ In Fig. \[fig::diags\] some Feynman diagrams contributing to the matching coefficients are shown. Due to the application of the projectors the corresponding integrals can be reduced to the functions $J_\pm$ and $L_\pm$ as defined in Eqs. (14) and (55) of Ref. [@Beneke:1997zp]. However, due to the expansion in the momentum $p$ the powers of the denominators are higher and a systematic reduction of the scalar integrals to master integrals is necessary. For the current calculation we implemented the method of Ref. [@Smirnov:2003kc]. For some of the occurring integrals the program [AIR]{} [@Anastasiou:2004vj] is applied. The details will be described elsewhere. An important class of diagrams is constituted by the so-called singlet diagrams (cf. Fig. \[fig::diags\](e)) where the external current does not couple to the quark–anti-quark pair of the final state. Due to Furry’s theorem there is no contribution to the vector case from these diagrams; however, non-vanishing, finite results are obtained for $c_a$, $c_s$ and $c_p$. For the scalar and the pseudo-scalar currents only the heavy quark runs in the closed fermion loop. All other quark flavours are suppressed by the light quark mass. This is different for the axial-vector coupling. Here we consider the effective current formed by the top and bottom quark fields $$\begin{aligned} j_a^\mu &=& \bar{t} \gamma^\mu \gamma_5 t - \bar{b} \gamma^\mu \gamma_5 b\,,\end{aligned}$$ which ensures the cancellation of the anomaly-like contributions. 
For the same reason the contributions from the remaining light quarks cancel. In the analytical results given below the contributions from the singlet diagrams are marked separately. At this point we only want to mention that in the axial-vector and pseudo-scalar case $\gamma_5$ was treated according to the prescription of Ref. [@Larin:1993tq]. In practice this means that we perform the replacements $$\begin{aligned} \gamma^\mu\gamma_5 &\to& \frac{i}{3!}\epsilon^{\mu\nu\rho\sigma} \gamma_\nu\gamma_\rho\gamma_\sigma \,,\nonumber\\ \gamma_5 &\to& \frac{i}{4!}\epsilon^{\mu\nu\rho\sigma} \gamma_\mu\gamma_\nu\gamma_\rho\gamma_\sigma \label{eq::g5} \,,\end{aligned}$$ strip off the $\epsilon$ tensor and deal with the objects with three and four indices, respectively. The corresponding projectors are obtained by performing the replacements of Eq. (\[eq::g5\]) in Eqs. (\[eq::proj1\]) and (\[eq::proj2\]) which makes them more complicated. However, the actual calculation proceeds in close analogy to the non-singlet case. After summing all two-loop contributions one obtains a finite result. Note that for the non-singlet contributions it is safe to use anti-commuting $\gamma_5$. Actually, the treatment according to Ref. [@Larin:1993tq] leads to a wrong result. This is due to the infra-red divergences which are absent in the singlet diagrams. An alternative method to perform the calculation of the vertex corrections is based on the evaluation of the tensor integrals. Here, we used the T-operator method of Ref. [@Tarasov:1996br] to reduce the tensor integrals to products of the metric tensor and external momenta and scalar integrals with shifted space-time dimension. The resulting Dirac structures were further simplified and rewritten in terms of NRQCD fields. 
In addition to the bare two-loop diagrams we have to take into account the one-loop renormalization contribution from the heavy quark mass, which we renormalize on-shell, and the strong coupling renormalized in the $\overline{\rm MS}$ scheme. We want to mention that all contributions have been evaluated for general gauge parameter $\xi$. The final results for the matching coefficients are all independent of $\xi$ which constitutes an important check on our calculation. Let us in the following present our results and compare with the literature. The two-loop matching coefficient for the vector current has been computed almost ten years ago [@Czarnecki:1997vz; @Beneke:1997jm]. We confirmed these results and provide for completeness the analytical expressions $$\begin{aligned} c_v &=& 1 - 2 \frac{\alpha_s(m)}{\pi} C_F + \left(\frac{\alpha_s(m)}{\pi}\right)^2 \left[ C_F T \left( \frac{11}{18} n_l + \frac{22}{9} - \frac{4}{3} \zeta_2 \right) \right. \nonumber\\ && + C_F^2 \left( \frac{23}{8} - \frac{79}{6} \zeta_2 + 6 \zeta_2 \ln 2 - \frac{1}{2} \zeta_3 - \zeta_2 \ln\frac{\mu^2}{m^2} \right) \nonumber\\ && \left. + C_F C_A \left( -\frac{151}{72} + \frac{89}{24} \zeta_2 - 5 \zeta_2 \ln 2 - \frac{13}{4} \zeta_3 - \frac{3}{2} \zeta_2 \ln\frac{\mu^2}{m^2} \right) \right] \,,\end{aligned}$$ where $C_A = N_c$ and $C_F = (N_c^2 - 1)/(2 N_c)$ are the Casimir operators of the adjoint and fundamental representation of SU($N_c$), respectively, $T = 1/2$, and $n_l$ is the number of massless quarks. $\zeta_n$ denotes Riemann’s zeta-function. The one-loop result can already be found in Ref. [@KalSar]. 
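For orientation, the size of these corrections is easily checked numerically. The following sketch is our own transcription of the expression for $c_v$ above (the function name and the convention `lmu` $=\ln(\mu^2/m^2)$, with $\mu=m$ corresponding to `lmu=0`, are ours):

```python
import math

# SU(3) colour factors and constants appearing in c_v
CA, CF, T = 3.0, 4.0 / 3.0, 0.5
Z2 = math.pi**2 / 6.0        # Riemann zeta(2)
Z3 = 1.2020569031595943      # Riemann zeta(3)
L2 = math.log(2.0)

def c_v(a_s, n_l, lmu=0.0):
    """Two-loop matching coefficient c_v; a_s = alpha_s(m),
    n_l = number of massless quarks, lmu = ln(mu^2/m^2)."""
    x = a_s / math.pi
    two_loop = (CF * T * (11.0/18.0 * n_l + 22.0/9.0 - 4.0/3.0 * Z2)
                + CF**2 * (23.0/8.0 - 79.0/6.0 * Z2 + 6.0 * Z2 * L2
                           - 0.5 * Z3 - Z2 * lmu)
                + CF * CA * (-151.0/72.0 + 89.0/24.0 * Z2 - 5.0 * Z2 * L2
                             - 13.0/4.0 * Z3 - 1.5 * Z2 * lmu))
    return 1.0 - 2.0 * x * CF + x**2 * two_loop
```

For a representative value $\alpha_s(m_t)\approx 0.107$ with $n_l=5$ and $\mu=m$, the two-loop term shifts $c_v$ by roughly $-0.05$ on top of the one-loop result.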
The anomalous dimension of the effective vector current, which is related to $\tilde{Z}_v$ through $\gamma_v = \frac{{\rm d} \ln \tilde{Z}_v}{{\rm d} \ln \mu}$, reads $$\begin{aligned} \gamma_v &=& -\left(\frac{\alpha_s}{\pi}\right)^2\left( 2 C_F^2 + 3 C_F C_A \right) \zeta_2 \,.\end{aligned}$$ Our results for the two-loop matching coefficients $c_a$, $c_s$ and $c_p$ are given by $$\begin{aligned} c_a &=& 1 - \frac{\alpha_s(m)}{\pi} C_F + \left(\frac{\alpha_s(m)}{\pi}\right)^2 \left[ C_F T \left( \frac{7}{18} n_l + \frac{20}{9} - \frac{4}{3} \zeta_2 \right) \right. \nonumber\\ && + C_F^2 \left( \frac{23}{24} - \frac{27}{4} \zeta_2 + \frac{19}{4} \zeta_2 \ln 2 - \frac{27}{16} \zeta_3 - \frac{5}{4} \zeta_2 \ln\frac{\mu^2}{m^2} \right) \nonumber\\ && \left. + C_F C_A \left( -\frac{101}{72} + \frac{35}{24} \zeta_2 - \frac{7}{2} \zeta_2 \ln 2 - \frac{9}{8} \zeta_3 - \frac{1}{2} \zeta_2 \ln\frac{\mu^2}{m^2} \right) + C_F T X^{(a)}_{\rm sing} \right] \,,\nonumber\\ c_s &=& 1 - \frac{1}{2} \frac{\alpha_s(m)}{\pi} C_F + \left(\frac{\alpha_s(m)}{\pi}\right)^2 \left[ C_F T \left( -\frac{5}{36} n_l + \frac{121}{36} - 2 \zeta_2 \right) \right. \nonumber\\ && + C_F^2 \left( \frac{5}{16} - \frac{37}{8} \zeta_2 + 3 \zeta_2 \ln 2 - \frac{11}{4} \zeta_3 - 2 \zeta_2 \ln\frac{\mu^2}{m^2} \right) \nonumber\\ && \left. + C_F C_A \left( \frac{49}{144} + \frac{1}{8} \zeta_2 - 3 \zeta_2 \ln 2 - \frac{5}{4} \zeta_3 - \frac{1}{2} \zeta_2 \ln\frac{\mu^2}{m^2} \right) + C_F T X^{(s)}_{\rm sing} \right] \,,\nonumber\\ c_p &=& 1 - \frac{3}{2} \frac{\alpha_s(m)}{\pi} C_F + \left(\frac{\alpha_s(m)}{\pi}\right)^2 \left[ C_F T \left( \frac{1}{12} n_l + \frac{43}{12} - 2 \zeta_2 \right) \right. \nonumber\\ && + C_F^2 \left( \frac{29}{16} - \frac{79}{8} \zeta_2 + 6 \zeta_2 \ln 2 - \frac{9}{2} \zeta_3 - 3 \zeta_2 \ln\frac{\mu^2}{m^2} \right) \nonumber\\ && \left. 
+ C_F C_A \left( -\frac{17}{48} + \frac{17}{8} \zeta_2 - 6 \zeta_2 \ln 2 - 3 \zeta_3 - \frac{3}{2} \zeta_2 \ln\frac{\mu^2}{m^2} \right) + C_F T X^{(p)}_{\rm sing} \right] \,. \label{eq::c_asp}\end{aligned}$$ The one-loop result for $c_p$ can already be found in Ref. [@Braaten:1995ej]; the two-loop coefficients of Eq. (\[eq::c\_asp\]) are new. They constitute our main result. The one-loop coefficients can be easily obtained from the one-loop on-shell vertex corrections with arbitrary momentum squared of the external current, $s$. In the analytic expressions it is straightforward to perform the limit where the velocity of the produced quarks is small. After subtracting the leading term, which corresponds to the Coulomb singularity, one remains with the result for the matching coefficients [@Chetyrkin:1997mb]. At two loops this simple trick does not work any more and the calculation has to be performed from scratch as has been done in this letter. The contributions from the singlet diagrams correspond to $$\begin{aligned} X^{(a)}_{\rm sing} &=& -\frac{23}{12} \zeta_2 + 4 \zeta_2 \ln 2 - 2 \ln 2 + \frac{2}{3} \ln^2 2 + i\pi \left( 1 - \frac{2}{3} \ln 2 \right) \,,\nonumber\\ X^{(s)}_{\rm sing} &=& \frac{2}{3} - \frac{29}{12} \zeta_2 + 4 \zeta_2 \ln 2 - \ln 2 + i\frac{\pi}{2} \,,\nonumber\\ X^{(p)}_{\rm sing} &=& \frac{5}{4} \zeta_2 + 3 \zeta_2 \ln 2 - \frac{21}{8} \zeta_3 + i\pi \frac{3}{4} \zeta_2 \,. \label{eq::res_sing}\end{aligned}$$ $X^{(s)}_{\rm sing}$ and $X^{(p)}_{\rm sing}$ receive only contributions from diagrams which are finite and contain only the heavy quark. The corresponding result holds both for top and bottom quarks. This is different in the case of $X^{(a)}_{\rm sing}$. Actually, the result in Eq. (\[eq::res\_sing\]) corresponds to the case where top quarks are considered in the final state. Note that $X^{(a)}_{\rm sing}$ receives contributions from diagrams with top and bottom quarks in the closed triangle loop (cf. Fig. \[fig::diags\](e)). 
Taken separately they are divergent; the sum, however, is finite. If one considers bottom quarks in the final state one still has to consider top and bottom quarks in the closed triangle loop. Again, only the sum of all diagrams is finite, with the result $$X^{(a)}_{\rm sing} = \frac{55}{24} + \frac{19}{12} \zeta_2 - 4 \zeta_2 \ln 2 - \frac{3}{4} \ln \frac{m_b^2}{m_t^2} + {\cal O}\left(\frac{m_b^2}{m_t^2}\right) \,. \label{eq::res_sing2}$$ The diagram in Fig. \[fig::diags\](e) for axial-vector coupling and with bottom quarks in the final state was also considered in Refs. [@Kniehl:1989bb; @Kniehl:1989qu], for arbitrary values of $s$ and $m_t$, but for $m_b=0$, so that no direct comparison with Eq. (\[eq::res\_sing2\]) is possible. As mentioned above, it is possible to extract the result for $c_p$ from the zero component of the axial-vector current. This is quite evident in the non-singlet case. However, for the singlet contribution this check is highly non-trivial, since in this approach $c_p$ is obtained from diagrams with both top and bottom quarks in the closed triangle, whereas in the direct calculation only one type of quark appears. The singlet results of Eqs. (\[eq::res\_sing\]) and (\[eq::res\_sing2\]) and the fermionic contributions of Eq. (\[eq::c\_asp\]) are in agreement with Refs. [@Bernreuther:2004th; @Bernreuther:2005rw; @Bernreuther:2005gw], where the off-shell contributions have been considered. Since they do not develop an infrared singularity, the limit $s\to 4m^2$ can be performed. This is different in the case of the non-fermionic contributions, where, due to the infrared divergence, the off-shell results [@Bernreuther:2004ih; @Bernreuther:2004th; @Bernreuther:2005gw] cannot be used to obtain the matching coefficients. For completeness we also provide the result for the anomalous dimensions corresponding to Eq.
(\[eq::c\_asp\]) which read $$\begin{aligned} \gamma_a &=& -\left(\frac{\alpha_s}{\pi}\right)^2\left( \frac{5}{2} C_F^2 + C_F C_A \right) \zeta_2 \,,\nonumber\\ \gamma_s &=& -\left(\frac{\alpha_s}{\pi}\right)^2\left( 4 C_F^2 + C_F C_A \right) \zeta_2 \,,\nonumber\\ \gamma_p &=& -\left(\frac{\alpha_s}{\pi}\right)^2\left( 6 C_F^2 + 3 C_F C_A \right) \zeta_2 \,. \label{eq::gamma_asp}\end{aligned}$$ The result for $\gamma_p$ agrees with the one extracted from Ref. [@Hoang:2005dk]. We want to mention that the coefficient $c_p$ has been considered in Ref. [@Onishchenko:2003ui] in the context of the $B_c$ meson. The latter consists of two heavy quarks, however with different masses. This makes the calculation significantly more difficult, since two mass scales instead of one appear in the integrals. In Ref. [@Onishchenko:2003ui] the reduction to master integrals has been performed exactly, whereas the master integrals themselves have been evaluated in the limit $m_c\ll m_b$, so that a direct comparison with the present analysis is not possible. To summarize, in this paper we computed the two-loop matching coefficients between QCD and NRQCD for the axial-vector, scalar and pseudo-scalar currents. Furthermore, we performed an independent check of the matching coefficient in the vector case. The latter contributes to the second-order result for the threshold production of top-quark pairs. The result for the axial-vector current only contributes to the fourth-order analysis, which is currently still out of reach. [**Acknowledgments.**]{}\ We would like to thank K.G. Chetyrkin, A.A. Penin, and V.A. Smirnov for useful discussions. J.H.P. would like to thank S. Bekavac for discussions about Mellin-Barnes integrals. This work was supported by the “Impuls- und Vernetzungsfonds” of the Helmholtz Association, contract number VH-NG-008 and the SFB/TR 9. The Feynman diagrams were drawn with [JaxoDraw]{} [@Binosi:2003yf]. [99]{} N. Brambilla [*et al.*]{}, arXiv:hep-ph/0412158. W. E. Caswell and G. P. Lepage, Phys. Lett.
B [**167**]{} (1986) 437. G. T. Bodwin, E. Braaten and G. P. Lepage, Phys. Rev. D [**51**]{} (1995) 1125 \[Erratum-ibid. D [**55**]{} (1997) 5853\] \[arXiv:hep-ph/9407339\]. N. Gray, D. J. Broadhurst, W. Grafe and K. Schilcher, Z. Phys. C [**48**]{} (1990) 673. M. Beneke and V. A. Smirnov, Nucl. Phys. B [**522**]{} (1998) 321 \[arXiv:hep-ph/9711391\]. V. A. Smirnov, “Applied asymptotic expansions in momenta and masses,” Springer (2002). D. J. Broadhurst, N. Gray and K. Schilcher, Z. Phys. C [**52**]{} (1991) 111. V. A. Smirnov and M. Steinhauser, Nucl. Phys. B [**672**]{} (2003) 199 \[arXiv:hep-ph/0307088\]. C. Anastasiou and A. Lazopoulos, JHEP [**0407**]{} (2004) 046 \[arXiv:hep-ph/0404258\]. S. A. Larin, Phys. Lett. B [**303**]{} (1993) 113 \[arXiv:hep-ph/9302240\]. O. V. Tarasov, Phys. Rev. D [**54**]{} (1996) 6479 \[arXiv:hep-th/9606018\]. A. Czarnecki and K. Melnikov, Phys. Rev. Lett. [**80**]{} (1998) 2531 \[arXiv:hep-ph/9712222\]. M. Beneke, A. Signer and V. A. Smirnov, Phys. Rev. Lett. [**80**]{} (1998) 2535 \[arXiv:hep-ph/9712302\]. G. K[ä]{}llén and A. Sabry, K. Dan. Vidensk. Selsk. Mat.-Fys. Medd. [**29**]{}, N17 (1955) 1. E. Braaten and S. Fleming, Phys. Rev. D [**52**]{} (1995) 181 \[arXiv:hep-ph/9501296\]. K. G. Chetyrkin, J. H. Kühn and M. Steinhauser, Nucl. Phys. B [**505**]{} (1997) 40 \[arXiv:hep-ph/9705254\]. B. A. Kniehl and J. H. Kühn, Phys. Lett. B [**224**]{} (1989) 229. B. A. Kniehl and J. H. Kühn, Nucl. Phys. B [**329**]{} (1990) 547. W. Bernreuther, R. Bonciani, T. Gehrmann, R. Heinesch, T. Leineweber and E. Remiddi, Nucl. Phys. B [**723**]{} (2005) 91 \[arXiv:hep-ph/0504190\]. W. Bernreuther, R. Bonciani, T. Gehrmann, R. Heinesch, T. Leineweber, P. Mastrolia and E. Remiddi, Nucl. Phys. B [**712**]{} (2005) 229 \[arXiv:hep-ph/0412259\]. W. Bernreuther, R. Bonciani, T. Gehrmann, R. Heinesch, P. Mastrolia and E. Remiddi, Phys. Rev. D [**72**]{} (2005) 096002 \[arXiv:hep-ph/0508254\]. W. Bernreuther, R. Bonciani, T. Gehrmann, R. Heinesch, T.
Leineweber, P. Mastrolia and E. Remiddi, Nucl. Phys. B [**706**]{} (2005) 245 \[arXiv:hep-ph/0406046\]. A. H. Hoang and P. Ruiz-Femenia, Phys. Rev. D [**73**]{} (2006) 014015 \[arXiv:hep-ph/0511102\]. A. I. Onishchenko and O. L. Veretin, arXiv:hep-ph/0302132. D. Binosi and L. Theussl, Comput. Phys. Commun.  [**161**]{} (2004) 76 \[arXiv:hep-ph/0309015\].
--- abstract: 'We discuss under what conditions the duality between electric and magnetic fields is a valid symmetry of macroscopic quantum electrodynamics. It is shown that Maxwell’s equations in the absence of free charges satisfy duality invariance on an operator level, whereas this is not true for Lorentz forces and atom–field couplings in general. We prove that derived quantities like Casimir forces, local-field corrected decay rates as well as van-der-Waals potentials are invariant with respect to a global exchange of electric and magnetic quantities. This exact symmetry can be used to deduce the physics of new configurations on the basis of already established ones.' author: - Stefan Yoshi Buhmann - Stefan Scheel title: Macroscopic quantum electrodynamics and duality --- In the past, studies of quantum electrodynamic (QED) phenomena have often been restricted to purely electric systems, because effects associated with magnetic properties are considerably smaller for materials occurring in nature. Two developments have recently triggered an increased interest in such magnetic effects: The first was the suggestion [@0477] and subsequent fabrication [@0479] of artificial metamaterials with controllable electric permittivity $\varepsilon$ and magnetic permeability $\mu$, where left-handed materials (LHMs) with negative real parts of $\varepsilon$ and $\mu$ are of particular interest. As had already been pointed out in 1968 [@0476], the basis vectors of an electromagnetic wave propagating inside such a medium form a left-handed triad, implying negative refraction. Motivated by the progress in metamaterial fabrication, researchers have intensively studied their potential, leading to proposals of a perfect lens with sub-wavelength resolution [@0478] as well as cloaking devices [@0835] and predictions of an unusual behaviour of the decay of one or two atoms in the presence of LHMs [@0002; @0737].
Another, closely related motivation for considering magnetic systems was the fact that dispersion forces [@0696] have gained an increasing influence on micro-mechanical devices, where they often lead to undesired effects such as stiction [@0578]. The question naturally arose whether LHMs could be exploited to modify or even change the sign of dispersion forces. Forces on excited systems might indeed be influenced by LHMs [@0828]. Ground-state forces are not as easily manipulated because they depend on the medium response at all frequencies, whereas the Kramers-Kronig relations imply that LHMs can only be realized in limited frequency windows. However, the controllable magnetic properties available in metamaterials can still have a large impact on dispersion forces: The dispersion forces between electric and magnetic atoms [@0089] or bodies [@0122] differ both in sign and power laws from those between only electric ones. In the search for repulsive dispersion forces, interactions of electric/magnetic atoms [@0121], plates [@0123; @0134] and atoms with plates [@0017; @0012] have been studied; more complex problems such as atom-atom interactions in the presence of a magneto-electric bulk medium [@0829], plate [@0009] or sphere [@0830] have also been addressed. Reductions or even sign changes of the forces have been predicted for such scenarios and have been attributed primarily to large permeabilities rather than left-handed properties. Metamaterials have thus considerably increased the parameter space at one’s disposal for manipulating QED phenomena. An efficient use of this new freedom requires the formulation of general statements about what might be achieved in principle. Working in this direction, upper bounds for the strength of attractive and repulsive Casimir forces have been formulated [@0134] and it has been proven that the force between two mirror-symmetric purely electric bodies is always attractive [@0503].
In the present Letter, we establish another such general principle on the basis of the duality of Maxwell’s equations under an exchange of electric and magnetic fields [@0844; @0845], also known as electric/magnetic reciprocity within a generalised framework of classical electrodynamics [@0850]. In particle physics, duality has been discussed as a symmetry of the $\mathcal{N}=4$ supersymmetric Yang-Mills theory [@0860]. We will prove its validity in the context of macroscopic QED [@0002; @0696] and show that under certain conditions, quantities such as decay rates and dispersion forces are invariant with respect to a global exchange of electric and magnetic properties. The parameter space to be considered in the search for optimal geometries and materials will thus be effectively halved. We begin by verifying duality for macroscopic QED in the absence of free charges and currents. We group the fields into dual pairs $(\sqrt{\varepsilon_0}\hat{{\bm{E}}},\sqrt{\mu_0}\hat{{\bm{H}}})$, $(\sqrt{\mu_0}\hat{{\bm{D}}},\sqrt{\varepsilon_0}\hat{{\bm{B}}})$ and $(\sqrt{\mu_0}\hat{{\bm{P}}}, \sqrt{\varepsilon_0}\mu_0\hat{{\bm{M}}})$, so that Maxwell’s equations read $$\begin{gathered} \label{d1} {\bm{\nabla}}{\!\cdot\!}\begin{pmatrix}\sqrt{\mu_0}\hat{{\bm{D}}}\\ \sqrt{\varepsilon_0}\hat{{\bm{B}}}\end{pmatrix} =\begin{pmatrix}0\\ 0\end{pmatrix},\\ \label{d2} {\bm{\nabla}}{\!\times\!}\begin{pmatrix}\sqrt{\varepsilon_0}\hat{{\bm{E}}}\\ \sqrt{\mu_0}\hat{{\bm{H}}}\end{pmatrix} +\frac{\partial}{\partial t} \begin{pmatrix}0&1\\-1&0\end{pmatrix} \begin{pmatrix}\sqrt{\mu_0}\hat{{\bm{D}}}\\ \sqrt{\varepsilon_0}\hat{{\bm{B}}}\end{pmatrix} =\begin{pmatrix}{\mbox{\textbf{\textit{0}}}}\\ {\mbox{\textbf{\textit{0}}}}\end{pmatrix}\end{gathered}$$ with $$\label{d3} \begin{pmatrix}\sqrt{\mu_0}\hat{{\bm{D}}}\\ \sqrt{\varepsilon_0}\hat{{\bm{B}}}\end{pmatrix} =\frac{1}{c} \begin{pmatrix}\sqrt{\varepsilon_0}\hat{{\bm{E}}}\\ \sqrt{\mu_0}\hat{{\bm{H}}}\end{pmatrix} 
+\begin{pmatrix}\sqrt{\mu_0}\hat{{\bm{P}}}\\ \sqrt{\varepsilon_0}\mu_0\hat{{\bm{M}}}\end{pmatrix}.$$ Maxwell’s equations are invariant under the general $\operatorname{SO}(2)$ duality transformation $$\label{d4} \begin{pmatrix}{\bm{x}}\\ {\bm{y}}\end{pmatrix}^\star =\mathcal{D}(\theta)\begin{pmatrix}{\bm{x}}\\ {\bm{y}}\end{pmatrix}, \quad\mathcal{D}(\theta) =\begin{pmatrix}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{pmatrix},$$ which may equivalently be expressed as a $\operatorname{U}(1)$ transformation when introducing complex Riemann–Silberstein fields [@0844]. The invariance of Maxwell’s equations under this rotation can be verified by multiplying Eqs. (\[d1\])–(\[d3\]) by $\mathcal{D}(\theta)$ and using the fact that $\mathcal{D}(\theta)$ commutes with the symplectic matrix in Eq. (\[d2\]). Note that the grouping into dual pairs is solely due to the mathematical structure of the equations and is in contrast to the fact that $\hat{{\bm{E}}}$, $\hat{{\bm{B}}}$ and $\hat{{\bm{D}}}$, $\hat{{\bm{H}}}$ are the pairs of physically corresponding quantities. For it to be a valid symmetry of the electromagnetic field, duality must also be consistent with the constitutive relations. 
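The two algebraic facts used in this verification can be illustrated by a small numerical sketch (illustration only, not part of the original analysis): $\mathcal{D}(\theta)$ commutes with the symplectic matrix appearing in the curl equation, and the quarter rotation $\mathcal{D}(\pi/2)$ has order four:

```python
# The SO(2) duality rotation D(theta) commutes with the symplectic matrix J,
# which is what makes Maxwell's equations keep their form under duality.
import math

def D(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [-s, c]]

J = [[0.0, 1.0], [-1.0, 0.0]]
I = [[1.0, 0.0], [0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

# D(theta) J = J D(theta) for arbitrary angles
for theta in (0.3, 1.0, math.pi / 2, 2.5):
    assert close(matmul(D(theta), J), matmul(J, D(theta)))

# the quarter rotation generates a cyclic group of order four: D(pi/2)^4 = 1
M = I
for _ in range(4):
    M = matmul(M, D(math.pi / 2))
assert close(M, I)
```

The second assertion anticipates the discrete $\theta=n\pi/2$ subgroup singled out below.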
In the presence of linear, local, isotropic, dispersing and absorbing media, the constitutive relations in frequency space can be given as $$\begin{gathered} \label{d5} \begin{pmatrix}\sqrt{\mu_0}\hat{\underline{{\bm{D}}}}\\ \sqrt{\varepsilon_0}\hat{\underline{{\bm{B}}}}\end{pmatrix} =\frac{1}{c}\begin{pmatrix}\varepsilon&0\\ 0&\mu\end{pmatrix} \begin{pmatrix}\sqrt{\varepsilon_0}\hat{\underline{{\bm{E}}}}\\ \sqrt{\mu_0}\hat{\underline{{\bm{H}}}}\end{pmatrix}\\ +\begin{pmatrix}1&0\\0&\mu\end{pmatrix} \begin{pmatrix}\sqrt{\mu_0}\hat{\underline{{\bm{P}}}}_\mathrm{N}\\ \sqrt{\varepsilon_0}\mu_0\hat{\underline{{\bm{M}}}}_\mathrm{N} \end{pmatrix},\end{gathered}$$ where $\varepsilon$ $\!=$ $\!\varepsilon({\bm{r}},\omega)$ and $\mu$ $\!=$ $\!\mu({\bm{r}},\omega)$ denote the relative electric permittivity and magnetic permeability of the media and $\hat{{\bm{P}}}_\mathrm{N}$ and $\hat{{\bm{M}}}_\mathrm{N}$ are the noise polarisation and magnetisation which necessarily arise in the presence of absorption. Invariance of the constitutive relations (\[d5\]) under the duality transformation requires that $$\begin{gathered} \label{d7} \begin{pmatrix}\varepsilon^\star&0\\0&\mu^\star\end{pmatrix} =\mathcal{D}(\theta) \begin{pmatrix}\varepsilon&0\\0&\mu\end{pmatrix} \mathcal{D}^{-1}(\theta)\\ =\begin{pmatrix}\varepsilon\cos^2\theta+\mu\sin^2\theta &(\mu-\varepsilon)\sin\theta\cos\theta\\ (\mu-\varepsilon)\sin\theta\cos\theta &\varepsilon\sin^2\theta+\mu\cos^2\theta\end{pmatrix}.\end{gathered}$$ This condition is trivially fulfilled for media with $\varepsilon=\mu$ (including both free space and the perfect lens [@0478]), for which duality is a continuous symmetry. For media with a non-trivial impedance, the condition (\[d7\]) only holds for $\theta$ $\!=$ $\!n\pi/2$ with $n\in\mathbb{Z}$. The presence of such media thus reduces the continuous symmetry to a discrete symmetry with four distinct members, whose group structure is that of $\mathbb{Z}_4$. For $\theta$ $\!=$ $\!n\pi/2$, Eqs.
(\[d5\]) and (\[d7\]) imply the transformations $$\begin{gathered} \label{d10} \begin{pmatrix}\varepsilon\\ \mu\end{pmatrix}^\star =\begin{pmatrix}\cos^2\theta&\sin^2\theta\\ \sin^2\theta&\cos^2\theta\end{pmatrix} \begin{pmatrix}\varepsilon\\ \mu\end{pmatrix},\\ \label{d11} \begin{pmatrix}\sqrt{\mu_0}\hat{\underline{{\bm{P}}}}_\mathrm{N}\\ \sqrt{\varepsilon_0}\mu_0\hat{\underline{{\bm{M}}}}_\mathrm{N} \end{pmatrix}^\star =\begin{pmatrix}\cos\theta&\mu\sin\theta\\ -\varepsilon^{-1}\sin\theta& \cos\theta\end{pmatrix} \begin{pmatrix}\sqrt{\mu_0}\hat{\underline{{\bm{P}}}}_\mathrm{N}\\ \sqrt{\varepsilon_0}\mu_0\hat{\underline{{\bm{M}}}}_\mathrm{N} \end{pmatrix}.\end{gathered}$$ Maxwell’s equations (\[d1\]) and (\[d2\]), together with the constitutive relations (\[d5\]) for the electromagnetic field in the absence of free charges and currents, are thus invariant under the discrete duality transformations $\theta$ $\!=$ $\!n\pi/2$, $n\in\mathbb{Z}$ given by Eqs. (\[d4\]), (\[d10\]) and (\[d11\]). This is not only true for the equations of motion, but clearly must also hold on a Hamiltonian level. To see this explicitly, recall that the Hamiltonian of the medium-assisted field is given by [@0002] $$\hat{H}_\mathrm{F} =\sum_{\lambda=e,m}\int{\mathrm{d}}^3r \int_0^\infty{\mathrm{d}}\omega\,\hbar\omega\, \hat{{\bm{f}}}^\dagger_\lambda({\bm{r}},\omega) {\!\cdot\!}\hat{{\bm{f}}}_\lambda({\bm{r}},\omega),$$ where the fundamental bosonic fields $\hat{{\bm{f}}}_\lambda$ are related to the noise terms via $$\label{d13} \begin{pmatrix}\sqrt{\mu_0}\hat{\underline{{\bm{P}}}}_\mathrm{N}\\ \sqrt{\varepsilon_0}\mu_0\hat{\underline{{\bm{M}}}}_\mathrm{N} \end{pmatrix} =\sqrt{\frac{\hbar}{\pi c^2}} \begin{pmatrix}{\mathrm{i}}\sqrt{\operatorname{Im}\varepsilon}&0\\ 0&\sqrt{\operatorname{Im}\mu}/|\mu|\end{pmatrix} \begin{pmatrix}\hat{{\bm{f}}}_e\\ \hat{{\bm{f}}}_m\end{pmatrix}.$$ Combining Eqs.
(\[d10\]), (\[d11\]) and (\[d13\]), one finds that the fundamental fields transform as $$\label{d14} \begin{pmatrix}\hat{{\bm{f}}}_e\\ \hat{{\bm{f}}}_m\end{pmatrix}^\star =\begin{pmatrix}\cos\theta &-{\mathrm{i}}(\mu/|\mu|)\sin\theta\\ -{\mathrm{i}}(|\varepsilon|/\varepsilon)\sin\theta &\cos\theta\end{pmatrix} \begin{pmatrix}\hat{{\bm{f}}}_e\\ \hat{{\bm{f}}}_m\end{pmatrix}$$ for $\theta$ $\!=$ $\!n\pi/2$, so that $\hat{H}_\mathrm{F}^\star$ $\!=$ $\!\hat{H}_\mathrm{F}$. It is sufficient to focus on the single duality transformation $\theta$ $\!=$ $\!\pi/2$ as summarised in Tab. \[Tab1\], which is a generator of the whole group.

  ----------------------------------------------------------------------------------------------------------------------------------------
  $\hat{{\bm{E}}}^\star=c\mu_0\hat{{\bm{H}}}$,                         $\hat{{\bm{H}}}^\star=-\hat{{\bm{E}}}/(c\mu_0)$
  $\hat{{\bm{D}}}^\star=c\varepsilon_0\hat{{\bm{B}}}$,                 $\hat{{\bm{B}}}^\star=-\hat{{\bm{D}}}/(c\varepsilon_0)$
  $\hat{{\bm{P}}}^\star=\hat{{\bm{M}}}/c$,                             $\hat{{\bm{M}}}^\star=-c\hat{{\bm{P}}}$
  $\hat{{\bm{P}}}_A^\star=\hat{{\bm{M}}}_A/c$,                         $\hat{{\bm{M}}}_A^\star=-c\hat{{\bm{P}}}_A$
  $\hat{{\bm{d}}}^\star=\hat{{\bm{m}}}/c$,                             $\hat{{\bm{m}}}^\star=-c\hat{{\bm{d}}}$
  $\hat{{\bm{P}}}_\mathrm{N}^\star=\mu\hat{{\bm{M}}}_\mathrm{N}/c$,    $\hat{{\bm{M}}}_\mathrm{N}^\star=-c\hat{{\bm{P}}}_\mathrm{N}/\varepsilon$
  $\hat{{\bm{f}}}_e^\star=-{\mathrm{i}}(\mu/|\mu|)\hat{{\bm{f}}}_m$,   $\hat{{\bm{f}}}_m^\star=-{\mathrm{i}}(|\varepsilon|/\varepsilon)\hat{{\bm{f}}}_e$
  $\varepsilon^\star=\mu$,                                             $\mu^\star=\varepsilon$
  $\alpha^\star=\beta/c^2$,                                            $\beta^\star=c^2\alpha$
  ----------------------------------------------------------------------------------------------------------------------------------------

  : \[Tab1\] Effect of the duality transformation with $\theta$ $\!=$ $\!\pi/2$.

Let us next turn our attention to Lorentz forces and the coupling of the medium-assisted field to charged particles: We recall that the operator Lorentz force on a neutral body occupying a volume $V$ can be given as [@0696] $$\begin{gathered} \label{d15x} \hat{{\bm{F}}} =\int_{\partial V}{\mathrm{d}}{\bm{A}}{\!\cdot\!}\biggl\{ \varepsilon_0\hat{{\bm{E}}}({\bm{r}}) {}\hat{{\bm{E}}}({\bm{r}}) +\frac{1}{\mu_0}\hat{{\bm{B}}}({\bm{r}}) {}\hat{{\bm{B}}}({\bm{r}})\\ -\frac{1}{2}\biggl[\varepsilon_0 \hat{{\bm{E}}}^2({\bm{r}}) +\frac{1}{\mu_0}\hat{{\bm{B}}}^2({\bm{r}})\biggr]{\mbox{\textbf{{\textsf{I}}}}} \biggr\}\\ -\varepsilon_0\,\frac{{\mathrm{d}}}{{\mathrm{d}}t} \int_V {\mathrm{d}}^3r\,\hat{{\bm{E}}}({\bm{r}}) {\!\times\!}\hat{{\bm{B}}}({\bm{r}})\end{gathered}$$ (${\mbox{\textbf{{\textsf{I}}}}}$: unit tensor), while that on a neutral atom with polarisation $\hat{{\bm{P}}}_A$ and magnetisation $\hat{{\bm{M}}}_A$ reads [@0696; @0008] $$\begin{gathered} \label{d16x} \hat{{\bm{F}}} ={\bm{\nabla}}_{\!\!{A}}\int{\mathrm{d}}^3r\,\Bigl[ \hat{{\bm{P}}}_A({\bm{r}}){\!\cdot\!}\hat{{\bm{E}}}({\bm{r}}) +\hat{{\bm{M}}}_A({\bm{r}}){\!\cdot\!}\hat{{\bm{B}}}({\bm{r}})\\ +\hat{{\bm{P}}}_A({\bm{r}}){\!\times\!}\dot{\hat{{\bm{r}}}}_{A} {\!\cdot\!}\hat{{\bm{B}}}({\bm{r}})\Bigr] +\frac{{\mathrm{d}}}{{\mathrm{d}}t}\int{\mathrm{d}}^3r\, \hat{{\bm{P}}}_A({\bm{r}}){\!\times\!}\hat{{\bm{B}}}({\bm{r}}).\end{gathered}$$ The coupling of one or more atoms to the medium-assisted electromagnetic field can in the multipolar coupling scheme be implemented
via [@0696; @0009] $$\begin{gathered} \label{d17x} \hat{H}_{A\mathrm{F}} =-\int{\mathrm{d}}^3r\,\Bigl[ \hat{{\bm{P}}}_A({\bm{r}}){\!\cdot\!}\hat{{\bm{E}}}({\bm{r}}) +\hat{{\bm{M}}}_A({\bm{r}}){\!\cdot\!}\hat{{\bm{B}}}({\bm{r}})\\ +m_A^{-1}\hat{{\bm{P}}}_A({\bm{r}}){\!\times\!}\hat{{\bm{p}}}_{A} {\!\cdot\!}\hat{{\bm{B}}}({\bm{r}})\Bigr],\end{gathered}$$ when neglecting diamagnetic interactions. Using the transformation behaviour given in Tab. \[Tab1\], it is immediately clear that neither the Lorentz forces on bodies or atoms nor the atom-field interactions are duality invariant on an operator level. Even for atoms and bodies at rest with time-independent fields, duality invariance is prohibited by the unavoidable noise polarisation and magnetisation in the constitutive relations (\[d5\]). That said, we will show that effective quantities derived from the above operator Lorentz forces and atom–field couplings do obey duality invariance when considering atoms and bodies at rest and not embedded in a medium. 
In particular, we will consider the Casimir force [@0198] $$\begin{gathered} \label{d15y} {\bm{F}}=-\frac{\hbar}{\pi}\int_{0}^{\infty} {\mathrm{d}}\xi \int_{\partial V}{\mathrm{d}}{\bm{A}}{\!\cdot\!}\Bigl\{ {\mbox{\textbf{{\textsf{G}}}}}_{ee}^{(1)}({\bm{r}},{\bm{r}},{\mathrm{i}}\xi) +{\mbox{\textbf{{\textsf{G}}}}}_{mm}^{(1)}({\bm{r}},{\bm{r}},{\mathrm{i}}\xi)\\ -{\textstyle\frac{1}{2}}{\operatorname{Tr}}\Bigl[ {\mbox{\textbf{{\textsf{G}}}}}_{ee}^{(1)}({\bm{r}},{\bm{r}},{\mathrm{i}}\xi) +{\mbox{\textbf{{\textsf{G}}}}}_{mm}^{(1)}({\bm{r}},{\bm{r}},{\mathrm{i}}\xi) \Bigr]{\mbox{\textbf{{\textsf{I}}}}}\Bigr\},\end{gathered}$$ the single- and two-atom vdW potentials [@0696; @0012; @0831] $$\begin{gathered} \label{d17y} U({\bm{r}}_{\!A}) =\frac{\hbar}{2\pi\varepsilon_0} \int_0^\infty{\mathrm{d}}\xi\,\Bigl[\alpha({\mathrm{i}}\xi) {\operatorname{Tr}}{\mbox{\textbf{{\textsf{G}}}}}_{ee}^{(1)}({\bm{r}}_{\!A},{\bm{r}}_{\!A},{\mathrm{i}}\xi)\\ +\frac{\beta({\mathrm{i}}\xi)}{c^2}\,{\operatorname{Tr}}{\mbox{\textbf{{\textsf{G}}}}}_{mm}^{(1)}({\bm{r}}_{\!A},{\bm{r}}_{\!A},{\mathrm{i}}\xi)\Bigr]\end{gathered}$$ and $$\begin{aligned} \label{d17z} &U({\bm{r}}_{\!A},{\bm{r}}_{\!B}) =-\frac{\hbar}{2\pi\varepsilon_0^2}\int_0^\infty{\mathrm{d}}\xi \nonumber\\ &\times{\operatorname{Tr}}\Bigl\{\alpha_A({\mathrm{i}}\xi)\alpha_B({\mathrm{i}}\xi) {\mbox{\textbf{{\textsf{G}}}}}_{ee}({\bm{r}}_{\!A},{\bm{r}}_{\!B},{\mathrm{i}}\xi) {\!\cdot\!}{\mbox{\textbf{{\textsf{G}}}}}_{ee}({\bm{r}}_{\!B},{\bm{r}}_{\!A},{\mathrm{i}}\xi) \nonumber\\ &+\alpha_A({\mathrm{i}}\xi)\,\frac{\beta_B({\mathrm{i}}\xi)}{c^2}\, {\mbox{\textbf{{\textsf{G}}}}}_{em}({\bm{r}}_{\!A},{\bm{r}}_{\!B},{\mathrm{i}}\xi) {\!\cdot\!}{\mbox{\textbf{{\textsf{G}}}}}_{me}({\bm{r}}_{\!B},{\bm{r}}_{\!A},{\mathrm{i}}\xi) \nonumber\\ &+\frac{\beta_A({\mathrm{i}}\xi)}{c^2}\,\alpha_B({\mathrm{i}}\xi) {\mbox{\textbf{{\textsf{G}}}}}_{me}({\bm{r}}_{\!A},{\bm{r}}_{\!B},{\mathrm{i}}\xi) 
{\!\cdot\!}{\mbox{\textbf{{\textsf{G}}}}}_{em}({\bm{r}}_{\!B},{\bm{r}}_{\!A},{\mathrm{i}}\xi) \nonumber\\ &+\frac{\beta_A({\mathrm{i}}\xi)}{c^2}\,\frac{\beta_B({\mathrm{i}}\xi)}{c^2}\, {\mbox{\textbf{{\textsf{G}}}}}_{mm}({\bm{r}}_{\!A},{\bm{r}}_{\!B},{\mathrm{i}}\xi) {\!\cdot\!}{\mbox{\textbf{{\textsf{G}}}}}_{mm}({\bm{r}}_{\!B},{\bm{r}}_{\!A},{\mathrm{i}}\xi)\Bigr\}\end{aligned}$$ ($\alpha$, $\beta$: atomic polarisability, magnetisability) and the atomic decay rate [@0002; @0832] $$\begin{gathered} \label{d17q} \Gamma_n({\bm{r}}_{A})=\frac{2}{\hbar\varepsilon_0}\,\sum_{k<n} \biggl[{\bm{d}}_{kn}{\!\cdot\!}\operatorname{Im}\,{\mbox{\textbf{{\textsf{G}}}}}_{ee}({\bm{r}}_{\!A},{\bm{r}}_{\!A}, \omega_{nk}){\!\cdot\!}{\bm{d}}_{nk}\\ +\frac{{\bm{m}}_{kn}}{c}\,{\!\cdot\!}\operatorname{Im}\,{\mbox{\textbf{{\textsf{G}}}}}_{mm}({\bm{r}}_{\!A},{\bm{r}}_{\!A}, \omega_{nk}){\!\cdot\!}\frac{{\bm{m}}_{nk}}{c}\biggr]\end{gathered}$$ ($|n\rangle$: initial atomic state, $\omega_{nk}$: atomic transition frequencies; ${\bm{d}}_{nk}$, ${\bm{m}}_{nk}$: electric, magnetic dipole matrix elements). Here, ${\mbox{\textbf{{\textsf{G}}}}}^{(1)}$ is the scattering part of the classical Green tensor, where a left index $e$, $m$ indicates that ${\mbox{\textbf{{\textsf{G}}}}}$ is multiplied by ${\mathrm{i}}\omega/c$ $\!=$ $\!-\xi/c$ or ${\bm{\nabla}}{\!\times\!}$ from the left and a right index $e$, $m$ denotes multiplication with ${\mathrm{i}}\omega/c$ $\!=$ $\!-\xi/c$ or ${\!\times\!}\overleftarrow{{\bm{\nabla}}}'$ from the right. The Casimir force and the single-atom vdW force are the ground-state averages of the above operator Lorentz forces, while the atomic potentials and rates follow from the atom–field coupling. 
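As a back-of-envelope illustration of the decay-rate formula (a hedged sketch, not from the Letter: it assumes the standard vacuum relation $\operatorname{Im}{\mbox{\textbf{{\textsf{G}}}}}({\bm{r}},{\bm{r}},\omega)=\omega/(6\pi c)\,{\mbox{\textbf{{\textsf{I}}}}}$ and takes one factor $\omega/c$ per $e$ index, with sign conventions suppressed), the electric-dipole part of Eq. (\[d17q\]) reduces in free space to the familiar rate $\Gamma=\omega^3 d^2/(3\pi\hbar\varepsilon_0 c^3)$:

```python
# Free-space reduction of the electric part of the decay-rate formula.
import math

hbar, eps0, c = 1.0, 1.0, 2.0   # arbitrary positive stand-in constants
omega, d = 1.5, 0.4             # transition frequency and dipole matrix element

ImG = omega / (6.0 * math.pi * c)      # scalar part of Im G(r, r, omega) in vacuum
ImG_ee = (omega / c) ** 2 * ImG        # two 'e' indices -> one factor omega/c each
Gamma = 2.0 / (hbar * eps0) * d * ImG_ee * d

assert abs(Gamma - omega**3 * d**2 / (3.0 * math.pi * hbar * eps0 * c**3)) < 1e-15
```

The magnetic term of Eq. (\[d17q\]) contributes analogously through $\operatorname{Im}{\mbox{\textbf{{\textsf{G}}}}}_{mm}$ with ${\bm{m}}_{nk}/c$ in place of ${\bm{d}}_{nk}$.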
To prove the duality invariance of the above quantities (\[d15y\])–(\[d17q\]), we note that the Casimir force depends solely on the classical Green tensor $$\label{d18x} \biggl[{\bm{\nabla}}{\!\times\!}\,\frac{1}{\mu({\bm{r}},\omega)}\,{\bm{\nabla}}{\!\times\!}\,-\,\frac{\omega^2}{c^2}\,\varepsilon({\bm{r}},\omega)\biggr] {\mbox{\textbf{{\textsf{G}}}}}({\bm{r}},{\bm{r}}',\omega) =\bm{\delta}({\bm{r}}-{\bm{r}}'),$$ while vdW forces and decay rates also depend on $\alpha$, $\beta$, $\hat{{\bm{d}}}$ and $\hat{{\bm{m}}}$. While the transformation behaviour of the latter quantities under duality follows immediately from that of $\varepsilon$, $\mu$, $\hat{{\bm{P}}}_A$ and $\hat{{\bm{M}}}_A$ (see Tab. \[Tab1\]), the transformed Green tensor, which is the solution to Eq. (\[d18x\]) with $\varepsilon$ and $\mu$ exchanged, can be determined as follows: We first note that Maxwell’s equations (\[d1\]), (\[d2\]) together with the constitutive relations (\[d5\]) are uniquely solved by [@0002] $$\begin{aligned} \label{d15} &\hat{\underline{{\bm{E}}}}({\bm{r}},\omega) =-\varepsilon_0^{-1}\int{\mathrm{d}}^3r'\, {\mbox{\textbf{{\textsf{G}}}}}_{ee}({\bm{r}},{\bm{r}}',\omega) {\!\cdot\!}\hat{\underline{{\bm{P}}}}_\mathrm{N}({\bm{r}}',\omega) \nonumber\\ &\quad-c\mu_0\int{\mathrm{d}}^3r'\,{\mbox{\textbf{{\textsf{G}}}}}_{em}({\bm{r}},{\bm{r}}',\omega) {\!\cdot\!}\hat{\underline{{\bm{M}}}}_\mathrm{N}({\bm{r}}',\omega),\end{aligned}$$ $$\begin{aligned} \label{d16} &\hat{\underline{{\bm{B}}}}({\bm{r}},\omega) =-c\mu_0\int{\mathrm{d}}^3r'\, {\mbox{\textbf{{\textsf{G}}}}}_{me}({\bm{r}},{\bm{r}}',\omega) {\!\cdot\!}\hat{\underline{{\bm{P}}}}_\mathrm{N}({\bm{r}}',\omega) \nonumber\\ &\quad-\mu_0\int{\mathrm{d}}^3r'\, {\mbox{\textbf{{\textsf{G}}}}}_{mm}({\bm{r}},{\bm{r}}',\omega) {\!\cdot\!}\hat{\underline{{\bm{M}}}}_\mathrm{N}({\bm{r}}',\omega),\end{aligned}$$ $$\begin{aligned} \label{d17} &\hat{\underline{{\bm{D}}}}({\bm{r}},\omega) =-\frac{\varepsilon({\bm{r}},\omega)}{c}\, 
\int{\mathrm{d}}^3r'\,{\mbox{\textbf{{\textsf{G}}}}}_{em}({\bm{r}},{\bm{r}}',\omega) {\!\cdot\!}\hat{\underline{{\bm{M}}}}_\mathrm{N}({\bm{r}}',\omega) \nonumber\\ &-\int\!{\mathrm{d}}^3r'\!\biggl[\varepsilon({\bm{r}},\omega) {\mbox{\textbf{{\textsf{G}}}}}_{ee}({\bm{r}},{\bm{r}}',\omega) -\bm{\delta}({\bm{r}}\!-\!{\bm{r}}')\biggr] {\!\cdot\!}\hat{\underline{{\bm{P}}}}_\mathrm{N}({\bm{r}}',\omega),\end{aligned}$$ $$\begin{aligned} \label{d18} &\hat{\underline{{\bm{H}}}}({\bm{r}},\omega) =-\frac{c}{\mu({\bm{r}},\omega)}\int{\mathrm{d}}^3r'\, {\mbox{\textbf{{\textsf{G}}}}}_{me}({\bm{r}},{\bm{r}}',\omega) {\!\cdot\!}\hat{\underline{{\bm{P}}}}_\mathrm{N}({\bm{r}}',\omega) \nonumber\\ &-\int{\mathrm{d}}^3r'\biggl[\frac{{\mbox{\textbf{{\textsf{G}}}}}_{mm}({\bm{r}},{\bm{r}}',\omega)} {\mu({\bm{r}},\omega)} +\bm{\delta}({\bm{r}}\!-\!{\bm{r}}')\biggr] {\!\cdot\!}\hat{\underline{{\bm{M}}}}_\mathrm{N}({\bm{r}}',\omega).\end{aligned}$$ The invariance of Maxwell’s equations implies that this solution remains valid after applying the duality transformation. Taking duality transforms of Eqs. (\[d15\]) and (\[d16\]), the unknown transformed Green tensor appears on the rhs of these equations, whereas the transformations of all other quantities occurring in the equations can be determined with the aid of Tab. \[Tab1\]. After using Eqs. 
(\[d15\])–(\[d18\]) to express the resulting fields on the lhs in terms of $\hat{\underline{{\bm{P}}}}_\mathrm{N}$ and $\hat{\underline{{\bm{M}}}}_\mathrm{N}$ and equating coefficients, one obtains the following transformation rules: $$\begin{aligned} {\mbox{\textbf{{\textsf{G}}}}}_{ee}^\star({\bm{r}},{\bm{r}}',\omega) =&\;\mu^{-1}({\bm{r}},\omega) {\mbox{\textbf{{\textsf{G}}}}}_{mm}({\bm{r}},{\bm{r}}',\omega) \mu^{-1}({\bm{r}}',\omega) \nonumber\\ \label{d19} &+\mu^{-1}({\bm{r}},\omega) \bm{\delta}({\bm{r}}\!-\!{\bm{r}}'),\\ \label{d20} {\mbox{\textbf{{\textsf{G}}}}}_{em}^\star({\bm{r}},{\bm{r}}',\omega) =&-\mu^{-1}({\bm{r}},\omega) {\mbox{\textbf{{\textsf{G}}}}}_{me}({\bm{r}},{\bm{r}}',\omega) \varepsilon({\bm{r}}',\omega),\\ \label{d21} {\mbox{\textbf{{\textsf{G}}}}}_{me}^\star({\bm{r}},{\bm{r}}',\omega) =&-\varepsilon({\bm{r}},\omega) {\mbox{\textbf{{\textsf{G}}}}}_{em}({\bm{r}},{\bm{r}}',\omega) \mu^{-1}({\bm{r}}',\omega),\\ {\mbox{\textbf{{\textsf{G}}}}}_{mm}^\star({\bm{r}},{\bm{r}}',\omega) =&\;\varepsilon({\bm{r}},\omega) {\mbox{\textbf{{\textsf{G}}}}}_{ee}({\bm{r}},{\bm{r}}',\omega) \varepsilon({\bm{r}}',\omega)\nonumber\\ &-\varepsilon({\bm{r}},\omega)\bm{\delta}({\bm{r}}\!-\!{\bm{r}}'). \label{d22}\end{aligned}$$ The duality invariance of dispersion forces and decay rates follows immediately. Using Eqs. (\[d19\]) and (\[d22\]) and noting that the $\delta$ function does not contribute to the scattering part of the Green tensor, it is seen that the Casimir force (\[d15y\]) on a body is unchanged when globally exchanging $\varepsilon$ and $\mu$, provided that the body is located in free space. The duality invariance of the vdW potentials (\[d17y\]) and (\[d17z\]) also follows from the transformation rules (\[d19\])–(\[d22\]). This invariance with respect to a simultaneous exchange $\varepsilon\leftrightarrow\mu$ and $\alpha\leftrightarrow\beta/c^2$ again only holds if $\varepsilon({\bm{r}}_{\!A/B})$ $\!=$ $\mu({\bm{r}}_{\!A/B})$ $\!=$ $\!1$. 
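A quick sanity check of the transformation rules (\[d19\])–(\[d22\]) is that, combined with $\varepsilon^\star=\mu$, $\mu^\star=\varepsilon$, they square to the identity: applying the exchange twice returns the original Green tensor. The following sketch (illustration only: a homogeneous medium, with tensors replaced by scalars and the $\delta$ function by a number) verifies this numerically:

```python
# Scalar stand-ins for the Green-tensor duality rules (d19)-(d22);
# G_ee etc. and d (for the delta function) are arbitrary test values.
eps, mu, d = 2.0 + 0.3j, 1.5 + 0.1j, 0.7
G_ee, G_em, G_me, G_mm = 1.1 + 0.2j, 0.4j, -0.4j, 0.9 + 0.1j

def dual(eps, mu, G_ee, G_em, G_me, G_mm):
    # rules (d19)-(d22) together with eps* = mu, mu* = eps
    return (mu, eps,
            G_mm / mu**2 + d / mu,
            -G_me * eps / mu,
            -G_em * eps / mu,
            G_ee * eps**2 - eps * d)

state = (eps, mu, G_ee, G_em, G_me, G_mm)
twice = dual(*dual(*state))

# the duality exchange is an involution on the Green tensor
assert all(abs(a - b) < 1e-12 for a, b in zip(state, twice))
```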
In contrast to the Casimir force, this does not mean that the atom has to be located in vacuum, but merely implies that for atoms embedded in media, local-field corrections must be included via the real-cavity model in order to ensure invariance [@0739]. Duality invariance can be used to obtain the full functional dependence of dispersion forces in given scenarios on the atomic and medium parameters from knowledge of the respective dual scenario. For instance, it has recently been shown that in the retarded limit the vdW potential of two polarisable atoms reads $U(r_{AB})=-1863\hbar c\alpha_A\alpha_B\varepsilon^2/[64\pi^3\varepsilon_0^2\sqrt{\varepsilon\mu}(2\varepsilon+1)^4r_{AB}^7]$ when including local-field corrections [@0739]. Making the replacements $\alpha\rightarrow\beta/c^2$, $\varepsilon\leftrightarrow\mu$, one can immediately infer $U(r_{AB})=-1863\hbar c\mu_0^2\beta_A\beta_B\mu^2/[64\pi^3\sqrt{\varepsilon\mu}(2\mu+1)^4r_{AB}^7]$ for magnetisable atoms. The utility of this principle becomes even more apparent for complex problems like the interaction of two atoms in the presence of a magneto-electric object [@0009; @0830]. Finally, using the fact that two purely electric, mirror-symmetric bodies always attract [@0503], we can immediately conclude that so do two purely magnetic ones. In addition, Eqs. (\[d19\]) and (\[d22\]) imply the duality invariance of the decay rate (\[d17q\]), since the $\delta$ functions do not contribute to the imaginary part of the Green tensor; again, local-field corrections have to be included for atoms embedded in media. This symmetry can be exploited, e.g., to obtain magnetically driven spin-flip rates of atoms in specific environments from known electric-dipole driven decay rates.
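The two retarded potentials quoted above are related exactly by the substitution rule. A short check (sketch only, with arbitrary positive stand-in values for the constants, since the identity is purely algebraic) confirms that applying $\alpha\rightarrow\beta/c^2$, $\varepsilon\leftrightarrow\mu$ to the electric formula reproduces the magnetic one, using $\mu_0=1/(\varepsilon_0 c^2)$:

```python
# Verify the duality substitution on the local-field corrected vdW potentials.
import math

hbar, c, eps0 = 1.0, 3.0, 2.0    # arbitrary stand-ins; only the identity matters
mu0 = 1.0 / (eps0 * c**2)
r = 1.7                          # interatomic separation r_AB

def U_electric(aA, aB, eps, mu):
    return (-1863.0 * hbar * c * aA * aB * eps**2
            / (64.0 * math.pi**3 * eps0**2 * math.sqrt(eps * mu)
               * (2.0 * eps + 1.0)**4 * r**7))

def U_magnetic(bA, bB, eps, mu):
    return (-1863.0 * hbar * c * mu0**2 * bA * bB * mu**2
            / (64.0 * math.pi**3 * math.sqrt(eps * mu)
               * (2.0 * mu + 1.0)**4 * r**7))

bA, bB, eps, mu = 0.3, 0.8, 4.0, 2.5
assert abs(U_electric(bA / c**2, bB / c**2, mu, eps)
           - U_magnetic(bA, bB, eps, mu)) < 1e-15
```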
In conclusion, we have shown that dispersion forces on atoms and bodies as well as decay rates of atoms are duality invariant, provided that the bodies are located in free space at rest and that local-field corrections are taken into account when considering (stationary) atoms embedded in a medium. The established symmetry operation of globally exchanging electric and magnetic body and atom properties is a powerful tool for obtaining new results on the basis of already established ones. The invariance can easily be extended to other effective quantities of macroscopic QED such as frequency shifts, heating rates or energy transfer rates. This work was supported by the Alexander von Humboldt Foundation and the UK Engineering and Physical Sciences Research Council. S.Y.B. is grateful to F.W. Hehl and T. Kästner for discussions.
--- abstract: 'We extend the one parameter $\theta $-bump theorem for fractional integrals of Sawyer and Wheeden to the setting of two parameters, as well as improving the multiparameter result of Tanaka and Yabuta for doubling weights to classical reverse doubling weights.' address: - McMaster University - McMaster University author: - Eric Sawyer - Zipeng Wang title: 'The $\theta $-bump theorem for product fractional integrals' --- Introduction ============ In [@SaWh Theorem 1(A)], Sawyer and Wheeden proved that the fractional integral $I_{\alpha }f\left( x\right) =\int \left\vert x-u\right\vert ^{\alpha -N}f\left( u\right) du$, $x\in \mathbb{R}^{N}$, is bounded from one weighted space $L^{p}\left( v^{p}\right) $ to another $L^{q}\left( w^{q}\right) $ provided there is $\theta >1$ such that$$A_{p,q;\theta }^{\alpha ,m}\left( v,w\right) \equiv \sup_{I\in \mathcal{D}^{N}}\left\vert I\right\vert ^{\frac{\alpha }{m}-\frac{1}{p}+\frac{1}{q}}\left( \frac{1}{\left\vert I\right\vert }\int_{I}v^{-p^{\prime }\theta }\right) ^{\frac{1}{p^{\prime }\theta }}\ \left( \frac{1}{\left\vert I\right\vert }\int_{I}w^{q\theta }\right) ^{\frac{1}{q\theta }}<\infty .$$Here $1<p\leq q<\infty $, $0<\alpha <N$ and $v,w$ are nonnegative measurable functions on $\mathbb{R}^{N}$, $N\geq 1$. The finiteness of $A_{p,q;1}^{\alpha ,m}\left( v,w\right) $ when $\theta =1$ is a well-known necessary condition for the boundedness of $I_{\alpha }$, and the above strengthening of that condition is usually referred to as a $\theta $-bump condition. In the same paper [@SaWh], it was shown that in the case $p<q$, if $v^{-p^{\prime }}$ and $w^{q}$ are both reverse doubling weights, then the necessary condition $A_{p,q;1}^{\alpha ,m}\left( v,w\right) <\infty $ is also sufficient for the boundedness of $I_{\alpha }$ from $L^{p}\left( v^{p}\right) $ to $L^{q}\left( w^{q}\right) $.
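As an orientation (our illustration, not taken from [@SaWh]), consider the unweighted case $v=w=1$: both averages in the characteristic equal $1$, so$$A_{p,q;\theta }^{\alpha ,m}\left( 1,1\right) =\sup_{I\in \mathcal{D}^{N}}\left\vert I\right\vert ^{\frac{\alpha }{m}-\frac{1}{p}+\frac{1}{q}},$$which is finite precisely when the exponent vanishes, since dyadic cubes of arbitrarily large and arbitrarily small volume occur in the supremum. That is, finiteness forces $\frac{1}{q}=\frac{1}{p}-\frac{\alpha }{m}$, which with the normalisation $m=N$ is the classical Hardy-Littlewood-Sobolev index relation for $I_{\alpha }$ on $\mathbb{R}^{N}$.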
Here a measure $\mu $ is *reverse doubling* in $\mathbb{R}^{N}$ if there are $C,\varepsilon >0$ such that$$\left\vert 2^{-s}I\right\vert _{\mu }\leq C2^{-\varepsilon s}\left\vert I\right\vert _{\mu }\ ,\ \ \ \ \ \text{for all }s>0\text{ and cubes }% I\subset \mathbb{R}^{N},$$where $2^{-s}I$ denotes the cube *concentric* with $I$ and having side length $\ell \left( 2^{-s}I\right) $ equal to $2^{-s}\ell \left( I\right) $. Recently, H. Tanaka and K. Yabuta [@TaYa] used a clever iteration[^1] to obtain an $n$-linear embedding theorem for rectangles that has as a corollary the following result for certain *product* fractional integrals $\widetilde{I}_{\alpha }^{N}$ on $\mathbb{R}^{N}$ given by $$\widetilde{I}_{\alpha }^{N}f\left( x\right) \equiv \int_{\mathbb{R}% ^{N}}\prod_{j=1}^{N}\left\vert x_{j}-u_{j}\right\vert ^{\alpha -1}f\left( u\right) du,\ \ \ \ \ x\in \mathbb{R}^{N},\ \ \ \ \ 0<\alpha <1.$$Let $\mathcal{R}^{N}$ denote the *partial grid* of all rectangles in $% \mathbb{R}^{N}$ with sides parallel to the coordinate axes (which is not a grid). A weight $\mu $ is a rectangle doubling weight on $\mathbb{R}^{N}$ if there is $C>0$ such that$$\left\vert 2R\right\vert _{\mu }\leq C\left\vert R\right\vert _{\mu },\ \ \ \ \text{ for all rectangles }R\in \mathcal{R}^{N}.$$ Suppose $1<p<q<\infty $ and that both $v^{-p^{\prime }}$ and $w^{q}$ are rectangle doubling weights[^2] on $\mathbb{R}^{N}$. Then $\widetilde{I}_{\alpha }^{N}$ is bounded from $L^{p}\left( v^{p}\right) $ to $L^{q}\left( w^{q}\right) $ if and only if $$\sup_{R\in \mathcal{R}^{N}}\left\vert R\right\vert ^{\frac{\alpha }{N}-\frac{% 1}{p}+\frac{1}{q}}\left( \frac{1}{\left\vert R\right\vert }% \int_{R}v^{-p^{\prime }}\right) ^{\frac{1}{p^{\prime }}}\ \left( \frac{1}{% \left\vert R\right\vert }\int_{R}w^{q}\right) ^{\frac{1}{q}}<\infty .$$ Thus the theorem of Tanaka and Yabuta extends the second assertion in Theorem 1(B) of [@SaWh] to product fractional integrals with rectangle doubling weights. 
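To get a feel for the definition (our example), Lebesgue measure on $\mathbb{R}^{N}$ is reverse doubling with $C=1$ and $\varepsilon =N$, since shrinking a cube scales its volume exactly:$$\left\vert 2^{-s}I\right\vert =\left( 2^{-s}\ell \left( I\right) \right) ^{N}=2^{-sN}\left\vert I\right\vert .$$On the other hand, a point mass $\delta _{x_{0}}$ fails the condition: for cubes $I$ centered at $x_{0}$ we have $\left\vert 2^{-s}I\right\vert _{\delta _{x_{0}}}=\left\vert I\right\vert _{\delta _{x_{0}}}=1$ for every $s>0$, so no $\varepsilon >0$ can work.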
The purpose of this paper is to extend both Theorem 1(A) and the second assertion in Theorem 1(B) of [@SaWh] to product fractional integrals of the form (more than two factors in the kernel are handled similarly)$$I_{\alpha ,\beta }^{m,n}f\left( x,y\right) \equiv \int_{\mathbb{R}^{m}}\int_{% \mathbb{R}^{n}}\left\vert x-u\right\vert ^{\frac{\alpha }{m}-1}\left\vert y-t\right\vert ^{\frac{\beta }{n}-1}f\left( u,t\right) dtdu,\ \ \ \ \ \left( x,y\right) \in \mathbb{R}^{m}\times \mathbb{R}^{n},$$more precisely, to show that a product $\theta $-bump condition is always sufficient for the norm inequality, and that the same condition without a bump is sufficient provided the weights $v^{-p^{\prime }}$ and $w^{q}$ are product reverse doubling on $\mathbb{R}^{m}\times \mathbb{R}^{n}$ in this sense: a weight $\mu $ is *product reverse doubling* on $\mathbb{R}% ^{m}\times \mathbb{R}^{n}$ if there are $C,\varepsilon _{1},\varepsilon _{2}>0$ such that$$\left\vert \left( 2^{-s}I\right) \times \left( 2^{-t}J\right) \right\vert _{\mu }\leq C2^{-\varepsilon _{1}s-\varepsilon _{2}t}\left\vert I\times J\right\vert _{\mu }\ ,\ \ \ \ \ \text{for all }s,t>0\text{ and cubes }% I\subset \mathbb{R}^{m}\text{ and }J\subset \mathbb{R}^{n}. \label{product rev doub}$$ Our proof of the first result adapts the Tanaka-Yabuta argument to the $% \theta $-bump functional used in [@SaWh], while the second result regarding reverse doubling weights adapts the Tanaka-Yabuta argument to the use of NTV good/bad grids in place of the Strömberg $\frac{1}{3}$-trick that was used in [@TaYa]. Additional results for the product situation can be found in our paper [@SaWa]. See the appendix below for a discussion of the doubling and various reverse doubling conditions. We are grateful to Hitoshi Tanaka for bringing our attention to his beautiful paper [@TaYa] with K. Yabuta. 
Preliminaries ------------- Let $\mathcal{D}^{m}$ denote the grid of dyadic cubes in $\mathbb{R}^{m}$, and let $\mathcal{R}^{m,n}\equiv \mathcal{D}^{m}\times \mathcal{D}^{n}$ denote the partial grid of dyadic rectangles in $\mathbb{R}^{m}\times \mathbb{R}^{n}$ (which is not actually a grid since it fails the nested property). For $d\mu \left( x\right) =u\left( x\right) dx$ absolutely continuous with respect to Lebesgue measure on $\mathbb{R}^{N}$, we will use the following $\theta $-bump functional for a cube $Q$ and $\theta >1$ (see [@SaWh page 830]):$$\left\vert Q\right\vert _{\mu ,\theta }\equiv \left\vert Q\right\vert ^{\frac{1}{\theta ^{\prime }}}\left( \int_{Q}u^{\theta }\right) ^{\frac{1}{\theta }}.$$We have $\left\vert Q\right\vert _{\mu }\leq \left\vert Q\right\vert _{\mu ,\theta }$, and if $P=\overset{\cdot }{\bigcup }_{i=1}^{\infty }Q_{i}$ is a pairwise disjoint union of the cubes $Q_{i}$, then Hölder's inequality for sums gives$$\sum_{i=1}^{\infty }\left\vert Q_{i}\right\vert _{\mu ,\theta }=\sum_{i=1}^{\infty }\left\vert Q_{i}\right\vert ^{\frac{1}{\theta ^{\prime }}}\left( \int_{Q_{i}}u^{\theta }\right) ^{\frac{1}{\theta }}\leq \left( \sum_{i=1}^{\infty }\left\vert Q_{i}\right\vert \right) ^{\frac{1}{\theta ^{\prime }}}\left( \sum_{i=1}^{\infty }\int_{Q_{i}}u^{\theta }\right) ^{\frac{1}{\theta }}=\left\vert P\right\vert ^{\frac{1}{\theta ^{\prime }}}\left( \int_{P}u^{\theta }\right) ^{\frac{1}{\theta }}=\left\vert P\right\vert _{\mu ,\theta }\ .$$ The important property of the $\theta $-bump functional on cubes for us is that, when taken to a power larger than $1$, it automatically satisfies a Carleson condition taken over all dyadic subcubes.
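Two special cases (ours) may help orient the reader. For Lebesgue measure, $u\equiv 1$, the bump disappears:$$\left\vert Q\right\vert _{\mu ,\theta }=\left\vert Q\right\vert ^{\frac{1}{\theta ^{\prime }}}\left\vert Q\right\vert ^{\frac{1}{\theta }}=\left\vert Q\right\vert .$$For an indicator density $u=\mathbf{1}_{E}$ with $E$ measurable,$$\left\vert Q\right\vert _{\mu ,\theta }=\left\vert Q\right\vert ^{\frac{1}{\theta ^{\prime }}}\left\vert Q\cap E\right\vert ^{\frac{1}{\theta }}\geq \left\vert Q\cap E\right\vert =\left\vert Q\right\vert _{\mu },$$which makes the general inequality $\left\vert Q\right\vert _{\mu }\leq \left\vert Q\right\vert _{\mu ,\theta }$ concrete: the bump inflates the measure of a cube that $\mu $ fills only partially.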
More precisely, if $\rho >1$, then$$\begin{aligned} \sum_{Q\in \mathcal{D}^{N}:\ Q\subset P}\left\vert Q\right\vert _{\mu ,\theta }^{\rho } &=&\sum_{k=0}^{\infty }\sum_{Q\in \mathcal{D}^{N}:\ Q\subset P,\ \ell \left( Q\right) =2^{-k}\ell \left( P\right) }\left\vert Q\right\vert ^{\frac{\rho -1}{\theta ^{\prime }}}\left( \int_{Q}u^{\theta }\right) ^{\frac{\rho -1}{\theta }}\left\vert Q\right\vert _{\mu ,\theta } \label{automatic} \\ &\leq &\sum_{k=0}^{\infty }\sum_{Q\in \mathcal{D}^{N}:\ Q\subset P,\ \ell \left( Q\right) =2^{-k}\ell \left( P\right) }\left( C2^{-kN\varepsilon }\left\vert P\right\vert \right) ^{\frac{\rho -1}{\theta ^{\prime }}}\left( \int_{P}u^{\theta }\right) ^{\frac{\rho -1}{\theta }}\left\vert Q\right\vert _{\mu ,\theta } \notag \\ &\leq &\sum_{k=0}^{\infty }\left( C2^{-kN\varepsilon }\left\vert P\right\vert \right) ^{\frac{\rho -1}{\theta ^{\prime }}}\left( \int_{P}u^{\theta }\right) ^{\frac{\rho -1}{\theta }}\left\vert P\right\vert _{\mu ,\theta }=C_{N\varepsilon \frac{\rho -1}{\theta ^{\prime }}}\left\vert P\right\vert _{\mu ,\theta }^{\rho }\ . \notag\end{aligned}$$This automatic Carleson condition leads to a corresponding automatic Carleson embedding lemma. \[theta bump lemma\]Suppose that $1<s<r<\infty $, $\theta >1$, and that $d\mu \left( x\right) =u\left( x\right) dx$ is a locally $L^{\theta }$ absolutely continuous measure on $\mathbb{R}^{N}$.
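For Lebesgue measure the sum in (\[automatic\]) can be computed exactly (our worked instance): there are $2^{kN}$ dyadic subcubes of $P$ with side length $2^{-k}\ell \left( P\right) $, and each satisfies $\left\vert Q\right\vert _{\mu ,\theta }=\left\vert Q\right\vert =2^{-kN}\left\vert P\right\vert $, so$$\sum_{Q\in \mathcal{D}^{N}:\ Q\subset P}\left\vert Q\right\vert ^{\rho }=\sum_{k=0}^{\infty }2^{kN}\left( 2^{-kN}\left\vert P\right\vert \right) ^{\rho }=\frac{\left\vert P\right\vert ^{\rho }}{1-2^{-N\left( \rho -1\right) }}\ ,$$which converges precisely because $\rho >1$; for $\rho =1$ every generation of subcubes contributes $\left\vert P\right\vert $ and the sum diverges, so the power strictly greater than $1$ is essential.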
Then we have$$\left\{ \sum_{Q\in \mathcal{D}^{N}}\left\vert Q\right\vert _{\mu ,\theta }^{% \frac{r}{s}}\left( \frac{1}{\left\vert Q\right\vert _{\mu ,\theta }}% \int_{Q}fd\mu \right) ^{r}\right\} ^{\frac{1}{r}}\leq C_{r,s,\theta }\left\Vert f\right\Vert _{L^{s}\left( \mu \right) }\ ,\ \ \ \ \ f\geq 0.$$ The cubes in $\mathcal{D}^{N}$ form a grid, and so for each integer $k\in \mathbb{Z}$, we can consider the maximal dyadic cubes $\left\{ M_{i}^{k}\right\} _{i=1}^{\infty }$ from $\mathcal{D}^{N}$ such that$$\frac{1}{\left\vert M_{i}^{k}\right\vert _{\mu ,\theta }}\int_{M_{i}^{k}}fd% \mu >2^{k}.$$Then we can estimate using (\[automatic\]) that$$\begin{aligned} &&\sum_{Q\in \mathcal{D}^{N}}\left\vert Q\right\vert _{\mu ,\theta }^{\frac{r% }{s}}\left( \frac{1}{\left\vert Q\right\vert _{\mu ,\theta }}\int_{Q}fd\mu \right) ^{r}\leq \sum_{k=-\infty }^{\infty }\sum_{\substack{ Q\in \mathcal{D}% ^{N} \\ 2^{k}<\frac{1}{\left\vert Q\right\vert _{\mu ,\theta }}% \int_{Q}fd\mu \leq 2^{k+1}}}\left\vert Q\right\vert _{\mu ,\theta }^{\frac{r% }{s}}\left( 2^{k+1}\right) ^{r} \\ &\leq &\sum_{k=-\infty }^{\infty }\ \sum_{i=1}^{\infty }\sum_{Q\in \mathcal{D% }^{N}:\ Q\subset M_{i}^{k}}\left\vert Q\right\vert _{\mu ,\theta }^{\frac{r}{% s}}\left( 2^{k+1}\right) ^{r} \\ &=&2^{r}\sum_{k=-\infty }^{\infty }\ \sum_{i=1}^{\infty }\left\{ \sum_{Q\in \mathcal{D}^{N}:\ Q\subset M_{i}^{k}}\left\vert Q\right\vert _{\mu ,\theta }^{\frac{r}{s}}\right\} 2^{kr}\leq 2^{r}C_{N\varepsilon \frac{\frac{r}{s}-1}{% \theta ^{\prime }}}\sum_{k=-\infty }^{\infty }\ \sum_{i=1}^{\infty }\left\vert M_{i}^{k}\right\vert _{\mu ,\theta }^{\frac{r}{s}}2^{kr}\ .\end{aligned}$$Now we use the fact that $$\begin{aligned} \frac{1}{\left\vert M_{i}^{k}\right\vert _{\mu ,\theta }}\int_{M_{i}^{k}\cap \left\{ f>2^{k-1}\right\} }fd\mu &=&\frac{1}{\left\vert M_{i}^{k}\right\vert _{\mu ,\theta }}\int_{M_{i}^{k}}fd\mu -\frac{1}{\left\vert M_{i}^{k}\right\vert _{\mu ,\theta }}\int_{M_{i}^{k}\cap \left\{ f\leq 2^{k-1}\right\} 
}fd\mu \\ &\geq &\frac{1}{\left\vert M_{i}^{k}\right\vert _{\mu ,\theta }}% \int_{M_{i}^{k}}fd\mu -\frac{1}{\left\vert M_{i}^{k}\right\vert _{\mu ,\theta }}\int_{M_{i}^{k}}2^{k-1}d\mu \\ &>&2^{k}-2^{k-1}\frac{\left\vert M_{i}^{k}\right\vert _{\mu }}{\left\vert M_{i}^{k}\right\vert _{\mu ,\theta }}\geq 2^{k-1},\end{aligned}$$to obtain$$\begin{aligned} \sum_{Q\in \mathcal{D}^{N}}\left\vert Q\right\vert _{\mu ,\theta }^{\frac{r}{% s}}\left( \frac{1}{\left\vert Q\right\vert _{\mu ,\theta }}\int_{Q}fd\mu \right) ^{r} &\leq &2^{r}C_{N\varepsilon \frac{\frac{r}{s}-1}{\theta ^{\prime }}}\sum_{k=-\infty }^{\infty }\sum_{i=1}^{\infty }\left\vert M_{i}^{k}\right\vert _{\mu ,\theta }^{\frac{r}{s}}2^{kr} \\ &\leq &C_{r,s,\theta }^{r}\sum_{k=-\infty }^{\infty }\sum_{i=1}^{\infty }\left( 2^{-k}\int_{M_{i}^{k}\cap \left\{ f>2^{k-1}\right\} }fd\mu \right) ^{% \frac{r}{s}}2^{kr} \\ &\leq &C_{r,s,\theta }^{r}\left( \sum_{k=-\infty }^{\infty }\sum_{i=1}^{\infty }2^{k\left( s-1\right) }\int_{M_{i}^{k}\cap \left\{ f>2^{k-1}\right\} }fd\mu \right) ^{\frac{r}{s}}.\end{aligned}$$We now use that the cubes $\left\{ M_{i}^{k}\right\} _{i=1}^{\infty }$ are pairwise disjoint in $i$ to continue with the estimate$$\begin{aligned} \left( \sum_{k=-\infty }^{\infty }\sum_{i=1}^{\infty }2^{k\left( s-1\right) }\int_{M_{i}^{k}\cap \left\{ f>2^{k-1}\right\} }fd\mu \right) ^{\frac{r}{s}} &\leq &\left( \sum_{k=-\infty }^{\infty }2^{k\left( s-1\right) }\int_{\left\{ f>2^{k-1}\right\} }fd\mu \right) ^{\frac{r}{s}} \\ &=&\left( \int \left\{ \sum_{k\in \mathbb{Z}:\ 2^{k}<2f\left( x\right) }2^{k\left( s-1\right) }\right\} f\left( x\right) d\mu \left( x\right) \right) ^{\frac{r}{s}} \\ &\leq &C_{s}\left( \int f\left( x\right) ^{\left( s-1\right) }f\left( x\right) d\mu \left( x\right) \right) ^{\frac{r}{s}} \\ &=&C_{s}\left( \int f\left( x\right) ^{s}d\mu \left( x\right) \right) ^{% \frac{r}{s}}=C_{s}\left\Vert f\right\Vert _{L^{s}\left( \mu \right) }^{r}\ .\end{aligned}$$ The $2$-parameter theory 
======================== Here we state and prove our extensions of Theorem 1(A) and the second assertion of Theorem 1(B) in [@SaWh]. We begin with the $\theta $-bump condition. The $\protect\theta $-bump condition for bilinear embeddings ------------------------------------------------------------ Here is a variation on the Tanaka-Yabuta theorem [@TaYa Theorem 1.1] involving general weights that satisfy a $\theta $-bump analogue of the ‘rectangle testing’ condition in [@TaYa]. We extend the definition of the $\theta $-bump functional to rectangles in the obvious way,$$\left\vert R\right\vert _{\mu ,\theta }\equiv \left\vert R\right\vert ^{% \frac{1}{\theta ^{\prime }}}\left( \int_{R}u^{\theta }\right) ^{\frac{1}{% \theta }},$$for $d\mu \left( x,y\right) =u\left( x,y\right) dxdy$ absolutely continuous and $R$ a rectangle in $\mathbb{R}^{m}\times \mathbb{R}^{n}$. \[variation\]Suppose $1<p<q<\infty $. Let $d\sigma =v^{-p^{\prime }}dx$ and $d\omega =w^{q}dx$ be locally finite absolutely continuous weights on $% \mathbb{R}^{m}\times \mathbb{R}^{n}$, let $\theta >1$, and let $K:\mathcal{R}% ^{m,n}\rightarrow \left[ 0,\infty \right) $. 
Then the norm $\mathbb{N}_{K}\left( \sigma ,\omega \right) $ of the positive bilinear inequality,$$\sum_{R\in \mathcal{R}^{m,n}}K\left( R\right) \left( \int_{R}fd\sigma \right) \left( \int_{R}gd\omega \right) \leq \mathbb{N}_{K}\left( \sigma ,\omega \right) \ \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{q^{\prime }}\left( \omega \right) }\ ,\ \ \ \ \ f,g\geq 0,$$is finite independent of all partial grids $\mathcal{R}^{m,n}=\mathcal{D}^{m}\times \mathcal{D}^{n}$ *if* the $\theta $-bump product characteristic $\mathbb{A}_{K,\theta }\left( \sigma ,\omega \right) $ is finite, where$$\begin{aligned} \mathbb{A}_{K,\theta }\left( \sigma ,\omega \right) &\equiv &\sup_{R\in \mathcal{R}^{m,n}}K\left( R\right) \ \left[ \left\vert R\right\vert ^{\frac{1}{p^{\prime }\theta ^{\prime }}}\left( \int_{R}v^{-p^{\prime }\theta }dx\right) ^{\frac{1}{p^{\prime }\theta }}\right] \ \left[ \left\vert R\right\vert ^{\frac{1}{q\theta ^{\prime }}}\left( \int_{R}w^{q\theta }dx\right) ^{\frac{1}{q\theta }}\right] \\ &=&\sup_{R\in \mathcal{R}^{m,n}}K\left( R\right) \ \left\vert R\right\vert _{\omega ,\theta }^{\frac{1}{q}}\ \left\vert R\right\vert _{\sigma ,\theta }^{\frac{1}{p^{\prime }}}\ .\end{aligned}$$ As in [@TaYa], we choose $p<r<q$.
Then the definition of the $\theta $-bump characteristic, followed by Hölder’s inequality with exponents $r$ and $r^{\prime }$, gives$$\begin{aligned} &&\sum_{R\in \mathcal{R}^{m,n}}K\left( R\right) \left( \int_{R}fd\sigma \right) \left( \int_{R}gd\omega \right) \\ &=&\sum_{R\in \mathcal{R}^{m,n}}\left\{ K\left( R\right) \ \left\vert R\right\vert _{\sigma ,\theta }^{\frac{1}{p^{\prime }}}\left\vert R\right\vert _{\omega ,\theta }^{\frac{1}{q}}\right\} \left\vert R\right\vert _{\sigma ,\theta }^{\frac{1}{p}}\left\vert R\right\vert _{\omega ,\theta }^{\frac{1}{q^{\prime }}}\left( \frac{1}{\left\vert R\right\vert _{\sigma ,\theta }}\int_{R}fd\sigma \right) \left( \frac{1}{% \left\vert R\right\vert _{\omega ,\theta }}\int_{R}gd\omega \right) \\ &\leq &\mathbb{A}_{K,\theta }\left( \sigma ,\omega \right) \left\{ \sum_{R\in \mathcal{R}^{m,n}}\left\vert R\right\vert _{\sigma ,\theta }^{% \frac{r}{p}}\left( \frac{1}{\left\vert R\right\vert _{\sigma ,\theta }}% \int_{R}fd\sigma \right) ^{r}\right\} ^{\frac{1}{r}}\left\{ \sum_{R\in \mathcal{R}^{m,n}}\left\vert R\right\vert _{\omega ,\theta }^{\frac{% r^{\prime }}{q^{\prime }}}\left( \frac{1}{\left\vert R\right\vert _{\omega ,\theta }}\int_{R}gd\omega \right) ^{r^{\prime }}\right\} ^{\frac{1}{% r^{\prime }}},\end{aligned}$$and the theorem now follows from the following proposition. Suppose that $1<s<r<\infty $, $\theta >1$, and that $\mu $ is a locally finite absolutely continuous measure on $\mathbb{R}^{m}\times \mathbb{R}^{n}$. Then we have$$\left\{ \sum_{R\in \mathcal{R}^{m,n}}\left\vert R\right\vert _{\mu ,\theta }^{\frac{r}{s}}\left( \frac{1}{\left\vert R\right\vert _{\mu ,\theta }}% \int_{R}fd\mu \right) ^{r}\right\} ^{\frac{1}{r}}\leq C_{s,r,\theta }\left\Vert f\right\Vert _{L^{s}\left( \mu \right) }\ ,\ \ \ \ \ f\geq 0.$$ We follow the outline of the iteration argument in H. Tanaka and K. Yabuta [@TaYa], but adapted to $\theta $-bump functionals. 
Let $d\mu \left( x,y\right) =u\left( x,y\right) dxdy$ and define$$\begin{aligned} &&u^{y}\left( x\right) \equiv u\left( x,y\right) \text{ and }u_{x}\left( y\right) \equiv u\left( x,y\right) , \\ &&d\mu ^{y}\left( x\right) =u^{y}\left( x\right) dx\text{ and }d\mu _{x}\left( y\right) =u_{x}\left( y\right) dy \\ &&\ \ \ \ \ \text{for a.e. }x\in \mathbb{R}^{m},\text{ a.e. }y\in \mathbb{R}^{n},\end{aligned}$$and note that$$\begin{aligned} &&\left\vert J\right\vert _{\mu _{x},\theta }\equiv \left\vert J\right\vert ^{\frac{1}{\theta ^{\prime }}}\left( \int_{J}u_{x}\left( y\right) ^{\theta }dy\right) ^{\frac{1}{\theta }}\text{ and }\left\vert I\right\vert _{\mu ^{y},\theta }\equiv \left\vert I\right\vert ^{\frac{1}{\theta ^{\prime }}}\left( \int_{I}u^{y}\left( x\right) ^{\theta }dx\right) ^{\frac{1}{\theta }} \\ &&\ \ \ \ \ \text{for a.e. }x\in \mathbb{R}^{m},\text{ a.e. }y\in \mathbb{R}^{n}.\end{aligned}$$ Now take $f\in L^{s}\left( \mu \right) $ and let$$F^{J}\left( x\right) \equiv \frac{1}{\left\vert J\right\vert _{\mu _{x},\theta }}\int_{J}f\left( x,y\right) u\left( x,y\right) dy\ \ \ \ \ \text{for a.e. 
}x\in \mathbb{R}^{m}.$$Note that$$\begin{aligned} \left\vert I\times J\right\vert _{\mu ,\theta } &=&\left\vert I\times J\right\vert ^{\frac{1}{\theta ^{\prime }}}\left( \int_{I}\left\{ \int_{J}u\left( x,y\right) ^{\theta }dy\right\} dx\right) ^{\frac{1}{\theta }% } \\ &=&\left\vert I\right\vert ^{\frac{1}{\theta ^{\prime }}}\left( \int_{I}\left\{ \left\vert J\right\vert ^{\frac{1}{\theta ^{\prime }}}\left( \int_{J}u\left( x,y\right) ^{\theta }dy\right) ^{\frac{1}{\theta }}\right\} ^{\theta }dx\right) ^{\frac{1}{\theta }}\end{aligned}$$where we can interpret the term in braces as$$\left\vert J\right\vert ^{\frac{1}{\theta ^{\prime }}}\left( \int_{J}u_{x}\left( y\right) ^{\theta }dy\right) ^{\frac{1}{\theta }% }=\left\vert J\right\vert _{\mu _{x},\theta }$$so that we have$$\left\vert I\times J\right\vert _{\mu ,\theta }=\left\vert I\right\vert ^{% \frac{1}{\theta ^{\prime }}}\left( \int_{I}\left\vert J\right\vert _{\mu _{x},\theta }^{\theta }dx\right) ^{\frac{1}{\theta }}\equiv \left\vert I\right\vert ^{\frac{1}{\theta ^{\prime }}}\left( \int_{I}\left( J_{\mu ,\theta }\left( x\right) \right) ^{\theta }dx\right) ^{\frac{1}{\theta }% }=\left\vert I\right\vert _{J_{\mu ,\theta },\theta }$$where we have defined the absolutely continuous measure $J_{\mu ,\theta }$ by $dJ_{\mu ,\theta }\left( x\right) =J_{\mu ,\theta }\left( x\right) dx$ and where its density function, which with a small abuse of notation we also denote by $J_{\mu ,\theta }$, is given by $$J_{\mu ,\theta }\left( x\right) \equiv \left\vert J\right\vert _{\mu _{x},\theta },\ \ \ \ \ x\in \mathbb{R}^{m}.$$We then estimate$$\begin{aligned} &&\sum_{R\in \mathcal{R}^{m,n}}\left\vert R\right\vert _{\mu ,\theta }^{% \frac{r}{s}}\left( \frac{1}{\left\vert R\right\vert _{\mu ,\theta }}% \int_{R}f\left( x,y\right) u\left( x,y\right) dxdy\right) ^{r} \\ &=&\sum_{I\times J\in \mathcal{R}^{m,n}}\left\vert I\times J\right\vert _{\mu ,\theta }^{\frac{r}{s}}\left( \frac{1}{\left\vert I\times J\right\vert _{\mu ,\theta 
}}\int_{I\times J}f\left( x,y\right) u\left( x,y\right) dxdy\right) ^{r} \\ &=&\sum_{J\in \mathcal{D}^{n}}\sum_{I\in \mathcal{D}^{m}}\left\vert I\right\vert _{J_{\mu ,\theta },\theta }^{\frac{r}{s}}\left( \frac{1}{\left\vert I\right\vert _{J_{\mu ,\theta },\theta }}\int_{I}\left( \int_{J}f\left( x,y\right) u\left( x,y\right) dy\ \frac{1}{J_{\mu ,\theta }\left( x\right) }\right) J_{\mu ,\theta }\left( x\right) dx\right) ^{r} \\ &=&\sum_{J\in \mathcal{D}^{n}}\left\{ \sum_{I\in \mathcal{D}^{m}}\left\vert I\right\vert _{J_{\mu ,\theta },\theta }^{\frac{r}{s}}\left( \frac{1}{\left\vert I\right\vert _{J_{\mu ,\theta },\theta }}\int_{I}F^{J}\left( x\right) J_{\mu ,\theta }\left( x\right) dx\right) ^{r}\right\} \lesssim \sum_{J\in \mathcal{D}^{n}}\left( \int_{\mathbb{R}^{m}}F^{J}\left( x\right) ^{s}J_{\mu ,\theta }\left( x\right) dx\right) ^{\frac{r}{s}},\end{aligned}$$by Lemma \[theta bump lemma\] above applied with the locally finite absolutely continuous measures $J_{\mu ,\theta }$ on $\mathbb{R}^{m}$, $J\in \mathcal{D}^{n}$. Now we continue to estimate the latter sum raised to the power $\frac{s}{r}$ by Minkowski's integral inequality applied to the functions $g_{J}\left( x\right) \equiv F^{J}\left( x\right) ^{s}J_{\mu ,\theta }\left( x\right) $,$$\left\{ \sum_{J\in \mathcal{D}^{n}}\left( \int_{\mathbb{R}^{m}}F^{J}\left( x\right) ^{s}J_{\mu ,\theta }\left( x\right) dx\right) ^{\frac{r}{s}}\right\} ^{\frac{s}{r}}\leq \int_{\mathbb{R}^{m}}\left\{ \sum_{J\in \mathcal{D}^{n}}\left( F^{J}\left( x\right) ^{s}J_{\mu ,\theta }\left( x\right) \right) ^{\frac{r}{s}}\right\} ^{\frac{s}{r}}dx=\int_{\mathbb{R}^{m}}\left\{ \sum_{J\in \mathcal{D}^{n}}J_{\mu ,\theta }\left( x\right) ^{\frac{r}{s}}F^{J}\left( x\right) ^{r}\right\} ^{\frac{s}{r}}dx.$$Now apply Lemma \[theta bump lemma\] above with the locally finite absolutely continuous measures $\mu _{x}$ on $\mathbb{R}^{n}$ for a.e. 
$x\in \mathbb{R}^{m}$ to obtain$$\begin{aligned} \sum_{J\in \mathcal{D}^{n}}J_{\mu ,\theta }\left( x\right) ^{\frac{r}{s}% }F^{J}\left( x\right) ^{r} &=&\sum_{J\in \mathcal{D}^{n}}J_{\mu ,\theta }\left( x\right) ^{\frac{r}{s}}\left( \frac{1}{\left\vert J\right\vert _{\mu _{x},\theta }}\int_{J}f_{x}\left( y\right) u_{x}\left( y\right) dy\right) ^{r} \\ &=&\sum_{J\in \mathcal{D}^{n}}\left\vert J\right\vert _{\mu _{x},\theta }^{% \frac{r}{s}}\left( \frac{1}{\left\vert J\right\vert _{\mu _{x},\theta }}% \int_{J}f_{x}\left( y\right) u_{x}\left( y\right) dy\right) ^{r} \\ &\lesssim &\left( \int_{\mathbb{R}^{n}}f_{x}\left( y\right) ^{s}u_{x}\left( y\right) dy\right) ^{\frac{r}{s}}=\left( \int_{\mathbb{R}^{n}}f\left( x,y\right) ^{s}u\left( x,y\right) dy\right) ^{\frac{r}{s}},\end{aligned}$$uniformly for a.e. $x\in \mathbb{R}^{m}$. Plugging this into the previous display gives$$\begin{aligned} \left\{ \sum_{J\in \mathcal{D}^{n}}\left( \int_{\mathbb{R}^{m}}F^{J}\left( x\right) ^{s}J_{\mu ,\theta }\left( x\right) dx\right) ^{\frac{r}{s}% }\right\} ^{\frac{s}{r}} &\lesssim &\int_{\mathbb{R}^{m}}\left\{ \left( \int_{\mathbb{R}^{n}}f\left( x,y\right) ^{s}u\left( x,y\right) dy\right) ^{% \frac{r}{s}}\right\} ^{\frac{s}{r}}dx \\ &=&\int_{\mathbb{R}^{m}}\int_{\mathbb{R}^{n}}f\left( x,y\right) ^{s}u\left( x,y\right) dydx=\left\Vert f\right\Vert _{L^{s}\left( \mu \right) }^{s}.\end{aligned}$$Altogether then we have$$\begin{aligned} &&\sum_{R\in \mathcal{R}^{m,n}}\left\vert R\right\vert _{\mu ,\theta }^{% \frac{r}{s}}\left( \frac{1}{\left\vert R\right\vert _{\mu ,\theta }}% \int_{R}f\left( x,y\right) u\left( x,y\right) dxdy\right) ^{r} \\ &\lesssim &\sum_{J\in \mathcal{D}^{n}}\left( \int_{\mathbb{R}% ^{m}}F^{J}\left( x\right) ^{s}J_{\mu ,\theta }\left( x\right) dx\right) ^{% \frac{r}{s}}\lesssim \left\Vert f\right\Vert _{L^{s}\left( \mu \right) }^{r}\ .\end{aligned}$$ ### Product fractional integrals The Tanaka-Yabuta theorem [@TaYa Theorem 1.1], as well as the variant in Theorem 
\[variation\] above, uses an arbitrary nonnegative function $K\left( R\right) $ defined on dyadic rectangles $R\in \mathcal{R}^{m,n}$. If for $0<\frac{\alpha }{m},\frac{\beta }{n}<1$, we define $$K_{\alpha ,\beta }^{m,n}\left( R\right) =K\left( I\times J\right) \equiv \left\vert I\right\vert ^{\frac{\alpha }{m}-1}\left\vert J\right\vert ^{\frac{\beta }{n}-1}, \label{def K}$$for $R=I\times J\in \mathcal{R}^{m,n}$, then in the special case $K=K_{\alpha ,\beta }^{m,n}$ we have the following pointwise estimate,$$\begin{aligned} \sum_{R\in \mathcal{R}^{m,n}}K_{\alpha ,\beta }^{m,n}\left( R\right) \mathbf{1}_{R}\left( x,y\right) \mathbf{1}_{R}\left( u,v\right) &=&\sum_{I\times J\in \mathcal{R}^{m,n}}\left\{ K\left( I\times J\right) :x,u\in I\text{ and }y,v\in J\right\} \\ &=&\sum_{I\times J\in \mathcal{R}^{m,n}}\left\{ \left\vert I\right\vert ^{\frac{\alpha }{m}-1}\left\vert J\right\vert ^{\frac{\beta }{n}-1}:x,u\in I\text{ and }y,v\in J\right\} \\ &=&\sum_{I\in \mathcal{D}^{m}}\left\{ \left\vert I\right\vert ^{\frac{\alpha }{m}-1}:x,u\in I\right\} \ \times \ \sum_{J\in \mathcal{D}^{n}}\left\{ \left\vert J\right\vert ^{\frac{\beta }{n}-1}:y,v\in J\right\} \\ &\approx &d_{\limfunc{dy}}\left( x,u\right) ^{\frac{\alpha }{m}-1}d_{\limfunc{dy}}\left( y,v\right) ^{\frac{\beta }{n}-1}\lesssim \left\vert x-u\right\vert ^{\frac{\alpha }{m}-1}\left\vert y-v\right\vert ^{\frac{\beta }{n}-1},\end{aligned}$$where $d_{\limfunc{dy}}\left( x,u\right) $ denotes the *dyadic* distance between $x$ and $u$ in $\mathbb{R}^{m}$, and $d_{\limfunc{dy}}\left( y,v\right) $ denotes the *dyadic* distance between $y$ and $v$ in $\mathbb{R}^{n}$. Here the dyadic distance between two points $p$ and $q$ in $\mathbb{R}^{k}$ is defined to be the side length of the smallest dyadic cube containing $p$ and $q$. 
Note that the dyadic distance is at least $\frac{1}{\sqrt{k}}$ times the Euclidean distance since any dyadic cube $Q$ containing $x$ and $y$ must satisfy $$\ell \left( Q\right) \geq \max_{1\leq i\leq k}\left\vert x_{i}-y_{i}\right\vert \geq \sqrt{\frac{1}{k}\sum_{i=1}^{k}\left\vert x_{i}-y_{i}\right\vert ^{2}}=\frac{1}{\sqrt{k}}\left\vert x-y\right\vert .$$So in order to apply the next theorem to the product fractional integral operator with kernel $\left\vert x-u\right\vert ^{\frac{\alpha }{m}-1}\left\vert y-v\right\vert ^{\frac{\beta }{n}-1}$ it suffices to appeal to Strömberg's well-known $\frac{1}{3}$-trick for the dyadic grids $\left\{ \mathcal{D}_{i}^{m}\right\} _{i=1}^{3^{m}}$ and $\left\{ \mathcal{D}_{j}^{n}\right\} _{j=1}^{3^{n}}$, to obtain$$\sum_{i=1}^{3^{m}}\sum_{j=1}^{3^{n}}\left[ \sum_{R=I\times J\in \mathcal{D}_{i}^{m}\times \mathcal{D}_{j}^{n}}K\left( R\right) \mathbf{1}_{R}\left( x,y\right) \mathbf{1}_{R}\left( u,v\right) \right] \approx \left\vert x-u\right\vert ^{\frac{\alpha }{m}-1}\left\vert y-v\right\vert ^{\frac{\beta }{n}-1}. \label{expectation}$$Variants of the following lemma can be found many times over in the literature, too numerous to mention here. Let $\mathcal{P}^{N}$ denote the collection of all cubes in $\mathbb{R}^{N}$ with sides parallel to the coordinate axes. For $K\left( R\right) $ defined as in (\[def K\]) we have (\[expectation\]). For convenience we recall a variation on the $\frac{1}{3}$-trick given in Lemma 2.5 of [@HyLaPe]. For a given dyadic grid $\mathcal{D\subset P}^{N}$ with side lengths in $\left\{ \frac{2^{m}}{3}\right\} _{m\in \mathbb{Z}}$, partition the collection of tripled cubes $\left\{ 3I\right\} _{I\in \mathcal{D}}$ into $3^{N}$ subcollections $\left\{ S_{u}\right\} _{u=1}^{3^{N}}$, with the property that for each subcollection $S_{u}$ there exists a dyadic grid $\mathcal{D}_{u}$ with side lengths in $\left\{ 2^{m}\right\} _{m\in \mathbb{Z}}$, such that $S_{u}\subset \mathcal{D}_{u}$. 
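A one-dimensional example (ours) shows why a single grid does not suffice here: in the standard dyadic grid on $\mathbb{R}$, no interval $\left[ 2^{k}m,2^{k}\left( m+1\right) \right) $ contains both $x=-\delta $ and $u=\delta $ for any $0<\delta <1$, since $0$ is an endpoint of dyadic intervals at every scale. The smallest dyadic interval containing the two points therefore does not exist, so$$d_{\limfunc{dy}}\left( -\delta ,\delta \right) =\infty \quad \text{while}\quad \left\vert x-u\right\vert =2\delta ,$$and the sum over intervals of a single grid containing both points is empty. Averaging over the shifted grids supplied by the $\frac{1}{3}$-trick is exactly what restores the two-sided comparison (\[expectation\]).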
With these grids $\left\{ \mathcal{D}_{u}\right\} _{u=1}^{3^{N}}$ fixed, we have the following sandwiching property. For each cube $P\in \mathcal{P}^{N}$ and each integer $j\in \mathbb{N}$, there is a choice of $u=u\left( P,j\right) $ with $1\leq u\leq 3^{N}$ and a cube $I=I_{u\left( P,j\right) }\in \mathcal{D}_{u}$ such that $$\begin{aligned} \ell \left( I\right) &\leq &18\ \ell \left( P\right) , \label{sandwich} \\ 3P &\subset &I, \notag \\ 2^{j}P &\subset &\pi _{\mathcal{D}_{u}}^{\left( j\right) }I, \notag\end{aligned}$$where $\pi _{\mathcal{D}_{u}}^{\left( j\right) }I$ denotes the $j^{th}$ grandparent of $I$ in the grid $\mathcal{D}_{u}$. Now fix $\left( x,y\right) \in \mathbb{R}^{m}\times \mathbb{R}^{n}$. For $x\in \mathbb{R}^{N}$, let $P\left( x,\ell \right) $ denote the cube centered at $x$ with side length $\ell \in \left\{ 2^{k}\right\} _{k\in \mathbb{Z}}$. Then with $R_{a,b}\left( x,y\right) \equiv P\left( x,2^{a}\right) \times P\left( y,2^{b}\right) $ for $a,b\in \mathbb{Z}$, we note that the right hand side of (\[expectation\]) is equivalent to$$\sum_{a,b\in \mathbb{Z}}K\left( R_{a,b}\left( x,y\right) \right) \mathbf{1}_{R_{a,b}\left( x,y\right) }\left( x,y\right) \mathbf{1}_{R_{a,b}\left( x,y\right) }\left( u,v\right) ,\ \ \ \ \ \left( u,v\right) \in \mathbb{R}^{m}\times \mathbb{R}^{n}.$$The first two lines in (\[sandwich\]) now prove (\[expectation\]), since for each rectangle $R_{a,b}\left( x,y\right) \equiv P\left( x,2^{a}\right) \times P\left( y,2^{b}\right) $ there is $I\times J\in \bigcup_{i=1}^{3^{m}}\bigcup_{j=1}^{3^{n}}\left( \mathcal{D}_{i}^{m}\times \mathcal{D}_{j}^{n}\right) $ such that $$3R_{a,b}\left( x,y\right) \subset I\times J\subset 18R_{a,b}\left( x,y\right) ,$$and moreover, by the definition of $K$ in (\[def K\]), we then have $K\left( R_{a,b}\left( x,y\right) \right) \approx K\left( I\times J\right) $. We do not need the third line in (\[sandwich\]) here. 
\[ext\]Let $1<p<q<\infty $, $0<\alpha <m$,$\ 0<\beta <n$, $\theta >1$, and let $v$ and $w$ be absolutely continuous weights on $\mathbb{R}% ^{m}\times \mathbb{R}^{n}$. Then the product fractional integral $I_{\alpha ,\beta }^{m,n}$ is bounded from $L^{p}\left( v^{p}\right) $ to $L^{q}\left( w^{q}\right) $ if the $\theta $-bump rectangle characteristic $\mathbb{A}% _{p,q;\theta }^{\left( \alpha ,\beta \right) ,\left( m,n\right) }\left( v,w\right) $ is finite, where$$\mathbb{A}_{p,q;\theta }^{\left( \alpha ,\beta \right) ,\left( m,n\right) }\left( v,w\right) \equiv \sup_{I\times J\in \mathcal{R}^{m,n}}\left\vert I\right\vert ^{\frac{\alpha }{m}-\frac{1}{p}+\frac{1}{q}}\left\vert J\right\vert ^{\frac{\beta }{n}-\frac{1}{p}+\frac{1}{q}}\left( \frac{1}{% \left\vert I\times J\right\vert }\int \int_{I\times J}v^{-p^{\prime }\theta }\right) ^{\frac{1}{p^{\prime }\theta }}\ \left( \frac{1}{\left\vert I\times J\right\vert }\int \int_{I\times J}w^{q\theta }\right) ^{\frac{1}{q\theta }% }\ .$$ The above proof of the Corollary, when restricted to the $1$-parameter case, gives a short and elegant proof of Theorem 1(A) in [@SaWh] in the special case $p<q$. Reverse doubling weights for bilinear embeddings ------------------------------------------------ Here is a slight improvement of the theorem of Tanaka and Yabuta [@TaYa], valid for the product fractional integral kernel, as well as more general kernels $K$ satisfying property (\[satisfy both\]) below regarding expectations taken over partial grids $\mathcal{R}^{m,n}=\mathcal{D}% ^{m}\times \mathcal{D}^{n}$. Recall that $\mu $ is a product reverse doubling weight on $\mathbb{R}^{m}\times \mathbb{R}^{n}$ if (\[product rev doub\]) holds. \[theorem rev doub\]Suppose $1<p<q<\infty $. 
Let $\sigma $ and $\omega $ be product reverse doubling weights on $\mathbb{R}^{m}\times \mathbb{R}^{n}$, and let $K=K_{\alpha ,\beta }^{m,n}:\mathcal{R}^{m,n}\rightarrow \left[ 0,\infty \right) $ be as in (\[def K\]), or more generally satisfy the expectation inequality (\[satisfy both\]) below. Then the norm $\mathbb{N}_{K}\left( \sigma ,\omega \right) $ of the positive bilinear inequality,$$\sum_{R\in \mathcal{R}^{m,n}}K\left( R\right) \left( \int_{R}fd\sigma \right) \left( \int_{R}gd\omega \right) \leq \mathbb{N}_{K}\left( \sigma ,\omega \right) \ \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{q^{\prime }}\left( \omega \right) }\ ,\ \ \ \ \ f,g\geq 0,$$is finite for all partial grids $\mathcal{R}^{m,n}=\mathcal{D}^{m}\times \mathcal{D}^{n}$ *if and only if*$$\mathbb{A}_{K}\left( \sigma ,\omega \right) \equiv \sup_{R\in \mathcal{P}^{m}\times \mathcal{P}^{n}}K\left( R\right) \ \left\vert R\right\vert _{\omega }^{\frac{1}{q}}\ \left\vert R\right\vert _{\sigma }^{\frac{1}{p^{\prime }}}<\infty ,\ \ \ \ \ \text{for all rectangles }R\in \mathcal{P}^{m}\times \mathcal{P}^{n}\ .$$ We begin the proof with a brief review of the good/bad grid technology of Nazarov, Treil and Volberg. See [@NTV2], [@NTV4], or [@Vol] for more detail. We restrict to dimension $n=1$ for the moment. Let $0<\varepsilon <1$ and $\mathbf{r}\in \mathbb{N}$ to be chosen later. Define $J$ to be $\varepsilon -\limfunc{good}$ in an interval $K$ if $$d\left( J,\limfunc{skel}K\right) >2\left\vert J\right\vert ^{\varepsilon }\left\vert K\right\vert ^{1-\varepsilon },$$where the skeleton $\limfunc{skel}K$ of an interval $K$ consists of its two endpoints and its midpoint. Define $\mathcal{D}_{\left( \mathbf{r},\varepsilon \right) -\limfunc{good}}$ to consist of those $J\in \mathcal{D}$ such that $J$ is good in every superinterval $K\in \mathcal{D}$ that lies at least $\mathbf{r}$ levels above $J$. 
As the goodness parameters $\varepsilon $ and $\mathbf{r}$ will eventually be fixed throughout the proof, we sometimes suppress the parameters, and simply write $\mathcal{D}_{\limfunc{good}}$ in place of $\mathcal{D}_{\left( \mathbf{r},\varepsilon \right) -\limfunc{good}}$, and say “$J$ is $\limfunc{good}$” instead of “$J$ is good in every superinterval $K\in \mathcal{D}$ that lies at least $\mathbf{r}$ levels above $J$”. We also define $\mathcal{D}_{\limfunc{bad}}\equiv \mathcal{D}\setminus \mathcal{D}_{\limfunc{good}}$. **Parameterizations of dyadic grids**: Here we recall a construction from [@SaShUr10] that was in turn based on that of Hytönen in [@Hyt2]. Momentarily fix a large positive integer $M\in \mathbb{N}$, and consider the tiling of $\mathbb{R}$ by the family of intervals $\mathbb{D}_{M}\equiv \left\{ I_{\alpha }^{M}\right\} _{\alpha \in \mathbb{Z}}$ having side length $2^{-M}$ and given by $I_{\alpha }^{M}\equiv I_{0}^{M}+2^{-M}\alpha $ where $I_{0}^{M}=\left[ 0,2^{-M}\right) $. A *dyadic grid* $\mathcal{D}$ built on $\mathbb{D}_{M}$ is defined to be a family of intervals $\mathcal{D}$ satisfying: (**1**) Each $I\in \mathcal{D}$ has side length $2^{-\ell }$ for some $\ell \in \mathbb{Z}$ with $\ell \leq M$, and $I$ is a union of $2^{M-\ell }$ intervals from the tiling $\mathbb{D}_{M}$, (**2**) For $\ell \leq M$, the collection $\mathcal{D}_{\ell }$ of intervals in $\mathcal{D}$ having side length $2^{-\ell }$ forms a pairwise disjoint decomposition of the space $\mathbb{R}$, (**3**) Given $I\in \mathcal{D}_{i}$ and $J\in \mathcal{D}_{j}$ with $j\leq i\leq M$, it is the case that either $I\cap J=\emptyset $ or $I\subset J$. 
We now momentarily fix a *negative* integer $N\in -\mathbb{N}$, and restrict the above grids to intervals of side length at most $2^{-N}$:$$\mathcal{D}^{N}\equiv \left\{ I\in \mathcal{D}:\text{side length of }I\text{ is at most }2^{-N}\right\} \text{.}$$We refer to such grids $\mathcal{D}^{N}$ as a (truncated) dyadic grid $\mathcal{D}$ built on $\mathbb{D}_{M}$ of size $2^{-N}$. There are now two traditional means of constructing probability measures on collections of such dyadic grids, namely parameterization by choice of parent, and parameterization by translation. We will only need the former parameterization here. For any $$\beta =\{\beta _{i}\}_{i\in \mathbb{Z}_{M}^{N}}\in \omega _{M}^{N}\equiv \left\{ 0,1\right\} ^{\mathbb{Z}_{M}^{N}},$$where $\mathbb{Z}_{M}^{N}\equiv \left\{ \ell \in \mathbb{Z}:N\leq \ell \leq M\right\} $, define the dyadic grid $\mathcal{D}_{\beta }$ built on $\mathbb{D}_{M}$ of size $2^{-N}$ by $$\mathcal{D}_{\beta }=\left\{ 2^{-\ell }\left( [0,1)+k+\sum_{i:\ \ell <i\leq M}2^{-i+\ell }\beta _{i}\right) \right\} _{N\leq \ell \leq M,\,k\in {\mathbb{Z}}}\ . \label{def dyadic grid}$$Place the uniform probability measure $\rho _{M}^{N}$ on the finite index space $\omega _{M}^{N}=\left\{ 0,1\right\} ^{\mathbb{Z}_{M}^{N}}$, namely that which charges each $\beta \in \omega _{M}^{N}$ equally. This construction may be thought of as being *parameterized by scales* - each component $\beta _{i}$ in $\beta =\{\beta _{i}\}_{i\in \mathbb{Z}_{M}^{N}}\in \omega _{M}^{N}$ amounting to a choice of the two possible tilings at level $i$ that respect the choice of tiling at the level below. For purposes of notation and clarity, we now suppress all reference to $M$ and $N$ in our families of grids, and use the notation $\Omega $ instead of $\omega _{M}^{N} $ for the index or parameter set, and then use $\boldsymbol{P}_{\Omega }$ and $\boldsymbol{E}_{\Omega }$ to denote probability and expectation with respect to families of grids. 
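The parent-choice parameterization (\[def dyadic grid\]) is concrete enough to check numerically. The following sketch is our own illustration, not from the paper: the function name `left_endpoints` and the small values of $M$ and $N$ are our choices, and we take $N=0$ rather than a negative $N$ for simplicity. It builds the left endpoints of the intervals of $\mathcal{D}_{\beta }$ at each level from the displayed formula, using exact rational arithmetic, and verifies the nesting property (**3**): every interval at level $\ell $ lies inside exactly one interval at level $\ell -1$.

```python
from fractions import Fraction
import random

M, N = 4, 0  # finest side length 2^-M; N = 0 instead of a negative N, for simplicity

def left_endpoints(beta, ell, k_range):
    # Formula (def dyadic grid): the level-ell interval with index k has left endpoint
    #   2^{-ell} * ( k + sum_{ell < i <= M} 2^{-i+ell} beta_i ).
    s = sum((Fraction(beta[i], 2 ** (i - ell)) for i in range(ell + 1, M + 1)),
            Fraction(0))
    return [(k + s) / 2 ** ell for k in k_range]

random.seed(0)
for _ in range(20):                       # 20 random parameter choices beta
    beta = {i: random.randint(0, 1) for i in range(N, M + 1)}
    for ell in range(N + 1, M + 1):
        side_f = Fraction(1, 2 ** ell)            # fine side length
        side_c = Fraction(1, 2 ** (ell - 1))      # coarse side length
        fine = left_endpoints(beta, ell, range(0, 2 ** ell))
        coarse = left_endpoints(beta, ell - 1, range(-1, 2 ** (ell - 1) + 1))
        for a in fine:
            # property (3): a unique coarser interval contains [a, a + side_f)
            parents = [b for b in coarse if b <= a and a + side_f <= b + side_c]
            assert len(parents) == 1
```

With $\beta \equiv 0$ the construction reduces to the standard dyadic grid, which is a quick way to sanity-check the endpoint formula.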
We will now instead proceed as if all grids considered are unrestricted. The careful reader can supply the modifications necessary to handle the assumptions made above on the grids $\mathcal{D}$ regarding $M$ and $N$. Given a pair of grids $\mathcal{D}^{m}$ and $\mathcal{D}^{n}$ in $\mathbb{R}^{m}$ and $\mathbb{R}^{n}$ respectively, form the corresponding partial grid $\mathcal{R}^{m,n}=\mathcal{D}^{m}\times \mathcal{D}^{n}$ of rectangles. We say that a rectangle $R=I\times J\in \mathcal{R}_{\limfunc{good}}^{m,n}$ (and say $R$ is good) if both $I\in \mathcal{D}_{\limfunc{good}}^{m}$ and $J\in \mathcal{D}_{\limfunc{good}}^{n}$. Given a positive bilinear form$$\mathcal{B}_{\mathcal{R}^{m,n}}\left( f,g\right) \equiv \sum_{R\in \mathcal{R}^{m,n}}K\left( R\right) \left( \int_{R}fd\sigma \right) \left( \int_{R}gd\omega \right) ,\ \ \ \ \ f\in L^{p}\left( \sigma \right) ,g\in L^{q^{\prime }}\left( \omega \right) ,$$we follow the NTV idea and dominate $\mathcal{B}_{\mathcal{R}^{m,n}}\left( f,g\right) =\mathcal{B}_{\mathcal{D}^{m}\times \mathcal{D}^{n}}\left( f,g\right) $ as follows:$$\begin{aligned} \mathcal{B}_{\mathcal{D}^{m}\times \mathcal{D}^{n}}\left( f,g\right) &\leq &\left\{ \sum_{I\times J\in \mathcal{D}_{\limfunc{good}}^{m}\times \mathcal{D}_{\limfunc{good}}^{n}}+\sum_{I\times J\in \mathcal{D}^{m}\times \mathcal{D}_{\limfunc{bad}}^{n}}+\sum_{I\times J\in \mathcal{D}_{\limfunc{bad}}^{m}\times \mathcal{D}^{n}}\right\} K\left( I\times J\right) \left( \int_{I\times J}fd\sigma \right) \left( \int_{I\times J}gd\omega \right) \\ &\equiv &\mathcal{B}_{\mathcal{D}_{\limfunc{good}}^{m}\times \mathcal{D}_{\limfunc{good}}^{n}}\left( f,g\right) +\mathcal{B}_{\mathcal{D}^{m}\times \mathcal{D}_{\limfunc{bad}}^{n}}\left( f,g\right) +\mathcal{B}_{\mathcal{D}_{\limfunc{bad}}^{m}\times \mathcal{D}^{n}}\left( f,g\right) .\end{aligned}$$From the previous subsection we have that the positive bilinear form$$\mathcal{I}\left( f,g\right) \equiv \int_{\mathbb{R}^{m}\times \mathbb{R}% 
^{n}}I_{\alpha ,\beta }^{m,n}\left( f\sigma \right) g\omega$$satisfies$$\boldsymbol{E}_{\Omega \times \Omega }\mathcal{B}_{\mathcal{D}^{m}\times \mathcal{D}^{n}}\left( f,g\right) \geq c\,\mathcal{I}\left( f,g\right) ,\ \ \ \ \ \text{for all }\mathcal{D}^{m}\times \mathcal{D}^{n}\text{ and some }c>0. \label{satisfy both}$$It then follows that the norm $\mathfrak{N}_{\mathcal{I}}$ of the bilinear form $\mathcal{I}$ can be estimated using $\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }=\left\Vert g\right\Vert _{L^{q^{\prime }}\left( \omega \right) }=1$ chosen so that $\mathfrak{N}_{\mathcal{I}}=\mathcal{I}\left( f,g\right) $:$$\begin{aligned} \mathfrak{N}_{\mathcal{I}} &=&\mathcal{I}\left( f,g\right) \leq \frac{1}{c}\boldsymbol{E}_{\Omega \times \Omega }\mathcal{B}_{\mathcal{D}^{m}\times \mathcal{D}^{n}}\left( f,g\right) \\ &\leq &\frac{1}{c}\left\{ \boldsymbol{E}_{\Omega \times \Omega }\mathcal{B}_{\mathcal{D}_{\limfunc{good}}^{m}\times \mathcal{D}_{\limfunc{good}}^{n}}\left( f,g\right) +\boldsymbol{E}_{\Omega \times \Omega }\mathcal{B}_{\mathcal{D}^{m}\times \mathcal{D}_{\limfunc{bad}}^{n}}\left( f,g\right) +\boldsymbol{E}_{\Omega \times \Omega }\mathcal{B}_{\mathcal{D}_{\limfunc{bad}}^{m}\times \mathcal{D}^{n}}\left( f,g\right) \right\} .\end{aligned}$$ Now the conditional probability that a given cube $I$ is bad in a grid $\mathcal{D}^{m}$ that contains it is small, in fact (see e.g. 
[@NTV2], [@NTV4], [@Vol] or [@SaShUr Subsubsection 3.1.1]) we have $$\boldsymbol{P}_{\Omega }\left\{ \mathcal{D}^{m}:I\text{ is bad in }\mathcal{D}^{m}\mid \text{conditioned on }I\in \mathcal{D}^{m}\right\} \leq C2^{-\varepsilon \mathbf{r}}.$$Thus we obtain$$\begin{aligned} \boldsymbol{E}_{\Omega \times \Omega }\mathcal{B}_{\mathcal{D}^{m}\times \mathcal{D}_{\limfunc{bad}}^{n}}\left( f,g\right) &\leq &C2^{-\varepsilon \mathbf{r}}\mathfrak{N}_{\mathcal{I}}\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{q^{\prime }}\left( \omega \right) }=C2^{-\varepsilon \mathbf{r}}\mathfrak{N}_{\mathcal{I}}\ , \\ \boldsymbol{E}_{\Omega \times \Omega }\mathcal{B}_{\mathcal{D}_{\limfunc{bad}}^{m}\times \mathcal{D}^{n}}\left( f,g\right) &\leq &C2^{-\varepsilon \mathbf{r}}\mathfrak{N}_{\mathcal{I}}\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{q^{\prime }}\left( \omega \right) }=C2^{-\varepsilon \mathbf{r}}\mathfrak{N}_{\mathcal{I}}\ ,\end{aligned}$$and hence$$\mathfrak{N}_{\mathcal{I}}\leq \frac{1}{c}\boldsymbol{E}_{\Omega \times \Omega }\mathcal{B}_{\mathcal{D}_{\limfunc{good}}^{m}\times \mathcal{D}_{\limfunc{good}}^{n}}\left( f,g\right) +\frac{2C}{c}2^{-\varepsilon \mathbf{r}}\mathfrak{N}_{\mathcal{I}}\ ,$$which gives$$\mathfrak{N}_{\mathcal{I}}\leq \frac{1}{c-2C2^{-\varepsilon \mathbf{r}}}\boldsymbol{E}_{\Omega \times \Omega }\mathcal{B}_{\mathcal{D}_{\limfunc{good}}^{m}\times \mathcal{D}_{\limfunc{good}}^{n}}\left( f,g\right)$$provided $\varepsilon \mathbf{r}$ is chosen sufficiently large that $2C2^{-\varepsilon \mathbf{r}}<c$. Thus we see that in order to prove Theorem \[theorem rev doub\], we need only consider the ‘good’ bilinear form $\mathcal{B}_{\mathcal{D}_{\limfunc{good}}^{m}\times \mathcal{D}_{\limfunc{good}}^{n}}\left( f,g\right) $ and estimate it independently of the partial grid of good rectangles $\mathcal{D}_{\limfunc{good}}^{m}\times \mathcal{D}_{\limfunc{good}}^{n}$. 
Then using arguments as in [@TaYa] or above, the proof of Theorem \[theorem rev doub\] is reduced to the following Carleson embedding for ‘good’ rectangles. **Carleson embedding**: Suppose that $1<s<r<\infty $ and that $\mu $ is a product reverse doubling measure on $\mathbb{R}^{m}\times \mathbb{R}^{n}$. Then we have$$\left\{ \sum_{R\in \mathcal{D}_{\limfunc{good}}^{m}\times \mathcal{D}_{\limfunc{good}}^{n}}\left\vert R\right\vert _{\mu }^{\frac{r}{s}}\left( \frac{1}{\left\vert R\right\vert _{\mu }}\int_{R}fd\mu \right) ^{r}\right\} ^{\frac{1}{r}}\leq C_{s,r}\left\Vert f\right\Vert _{L^{s}\left( \mu \right) }\ ,\ \ \ \ \ f\geq 0,$$where $C_{s,r}$ depends only on $s$, $r$, the reverse doubling constants for $\mu $, and the goodness parameters $\varepsilon ,\mathbf{r}$. In particular, $C_{s,r}$ is independent of the partial grid $\mathcal{D}_{\limfunc{good}}^{m}\times \mathcal{D}_{\limfunc{good}}^{n}$. Continuing to follow the iteration argument of Tanaka and Yabuta as in [@TaYa] or above, further reduces matters to proving the following Carleson condition on cubes for a reverse doubling measure $\mu $ on $\mathbb{R}^{N}$ with exponent $\eta >0$, and a power $\rho >1$:$$\sum_{Q\in \mathcal{D}_{\limfunc{good}}^{N}:\ Q\subset P}\left\vert Q\right\vert _{\mu }^{\rho }\leq C_{N,\mathbf{r},\varepsilon ,\rho }\left\vert P\right\vert _{\mu }^{\rho }\ . \label{Car rev doub}$$Indeed, the reader can easily verify that the arguments work just as well for the subgrids $\mathcal{D}_{\limfunc{good}}^{m}$ and $\mathcal{D}_{\limfunc{good}}^{n}$ in place of the grids $\mathcal{D}^{m}$ and $\mathcal{D}^{n}$. It is now at this point that the goodness of the cubes $Q$ plays a crucial role in conjunction with the reverse doubling property. 
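For intuition, (\[Car rev doub\]) can be checked by hand for Lebesgue measure on $\mathbb{R}^{N}$, which is reverse doubling with exponent $\eta =N$: summing $\left\vert Q\right\vert ^{\rho }$ over *all* dyadic subcubes of $P$ (a superset of the good cubes) gives a convergent geometric series. The sketch below is our own illustration, not from the paper; the function name `carleson_sum` and the parameter values are our choices.

```python
# Carleson condition for Lebesgue measure on R^N: at generation k there are 2^{Nk}
# dyadic subcubes Q of P, each with |Q| = 2^{-Nk} |P|, so for rho > 1
#   sum_{Q subset P} |Q|^rho = |P|^rho * sum_k 2^{-N(rho-1)k}
#                            = |P|^rho / (1 - 2^{-N(rho-1)}).

def carleson_sum(N, rho, P_measure=1.0, depth=60):
    # brute-force the series over the first `depth` dyadic generations
    return sum((2 ** (N * k)) * (2 ** (-N * k) * P_measure) ** rho
               for k in range(depth + 1))

N, rho = 2, 1.5
closed_form = 1.0 / (1.0 - 2.0 ** (-N * (rho - 1)))
assert abs(carleson_sum(N, rho) - closed_form) < 1e-9
# the constant is uniform in P: scaling |P| scales both sides by |P|^rho
assert abs(carleson_sum(N, rho, P_measure=4.0) / 4.0 ** rho - closed_form) < 1e-9
```

For a general reverse doubling $\mu $ the subcubes no longer scale exactly, and it is the goodness of $Q$ that substitutes for this exact scaling, as the proof of (\[Car rev doub\]) shows.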
To see (\[Car rev doub\]), recall the goodness parameters $0<\varepsilon <1$ and $\mathbf{r}\in \mathbb{N}$ and observe that if $Q$ is a good cube contained in $P$ then **either** $\ell \left( Q\right) \geq 2^{-\mathbf{r}}\ell \left( P\right) $ and we can use the trivial estimate $\left\vert Q\right\vert _{\mu }^{\rho }\leq \left\vert P\right\vert _{\mu }^{\rho }$, **or** $\ell \left( Q\right) <2^{-\mathbf{r}}\ell \left( P\right) $ in which case $\limfunc{dist}\left( Q,\partial P\right) \geq 2\ell \left( Q\right) ^{\varepsilon }\ell \left( P\right) ^{1-\varepsilon }$. In this latter case we note that if $\ell \left( Q\right) =2^{-k}\ell \left( P\right) $ then $$2^{k\left( 1-\varepsilon \right) }Q=\left( \frac{\ell \left( P\right) }{\ell \left( Q\right) }\right) ^{1-\varepsilon }Q\subset \frac{2\ell \left( Q\right) ^{\varepsilon }\ell \left( P\right) ^{1-\varepsilon }}{\ell \left( Q\right) }Q\subset \frac{\limfunc{dist}\left( Q,\partial P\right) }{\ell \left( Q\right) }Q\subset P$$and so by reverse doubling we have $$\left\vert Q\right\vert _{\mu }\leq C2^{-\eta k\left( 1-\varepsilon \right) }\left\vert \left( \frac{\ell \left( P\right) }{\ell \left( Q\right) }\right) ^{1-\varepsilon }Q\right\vert _{\mu }\leq C2^{-\eta \left( 1-\varepsilon \right) k}\left\vert P\right\vert _{\mu }\ .$$Thus we can estimate$$\begin{aligned} \sum_{Q\in \mathcal{D}_{\limfunc{good}}^{N}:\ Q\subset P}\left\vert Q\right\vert _{\mu }^{\rho } &\leq &\sum_{k=0}^{\mathbf{r}}2^{Nk}\left\vert P\right\vert _{\mu }^{\rho }+\sum_{k=\mathbf{r}+1}^{\infty }\sum_{Q\in \mathcal{D}_{\limfunc{good}}^{N}:\ \ell \left( Q\right) =2^{-k}\ell \left( P\right) }\left\vert Q\right\vert _{\mu }^{\rho -1}\left\vert Q\right\vert _{\mu } \\ &\leq &C_{N,\mathbf{r}}\left\vert P\right\vert _{\mu }^{\rho }+\sum_{k=\mathbf{r}+1}^{\infty }\sum_{Q\in \mathcal{D}_{\limfunc{good}}^{N}:\ \ell \left( Q\right) =2^{-k}\ell \left( P\right) }\left( C2^{-\eta \left( 1-\varepsilon \right) k}\left\vert P\right\vert _{\mu }\right) ^{\rho 
-1}\left\vert Q\right\vert _{\mu } \\ &\leq &C_{N,\mathbf{r}}\left\vert P\right\vert _{\mu }^{\rho }+\left\{ \sum_{k=0}^{\infty }C^{\rho -1}2^{-\eta \left( 1-\varepsilon \right) \left( \rho -1\right) k}\right\} \left\vert P\right\vert _{\mu }^{\rho }=C_{N,\mathbf{r},\varepsilon ,\rho }\left\vert P\right\vert _{\mu }^{\rho }\ .\end{aligned}$$This completes the proof of (\[Car rev doub\]), and hence also that of Theorem \[theorem rev doub\]. Concluding remarks ------------------ In the case of kernels $K=K_{\alpha ,\beta }^{m,n}$ given by (\[def K\]), or more generally that satisfy (\[satisfy both\]), one can assume, for each weight separately, either rectangle reverse doubling or a half $\theta $-bump condition, in order to obtain norm boundedness. For example, the following hybrid theorem holds. Suppose $1<p<q<\infty $. Let $\sigma $ be a product reverse doubling weight on $\mathbb{R}^{m}\times \mathbb{R}^{n}$, let $d\omega \left( x\right) =w\left( x\right) ^{q}dx$ be absolutely continuous with respect to Lebesgue measure, and let $K=K_{\alpha ,\beta }^{m,n}:\mathcal{R}^{m,n}\rightarrow \left[ 0,\infty \right) $ be as in (\[def K\]), or more generally satisfy (\[satisfy both\]). 
Then the norm $\mathbb{N}_{K}\left( \sigma ,\omega \right) $ of the positive bilinear inequality,$$\sum_{R\in \mathcal{R}^{m,n}}K\left( R\right) \left( \int_{R}fd\sigma \right) \left( \int_{R}gd\omega \right) \leq \mathbb{N}_{K}\left( \sigma ,\omega \right) \ \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{q^{\prime }}\left( \omega \right) }\ ,\ \ \ \ \ f,g\geq 0,$$is finite for all products of grids $\mathcal{R}^{m,n}=\mathcal{D}^{m}\times \mathcal{D}^{n}$ if the half $\theta $-bump rectangle characteristic $\mathbb{A}_{K,\theta }^{\omega }\left( \sigma ,\omega \right) $ is finite, where$$\begin{aligned} \mathbb{A}_{K,\theta }^{\omega }\left( \sigma ,\omega \right) &\equiv &\sup_{R\in \mathcal{R}^{m,n}}K\left( R\right) \ \left\vert R\right\vert _{\sigma }^{\frac{1}{p^{\prime }}}\ \left[ \left\vert R\right\vert ^{\frac{1}{q\theta ^{\prime }}}\left( \int_{R}w^{q\theta }\right) ^{\frac{1}{q\theta }}\right] \\ &=&\sup_{R\in \mathcal{R}^{m,n}}K\left( R\right) \ \left\vert R\right\vert _{\omega ,\theta }^{\frac{1}{q}}\ \left\vert R\right\vert _{\sigma }^{\frac{1}{p^{\prime }}}\ .\end{aligned}$$ The proof is an easy exercise in combining the proofs of Theorems \[variation\] and \[theorem rev doub\] above. Appendix ======== We say that a weight $\mu $ on the real line is **strongly** reverse doubling if there is $\beta <1$ such that$$\left\vert I_{\limfunc{left}}\right\vert _{\mu },\left\vert I_{\limfunc{right}}\right\vert _{\mu }\leq \beta \left\vert I\right\vert _{\mu }\text{ for all intervals }I,$$where if $I=\left[ a,b\right) $, then $I_{\limfunc{left}}=\left[ a,\frac{a+b}{2}\right) $ and $I_{\limfunc{right}}=\left[ \frac{a+b}{2},b\right) $ are the left and right halves of $I$ respectively. 
A strongly reverse doubling weight on $\mathbb{R}$ is a doubling weight on $\mathbb{R}$, since if we choose $N$ so large that $\beta ^{N}<\frac{1}{4}$, then for $I=\left[ a,b\right) $, we have $$\left\vert \left[ a,a+\frac{b-a}{2^{N}}\right) \right\vert _{\mu },\ \ \ \left\vert \left[ b-\frac{b-a}{2^{N}},b\right) \right\vert _{\mu }\leq \beta ^{N}\left\vert I\right\vert _{\mu }<\frac{1}{4}\left\vert I\right\vert _{\mu }\ .$$Hence$$\begin{aligned} \left\vert \left[ a+\frac{b-a}{2^{N}},b-\frac{b-a}{2^{N}}\right) \right\vert _{\mu } &=&\left\vert \left[ a,b\right) \right\vert _{\mu }-\left\vert \left[ a,a+\frac{b-a}{2^{N}}\right) \right\vert _{\mu }-\left\vert \left[ b-\frac{% b-a}{2^{N}},b\right) \right\vert _{\mu } \\ &\geq &\left( 1-\frac{1}{4}-\frac{1}{4}\right) \left\vert I\right\vert _{\mu }=\frac{1}{2}\left\vert I\right\vert _{\mu }\ ,\end{aligned}$$where the length of the interval $\left[ a+\frac{b-a}{2^{N}},b-\frac{b-a}{% 2^{N}}\right) $ is $\frac{2^{N-1}-1}{2^{N-1}}\ell \left( I\right) $. Thus with $\gamma =\frac{2^{N-1}}{2^{N-1}-1}>1$, we have for every interval $K$,$$\left\vert \gamma K\right\vert _{\mu }\leq 2\left\vert K\right\vert _{\mu },\ \text{hence }\left\vert 2K\right\vert _{\mu }\leq 2^{M}\left\vert K\right\vert _{\mu }\text{ if }\gamma ^{M}\geq 2,$$which shows that $\mu $ is doubling. Similarly we see that a strongly rectangle reverse doubling weight on $\mathbb{R}^{N}$ is a rectangle doubling weight on $\mathbb{R}^{N}$. 
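To make the constants in the preceding argument concrete, here is a worked numeric instance (the value $\beta =\frac{3}{5}$ is an illustrative choice of ours, not taken from the text):

```latex
\beta =\tfrac{3}{5}:\qquad \beta ^{3}=\tfrac{27}{125}<\tfrac{1}{4}
\ \Longrightarrow \ N=3,\qquad
\gamma =\frac{2^{N-1}}{2^{N-1}-1}=\frac{4}{3},\qquad
\gamma ^{3}=\frac{64}{27}\geq 2\ \Longrightarrow \ M=3,
```

so that $\left\vert 2K\right\vert _{\mu }\leq 2^{M}\left\vert K\right\vert _{\mu }=8\left\vert K\right\vert _{\mu }$ for every interval $K$; since $\gamma ^{2}=\frac{16}{9}<2$, the exponent $M=3$ is the smallest that works here.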
Here $\mu $ is strongly rectangle reverse doubling if there is $\beta <1$ such that$$\begin{aligned} &&\left\vert I^{1}\times ...\times I_{\limfunc{left}}^{k}\times ...\times I^{N}\right\vert _{\mu },\left\vert I^{1}\times ...\times I_{\limfunc{right}}^{k}\times ...\times I^{N}\right\vert _{\mu } \\ &\leq &\beta \left\vert I^{1}\times ...\times I^{N}\right\vert _{\mu }\text{ for all rectangles }I^{1}\times ...\times I^{N}\text{\ and }1\leq k\leq N,\end{aligned}$$and $\mu $ is rectangle doubling if there is $C>0$ such that$$\left\vert \left( 2I^{1}\right) \times ...\times \left( 2I^{N}\right) \right\vert _{\mu }\leq C\left\vert I^{1}\times ...\times I^{N}\right\vert _{\mu }\text{ for all rectangles }I^{1}\times ...\times I^{N}.$$ Suppose that $\mu $ is a doubling weight on $\mathbb{R}^{N}$. Then $d\nu \left( x\right) \equiv \mathbf{1}_{\left[ 0,\infty \right) ^{N}}\left( x\right) \mu \left( x\right) dx$ is a reverse doubling weight on $\mathbb{R}^{N}$ that is not a doubling weight on $\mathbb{R}^{N}$. <span style="font-variant:small-caps;">Hytönen, Tuomas,</span> *The two weight inequality for the Hilbert transform with general measures*, `arXiv:1312.0843v2`. <span style="font-variant:small-caps;">Hytönen, Tuomas, Lacey, Michael T., and Pérez, C.,</span> *Sharp weighted bounds for the $q$-variation of singular integrals*, Bull. Lon. Math. Soc. **45** (2013), 529-540. <span style="font-variant:small-caps;">Nazarov, F., Treil, S. and Volberg, A.,</span> *The $Tb$-theorem on non-homogeneous spaces,* Acta Math. **190** (2003), no. 2, MR 1998349 (2005d:30053). <span style="font-variant:small-caps;">F. Nazarov, S. Treil and A. Volberg,</span> *Two weight estimate for the Hilbert transform and corona decomposition for non-doubling measures*, preprint (2004) `arxiv:1003.1596` <span style="font-variant:small-caps;">E. Sawyer and R. L. Wheeden,</span> *Weighted inequalities for fractional integrals on Euclidean and homogeneous spaces,* Amer. J. 
Math. **114** (1992), 813-874. <span style="font-variant:small-caps;">E. Sawyer, C.-Y. Shen and I. Uriarte-Tuero,</span> *A two weight local $Tb$ theorem for the Hilbert transform*, `arXiv 1709.09595v2.` <span style="font-variant:small-caps;">Sawyer, Eric T., Shen, Chun-Yen, Uriarte-Tuero, Ignacio,</span> *A good-$\lambda $ lemma, two weight $T1$ theorems without weak boundedness, and a two weight accretive global $Tb$ theorem,* Harmonic Analysis, Partial Differential Equations and Applications (In Honor of Richard L. Wheeden), Birkhäuser 2017 (see also `arXiv:1609.08125v2`). <span style="font-variant:small-caps;">Eric T. Sawyer and Zipeng Wang,</span> *Weighted inequalities for product fractional integrals*, `arXiv 1702.03870v6.` <span style="font-variant:small-caps;">H. Tanaka and K. Yabuta,</span> *The $n$-linear embedding theorem for dyadic rectangles,* `arXiv 1710.08059v1.` <span style="font-variant:small-caps;">A. Volberg,</span> *Calderón-Zygmund capacities and operators on nonhomogeneous spaces,* CBMS Regional Conference Series in Mathematics (2003), MR 2019058 (2005c:42015). [^1]: In a nutshell, they use $p<r<q$ and Hölder’s inequality with $r$ and $r^{\prime }$ to separate the measures $\sigma $ and $\omega $ early on, and then use iteration on the resulting ‘one weight’ Carleson embeddings, the point being that iteration works better with one weight than with two weights. [^2]: In [@TaYa] the authors use a strong form of reverse doubling on rectangles, which is equivalent to rectangle doubling. See the appendix below.
--- abstract: 'We introduce HyperLex - a dataset and evaluation resource that quantifies the extent of the semantic category membership, that is, the <span style="font-variant:small-caps;">type-of</span> relation, also known as the hyponymy-hypernymy or lexical entailment (LE) relation, between 2,616 concept pairs. Cognitive psychology research has established that typicality and category/class membership are computed in human semantic memory as a gradual rather than binary relation. Nevertheless, most NLP research, and existing large-scale inventories of concept category membership (WordNet, DBPedia, etc.), treat category membership and LE as binary. To address this, we asked hundreds of native English speakers to indicate typicality and strength of category membership between a diverse range of concept pairs on a crowdsourcing platform. Our results confirm that category membership and LE are indeed more gradual than binary. We then compare these human judgements with the predictions of automatic systems, which reveals a huge gap between human performance and state-of-the-art LE, distributional and representation learning models, and substantial differences between the models themselves. We discuss a pathway for improving semantic models to overcome this discrepancy, and indicate future application areas for improved graded LE systems.' author: - 'Ivan Vulić[^1]' - Daniela Gerz - 'Douwe Kiela[^2]' - 'Felix Hill[^3]' - Anna Korhonen bibliography: - 'references\_hyperlex.bib' nocite: '[@Young:2014tacl]' title: 'HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment' --- Introduction {#s:intro} ============ Most native speakers of English, in almost all contexts and situations, would agree that [*dogs*]{}, [*cows*]{}, or [*cats*]{} are [*animals*]{}, and that *tables* or *pencils* are not. However, for certain concepts, membership of the animal category is less clear-cut. 
Whether lexical concepts such as *dinosaur*, *human being* or *amoeba* are considered animals seems to depend on the context in which such concepts are described, the perspective of the speaker or listener and even the formal scientific knowledge of the interlocutors. Despite this indeterminacy, when communicating, humans intuitively reason about such relations between concepts and categories [@Quillian:1967bs; @Collins:1969article]. Indeed, the ability to quickly perform inference over such networks and arrive at coherent knowledge representations is crucial for human language understanding. The Princeton WordNet lexical database [@Miller:1995cacm; @Fellbaum:1998wn] is perhaps the best known attempt to formally represent such a semantic network. In WordNet, concepts are organised in a hierarchical fashion, in an attempt to replicate observed aspects of human semantic memory [@Collins:1972article; @Beckwith:1991wn]. One of the fundamental relations between concepts in WordNet is the so-called [type-of]{} or [*hypernymy-hyponymy*]{} relation that exists between category concepts such as *animal* and their constituent members such as *cat* or *dog*. The type-of relation is particularly important in language understanding because it underlies the lexical entailment (LE) relation. Simply put, an instantiation of a member concept such as a cat *entails* the existence of an *animal*. This lexical entailment in turn governs many cases of phrasal and sentential entailment: if we know that *a cat is in the garden*, we can quickly and intuitively conclude that *an animal is in the garden* too.[^4] Because of this fundamental connection to language understanding, the automatic detection and modelling of lexical entailment has been an area of much focus in natural language processing [@Bos:2005emnlp; @Dagan:2006pascal; @Baroni:2012eacl; @Beltagy:2013sem inter alia]. 
The ability to effectively detect and model both lexical and phrasal entailment in a human-like way may be critical for numerous related applications such as question answering, information retrieval, information extraction, and text summarisation and generation [@Androutsopoulos:2010jair]. For instance, in order to answer a question such as “[*Which mammal has a strong bite?*]{}”, a question-answering system has to know that a jaguar or a grizzly bear are types of mammals, while a crocodile or a piranha are not. Although inspired to some extent by theories of human semantic memory, large-scale inventories of semantic concepts, such as WordNet, typically make many simplifying assumptions, particularly regarding the nature of the *type-of* relation, and consequently the effect of LE. In WordNet, for instance, all semantic relations are represented in a binary way (i.e., concept $X$ entails $Y$) rather than gradual (e.g., $X$ entails $Y$ to a certain degree). However, since at least the pioneering experiments of prototypes [@Rosch:1973:natural; @Rosch:1975cognitive], it has been known that, for a given semantic category, certain member concepts are consistently understood as more central to the category than others (even when controlling for clearly confounding factors such as frequency) [@Coleman:1981language; @Medin:1984jep; @Lakoff:1990book; @Hampton:2007cogsci]. In other words, WordNet and similar resources fail to capture the fact that category membership is a gradual semantic phenomenon. This limitation of WordNet also characterises much of the LE research in NLP, as we discuss later in Sect. \[s:motivation\]. To address these limitations, the present work is concerned with [*graded lexical entailment*]{}: the degree of the LE relation between two concepts on a continuous scale. Thanks to the availability of crowdsourcing technology, we conduct a variant of the seminal behavioural data collection by Rosch , but on a massive scale. 
To do so, we introduce the idea of graded or [*soft*]{} LE, and design a human rating task for $(X,Y)$ concept pairs based on the following question: [*To what degree is X a type of Y?*]{}. We arrive at a data set with 2,616 concept pairs, each rated by at least 10 human raters, scored by the degree to which they exhibit typicality and semantic category membership and, equivalently, LE. Using this dataset, HyperLex,[^5] we investigate two questions: - [**(Q1)**]{} Do we observe the same effects of typicality, graded membership and graded lexical entailment in human judgements as observed by Rosch? Do humans intuitively distinguish between central and non-central members of a category/class? Do humans distinguish between full and partial membership in a class as discussed by Kamp and Partee ? - [**(Q2)**]{} Is the current LE modeling and representation methodology as applied in NLP research and technology sufficient to accurately capture graded lexical entailment automatically? What is the gap between current automatic systems and human performance in the graded LE task? The article is structured as follows. We define and discuss graded LE in Sect. \[s:graded\]. In Sect. \[s:motivation\], we survey benchmarking resources from the literature that pertain to semantic category membership, LE identification or evaluation, and motivate the need for a new, more expressive resource. In Sect. \[s:hyperlex\], we describe the design and development of HyperLex, and outline the various semantic dimensions (such as POS usage, hypernymy levels and concreteness levels) along which these concept pairs are designed to vary. This allows us to address Q1 in Sect. \[s:analysis\], where we present a series of qualitative analyses of the data gathered and collated into HyperLex. 
High inter-annotator agreement scores (pairwise and mean Spearman’s $\rho$ correlations around 0.85 on the entire dataset, similar correlations on noun and verb subsets) indicate that participants found it unproblematic to rate consistently the graded LE relation for the full range of concepts. These analyses reveal that the data in HyperLex enhances, rather than contradicts or undermines the information in WordNet, in the sense that hypernymy-hyponymy pairs receive highest average ratings in HyperLex compared to all other WordNet relations. We also show that participants are able to capture the implicit asymmetry of the graded LE relation by examining ratings of $(X,Y)$ and reversed $(Y,X)$ pairs. Most importantly, our analysis shows that the effects of typicality, vagueness, and gradual nature of LE are indeed captured in human judgements. For instance, graded LE scores indicate that humans rate concepts such as [*to talk*]{} or [*to speak*]{} as more typical instances of the class [*to communicate*]{} than concepts such as [*to touch*]{}, or [*to pray*]{}. In Sect. \[s:experiments\] we then turn our attention to Q2: we evaluate the performance of a wide range of LE detection or measurement approaches. This review covers: (i) distributional models relying on the distributional inclusion hypothesis [@Geffet:2005acl; @Lenci:2012sem] and semantic generality computations [@Santus:2014eacl], (ii) multi-modal approaches [@Kiela:2015acl], (iii) WordNet-based approaches [@Pedersen:2004aaai], (iv) a selection of state-of-the-art recent word embeddings, some optimised for similarity on semantic similarity data sets [@Mikolov:2013nips; @Levy:2014acl; @Wieting:2015tacl inter alia], others developed to better capture the asymmetric LE relation [@Vilnis:2015iclr; @Vendrov:2016iclr]. 
Due to its size, and unlike other word-pair scoring data sets such as SimLex-999 or WordSim-353, HyperLex comes with standard train/dev/test splits (both [*random*]{} and [*lexical*]{} [@Levy:2015naacl; @Shwartz:2016arxiv]), so that it can be used for supervised learning. We therefore evaluate several prominent supervised LE architectures [@Baroni:2012eacl; @Weeds:2014coling; @Roller:2014coling inter alia]. Although we observe interesting differences between the models, our findings indicate clearly that none of the currently available models or approaches accurately captures the graded LE relation reflected in the judgements of human subjects. This study therefore calls for new paradigms and solutions capable of capturing the gradual nature of semantic relations such as hypernymy in hierarchical semantic networks. In Sect. \[s:application\], we turn to the future and discuss potential applications of the graded LE concept and HyperLex. We conclude in Sect. \[s:conclusion\] by summarising the key aspects of our contribution. HyperLex offers robust, data-driven insight into how humans perceive the concepts of typicality and graded membership within the graded LE relation. We hope that this will in turn incentivise research into language technology that both reflects human semantic memory more faithfully and interprets and models linguistic entailment more effectively.

Graded Lexical Entailment {#s:graded}
=========================

\[ss:what\]

#### Note on Terminology

Due to dual and inconsistent use in prior work, we use the term [*lexical entailment (LE)*]{} in its stricter definition. It refers precisely to the taxonomical [*hyponymy-hypernymy*]{} relation, also known as the [is-a]{} or [type-of]{} relation [@Hearst:1992coling; @Weeds:2004coling; @Snow:2004nips; @Pantel:2006acl; @Do:2010emnlp inter alia], e.g., [*snake*]{} is a [type-of]{} [*animal*]{}, [*computer*]{} is a [type-of]{} [*machine*]{}.
This is different from the definition used in [@Zhitomirsky:2009cl; @Kotlerman:2010nle; @Turney:2015nle] as [*substitutable*]{} lexical entailment: this relation holds for a pair of words $(X,Y)$ if a possible meaning of one word (i.e., $X$) entails a meaning of the other, and the entailing word can substitute for the entailed one in some typical contexts. This definition is looser and more general than the [type-of]{} definition, as it also encompasses other lexical relations such as synonymy, metonymy, meronymy, etc.[^6]

#### Definitions

The classical definition of [*ungraded lexical entailment*]{} is as follows: given a concept word pair $(X,Y)$, $Y$ is a hypernym of $X$ if and only if $X$ is a type of $Y$, or equivalently every $X$ is a $Y$.[^7] [*Graded lexical entailment*]{}, on the other hand, defines the strength of the lexical entailment relation between the two concepts. Given the concept pair $(X,Y)$ and the entailment strength $s$, the triplet $(X,Y,s)$ defines to what degree $Y$ is a hypernym of $X$ (i.e., [*to what degree $X$ is a type of $Y$*]{}), where the degree is quantified by $s$, e.g., to what degree [*snake*]{} is a [type-of]{} [*animal*]{}. It may be seen as approximate or [*soft*]{} entailment, a weaker form of the classical entailment variant [@Esteva:2012fuzzy; @Bankova:2016arxiv]. By imposing a threshold $thr$ on $s$, all graded relations may be straightforwardly converted to discrete ungraded decisions.

#### (Proto)typicality, Graded Membership, and Graded LE

The graded LE relation as described by the intuitive question “to what degree is X a type of Y?” encompasses two distinct phenomena described in cognitive science research (cf. [@Hampton:2007cogsci]). First, it can be seen as the measure of [*typicality*]{} in graded cognitive categorisation [@Rosch:1973:natural; @Rosch:1975cognitive; @Medin:1984jep; @Lakoff:1990book], where some instances of a category are more central than others, as illustrated in Fig. \[fig:animal\]-Fig.
\[fig:move\]. It measures to what degree some class instance $X$ is a prototypical example of the class/concept $Y$. For instance, when humans are asked to give an example instance of the concept [*sport*]{}, it turns out that [*football*]{} and [*basketball*]{} are more frequently cited than [*wrestling*]{}, [*chess*]{}, [*softball*]{}, or [*racquetball*]{}. Another viewpoint stresses that “prototypes serve as reference points for the categorisation of not-so-clear instances” [@Taylor:2003book]. Osherson and Smith further developed the theory of (proto)typicality by recognising that there exist concepts “that lack prototypes while possessing degrees of exemplification”. They list the famous example of the concept *building* without a clear prototype; however, people tend to agree that most banks are more typical buildings than, say, barns or pile dwellings. Second, the graded LE relation also arises when one asks about the applicability of concepts to objects: the boundaries between a category and its instances are much more often fuzzy and vague than unambiguous and clear-cut [@Kamp:1995cog]. In other words, *graded membership* (often termed *vagueness*) measures the graded applicability of a concept to different instances, e.g., it is not clear to what extent different objects in our surroundings (e.g., *tables*, *pavements*, *washing machines*, *stairs*, *benches*) could be considered members of the category *chair*, despite the fact that such objects can be used as “objects on which one can sit”. The notions of typicality and graded membership are not limited to concrete or nominal concepts, as similar gradience effects are detected for more complex and abstract concepts (e.g., [*“To what degree is THESIS an instance/type of STATEMENT?”*]{}) [@Coleman:1981language], or action verbs [@Pulman:1983book] and adjectives [@Dirven:1986book].
In short, graded membership or vagueness quantifies “whether or not and to what degree an instance falls within a conceptual category”, while typicality reflects “how representative an exemplar is of a category” [@Hampton:2007cogsci]. The subtle distinction between the two is discussed and debated at length from the philosophical and psychological perspective [@Osherson:1981cog; @Kamp:1995cog; @Osherson:1997cog; @Hampton:2006; @Hampton:2007cogsci; @Blutner:2013; @Decock:2014nous]. In our crowdsourcing study with non-expert workers, we have deliberately avoided any explicit differentiation between the two phenomena, which are captured by the same intuitive ‘to-what-degree’ question; this reduces the complexity of the study design and allows the two phenomena to vary freely in the collected data, both in their quantity and in the concept pairs that represent them. In addition, the distinction is often not evident for verb concepts. We leave further developments with respect to the two related phenomena of typicality and vagueness for future work, and refer the interested reader to the aforementioned literature.

#### Relation to Relational Similarity

A strand of related research on relational similarity [@Turney:2006cl; @Jurgens:2012semeval] also attempts to assign a score $s$ to a pair of concepts $(X,Y)$. Note that there exists a fundamental difference between relational similarity and graded lexical entailment. In the latter, $s$ refers to the degree of the LE relation in the $(X,Y)$ pair, that is, to the levels of typicality and graded membership of the instance $X$ for the class $Y$, while the former quantifies the typicality of the pair $(X,Y)$ for some fixed lexical relation class $R$ [@Bejar:1991book; @Vylomova:2016acl], e.g., to what degree the pair [*(snake, animal)*]{} reflects a typical LE relation or a typical synonymy relation.[^8]

#### Graded LE vs. Semantic Similarity

A plethora of current evaluations in NLP and representation learning focuses almost exclusively on semantic similarity and relatedness. Semantic similarity as quantified by, e.g., SimLex-999 or SimVerb-3500 [@Gerz:2016emnlp] may be redefined as a [*graded synonymy relation*]{}: the graded scores there, in fact, refer to the strength of the synonymy relation between any pair of concepts $(X,Y)$. One could say that semantic similarity aims to answer the question [*to what degree are $X$ and $Y$ similar*]{}.[^9] Therefore, an analogy with previously annotated semantic similarity data sets may be utilised to introduce the graded LE task and to facilitate the construction of HyperLex.

Design Motivation {#s:motivation}
=================

Lexical Entailment Evaluations in NLP
-------------------------------------

Since work in NLP and human language understanding focuses on the ungraded version of the LE relation, we briefly survey the main ungraded LE evaluation protocols in Sect. \[ss:evalprot\], followed by an overview of benchmarking LE evaluation sets in Sect. \[ss:sets\]. We show that none of the existing evaluation protocols coupled with existing evaluation sets enables a satisfactory evaluation of the capability of statistical models to capture graded LE. Unlike existing evaluation sets, our new HyperLex evaluation set, built by collecting human judgements through a crowdsourcing study, also enables qualitative linguistic analysis of how humans perceive and rate graded lexical entailment.
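Before turning to the ungraded protocols, recall from Sect. \[s:graded\] that graded triplets $(X,Y,s)$ reduce to discrete ungraded decisions by imposing a threshold $thr$ on $s$, so graded ratings can also feed ungraded evaluations. A minimal sketch of this conversion (the helper name `to_ungraded` and the threshold value 7.0 are our own illustrative choices, not part of the released data; the example ratings follow Tab. \[tab:examples\]):

```python
# Convert graded LE triplets (X, Y, s) into ungraded binary decisions by
# thresholding the strength s. The threshold (7.0 on the 0-10 HyperLex
# scale) is an illustrative choice, not a value prescribed by the data set.

def to_ungraded(triplets, thr=7.0):
    """Map each (X, Y, s) to (X, Y, 1) if s >= thr, else (X, Y, 0)."""
    return [(x, y, 1 if s >= thr else 0) for (x, y, s) in triplets]

graded = [
    ("chemistry", "science", 10.0),   # clear type-of pair
    ("gate", "door", 6.53),           # borderline pair
    ("ear", "head", 0.0),             # no type-of relation (meronymy)
]
print(to_ungraded(graded))
# -> [('chemistry', 'science', 1), ('gate', 'door', 0), ('ear', 'head', 0)]
```

Varying $thr$ trades precision against recall of the resulting binary labels; nothing in the graded ratings themselves fixes a canonical value.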
### Evaluation Protocols {#ss:evalprot}

Evaluation protocols for the lexical entailment or type-of relation in NLP, based on the classical definition of ungraded LE, may be roughly clustered as follows:

#### (i) Entailment Directionality

Given two words $(X,Y)$ that are known to stand in a lexical entailment relation, the system has to predict the relation directionality, that is, which word is the hypernym and which word is the hyponym. More formally, the directionality function $f_{dir}$ simply maps to $1$ when $Y$ is the hypernym, and to $-1$ otherwise.

#### (ii) Entailment Detection

The system has to predict whether there exists a lexical entailment relation between two words, or whether the words stand in some other relation, e.g., synonymy, meronymy-holonymy, causality, or no relation; see [@Hendrickx:2010semeval; @Jurgens:2012semeval; @Vylomova:2016acl] for a more detailed overview of lexical relations. The detection function $f_{det}$ simply maps to $1$ when $(X,Y)$ stand in a lexical entailment relation, irrespective of the actual directionality of the relation, and to $0$ otherwise.

#### (iii) Entailment Detection and Directionality

This recently proposed evaluation protocol [@Weeds:2014coling; @Kiela:2015acl] combines (i) and (ii). The system first has to detect whether there exists a lexical entailment relation between two words $(X,Y)$, and then, if the relation holds, it has to predict its directionality, i.e., the correct hypernym. The joint detection and directionality function $f_{det+dir}$ maps to $1$ when $(X,Y)$ stand in a lexical entailment relation and $Y$ is the hypernym, to $-1$ if $X$ is the hypernym, and to $0$ if $X$ and $Y$ stand in some other lexical relation or no relation.
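The three protocol functions reduce to simple lookups against a gold standard. A minimal sketch, with a hypothetical toy gold standard standing in for real BLESS-style annotations (the `GOLD` dictionary and relation tags are illustrative):

```python
# Toy gold standard: for each ordered pair (X, Y), the lexical relation.
# "hyp" means Y is the hypernym of X; "rhyp" means X is the hypernym.
# Illustrative data only, not taken from any released evaluation set.
GOLD = {
    ("cat", "animal"): "hyp",
    ("animal", "cat"): "rhyp",
    ("cat", "monkey"): "cohyp",
}

def f_dir(x, y):
    """Directionality: (x, y) is known to be an LE pair;
    return 1 if y is the hypernym, -1 otherwise."""
    return 1 if GOLD[(x, y)] == "hyp" else -1

def f_det(x, y):
    """Detection: 1 if (x, y) stand in an LE relation in either
    direction, 0 for any other (or no) relation."""
    return 1 if GOLD.get((x, y)) in ("hyp", "rhyp") else 0

def f_det_dir(x, y):
    """Detection + directionality: 1 if y is the hypernym,
    -1 if x is the hypernym, 0 otherwise."""
    rel = GOLD.get((x, y))
    return 1 if rel == "hyp" else (-1 if rel == "rhyp" else 0)
```

A system is then scored by how often its predictions agree with these gold mappings over the evaluation pairs.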
#### Standard Modeling Approaches

These decisions are typically based on the [*distributional inclusion hypothesis*]{} [@Geffet:2005acl] or a measure of [*lexical generality*]{} [@Herbelot:2013acl]. The intuition supporting the former is that the class (i.e., [*extension*]{}) denoted by a hyponym is included in the class denoted by the hypernym, and therefore hyponyms are expected to occur in a subset of the contexts of their hypernyms. The intuition supporting the latter is that typical characteristics constituting the [*intension*]{} (i.e., concept) expressed by a hypernym (e.g., [*move*]{} or [*eat*]{} for the concept word [*animal*]{}) are semantically more general than the characteristics forming the intension[^10] of its hyponyms (e.g., [*bark*]{} or [*has tail*]{} for the concept word [*dog*]{}). In other words, superordinate concepts such as [*animal*]{} or [*appliance*]{} are semantically less informative than their hyponyms [@Murphy:2003book], which is also reflected in less specific contexts for hypernyms. Unsupervised (distributional) models of lexical entailment were instigated by the early work of Hearst on prototypicality patterns (e.g., the pattern “X such as Y” indicates that Y is a hyponym of X). The current unsupervised models typically replace the symmetric cosine similarity measure, which works well for semantic similarity computations [@Bullinaria:2007brm; @Mikolov:2013iclr], with an asymmetric similarity measure optimised for entailment [@Weeds:2004coling; @Clarke:2009gems; @Kotlerman:2010nle; @Lenci:2012sem; @Herbelot:2013acl; @Santus:2014eacl].
Supervised models, on the other hand, attempt to learn the asymmetric operator from a training set, differing mostly in the features selected to represent each candidate pair of words [@Baroni:2012eacl; @Fu:2014acl; @Rimell:2014eacl; @Weeds:2014coling; @Roller:2014coling; @Fu:2015taslp; @Shwartz:2016arxiv; @Roller:2016arxiv].[^11] An overview of the supervised techniques, also discussing their main shortcomings, is provided by Levy et al., while a thorough discussion of the differences between unsupervised and supervised entailment models is provided by Turney and Mohammad.

#### Why is HyperLex Different?

In short, regardless of the chosen methodology, the evaluation protocols (directionality or detection) may be straightforwardly translated into binary decision problems: (1) distinguishing between hypernyms and hyponyms, and (2) distinguishing between lexical entailment and other relations. HyperLex, on the other hand, targets a different type of evaluation: the graded entailment function $f_{graded}$ outputs the strength of the lexical entailment relation $s \in \mathbb{R}_0^+$. By adopting the graded LE paradigm, HyperLex thus measures the degree of lexical entailment between words $X$ and $Y$ constituting the order-sensitive pair $(X,Y)$. From another perspective, it measures the typicality and graded membership of the instance $X$ for the class/category $Y$. From the relational similarity viewpoint [@Turney:2006cl; @Jurgens:2012semeval; @Zhila:2013naacl], it also measures the prototypicality of the pair $(X,Y)$ for the LE relation.

### Evaluation Sets {#ss:sets}

#### BLESS

Introduced by Baroni and Lenci, the original BLESS evaluation set includes 200 concrete English nouns as target concepts (i.e., $X$-s from the pairs $(X,Y)$), equally divided between animate and inanimate entities.
175 concepts were extracted from the McRae feature norms dataset [@McRae:2005brm], while the remaining 25 were selected manually by the authors. These concepts were then paired to 8,625 different relatums (i.e., $Y$-s), yielding a total of 26,554 $(X,Y)$ pairs, of which 14,400 contain a meaningful lexical relation and 12,154 are paired randomly. The lexical relations represented in BLESS are lexical entailment, co-hyponymy, meronymy, attribute, event, and random/no relation. The use of its hyponymy-hypernymy/LE subset of 1,337 $(X,Y)$ pairs is then twofold. First, for directionality evaluations [@Santus:2014eacl; @Kiela:2015acl], only the LE subset is used. Note that the original BLESS data is always presented with the hyponym first, so gold annotations are implicitly provided here. Second, for detection evaluations [@Santus:2014eacl; @Roller:2014coling; @Levy:2015naacl], the pairs from the LE subset are taken as positive pairs, while all the remaining pairs are considered negative pairs. That way, the evaluation data effectively measures a model’s ability to predict the positive LE relation. Another evaluation dataset based on BLESS was introduced by Santus et al. Following the standard annotation scheme, it comprises 7,429 noun pairs in total, and 1,880 LE pairs in particular, covering a wider range of relations than BLESS (i.e., the dataset now includes synonymy and antonymy pairs).

| Variant | Pair          | Annotation |
|---------|---------------|------------|
| BLESS   | (cat, animal) | 1          |
| WBLESS  | (cat, animal) | 1          |
|         | (cat, monkey) | 0          |
|         | (animal, cat) | 0          |
| BiBLESS | (cat, animal) | 1          |
|         | (cat, monkey) | 0          |
|         | (animal, cat) | -1         |

\[tab:bless\]

Adaptations of the original BLESS evaluation set were proposed recently. First, relying on its LE subset, Weeds et al. created another dataset called WBLESS [@Kiela:2015acl], consisting of 1,976 concept pairs in total. Only $(X,Y)$ pairs where $Y$ is the hypernym are annotated as positive examples.
It also contains reversed LE pairs (i.e., where $X$ is the hypernym), co-hyponymy pairs, meronymy-holonymy pairs, and randomly matched nouns balanced across different lexical relations, all annotated as negative examples. Due to its construction, WBLESS is used solely for experiments on LE detection. Weeds et al. created another dataset in a similar fashion, consisting of 5,835 noun pairs, targeting co-hyponymy detection. For the combined detection and directionality evaluation, a variant evaluation set called BiBLESS was proposed [@Kiela:2015acl]. It is built on WBLESS, but now explicitly distinguishes direction in LE pairs. Examples of concept pairs in all BLESS variants can be found in Tab. \[tab:bless\]. A majority of the alternative ungraded LE evaluation sets briefly discussed here have a structure very similar to BLESS and its variants.

#### Kotlerman et al. (2010)

Based on the original dataset of [@Zhitomirsky:2009cl], this evaluation set [@Kotlerman:2010nle] contains 3,772 word pairs in total. The structure is similar to BLESS: 1,068 pairs are labeled as positive examples (i.e., 1 or [*entails*]{} iff $X$ entails $Y$), and 2,704 are labeled as negative examples, including the reversed positive pairs. The assignment of binary labels is described in detail by [@Zhitomirsky:2009cl]. The class sizes are not balanced and, due to its design, although each pair is unique, each pair in the dataset contains one of 30 highly frequent nouns. Note that this dataset has been annotated according to the broader definition of substitutable LE; see Sect. \[ss:what\].

#### Baroni et al. (2012)

The $N_1 \vDash N_2$ evaluation set contains 2,770 nominal concept pairs, with 1,385 pairs labeled as positive examples (i.e., 1 or [*entails*]{}) [@Baroni:2012eacl]. The remaining 1,385 pairs labeled as negatives were created by inverting the positive pairs and randomly matching concepts from the positive pairs.
The pairs and annotations were extracted automatically from WordNet and then validated manually by the authors; e.g., abstract concepts with a large number of hyponyms, such as *entity* or *object*, were removed from the pool of concepts.

#### Levy et al. (2014)

A similar dataset for the standard LE evaluation may be extracted from manually annotated entailment graphs of subject-verb-object tuples (i.e., propositions) [@Levy:2014conll]: noun LEs were extracted from entailing tuples that were identical except for one of the arguments, thus propagating the proposition-level entailment to the word level. This data set was built for the medical domain and adopts the looser definition of substitutable LE.

| Resource | Relation                    |
|----------|-----------------------------|
| WordNet  | instance hypernym, hypernym |
| Wikidata | subclass of, instance of    |
| DBPedia  | type                        |
| Yago     | subclass of                 |

\[tab:kbs\]

#### Custom Evaluation Sets

A plethora of relevant work on ungraded LE does not rely on established evaluation resources, but simply extracts ad-hoc LE evaluation data using distant supervision from readily available semantic resources and knowledge bases such as WordNet [@Miller:1995cacm], DBPedia [@Auer:2007iswc], Wikidata [@Tanon:2016www], Yago [@Suchanek:2007www], or dictionaries [@Gheorghita:2012lrec]. Although plenty of these custom evaluation sets are available online, there is a clear tendency to construct a new custom dataset in every subsequent paper which uses the same evaluation protocol for ungraded LE. A standard practice [@Snow:2004nips; @Snow:2006acl; @Bordes:2011aaai; @Riedel:2013naacl; @Socher:2013nips; @Weeds:2014coling; @Vendrov:2016iclr; @Shwartz:2016arxiv inter alia] is to extract positive and negative pairs by coupling concepts that are directly related in at least one of the resources. Only pairs standing in an unambiguous hypernymy/LE relation, according to the set of indicators from Tab.
\[tab:kbs\], are annotated as positive examples (i.e., again $1$ or [*entailing*]{}, Tab. \[tab:bless\]) [@Shwartz:2015conll]. All other pairs, standing in other relations, are taken as negative instances. Using related rather than random concept pairs as negative instances enables detection experiments. We adopt a similar construction principle regarding the wide coverage of different lexical relations in HyperLex. This decision will support a variety of interesting analyses related to graded LE and other relations.

#### Jurgens et al. (2012)

Finally, the evaluation resource most similar in spirit to HyperLex is the dataset of Jurgens et al. (*https://sites.google.com/site/semeval2012task2/*) created for measuring degrees of relational similarity. It contains 3,218 word pairs labelled with 79 types of lexical relations from Bejar et al.’s relation classification scheme. The dataset was constructed using two phases of crowdsourcing. First, for each of the 79 subcategories, human subjects were shown paradigmatic examples of word pairs in the given subcategory. They were then asked to generate more pairs of the same semantic relation type. Second, for each of the 79 subcategories, other subjects were shown word pairs that were generated in the first phase, and they were asked to rate the pairs according to their degree of prototypicality for the given semantic relation type. This is different from HyperLex, where all word pairs, regardless of their actual relation, were scored according to the degree of lexical entailment between them. Bejar et al.’s hierarchical classification system contains ten high-level categories, with five to ten subcategories each. Only one high-level category, [Class-Inclusion]{}, refers to the true relation of ungraded LE or hypernymy-hyponymy, and the scores in the data set do not reflect graded LE.
The data set aims at a wide coverage of different fine-grained relations: it comprises a small sample of manually generated instances (e.g., the number of distinct pairs for the [Class-Inclusion]{} class is 200) for each relation scored according to their prototypicality only for that particular relation. For more details concerning the construction of the evaluation set, we refer the reader to the original work. Also, for details on how to convert the dataset to an evaluation resource for substitutable LE, we refer the reader to [@Turney:2015nle]. #### HyperLex: A Short Summary of Motivation The usefulness of these evaluation sets is evident from their wide usage in the LE literature over recent years: they helped to guide the development of semantic research focussed on taxonomical relations. However, none of the evaluation sets contains graded LE ratings. Therefore, HyperLex may be considered as a more informative data collection: it enables a new evaluation protocol focussed on gradience of the <span style="font-variant:small-caps;">type-of</span> relation rooted in cognitive science [@Hampton:2007cogsci]. As discussed in Sect. \[ss:what\], graded annotations from HyperLex may be easily converted to ungraded annotations: HyperLex may also be used in the standard format of previous LE evaluation sets (see Tab. \[tab:bless\]) for detection and directionality evaluation protocols (see later in Sect. \[ss:results\]). Second, a typical way to evaluate word representation quality at present is by judging the similarity of representations assigned to similar words. The most popular semantic similarity evaluation sets such as SimLex-999 or SimVerb-3500 consist of word pairs with similarity ratings produced by human annotators. HyperLex is the first resource that can be used for the intrinsic evaluation [@Schnabel:2015emnlp; @Faruqui:2016arxiv] of LE-based vector space models [@Vendrov:2016iclr], see later in Sect. \[ss:orderemb\]. 
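Such intrinsic evaluation on graded ratings boils down to rank correlation: score every $(X,Y)$ pair with a model and compute Spearman's $\rho$ against the gold ratings. A self-contained sketch (the gold and model scores below are hypothetical placeholders; in practice one would use `scipy.stats.spearmanr` and real model outputs over the full evaluation set):

```python
# Spearman's rho between gold graded-LE ratings and model scores,
# computed from scratch: average ranks (ties share the mean rank),
# then Pearson correlation on the ranks.

def ranks(values):
    """Average 1-based ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

gold = [10.0, 9.62, 6.53, 1.09, 0.0]    # e.g. graded LE ratings
model = [0.9, 0.8, 0.5, 0.2, 0.1]       # hypothetical model scores
print(round(spearman(gold, model), 3))  # identical rankings -> 1.0
```

Because only ranks matter, a model's scores need not live on the 0-10 rating scale; any monotonic scoring function is evaluated identically.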
Encouraged by the high inter-annotator agreement scores and the evident large gaps between human and system performance (see Sect. \[s:results\]), we believe that HyperLex will guide the development of a new generation of representation-learning architectures that induce hypernymy/LE-specialised word representations, as opposed to the currently ubiquitous word representations targeting exclusively semantic similarity and/or relatedness (see the discussion later in Sect. \[ss:further\] and Sect. \[s:application\]). Finally, HyperLex provides a wide coverage of different semantic phenomena related to LE: graded membership vs typicality (see Sect. \[ss:what\]), entailment depths, concreteness levels, word classes (nouns and verbs), word pairs standing in other lexical relations, etc. Besides its primary purpose as an evaluation set, such a large-scale and diverse crowdsourced semantic resource (2,616 pairs in total) enables novel linguistic and cognitive science analyses regarding human typicality and vagueness judgments, as well as taxonomic relationships (discussed in Sect. \[s:analysis\]).

The HyperLex Data Set {#s:hyperlex}
=====================

#### Construction Criteria

Hill, Reichart, and Korhonen argue that comprehensive high-quality evaluation resources have to satisfy the following three criteria. [*(C1) Representative*]{}: the resource covers the full range of concepts occurring in natural language. [*(C2) Clearly defined*]{}: a clear understanding is needed of what exactly the gold standard measures; that is, the data set has to precisely define the annotated relation, e.g., relatedness as with WordSim-353, similarity as with SimLex-999, or, in this case, [*graded lexical entailment*]{}. [*(C3) Consistent and reliable*]{}: untrained native speakers must be able to quantify the target relation consistently, relying on simple instructions. The choice of word pairs and the construction of the evaluation set were steered by these requirements.
The criterion C1 was satisfied by sampling a sufficient number of pairs from the University of Southern Florida (USF) Norms data set [@Nelson:2004usf]. As shown in prior work [@Hill:2015cl], the USF data set provides an excellent range of different semantic relations (e.g., synonyms vs hypernyms vs meronyms vs cohyponyms) and semantic phenomena (e.g., it contains concrete vs abstract word pairs, noun pairs vs verb pairs). This, in turn, guarantees a wide coverage of distinct semantic phenomena in HyperLex. We discuss USF and the choice of concept words in more detail in Sect. \[ss:choice\]. C2-C3 were satisfied in HyperLex by providing clear and precise annotation guidelines which accurately outline the lexical entailment relation and its graded variant in terms of the synonymous definition based on the [type-of]{} relationship [@Fromking:2013book] for average native speakers of English without any linguistic background. We discuss the annotation guidelines and questionnaire structure in Sect. \[ss:guidelines\] and Sect. \[ss:questionnaire\]. #### Final Output The HyperLex evaluation set contains noun pairs (2,163 pairs) and verb pairs (453 pairs) annotated for the strength of the lexical entailment relation between the words in each pair. Since the LE relation is asymmetric and the score always quantifies to what degree $X$ is a type of $Y$, pairs $(X,Y)$ and $(Y,X)$ are considered distinct pairs. Each concept pair is rated by at least 10 human raters. The rating scale goes from 0 (no type-of relationship at all) to 10 (perfect type-of relationship). Several examples from HyperLex are provided in Tab. \[tab:examples\]. 
| Pair                      | HyperLex LE Rating |
|---------------------------|--------------------|
| chemistry / science       | 10.0               |
| motorcycle / vehicle      | 9.85               |
| pistol / weapon           | 9.62               |
| to ponder / to think      | 9.40               |
| to scribble / to write    | 8.18               |
| gate / door               | 6.53               |
| thesis / statement        | 6.17               |
| to overwhelm / to defeat  | 4.75               |
| shore / beach             | 3.33               |
| vehicle / motorcycle      | 1.09               |
| enemy / crocodile         | 0.33               |
| ear / head                | 0.00               |

\[tab:examples\]

In its 2,616 word pairs, HyperLex contains 1,843 distinct noun types and 392 distinct verb types. In comparison, SimLex-999, the standard crowdsourced evaluation benchmark for representation learning architectures focused on the synonymy relation, contains 751 distinct nouns and 170 verbs in its 999 word pairs. In another comparison, the LE benchmark BLESS (see Sect. \[ss:sets\]) contains relations where one of the words in each pair comes from the set of 200 distinct concrete noun types.

Choice of Concepts {#ss:choice}
------------------

#### Sources: USF and WordNet

To ensure a wide coverage of a variety of semantic phenomena (C1), the choice of candidate pairs was steered by two standard semantic resources available online: (1) the USF norms data set[^12] [@Nelson:2004usf], and (2) WordNet[^13] [@Miller:1995cacm]. USF was used as the primary source of concept pairs. It is a large database of free association data collected for English, generated by presenting human subjects with one of $5,000$ cue concepts and asking them to write the first word coming to mind that is associated with that concept. Each cue concept $c$ was normed in this way by over 10 participants, resulting in a set of associates $a$ for each cue, for a total of over $72,000$ $(c,a)$ pairs. For each such pair, the proportion of participants who produced associate $a$ when presented with cue $c$ can be used as a proxy for the strength of association between the two words.
The norming process guarantees that two words in a pair have a degree of semantic association which correlates well with semantic relatedness reflected in different lexical relations between words in the pairs. Inspecting the pairs manually revealed a good range of semantic relationship values represented, e.g., there were examples of ungraded LE pairs ([*car / vehicle*]{}, [*biology / science*]{}), cohyponym pairs ([*peach / pear*]{}), synonyms or near-synonyms ([*foe / enemy*]{}), meronym-holonym pairs ([*heel / boot*]{}), and antonym pairs ([*peace / war*]{}). USF also covers different POS categories: nouns ([*winter / summer*]{}), verbs ([*to elect / to select*]{}), and adjectives ([*white / gray*]{}), at the same time spanning word pairs at different levels of concreteness ([*panther / cat*]{} vs [*wave / motion*]{} vs [*hobby / interest*]{}). The rich annotations of the USF data (e.g., concreteness scores, association strength) can be combined with graded LE scores to yield additional analyses and insight. WordNet was used to automatically assign a fine-grained lexical relation to each pair in the pool of candidates, which helped to guide the sampling process to ensure a wide coverage of word pairs standing in a variety of lexical relations [@Shwartz:2016arxiv]. #### Lexical Relations To guarantee the coverage of a wide range of semantic phenomena, we have conditioned the cohort/pool used for sampling on the lexical relation between the words in each pair. As mentioned above, the information was extracted from WordNet. We consider the following lexical relations in HyperLex: \(1) `hyp-N`: $(X,Y)$ pairs where $X$ is a hyponym of $Y$ according to WordNet. $N$ denotes the path length between the two concepts in the WordNet hierarchy, e.g., the pair [*cathedral / building*]{} is assigned the `hyp-3` relation. 
Due to the unavailability of a sufficient number of pairs for longer paths, we have grouped all pairs with path length $\geq 4$ into a single relation class, `hyp-≥4`. It was shown that pairs that are separated by fewer levels in the WordNet hierarchy are both more strongly associated and rated as more similar [@Hill:2015cl]. This fine-grained division over different LE levels will enable analyses based on the semantic distance in a concept hierarchy.

\(2) `rhyp-N`: The same as `hyp-N`, now with the order reversed: $X$ is now a hypernym of $Y$. Such pairs were included to investigate the inherent asymmetry of the type-of relation and how human subjects perceive it.

\(3) `cohyp`: $X$ and $Y$ are two instances of the same implicit category, that is, they share a hypernym (e.g., [*dog*]{} and [*elephant*]{} are instances of the category [*animal*]{}). For simplicity, we retain only $(X,Y)$ pairs that share a direct hypernym.

\(4) `mero`: This denotes the [Part-Whole]{} relation, where $X$ always refers to the meronym (i.e., [Part]{}), and $Y$ to the holonym (i.e., [Whole]{}): [*finger / hand*]{}, [*letter / alphabet*]{}. By definition, this relation is observed only between nominal concepts.

\(5) `syn`: $X$ and $Y$ are synonyms and near-synonyms, e.g., [*movement / motion*]{}, [*attorney / lawyer*]{}. In the case of polysemous concepts, at least one sense has to be synonymous with a meaning of the other concept, e.g., [*author / writer*]{}.

\(6) `ant`: $X$ and $Y$ are antonyms, e.g., [*beginning / end*]{}, [*day / night*]{}, [*to unite / to divide*]{}.

\(7) `no-rel`: $X$ and $Y$ do not stand in any lexical relation, including the ones not present in HyperLex (e.g., causal relations, space-time relations), and are also not semantically related. This relation specifies that there is no apparent semantic connection between the two concepts at all, e.g., [*chimney / swan*]{}, [*nun / softball*]{}.
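The `hyp-N` label corresponds to the number of is-a edges between the two concepts in the taxonomy. A minimal sketch of how such labels can be derived (the toy single-parent taxonomy dictionary is a hand-made stand-in for WordNet, which was actually used for the annotation, and the string for the grouped class is our own notation):

```python
# Shortest hypernym-path length between two concepts in a toy taxonomy.
# PARENT maps each concept to its direct hypernym; it is an illustrative
# stand-in for WordNet, which HyperLex uses to assign hyp-N labels.
PARENT = {
    "cathedral": "church",
    "church": "place_of_worship",
    "place_of_worship": "building",
    "building": "structure",
}

def hyp_path_length(x, y):
    """Number of is-a edges from x up to y, or None if y is not
    an ancestor of x in the taxonomy."""
    steps, node = 0, x
    while node != y:
        if node not in PARENT:
            return None
        node = PARENT[node]
        steps += 1
    return steps

def relation_label(x, y):
    """hyp-N label, grouping all path lengths >= 4 into one class
    (our own label string for the grouped class)."""
    n = hyp_path_length(x, y)
    if n is None:
        return None
    return "hyp-%d" % n if n < 4 else "hyp->=4"

print(relation_label("cathedral", "building"))  # -> hyp-3
```

The asymmetry of the relation is visible here too: reversing the arguments (as in the `rhyp-N` pairs) yields no upward path at all.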
#### POS Category

HyperLex includes subsets of pairs from two principal meaning-bearing POS categories: nouns and verbs.[^14] This decision will enable finer-grained analyses based on the two main POS categories. It is further supported by recent research in distributional semantics showing that different word classes (e.g., nouns vs verbs) require different modeling approaches and distributional information to reach peak per-class performance [@Schwartz:2015conll]. In addition, we expect verbs to have fuzzier category borders due to their high variability and polysemy, increased abstractness, and a wide range of syntactic-semantic behaviour [@Jackendoff:1972book; @Levin:1993book; @Gerz:2016emnlp].

#### Pools of Candidate Concept Pairs

The initial pools for sampling were selected as follows. First, we extracted all possible noun pairs (N / N) and verb pairs (V / V) from USF based on the associated POS tags available as part of USF annotations. Concept pairs of other and mixed POS (e.g., [*puzzle / solve*]{}, [*meet / acquaintance*]{}) were excluded from the pool of candidate pairs.[^15] To ensure that the semantic association between concepts in a pair is not accidental, we then discarded all USF pairs that had been generated by two or fewer participants in the original USF experiments.[^16] We also excluded all concept pairs containing a multi-word expression (e.g., [*put down / insult*]{}, [*stress / heart attack*]{}), pairs containing a named entity (e.g., [*Europe / continent*]{}), and pairs containing a potentially offensive concept (e.g., [*weed / pot*]{}, [*heroin / drug*]{}).[^17] All remaining concept pairs were then assigned a lexical relation according to WordNet. In case of duplicate $(X,Y)$ and $(Y,X)$ pairs, only one variant (i.e., $(X,Y)$) was retained. In addition, all `rhyp-N` pairs at this stage were reversed into `hyp-N` pairs.
All `no-rel` pairs from USF were also discarded at this stage to prevent the inclusion of semantically related pairs in the `no-rel` subset of HyperLex. In the final step, all remaining pairs were divided into per-relation pools of candidate noun and verb pairs for each represented relation: `hyp-N`, `cohyp`, `mero`, `syn`, `ant`. Two additional pools were created for `rhyp-N` and `no-rel` after the sampling process.

![image](./relation_distribution.png){width="0.98\linewidth"}

\[fig:reldistr\]

Sampling Procedure {#ss:sampling}
------------------

The candidate pairs were then sampled from the respective per-relation pools. The final number of pairs per relation and POS category was influenced by: (1) the number of candidates in each pool (therefore, HyperLex contains significantly more noun pairs); (2) the focus on LE (therefore, HyperLex contains more `hyp-N` pairs at different LE levels); (3) the wide coverage of the most prominent lexical relations (each lexical relation is represented by a sufficient number of pairs); and (4) logistical reasons (we were unable to rate all candidates in a crowdsourcing study and had to sample a representative subset of candidates for each relation and POS category in the first place).

#### Step 1: Initial Sampling

First, pairs for lexical relations `hyp-N`, `cohyp`, `mero`, `syn`, and `ant` were sampled from their respective pools. WordNet, although arguably the best choice for our purpose, is not entirely reliable as a gold standard resource: it contains occasional inconsistencies and debatable decisions regarding the way lexical relations have been encoded, e.g., [*silly*]{} is a hyponym of [*child*]{} according to WordNet. Therefore, all sampled pairs were manually checked by the authors plus two native English speakers in several iterations. Only sampled pairs for which the majority of human checkers agreed on the lexical relation were retained.
If a pair was discarded, a substitute pair was randomly sampled if available, and again verified against human judgements.

#### Step 2: Reverse and No-Rel Pairs

Before the next step, the pool for `rhyp-N` was generated by simply reversing the order of concepts in all previously sampled $(X,Y)$ `hyp-N` pairs. The pool for `no-rel` was generated by pairing up the concepts from the pairs extracted in Step 1 at random using the Cartesian product. From these random pairings, we excluded those that coincidentally occurred elsewhere in USF (and therefore had a degree of association), as well as those that were assigned any lexical relation according to WordNet. From the remaining pairs, we accepted only those in which both concepts had been subject to the USF norming procedure, ensuring that these non-USF pairs were indeed unassociated rather than simply not normed. `rhyp-N` and `no-rel` pairs were then sampled from these two pools, followed by another manual check. The `rhyp-N` pairs will be used to test the asymmetry of human and system judgements (see later in Tab. \[tab:relations\] and Tab. \[tab:reversed\]), which is inherent to the LE relation. Fig. \[fig:reldistr\] shows the exact numbers of noun and verb pairs across different lexical relations represented in HyperLex. The final set of 2,616 distinct word pairs[^18] was then annotated in a crowdsourcing study (Sect. \[ss:guidelines\] and Sect. \[ss:questionnaire\]).

Question Design and Guidelines {#ss:guidelines}
------------------------------

Here, we detail the exact annotation guidelines followed by the participants in the crowdsourcing study. In order to accurately outline the lexical entailment relation to average native speakers of English without any linguistic background, we deliberately eschewed expert linguistic terminology in the annotation guidelines, and also avoided addressing the subtle differences between typicality and vagueness (Sect. \[s:graded\]).
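The `no-rel` pool construction in Step 2 can be sketched as follows. The input collections and their names are hypothetical stand-ins for the USF pairs, WordNet-related pairs, and normed-concept list used in the actual study:

```python
from itertools import product

def build_norel_pool(sampled_pairs, usf_pairs, wn_related, normed_concepts):
    """Pair up concepts from already-sampled pairs via a Cartesian
    product, then filter out candidates that (a) occur in USF in either
    order, (b) stand in any WordNet relation, or (c) contain a concept
    that was never normed in USF."""
    concepts = sorted({w for pair in sampled_pairs for w in pair})
    pool = []
    for x, y in product(concepts, concepts):
        if x == y:
            continue
        if (x, y) in usf_pairs or (y, x) in usf_pairs:
            continue  # has a degree of association
        if (x, y) in wn_related or (y, x) in wn_related:
            continue  # some lexical relation exists
        if x not in normed_concepts or y not in normed_concepts:
            continue  # cannot verify lack of association
        pool.append((x, y))
    return pool
```

A final `no-rel` sample would then be drawn from `pool` and manually checked, as described above.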
For instance, terms such as [*hyponymy/hypernymy*]{}, [*lexical entailment*]{}, [*prototypicality*]{}, or [*taxonomy*]{} were never explicitly defined using any precise linguistic formalism.

#### (Page 1)

We instead adopted a simpler and more intuitive definition of lexical entailment, based on the [*type-of*]{} relationship between the words in question [@Fromking:2013book], illustrated by a set of typical examples in the guidelines (see Fig. \[fig:instructions01\]).

#### (Page 2)

Following that, a clear distinction was made between words standing in a broader relationship of [*semantic relatedness*]{} and words standing in an actual type-of relation (see Fig. \[fig:instructions02a\]). We included typical examples of related words without any entailment relation: meronymy pairs ([*tyre / car*]{}), cohyponymy pairs ([*plant / animal*]{}), antonymy pairs ([*white / black*]{}), and pairs in other lexical relations (e.g., [*shore / sea*]{}). Since HyperLex also contains verbs, we provided several examples of a type-of relation between verbs (see again Fig. \[fig:instructions02a\]). Potential polysemy issues were addressed by stating (using intuitive examples) that [*two words stand in a type-of relation if any of their senses stand in a type-of relation*]{}. However, we acknowledge that this definition is vague, and the actual disambiguation process was left to the annotators and their intuition as native speakers. A similar context-free rating was used in the construction of other word pair scoring datasets such as SimLex-999 [@Hill:2015cl] or WordSim-353 [@Finkelstein:2002tois].[^19] In the next step, we explicitly stressed that the type-of relation is [*asymmetric*]{} (see Fig. \[fig:instructions02b\]).
#### (Page 3)

The final page explains the main idea behind graded lexical entailment, graded membership, and prototypical class instances according to theories from cognitive science [@Rosch:1973:natural; @Rosch:1975cognitive; @Lakoff:1990book; @Hampton:2007cogsci; @Divjak:2013cogling], providing another illustrative set of examples (see Fig. \[fig:instructions03\]). The main goals of the study were then briefly summarised in the final paragraph, and the annotators were reminded to think in terms of the type-of relationship throughout the study.

Questionnaire Structure and Participants {#ss:questionnaire}
----------------------------------------

We employed the Prolific Academic (PA) crowdsourcing platform,[^20] an online marketplace very similar to Amazon Mechanical Turk and CrowdFlower. While PA was used to recruit participants, the actual questionnaire was hosted on Qualtrics.[^21] Unlike other crowdsourcing platforms, PA collects and stores detailed demographic information from the participants upfront. This information was used to carefully select the pool of eligible participants. We restricted the pool to native English speakers with a 90% approval rate (the maximum rate on PA), aged 18-50, born and currently residing in the United States or the United Kingdom. Immediately after the guidelines, similar to the SimLex-999 questionnaire, a [*qualification question*]{} is posed to the participant to test whether she/he understood the guidelines and is allowed to proceed with the questionnaire; the question is shown in Fig. \[fig:question\]. In the case of an incorrect answer, the study terminates for the participant without collecting any ratings. In the case of a correct answer, the participant begins rating pairs by moving a slider, as shown in Fig. \[fig:survey\]. Having a slider attached to the question [*“Is X a type of Y?”*]{} implicitly translates the posed question to the question [*“To what degree is X a type of Y?”*]{} (Sect. \[ss:what\]).
The pairs are presented to the participant in groups of six or seven. As with SimLex-999, this group size was chosen because the (relative) rating of a set of pairs implicitly requires pairwise comparisons between all pairs in that set. Therefore, larger groups would have significantly increased the cognitive load on the annotators. Since concept pairs were presented to raters in batches defined according to POS, another advantage of grouping was the clear break (submitting a set of ratings and moving to the next page) between the tasks of rating noun and verb pairs. For better inter-group calibration, from the second group onward the last pair of the previous group became the first pair of the present group. The participants were then asked to re-assign the rating previously attributed to the first pair before rating the remaining new items (Fig. \[fig:survey\]). It is also worth stressing that we have decided to retain the type-of structure of each question explicitly for all word pairs so that raters are constantly reminded of the targeted lexical relation, i.e., all $(X,Y)$ word pairs are rated according to the question [*“Is X a type of Y?”*]{}, as shown in Fig. \[fig:survey\]. For verbs, we have decided to use the infinitive form in each question, e.g., [*“Is TO RUN a type of TO MOVE?”*]{} Following a standard practice in crowdsourced word pair scoring studies [@Finkelstein:2002tois; @Luong:2013conll; @Hill:2015cl], each of the 2,616 concept pairs has to be assigned at least 10 ratings from 10 different accepted annotators. We collected ratings from more than 600 annotators in total. 
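The grouping scheme with the repeated calibration pair can be sketched as follows. This is a toy illustration of the batching logic only (the actual questionnaire flow was implemented in Qualtrics, and group sizes were six or seven):

```python
def make_groups(pairs, size=7):
    """Split pairs into rating groups; from the second group onward the
    previous group's last pair is repeated as the first item, so raters
    re-anchor their rating scale across pages."""
    groups = []
    i = 0
    while i < len(pairs):
        if not groups:
            group = pairs[i:i + size]
            i += len(group)
        else:
            # carry over the last pair of the previous group
            group = [groups[-1][-1]] + pairs[i:i + size - 1]
            i += len(group) - 1
        groups.append(group)
    return groups
```

Each later group thus shares exactly one pair with its predecessor, which the participant re-rates before moving on to the new items.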
To distribute the workload, we divided the 2,616 pairs into 45 tranches of 79 pairs each: 50 pairs are unique to one tranche, while 20 manually chosen pairs appear in all tranches to ensure consistency. The use of such consistency pairs enabled control for possible systematic differences between annotators and tranches, which could be detected by variation on this set of 20 pairs shared across all tranches. The remaining 9 are duplicate pairs displayed to the same participant multiple times to detect inconsistent annotations. The proportion of noun and verb pairs is the same across all tranches (64/79 and 15/79, respectively). Each annotator was asked to rate the pairs in a single tranche only. Participants took 10 minutes on average to complete one tranche, including the time spent reading the guidelines and answering the qualification question.

Post-Processing
---------------

85% of total exclusions occurred due to crowdworkers answering the qualification question incorrectly: we did not collect any ratings from such workers. In the post-processing stage, we additionally excluded ratings of annotators who (a) did not give equal ratings to duplicate pairs; or (b) showed suspicious rating patterns (e.g., randomly alternating between two ratings, using one single rating throughout the study, or assigning random ratings to pairs from the consistency set). The final acceptance rate was 85.7% (counting workers who answered the qualification question incorrectly toward the total number of assignments) and 97.5% (with such workers excluded from the counts). We then calculated the average of all ratings from the accepted raters ($\geq 10$ per pair) for each word pair. The score was finally scaled linearly from the 0-6 to the 0-10 interval as in [@Hill:2015cl].

Analysis {#s:analysis}
========

#### Inter-Annotator Agreement

We report two different inter-annotator agreement (IAA) measures.
**IAA-1 (pairwise)** computes the average pairwise Spearman’s $\rho$ correlation between any two raters, a common choice in previous data collections in distributional semantics [@Pado:2007emnlp; @Reisinger:2010emnlp; @Silberer:2014acl; @Hill:2015cl].

Benchmark & IAA-1 & IAA-2\
WordSim (353) [@Finkelstein:2002tois] & 0.611 & 0.756\
WS-Sim (203) [@Agirre:2009naacl] & 0.667 & 0.651\
SimLex (999) [@Hill:2015cl] & 0.673 & 0.778\
HyperLex: All (2616) & **0.854** & **0.864**\
HyperLex: Nouns (2163) & 0.854 & 0.864\
HyperLex: Verbs (453) & 0.855 & 0.862\

\[tab:iaa\]

& hyp-1 & hyp-2 & hyp-3 & hyp$\geq$4 & cohyp & mero & syn & ant & no-rel & rhyp-1 & rhyp-2 & rhyp-3 & rhyp$\geq$4\
IAA-1 & 0.850 & 0.844 & 0.859 & 0.848 & 0.857 & 0.856 & 0.860 & 0.858 & 0.854 & 0.855 & 0.842 & 0.868 & 0.856\
IAA-2 & 0.866 & 0.847 & 0.872 & 0.851 & 0.875 & 0.876 & 0.883 & 0.858 & 0.859 & 0.845 & 0.850 & 0.846 & 0.859\

\[tab:iaarelations\]

A complementary measure should smooth out individual annotator effects. To this end, our **IAA-2 (mean)** measure computes the average correlation of each rater with the average ratings of all other raters. It arguably serves as a better ‘upper bound’ than IAA-1 for the performance of automatic systems. HyperLex obtains $\rho$ = 0.854 (IAA-1) and $\rho$ = 0.864 (IAA-2), a very good agreement compared to other prominent crowdsourced benchmarks for semantic evaluation which also used word pair scoring (see Tab. \[tab:iaa\]).[^22] We also report IAAs over different groups of pairs according to the relation extracted from WordNet in Tab. \[tab:iaarelations\].
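Both agreement measures reduce to a few lines of code. The sketch below uses a pure-Python Spearman's $\rho$ (rank the scores with tie-averaging, then Pearson on the ranks) and toy ratings, not the actual HyperLex data:

```python
from itertools import combinations

def _rank(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def _spearman(a, b):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    ra, rb = _rank(a), _rank(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)

def iaa1(raters):
    """IAA-1: mean pairwise Spearman correlation over all rater pairs."""
    cs = [_spearman(a, b) for a, b in combinations(raters, 2)]
    return sum(cs) / len(cs)

def iaa2(raters):
    """IAA-2: mean correlation of each rater with the rest's average."""
    cs = []
    for i, r in enumerate(raters):
        rest = [sum(o[k] for j, o in enumerate(raters) if j != i)
                for k in range(len(r))]
        cs.append(_spearman(r, rest))
    return sum(cs) / len(cs)
```

In practice one would use `scipy.stats.spearmanr`; the hand-rolled version is shown only to make the two definitions explicit.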
We acknowledge that the grading process at places requires specific world knowledge (e.g., [*to what degree is SNAKE a type of REPTILE?*]{}, [*to what degree is TOMATO a type of FRUIT?*]{}), or is simply subjective and demographically biased (e.g., [*to what degree is TO PRAY a type of TO COMMUNICATE?*]{}), and therefore calls for principled qualitative analyses. However, the HyperLex inter-rater agreement scores suggest that participants were able to understand the characterisation of graded lexical entailment presented in the instructions and to apply it consistently to concepts of various types (e.g., nouns vs verbs, concrete vs abstract concepts, different lexical relations from WordNet).

#### Typicality in Human Judgements

In the first qualitative analysis, we investigate a straightforward question: are some concepts really more (proto)typical of semantically broader higher-level classes? Several examples of prominent high-level taxonomical categories along with LE scores are shown in Tab. \[tab:graded\_examples\]. We may draw several preliminary insights from the presented lists. There is an evident prototyping effect present in human judgements: concepts such as [*cat*]{}, [*monkey*]{} or [*cow*]{} are more typical instances of the class [*animal*]{} than the more peculiar instances such as [*mongoose*]{} or [*snail*]{} according to HyperLex annotators. Instances of the class [*sport*]{} also seem to be sorted accordingly, as higher scores are assigned to arguably more prototypical sports such as [*basketball*]{}, [*volleyball*]{} or [*soccer*]{}, while less prototypical sports such as [*racquetball*]{} or [*wrestling*]{} are assigned lower scores.
**animal** & & **food** & & **plant** & & **sport** & & **person** & & **vehicle** &\
cat & 10.0 & sandwich & 10.0 & rose & 9.75 & basketball & 10.0 & girl & 9.85 & car & 10.0\
monkey & 10.0 & pizza & 10.0 & cactus & 9.58 & hockey & 10.0 & customer & 9.08 & limousine & 10.0\
cow & 10.0 & rice & 10.0 & flower & 9.45 & volleyball & 10.0 & clerk & 8.97 & motorcycle & 9.85\
bat & 9.52 & hamburger & 9.75 & lily & 9.40 & soccer & 9.87 & citizen & 8.63 & van & 9.75\
mink & 9.17 & mushroom & 9.07 & weed & 9.23 & baseball & 9.75 & nomad & 8.63 & automobile & 9.58\
snake & 8.75 & pastry & 8.83 & orchid & 9.08 & softball & 9.55 & poet & 7.78 & tractor & 9.37\
snail & 8.62 & clam & 8.20 & ivy & 9.00 & cricket & 9.37 & guest & 7.22 & truck & 9.23\
mongoose & 8.33 & snack & 7.78 & tree & 8.63 & racquetball & 9.03 & mayor & 6.67 & caravan & 8.33\
dinosaur & 8.20 & oregano & 5.97 & clove & 8.47 & wrestling & 8.85 & publisher & 6.03 & buggy & 8.20\
crab & 7.27 & rabbit & 5.83 & turnip & 8.05 & recreation & 2.46 & climber & 5.00 & bicycle & 8.00\
plant & 0.13 & dinner & 4.85 & fungus & 4.75 & - & - & idol & 4.28 & vessel & 6.38\

Nonetheless, the majority of `hyp-N` pairs $(X, animal)$ or $(X, sport)$, where $X$ is a hyponym of [*animal/sport*]{} according to WN, are indeed assigned reasonably high graded LE scores.
This suggests that humans are able to: (1) judge the LE relation consistently and decide that a concept indeed stands in a type-of relation with another concept; and (2) grade the LE relation by assigning more strength to more prototypical class instances. Similar patterns are visible with other class instances from Tab. \[tab:graded\_examples\], as well as with other prominent nominal classes (e.g., [*bird*]{}, [*appliance*]{}, [*science*]{}). We also observe the same effect with verbs, e.g., [*(drift, move, 8.58), (hustle, move, 7.67), (tow, move, 7.37), (wag, move, 6.80), (unload, move, 6.22)*]{}. We further analysed whether the effects of graded membership/vagueness (see the discussion in Sect. \[ss:what\]) are captured in the ratings; our preliminary qualitative analysis suggests they are. For instance, an interesting example quantifies the graded membership in the class *group*: [*(gang, group, 9.25), (legion, group, 7.67), (conference, group, 6.80), (squad, group, 8.33), (caravan, group, 5.00), (grove, group, 3.58), (herd, group, 9.23), (fraternity, group, 8.72), (staff, group, 6.28)*]{}.

& **All** & **Nouns** & **Verbs**\
hyp-1 & 7.86 & 7.99 & 7.49\
hyp-2 & 8.10 & 8.31 & 7.08\
hyp-3 & 8.16 & 8.39 & 6.55\
hyp$\geq$4 & 8.33 & 8.62 & 5.12\
cohyp & 3.54 & 3.29 & 4.76\
mero & 3.14 & 3.14 & -\
syn & 6.83 & 6.69 & 7.66\
ant & 1.47 & 1.57 & 1.25\
no-rel & 0.85 & 0.64 & 1.48\
rhyp-1 & 4.75 & 4.17 & 6.45\
rhyp-2 & 4.19 & 3.44 & 6.15\
rhyp-3 & 3.07 & 2.72 & 4.47\
rhyp$\geq$4 & 2.85 & 2.54 & 4.11\

\[tab:relations\]

#### Hypernymy/LE Levels

Graded LE scores in HyperLex, averaged for each WordNet relation, are provided in Tab. \[tab:relations\].
Note that the LE level is extracted as the shortest direct path between the two concept words in the WordNet taxonomy, where $X$ in each $(X,Y)$ pair always refers to the less general concept (i.e., the hyponym). The scores suggest several important observations. Graded LE scores for nouns increase with the LE level (i.e., the WN path length) between the concepts. A longer WN path implies a clear difference in semantic generality between nominal concepts, which seems to be positively correlated with the degree of the LE relation and the ease of human judgement. A similar finding in directionality and detection experiments on BLESS and its variants was reported by Kiela et al. They demonstrate that their model is less accurate on concepts with short paths (i.e., the lowest results are reported for WN `hyp-1` pairs from BLESS), and that performance increases with the WN path length. The tendency is explained by the smaller difference in generality between concepts with short paths, which may be difficult to discern for a statistical model. The results from Tab. \[tab:relations\] show that human raters display a similar tendency when rating nominal pairs. Another factor underlying the observed scores might be the link between HyperLex and the source USF norms. Since USF contains free association norms, one might assume that more prototypical instances were generated more frequently as responses to cue words in the original USF experiments. This, in turn, is reflected in their greater presence in HyperLex, especially for concept pairs with longer WN distances. Further, nominal concepts higher in the WN hierarchy typically refer to semantically very broad but well-defined categories such as [*animal*]{}, [*food*]{}, [*vehicle*]{}, or [*appliance*]{} (see again Tab. \[tab:graded\_examples\]).
Semantically more specific instances of such concepts are easier to judge as [*true*]{} hyponyms (using the ungraded LE terminology), which is also reflected in higher LE ratings for such instances. However, gradience effects are clearly visible even for pairs with longer WN distances (Tab. \[tab:graded\_examples\]). The behaviour with respect to the LE level is reversed for verbs: the average scores decrease as the LE level increases. We may attribute this effect to a higher level of abstractness and ambiguity present in verb concepts higher in the WN hierarchy, stemming from a fundamental cognitive difference: Gentner showed that children find verb concepts harder to learn than noun concepts, and Markman and Wisniewski present evidence that different cognitive operations are used when comparing two nouns or two verbs. For instance, it is intuitive to assume that human subjects find it easier to grade instances of the class [*animal*]{} than instances of verb classes such as [*to get*]{}, [*to set*]{} or [*to think*]{}.

#### LE Directionality

Another immediate analysis investigates whether the inherent asymmetry of the type-of relation is captured by the human annotations in HyperLex. Several illustrative example pairs and their reverse pairs, split across different LE levels, are shown in Tab. \[tab:reversed\]. Two important conclusions may be drawn from the analysis.
Pair & `scr` & `rscr` & Pair & `scr` & `rscr` & Pair & `scr` & `rscr`\
(computer, machine) & **9.83** & 2.43 & (gravity, force) & **9.50** & 3.58 & (flask, container) & **9.37** & 1.83\
(road, highway) & **9.67** & 4.30 & (professional, expert) & **6.37** & 6.03 & (elbow, joint) & **7.18** & 1.07\
(dictator, ruler) & **9.87** & 6.22 & (therapy, treatment) & **9.17** & 4.10 & (nylon, material) & **9.75** & 1.42\
(truce, peace) & **8.00** & 6.38 & (encyclopedia, book) & **8.93** & 2.22 & (choir, group) & **8.72** & 2.43\
(remorse, repentance) & **7.63** & 3.50 & (empathy, feeling) & **8.85** & 2.42 & (beer, beverage) & **9.25** & 0.67\
(disagreement, conflict) & **8.78** & 8.67 & (shovel, tool) & **9.70** & 2.57 & (reptile, animal) & **9.87** & 1.17\
(navigator, explorer) & 6.80 & **7.63** & (fraud, deception) & **9.52** & 8.17 & (parent, ancestor) & **7.00** & 6.17\
(ring, jewelry) & **10.0** & 2.78 & (bed, furniture) & **9.75** & 2.63 & (note, message) & **9.00** & 6.07\
(solution, mixture) & 6.52 & **7.37** & (verdict, judgment) & **9.67** & 7.57 & (oven, appliance) & **9.83** & 1.33\
(spinach, vegetable) & **10.0** & 0.55 & (reader, person) & **7.43** & 3.33 & (king, leader) & **8.67** & 4.55\
(surgeon, doctor) & **8.63** & 4.05 & (vision, perception) & 3.82 & **6.25** & (hobby, activity) & **7.12** & 6.83\
(hint, suggestion) & **8.75** & 7.03 & (daughter, child) & **9.37** & 2.78 & (prism, shape) & **7.50** & 2.70\

First, human raters are able to capture the asymmetry, as the strong majority of `hyp-N` pairs is rated higher than their `rhyp-N`
counterparts: 94% of all `hyp-N` pairs for which a `rhyp-N` counterpart exists are assigned a higher rating. Second, the ability to clearly detect the correct LE direction seems to increase with the semantic distance in WordNet: (1) we notice decreasing average scores for the `rhyp-N` relation as N increases (see Tab. \[tab:relations\]); (2) we notice a higher proportion of `hyp-N` concept pairs scoring higher than their `rhyp-N` counterparts as N increases (see Tab. \[tab:reversed\]). Deciding on the entailment direction is evidently difficult for several pairs (e.g., [*navigator / explorer*]{}, [*solution / mixture*]{}, [*disagreement / conflict*]{}), especially for the taxonomically closer `hyp-1` pairs, a finding aligned with prior work on LE directionality [@Rimell:2014eacl; @Kiela:2015acl].

#### Other Lexical Relations

Another look into Tab. \[tab:relations\], where graded LE scores are averaged across each WN-based lexical relation, shows the expected ordering of all other lexical relations by their average per-relation scores (i.e., `syn` $>$ `cohyp` $>$ `mero` $>$ `ant` $>$ `no-rel`). `no-rel` and `ant` pairs have the lowest graded LE scores by a large margin, as expected. `no-rel` pairs are expected to have completely non-overlapping semantic fields, which facilitates human judgement. With antonyms, the graded LE question may be implicitly reformulated as [*To what degree is X a type of $\neg$X?*]{} (e.g., [*winner / loser, to depart / to arrive*]{}), which intuitively should result in low graded LE scores: the HyperLex ratings confirm this intuition. Low scores for `cohyp` pairs in comparison to `hyp-N` pairs indicate that the annotators are able to effectively distinguish between the two related but different well-defined taxonomical relations (i.e., `hyp-N` vs `cohyp`). High scores for `syn` pairs are also aligned with our expectations and agree with intuitions from prior work on ungraded LE [@Rei:2014conll].
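The 94% directionality statistic is a simple count over matched pair/reverse-pair scores. A sketch, illustrated with a handful of scores taken from Tab. \[tab:reversed\] (function and variable names are ours):

```python
def asymmetry_rate(hyp_scores, rhyp_scores):
    """Fraction of hyp-N pairs (X, Y) rated strictly higher than
    their reversed rhyp-N counterpart (Y, X), counting only pairs
    for which the reverse pair was also rated."""
    hits = total = 0
    for (x, y), score in hyp_scores.items():
        reverse = rhyp_scores.get((y, x))
        if reverse is None:
            continue  # no rated counterpart
        total += 1
        hits += score > reverse
    return hits / total
```

On the full dataset this quantity comes out at 0.94; on a small hand-picked subset that deliberately includes the difficult [*navigator / explorer*]{} and [*solution / mixture*]{} pairs it is naturally much lower.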
In a slightly simplified view, given that two synonyms may be seen as two different utterances of the same semantic concept $X$, the graded LE question may be rephrased as [*To what degree is X a type of X?*]{}. One might say that `syn` could be seen as a special case: the degenerate taxonomical `hyp-0` relation. Such an implicit reformulation of the posed question naturally results in higher scores for `syn` pairs on average.

& G1 & G2 & G3 & G4\
\# Pairs & 979 & 259 & 883 & 344\

\[tab:concreteness\]

#### Concreteness

Differences in human and computational concept learning and representation have been attributed to the effects of concreteness, the extent to which a concept has a directly perceptible physical referent [@Paivio:1991; @Hill:2014cogsci]. Since the main focus of this work is not on the distinction between abstract and concrete concepts, we have not explicitly controlled for a balanced amount of concrete/abstract pairs in HyperLex. However, since the source USF dataset provides concreteness scores, we believe that HyperLex will also enable various additional analyses along this dimension in future work. Here, we report the number of pairs in four different groups based on the concreteness ratings of the two concepts in each pair. The four groups are as follows: ($G_1$) both concepts are concrete (USF concreteness rating $\geq$ 4); ($G_2$) both concepts are abstract (USF rating $<$ 4); ($G_3$) one concept is concrete and the other abstract, with a difference in ratings $\leq 1$; ($G_4$) one concept is concrete and the other abstract, with a difference in ratings $>1$. The statistics on HyperLex pairs divided into groups $G_1$-$G_4$ are presented in Tab. \[tab:concreteness\]. `rhyp-N` pairs are not counted as they are simply reversed `hyp-N` pairs already present in HyperLex.
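The $G_1$-$G_4$ assignment is a simple thresholding rule; a sketch following the description above (the threshold of 4 comes from the USF concreteness scale, the function name is ours):

```python
def concreteness_group(cx, cy, threshold=4.0):
    """Assign a pair to one of the four concreteness groups:
    G1: both concrete (rating >= threshold);
    G2: both abstract (rating < threshold);
    G3: mixed, rating difference <= 1;
    G4: mixed, rating difference > 1.
    Returns None when either rating is missing."""
    if cx is None or cy is None:
        return None
    if cx >= threshold and cy >= threshold:
        return "G1"
    if cx < threshold and cy < threshold:
        return "G2"
    return "G3" if abs(cx - cy) <= 1 else "G4"
```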
Concept pairs where at least one concreteness rating is missing in the USF data are also not taken into account. Although HyperLex contains more concrete pairs overall, there is also a large sample of highly abstract and mixed pairs. For instance, HyperLex contains 125 highly abstract concept pairs, with both concepts scoring $\leq$ 3 in concreteness, e.g., [*misery / sorrow, hypothesis / idea, competence / ability*]{}, or [*religion / belief*]{}. This preliminary coarse-grained analysis already hints that HyperLex provides a good representation of concepts across the entire concreteness scale. This will also facilitate further analyses related to concept concreteness and its influence on the automatic construction of semantic taxonomies.

#### Data Splits: Random and Lexical

A common problem with scored/graded word pair datasets is the lack of a standard split into development and test sets [@Faruqui:2016arxiv]. Custom splits, e.g., 10-fold cross-validation, make results incomparable across studies. Further, due to their limited size, existing datasets also do not support supervised learning, as they do not provide splits into training, development, and test data. The lack of standard splits in such word pair datasets stems mostly from their small size and poor coverage – issues which we have solved with HyperLex.
Split & All & $[0,2>$ & $[2,4>$ & $[4,6>$ & $[6,8>$ & $[8,10]$\
**HyperLex-All** & 2616 (2163 + 453) & 604 (504 + 100) & 350 (304 + 46) & 307 (243 + 64) & 515 (364 + 151) & 840 (748 + 92)\
**Random Split** & & & & & &\
Train & 1831 (1514 + 317) & 423 (353 + 70) & 245 (213 + 32) & 215 (170 + 45) & 361 (255 + 106) & 587 (523 + 64)\
Dev & 130 (108 + 22) & 30 (25 + 5) & 17 (15 + 2) & 15 (13 + 2) & 26 (18 + 8) & 42 (37 + 5)\
Test & 655 (541 + 114) & 151 (126 + 25) & 88 (76 + 12) & 77 (60 + 17) & 128 (91 + 37) & 211 (188 + 23)\
**Lexical Split** & & & & & &\
Train & 1133 (982 + 151) & 253 (220 + 33) & 140 (122 + 18) & 129 (109 + 20) & 195 (148 + 47) & 416 (383 + 33)\
Dev & 85 (71 + 14) & 20 (18 + 2) & 13 (11 + 2) & 11 (8 + 3) & 17 (10 + 7) & 24 (24 + 0)\
Test & 269 (198 + 71) & 65 (52 + 13) & 37 (29 + 8) & 41 (31 + 10) & 63 (37 + 26) & 63 (49 + 14)\

\[tab:stats\]

We provide two standard data splits into train, dev, and test sets: [*random*]{} and [*lexical*]{}. In the random split, 70% of all pairs were reserved for training, 5% for development, and 25% for testing. The subsets were selected by random sampling, while controlling for broad coverage of similarity ranges, so that non-similar, medium-similarity, and highly similar pairs are all represented. Some statistics are available in Tab. \[tab:stats\]. A manual inspection of the subsets revealed that a good range of lexical relations is represented in each subset.
The lexical split, advocated in [@Levy:2015naacl; @Shwartz:2016arxiv], prevents the effect of “lexical memorisation”: supervised distributional lexical inference models tend to learn an independent property of a single concept in the pair instead of learning a relation between the two concepts.[^23] To prevent such behaviour, we split HyperLex into a train and test set with zero lexical overlap. We tried to retain roughly the same 70%/25%/5% ratio in the lexical split. Note that the lexical split discards all “cross-set” training-test concept pairs. Therefore, the number of instances in each subset is lower than with the random split. Statistics are again given in Tab. \[tab:stats\]. We believe that the provided standardised HyperLex data splits will enable easy and direct comparisons of various LE modeling architectures in unsupervised and supervised settings. Following arguments from prior work, we hold that it is important to provide both data set splits, as they offer complementary ways to assess differences between models. It is true that training a model on a lexically split dataset may result in a more general model [@Levy:2015naacl], which is able to better reason over pairs consisting of two unseen concepts during inference. However, Shwartz, Goldberg, and Dagan argue that a random split emulates a more typical “real-life” reasoning scenario, where inference involves an unseen concept pair $(X,Y)$, in which $X$ and/or $Y$ have already been observed separately. A random split may equip a model with a concept’s “prior belief” of being a frequent hypernym or hyponym. This information can be effectively exploited during inference. Evaluation Setup and Models {#s:experiments} =========================== #### Evaluation Setup We compare the performance of prominent models and frameworks focused on modeling lexical entailment on our new HyperLex evaluation set, which measures the strength of the lexical entailment relation.
Due to the evident similarity of the graded evaluation to standard protocols in the semantic similarity (i.e., synonymy detection) literature [@Finkelstein:2002tois; @Agirre:2009naacl; @Hill:2015cl; @Schwartz:2015conll inter alia], we adopt the same evaluation setup. Each evaluated model assigns a score to each pair of words measuring the strength of the lexical entailment relation between them.[^24] As in prior work on intrinsic semantic evaluations with word pair scoring evaluation sets, e.g., [@Hill:2015cl; @Levy:2015tacl] as well as on measuring relational similarity [@Jurgens:2012semeval], all reported scores are Spearman’s $\rho$ correlations between the ranks derived from the scores of the evaluated models and the human scores provided in HyperLex. In this work, we evaluate off-the-shelf unsupervised models and insightful baselines on the entire HyperLex. We also report on preliminary experiments exploiting the provided data splits for supervised learning. Directional Entailment Measures {#ss:dem} ------------------------------- Note that all directional entailment measures (DEMs) available in the literature have “pre-embedding” origins and assume traditional count-based vector spaces [@Turney:2010jair; @Baroni:2014acl] based on counting word-to-word corpus co-occurrence. Distributional features are typically words co-occurring with the target word in a chosen context (e.g., a window of neighbouring words, a sentence, a document, a dependency-based context). This collection of models is grounded in variations of the distributional inclusion hypothesis [@Geffet:2005acl]: if $X$ is a semantically narrower term than $Y$, then a significant number of salient distributional features of $X$ are included in the feature vector of $Y$ as well. Our presentation closely follows Lenci and Benotto. Let $Feat_{X}$ denote the set of distributional features $ft$ for a concept word $X$, and let $w_X(ft)$ refer to the weight of the feature $ft$ for $X$.
The most common choices for the weighting function in traditional count-based distributional models are positive variants of pointwise mutual information (PMI) [@Bullinaria:2007brm] and local mutual information (LMI) [@Evert:2008corpling]. #### WeedsPrec ($DEM_1$) This DEM quantifies the weighted inclusion of the features of a concept word $X$ within the features of a concept word $Y$ [@Weeds:2003emnlp; @Weeds:2004coling; @Kotlerman:2010nle]: $$DEM_1(X,Y) = \frac{\sum_{ft \in Feat_X \cap Feat_Y} w_X(ft)}{\sum_{ft \in Feat_X} w_X(ft)}$$ #### WeedsSim ($DEM_2$) It computes the geometrical average of WeedsPrec ($DEM_1$) or some other asymmetric measure (e.g., APinc from Kotlerman et al.) and the symmetric similarity $sim(X,Y)$ between $X$ and $Y$, typically measured by cosine [@Weeds:2004coling], or the Lin measure [@Lin:1998acl] as in the balAPinc measure of Kotlerman et al.: $$DEM_2(X,Y) = \sqrt{DEM_1(X,Y) \cdot sim(X,Y)}$$ #### ClarkeDE ($DEM_3$) A close variation of $DEM_1$ was proposed by Clarke: $$DEM_3(X,Y) = \frac{\sum_{ft \in Feat_X \cap Feat_Y} \min\left(w_X(ft), w_Y(ft)\right)}{\sum_{ft \in Feat_X} w_X(ft)}$$ #### InvCL ($DEM_4$) A variation of $DEM_3$ was introduced by Lenci and Benotto. It takes into account both the inclusion of context features of $X$ in context features of $Y$ and non-inclusion of features of $Y$ in features of $X$:[^25] $$DEM_4(X,Y) = \sqrt{DEM_3(X,Y) \cdot \left(1 - DEM_3(Y,X)\right)}$$ Generality Measures {#ss:gem} ------------------- Another related view towards the [type-of]{} relation is as follows. Given two semantically related words, a key aspect of detecting lexical entailment is the generality of the hypernym compared to the hyponym. For example, [*bird*]{} is more general than [*eagle*]{}, having a broader intension and a larger extension. This property has led to the introduction of lexical entailment measures that compare the entropy/semantic content of distributional word representations, under the assumption that a more general term has a higher-entropy distribution [@Herbelot:2013acl; @Rimell:2014eacl; @Santus:2014eacl]. From this group we show the results with the SLQS [@Santus:2014eacl] model demonstrating the best performance in prior work.
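Using the notation introduced above ($Feat_X$ as the feature set, $w_X(ft)$ as weights), the four directional measures can be sketched over sparse feature dictionaries. A minimal illustration, assuming nonnegative weights and cosine as the symmetric similarity inside $DEM_2$:

```python
import math

def weeds_prec(fx, fy):
    """DEM_1: weighted inclusion of X's features within Y's features."""
    shared = sum(w for ft, w in fx.items() if ft in fy)
    total = sum(fx.values())
    return shared / total if total else 0.0

def cosine(fx, fy):
    """Symmetric similarity used as sim(X, Y) in DEM_2."""
    dot = sum(w * fy.get(ft, 0.0) for ft, w in fx.items())
    nx = math.sqrt(sum(w * w for w in fx.values()))
    ny = math.sqrt(sum(w * w for w in fy.values()))
    return dot / (nx * ny) if nx and ny else 0.0

def weeds_sim(fx, fy):
    """DEM_2: geometric average of WeedsPrec and a symmetric similarity."""
    return math.sqrt(weeds_prec(fx, fy) * cosine(fx, fy))

def clarke_de(fx, fy):
    """DEM_3: shared features count with min(w_X, w_Y) instead of w_X."""
    shared = sum(min(w, fy[ft]) for ft, w in fx.items() if ft in fy)
    total = sum(fx.values())
    return shared / total if total else 0.0

def inv_cl(fx, fy):
    """DEM_4: inclusion of X in Y combined with non-inclusion of Y in X."""
    return math.sqrt(clarke_de(fx, fy) * (1.0 - clarke_de(fy, fx)))
```

Note how the asymmetry arises: full inclusion of a hyponym's features in a hypernym's features yields a high score in one direction but not the other.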
#### SLQS It is an entropy-based measure which quantifies the specificity/generality level of related terms. First, the top $n$ most associated context features (i.e., typically context words as in the original work of Santus et al.) are identified (e.g., using positive PMI or LMI); for each identified context feature $cn$, its entropy $H(cn)$ is defined as: $$H(cn) = -\sum_{i=1}^{n} P(ft_i|cn) \log P(ft_i|cn)$$ where $ft_i$, $i=1,\ldots,n$ is the $i$-th context feature, and $P(ft_i|cn)$ is computed as the ratio of the co-occurrence frequency $(cn,ft_i)$ and the total frequency of $cn$. For each concept word $X$, it is possible to compute its median entropy $E_X$ over the $N$ most associated context features. A higher value $E_X$ implies a higher semantic generality of the concept word $X$. The initial SLQS measure called [SLQS-Basic]{} is then defined as: $$SLQS(X,Y) = 1 - \frac{E_X}{E_Y}$$ This measure may be directly used in standard ungraded LE directionality experiments since $SLQS(X,Y)>0$ implies that $X$ is a type of $Y$ (see Tab. \[tab:bless\]). Another variant of SLQS called [SLQS-Sim]{} is tailored to LE detection experiments: it resembles the $DEM_2$ measure; the only difference is that, since SLQS can now produce negative scores, all such scores are set to 0. Visual Generality Measures {#ss:vem} -------------------------- Kiela et al. showed that such generality-based measures for ungraded LE need not be linguistic in nature, and proposed a series of visual and multi-modal models for LE directionality and detection. We briefly outline the two best-performing ones in their experiments. Deselaers and Ferrari previously showed that sets of images corresponding to terms at higher levels in the WordNet hierarchy have greater visual variability than those at lower levels. They exploit this tendency using sets of images associated with each concept word as returned by Google’s image search.
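The SLQS computation just described can be sketched as follows; this is a toy illustration that takes the set of associated contexts as given (the association-selection step via PMI/LMI is omitted, and the context counts are invented):

```python
import math
import statistics

def entropy(context_counts):
    """Shannon entropy of a context's feature distribution P(ft | cn)."""
    total = sum(context_counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in context_counts.values() if c)

def median_entropy(word_contexts, contexts):
    """E_X: median entropy over a word's most associated contexts."""
    return statistics.median(entropy(contexts[c]) for c in word_contexts)

def slqs(x_contexts, y_contexts, contexts):
    """SLQS-Basic = 1 - E_X / E_Y: positive when Y is more general than X."""
    ex = median_entropy(x_contexts, contexts)
    ey = median_entropy(y_contexts, contexts)
    return 1.0 - ex / ey if ey else 0.0
```

A hyponym tends to co-occur with narrow, low-entropy contexts, while a hypernym's contexts spread probability mass over many features, so the score is positive in the hyponym-to-hypernym direction.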
The intuition is that the set of images returned for the broader concept *animal* will consist of pictures of different kinds of animals, that is, exhibiting greater visual variability and lesser concept specificity; on the other hand, the set of images for [*bird*]{} will consist of pictures of different birds, while the set for [*owl*]{} will mostly consist only of images of owls. The generality of a set of $n$ images for each concept $X$ is then computed. The first model relies on the [*image dispersion*]{} measure [@Kiela:2014acl]. It is the average pairwise cosine distance between all image representations[^26] $\{\overrightarrow{i_{X,1}},\ldots,\overrightarrow{i_{X,n}}\}$ for $X$: $$f_{ID}(X) = \frac{2}{n(n-1)} \sum_{i<j} \left(1 - \frac{\overrightarrow{i_{X,i}} \cdot \overrightarrow{i_{X,j}}}{\left|\overrightarrow{i_{X,i}}\right| \left|\overrightarrow{i_{X,j}}\right|}\right)$$ A similar measure calculates, instead of the average pairwise distance, the average distance to the centroid $\overrightarrow{\mu_X}$ of $\{\overrightarrow{i_{X,1}},\ldots,\overrightarrow{i_{X,n}}\}$: $$f_{Cent}(X) = \frac{1}{n} \sum_{j=1}^{n} \left(1 - \frac{\overrightarrow{i_{X,j}} \cdot \overrightarrow{\mu_X}}{\left|\overrightarrow{i_{X,j}}\right| \left|\overrightarrow{\mu_X}\right|}\right)$$ #### Final Model The visual model for ungraded LE directionality and detection, which we also test in graded evaluations, combines an image generality function $f$ with two thresholds. $f$ is one of the two functions for image generality defined above; the model relying on image dispersion is called [Vis-ID]{}, while the model relying on the centroid distance is called [Vis-Cent]{}. $\alpha$ is a tunable threshold which sets a minimum difference in generality for LE identification, driven by the idea that non-LE pairs also have non-identical generality scores. To avoid false positives where one word is more general but the pair is not semantically related, a second threshold $\theta$ is used, which sets $f$ to zero if the two concepts have low cosine similarity. Finally, $\vec{X}$ and $\vec{Y}$ are representations of concept words used to compute their semantic similarity, e.g., [@Turney:2010jair; @Kiela:2014emnlp]. Concept Frequency Ratio ----------------------- Concept word frequency ratio (FR) is used as a proxy for lexical generality, and it is a surprisingly competitive baseline in the standard (binary) LE evaluation protocols (see Sect.
\[ss:evalprot\] and later Sect. \[ss:results\]) [@Weeds:2004coling; @Santus:2014eacl; @Kiela:2015acl inter alia]. The FR model also relies on the visual model formulation above; the only difference is that $f(X)=freq(X)$, where $freq(X)$ is a simple word frequency count obtained from a large corpus. WordNet-Based Similarity Measures {#ss:wnsim} --------------------------------- The same formulation may also be used with any standard WordNet-based similarity measure to quantify the degree of the [type-of]{} relation, where $f_{WN}(X,Y)$ returns a similarity score based on the WordNet path between two concepts. We use three different standard measures for $f_{WN}$, resulting in three WN-based models: \(1) [WN-Basic]{}: $f_{WN}$ returns a score denoting how similar two concepts are, based on the shortest path that connects the concepts in the WN taxonomy. \(2) [WN-LCh]{}: the Leacock-Chodorow similarity function returns a score denoting how similar two concepts are, based on their shortest connecting path (as above) and the maximum depth of the taxonomy in which the concepts occur. The score is then $-\log\big(path/(2 \cdot depth)\big)$, where $path$ is the shortest connecting path length and $depth$ the taxonomy depth. \(3) [WN-WuP]{}: the Wu-Palmer similarity function [@Wu:1994acl; @Pedersen:2004aaai] returns a score denoting how similar two concepts are, based on the depth of the two concepts in the taxonomy and that of their most specific ancestor node. Note that all three WN-based similarity measures are, by design, not well-suited for graded LE experiments: e.g., they will rank direct co-hyponyms as more similar than distant hypernymy-hyponymy pairs. Order Embeddings {#ss:orderemb} ---------------- Following trends in semantic similarity (or graded synonymy computations, see Sect. \[s:graded\] again) Vendrov et al.
have recently demonstrated that it is possible to construct a [*vector space*]{} or [*word embedding*]{} model that specialises in the lexical entailment relation, rather than in the more popular similarity/synonymy relation. The model is then applied in a variety of tasks including ungraded LE detection and directionality. ![image](./vissem.png){width="0.63\linewidth"} \[fig:vissem\] The order embedding model exploits the partial order structure of a visual-semantic hierarchy (see Fig. \[fig:vissem\]) by learning a mapping which is not distance-preserving but order-preserving between the visual-semantic hierarchy and a partial order over the embedding space. It learns a mapping from a partially ordered set $(U,\preceq_{U})$ into a partially ordered embedding space $(V,\preceq_{V})$: the ordering of a pair in $U$ is then based on the ordering in the embedding space. The chosen embedding space is the reversed product order on $\mathbb{R}_{+}^N$, defined by the conjunction of total orders on each coordinate: $$\vec{X} \preceq \vec{Y} \iff \bigwedge_{i=1}^{N} X_i \geq Y_i$$ for all vectors $\vec{X}$ and $\vec{Y}$ with nonnegative coordinates, where the vectors $\vec{X}$ and $\vec{Y}$ are order embeddings of concept words $X$ and $Y$.[^27] With a slight abuse of notation, $X_i$ refers to the $i$-th coordinate of vector $\vec{X}$, the same for $Y_i$. The ordering criterion, however, is too restrictive to impose as a hard constraint. Therefore, an approximate order-embedding is sought: a mapping which violates the order-embedding condition, imposed as a soft constraint, as little as possible. In particular, the penalty $L$ for an ordered pair $(\vec{X},\vec{Y})$ of points/vectors in $\mathbb{R}_{+}^N$ is defined as: $$L(\vec{X},\vec{Y}) = \left\lVert \max\left(0, \vec{Y}-\vec{X}\right) \right\rVert^2$$ where the maximum is taken element-wise. $L(\vec{X},\vec{Y})=0$ implies that $X \preceq Y$ according to the reversed product order. If the order is not satisfied, the penalty is positive. The model requires a set of positive pairs $PP$ (i.e., true LE pairs) and a set of negative pairs $NP$ for training.
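The order-violation penalty, together with the coordinate-wise distances later used for graded scoring, can be sketched in a few lines. The `dist_pos` variant encodes one possible reading of the DistPos description (summing only coordinates where $X_i \geq Y_i$) and is labelled as an assumption:

```python
def order_violation(x, y):
    """Penalty L(x, y) = || max(0, y - x) ||^2: zero iff x_i >= y_i for
    all i, i.e. x precedes y in the reversed product order on R+^N."""
    return sum(max(0.0, yi - xi) ** 2 for xi, yi in zip(x, y))

def dist_all(x, y):
    """OrderEmb-DistAll style: sum of absolute per-coordinate distances."""
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

def dist_pos(x, y):
    """OrderEmb-DistPos style: only coordinates satisfying the order
    criterion contribute (one possible reading, assumed here)."""
    return sum(xi - yi for xi, yi in zip(x, y) if xi >= yi)
```

The penalty is zero in exactly one direction for a properly ordered pair, which is what makes an order embedding usable as a directional entailment score.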
Finally, to learn an approximate mapping to an order embedding space, a max-margin loss is used, which encourages positive examples to have zero penalty, and negative examples to have penalty greater than a margin $\gamma$: $$\sum_{(X,Y) \in PP} L(\vec{X},\vec{Y}) + \sum_{(X,Y) \in NP} \max\left(0, \gamma - L(\vec{X},\vec{Y})\right)$$ Positive and negative examples are task-dependent. For the standard ungraded LE evaluations, positive pairs for the training set $PP$ are extracted from the WordNet hierarchy. The set $NP$ is obtained by artificially constructing “corrupted” pairs [@Socher:2013nips], that is, by replacing one of the two concepts from positive examples with a randomly selected concept. This model is called [OrderEmb]{}. #### Graded LE with Order Embeddings Order embeddings are trained for the binary LE detection task, but not explicitly for the graded LE task. To measure how well one such off-the-shelf order embedding model captures LE on the continuous scale, we test three different distance measures: \(1) [OrderEmb-Cos]{}: A standard cosine similarity is used on vector representations. \(2) [OrderEmb-DistAll]{}: The sum of the absolute distance between all coordinates of the vectors $\vec{X}$ and $\vec{Y}$ is used as a distance function: $$dist_{all}(\vec{X},\vec{Y}) = \sum_{i=1}^{N} |X_i - Y_i|$$ This measure is based on the training penalty $L$ defined above. The idea is that for order embeddings the space is sorted based on the degree of hypernymy/hyponymy violation in each dimension: the absolute coordinate distance may be used as an indicator of the LE strength. \(3) [OrderEmb-DistPos]{}: This variant extends the [*DistAll*]{} distance by only adding up those coordinates fulfilling the criterion defined in the reversed product order. Standard (“Similarity”) Embeddings {#ss:standard} ---------------------------------- A majority of other word embedding models available in the literature target the symmetric relation of semantic relatedness and similarity, and the strength of the similarity relation is modeled by a symmetric similarity measure such as cosine.
It was shown that human subjects often consider “closer” LE pairs quite semantically similar [@Geffet:2005acl; @Agirre:2009naacl; @Hill:2015cl].[^28] For instance, the pairs [*(assignment, task)*]{} and [*(author, creator)*]{} are judged as strong LE pairs (with average scores 9.33 and 9.30 in HyperLex, respectively); they are assigned the labels `hyp-1` and `hyp-2` according to WordNet, respectively, and are also considered semantically very similar (their SimLex-999 scores are 8.70 and 8.02). In another example, the WordNet `syn` pairs [*(foe, enemy)*]{} and [*(summit, peak)*]{} have graded LE scores of 9.72 and 9.58 in HyperLex. The rationale behind these experiments is then to test to what extent these symmetric models are capable of quantifying the degree of lexical entailment, and to what degree these two relations are interlinked. We test the following benchmarking semantic similarity models: (1) Unsupervised models that learn from distributional information in text, including the skip-gram negative-sampling model (*SGNS*) [@Mikolov:2013nips] with various contexts ([BOW]{} = bag of words; [DEPS]{} = dependency contexts) as described by Levy and Goldberg; (2) Models that rely on linguistic hand-crafted resources or curated knowledge bases. Here, we rely on models currently holding the peak scores in word similarity tasks: sparse binary vectors built from linguistic resources ([Non-Distributional]{}, [@Faruqui:2015aclnon]), and vectors fine-tuned to a paraphrase database ([Paragram]{}, [@Wieting:2015tacl]), further refined using linguistic constraints ([Paragram+CF]{}, [@Mrksic:2016naacl]). Since these models are not the main focus of this work, the reader is referred to the relevant literature for detailed descriptions. Gaussian Embeddings {#ss:gaussian} ------------------- An alternative approach to learning word embeddings was proposed by Vilnis and McCallum. They represent words as Gaussian densities rather than points in the embedding space.
Each concept $X$ is represented as a multivariate $K$-dimensional Gaussian parameterised as $\mathcal{N}(\mathbold{\mu}_{X},\mathbold{\sigma}_{X})$, where $\mathbold{\mu}_{X}$ is a $K$-dimensional vector of means, and $\mathbold{\sigma}_{X}$ in the most general case is a $K \times K$ covariance matrix.[^29] Word types are embedded into soft regions in space: the intersection of these regions could be straightforwardly used to compute the degree of lexical entailment. This allows a natural representation of hierarchies using, e.g., the asymmetric Kullback-Leibler (KL) divergence. KL divergence between Gaussian probability distributions is straightforward to calculate, naturally asymmetric, and has a geometric interpretation as an inclusion between families of ellipses. To train the model, they define an energy function that returns a similarity-like measure of the two probability distributions. It is possible to train the model to better capture “standard semantic similarity” (see Sect. \[ss:standard\]) by using expected likelihood (EL) as the energy function. On the other hand, KL divergence is a natural energy function for representing entailment between concepts – a low KL divergence from $X$ to $Y$ indicates that we can encode $Y$ easily as $X$, implying that $Y$ entails $X$. This can be interpreted as a soft form of inclusion between the level sets of ellipsoids generated by the two Gaussians – if there is a relatively high expected log-likelihood ratio (negative KL), then most of the mass of $Y$ lies inside $X$. We refer the reader to the original work [@Vilnis:2015iclr; @He:2015cikm] for a more detailed description of the idea and actual low-level modelling steps.
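The asymmetry of KL divergence between Gaussians can be illustrated for the diagonal-covariance case (the full-covariance case used in the cited work follows the same closed form); the helper name is ours:

```python
import math

def kl_diag_gauss(mu0, var0, mu1, var1):
    """KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) ) for diagonal
    covariances. Asymmetric, so it can score a *directed* relation."""
    kl = 0.0
    for m0, v0, m1, v1 in zip(mu0, var0, mu1, var1):
        kl += 0.5 * (v0 / v1 + (m1 - m0) ** 2 / v1 - 1.0 + math.log(v1 / v0))
    return kl
```

With a shared mean, encoding a narrow (specific) concept by a broad (general) one is cheaper than the reverse, which is exactly the directional signal exploited for entailment.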
We evaluate two variants of the model on the graded LE task following [@Vilnis:2015iclr]: (i) [Word2Gauss-EL-Cos]{} and [Word2Gauss-EL-KL]{} use EL in training, but the former uses cosine between vectors of means as a (symmetric) measure of similarity between concepts, and the latter relies on the (asymmetric) KL divergence between full Gaussians; (ii) [Word2Gauss-KL-Cos]{} and [Word2Gauss-KL-KL]{} use KL divergence as the energy function. Results and Discussion {#s:results} ====================== Training Data and Parameters {#ss:parameters} ---------------------------- Since we evaluate a plethora of heterogeneous models and architectures on the graded LE task, we first provide a quick overview of their training setup regarding training data, parameter settings, and other modeling choices. #### DEMs and SLQS Directional entailment measures $DEM_1$-$DEM_4$ and both SLQS variants (i.e., [SLQS-Basic]{} and [SLQS-Sim]{}) are based on the cleaned, tokenised and lowercased Polyglot Wikipedia [@AlRfou:2013conll]. We have used two setups for the induction of word representations, the only difference being that in [*Setup 1*]{} context/feature vectors are extracted from the Polyglot Wiki directly based on bigram co-occurrence counts, while in [*Setup 2*]{}, these vectors are extracted from the [TypeDM]{} tensor [@Baroni:2010cl] as in the original work of Lenci and Benotto.[^30] Both setups use the positive LMI weighting calculated on syntactic co-occurrence links between each word and its context word [@Gulordava:2011gems]: $LMI(w_1,w_2) = C(w_1,w_2) * \log_2 \frac{C(w_1,w_2)*Total}{C(w_1) C(w_2)}$, where $C(w)$ is the unigram count in the Polyglot Wiki for the word $w$, $C(w_1,w_2)$ is the dependency-based co-occurrence count of the two tokens $w_1$ and $w_2$, i.e., $(w_1,(dep\_rel,w_2))$, and $Total$ is the number of all such tuples.
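The LMI weighting above can be computed directly from toy counts (a minimal sketch; `positive_lmi` implements the positive-value clipping used by the positive LMI variant):

```python
import math

def lmi(c12, c1, c2, total):
    """Local Mutual Information:
    LMI(w1, w2) = C(w1,w2) * log2( C(w1,w2) * Total / (C(w1) * C(w2)) )."""
    if c12 == 0:
        return 0.0
    return c12 * math.log2(c12 * total / (c1 * c2))

def positive_lmi(c12, c1, c2, total):
    """Positive LMI: negative associations are clipped to zero."""
    return max(0.0, lmi(c12, c1, c2, total))
```

Multiplying the PMI term by the joint count $C(w_1,w_2)$ is what distinguishes LMI from plain PMI: it damps the inflated scores of very rare co-occurrences.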
The Polyglot Wiki was parsed with Universal Dependencies [@Nivre:2015ud] as in the work of Vulić and Korhonen.[^31] The context vocabulary (i.e., words $w_2$) is restricted to the 10K most frequent words in the Polyglot Wiki. The same two setups were used for the SLQS model. We also use frequency counts collected from the Polyglot Wiki for the frequency ratio model. WordNet-based similarity measures rely on the latest WordNet 3.1 release. #### Word Embeddings We use 300-dimensional pre-trained order embeddings of Vendrov et al. available online.[^32] For a detailed description of the training procedure, we refer the reader to the original paper. Gaussian embeddings are trained on the Polyglot Wiki with the vocabulary of the top 200K most frequent single words. We train $300$-dimensional representations using the online tool and default settings suggested by Vilnis and McCallum:[^33] spherical embeddings trained for 200 epochs on a max-margin objective with margin set to 2. We also use pre-trained standard “semantic similarity” word embeddings available online from various sources. $300$-dimensional SGNS-BOW/DEPS vectors are also trained on the Polyglot Wiki: these are the same vectors from [@Levy:2014acl].[^34] $300$-dimensional [Paragram]{} vectors are the same as in [@Wieting:2015tacl][^35], while their extension using a retrofitting procedure ([Paragram+CF]{}) has been made available by Mrkšić et al.[^36] Sparse [Non-Distributional]{} vectors of Faruqui and Dyer are also available online.[^37] Results {#ss:results} ------- Given the wide variety of models and the large space of results in this work, it is not feasible to present all results at once or provide detailed analyses across all potential dimensions of comparison.
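All scores reported below are Spearman's $\rho$ correlations between model rankings and human rankings (see the evaluation setup above). A self-contained sketch of the computation, with average ranks for ties:

```python
def _ranks(values):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(xs, ys):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```

Because only ranks matter, any monotone rescaling of a model's scores leaves its $\rho$ against the HyperLex gold scores unchanged.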
Therefore, we have decided to make a gradual selection of the most interesting experiments and results, and stress (what we consider to be) the most important aspects of the HyperLex evaluation set and modeling architectures in our comparisons. #### Experiment I: Ungraded LE Approaches In the first batch of experiments, we evaluate a series of state-of-the-art traditional LE modelling approaches on the graded LE task using the entire HyperLex evaluation set. The models are described in Sect. \[ss:dem\]-Sect. \[ss:wnsim\]. A summary of the results is provided in Tab. \[tab:results\_dem\]. Comparing model scores with the inter-annotator agreements suggests that the graded LE task, although well-defined and understandable by average native speakers, poses a challenge for current ungraded LE models. The absolute difference in scores between human and system performance indicates that there is vast room for improvement in future work. The gap also illustrates the increased difficulty of the graded LE task compared to previous ungraded LE evaluations (see also Exp. IV). For instance, the best unsupervised LE directionality and detection models from Tab. \[tab:results\_dem\] reach over 70% and up to 90% in precision scores [@Santus:2014eacl; @Kiela:2015acl inter alia] on BLESS and other datasets discussed in Sect. \[ss:sets\].
| Model | Setup 1 | Setup 2 |
|---|---|---|
| FR ($\alpha=0.02, \theta=0.25$) | 0.279 | 0.240 |
| FR ($\alpha=0, \theta=0$) | 0.268 | 0.265 |
| $\text{DEM}_1$ | 0.162 | 0.162 |
| $\text{DEM}_2$ | 0.171 | 0.180 |
| $\text{DEM}_3$ | 0.150 | 0.150 |
| $\text{DEM}_4$ | 0.153 | 0.153 |
| SLQS-Basic | 0.225 | 0.221 |
| SLQS-Sim | 0.228 | 0.226 |
| WN-Basic | 0.207 | 0.207 |
| WN-LCh | 0.214 | 0.214 |
| WN-WuP | 0.234 | 0.234 |
| Vis-ID ($\alpha=0.02, \theta=0$) | 0.203 | 0.203 |
| Vis-Cent ($\alpha=0.02, \theta=0$) | 0.209 | 0.209 |
| IAA-1 | 0.854 | 0.854 |
| IAA-2 | 0.864 | 0.864 |

\[tab:results\_dem\] Previous work on ungraded LE evaluation also detected that frequency is a surprisingly competitive baseline in LE detection/directionality experiments [@Herbelot:2013acl; @Weeds:2014coling; @Kiela:2015acl]. This finding stems from an assumption that the informativeness of a concept decreases and its generality increases as the frequency of the concept increases [@Resnik:1995ijcai]. Although the assumption is a rather big simplification [@Herbelot:2013acl], the results based on simple frequency scores in this work further suggest that the FR model may be used as a very competitive baseline in the graded LE task. The results also reveal that visual approaches are competitive with purely textual distributional ones. In Tab. \[tab:results\_dem\], we have set the parameters according to [@Kiela:2015acl]. Varying the $\alpha$ parameter leads to even better results, e.g., the [Vis-ID]{} model scores $\rho=0.229$ and [Vis-Cent]{} scores $\rho=0.228$ with $\alpha=1$. This finding supports recent trends in multi-modal semantics and calls for more expressive multi-modal LE models as discussed previously by Kiela et al.
To our surprise, the FR model was the strongest model in this first comparison, while directional measures fall short of all other approaches, although prior work suggested that they are tailored to capture the LE relation in particular. As we do not observe any major difference between the two setups for [DEM]{}s and [SLQS]{}, all subsequent experiments use Setup 1. The observed strong correlation between frequency and graded LE supports the intuition that prototypical class instances will be cited more often in text, and are therefore simply more frequent. Even WN-based measures do not lead to huge improvements over [DEM]{}s and fall short of [FR]{}. Since WordNet lacks annotations pertinent to the idea of graded LE, such simple WN-based measures cannot quantify the actual LE degree. The inclusion of the basic “semantic relatedness detector” (as controlled by the parameter $\theta$) does not lead to any significant improvements (e.g., as evident from the comparison of [SLQS-Sim]{} vs. [SLQS-Basic]{}, or $DEM_2$ vs. $DEM_1$). In summary, the large gap between human and system performance, along with the FR superiority over more sophisticated LE approaches from prior work, unambiguously calls for the next generation of distributional models tailored for graded lexical entailment in particular. #### Experiment II: Word Embeddings In the next experiment, we evaluate a series of state-of-the-art word embedding architectures, covering order embeddings (Sect. \[ss:orderemb\]), standard semantic similarity embeddings optimised on SimLex-999 and related word similarity tasks (Sect. \[ss:standard\]), and Gaussian embeddings (Sect. \[ss:gaussian\]). A summary of the results is provided in Tab. \[tab:results\_emb\]. The scores again reveal the large gap between the system performance and human ability to consistently judge the graded LE relation. The scores on average are similar to or even lower than scores obtained in Exp. I.
One trivial reason behind the failure is as follows: word embeddings typically apply the cosine similarity in the Euclidean space to measure the distance between $X$ and $Y$. This leads to the symmetry $dist(X,Y) = dist(Y,X)$ for each pair $(X,Y)$, which is undesired model behaviour for graded LE, as corroborated by our analysis of asymmetry in human judgements (see Tab. \[tab:relations\] and Tab. \[tab:reversed\]). This finding again calls for a new methodology capable of tackling the asymmetry of the graded LE problem in future work.

| Model | All | Nouns | Verbs |
|---|---|---|---|
| FR ($\alpha=0.02, \theta=0.25$) | 0.279 | 0.283 | 0.239 |
| FR ($\alpha=0, \theta=0$) | 0.268 | 0.283 | 0.091 |
| SGNS-BOW (`win=2`) | 0.167 | 0.148 | 0.289 |
| SGNS-DEPS | 0.205 | 0.182 | 0.352 |
| Non-Distributional | 0.158 | 0.115 | 0.543 |
| Paragram | 0.243 | 0.200 | 0.492 |
| Paragram+CF | 0.320 | 0.267 | 0.629 |
| OrderEmb-Cos | 0.156 | 0.162 | 0.005 |
| OrderEmb-DistAll | 0.180 | 0.180 | 0.130 |
| OrderEmb-DistPos | 0.191 | 0.195 | 0.120 |
| Word2Gauss-EL-Cos | 0.192 | 0.171 | 0.207 |
| Word2Gauss-EL-KL | 0.206 | 0.192 | 0.209 |
| Word2Gauss-KL-Cos | 0.190 | 0.179 | 0.160 |
| Word2Gauss-KL-KL | 0.201 | 0.189 | 0.172 |
| IAA-1 | 0.854 | 0.854 | 0.855 |
| IAA-2 | 0.864 | 0.864 | 0.862 |

\[tab:results\_emb\] Dependency-based contexts (SGNS-DEPS) seem to have a slight edge over ordinary bag-of-words contexts (SGNS-BOW), which agrees with findings from prior work on ungraded LE [@Roller:2016arxiv; @Shwartz:2017eacl]. We observe no clear advantage with [OrderEmb]{} and [Word2Gauss]{}, two word embedding models tailored for capturing the hierarchical LE relation naturally in their training objective.
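The symmetry issue noted above can be demonstrated directly: cosine necessarily assigns identical scores in both directions, whereas an asymmetric inclusion-style score (a toy illustration, not one of the evaluated models) does not:

```python
import math

def cosine(x, y):
    """Symmetric by construction: cosine(x, y) == cosine(y, x)."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

def inclusion(x, y):
    """Toy asymmetric score: fraction of x's mass on dimensions where y
    is active (hypothetical illustration of a directional measure)."""
    covered = sum(a for a, b in zip(x, y) if b > 0)
    return covered / sum(x)
```

Any symmetric measure is structurally unable to distinguish the two directions of an entailment pair, regardless of how well it models similarity.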
We notice slight but encouraging improvements with [OrderEmb]{} when resorting to more sophisticated distance metrics, e.g., moving from the straightforward symmetric [Cos]{} measure to [DistPos]{} with [OrderEmb]{}, or using [KL]{} instead of [Cos]{} with [Word2Gauss]{}. As discussed in Sect. \[ss:orderemb\], the off-the-shelf [OrderEmb]{} model was trained for the binary ungraded LE detection task: its expressiveness for graded LE thus remains limited. One line of future work might utilise the [OrderEmb]{} framework with a true graded LE objective, and investigate new [OrderEmb]{}-style representation models fully adapted to the graded LE setting. #### Lexical Entailment and Similarity Hill, Reichart, and Korhonen report that there is a strong correlation between `hyp-N` word pairs and semantic similarity as judged by human raters. For instance, given the same $[0,10]$ continuous rating scale, the average similarity score in SimLex-999 for `hyp-1` pairs is 6.62; it is 6.19 for `hyp-2` pairs, and 5.70 for `hyp-3` and `hyp-4`. In fact, the only group scoring higher than `hyp-N` pairs in SimLex-999 are `syn` pairs, with an average score of 7.70. Therefore, we also evaluate state-of-the-art word embedding models obtaining peak scores on SimLex-999, some of them even obtaining scores above the SimLex-999 IAA-1. The rationale is to test whether HyperLex really captures the fine-grained and subtle notion of graded lexical entailment, or whether the HyperLex annotations were largely driven by decisions at the broader level of semantic similarity. ![Results on the intersection subset of 111 concept pairs annotated both in SimLex-999 (for similarity) and in HyperLex (for graded LE).](./simhyp_intersect_new.pdf){width="0.77\linewidth"} \[fig:simhyp\] Another look into Tab. \[tab:results\_emb\] indicates an evident link between the LE relation and semantic similarity.
Positive correlation scores for all models reveal that pairs with high graded LE scores naturally imply some degree of semantic similarity, e.g., [*author / creator*]{}. However, the scores with similarity-specialised models are much lower than the human performance in the graded LE task, which suggests that they cannot capture the intricacies of the task accurately. More importantly, there is a dramatic drop in performance when evaluating exactly the same models in the semantic similarity task (i.e., graded synonymy) on SimLex-999 vs. the graded LE task on HyperLex. For instance, the two best-performing word embedding models on SimLex-999 are [Paragram]{} and [Paragram+CF]{}, reaching Spearman’s $\rho$ correlations of 0.685 and 0.742, respectively, with SimLex-999 IAA-1 = 0.673, IAA-2 = 0.778. At the same time, the two models score 0.243 and 0.320 on HyperLex, respectively, where the increase in scores for [Paragram+CF]{} may be attributed to its explicit control of antonyms through dictionary-based constraints. A similar decrease in scores is observed with other models in our comparisons, e.g., [SGNS-BOW]{} falls from 0.415 on SimLex-999 to 0.167 on HyperLex. To further examine this effect, we have performed a simple experiment using only the intersection of the two evaluation sets, comprising 111 word pairs in total (91 nouns and 20 verbs), for evaluation. The results of selected embedding models on the 111 pairs are shown in Fig. \[fig:simhyp\]. It is evident that all state-of-the-art word embedding models are significantly better at capturing semantic similarity. In summary, the analysis of results with distributed representation models on SimLex-999 and HyperLex suggests that the human understanding of the graded LE relation is not conflated with semantic similarity.
Human scores assigned to word pairs in both SimLex-999 and HyperLex truly reflect the nature of the annotated relation: semantic similarity in the case of SimLex-999 and graded lexical entailment in the case of HyperLex.

#### Experiment III: Nouns vs. Verbs

In the next experiment, given the theoretical likelihood of variation in model performance across POS categories mentioned in Sect. \[ss:choice\], we assess the differences in results on the noun (N) and verb (V) subsets of HyperLex. The results of “traditional” LE models (Exp. I) are provided in Tab. \[tab:results\_nv\]. Tab. \[tab:results\_emb\] shows the results of word embedding models. IAA scores on both POS subsets are very similar and reasonably high, implying that human raters did not find it more difficult to rate verb pairs. However, we observe differences in performance over the two POS-based HyperLex subsets. First, DEMs obtain much lower scores on the verb subset. This may be attributed to a larger variability of context features for verbs, which also affects the pure distributional models relying on the distributional inclusion hypothesis. WN-based approaches, which rely on an external curated knowledge base, do not show the same pattern, with comparable results over pairs of both word classes. Visual models also score better on nouns, which may again be explained by the increased level of abstractness when dealing with verbs. This, in turn, leads to greater visual variability and incoherence in visual concept representations.
| Model | **Nouns** | **Verbs** |
|:---|:---:|:---:|
| [FR]{} ($\alpha=0.02, \theta=0.25$) | 0.283 | 0.239 |
| [FR]{} ($\alpha=0, \theta=0$) | 0.283 | 0.091 |
| [$\text{DEM}_1$]{} | 0.180 | 0.018 |
| [$\text{DEM}_2$]{} | 0.170 | 0.047 |
| [$\text{DEM}_3$]{} | 0.164 | 0.108 |
| [$\text{DEM}_4$]{} | 0.167 | 0.109 |
| [SLQS-Basic]{} | 0.224 | 0.247 |
| [SLQS-Sim]{} | 0.229 | 0.232 |
| | 0.240 | 0.263 |
| [WN-LCh]{} | 0.214 | 0.260 |
| [WN-WuP]{} | 0.214 | 0.269 |
| | 0.253 | 0.137 |
| [Vis-Cent ($\alpha=1, \theta=0$)]{} | 0.252 | 0.132 |
| [IAA-1]{} | 0.854 | 0.855 |
| [IAA-2]{} | 0.864 | 0.862 |

\[tab:results\_nv\]

For word embedding models, we notice that scores on the V subset are significantly higher than on the N subset. To isolate the influence of test set size, we have also repeated the experiments with random subsets of the N subset, equal in size to the V subset (453 pairs). We observe the same trend even with such smaller N test sets, leading to the conclusion that the difference in results stems from a fundamental difference in how humans perceive nouns and verbs. Human raters seem to associate the LE relation with similarity more frequently in the case of verbs, and they do so consistently (based on the IAA scores). We speculate that it is indeed easier for humans to think in terms of semantic taxonomies when dealing with real-world entities (e.g., concrete nouns) than with more abstract events and actions, as expressed by verbs. Another reason could be that, when humans make judgements over verb semantics, syntactic features become more important and implicitly influence the judgements. This effect is supported by research on the automatic acquisition of verb semantics, in which syntactic features have proven particularly important [@Kipper:2008lrec; @Korhonen:2010rs inter alia]. We leave the underlying causes at the level of speculation.
A deeper exploration here is beyond the scope of this work, but this preliminary analysis already highlights how the principal word classes integrated in HyperLex are pertinent to a range of questions concerning distributional, lexical, and cognitive semantics.

#### Experiment IV: Ungraded vs. Graded LE

We also analyse the usefulness of HyperLex as a data set for ungraded LE evaluations and study the differences between graded LE and one ungraded LE task: hypernymy/LE directionality (see Sect. \[ss:evalprot\]). First, we have converted a subset of HyperLex into a data set for LE directionality experiments similar to BLESS, by retaining only `hyp-N` pairs from HyperLex (as indicated by WordNet) with a graded LE score $\geq$ 7.0. The subset contains 940 $(X,Y)$ pairs in total (of which 121 are verb pairs), where $Y$ in each pair may be seen as the hypernym. Following that, we run a selection of ungraded LE models from Sect. \[s:experiments\] tailored to capture directionality, and compare the scores of the same models in the graded LE task on this HyperLex subset containing “true hypernymy-hyponymy” pairs.
| Model | **All** | **Nouns** | **Verbs** | **All** | **Nouns** | **Verbs** |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| [FR]{} ($\alpha=0, \theta=0$) | 0.760 | 0.778 | 0.636 | 0.089 | 0.104 | 0.032 |
| [$\text{DEM}_1$]{} | 0.700 | 0.696 | 0.726 | -0.072 | -0.102 | -0.071 |
| [$\text{DEM}_2$]{} | 0.700 | 0.696 | 0.726 | -0.070 | -0.050 | -0.042 |
| [$\text{DEM}_3$]{} | 0.696 | 0.684 | 0.777 | 0.036 | 0.063 | 0.115 |
| [$\text{DEM}_4$]{} | 0.696 | 0.684 | 0.777 | 0.036 | 0.064 | 0.110 |
| [SLQS-Basic]{} | 0.747 | 0.734 | 0.835 | 0.088 | 0.121 | -0.036 |
| [SLQS-Sim]{} | 0.749 | 0.734 | 0.851 | 0.163 | 0.126 | -0.012 |
| [OrderEmb]{} | 0.578 | 0.578 | 0.571 | 0.048 | 0.068 | 0.029 |

\[tab:ungraded\]

The frequency baseline considers the more frequent concept in the pair as the hypernym. For the $\text{DEM}_1$–$\text{DEM}_4$ models (Sect. \[ss:dem\]), the prediction of directionality is based on the asymmetry of the measure: if $DEM_i(X,Y) > DEM_i(Y,X)$, the inclusion of the features of $X$ within the features of $Y$ is higher than the reverse, which in turn implies that $Y$ is the hypernym in the pair. Further, $SLQS(X,Y) > 0$ implies that $Y$ is a semantically more general concept and is therefore the hypernym (see Sect. \[ss:gem\]).[^38] With <span style="font-variant:small-caps;">OrderEmb</span>, smaller coordinates mean a higher position in the partial order: we compute and compare the $DistPos(\vec{X},\vec{Y})$ and $DistPos(\vec{Y},\vec{X})$ scores to find the hypernym. The results are summarised in Tab. \[tab:ungraded\]: the first three score columns report binary precision in the directionality task, while the last three report the performance of the same models in the graded LE task on this subset.
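The decision rules just described are simple to state in code. The sketch below is our own illustration (the `dem` and `slqs` arguments stand for any asymmetric DEM measure and any SLQS scorer; they are assumptions, not the original implementations):

```python
def hypernym_by_asymmetry(x, y, dem):
    """DEM-style rule: if dem(x, y) > dem(y, x), the features of x are
    included within the features of y more than the reverse, so y is
    predicted to be the hypernym of the pair."""
    return y if dem(x, y) > dem(y, x) else x

def hypernym_by_generality(x, y, slqs):
    """SLQS-style rule: slqs(x, y) > 0 means y is the semantically more
    general concept, hence the predicted hypernym."""
    return y if slqs(x, y) > 0 else x
```

Binary directionality precision is then simply the fraction of pairs for which the predicted hypernym matches the gold $Y$.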
They reveal that frequency is a strong indicator of directionality, but further improvements, especially for verbs, may be achieved by resorting to asymmetric and generality measures. The reasonably high scores observed in our ungraded directionality experiments are also reported for the detection task in prior work [@Shwartz:2017eacl]. The graded LE results on the HyperLex subset are markedly lower than the results with the same models on the entire HyperLex: this shows that fine-grained differences in human ratings at the high end of the graded LE spectrum are even more difficult to capture with current statistical models. The main message conveyed by the results in Tab. \[tab:ungraded\] is that the output of models built for ungraded LE indeed cannot be used as an estimate of graded LE. In other words, the relative entropy or the measure of distributional inclusion between two concepts can be used to reliably detect which concept is the hypernym in the directionality task, or to distinguish between LE and other relations in the detection task, but it yields a poor global estimate of LE strength for graded LE experiments.

Supervised Settings: Regression Models {#ss:regression}
--------------------------------------

We also conduct preliminary experiments in supervised settings, relying on the random and lexical splits of HyperLex introduced in Sect. \[s:analysis\] (see Tab. \[tab:stats\]). We experiment with several well-known supervised models from the literature: they typically represent a concept pair as a combination of the two concepts’ embedding vectors: concatenation $\vec{X} \oplus \vec{Y}$ [@Baroni:2012eacl], difference $\vec{Y} - \vec{X}$ [@Roller:2014coling; @Weeds:2014coling; @Fu:2014acl], or element-wise multiplication $\vec{X} \odot \vec{Y}$ [@Levy:2015naacl].
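These three pair representations are straightforward to construct; a minimal sketch (function and mode names are our own, for illustration):

```python
import numpy as np

def pair_features(X, Y, mode="diff"):
    """Combine the embedding vectors of a concept pair (X, Y) into a
    single feature vector for a supervised LE model."""
    if mode == "concat":
        return np.concatenate([X, Y])  # concatenation X (+) Y
    if mode == "diff":
        return Y - X                   # vector difference Y - X
    return X * Y                       # element-wise multiplication
```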
Based on state-of-the-art word embeddings such as [SGNS-BOW]{} or [Paragram]{}, these methods are easy to apply, and show very good results in ungraded LE tasks [@Baroni:2012eacl; @Weeds:2014coling; @Roller:2014coling]. Using the two standardised HyperLex splits, the experimental setup is as follows: we learn a regression model on the [Training]{} set, optimise parameters (if any) on [Dev]{}, and test the model’s predictive power on [Test]{}. We experiment with two linear regression models: (1) standard [*ordinary least squares*]{} ([ols]{}), and (2) [*ridge regression*]{}, or Tikhonov regularisation ([ridge]{}) [@Myers:1990book]. Ridge regression is a variant of least squares regression in which a regularisation term is added to the training objective to favour solutions with certain properties. The regularisation term is the Euclidean L2-norm of the inferred vector of regression coefficients. This term ensures that the regression favours lower coefficients and a smoother solution function, which should provide better generalisation performance than simple [ols]{} linear regression. The [ridge]{} objective is to minimise the following:

$$\lVert \mathbf{Q}\vec{a} - \vec{s} \rVert_2^2 + \lVert \mathbf{\Gamma}\vec{a} \rVert_2^2,$$

where $\vec{a}$ is the vector of regression coefficients, and $\mathbf{Q}$ is a matrix of feature representations for each concept pair $(X,Y)$ obtained using concatenation, difference, or element-wise multiplication. $\vec{s}$ is the vector of graded LE strengths for each concept pair, and $\mathbf{\Gamma}$ is a suitably chosen Tikhonov matrix. We rely on the most common choice: a multiple of the identity matrix, $\mathbf{\Gamma} = \beta \mathbf{I}$. The effect of regularisation is thus varied via the $\beta$ hyperparameter, which is optimised on the [Dev]{} set. Setting $\beta=0$ reduces the model to the unregularised [ols]{} solution.

#### Results {#results}

Following related work, we rely on a selection of state-of-the-art word embedding models to provide the feature vectors $\vec{X}$ and $\vec{Y}$.
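For concreteness, the ridge estimator described in Sect. \[ss:regression\] admits a closed-form solution; a minimal sketch with $\mathbf{\Gamma} = \beta \mathbf{I}$ (our own illustration, not the exact experimental code): each row of `Q` is the feature vector of one training pair, and `s` holds the corresponding graded LE ratings.

```python
import numpy as np

def ridge_fit(Q, s, beta=0.0):
    """Closed-form minimiser of ||Q a - s||^2 + beta^2 ||a||^2:
    a = (Q^T Q + beta^2 I)^{-1} Q^T s; beta=0 recovers plain ols."""
    d = Q.shape[1]
    return np.linalg.solve(Q.T @ Q + beta**2 * np.eye(d), Q.T @ s)

def predict(Q, a):
    """Predicted graded LE strengths for a feature matrix Q."""
    return Q @ a
```

In the experimental setup above, $\beta$ would be tuned on the [Dev]{} split before reporting [Test]{} correlations.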
The results of a variety of tested regression models are summarised in Fig. \[fig:regrandom\] (random split) and Fig. \[fig:reglexical\] (lexical split). As another reference point, we also report results with several unsupervised models on the two smaller test sets in Tab. \[tab:randomlexical\].

| Model | **Random** | **Lexical** |
|:---|:---:|:---:|
| [FR]{} ($\alpha=0.02, \theta=0.25$) | 0.299 | 0.199 |
| [$\text{DEM}_1$]{} | 0.212 | 0.188 |
| [$\text{DEM}_2$]{} | 0.220 | 0.142 |
| [$\text{DEM}_3$]{} | 0.142 | 0.177 |
| [$\text{DEM}_4$]{} | 0.145 | 0.178 |
| | 0.223 | 0.179 |
| | 0.189 | 0.255 |
| [WN-WuP]{} | 0.212 | 0.261 |
| | 0.203 | 0.201 |
| [Vis-Cent ($\alpha=1, \theta=0$)]{} | 0.207 | 0.209 |
| [IAA-1]{} | 0.849 | 0.846 |
| [IAA-2]{} | 0.862 | 0.857 |

\[tab:randomlexical\]

The IAA scores in Tab. \[tab:randomlexical\] again indicate that there is firm agreement between annotators on the two test sets, and that automatic systems still display a large gap to human performance. The scores on the smaller test sets follow similar patterns as on the entire HyperLex. We see a slight increase in performance for similarity-specialised models (e.g., [WN]{}-based models or [Paragram+CF]{}) on the lexical split. We attribute this increase to the larger percentage of verb pairs in the lexical test set, shown to be better modelled with similarity-oriented embeddings in the graded LE task. Verb pairs constitute 17.3% of the entire random [test]{} set, the same percentage as in the entire HyperLex, while the number is 26.4% for the lexical [test]{} set. Our results reaffirm the finding that supervised distributional methods indeed perform worse on a lexical split [@Levy:2015naacl; @Shwartz:2016arxiv].
Besides the smaller training set available in a lexical split, this finding is also explained by the effect of lexical memorisation with a random split: if high scores are systematically assigned to training pairs *($X_1$, animal)* or *($X_2$, appliance)*, the model will simply memorise that each pair *($Y_1$, animal)* or *($Y_2$, appliance)* should be assigned a high score during inference. The impact of lexical memorisation is illustrated in Tab. \[tab:memorisation\] using a sample of concept pairs containing ‘prototypical hypernyms’ [@Roller:2016arxiv] such as *animal*: the regression models assign high scores even to clear negatives such as *(plant, animal)*. However, the effect of lexical memorisation also partially explains the improved performance of all regression models over the [Unsupervised]{} baselines for a random split, as many *($X_t$, animal)* pairs are indeed assigned high scores in the test set.

| Pair | HyperLex | [ols]{} | [ridge]{} |
|:---|:---:|:---:|:---:|
| (plant, animal) | 0.13 | 6.95 | 7.39 |
| (mammal, animal) | 10.0 | 7.14 | 7.43 |
| (animal, mammal) | 1.25 | 6.61 | 4.99 |
| (rib, animal) | 0.35 | 6.94 | 7.08 |
| (reader, person) | 7.43 | 7.47 | 6.97 |
| (foot, plant) | 0.42 | 7.86 | 6.05 |
| (fungus, plant) | 4.75 | 7.94 | 7.51 |
| (dismiss, go) | 3.97 | 4.22 | 4.29 |
| (dinner, food) | 4.85 | 9.36 | 8.63 |

\[tab:memorisation\]

On the other hand, we also notice that almost all [ols]{} regression models and a large number of [ridge]{} models in a lexical split cannot beat unsupervised model variants that involve no model learning at all. This suggests that the current state-of-the-art methodology in supervised settings is indeed limited in such scenarios and cannot learn satisfying generalisations regarding the type-of relation between words in training pairs.
We suspect that another reason behind the strong results with the semantically specialised [Paragram+CF]{} model in the unsupervised setting for the lexical split is the larger percentage of verbs in the lexical test set, as well as its explicit handling of antonymy, as mentioned earlier. The model explicitly penalises antonyms through dictionary-based constraints (i.e., pushes them away from each other in the vector space), a property which is desired both for semantic similarity *and* graded LE (see the low scores for the `ant` relation in Tab. \[tab:relations\]). The variation in results across the tested supervised model variants also indicates that the performance of a regression model is strongly dependent on the actual choice of the underlying representation model and feature transformation, as well as the chosen regression algorithm. First, the results on a random split reveal that the best unsupervised representation model does not necessarily yield the best supervised model, e.g., higher results are observed with [SGNS-DEPS]{} than with [Paragram]{} in that setting. [OrderEmb]{} is by far the weakest model in our comparison. Second, there is no clear winner in the comparison of the three feature representations. While vector difference ($\vec{Y}-\vec{X}$) and concatenation ($\vec{X}\oplus\vec{Y}$) seem to yield higher scores overall for a majority of models, element-wise multiplication obtains the highest scores in a lexical split with [Paragram]{} and [Paragram+CF]{}. The variation clearly suggests that supervised models have to be carefully tuned in order to perform effectively on the graded LE task. Finally, consistent improvements of [ridge]{} over [ols]{} across all splits, models, and feature transformations reveal that the choice of a regression model matters. This preliminary analysis advocates the use of more sophisticated learning algorithms in future work.
Another line of research could investigate how to exploit more training data from resources other than HyperLex to yield improved graded LE models.

Further Discussion: Specialising Semantic Spaces {#ss:further}
------------------------------------------------

Following the growing interest in word representation learning, this work also touches upon the ideas of vector/semantic space specialisation: a desirable property of representation models is their ability to steer their output vector spaces according to explicit linguistic and dictionary knowledge [@Yu:2014acl; @Wieting:2015tacl; @Faruqui:2015naacl; @Astudillo:2015acl; @Liu:2015acl; @Mrksic:2016naacl; @Vulic:2017acl inter alia]. Previous work showed that it is possible to build vector spaces specialised for capturing different lexical relations, e.g., antonymy [@Yih:2012emnlp; @Ono:2015naacl], or for distinguishing between similarity and relatedness [@Kiela:2015emnlp]. Yet it remains to be seen how to build a representation model specialised for the graded LE relation. An analogy with (graded) semantic similarity is appropriate here: it was recently demonstrated that vector space models specialised for similarity and scoring high on SimLex-999 and SimVerb-3500 are able to boost the performance of statistical systems in language understanding tasks such as *dialogue state tracking* [@Mrksic:2016naacl; @Mrksic:2017acl; @Vulic:2017acl]. Along the same lines, we expect that the specification of what the degree of LE means for each individual pair may in future work also boost the performance of end-to-end statistical systems in another language understanding task: natural language inference [@Bowman:2015emnlp; @Parikh:2016emnlp; @Agic:2017arxiv].
Owing to their adaptability and versatility, we believe that representation architectures inspired by neural networks, e.g., [@Mrksic:2016naacl; @Vendrov:2016iclr], are a promising avenue for future modeling work on graded lexical entailment in both unsupervised and supervised settings, despite their low performance on the graded LE task at present.

Application Areas: A Quick Overview {#s:application}
===================================

The proposed data set should have an immediate impact on cognitive science research, providing the means to analyse the effects of typicality and gradience in concept representations [@Hampton:2007cogsci; @Decock:2014nous]. Besides this, a variety of other research domains share an interest in taxonomic relations, automatic methods for their extraction from text, the completion of rich knowledge bases, etc. Here, we provide a quick overview of such application areas for the graded lexical entailment framework and the HyperLex data set.

#### Natural Language Processing

As discussed in depth in Sect. \[s:motivation\], lexical entailment is an important linguistic task in its own right [@Rimell:2014eacl]. Graded LE introduces a new challenge and a new evaluation protocol for data-driven distributional LE models. In current binary evaluation protocols targeting ungraded LE detection and directionality, even simple methods modeling lexical generality are able to yield very accurate predictions. However, our preliminary analysis in Sect. \[ss:results\] demonstrates their fundamental limitations for graded lexical entailment. In addition to the use of HyperLex as a new evaluation set, we believe that the introduction of graded LE will have implications for how the distributional hypothesis [@Harris:1954word] is exploited in distributional models targeting taxonomic relations in particular [@Rubinstein:2015acl; @Shwartz:2016arxiv; @Roller:2016arxiv inter alia].
Further, the tight connection of LE with the broader phrase-/sentence-level task of recognising textual entailment (RTE) [@Dagan:2006pascal; @Dagan:2013book] should lead to further implications for text generation [@Biran:2013ijcnlp], metaphor detection [@Mohler:2013metaws], question answering [@Sacaleanu:2008coling], paraphrasing [@Androutsopoulos:2010jair], etc.

#### Representation Learning

Previous work on representation learning has mostly focused on the relations of semantic similarity and relatedness, as evidenced by the surge of interest in evaluating word embeddings on datasets such as SimLex-999, WordSim-353, MEN [@Bruni:2014jair], Rare Words [@Luong:2013conll], etc. This strong focus on similarity and relatedness means that other fundamental semantic relations such as lexical entailment have been largely overlooked in the representation learning literature. Notable exceptions building word embeddings for LE have appeared only recently (see the work of Vendrov et al. and a short overview in Sect. \[ss:further\]), but a comprehensive resource for the intrinsic evaluation of such [*LE embeddings*]{} is still missing. There is a pressing need to improve, broaden, and introduce new evaluation protocols and datasets for representation learning architectures [@Schnabel:2015emnlp; @Tsvetkov:2015emnlp; @Yaghoobzadeh:2016acl; @Faruqui:2016arxiv; @Batchkarov:2016repeval inter alia].[^39] We believe that one immediate application of HyperLex is its use as a comprehensive, wide-coverage evaluation set for representation-learning architectures focused on the fundamental <span style="font-variant:small-caps;">type-of</span> taxonomic relation.

#### Data Mining: Extending Knowledge Bases

Ontologies and knowledge bases such as WordNet, Yago, or DBPedia are useful resources in a variety of applications such as text generation, question answering, and information retrieval, or for simply providing structured knowledge to users.
Since they typically suffer from incompleteness and a lack of reasoning capability, a strand of research [@Snow:2004nips; @Suchanek:2007www; @Bordes:2011aaai; @Socher:2013nips; @Lin:2015aaai] attempts to extend existing knowledge bases using patterns or classifiers applied to large text corpora. One of the fundamental relations in all knowledge bases is the [type-of]{}/[instance-of]{}/[is-a]{} LE relation (see Tab. \[tab:kbs\] in Sect. \[ss:sets\]). HyperLex may again be used straightforwardly as a wide-coverage evaluation set for such knowledge base extension models: it provides an opportunity to evaluate statistical models that tackle the problem of graded LE.

#### Cognitive Science

Inspired by theories of prototypicality and graded membership, HyperLex is a repository of human graded LE scores which could be exploited in cognitive linguistics research [@Taylor:2003book] and in other applications in cognitive science [@Gardenfors:2004book; @Hampton:2007cogsci]. For instance, reasoning over lexical entailment is related to analogical transfer: transferring information from past experience (the source domain) to a new situation (the target domain) [@Gentner:1983cogsci; @Holyoak:2012book]. Upon seeing an unknown animate object called a [*wampimunk*]{} or a [*huhblub*]{} which resembles a [*dog*]{}, one is likely to conclude that such [*huhblubs*]{} are to a large extent types of [*animals*]{}, although definitely not prototypical instances such as [*dogs*]{}.

#### Information Search

Graded LE may find application in relational Web search [@Cafarella:2006www; @Kato:2009cikm; @Kopliku:2011jcdl].
A user of a relational search engine might pose the query: [*“List all animals with four legs”*]{} or [*“List manners of slow movement.”*]{} A system aware of the degree of LE would be better suited to relational search than a simple discrete classifier: the relational engine could rank the output list so that more prototypical instances are cited first (e.g., [*dogs*]{}, [*cats*]{} or [*elephants*]{} before [*huhblubs*]{} or [*wampimunks*]{}). This has a direct analogy with how standard search engines rank documents or Web pages in descending order of relevance to the user’s query. Further, taxonomy keyword search [@Song:2011ijcai; @Liu:2012kdd; @Wu:2012sigmod] is another prominent problem in information search and retrieval where such knowledge of lexical entailment relations may be particularly useful.

#### Beyond the Horizon: Multi-Modal Modeling

From a high-level perspective, autonomous artificial agents will need to jointly model vision and language in order to parse the visual world and communicate with people. Lexical entailment, textual entailment, and image captioning can be seen as special cases of a partial order over unified visual-semantic hierarchies [@Deselaers:2011cvpr; @Vendrov:2016iclr]; see also Fig. \[fig:vissem\] again. For instance, image captions may be seen as abstractions of images, and they can be expressed at various levels of the hierarchy. The same image may be abstracted as, e.g., [*A boy and a girl walking their dog*]{}, [*People walking their dog*]{}, [*People walking*]{}, [*A boy, a girl, and a dog*]{}, [*Children with a dog*]{}, [*Children with an animal*]{}, etc. Lexical entailment might prove helpful in research on, e.g., image captioning [@Hodosh:2013jair; @Socher:2014tacl; @Bernardi:2016jair] or cross-modal information retrieval [@Pereira:2014pami] based on such visual-semantic hierarchies, but it remains to be seen whether the knowledge of gradience and prototypicality may contribute to image captioning systems.
Image generality is closely linked to semantic generality, as is evident from recent work [@Deselaers:2011cvpr; @Kiela:2015acl]. The data set could also be very useful in evaluating models that ground language in the physical world [@Silberer:2012emnlp; @Silberer:2014acl; @Bruni:2014jair inter alia]. Future work might also investigate attaching graded LE scores to large hierarchical image databases such as ImageNet [@Deng:2009cvpr; @Russakovsky:2015ijcv].

Conclusion {#s:conclusion}
==========

While the ultimate test of semantic models is their usefulness in downstream applications, the research community is still in need of wide-coverage, comprehensive gold standard resources for intrinsic evaluation [@Camacho:2015acl; @Schnabel:2015emnlp; @Tsvetkov:2015emnlp; @Hashimoto:2016tacl; @Gladkova:2016repeval inter alia]. Such resources can measure the general quality of the representations learned by semantic models, prior to their integration in end-to-end systems. We have presented HyperLex, a large wide-coverage gold standard resource for the evaluation of semantic representations targeting the lexical relation of [*graded*]{} lexical entailment (LE), also known as the hypernymy-hyponymy or [type-of]{} relation, which is fundamental to the construction and understanding of concept hierarchies, that is, semantic taxonomies. Given that the problem of concept category membership is central to many cognitive science problems focused on semantic representation, we believe that HyperLex will also find its use in this domain. The development of HyperLex was principally inspired and motivated by several factors. First, unlike prior work on lexical entailment in NLP, it focuses on the relation of graded or soft lexical entailment on a continuous scale: the relation quantifies the strength of the [type-of]{} relation between concepts rather than simply making a binary decision, as with the ungraded LE variant surveyed in Sect. \[s:motivation\].
Graded LE is firmly grounded in the cognitive linguistic theory of class prototypes [@Rosch:1973:natural; @Rosch:1975cognitive] and graded membership [@Hampton:2007cogsci], which states that some concepts are more central to a broader category/class than others (prototypicality), or that some concepts belong to the category only to some extent (graded membership). For instance, [*basketball*]{} is more frequently cited as a prototypical [*sport*]{} than [*chess*]{} or [*wrestling*]{}. One purpose of HyperLex is to examine the effects of prototypicality and graded membership in human judgements, as well as to provide a large repository (HyperLex contains 2,616 word pairs in total) of concept pairs annotated for graded lexical entailment. A variety of analyses in Sect. \[s:analysis\] show that the effects are indeed prominent. Second, while existing gold standards measure the ability of models to capture similarity or relatedness, HyperLex is the first crowdsourced data set with the relation of (graded) lexical entailment as its primary target. As such, it will serve as an invaluable evaluation resource for representation learning architectures tailored for this principal lexical relation, which has plenty of potential applications, as indicated in Sect. \[s:application\]. Analysis of the HyperLex ratings from more than 600 annotators, all native English speakers, showed that subjects can consistently quantify graded LE, and distinguish it from the broader notion of similarity/relatedness and from other prominent lexical relations (e.g., cohyponymy, meronymy, antonymy), based on simple, non-expert, intuitive instructions. This is supported by high inter-annotator agreement scores on the entire data set, as well as on different subsets of HyperLex (e.g., POS categories, WordNet relations).
Third, as we wanted HyperLex to be wide-coverage and representative, the construction process guaranteed that the data set covers concept pairs of different POS categories (nouns and verbs), at different levels of concreteness, and concept pairs standing in different relations according to WordNet. The size and coverage of HyperLex make it possible to compare the strengths and weaknesses of various representation models via statistically robust analyses on specific word classes, and to investigate human judgements in relation to such different properties. The size of HyperLex also enables supervised learning, for which we provide two standard data set splits [@Levy:2015naacl; @Shwartz:2016arxiv] into training, test, and development subsets. To dissect the key properties of HyperLex, we conducted a spectrum of experiments and evaluations with the most prominent state-of-the-art classes of lexical entailment and embedding models available in the literature. One clear conclusion is that current lexical entailment models optimised for the ungraded LE variant perform very poorly in general. There is clear room under the inter-annotator agreement ceiling to guide the development of the next generation of distributional models: the low performance can be partially mitigated by focusing models on the graded LE variant and by developing new, more expressive architectures for LE in future work. Even the analyses with a selection of prominent supervised LE models reveal the huge gap between human and system performance in the graded LE task. Future work thus needs to find a way to conceptualise and encode the graded LE idea into distributional models to tackle the task effectively.
Despite their poor performance at present, we believe that a promising step in that direction is offered by the recently proposed approaches to LE inspired by neural networks [@Vilnis:2015iclr; @Vendrov:2016iclr], mostly owing to their conceptual distinction from other distributional modeling approaches, complemented by their adaptability and flexibility. In addition, in order to model hierarchical semantic knowledge more accurately, in future work we may require algorithms that are better suited to fast learning from few examples [@Lake:2011cogsci], and that have some flexibility with respect to sense-level distinctions [@Reisinger:2010naacl; @Neelakantan:2014emnlp; @Jauhar:2015naacl; @Suster:2016naacl]. Despite the abundance of experiments and analyses reported in this work, we have only scratched the surface in terms of the possible analyses with HyperLex and of the use of such models as components of broader phrase- and sentence-level textual entailment systems, as well as in other applications, as quickly surveyed in Sect. \[s:application\]. Beyond the preliminary conclusions from these initial analyses, we believe that the benefit of HyperLex will become evident as researchers use it to probe the relationship between architectures, algorithms, and representation quality for a wide range of concepts. A better understanding of how to represent the full diversity of concepts (with LE grades attached) in hierarchical semantic networks should in turn yield improved methods for encoding and interpreting the hierarchical semantic knowledge which constitutes much of the important information in language.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work is supported by the ERC Consolidator Grant (no. 648909). DK and FH performed their work while they were still at the University of Cambridge.

[^1]: Language Technology Lab (LTL), Department of Theoretical and Applied Linguistics, University of Cambridge, 9 West Road, CB3 9DP Cambridge, UK.
E-mail: `{iv250|dsg40|alk23}@cam.ac.uk` [^2]: Facebook AI Research, 770 Broadway, New York, NY 10003, USA. E-mail: `dkiela@fb.com` [^3]: Google DeepMind, 7 Pancras Square, London N1C 4AG, UK. E-mail: `felixhill@google.com` [^4]: Due to dual and inconsistent use in prior work, in this work we use the term [*lexical entailment (LE)*]{} in its stricter definition: it refers precisely to the taxonomical [*hyponymy-hypernymy*]{} relation, also known as the [type-of]{} or [is-a]{} relation. More details on the distinction between taxonomical and substitutable LE are provided in Sect. \[s:graded\]. [^5]: HyperLex is available online at: `http://people.ds.cam.ac.uk/iv250/hyperlex.html` [^6]: For instance, Turney and Mohammad argue that in the sentences [*Jane dropped the glass*]{} and [*Jane dropped something fragile*]{}, the concept [*glass*]{} should entail [*fragile*]{}. [^7]: Other variants of the same definition replace [type-of]{} with [kind-of]{} or [instance-of]{}. [^8]: For instance, given the lexical relation classification scheme of Bejar et al. , LE or [Class-Inclusion]{} is only one of the 10 high-level relation classes. [^9]: From the SimLex-999 guidelines: “Two words are synonyms if they have very similar meanings. Synonyms represent the same type or category (...) you are asked to compare word pairs and to rate how [*similar*]{} they are...” Synonymy and LE capture different aspects of meaning regarding semantic hierarchies/taxonomies: e.g., while the pair [*(mouse, rat)*]{} receives a score of 7.78 in SimLex-999 (on the scale 0-10), the same pair has a graded LE score of 2.22 in HyperLex. [^10]: The terms intension and extension assume classical intensional and extensional definitions of a concept, e.g., [@VanBenthem:1996book; @Baronett:2012book]. 
[^11]: Typical choices are feature vector concatenation ($\vec{X} \oplus \vec{Y}$), difference ($\vec{Y} - \vec{X}$), or element-wise multiplication ($\vec{X} \odot \vec{Y}$), where $\vec{X}$ and $\vec{Y}$ are feature vectors of concepts X and Y. [^12]: http://w3.usf.edu/FreeAssociation/ [^13]: https://wordnet.princeton.edu/ [^14]: We have decided to leave out adjectives: they are represented in USF to a lesser extent than nouns and verbs, and it is thus not possible to sample large enough subsets of adjective pairs across different lexical relations and lexical entailment levels, i.e., only `syn` and `ant` adjective pairs are available in USF. [^15]: POS categories are generally considered to reflect very broad ontological classes [@Fellbaum:1998wn]. We thus felt it would be very difficult, or even counter-intuitive, for annotators to rate mixed POS pairs. [^16]: The numbers are available as part of the USF annotations. [^17]: Note that pairs with the same concept but without any offensive connotation were included in the pools, e.g., [*weed / grass*]{}, [*weed / plant*]{}, or [*ecstasy / feeling*]{}. [^18]: The final number was obtained after randomly discarding a small number of pairs for each relation in order to distribute the pairs of both POS categories into tranches of equal size in the crowdsourcing study, see Sect. \[ss:questionnaire\]. [^19]: Determining the set of exact senses for a given concept, and then the set of contexts that represent those senses, introduces a high degree of subjectivity into the design process. Furthermore, in the infrequent case that some concept $X$ in a pair $(X,Y)$ is genuinely (etymologically) polysemous, $Y$ can provide sufficient context to disambiguate $X$ [@Hill:2015cl; @Leviant:2015arxiv]. [^20]: https://prolific.ac/ (We chose PA for logistic reasons.) 
[^21]: https://www.qualtrics.com/ [^22]: Note that the IAAs are not computed on the entire data set, but are in fact computed per tranche, as one worker annotated only one tranche. Exactly the same IAA computation was used previously by Hill et al. . [^23]: For instance, if the training set contains the concept pairs [*(dog / animal)*]{}, [*(cow, animal)*]{}, and [*(cat, animal)*]{}, all assigned very high LE scores or annotated as positive examples in the case of ungraded LE evaluation, the algorithm may learn that [*animal*]{} is a prototypical hypernym, assigning any new $(X, animal)$ pair a very high score, regardless of the actual relation between $X$ and [*animal*]{}; additional analyses are provided in Sect. \[ss:regression\]. [^24]: Note that, unlike with similarity scores, the score now refers to an asymmetric relation stemming from the question [*“Is X a type of Y”*]{} for the word pair $(X,Y)$. Therefore, the scores for two reverse pairs $(X,Y)$ and $(Y,X)$ should be different, see also Tab. \[tab:reversed\]. [^25]: E.g., if [*animal*]{} is a hypernym of [*crocodile*]{}, one expects (i) that a number of context features of [*crocodile*]{} are also features of [*animal*]{}, and (ii) that a number of context features of [*animal*]{} are not context features of [*crocodile*]{}. As a semantically broader concept, [*animal*]{} is also found in contexts in which animals other than crocodiles occur. [^26]: As is common practice in multi-modal semantics, each image representation is obtained by extracting the $4096$-dimensional pre-softmax layer from a forward pass in a convolutional neural network (CNN) [@Krizhevsky:2012nips; @Simonyan:2015iclr] that has been trained on the ImageNet classification task using Caffe [@Jia:2014mm; @Russakovsky:2015ijcv]. [^27]: Smaller coordinates imply a higher position in the partial order. The origin is then the top element of the order, representing the most general concept. 
[^28]: 'Closeness' or hypernymy level for $(X,Y)$ may be measured by the shortest WN path connecting $X$ and $Y$. [^29]: Vilnis and McCallum use a simplification where $\mathbold{\sigma}_{X}$ is represented as a $K$-dimensional vector (so-called [*diagonal*]{} Gaussian embeddings) or a scalar ([*spherical*]{} embeddings). [^30]: TypeDM is a variant of the Distributional Memory (DM) framework, where distributional information is represented as a set of weighted word-link-word tuples $\langle\langle w_1,l,w_2 \rangle,\delta \rangle$, where $w_1$ and $w_2$ are word tokens, $l$ is a syntactic co-occurrence link between the words (e.g., a typed dependency link), and $\delta$ is a weight assigned to the tuple (e.g., LMI or PMI). [^31]: We have also experimented with the TypeDM scores directly and with negative LMI values. We do not report these results as they are significantly lower than the reported results obtained with the other two setups. [^32]: https://github.com/ivendrov/order-embedding [^33]: https://github.com/seomoz/word2gauss [^34]: https://levyomer.wordpress.com/2014/04/25/dependency-based-word-embeddings/ [^35]: http://ttic.uchicago.edu/\~wieting/ [^36]: https://github.com/nmrksic/counter-fitting [^37]: https://github.com/mfaruqui/non-distributional [^38]: Following the same idea, also discussed in [@Lazaridou:2015naacl; @Kiela:2015acl], a concept with a higher word embedding standard deviation or embedding entropy could be considered semantically more general and therefore the hypernym. However, we do not report the scores with word embeddings as they were only slightly better than the random baseline, with a precision of 0.5. [^39]: The need for finding better evaluation protocols for representation learning models is further exemplified by the initiative focused on designing better evaluation protocols for semantic representation models (RepEval):\
`https://sites.google.com/site/repevalacl16/`\
`https://repeval2017.github.io/`
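As a concrete illustration of the feature-combination choices mentioned in footnote 11 (concatenation, difference, element-wise multiplication), used to build inputs for supervised LE classifiers, a minimal NumPy sketch follows; the 4-dimensional vectors are hypothetical, standing in for real concept representations with hundreds of dimensions.

```python
import numpy as np

# Hypothetical feature vectors for concepts X and Y (illustration only).
X = np.array([0.2, 0.5, 0.1, 0.7])
Y = np.array([0.3, 0.4, 0.6, 0.2])

concat = np.concatenate([X, Y])  # X (+) Y: doubles the dimensionality
diff = Y - X                     # Y - X: directional, order-sensitive
mult = X * Y                     # X (.) Y: element-wise, symmetric

print(concat.shape, diff, mult)
```

Note that only the difference is order-sensitive; the symmetry of concatenation-as-a-bag and of element-wise multiplication is one reason such supervised setups can struggle with the asymmetric LE relation.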
--- abstract: | Dynamical symmetry breaking is investigated for a four-fermion Nambu-Jona-Lasinio model in external electromagnetic and gravitational fields. An effective potential is calculated in the leading order of the large-N expansion using the proper-time Schwinger formalism. Phase transitions accompanying a chiral symmetry breaking in the Nambu-Jona-Lasinio model are studied in detail. The magnetic catalysis phenomenon is shown to exist in curved spacetime, but it turns out to lose its universal character because the chiral symmetry is restored above some critical positive value of the spacetime curvature. author: - | E. Elizalde$^{a,b,}$[^1], Yu. I. Shil’nov$^{a,c,}$[^2]\ $^a$[*Consejo Superior de Investigaciones Científicas,*]{}\ [*IEEC, Edifici Nexus-204, Gran Capità 2-4, 08034, Barcelona, Spain*]{}\ $^b$[*Department ECM, Faculty of Physics, University of Barcelona,*]{}\ [*Diagonal 647, 08028, Barcelona, Spain*]{}\ $^c$ [*Department of Theoretical Physics, Faculty of Physics,*]{}\ [*Kharkov State University, Svobody Sq. 4, 310077, Kharkov, Ukraine*]{} title: 'Dynamical symmetry breaking in Nambu-Jona-Lasinio model under the influence of external electromagnetic and gravitational fields' --- Introduction {#introduction .unnumbered} ============ Various four-fermion models [@ESH:NJL], [@ESH:GN] have been considered among the most convenient frameworks for investigating the low-energy physics of strong interactions. The dynamical symmetry breaking (DSB) phenomenon has been proved to take place within those models, particularly the Nambu-Jona-Lasinio (NJL) one, which exhibits a nontrivial phase structure. Usually the symmetry broken by the DSB mechanism is the chiral one. The dynamical version of fermion mass generation and dynamical chiral symmetry breaking has been investigated very carefully, and some fruitful applications to real high-energy physics have been found [@ESH:DSB], [@ESH:PhysRep]. 
However, it has turned out to be very difficult to realize the idea of DSB because all of the calculations must be performed beyond perturbation theory. This forces the study of simplified models, and that is why we have to investigate all possible generalizations within these models (nonzero temperature and chemical potential, arbitrary dimensions, external fields including the gravitational one, and so on) as a kind of laboratory, in order to collect as much new information as we can. Despite the essential difficulties caused by the nonperturbative character of the DSB phenomenon, it has been applied successfully to describe the overcritical behavior of quantum electrodynamics, the top quark condensate mechanism of mass generation in the Weinberg-Salam model of electroweak interactions, technicolor models and, especially, to investigate the composite field generation in the NJL model. In the framework of the Schwinger proper-time method this model has been studied in an external electromagnetic field by many authors over the past 20 years [@ESH:Sch] - [@ESH:emf]. Recently a new series of papers devoted to DSB in an external electromagnetic field has been published [@ESH:emf2], [@ESH:GMS]. They shed new light on the universal character of magnetic catalysis, which means that a magnetic field breaks chiral symmetry for any value of its strength. Furthermore, it has been shown that this phenomenon occurs in quantum electrodynamics and in the 2+1 and 3+1 dimensional nonsupersymmetric and 3+1 dimensional supersymmetric NJL models. So the statement about [**the universal character of magnetic catalysis**]{} has been made. Investigations of the influence of a classical gravitational field on the DSB phenomenon in the NJL model have been carried out for some years [@ESH:ELOS]. It has been shown that curvature-induced phase transitions exist and might play an essential role in more or less realistic early Universe models. 
It turns out that, in spite of the relatively small value of the curvature-dependent corrections at the low energy scale to be investigated within the NJL model, these corrections appear to be inescapable, in the sense that they must be taken into account when one performs the necessary “fine tuning” of the different cosmological parameters. Furthermore, a positive spacetime curvature changes the universal character of magnetic catalysis dramatically. It has been shown that the early Universe could contain a large primordial magnetic field and have a huge electrical conductivity. The vicinities of magnetized black holes and neutron stars are other possible points of application of our model. Therefore both classical external gravitational and electromagnetic fields should be taken into account in the description of a wide range of events in the Universe. In the present paper we describe our recent results concerning DSB under the simultaneous influence of both gravitational and electromagnetic fields in the NJL model [@ESH:OWN]. The dependence of the phase transitions accompanying the DSB process on the spacetime curvature, as well as on the values of the electric or magnetic field strength, is investigated. Dynamical symmetry breaking by a magnetic field in flat spacetime {#dynamical-symmetry-breaking-by-a-magnetic-field-in-flat-spacetime .unnumbered} ================================================================= In an arbitrary dimensional flat spacetime the NJL model has the following action: $$S=\!\int d^d x \left\{ i \overline{\psi}\gamma^\mu D_\mu \psi + {\lambda \over 2N} \left[ (\overline{\psi}\psi)^2+ (\overline{\psi} i \gamma_5 \psi)^2 \right] \right\},$$ where the covariant derivative $D_{\mu}$ includes the electromagnetic potential $A_{\mu}$ and $N$ is the number of bispinor fields $\psi_a$. 
Introducing the auxiliary fields $$\sigma=-{\lambda \over N }(\overline{\psi} \psi ), \pi=-{\lambda\over N}( \overline{\psi} i \gamma_5 \psi)$$ we can rewrite the action as: $$S=\int d^d\! x \left\{ i\overline{\psi}\gamma^\mu D_\mu \psi - {N \over 2\lambda}(\sigma^2+\pi^2)- \overline{\psi}(\sigma+i\pi\gamma_5)\psi \right\}.$$ The effective action in the leading $1/N$ order is $$\frac{1}{N} S_{eff}=-\int d^d\! x {\sigma^2+\pi^2 \over 2\lambda}- i \ln \det\left[ i\gamma^\mu D_\mu-(\sigma+i\gamma_5\pi)\right]$$ Then the effective potential (EP), defined for the constant configurations of $\pi$ and $\sigma$ as $V_{eff} = -S_{eff}/ N\! {\displaystyle \int}\! d^d\!x$, is given by the formula $$V_{eff}={\sigma^2 \over 2\lambda }+i \Sp \ln \langle x| [ i\gamma^\mu D_\mu - \sigma ] |x \rangle$$ Here we put $\pi=0$ because the final expression will depend only on the combination $\sigma^2+ \pi^2$ within our approximation. This means that we are actually considering the Gross-Neveu model. But if we take into account the kinetic terms of the fields $\pi$ and $\sigma$ generated by quantum corrections, we will obtain different dynamics for these two fields. It should be noted that $\sigma$ will be a massive scalar field in the supercritical region while $\pi$ will be a massless Goldstone particle. 
By means of the usual Green function (GF), which obeys the equation $$(i \gamma^\mu D_\mu-\sigma)_x G(x,x',\sigma)=\delta(x-x'),$$ we obtain the following formula $$V_{eff}'(\sigma)={ \sigma \over \lambda }-i \Sp G(x,x,\sigma).$$ Now we can substitute into this equation the fermion GF in a constant magnetic field $$G(x,x',\sigma)=\Phi(x,x')\tilde{G}(x-x',\sigma),$$ where $$\Phi(x,x')= \exp \biggl[ ie\int\limits^x_{x'} A^\mu (x'') dx'' \biggr]$$ $$\begin{aligned} \tilde{G}_0(z,\sigma)=e^{-i{\pi\over 4}d}\int\limits_0^\infty \frac{ds}{(4\pi s)^\frac{d}{2}} e^{-is\sigma^2} \exp\left(-\frac{i}{4s}z_\mu C^{\mu\nu}z_\nu\right)\times \\ \nonumber \biggl(\sigma+\frac{1}{2s}\gamma^\mu C_{\mu\nu}z^{\nu}- \frac{e}{2}\gamma^\mu F_{\mu\nu}z^\nu\biggr) \biggl[\tau \coth \tau-\frac{es}{2}\gamma^\mu\gamma^\nu F_{\mu\nu}\biggr].\end{aligned}$$ Let us describe the 3D case to avoid more complicated expressions. The EP is given by $$V_{eff}(\sigma)={\sigma^2\over 2\lambda}+ {1\over 4\pi^{3/2}} \int\limits_{1/\Lambda^2}^\infty {ds\over s^{5/2}} e^{-s\sigma^2}(eBs) \coth(eBs)$$ The most reliable way to keep track of all the divergences is to introduce a cut-off parameter. We can then use the following trick: write the integral in the EP as $$\int\limits_{1/\Lambda^2}^\infty {ds\over s^{5/2}} e^{-s\sigma^2}\left[ (eBs) \coth(eBs) - 1\right] + \int\limits_{1/\Lambda^2}^\infty {ds\over s^{5/2}} e^{-s\sigma^2}$$ and calculate the second one keeping $\Lambda$ finite, while the first integral is already finite and we can put $1/\Lambda^2=0$. 
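The convergence of the subtracted integral can be checked numerically. The sketch below (plain Python, illustrative values $eB=\sigma=1$ in dimensionless units) uses the substitution $s=t^2$ to smooth the $s^{-1/2}$ behaviour near the origin, and confirms that dropping the lower cut-off $1/\Lambda^2$ in the subtracted piece changes it only by $O(1/\Lambda)$, so all the $\Lambda$ dependence indeed sits in the second, unsubtracted integral.

```python
import math

def subtracted_integrand(s, sigma, eB):
    """s^{-5/2} e^{-s sigma^2} [(eB s) coth(eB s) - 1]; the bracket ~ (eB s)^2/3
    near s = 0, so the integral converges without the cut-off."""
    tau = eB * s
    if tau < 1e-4:
        # series for tau coth(tau) - 1 avoids catastrophic cancellation
        bracket = tau * tau / 3.0 - tau ** 4 / 45.0
    else:
        bracket = tau * math.cosh(tau) / math.sinh(tau) - 1.0
    return s ** -2.5 * math.exp(-s * sigma ** 2) * bracket

def integral(sigma, eB, s_min=0.0, s_max=40.0, n=100000):
    """Composite Simpson rule after the substitution s = t^2, which makes
    the integrand smooth at t = 0 (it tends to 2 (eB)^2 / 3 there)."""
    def g(t):
        if t == 0.0:
            return 2.0 * eB ** 2 / 3.0
        return 2.0 * t * subtracted_integrand(t * t, sigma, eB)
    a, b = math.sqrt(s_min), math.sqrt(s_max)
    h = (b - a) / n
    acc = g(a) + g(b)
    for k in range(1, n):
        acc += (4 if k % 2 else 2) * g(a + k * h)
    return acc * h / 3.0

I_full = integral(1.0, 1.0)             # lower limit 0
I_cut = integral(1.0, 1.0, s_min=1e-6)  # lower limit 1/Lambda^2 with Lambda = 1000
print(I_full, I_full - I_cut)           # difference ~ (2/3) / Lambda for eB = 1
```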
Then it appears to be possible to calculate it as a limit $\mu \to -1/2$ using the formula $$\int\limits_0^\infty dx x^{\mu - 1}e^{-ax}\coth (cx) = \Gamma (\mu) \left[ 2^{1 - \mu} (c)^{-\mu}\zeta(\mu , \frac{a}{2c})-a^{-\mu}\right].$$ Finally the EP has the form $$V_{eff}(\sigma)={\sigma^2\over 2\lambda}- \left[ \frac{\Lambda \sigma^2}{2\pi^{3/2}}+ \frac{\sqrt{2}}{\pi} (eB)^{3/2} \zeta \left( -\frac{1}{2}, 1+ \frac{\sigma^2}{2eB} \right)+ \frac{1}{2\pi}eB \sigma \right]$$ There are two ways of justifying the introduction of the $\Lambda$ parameter in the formula for the EP. The first one is the standard renormalization procedure, by means of the UV cut-off method. Then, in the limit $\Lambda\to\infty$, after renormalization of the coupling constant $$\frac{1}{\lambda_R}=\frac{1}{\lambda}-\frac{\Lambda}{\pi^{3/2}},$$ we have the expression for the renormalized EP in 3D spacetime $$V_{eff}^{ren}(\sigma)={\sigma^2\over 2\lambda_R}- \frac{\sqrt{2}}{\pi} (eB)^{3/2} \zeta \left( -\frac{1}{2}, 1+ \frac{\sigma^2}{2eB} \right) - \frac{1}{2\pi}eB \sigma$$ For $B=0$, dynamical symmetry breaking takes place when $$\lambda > \lambda_c=\frac{\pi^{3/2}}{\Lambda}$$ provided we keep the cut-off $\Lambda$ finite, whereas the renormalized NJL model does not admit this phenomenon in general. However, any finite value of the external magnetic field changes the situation dramatically, and dynamical symmetry breaking occurs for any coupling constant. For $\sigma^2\ll eB$ the nontrivial solution of the gap equation, defining a nontrivial minimum of the EP, is given by $$\sigma=\frac{eB\lambda_R}{2\pi}$$ The same calculations have been done for a constant electric field. A nonzero imaginary part appears in this case, caused by the vacuum instability of quantum field theory in an electric field. But treating the real part of the EP, we find that the electric field restores the chiral symmetry, which was initially broken in the finite cut-off case. 
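The $\Gamma$-$\zeta$ integral formula above can be spot-checked numerically at a point where the integral converges and the Hurwitz zeta value is elementary: for $\mu=2$, $a=c=1$ one has $\zeta(2,1/2)=\pi^2/2$, so the right-hand side equals $\pi^2/4-1$. A minimal stdlib-only sketch:

```python
import math

def lhs(a, c, x_max=60.0, n=200000):
    """Numerically integrate x e^{-a x} coth(c x) over [0, x_max] by the
    composite Simpson rule (the mu = 2 case of the integral formula)."""
    def f(x):
        if x == 0.0:
            return 1.0 / c  # limit of x coth(c x) as x -> 0
        return x * math.exp(-a * x) * math.cosh(c * x) / math.sinh(c * x)
    h = x_max / n
    acc = f(0.0) + f(x_max)
    for k in range(1, n):
        acc += (4 if k % 2 else 2) * f(k * h)
    return acc * h / 3.0

# Right-hand side for mu = 2, a = c = 1:
# Gamma(2) [2^{-1} zeta(2, 1/2) - 1] = pi^2/4 - 1, since zeta(2, 1/2) = pi^2/2.
rhs = math.pi ** 2 / 4 - 1
print(lhs(1.0, 1.0), rhs)
```

The paper's use of the formula at $\mu=-1/2$ relies on the analytic continuation of both $\Gamma$ and the Hurwitz $\zeta$; the numerical check above only probes the convergent regime $\mu>1$.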
Fig. 1 illustrates the universal character of magnetic catalysis. It is a plot of the 3D $V_{eff}^{ren}(\sigma)$ with $\mu=100;$ $\lambda\mu=100$. Starting from above, the curves correspond to the following electromagnetic field configurations: $eE/\mu^2 = 0.0002, B=0$; $B=E=0$; $eB/\mu^2=0.0002, E=0$. After renormalization the chiral symmetry exists without an external field, but the magnetic field creates a non-zero minimum, which indicates that DSB takes place. Meanwhile the external electric field evidently works against symmetry breaking. In all figures, an arbitrary dimensional parameter, $\mu$, defining a typical scale in the model, is introduced in order to draw the plots in terms of dimensionless variables. FIGURE 1 General expression for the effective potential in external electromagnetic and gravitational fields {#general-expression-for-effective-potential-in-external-electromagnetic-and-gravitational-fields .unnumbered} =============================================================================================== We have the same expression for the EP in curved spacetime: $$V_{eff}'(\sigma)={ \sigma \over \lambda }-i \Sp G(x,x,\sigma)$$ To calculate the linear curvature corrections, the local momentum expansion formalism is the most convenient one. Then, in the special Riemannian normal coordinate framework, $$g_{\mu\nu}(x)=\eta_{\mu\nu}-{1\over 3 } R_{\mu\rho\sigma\nu}y^\rho y^\sigma$$ with corresponding formulae for the other quantities, and $y=x-x'$. Then, choosing the vector potential of the external electromagnetic field in the form $$A_\mu(x)=-{1 \over 2 } F_{\mu\nu} x^\nu ,$$ where $F_{\mu\nu}$ is the constant electromagnetic field strength tensor, we find that: $$G(x, x', \sigma)= \Phi (x, x')\left[ \tilde {G}_0 (x - x', \sigma)+ \tilde {G}_1 (x - x', \sigma)+ \dots \right],$$ where $G_n \sim R^n $. 
Therefore we obtain an iterative sequence of equations for the GF, and the linear-curvature corrections are given by $$\begin{aligned} \tilde{G}_1(x-x',\sigma)=\int\! dx''G_{00}(x-x'',\sigma)\times\\ \left[ -{i \over 6} \gamma^a R^\mu{}_{\!\rho\sigma a}(x''-x')^\rho (x''-x')^\sigma\partial_\mu^{x''} \tilde{G}_0(x''-x',\sigma)-\right.\\ \nonumber \left.{i \over 4}\gamma^a \sigma^{bc}R_{bca\lambda}(x''-x')^\lambda\right] \tilde{G}_0(x''-x',\sigma) \nonumber\end{aligned}$$ Here $G_{00}(x-x',\sigma)$ is the free fermion GF. Substituting the exact flat-spacetime fermion GF in an external electromagnetic field into this formula, after some algebra we obtain an explicit expression for the EP to linear-curvature accuracy in a constant-curvature spacetime. External constant magnetic field case {#external-constant-magnetic-field-case .unnumbered} ------------------------------------- For 3D spacetime the EP is given by $$\begin{aligned} V_{eff}(\sigma)={\sigma^2\over 2\lambda}+{1\over 4\pi^{3/2}} \int_{1/\Lambda^2}^\infty {ds\over s^{5/2}}\exp(-s\sigma^2)\tau \coth\tau-\\ {R\over 144\pi^{3/2}}\int_{1/\Lambda^2}^\infty \int_{1/\Lambda^2}^\infty \frac{ds dt}{(t+s)^{5/2}(1+\kappa \coth\tau)^2}\exp[-(t+s)\sigma^2]\times\\ \nonumber \biggl[2\kappa(\kappa+\tau)+ (9\tau+5\kappa)\coth\tau+\kappa(\tau-3\kappa)\coth^2\tau\biggr]\end{aligned}$$ where $\tau=eBs, \kappa= eBt$. We can perform the same renormalization procedure as in flat spacetime because no new divergences appear in the linear-curvature corrections. However, we keep the cut-off scheme here in order to study the most general situation. The results are presented in Fig. 2. It shows a plot of the 3D $V_{eff}^{ren}(\sigma)$ with $\mu=100; \lambda\mu=100$ and fixed $eB/\mu^2= 0.0002$. Starting from above, the curves correspond to different values of the spacetime curvature $R/\mu^2= 0.0025, 0.002, 0.001, 0$. A second-order phase transition governed by the spacetime curvature takes place. 
FIGURE 2 External constant electric field case {#external-constant-electrical-field-case .unnumbered} --------------------------------------- For the renormalized EP we have the following expression $$\begin{aligned} V_{eff}^{ren}(\sigma)={\sigma^2\over 2\lambda_R}- {(2ieE)^{3/2} \over 4\pi} \biggl[ 2 \zeta (-{1\over2},{\sigma^2 \over 2ieE})- \biggl({\sigma^2\over 2ieE}\biggr)^{1/2} \biggr]+\\ {R\sigma \over 24\pi}+{iR(eE)^{1/6} \over 2\pi^2 3^{7/3}} \exp(-\pi{\sigma^2 \over eE})\Gamma(\frac{2}{3})\sigma^{2/3}. \nonumber\end{aligned}$$ Here we have performed a small electric field expansion in the R-dependent term. A numerical analysis of $\Re V_{eff}(\sigma)$ for a negative coupling constant gives the typical behaviour of a first-order phase transition, as shown in Fig. 3. The critical values are defined as usual: $R_{c1}$ corresponds to the spacetime curvature for which a local nonzero minimum appears; $R_{c}$, to the value at which the real part of the EP takes equal values at zero and at the local minimum; and $R_{c2}$, to the value at which the zero extremum becomes a maximum. Fig. 3 shows a plot of the 3D $\Re V_{eff}^{ren}/\mu^3$ as a function of $\sigma/\mu$ for fixed $eE/\mu^2= 0.00005$ and $ \lambda\mu= -100$. From above to below, the curves in the plot correspond to the following values of $R/\mu^2=0.006; 0.005; 0.004; 0.0032; 0$, respectively. The critical values are given by: $R_{c1}/\mu^2=0.005$; $R_{c}/\mu^2=0.0032$; $R_{c2}/\mu^2=0$. $\Lambda$ obviously does not appear anywhere because after renormalization it must be sent to infinity, $\Lambda\to\infty$. FIGURE 3 Conclusions {#conclusions .unnumbered} =========== We clearly observe that a positive spacetime curvature tends to restore chiral symmetry even in the presence of an external magnetic field. Therefore the universal character of magnetic catalysis does not survive in curved spacetime. On the other hand, the electric field increases the critical value of the coupling constant, as it does in flat spacetime. 
It should be noted that for $D<4$ the model is renormalizable, and these conclusions no longer depend on the cut-off scale $\Lambda$. This work has been partly financed by DGICYT (Spain), project PB96-0095, and by CIRIT (Generalitat de Catalunya), grant 1995SGR-00602. The work of Yu.I.Sh. was supported in part by Ministerio de Educación y Cultura (Spain), grant SB96-AN4620572. Nambu, Y., Jona-Lasinio G., [*Phys. Rev.*]{} [**122**]{}, 345 (1961). Gross, D., Neveu, A., [*Phys. Rev.*]{} [**D10**]{}, 3235 (1974). Fahri, E., Jackiw, R., Eds., [*Dynamical Symmetry Breaking*]{} (World Scientific, Singapore, 1981); Muta, T., Yamawaki K., Eds., [*Proceedings of the Workshop on Dynamical Symmetry Breaking*]{} (Nagoya, 1990); Bardeen, W. A., Kodaira, J., Muta, T., Eds., [*Proceedings of the International Workshop on Electroweak Symmetry Breaking*]{} (World Scientific, Singapore, 1991) Bando, M., Kugo, T., Yamawaki, K., [*Phys. Rep.*]{} [**164**]{}, 217 (1988); Rosenstein, B., Warr, B. J., Park, S. H., [*ibid.*]{} [**205**]{}, 59 (1991); Hatsuda, T., Kinuhiro, T., [*ibid.*]{} [**247**]{}, 221 (1994); Bijnens, J., [*ibid.*]{} [**265**]{}, 369 (1996). Schwinger, J., [*Phys. Rev.*]{} [**82**]{}, 664 (1951). Harrington, B. J., Park, S. Y., Yildiz, A., [*Phys. Rev.*]{} [**D11**]{}, 1472 (1975); Stone, M. [*ibid.*]{} [**D14**]{}, 3568 (1976); Kawati, S., Konisi, G., Miyata, H., [*ibid.*]{} [**D28**]{} 1537 (1983). Klevansky, S. P., Lemmer, R. H., [*Phys. Rev.*]{} [**D39**]{}, 3478 (1989); Klevansky, S., [*Rev. Mod. Phys.*]{} [**64**]{}, 649 (1992); Klimenko, K. G., [*Theor. Math. Phys.*]{} [**89**]{}, 211, 388 (1991); [*Z. Phys.*]{} [**C54**]{}, 323 (1992); Krive I. V., Naftulin, S., [*Phys. Rev.*]{} [**D46**]{}, 2337 (1992); Suganuma, H., Tatsumi, T., [*Ann. Phys. (NY)*]{} [**208**]{}, 470 (1991); [*Progr. Theor. Phys.*]{} [**90**]{}, 379 (1993). Cangemi, D., Dunne, G., D’Hoker, E., [*Phys. Rev.*]{} [**D51**]{}, R2513 (1995); [**D52**]{}, R3163 (1995); Leung, C. N., Ng, Y. 
J., Ackly, A. W., [*ibid.*]{} [**D54**]{}, 4181 (1996); Lee, D.-S., Leung, C. N., Ng, Y. J., [*ibid.*]{} [**D55**]{}, 6504 (1997); Ishi-i, M., Kashiwa, T., Tanemura, N., [*Nambu-Jona-Lasinio model coupled to constant electromagnetic fields in D-dimension*]{}, KYUSHU-HET-40, hep-th/9707248; Shushpanov, I. A., Smilga, A. V., [*Phys. Lett*]{} [**B402**]{}, 351 (1997); Ebert, D., Zhukovsky, V. Ch., [*Mod. Phys. Lett.*]{} [**A12**]{}, 2567 (1997); Hong, D. K., [*Phys. Rev.*]{} [**D57**]{}, 3759 (1998); Kanemura, S., Sato H.-T., Tochimura, H., [*Nucl. Phys.*]{} [**B517**]{}, 567 (1998). Gusynin, V., Miransky, V., Shovkovy, I., [*Phys. Rev.*]{} [**D52**]{}, 4718 (1995); Gusynin, V., Miransky, V., Shovkovy, I., [*Phys. Lett.*]{} [**B349**]{}, 477 (1995); Gusynin, V., Miransky, V., Shovkovy, I., [*Nucl. Phys.*]{} [**B462**]{}, 249 (1996); Elias, V., McKeon, D. G. C., Miransky, V., Shovkovy, I., [*Phys. Rev.*]{} [**D54**]{}, 7884 (1996); Babansky, A. Yu., Gorbar, E. V., Shchepanyuk, G. V., [*Phys. Lett.*]{} [**B419**]{}, 272 (1998); Miransky, V. A. [*Magnetic catalysis of dynamical symmetry breaking and Aharonov-Bohm effect*]{}, hep-th/9805159. Muta, T., Odintsov, S. D., [*Mod. Phys. Lett.*]{} [**A6**]{}, 3641 (1991); Hill, C. T. , Salopek, D. S., [*Ann. Phys. (NY)*]{}, [**213**]{}, 21 (1992); Inagaki, T., Muta, T., Odintsov, S. D., [*Mod. Phys. Lett.*]{} [**A8**]{}, 2117 (1993); Elizalde, E., Odintsov, S. D., Shil’nov, Yu. I., [*ibid.*]{} [**A9**]{}, 913 (1994); Inagaki, T., Mukaigawa, S., Muta, T., [*Phys. Rev.*]{} [**D52**]{}, R4267 (1996); Elizalde, E., Leseduarte, S., Odintsov, S. D., Shil’nov, Yu. I., [*ibid.*]{} [**D53**]{}, 1917 (1996); Kanemura, S., Sato, H.-T., [*Mod. Phys. Lett.*]{} [**A24**]{},1777 (1995); Miele, G., Vitale, P., [*Nucl. Phys.*]{} [**B494**]{}, 365 (1997). Gitman, D. M., Odintsov, S. D., Shil’nov, Yu. I., [*Phys. Rev.*]{} [**D54**]{}, 2968 (1996); Geyer, B., Granda, L. N., Odintsov, S. D., [*Mod. Phys. 
Lett.*]{} [**A11**]{}, 2053 (1996); Elizalde, E., Odintsov, S. D., Romeo, A., [*Phys. Rev.*]{} [**D54**]{}, 4152 (1996); Inagaki, T., Odintsov, S. D., Shil’nov, Yu. I., [*Dynamical symmetry breaking in the external gravitational and constant magnetic fields*]{} KOBE-TH-97-02, hep-th/9709077; Elizalde, E., Shil’nov, Yu. I. , Chitov, V. V., [*Class. Quant. Grav.*]{} [**15**]{}, 735 (1998). [^1]: E-mail: eli@zeta.ecm.ub.es, elizalde@io.ieec.fcr.es [^2]: E-mail: shil@kink.univer.kharkov.ua, visit2@ieec.fcr.es
--- abstract: '$K3$ surfaces with non-symplectic symmetry of order $3$ are classified by open sets of twenty-four complex ball quotients associated to Eisenstein lattices. We show that twenty-two of those moduli spaces are rational.' address: - 'Graduate School of Mathematics, Nagoya University, Nagoya 464-8602, Japan' - 'Department of Mathematics, Faculty of Science and Technology, Tokyo University of Science, 2641 Yamazaki, Noda, Chiba, 278-8510, Japan' - 'School of Information Environment, Tokyo Denki University, 2-1200 Muzai Gakuendai, Inzai-shi, Chiba 270-1382, Japan' author: - Shouhei Ma - Hisanori Ohashi - Shingo Taki title: Rationality of the moduli spaces of Eisenstein $K3$ surfaces --- [^1] [^2] Introduction ============ The study of $K3$ surfaces with non-symplectic symmetry has arisen as an application of the Torelli theorem, and by now, it has been recognized as closely related to classical geometry and special arithmetic quotients. They were first studied systematically in the involution case by Nikulin [@Ni], who classified their topological types using the lattices of $2$-cycles (anti-)invariant under the involutions. The anti-invariant lattices also provide the period domains, which are Hermitian symmetric of type IV and whose quotients by the orthogonal groups give the moduli spaces. What comes next to the involution case is the case of automorphisms of order $3$. Kondō, Dolgachev, and van Geemen [@D-G-K], [@Ko] studied two moduli spaces of $K3$ surfaces with such symmetry, in connection with genus $4$ curves and cubic surfaces. Subsequently, Artebani-Sarti [@A-S] and the third-named author [@Ta] gave a topological classification of such automorphisms. Let $X$ be a $K3$ surface with a non-symplectic symmetry $G\subset{\operatorname{Aut}}(X)$, $G\simeq{{\mathbb{Z}}}/3{{\mathbb{Z}}}$. Let $L(X, G)\subset H^2(X, {{\mathbb{Z}}})$ be the lattice of $G$-invariant cycles, and $E(X, G)\subset H^2(X, {{\mathbb{Z}}})$ be its orthogonal complement. 
These lattices have 3-elementary discriminant groups, analogous to the 2-elementary property in the involution case. What is more crucial is that $E(X, G)$ is endowed with the structure of an Eisenstein lattice, namely a Hermitian form over the ring of Eisenstein integers. Then a result of [@A-S] and [@Ta] says that the topological types of such pairs $(X, G)$ are, by associating $E(X, G)$, in one-to-one correspondence with certain Eisenstein lattices embeddable in the $K3$ lattice. In view of this, we shall call such a pair $(X, G)$ an *Eisenstein $K3$ surface*. According to [@A-S], [@Ta], $E(X, G)$ is in turn encoded in the pair $(r, a)$ where $r$ is the rank of $L(X, G)$ and $a$ is the length of its discriminant group, and there are exactly twenty-four such $(r, a)$. For each $(r, a)$, the period domain for Eisenstein $K3$ surfaces $(X, G)$ of that type is the complex ball associated to $E(X, G)$. One obtains the moduli space ${{\mathcal{M}_{r,a}}}$ of those Eisenstein $K3$ surfaces as the quotient of the ball by the unitary group of $E(X, G)$, with a Heegner divisor removed. This story is similar to the involution case, but note that the types of period domains are different. In this article we study the birational types of ${{\mathcal{M}_{r,a}}}$. The spaces $\mathcal{M}_{2,2}$ and $\mathcal{M}_{12,5}$, studied in [@Ko] and [@A-C-T], [@D-G-K] respectively, have been known to be rational by the corresponding results for the moduli of genus $4$ curves ([@SB]) and of cubic surfaces (classical). We show that this property actually holds for most ${{\mathcal{M}_{r,a}}}$. \[main\] The moduli space ${{\mathcal{M}_{r,a}}}$ of Eisenstein $K3$ surfaces of type $(r, a)$ is rational, possibly except for $(r, a)=(8, 7)$ and $(10, 6)$. A similar rationality result is known in the involution case ([@Ko0], [@Ma2], [@D-K]). It is natural to expect analogous results for other non-symplectic symmetry, and the present article goes into the Eisenstein case. 
In fact, it appears that automorphisms of order $2$ and $3$ cover a wide range of non-symplectic automorphisms: as the order grows, there seem to be only a small number of moduli spaces, of rather small dimension (though the classification is not yet completed). We will prove Theorem \[main\] case-by-case. A basic strategy is to first find a *canonical* triple cover construction of general members of ${{\mathcal{M}_{r,a}}}$ using $-\frac{3}{2}K_Y$-curves on a Hirzebruch surface $Y={{\mathbb{F}}}_{2N}$, $0\leq N\leq3$. More precisely, we consider an explicit locus $U\subset|-\frac{3}{2}K_Y|$ parametrizing curves with prescribed types of singularities and irreducible decomposition. We obtain a period map $$\mathcal{P} : U/{\operatorname{Aut}}({{\mathbb{F}}}_{2N}) \dashrightarrow {{\mathcal{M}_{r,a}}}$$ by taking the resolutions of cyclic triple covers of ${{\mathbb{F}}}_{2N}$ branched over $B\in U$. We can calculate the degree of such maps $\mathcal{P}$ in a systematic manner (see §\[ssec: recipe\]). Once we find such $U$ with $\deg(\mathcal{P})=1$, the problem is reduced to the rationality of $U/{\operatorname{Aut}}({{\mathbb{F}}}_{2N})$, which we prove by studying the ${\operatorname{Aut}}({{\mathbb{F}}}_{2N})$-action. This strategy is analogous to the one in the involution case [@Ma2], but hidden behind the similarity are some subtle features of the present case. The first is the existence of isolated fixed points of $(X, G)=\mathcal{P}(B)$, which appear over the singular points of $B$. By the above construction, we keep away from such fixed points, in a sense. Secondly, asking the triple cover to have canonical singularities is a strong demand, so that the singularities of $B$ are quite limited (at worst ramphoid cusps). Finally, smooth rational surfaces $Y$ with $3K_Y\in2{\rm Pic}(Y)$ are rare: they are exactly the ${{\mathbb{F}}}_{2N}$.
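A quick sanity check of the last claim, restricted to Hirzebruch surfaces (a sketch; it uses only the standard basis of ${\rm Pic}({{\mathbb{F}}}_n)$ recalled in §\[ssec: Hirze\]):

```latex
% Pic(F_n) is freely generated by a section H_0 with (H_0, H_0) = n
% and a fiber F, and K_{F_n} = -2H_0 + (n-2)F.  Hence
% 3K_{F_n} = -6H_0 + 3(n-2)F lies in 2 Pic(F_n) iff 3(n-2) is even:
3K_{{\mathbb{F}}_n}\in 2\,{\rm Pic}({\mathbb{F}}_n)
\iff 2\mid 3(n-2) \iff 2\mid n .
```

In particular, for $N=0$ the members of $|-\frac{3}{2}K_Y|$ on $Y={{\mathbb{F}}}_0={{{\mathbb P}}}^1\times{{{\mathbb P}}}^1$ are exactly the curves of bidegree $(3,3)$.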
The above easy construction offers period maps of degree $1$ for as many as seventeen ${{\mathcal{M}_{r,a}}}$, but does not cover all cases. To analyze the remaining five ($\mathcal{M}_{4,3}$, $\mathcal{M}_{6,4}$, $\mathcal{M}_{8,5}$, $\mathcal{M}_{10,4}$ and $\mathcal{M}_{12,3}$), we develop a theory of branch *curves* that deals with isolated fixed points more substantially. This is the notion of *mixed branch*. It contains, and is more flexible than, the notion of $-\frac{3}{2}K_Y$-curves, and using it we can work with fixed curves and isolated fixed points quite satisfactorily. Those five ${{\mathcal{M}_{r,a}}}$ are provided with birational period maps using mixed branches. The rationality problem remains open for $\mathcal{M}_{8,7}$ and $\mathcal{M}_{10,6}$. They are unirational by the constructions in [@A-S], [@A-S-T]. Unfortunately, for those two we could not find a comparably canonical and effective construction, which prevented us from approaching them. The rest of the article is organized as follows. §\[sec: preliminary\] contains the preliminaries on Eisenstein lattices and automorphisms of ${{\mathbb{F}}}_{n}$. In §\[sec: EK3\] we recall/reformulate basic results on Eisenstein $K3$ surfaces. We introduce mixed branches in §\[ssec:mixed branch\], and then study $-\frac{3}{2}K_{{{\mathbb{F}}}_{2N}}$-curves in §\[ssec:pure branch\]. The method of degree calculation is explained in §\[ssec: recipe\]. After these preliminaries, the proof of Theorem \[main\] begins in §\[sec:g=5\]. We proceed according to the maximal genus $g$ of fixed curves: the cases with genus $g$ are treated in §$10-g$. We adopt this division policy because it exhibits the degeneration relations among the moduli spaces with a common $g$. Throughout this article we shall denote by $A_n$, $D_m$, $E_l$ the *negative*-definite root lattices of type $A_n$, $D_m$, $E_l$ respectively. We denote by $U$ the even indefinite unimodular lattice of rank $2$. **Acknowledgement.** H. O.
is grateful to Professor Kondō for his encouragement. Preliminaries {#sec: preliminary} ============= In this section we prepare some results on Eisenstein lattices (§\[ssec: E lattice\]) and automorphisms of Hirzebruch surfaces (§\[ssec: Hirze\]). They are a technical basis for the rest of the article. The reader may skip this section for the moment and return when necessary. Eisenstein lattices {#ssec: E lattice} ------------------- Let $E$ be an [*[even lattice]{}*]{}, namely a free $\mathbb{Z}$-module endowed with a nondegenerate integral symmetric bilinear form $(\ ,\ )$ such that $(l,l)\in 2\mathbb{Z}$ for every $l\in E$. A structure of [*[Eisenstein lattice]{}*]{} on $E$ is a self-isometry $\rho$ of $E$ of order $3$ such that $\rho (l) \neq l$ for any $0\neq l\in E$. Equivalently, a self-isometry $\rho$ gives an Eisenstein structure if it satisfies $\rho^2+\rho+\mathrm{id}=0$. In this subsection we study some properties of such a pair $(E, \rho)$. First we justify the naming “Eisenstein lattice”. Let $R={{\mathbb{Z}}}[\zeta]$, $\zeta=e^{2\pi i/3}$, be the ring of Eisenstein integers. For an Eisenstein lattice $(E,\rho )$ as above, the ${{\mathbb{Z}}}$-module $E$ is naturally equipped with an $R$-module structure by $\zeta \cdot l = \rho (l)$. Then we have an $R$-valued Hermitian form on $E$ by $$\label{eqn:Eisen lattice I} (l,l')_{\mathcal{E}} := (l,l')+\zeta (l,\rho (l')) + \zeta^2 (l,\rho^2 (l')) \in R.$$ If we decompose $E\otimes_{{\mathbb{Z}}}{{\mathbb{C}}}=V\oplus\overline{V}$ by the $\rho$-action where $\rho|_V=\zeta$, and consider the projection $\pi: E\to V$, then we have $(l,l')_{\mathcal{E}}=3(\pi(l), \overline{\pi(l')})$.
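To illustrate the construction, here is the computation for the root lattice $A_2$, anticipating Example \[eee\] below; the particular choice of $\rho$ is ours, one of the two order-$3$ isometries.

```latex
% A_2 with Gram matrix ((-2,1),(1,-2)) in a root basis e_1, e_2,
% and rho(e_1) = e_2, rho(e_2) = -e_1 - e_2, so that rho^2 + rho + id = 0.
% Then (e_1, rho(e_1)) = 1 and (e_1, rho^2(e_1)) = (e_1, -e_1-e_2) = 1, hence
(e_1,e_1)_{\mathcal{E}} \;=\; -2+\zeta\cdot 1+\zeta^{2}\cdot 1
 \;=\; -2+(\zeta+\zeta^{2}) \;=\; -3 .
```

Since $e_1$ generates $A_2$ as an $R$-module, the associated Hermitian form of rank $1$ is $\langle-3\rangle$; note also that $(e_1,e_1)_{\mathcal{E}}=-3\in3{{\mathbb{Z}}}$, in accordance with the evenness criterion below.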
Conversely, if $E$ is a free $R$-module equipped with a Hermitian form $(\ ,\ )_{\mathcal{E}}$, the symmetric bilinear form $$\label{eqn:Eisen lattice II} (l,l') := \frac{2}{3} \Re e ((l,l')_{\mathcal{E}})$$ defines a (in general non-integral) lattice structure on the $\mathbb{Z}$-module $E$ which naturally has an Eisenstein structure $\rho$ defined by the action of $\zeta$. One checks that these two constructions are converse to each other. The bilinear form $(\ ,\ )$ is even if and only if the Hermitian form $(\ ,\ )_{\mathcal{E}}$ satisfies $$\label{eqn:Eisen lattice III} (l, l)_{\mathcal{E}}\in3{{\mathbb{Z}}}$$ for all $l\in E$. Thus Eisenstein lattices in our sense naturally correspond to Hermitian lattices over $R$ with this property. In this article we will work rather in the category of quadratic forms. Note that the signature of $E$ as a quadratic form is twice that of $E$ as a Hermitian form. We denote by $E^{\vee}={{\operatorname{Hom}}}_{{\mathbb{Z}}}(E, {{\mathbb{Z}}})$ the dual quadratic form (inside $E\otimes_{{\mathbb{Z}}}{{\mathbb{Q}}}$), and $E^{\ast}={{\operatorname{Hom}}}_R(E, R)$ the dual Hermitian form (inside $E\otimes_R{{\mathbb{Q}}}(\zeta)$). If we identify $E\otimes_{{\mathbb{Z}}}{{\mathbb{Q}}}$ with $E\otimes_R{{\mathbb{Q}}}(\zeta)$ naturally, then $E^{\vee}$ is equal to $\sqrt{-3}E^{\ast}$. Let $A_E=E^{\vee}/E$ be the discriminant group of $E$, which is endowed with the discriminant form $q_A: A_E\to{{\mathbb{Q}}}/2{{\mathbb{Z}}}$. \[eee\] (1) A fundamental example is the root lattice $E=A_2$. Up to taking squares, it has a unique isometry $\rho$ of order $3$ which gives $E$ the structure of an Eisenstein lattice. The corresponding Hermitian form of rank $1$ is $\langle -3\rangle$.\ (2) Since we have an isometry $A_2\oplus A_2(-1)\simeq U \oplus U(3)$ of quadratic forms, by (1) the lattice $E=U\oplus U(3)$ has the structure of an Eisenstein lattice which corresponds to the Hermitian form $\langle3, -3\rangle$.
Moreover, since $\rho$ acts trivially on the discriminant group $A_E$, it preserves the overlattices of $E$ which are isomorphic to $U\oplus U$. Hence we also obtain an Eisenstein structure on $U\oplus U$.\ (3) Since the root lattices $E_6$ and $E_8$ both can be obtained as overlattices of some direct sum of $A_2$, by the same reasoning as (2), these have the structure of an Eisenstein lattice, too. We shall fix the above Eisenstein structures on $U\oplus U$, $U\oplus U(3)$, $E_6$ and $E_8$. The unitary group $\mathrm{U} (E)$ of an Eisenstein lattice $(E, \rho )$ is naturally embedded in the orthogonal group ${\rm O}(E)$ by $$\mathrm{U}(E)= \{\gamma \in \mathrm{O} (E) \mid \gamma \circ \rho = \rho \circ \gamma\}.$$ In particular, we have a natural homomorphism ${\rm U}(E)\to{\rm O}(A_E)$ to the orthogonal group of the discriminant form. We prove that it is surjective for some special Eisenstein lattices, as an analogue of the surjectivity property of [@Ni0] for orthogonal groups. \[surj I\] (1) Let $E$ be the indefinite Eisenstein lattice $A_2(-1)\oplus A_2^n$. Then the homomorphism $\mathrm{U} (E)\rightarrow {\rm O}(A_E)$ is surjective.\ (2) Let $E$ be the definite Eisenstein lattice $A_2^n$ with $n\leq3$. Then the homomorphism $\mathrm{U} (E)\rightarrow {\rm O}(A_E)$ is surjective. The groups ${\rm O}(A_E)$ are in fact full orthogonal groups in characteristic $3$. Our proof relies on the fundamental fact that they are generated by reflections in non-isotropic vectors (see, e.g., [@Ki] Chapter 1.2). Let $L$ be the odd unimodular lattice $\langle1\rangle^n\oplus\langle-1\rangle$ (resp. $\langle1\rangle^n$) in the case (1) (resp. (2)). Then $E$ can be identified with the tensor product $L\otimes A_2$, including the correspondence of Gram matrices. Using this tensor notation, the Eisenstein structure of $E$ has the form $\mathrm{id}_L \otimes \rho$, where $\rho$ is from Example \[eee\] (1). 
Now for $g\in \mathrm{O} (L)$, we can define an element of ${\rm U}(E)$ by $\alpha (g) = g\otimes \mathrm{id}_{A_2}$. This defines an injective homomorphism $\alpha \colon \mathrm{O} (L)\rightarrow \mathrm{U}(E)$. Consider the composite of $\alpha$ and $\mathrm{U} (E)\rightarrow {\rm O}(A_E)$. By taking a natural basis of $A_E=A_{L\otimes A_2}$, it is identified with the reduction map $\beta\colon {\rm O}(L)\to{\rm O}(L/3L)$, where $L/3L$ is naturally equipped with a quadratic form over ${{\mathbb{Z}}}/3{{\mathbb{Z}}}$. To prove the proposition, it now suffices to show that the reduction map $\beta$ is surjective. Let $(\ ,\ )$ be the bilinear form on $L$. Then the bilinear form on $L/3L$ is just given by $(\ ,\ ) \mod 3$, hence we use the same notation $(\ ,\ )$ for them. Since ${\rm O}(L/3L)$ is an orthogonal group in odd characteristic, it is generated by reflections $r_a$ for non-isotropic elements $a\in L/3L$, where $$\label{reflection} r_a \colon x\mapsto x-\frac{2(x,a)}{(a, a)}a.$$ If $l\in L$ satisfies $(l, l)\in \{\pm 1, \pm 2\}$, then the reflection $r_l$ defined by the same formula gives an element of ${\rm O}(L)$, and its image in ${\rm O}(L/3L)$ is the reflection in $[l]\in L/3L$. Thus our surjectivity assertion is reduced to the “liftability of reflection vectors”, that is, the following problem: for any non-isotropic element $a \in L/3L$, find a lift $l\in L$ of $a$ (or $2a$, since they define the same reflections) such that $(l, l)\in \{\pm 1, \pm 2\}$. This purely arithmetic step is realized in the next lemma. Let $L$ be the odd unimodular lattice of signature $(n,1)$ for (1), or of signature $(n,0)$ for (2) respectively. In case (2) suppose $n\leq 3$. Then for any non-isotropic element $a\in L/3L$, there exists a lift $l\in L$ of $a$ or $2a$ such that $(l, l)\in \{\pm 1, \pm 2\}$. Case (2) is easily done by hand, so we prove only (1).
We take the coordinates for $L$ so that the quadratic form on $L$ is given by $$q (x_0, \cdots, x_n)=-x_0^2+x_1^2+\cdots +x_n^2.$$ Let $(y_0,\cdots, y_n)\in L/3L$ be a given non-isotropic element. We have to show the existence of $l=(x_0, \cdots, x_n)\in L$ such that $(l, l)\in \{\pm 1, \pm 2\}$ and $x_i \mod 3$ is equal to the given $y_i$. This is purely an arithmetic problem. One solution is given as follows. First we ignore the zero coordinates $y_i\equiv 0$ $(i>0)$ by using $x_i=0$. Moreover for $y_i\equiv 1$ or $\equiv 2$, we can use $x_i=1, -2$ or $x_i=-1, 2$ respectively, so that $x_i^2$ takes the value $1$ or $4$ in either case. These two steps reduce the equation to $$-x_0^2+(1+1+\cdots +4+4+\cdots)\in \{\pm 1, \pm 2\} \quad \text{(exactly $n$ terms in the parentheses)}.$$ When $y_0\equiv 0$, take the positive integer $s$ such that $$3s^2-6s+4 \leq [n/3] < 3(s+1)^2-6(s+1)+4.$$ (If $[n/3]=0$ then we take $s=0$.) Then putting $x_0=3s$ gives one solution to the above equation $$-(3s)^2+1\cdot ([n/3]+n-3s^2)+4\cdot (3s^2-[n/3])=1\text{ or }2.$$ (We can see that $[n/3]+n-3s^2\geq 4(3(s-1)^2+1)-3s^2=(3s-4)^2\geq 0$, and so on.) When $y_0\equiv 1$, take the positive integer $s$ such that $$3s^2-4s+2 \leq [(n-1)/3] < 3(s+1)^2-4(s+1)+2.$$ (If $[(n-1)/3]=0$ then we take $s=0$.) Then putting $x_0=3s+1$ gives one solution to the above equation $$-(3s+1)^2+1\cdot ([(n-1)/3]+n-3s^2-2s)+4\cdot (3s^2+2s-[(n-1)/3])=1 \text{ or } 2.$$ Finally when $y_0\equiv -1$, we can find $x$ with $x\equiv -y$ by the previous argument. All the cases are covered and the lemma is proved. As a consequence of Proposition \[surj I\], we have the following. \[surj II\] Let $E$ be one of the following Eisenstein lattices: $$A_2(-1)\oplus A_2^n\oplus E_8^m, \quad U^2\oplus A_2^l\oplus E_8^k \; (l\leq3).$$ Then the natural homomorphism ${\rm U}(E)\to {\rm O}(A_E)$ is surjective.
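Returning to the lemma just proved, a concrete instance of the recipe for $x_0$ may be helpful (the numerical example is ours):

```latex
% Take signature (4,1), q(x) = -x_0^2 + x_1^2 + ... + x_4^2, and the
% non-isotropic element y = (0,1,1,1,1) of L/3L.  Here y_0 = 0 and
% [n/3] = [4/3] = 1, so s = 1 (as 3s^2-6s+4 = 1 <= 1 < 4) and x_0 = 3s = 3.
% Lifting two of the coordinates y_i = 1 by 1 and the other two by -2 gives
l=(3,\,1,\,1,\,-2,\,-2), \qquad (l,l)=-9+1+1+4+4=1\in\{\pm 1,\pm 2\},
% and l reduces to y modulo 3, as required.
```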
Among the twenty-four Eisenstein lattices associated to Eisenstein $K3$ surfaces, twenty-two excepting $U^2\oplus A_2^4$ and $U^2\oplus A_2^5$ may be written in the above form (see §\[ssec: EK3\]). We will see in §\[sec:g=1\] that the surjectivity property also holds for those two, by geometric arguments. Automorphisms of Hirzebruch surfaces {#ssec: Hirze} ------------------------------------ We recall some basic facts about Hirzebruch surfaces (see, e.g., [@Ma2] §3 for more detail). For $n\geq0$ let ${{\mathbb{F}}}_n={{{\mathbb P}}}({{\mathcal{O}_{{\mathbb P}^{1}}}}(n)\oplus {{\mathcal{O}_{{\mathbb P}^{1}}}})$ be the $n$-th Hirzebruch surface with the natural projection $\pi\colon{{\mathbb{F}}}_n \to {{{\mathbb P}}}^1$. The ${{{\mathbb P}}}^1$-fibration $\pi$ has a $(-n)$-section $\Sigma$ (which is unique in case $n>0$), and also a section $H_0$ with $(H_0, H_0)=n$ that is disjoint from $\Sigma$. The Picard group of ${{\mathbb{F}}}_n$ is freely generated by $H_0$ and a fiber $F$ of $\pi$. We shall denote $L_{a,b}={{\mathcal{O}_{\mathbb{F}_{n}}}}(aH_0+bF)$. For example, $\Sigma$ belongs to $|L_{1,-n}|$; the canonical bundle $K_{{{\mathbb{F}}}_n}$ is isomorphic to $L_{-2,n-2}$. We take two distinct $\pi$-fibers $F_0$, $F_{\infty}$ and set $$\begin{aligned} U_1={{\mathbb{F}}}_n\backslash (F_{\infty}+H_0), & & U_2={{\mathbb{F}}}_n\backslash (F_0+H_0), \\ U_3={{\mathbb{F}}}_n\backslash (F_{\infty}+\Sigma), & & U_4={{\mathbb{F}}}_n\backslash (F_0+\Sigma). \end{aligned}$$ These open sets are isomorphic to ${{\mathbb{C}}}^2$ and form a covering of ${{\mathbb{F}}}_n$. Each $U_i$ has a coordinate $(x_i, y_i)$ with the transformation rules $$x_1=x_3=x_2^{-1}=x_4^{-1},$$ $$y_3=y_1^{-1}, \quad y_4=y_2^{-1}, \quad y_2=x_1^ny_1, \quad y_4=x_3^{-n}y_3,$$ and such that $\pi$ is given (inhomogeneously) by $(x_i, y_i)\mapsto x_i$. The restriction to $U_3$ of a curve $C\subset{{\mathbb{F}}}_n$ is defined by $F(x_3, y_3)=0$ for a polynomial $F$ of $x_3, y_3$. 
This identifies $H^0(L_{a,b})$ for $a, b\geq0$ with the following linear space of polynomials, up to constant: $$\label{def eq} \left\{ \sum_{i=0}^{a}f_i(x_3)y_3^{a-i}, \: {\rm deg}f_i\leq b+in \right\}.$$ If $C\in|L_{a,b}|$ is defined by $\sum_if_i(x_3)y_3^{a-i}=0$ on $U_3$, then on $U_1$ (resp. $U_4$, $U_2$) it is defined by $\sum_if_i(x_1)y_1^i=0$ (resp. $\sum_if_i(x_4^{-1})x_4^{b+in}y_4^{a-i}=0$, $\sum_if_i(x_2^{-1})x_2^{b+in}y_2^i=0$). *In the rest of this section we assume $n>0$.* Then we have the exact sequence $$\label{Aut(Hir)} 1 \to R \to {\operatorname{Aut}}({{\mathbb{F}}}_n) \to {\operatorname{Aut}}(\Sigma) \to 1$$ where $R={\operatorname{Aut}}({{\mathcal{O}_{{\mathbb P}^{1}}}}(n)\oplus {{\mathcal{O}_{{\mathbb P}^{1}}}})/{{\mathbb{C}}}^{\times}$. This sequence splits when $n$ is even. The group $R$ is isomorphic to ${{\mathbb{C}}}^{\times}\ltimes H^0({{\mathcal{O}_{{\mathbb P}^{1}}}}(n))$ and consists of the automorphisms $$\label{$R$-action in coordinate} g_{\alpha, s} : U_3\ni(x_3, y_3) \mapsto (x_3, \alpha y_3 + \sum_{i=0}^{n}\lambda_ix_3^i)\in U_3,$$ where $\alpha\in{{\mathbb{C}}}^{\times}$ and $s=\sum_{i=0}^{n}\lambda_ix^i\in H^0({{\mathcal{O}_{{\mathbb P}^{1}}}}(n))$. Later we will also use the following automorphisms: $$\label{auto coord 1} h_{\beta} : U_3\ni(x_3, y_3) \mapsto (\beta x_3, y_3)\in U_3, \quad \beta\in{{\mathbb{C}}}^{\times},$$ $$\label{auto coord 3} \iota : U_3\ni(x_3, y_3) \mapsto (x_3, y_3)\in U_4,$$ $$\label{auto coord 4} i_{\lambda} : U_2\ni(x_2, y_2) \mapsto (x_2+\lambda, y_2) \in U_2, \quad \lambda\in{{\mathbb{C}}}.$$ These rational maps actually extend to automorphisms of ${{\mathbb{F}}}_n$. We will need to know the action of ${\operatorname{Aut}}({{\mathbb{F}}}_n)$ on some spaces. \[stab of (pt, \*)\] The group ${\operatorname{Aut}}({{\mathbb{F}}}_n)$ acts on ${{\mathbb{F}}}_n$ (resp. 
${{\mathbb{F}}}_n\times\Sigma$, ${{\mathbb{F}}}_n\times{{\mathbb{F}}}_n$) almost transitively with the stabilizer $G$ of a general point being connected and solvable. The almost transitivity is checked immediately. Let $p_i$ denote the point $(x_i, y_i)=(0, 0)$ in $U_i$. We may normalize a general point of ${{\mathbb{F}}}_n$ (resp. ${{\mathbb{F}}}_n\times\Sigma$, ${{\mathbb{F}}}_n\times{{\mathbb{F}}}_n$) to be $p_3$ (resp. $(p_3, p_2)$, $(p_3, p_4)$). In view of the exact sequence $$\label{seq:stab of pt} 0\to G\cap R \to G \to {\rm Im}(G\to{\operatorname{Aut}}(\Sigma))\to 1,$$ it suffices to show that both $G_1=G\cap R$ and $G_2={\rm Im}(G\to{\operatorname{Aut}}(\Sigma))$ are connected and solvable. In the case of ${{\mathbb{F}}}_n$, $G_2$ is the stabilizer in ${\operatorname{Aut}}(\Sigma)$ of $p_1$ and hence isomorphic to ${{\mathbb{C}}}^{\times}\ltimes{{\mathbb{C}}}$, while $G_1$ is $\{ g_{\alpha,s}\in R \, | \, \lambda_0=0\}$ which is isomorphic to ${{\mathbb{C}}}^{\times}\ltimes{{\mathbb{C}}}^n$. In the case of ${{\mathbb{F}}}_n\times\Sigma$, $G_2$ is the stabilizer of the two ordered points $(p_1, p_2)$ and thus isomorphic to ${{\mathbb{C}}}^{\times}$, while $G_1$ is the same as the case of ${{\mathbb{F}}}_n$. Finally, in the case of ${{\mathbb{F}}}_n\times{{\mathbb{F}}}_n$, $G_2$ is the same as the case of ${{\mathbb{F}}}_n\times\Sigma$, and $G_1$ is $\{ g_{\alpha,s}\in R \, | \, \lambda_0=\lambda_n=0\}$ which is isomorphic to ${{\mathbb{C}}}^{\times}\ltimes{{\mathbb{C}}}^{n-1}$. \[linear system\] We have the following. $(1)$ ${\operatorname{Aut}}({{\mathbb{F}}}_n)$ acts on $|L_{0,1}|\simeq\Sigma$ transitively with connected and solvable stabilizer. $(2)$ ${\operatorname{Aut}}({{\mathbb{F}}}_n)$ acts transitively on the open locus in $|L_{1,0}|$ of smooth curves. If $G$ is the stabilizer of $H_0\in|L_{1,0}|$, the natural homomorphism $G\to{\operatorname{Aut}}(\Sigma)$ is surjective with the kernel $\{ g_{\alpha,0} \, | \, \alpha\in{{\mathbb{C}}}^{\times}\}$. 
$(3)$ Let $U\subset|L_{2,0}|$ be the open locus of smooth curves. A geometric quotient $U/{\operatorname{Aut}}({{\mathbb{F}}}_n)$ exists and is naturally isomorphic to the moduli space $\mathcal{H}_{n-1}$ of hyperelliptic curves of genus $n-1$. Finally, let $C\subset{{\mathbb{F}}}_n$ be a curve in $|L_{2,0}|$ disjoint from $\Sigma$ (not necessarily smooth nor irreducible). We let $$\label{eqn: HE invol} \iota_C : {{\mathbb{F}}}_n \to {{\mathbb{F}}}_n$$ be the involution of ${{\mathbb{F}}}_n$ which on each $\pi$-fiber $F$ exchanges the two points $C|_F$ (or fixes $C|_F$ when they coincide) and fixes the one point $\Sigma|_F$. This extends the hyperelliptic involution of $C$. The fixed locus of $\iota_C$ is written as $H+\Sigma$ for a smooth $H\in|L_{1,0}|$. We thus have the ${\operatorname{Aut}}({{\mathbb{F}}}_n)$-equivariant map $$\label{average} \varphi : |L_{2,0}|\dashrightarrow |L_{1,0}|, \quad C\mapsto H,$$ which will be used repeatedly in this article. The section $H$ must pass through the singular points of $C$. If we normalize $H$ to be $H_0$, the involution $\iota_C$ is given by $(x_3, y_3)\mapsto(x_3, -y_3)$ in the coordinate. Therefore, we have $\varphi(C)=H_0$ if and only if the equation $\sum_{i=0}^{2}f_i(x_3)y_3^{2-i}=0$ of $C$ satisfies $f_1\equiv0$. Eisenstein $K3$ surfaces {#sec: EK3} ======================== Eisenstein $K3$ surfaces {#ssec: EK3} ------------------------ Let $X$ be a complex $K3$ surface with an automorphism group $G\subset{\operatorname{Aut}}(X)$ of order $3$ which acts on $H^0(K_X)$ faithfully. We shall call such a pair $(X, G)$ an *Eisenstein $K3$ surface*. We first review the basic theory of Eisenstein $K3$ surfaces following [@A-S], [@Ta] and [@A-S-T]. Let $$\label{eqn: inv lattice} L(X, G) = H^2(X, {{\mathbb{Z}}})^G$$ be the lattice of $G$-invariant cycles, and let $$\label{eqn: anti-inv lattice} E(X, G) = L(X, G)^{\perp}\cap H^2(X, {{\mathbb{Z}}})$$ be its orthogonal complement. 
The presence of $G$ automatically implies that $X$ is algebraic, so that $L(X, G)$ is a hyperbolic lattice. We shall denote by $r$ the rank of $L(X, G)$. Since $E(X, G)$ is the orthogonal complement of $L(X, G)$ in the unimodular lattice $H^2(X, {{\mathbb{Z}}})$, the discriminant forms of $L(X, G)$ and $E(X, G)$ are canonically anti-isometric ([@Ni0]): $$\label{eqn: isom disc form} (A_{L(X, G)}, q_{L(X, G)}) \simeq (A_{E(X, G)}, -q_{E(X, G)}).$$ By [@A-S], [@Ta] these discriminant groups are 3-elementary, namely $A_{L(X, G)}\simeq({{\mathbb{Z}}}/3{{\mathbb{Z}}})^a$ for some $a\geq0$. By the definition, the group $G$ acts on $E(X, G)$ with no non-zero invariant vector. Therefore, by choosing the distinguished generator $\rho\in G$ acting on $H^0(K_X)$ by $e^{2\pi i/3}$, the even lattice $E(X, G)$ is canonically endowed with the structure of an Eisenstein lattice in the sense of §\[ssec: E lattice\]. Moreover, since $G$ acts on $L(X, G)$ trivially, it acts on $A_{E(X, G)}$ trivially by the above anti-isometry. Our usage of the terminology “Eisenstein $K3$ surface” comes from the viewpoint that $E(X, G)$ plays a fundamental role in the theory of such $K3$ surfaces. Artebani-Sarti [@A-S] and the third-named author [@Ta] classified Eisenstein $K3$ surfaces in terms of the pair $(r, a)$. \[fixed locus\] The fixed locus $X^G$ of an Eisenstein $K3$ surface $(X, G)$ is of the form $$\label{eqn: fixed locus} X^G = C^g \sqcup F_1 \sqcup\cdots\sqcup F_k\sqcup \{ p_1,\cdots, p_n\}$$ where $C^g$ is a genus $g$ curve, $F_i$ are $(-2)$-curves, and $p_j$ are isolated points with $$\label{eqn: (g,k,n)} g=\frac{22-r-2a}{4}, \qquad k=\frac{2+r-2a}{4}, \qquad n=\frac{r-2}{2}.$$ In the case $(r, a)=(8, 7)$ for which $(g, k)=(0, -1)$, this means that the fixed locus consists of $3$ isolated points and has no curve component. \[AST classification\] The deformation type of an Eisenstein $K3$ surface $(X, G)$ is determined by the invariant $(r, a)$. All possible $(r, a)$ are shown in Figure \[geography\].
![Distribution of invariants $(r,a)$[]{data-label="geography"}](geography.eps){width="9cm"} In other terms, Theorem \[AST classification\] says that the deformation type of an Eisenstein $K3$ surface $(X, G)$ is determined by the Eisenstein lattice $E(X, G)$, which in turn is determined by the signature $(2, 20-r)$ and $a=l(A_{E(X, G)})$. \[condition E lattice\] An indefinite Eisenstein lattice $(E, \rho)$ is isomorphic to $E(X, G)$ for an Eisenstein $K3$ surface $(X, G)$ if and only if $E$ can be primitively embedded into the $K3$ lattice $\Lambda_{K3}=U^3\oplus E_8^2$ as an even lattice, and $\rho$ acts trivially on $A_E$. Let $E\subset\Lambda_{K3}$ be such an Eisenstein lattice, which must have signature $(2, s)$ for some even number $s$. Let $L=E^{\perp}\cap\Lambda_{K3}$. By our assumption, $\rho$ extends to an isometry of $\Lambda_{K3}$ by acting trivially on $L$. We shall denote that extension also by $\rho$. Let $E\otimes{{\mathbb{C}}}=V\oplus\overline{V}$ be the eigendecomposition for $\rho$, where $\rho$ acts on $V$ by $e^{2\pi i/3}$. We choose a point ${{\mathbb{C}}}\omega\in{{{\mathbb P}}}V$ such that $(\omega, \bar{\omega})>0$ and $(\omega, \delta)\ne0$ for any $(-2)$-vector $\delta\in E$. Since $(\omega, \omega)=0$, by the surjectivity of the period mapping we can find a $K3$ surface $X$ for which we have a Hodge isometry $\Phi\colon H^2(X, {{\mathbb{Z}}})\to (\Lambda_{K3}, {{\mathbb{C}}}\omega)$. Composing $\Phi$ with some reflections with respect to $(-2)$-curves on $X$, we may assume that $\Phi^{-1}(L)$ contains an ample class of $X$. Then by the Torelli theorem we have an automorphism $g$ of $X$ with $g^{\ast}=\Phi^{-1}\circ\rho\circ\Phi$. By the construction, $g$ is non-symplectic of order $3$ and we have a Hodge isometry $\Phi\colon E(X, \langle g\rangle)\to (E, {{\mathbb{C}}}\omega)$ preserving the Eisenstein structures. 
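Before proceeding, the formulas for $(g, k, n)$ above can be checked against the invariants met in the introduction (the arithmetic below is ours):

```latex
\begin{aligned}
(r,a)=(2,2):  &\quad g=\tfrac{22-2-4}{4}=4,   & k=\tfrac{2+2-4}{4}=0,    &\quad n=\tfrac{2-2}{2}=0, \\
(r,a)=(12,5): &\quad g=\tfrac{22-12-10}{4}=0, & k=\tfrac{2+12-10}{4}=1,  &\quad n=\tfrac{12-2}{2}=5, \\
(r,a)=(8,7):  &\quad g=\tfrac{22-8-14}{4}=0,  & k=\tfrac{2+8-14}{4}=-1,  &\quad n=\tfrac{8-2}{2}=3.
\end{aligned}
```

Thus for $(r,a)=(2,2)$ the fixed locus is a single genus-$4$ curve, consistent with the relation of $\mathcal{M}_{2,2}$ to the moduli of genus $4$ curves, while $(r,a)=(8,7)$ gives the three isolated points noted above.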
By Theorem \[AST classification\] and Lemma \[condition E lattice\], the deformation types of Eisenstein $K3$ surfaces are in one-to-one correspondence with the isomorphism classes of Eisenstein lattices $E$ as in Lemma \[condition E lattice\], and Figure \[geography\] may be regarded as classifying such Eisenstein lattices. Moreover, the proof of Lemma \[condition E lattice\] shows that for two such Eisenstein lattices $E, E'\subset\Lambda_{K3}$ with the same invariant $(r, a)$, there exists an isometry $\gamma\in{\rm O}(\Lambda_{K3})$ such that $\gamma|_E$ gives an isomorphism $E\to E'$ of Eisenstein lattices. Here we list concrete forms of the Eisenstein lattices $E$ for each fixed $g$: $$\begin{aligned} A_2(-1)\oplus A_2^{a-1}, & \qquad & g=0 \\ U^2\oplus A_2^a, & \qquad & g=1 \\ A_2(-1)\oplus A_2^{a-1}\oplus E_8, & \qquad & g=2 \\ U^2\oplus A_2^a\oplus E_8, & \qquad & g=3 \\ A_2(-1)\oplus A_2^{a-1}\oplus E_8^2, & \qquad & g=4 \\ U^2\oplus E_8^2, & \qquad & g=5 \end{aligned}$$ Next we study a relationship between the invariant lattice $L(X, G)$ and the fixed locus $X^G$. Let $\hat{X}\rightarrow X$ be the blow-up at the isolated fixed points $p_1,\cdots, p_n$ of $G$, and $E_i\subset\hat{X}$ the $(-1)$-curve over $p_i$. The $G$-action extends to $\hat{X}$ with the fixed locus $$\hat{X}^G = C^g + F_1 + \cdots + F_k + E_1 + \cdots + E_n.$$ We shall denote $L(\hat{X}, G)=H^2(\hat{X}, {{\mathbb{Z}}})^G$, which is freely generated by $L(X, G)$ and $E_1,\cdots, E_n$. Since $\hat{X}^G$ is a curve, the quotient surface $\hat{Y}=\hat{X}/G$ is smooth. It is easy to see that $\hat{Y}$ is rational. Let $\hat{f}\colon\hat{X}\to\hat{Y}$ be the quotient morphism.
Substituting the relation $K_{\hat{X}}\sim \sum_iE_i$ into the ramification formula for $\hat{f}$, we obtain $$\label{eqn:relation in L/NS_Y} -\hat{f}^{\ast}K_{\hat{Y}} \sim 2C^g+2\sum_{i=1}^{k}F_i+\sum_{j=1}^{n} E_j,$$ which we regard as a relation among the curves $C^g, F_i, E_j$ in $L(\hat{X}, G)/\hat{f}^{\ast}NS_{\hat{Y}}$. \[gene gene L\] The invariant lattice $L(\hat{X}, G)$ is generated by the sublattice $\hat{f}^{\ast}NS_{\hat{Y}}$ and the classes of the fixed curves $C^g, F_i, E_j$. First note that $\hat{f}^{\ast}NS_{\hat{Y}}$ is of finite index in $L(\hat{X}, G)$, because for any $l\in L(\hat{X}, G)$ we have $3l=\hat{f}^{\ast}\hat{f}_{\ast}l \in \hat{f}^{\ast}NS_{\hat{Y}}$. Both $L(\hat{X}, G)$ and $\hat{f}^{\ast}NS_{\hat{Y}}\simeq NS_{\hat{Y}}(3)$ have $3$-elementary discriminant groups of length $a$, ${\rm rk}(NS_{\hat{Y}})$ respectively. Since $${\rm rk}(NS_{\hat{Y}})={\rm rk}(L(\hat{X}, G))=r+n,$$ the sublattice $\hat{f}^{\ast}NS_{\hat{Y}}$ is of index $3^{(r+n-a)/2}$ in $L(\hat{X}, G)$. We have $\frac{r+n-a}{2}=k+n$ by the formulas for $(g, k, n)$, so that the assertion reduces to the following lemma. \[1kodake\] Up to $\pm1$, the relation $-\hat{f}^{\ast}K_{\hat{Y}} \sim 2C^g+2\sum_iF_i+\sum_jE_j$ above is the only relation among $\{ C^{g}, F_i, E_j \}_{i,j}$ in the vector space $L(\hat{X}, G)/\hat{f}^{\ast }NS_{\hat{Y}}$ over ${{\mathbb{Z}}}/3{{\mathbb{Z}}}$. Let $$\label{eqn: relation I} \alpha C^g + \sum_i\beta_iF_i + \sum_j\gamma_jE_j \equiv 0, \qquad \alpha, \beta_i, \gamma_j \in {{\mathbb{Z}}}/3{{\mathbb{Z}}},$$ be a relation among $C^g, F_i, E_j$ in $L(\hat{X}, G)/\hat{f}^{\ast }NS_{\hat{Y}}$. Since $\hat{f}_{\ast}\hat{f}^{\ast}NS_{\hat{Y}}=3NS_{\hat{Y}}$, we apply $\hat{f}_{\ast}$ to this relation to obtain $$\label{eqn: relation II} \alpha \hat{f}_{\ast}C^g + \sum_i\beta_i\hat{f}_{\ast}F_i + \sum_j\gamma_j\hat{f}_{\ast}E_j \equiv 0 \quad \textrm{in} \: \: NS_{\hat{Y}}/3NS_{\hat{Y}}.$$ We can identify $NS_{\hat{Y}}/3NS_{\hat{Y}}$ with $H_2(\hat{Y}, {{\mathbb{Z}}}/3{{\mathbb{Z}}})$ by the Poincaré duality and the universal coefficient theorem.
Therefore the relation above gives an element of the kernel of the map $$\label{pushforward fixed curve} \hat{f}_{\ast} : H_2(\hat{X}^G, {{\mathbb{Z}}}/3{{\mathbb{Z}}}) \to H_2(\hat{Y}, {{\mathbb{Z}}}/3{{\mathbb{Z}}}).$$ Regarding $\hat{X}^G$ as a curve on $\hat{Y}$ naturally, this map fits into the homology exact sequence for the pair $(\hat{Y}, \hat{X}^G)$: $$\cdots \to H_3(\hat{Y}, \hat{X}^G, {{\mathbb{Z}}}/3{{\mathbb{Z}}}) \to H_2(\hat{X}^G, {{\mathbb{Z}}}/3{{\mathbb{Z}}}) \stackrel{\hat{f}_{\ast}}{\to} H_2(\hat{Y}, {{\mathbb{Z}}}/3{{\mathbb{Z}}}) \to \cdots .$$ Then we have $h_3(\hat{Y}, \hat{X}^G, {{\mathbb{Z}}}/3{{\mathbb{Z}}})=1$ by [@A-S-T] Lemma 2.5. This proves our claim. Moduli spaces {#ssec: classification} ------------- Let $(r, a)$ be an invariant in Figure \[geography\]. We fix an Eisenstein lattice $(E, \rho)$ of signature $(2, 20-r)$ such that $A_E\simeq({{\mathbb{Z}}}/3{{\mathbb{Z}}})^a$ and that $\rho$ acts on $A_E$ trivially. Let $E\otimes{{\mathbb{C}}}=V\oplus\overline{V}$ be the eigendecomposition for $\rho$ where $\rho|_V=e^{2\pi i/3}$. The Hermitian form on $V$ defined by $(v, \bar{w})$ for $v, w\in V$ is isometric to $E\otimes{{\mathbb{R}}}$ up to a scaling (§\[ssec: E lattice\]) and thus has signature $(1, 10-r/2)$. Therefore the domain $$\label{eqn: ball} \mathcal{B}_{E} = \{ {{\mathbb{C}}}\omega\in{{{\mathbb P}}}V, \; (\omega, \bar{\omega})>0 \}$$ is a complex ball of dimension $10-r/2$. The unitary group ${\rm U}(E)$ of $E$ acts on $\mathcal{B}_E$. We define a complex analytic divisor $\mathcal{H}$ in $\mathcal{B}_E$ by $\mathcal{H}=\sum_{\delta}\delta^{\perp}$ where $\delta$ ranges over the $(-2)$-vectors in $E$. Then we consider the open set of the ball quotient (or Picard modular variety) $$\label{def moduli} {{\mathcal{M}_{r,a}}} = {\rm U}(E) \backslash (\mathcal{B}_E - \mathcal{H}),$$ which is a normal quasi-projective variety of dimension $10-r/2$. Let $(X, G)$ be an Eisenstein $K3$ surface of invariant $(r, a)$.
By Theorem \[AST classification\] there exists an isomorphism $\Phi\colon E(X, G)\to E$ of Eisenstein lattices. The ${{\mathbb{C}}}$-linear extension of $\Phi$, also denoted by $\Phi$, maps $H^{2,0}(X)$ to a point of $\mathcal{B}_E$. Then $\Phi(H^{2,0}(X))$ is contained in the complement of $\mathcal{H}$ (cf. [@D-K0], [@A-S-T]), and we define the period of $(X, G)$ by $$\label{eqn: def of period} \mathcal{P}(X, G)=[\Phi(H^{2,0}(X))]\in{{\mathcal{M}_{r,a}}}.$$ This is independent of the choice of $\Phi$. \[moduli\] The variety ${{\mathcal{M}_{r,a}}}$ is the moduli space of Eisenstein $K3$ surfaces of type $(r, a)$ in the following sense. $(1)$ For any family $(\mathcal{X}\to U, G)$ of such Eisenstein $K3$ surfaces over a variety $U$, the period map $\mathcal{P}\colon U\to{{\mathcal{M}_{r,a}}}$ is a morphism of varieties. $(2)$ Via the period mapping the points of ${{\mathcal{M}_{r,a}}}$ are in one-to-one correspondence with the isomorphism classes of such Eisenstein $K3$ surfaces. The fact that period maps are morphisms is a consequence of Borel’s extension theorem [@Bo]. The surjectivity of the period mapping is proved in [@D-K0] §11 and also in [@A-S-T] (cf. the proof of Lemma \[condition E lattice\]). Here we shall supplement the proof of the injectivity, which is more or less asserted in [@A-S-T] §9 without proof. Let us begin with the following basic lemma. \[Weyl action nef\] Let $(X, G)$ be an Eisenstein $K3$ surface and let $W(X)$ be the Weyl group of $NS_X$ generated by $(-2)$-reflections. For every $l\in L(X, G)$ with $(l, l)\geq0$ there exists $w\in W(X)$ commuting with the $G$-action such that either $w(l)$ or $-w(l)$ is nef. This is analogous to [@B-H-P-V] Proposition VIII 21.1. We may assume that $(l, h_0)\geq0$ for an ample class $h_0\in NS_X$. Let $D\subset X$ be a $(-2)$-curve with $(l, D)<0$. Then for a generator $\rho\in G$ we have $(D, \rho(D))\leq0$. 
If not, the effective divisor class $C=D+\rho(D)+\rho^{-1}(D)$ in $L(X, G)$ would have norm $\geq0$ and satisfy $(l, C)<0$, which is a contradiction. Therefore $D$ is either preserved by $G$ or disjoint from $\rho(D)$. In the former case we apply to $l$ the reflection with respect to $D$, which commutes with the $G$-action. In the latter case the three curves $D$, $\rho(D)$ and $\rho^{-1}(D)$ are pairwise disjoint. Then we apply to $l$ the composition of the three reflections with respect to these curves, which also commutes with the $G$-action. As in [@B-H-P-V], this process will terminate and $l$ will finally be mapped to a nef class. Returning to the proof of Theorem \[moduli\], we let two Eisenstein $K3$ surfaces $(X, G), (X', G')$ of type $(r, a)$ have the same period in ${{\mathcal{M}_{r,a}}}$. This means that we have an isomorphism $\gamma:E(X, G)\to E(X', G')$ of Eisenstein lattices preserving the Hodge structures. We want to extend $\gamma$ to a Hodge isometry $\Phi\colon H^2(X, {{\mathbb{Z}}})\to H^2(X', {{\mathbb{Z}}})$. Since $L(X, G)$ and $L(X', G')$ are isometric, by a standard discriminant-group argument (cf. [@Ni0]) it suffices to show that the natural homomorphism ${\rm O}(L(X, G))\to{\rm O}(A_{L(X, G)})$ is surjective. When $(r, a)\ne(2, 2), (4, 3), (8, 7)$, we have $r\geq a+2$ so that our claim follows from [@Ni0] Theorem 1.14.2. The case $(r, a)=(2, 2)$ is easily checked. For the remaining two cases, we may resort to the assertions (i), (iii) of the Theorem of [@M-M]. Thus we obtain the desired extension $\Phi$ of $\gamma$. By the above lemma we may compose $\Phi$ with a $G$-equivariant $w\in W(X)$ so that $\Phi\circ w$ preserves the ample cones. By the Torelli theorem we have an isomorphism $\varphi\colon X'\to X$ with $\varphi^{\ast}=\Phi\circ w$. Then $\varphi$ is ${{\mathbb{Z}}}/3{{\mathbb{Z}}}$-equivariant because $\varphi^{\ast}$ is so. Therefore $(X, G)$ is isomorphic to $(X', G')$. We set $g=(22-r-2a)/4$ as in .
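For instance, a direct check of this formula against the invariants that will appear in Examples \[ex1\]–\[ex3\] below gives $$g=\frac{22-8-2\cdot3}{4}=2 \ \text{ for } (r,a)=(8,3), \qquad g=\frac{22-14-2\cdot2}{4}=1 \ \text{ for } (14,2), \qquad g=\frac{22-4-2\cdot3}{4}=3 \ \text{ for } (4,3),$$ matching the invariants $(g, k)=(2, 1)$, $(1, 3)$ and $(3, 0)$ used there.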
Let $\mathcal{M}_g$ be the moduli space of genus $g$ curves. When $g>0$, we have the *fixed curve map* $$\label{eqn: def fixed curve map} {{\mathcal{M}_{r,a}}} \to \mathcal{M}_g, \qquad (X, G)\mapsto C^g,$$ where $C^g$ is the genus $g$ curve in $X^G$. This map will be analyzed for some ${{\mathcal{M}_{r,a}}}$ in the rest of the article. Marked Eisenstein $K3$ surfaces {#ssec: cover} ------------------------------- We define a Galois cover of ${{\mathcal{M}_{r,a}}}$ that will be used in our degree calculation of period maps (§\[ssec: recipe\]). It is also treated systematically in [@D-K0] §11. Let $E$ be the Eisenstein lattice used in the definition of ${{\mathcal{M}_{r,a}}}$. The natural homomorphism ${\rm U}(E) \to {\rm O}(A_E)$ is surjective by Corollary \[surj II\] (for $(r, a)\ne(8, 5), (10, 4)$) and Propositions \[birat (8,5)\], \[birat (10,4)\] (for $(r, a)=(8, 5), (10, 4)$ respectively). Let $\widetilde{{\rm U}}(E)$ be the kernel of ${\rm U}(E) \to {\rm O}(A_E)$. We consider the ball quotient $$\label{def cover} {{\widetilde{\mathcal{M}}_{r,a}}} = \widetilde{{\rm U}}(E) \backslash \mathcal{B}_E.$$ Its open set over ${{\mathcal{M}_{r,a}}}$ is a Galois cover of ${{\mathcal{M}_{r,a}}}$ with Galois group ${\rm O}(A_E)/\pm1$. In particular, the degree of the projection ${{\widetilde{\mathcal{M}}_{r,a}}}\dashrightarrow{{\mathcal{M}_{r,a}}}$ is given by $$\label{proj degree} \left\{ \begin{array}{cl} |{\rm O}(A_E)|/2, & \quad a>0, \\ 1, & \quad a=0. \\ \end{array} \right.$$ Since $(A_E, q_E)$ is a finite quadratic form in characteristic $3$, we can calculate $|{\rm O}(A_E)|$ by referring to, e.g., [@Atlas]. We shall use the following standard notation for orthogonal groups in characteristic $3$: ${\rm GO}(2m+1, 3)$, ${\rm GO}^+(2m, 3)$ and ${\rm GO}^-(2m, 3)$. As essentially explained in [@D-K0] §10 – §11, ${{\widetilde{\mathcal{M}}_{r,a}}}$ is birationally a moduli space of Eisenstein $K3$ surfaces with marking of its invariant lattice. 
We fix an even hyperbolic $3$-elementary lattice $L$ of rank $r$ and $l(A_L)=a$, a primitive embedding $L\subset\Lambda_{K3}$, and an isometry $E\simeq L^{\perp}\cap\Lambda_{K3}$ of quadratic forms. We extend the ${{\mathbb{Z}}}/3{{\mathbb{Z}}}$-action on $E$ to $\Lambda_{K3}$ by the trivial action on $L$. Suppose that we are given an Eisenstein $K3$ surface $(X, G)$ with an isometry $j\colon L\to L(X, G)$ of quadratic forms. By the surjectivity of ${\rm U}(E) \to {\rm O}(A_E)$,[^3]\[footnote1\] the embedding $j$ extends to a ${{\mathbb{Z}}}/3{{\mathbb{Z}}}$-equivariant isometry $\Phi\colon\Lambda_{K3}\to H^2(X, {{\mathbb{Z}}})$. Since the restriction of $\Phi$ to $L$ is fixed, the isometry $\Phi|_E\colon E\to E(X, G)$ is determined up to the action of $\widetilde{{\rm U}}(E)$ by [@Ni0]. Then we define the period of the Eisenstein $K3$ surface $(X, G)$ with the lattice-marking $j$ by $$\label{lifted period} \widetilde{\mathcal{P}}((X, G), j) = [ \Phi|_E^{-1}(H^{2,0}(X)) ] \in {{\widetilde{\mathcal{M}}_{r,a}}}.$$ Clearly, two such lattice-marked Eisenstein $K3$ surfaces $((X, G), j)$, $((X', G'), j')$ have the same $\widetilde{\mathcal{P}}$-period in ${{\widetilde{\mathcal{M}}_{r,a}}}$ if and only if there exists a ${{\mathbb{Z}}}/3{{\mathbb{Z}}}$-equivariant Hodge isometry $\Psi \colon H^2(X, {{\mathbb{Z}}}) \to H^2(X', {{\mathbb{Z}}})$ with $\Psi\circ j = j'$. The open set of ${{\widetilde{\mathcal{M}}_{r,a}}}$ over ${{\mathcal{M}_{r,a}}}$ parametrizes such equivalence classes of Eisenstein $K3$ surfaces with lattice-marking. Triple cover construction ========================= Mixed branch {#ssec:mixed branch} ------------ We develop the triple cover construction of Eisenstein $K3$ surfaces in a moderate generality sufficient for the proof of Theorem \[main\]. We propose the notion of *mixed branch* as an analogue of a DPN pair [@A-N], that is, a singular branch *curve* on a smooth surface.
The key idea is to use the multiplicity of the divisor to distinguish the branch components that turn into isolated fixed points from those that turn into fixed curves. The formal resolution process is set up so as to preserve this geometric idea. \[def:mixed branch\] Let $Y$ be a smooth rational surface. A mixed branch on $Y$ is a ${{\mathbb{Q}}}$-divisor $B=B_1+\frac{1}{2}B_2$ linearly equivalent to $-\frac{3}{2}K_Y$, where $B_1, B_2$ are reduced curves having no common component, with the following properties. $(1)$ ${\rm Sing}(B_1)$ consists of at most nodes, cusps, tacnodes and ramphoid cusps. $(2)$ $B_2$ is a union of rational curves, and its singularities (if any) are only ordinary triple points disjoint from ${\rm Sing}(B_1)$. $(3)$ If $B_2$ passes through a singular point $p$ of $B_1$, then $p$ is a node or cusp of $B_1$, and $B_1+B_2$ has more than one tangent at $p$. We call $\frac{1}{2}B_2$ the *shadow part*[^4] of $B$. The condition $(1)$ comes from the demand that the local triple cover around $p\in{\rm Sing}(B_1)$ branched over $B_1$ has only A-D-E singularities (see the next §\[ssec:pure branch\]). Let us denote $(B_i)_{sm}=B_i\backslash{\rm Sing}(B_i)$. The multiplicity of $B$ at a singular point $p$ of $B_1+B_2$ is classified as follows: - $3/2$ ($p\in{\rm Sing}(B_2)\backslash B_1$ or $p\in(B_1)_{sm}\cap(B_2)_{sm}$) - $2$ ($p\in{\rm Sing}(B_1)\backslash B_2$) - $5/2$ ($p\in{\rm Sing}(B_2)\cap B_1$ or $p\in{\rm Sing}(B_1)\cap B_2$) We can resolve a mixed branch $B=B_1+\frac{1}{2}B_2$ in the following way. Let $Y'\to Y$ be the blow-up at a singular point $p$ of $B_1+B_2$. We define a mixed branch on $Y'$ by$$\label{resol process} B_1'+\frac{1}{2}B_2' = \widetilde{B}_1+\frac{1}{2}\widetilde{B}_2+(m-\frac{3}{2})E,$$ where $\widetilde{B}_i$ is the strict transform of $B_i$, $m$ is the multiplicity of $B$ at $p$, and $E$ is the $(-1)$-curve over $p$.
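For instance, by the classification of multiplicities above, the rule reads $$B_1'+\frac{1}{2}B_2' = \widetilde{B}_1+\frac{1}{2}\bigl(\widetilde{B}_2+E\bigr) \quad (m=2), \qquad B_1'+\frac{1}{2}B_2' = \bigl(\widetilde{B}_1+E\bigr)+\frac{1}{2}\widetilde{B}_2 \quad (m=\tfrac{5}{2}),$$ so the exceptional curve enters the shadow part over a singular point of $B_1$ away from $B_2$, enters the reduced part when $m=5/2$, and for $m=3/2$ does not enter the branch at all.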
One checks that $B'=B_1'+\frac{1}{2}B_2'$ is linearly equivalent to $-\frac{3}{2}K_{Y'}$ and satisfies the conditions $(1)$–$(3)$ in Definition \[def:mixed branch\]. Continuing this resolution process $\cdots\to(Y'', B'')\to(Y', B')$, we finally obtain a mixed branch $(\hat{Y}, \hat{B}_1+\frac{1}{2}\hat{B}_2)$ with $\hat{B}_1+\hat{B}_2$ smooth. We shall call this procedure the *right resolution* of $(Y, B)$. Substituting the relation $2\hat{B}_1+\hat{B}_2 \sim -3K_{\hat{Y}}$ into the adjunction formula, we see that every rational component of $\hat{B}_1$ (resp. $\hat{B}_2$) is a $(-6)$-curve (resp. $(-3)$-curve). Since $\hat{B}_1-\hat{B}_2 \sim 3(K_{\hat{Y}}+\hat{B}_1)$, we can take a cyclic triple cover $\hat{f}\colon\hat{X}\to\hat{Y}$ branched over $\hat{B}_1+\hat{B}_2$ by the following general lemma. Let $Y$ be a complex manifold and $D_1, D_2$ be disjoint smooth divisors on $Y$ with $D_1-D_2\in d {{{\rm Pic}}}(Y)$. Then there exists a cyclic cover $X\to Y$ of degree $d$ branched over $D_1+D_2$. As usual, we choose a line bundle $L$ with an isomorphism $L^{\otimes d}\simeq {{\mathcal{O}}}_Y(D_1-D_2)$. We compactify the total space of $L$ to $\overline{L}={{{\mathbb P}}}({{\mathcal{O}}}_Y\oplus L)$ (adding $\infty$ to each fiber). If $s$ is a meromorphic section of $L^{\otimes d}$ with ${\rm div}(s)=D_1-D_2$, then the divisor $\{ v\in \overline{L}, v^{\otimes d}=s\}$ in $\overline{L}$ gives the desired covering. Alternatively, by the relation $2\hat{B}_1+\hat{B}_2 \in {\rm Pic}(\hat{Y})$ we can take a cyclic triple cover $\hat{X}'\to\hat{Y}$ branched over $2\hat{B}_1+\hat{B}_2$. This $\hat{X}'$ has cuspidal singularities along $\hat{B}_1$, and $\hat{X}$ can also be obtained as the normalization of $\hat{X}'$. By the ramification formula we see that $$K_{\hat{X}} \sim \hat{f}^{\ast}(K_{\hat{Y}} + \hat{B}_1+\hat{B}_2) - \hat{f}^{-1}(\hat{B}_1+\hat{B}_2) \sim \hat{f}^{-1}(\hat{B}_2),$$ where $\hat{f}^{-1}(\hat{B}_i)$ denotes the reduced inverse image. 
The divisor $\hat{f}^{-1}(\hat{B}_2)$ is a disjoint union of $(-1)$-curves. Blowing them down, we obtain a surface $X$ with $K_X\simeq{{\mathcal{O}}}_X$, namely a $K3$ or abelian surface. The ${{\mathbb{Z}}}/3{{\mathbb{Z}}}$-action on $\hat{X}\to\hat{Y}$ equips $X$ with a non-symplectic symmetry $G$ of order $3$. The abelian case does happen, but is quite rare. Specifically, \[abelian case\] The surface $X$ is abelian if and only if $B_1=0$ and $B_2$ has nine components. If $X$ is abelian, the fixed locus $X^G$ is either the union of isolated points or of disjoint elliptic curves (cf. [@B-L]). In the latter case the quotient $X/G$ is again an abelian surface, which falls outside the present situation. In the former case we have $|X^G|=9$ by [@B-L] Example 13.2.7, and thus $B_2$ has nine components and $B_1$ is empty. Conversely, if $B_1=0$ and $B_2$ has nine components, $X$ cannot be $K3$ by Figure \[geography\]. When $X$ is a $K3$ surface, we thus obtain an Eisenstein $K3$ surface associated to the mixed branch $(Y, B_1+\frac{1}{2}B_2)$. Let $E\subset Y$ be one of the following types of $(-1)$-curves: - those $E$ transverse to $B_1+B_2$; - components $E$ of $B_1$ with $(E, B_2)=1$; - components $E$ of $B_2$ which are disjoint from other components of $B_2$. If $\pi\colon Y\to \overline{Y}$ is the blow-down of $E$, then $(\overline{Y}, \pi(B_1)+\frac{1}{2}\pi(B_2))$ is again a mixed branch. In this way, by composing blow-ups and such blow-downs, we can pass from a given mixed branch to another one with a common smooth model. Regrettably, we have restrictions on the types of blow-downs, due to the singularity conditions in Definition \[def:mixed branch\]. To remedy this we could also extend the definition of mixed branch by allowing any blown-down image of a smooth mixed branch (cf. §\[ssec:(8,5)\]), but at present this is less effective. In any case, the present generality is handy, and sufficient for giving a canonical construction of general members of most ${{\mathcal{M}_{r,a}}}$.
Actually, for seventeen of the ${{\mathcal{M}_{r,a}}}$ we will use mixed branches with no shadow. Thus in the next subsection we shall be more specific about that case. We were led to the notion of mixed branch by tracking the resolution of $-\frac{3}{2}K_{{{\mathbb{F}}}_n}$-curves on ${{\mathbb{F}}}_n$ (see §\[ssec:pure branch\]). It seems that the rule would also explain the resolution process in [@O-T] for certain singular del Pezzo surfaces, by detecting the shadow part $B_2$ by discrepancy. Anti-tri-halfcanonical curves on Hirzebruch surfaces {#ssec:pure branch} ---------------------------------------------------- A mixed branch with no shadow is just a reduced curve $B\sim-\frac{3}{2}K_Y$ with at most nodes, cusps, tacnodes and ramphoid cusps as the singularities. Since $3K_Y\in2{{{\rm Pic}}}(Y)$ and $|\!-\!\frac{3}{2}K_Y|$ contains a reduced member, $Y$ must be a Hirzebruch surface ${{\mathbb{F}}}_n$ with $n\in\{0, 2, 4, 6\}$. In this case, we have $B\in3{{{\rm Pic}}}({{\mathbb{F}}}_n)$ so that we may take a cyclic triple cover $\overline{X}\to{{\mathbb{F}}}_n$ branched over $B$. Looking at the local equations of the singularities of $B$, we see that the singularities of $\overline{X}$ (lying over ${\rm Sing}(B)$) are as follows: - $A_2$-points ($z^3=x^2+y^2$) over nodes ($x^2+y^2=0$), - $D_4$-points ($z^3=x^2+y^3$) over cusps ($x^2+y^3=0$), - $E_6$-points ($z^3=x^2+y^4$) over tacnodes ($x^2+y^4=0$), - $E_8$-points ($z^3=x^2+y^5$) over ramphoid cusps ($x^2+y^5=0$). In particular, $\overline{X}$ has only A-D-E singularities. Since $K_{\overline{X}}\simeq\mathcal{O}_{\overline{X}}$, we can resolve ${\rm Sing}(\overline{X})$ to obtain a $K3$ surface $X$ with a non-symplectic symmetry $G$ of order $3$. ($X$ cannot be an abelian surface by Lemma \[abelian case\].) It is clear that this Eisenstein $K3$ surface $(X, G)$ coincides with the one obtained in §\[ssec:mixed branch\] using the resolution of $B$.
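For instance, the first of the local equations above is brought to the standard form of the $A_2$-singularity by a linear change of coordinates: $$z^3=x^2+y^2=(x+\sqrt{-1}\,y)(x-\sqrt{-1}\,y), \qquad \text{i.e.}\quad z^3=uv \quad \text{with } u=x+\sqrt{-1}\,y,\ v=x-\sqrt{-1}\,y;$$ the other three cases are identified similarly with the normal forms of the $D_4$-, $E_6$- and $E_8$-singularities.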
A virtue in the present situation is that we have a natural projection $f\colon X\to{{\mathbb{F}}}_n$. Let $L\in{{{\rm Pic}}}({{\mathbb{F}}}_n)$ be the bundle $L_{1,0}$ (resp. ${{\mathcal{O}}}_{{{\mathbb{F}}}_0}(1, 1)$) when $n=2, 4, 6$ (resp. $n=0$). The subspace $f^{\ast}H^0(L)\subset H^0(f^{\ast}L)$ is the eigenspace for $G$ with eigenvalue $1$. The morphism $X\to f^{\ast}|L|^{\vee}$ associated to the linear system $f^{\ast}|L|$ is the composition of $f$ and the morphism ${{\mathbb{F}}}_n\to|L|^{\vee}$ associated to $L$. The last one is the contraction of the $(-n)$-curve $\Sigma$ (resp. an embedding) when $n\geq2$ (resp. $n=0$). Checking that $f^{\ast}|L|\subset|f^{\ast}L|$ has strictly larger dimension than the other two eigenspaces, we have the following useful \[recovery\] Let $B, B'\in|\!-\!\frac{3}{2}K_{{{\mathbb{F}}}_n}|$ be as above, and $(X, G), (X', G')$ be the associated Eisenstein $K3$ surfaces with the projections $f\colon X\to{{\mathbb{F}}}_n, f'\colon X'\to{{\mathbb{F}}}_n$. If we have an isomorphism $\varphi\colon(X, G)\to(X', G')$ with $\varphi^{\ast}(f')^{\ast}L\simeq f^{\ast}L$, then we have an automorphism $\psi$ of ${{\mathbb{F}}}_n$ with $f'\circ\varphi=\psi\circ f$. Let us describe the configurations of curves lying over the singularities of $B$. Let $(\hat{Y}, \hat{B}_1+\frac{1}{2}\hat{B}_2)$ be the right resolution of $({{\mathbb{F}}}_n, B)$ and $\hat{X}\to\hat{Y}$ be the triple cover branched over $\hat{B}_1+\hat{B}_2$. Let $p$ be a singular point of $B$. Following the blow-up procedure , we see that the dual graph of the curves on $\hat{Y}$ contracted to $p$ is, according to the type of singularity, as follows. 
[Figure \[dual graph\]: the dual graphs of the curves on $\hat{Y}$ contracted to $p$, in the $A_2$-, $D_4$-, $E_6$- and $E_8$-cases.] Here a white circle represents a $(-1)$-curve; a black circle represents a $(-2)$-curve (disjoint from $\hat{B}_1+\hat{B}_2$); a double circle represents a $(-6)$-curve (a component of $\hat{B}_1$); and a star represents a $(-3)$-curve (a component of $\hat{B}_2$). The reduced inverse images of those curves by $\hat{X}\to\hat{Y}$ are respectively a $(-3)$-curve; three disjoint $(-2)$-curves; a $(-2)$-curve; and a $(-1)$-curve. Blowing down the last $(-1)$-curves, we obtain the configuration of exceptional curves of the resolution $X\to\overline{X}$ over $p$. Its dual graph $\Gamma_p$ (isomorphic to the Dynkin graph of $A_2$-, $D_4$-, $E_6$- or $E_8$-type) is obtained from the graph in Figure \[dual graph\] by multiplying the black circle thrice and contracting the stars. Thus the stars turn to isolated fixed points of $G$, and the double circles turn to fixed curves.
When $p$ is a cusp, $G$ acts on $\Gamma_p$ by the cyclic permutations; in other cases $G$ acts on $\Gamma_p$ trivially. One should note that, when $p$ is a node or tacnode, there are *two* identifications of our geometric dual graph $\Gamma_p$ with the abstract $A_2$- or $E_6$-graph. A choice of such an identification corresponds to a labeling for the two branches of $B$ at $p$. On the other hand, when $p$ is a ramphoid cusp, such an identification is unique. From these we can compute the topological invariants of $(X, G)$ as follows. Let $k_0+1$ be the number of components of $B$, and let $a_2, d_4, e_6$ and $e_8$ denote the number of nodes, cusps, tacnodes, and ramphoid cusps of $B$ respectively. Then the number $k+1$ of fixed curves of $(X, G)$ is given by $$\label{calculate k 3.3} k = k_0 + e_6 + 2e_8,$$ and the number $n$ of isolated fixed points of $(X, G)$ is given by $$\label{calculate n 3.3} n = a_2 + d_4 + 3e_6 + 4e_8.$$ The rank $r$ of the invariant lattice $L(X, G)$ is the Picard number of $\hat{Y}$ minus $n$, which is given by $$\label{calculate r 3.3} r = 2 + 2a_2 + 2d_4 + 6e_6 + 8e_8.$$ In the rest of this subsection we work under the following “genericity" assumption: $$\label{genericity assumption} \textit{${\rm Sing}(B)$ does not contain cusps.}$$ Then for a singular point $p\in B$, we denote by $\Lambda_p\subset NS_X$ the root lattice generated by the exceptional curves of the resolution $X\to\overline{X}$ over $p$. As observed above, $\Lambda_p$ is contained in the invariant lattice $L(X, G)$. Let $B=\sum_{i=0}^{k_0}B_i$ be the irreducible decomposition of $B$, and $F_i\subset X$ be the fixed curve of $G$ with $f(F_i)=B_i$. \[gene L\] The invariant lattice $L(X, G)$ is generated by the sublattice $f^{\ast}NS_{{{\mathbb{F}}}_n}\oplus(\oplus_p\Lambda_p)$ where $p\in{\rm Sing}(B)$, and the classes of $F_i$, $0\leq i\leq k_0$. Consider the blow-up $\pi: \hat{X}\to X$ of the isolated fixed points. 
By Proposition \[gene gene L\] and Figure \[dual graph\], the invariant lattice $L(\hat{X}, G)$ of $(\hat{X}, G)$ is generated by $\pi^{\ast}(f^{\ast}NS_{{{\mathbb{F}}}_n}\oplus(\oplus_p\Lambda_p))$, the classes of $\pi^{\ast}F_i$, and the classes of exceptional curves of $\pi$. Contracting the exceptional curves, we see our assertion for $L(X, G)$. Let us emphasize (again) that when $p$ is a ramphoid cusp, we have a unique isometry $E_8\to\Lambda_p$ that maps the natural root basis to the classes of $(-2)$-curves, while when $p$ is a node (resp. tacnode), we have two such natural isometries $A_2\to\Lambda_p$ (resp. $E_6\to\Lambda_p$) corresponding to the two labelings of the branches of $B$ at $p$. Finally, we shall construct an ample class in $L(X, G)$ using the above objects. We denote by $e_{i\pm}, e_i$ the root basis of the $E_6$- and $E_8$-lattices according to the following numberings for the vertices of the $E_6$- and $E_8$-graphs: [Figure \[numbering\]: the $E_6$-graph with its chain of vertices labelled $1+$, $2+$, $3$, $2-$, $1-$ and the vertex $4$ attached at $3$; the $E_8$-graph with its chain of vertices labelled $7$, $6$, $5$, $4$, $3$, $2$, $1$ and the vertex $8$ attached at $3$.] For a tacnode $p\in{\rm Sing}(B)$, let $D_p\in\Lambda_p$ be the image of $e_3+\sum_{i=1}^{2}3^{3-i}(e_{i+}+e_{i-})$ by either of the natural isometries $E_6\to\Lambda_p$; for a ramphoid cusp $p\in{\rm Sing}(B)$, let $D_p\in\Lambda_p$ be the image of $\sum_{i=1}^{6}3^{6-i}e_i$ by the natural isometry $E_8\to\Lambda_p$.
\[ample\] For an arbitrary ample line bundle $L\in{\rm Pic}({{\mathbb{F}}}_n)$, the class $$3^{20}f^{\ast}L + 3^{10}\sum_{i=0}^{k_0}F_i + \sum_{p}D_p,$$ where $p$ runs over the tacnodes and ramphoid cusps of $B$, is ample. Check the Nakai criterion (see, e.g., [@B-H-P-V] Chapter IV.6). Degree of period map {#ssec: recipe} -------------------- As in §\[ssec:pure branch\], let ${{\mathbb{F}}}_n$ be a Hirzebruch surface with $n\in\{0, 2, 4, 6\}$. Suppose we have an irreducible, ${\operatorname{Aut}}({{\mathbb{F}}}_n)$-invariant locus $U\subset|-\frac{3}{2}K_{{{\mathbb{F}}}_n}|$ such that $({\rm i})$ every member $B_u\in U$ has only nodes, tacnodes and ramphoid cusps as the singularities, and $({\rm ii})$ the number of singularities of $B_u$ of each type and the number of components of $B_u$ are constant. Then the Eisenstein $K3$ surfaces associated to $({{\mathbb{F}}}_n, B_u)$ have constant invariant $(r, a)$, and we obtain a period map $p\colon U\to\mathcal{M}_{r, a}$ as a morphism of varieties. Since this construction is invariant under ${\operatorname{Aut}}({{\mathbb{F}}}_n)$, the morphism $p$ descends to a rational map $$\label{period map} \mathcal{P} : U/{\operatorname{Aut}}({{\mathbb{F}}}_n) \dashrightarrow \mathcal{M}_{r, a}.$$ Here $U/{\operatorname{Aut}}({{\mathbb{F}}}_n)$ stands for a rational quotient, i.e., an arbitrary model of the invariant field ${{\mathbb{C}}}(U)^{{\operatorname{Aut}}({{\mathbb{F}}}_n)}$. In this subsection we shall explain a systematic method to calculate the degree of $\mathcal{P}$, which is fundamental in this article. It is parallel to the one in the involution case [@Ma2], though some points need to be modified. We use the Galois cover ${{\widetilde{\mathcal{M}}_{r,a}}}$ of ${{\mathcal{M}_{r,a}}}$ defined in .
Recall that an open set of ${{\widetilde{\mathcal{M}}_{r,a}}}$ parametrizes the equivalence classes of lattice-marked Eisenstein $K3$ surfaces $((X, G), j)$, where $j$ is a marking of the invariant lattice $L(X, G)$ by some reference lattice $L$. For the calculation of $\deg(\mathcal{P})$, we define a certain cover $\widetilde{U}$ of $U$ and construct a generically injective lift $${{\widetilde{\mathcal{P}}}} : \widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_n)_0 \dashrightarrow {{\widetilde{\mathcal{M}}_{r,a}}}$$ of $\mathcal{P}$, where ${\operatorname{Aut}}({{\mathbb{F}}}_n)_0$ is the identity component of ${\operatorname{Aut}}({{\mathbb{F}}}_n)$. We then compare the two projections $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_n)_0\dashrightarrow U/{\operatorname{Aut}}({{\mathbb{F}}}_n)$ and ${{\widetilde{\mathcal{M}}_{r,a}}}\dashrightarrow{{\mathcal{M}_{r,a}}}$. More precisely, - we define a cover $\widetilde{U} \to U$ parametrizing curves $B_u\in U$ endowed with reasonable labelings $\mu$ of the singularities, the branches at nodes and tacnodes, and the components. - Proposition \[gene L\] implies an appropriate definition of the reference lattice $L$. Then for each $(B_u, \mu)\in\widetilde{U}$, the labeling $\mu$ naturally induces a lattice-marking $j\colon L\to L(X, G)$ for the Eisenstein $K3$ surface $(X, G)=p(B_u)$. Considering the period of $((X, G), j)$ as defined in , we obtain a lift $\tilde{p}\colon\widetilde{U}\to{{\widetilde{\mathcal{M}}_{r,a}}}$ of $p$. - We check that $\tilde{p}$ is invariant under ${\operatorname{Aut}}({{\mathbb{F}}}_n)_0$, which acts trivially on $NS_{{{\mathbb{F}}}_n}$. Thus $\tilde{p}$ descends to a rational map ${{\widetilde{\mathcal{P}}}}\colon\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_n)_0 \dashrightarrow {{\widetilde{\mathcal{M}}_{r,a}}}$ which is a lift of $\mathcal{P}$. 
- We show that ${{\widetilde{\mathcal{P}}}}$ is generically injective by proving that the $\tilde{p}$-fibers are ${\operatorname{Aut}}({{\mathbb{F}}}_n)_0$-orbits. If two $(B_u, \mu), (B_{u'}, \mu')\in\widetilde{U}$ have the same $\tilde{p}$-period, we have a ${{\mathbb{Z}}}/3{{\mathbb{Z}}}$-equivariant Hodge isometry $\Phi \colon H^2(X', {{\mathbb{Z}}}) \to H^2(X, {{\mathbb{Z}}})$ preserving the lattice-markings for the associated Eisenstein $K3$ surfaces. Then $\Phi$ preserves the ample cones by Lemma \[ample\], so that we obtain an isomorphism $\varphi\colon X\to X'$ with $\varphi^{\ast}=\Phi$ by the Torelli theorem. The isomorphism $\varphi$ is ${{\mathbb{Z}}}/3{{\mathbb{Z}}}$-equivariant because $\varphi^{\ast}$ is so. Using Lemma \[recovery\], we see that $\varphi$ induces an automorphism $\psi$ of ${{\mathbb{F}}}_n$ with $\psi\circ f=f'\circ\varphi$, where $f\colon X\to{{\mathbb{F}}}_n$, $f'\colon X'\to{{\mathbb{F}}}_n$ are the natural projections. Then $\psi$ acts trivially on $NS_{{{\mathbb{F}}}_n}$ and maps $(B_u, \mu)$ to $(B_{u'}, \mu')$. This verifies our assertion. - Now assume that $U/{\operatorname{Aut}}({{\mathbb{F}}}_n)$ has the same dimension as ${{\mathcal{M}_{r,a}}}$. Since ${{\widetilde{\mathcal{M}}_{r,a}}}$ is irreducible, ${{\widetilde{\mathcal{P}}}}$ is then birational. Therefore $\deg(\mathcal{P})$ is equal to the degree of the projection ${{\widetilde{\mathcal{M}}_{r,a}}}\dashrightarrow{{\mathcal{M}_{r,a}}}$ divided by the degree of the projection $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_n)_0\dashrightarrow U/{\operatorname{Aut}}({{\mathbb{F}}}_n)$. The latter may be calculated by geometric consideration. We shall exhibit typical examples that illustrate how this recipe actually works and how one should define $\widetilde{U}$ and ${{\widetilde{\mathcal{P}}}}$, which was left ambiguous in the above explanation. In the rest of the article the recipe will be applied over and over. To avoid repetition we will leave the details of the arguments there; they can be worked out by referring to the examples below as models.
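In summary, when ${\dim}(U/{\operatorname{Aut}}({{\mathbb{F}}}_n))={\dim}\,{{\mathcal{M}_{r,a}}}$ and $a>0$, the recipe computes the degree as $$\deg(\mathcal{P}) \;=\; \frac{|{\rm O}(A_E)|/2}{\deg\bigl(\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_n)_0 \dashrightarrow U/{\operatorname{Aut}}({{\mathbb{F}}}_n)\bigr)}.$$ In the examples below both the numerator and the denominator are computed explicitly.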
\[ex1\] We consider curves on the Hirzebruch surface ${{\mathbb{F}}}_6$. Let $U\subset|L_{2,0}|$ be the locus of irreducible curves having three nodes and no other singularity. To $C\in U$ we associate the $-\frac{3}{2}K_{{{\mathbb{F}}}_6}$-curve $C+\Sigma$. By the triple cover construction this defines an Eisenstein $K3$ surface $(X, G)$ of invariant $(g, k)=(2, 1)$, and we obtain a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_6) \dashrightarrow \mathcal{M}_{8,3}$. Let $f\colon X\to{{\mathbb{F}}}_6$ be the natural projection. By Proposition \[gene L\] the invariant lattice $L(X, G)$ is generated by $f^{\ast}NS_{{{\mathbb{F}}}_6}\simeq U(3)$, three copies of the $A_2$-lattice obtained from the nodes of $C$, and the classes of fixed curves. In view of this, we shall define a reference lattice $L$ as follows. Let $M$ be the lattice $U(3)\oplus A_2^3$ with a natural basis $\{ u, v, e_{1+}, e_{1-},\cdots, e_{3-}\}$, where $\{ u, v\}$ is a basis of $U(3)$ with $(u, u)=(v, v)=0$ and $(u, v)=3$, and $\{ e_{i+}, e_{i-}\}$ is a root basis of the $i$-th $A_2$-lattice with $(e_{i+}, e_{i-})=1$. We define vectors $f_0, f_1\in M^{\vee}$ by $3f_0=2(u+3v)-3\sum_{i=1}^3(e_{i+}+e_{i-})$ and $3f_1=u-3v$. Then let $L$ be the overlattice $L=\langle M, f_0, f_1\rangle$, which is even and 3-elementary of invariant $(r, a)=(8, 3)$. In order to calculate $\deg(\mathcal{P})$, for $C\in U$ we first distinguish its three nodes, and then the two branches at each node. This is realized by an $\frak{S}_3\ltimes(\frak{S}_2)^3$-cover $\widetilde{U}\to U$. Explicitly, $\widetilde{U}$ may be defined as the locus in $U\times({{{\mathbb P}}}T{{\mathbb{F}}}_6)^6$ of those $(C, v_{1+}, v_{1-},\cdots, v_{3-})$ such that $v_{i+}$ and $v_{i-}$ are the two tangents of $C$ at a node, say $p_i$, and that ${\rm Sing}(C)=\{ p_1, p_2, p_3\}$. This labels the nodes and the branches at them compatibly.
Accordingly, we denote by $E_{i\pm}\subset X$ the $(-2)$-curve lying over the infinitely near point $v_{i\pm}$ of $p_i$. Then $E_{i+}$ and $E_{i-}$ form a root basis of the $A_2$-lattice over $p_i$. The fixed curve of $(X, G)$ is decomposed as $F_0+F_1$ such that $F_0$ (resp. $F_1$) is the component with $f(F_0)=C$ (resp. $f(F_1)=\Sigma$). Then we have a natural isometry $j\colon L\to L(X, G)$ by sending $j(a(u+3v)+bv)=f^{\ast}L_{a,b}$, $j(e_{i\pm})=[E_{i\pm}]$, and $j(f_i)=[F_i]$. In this way we associate a lattice-marked Eisenstein $K3$ surface $((X, G), j)$ to $(C, v_{i\pm})$. This defines a morphism $\tilde{p}\colon\widetilde{U}\to{{\widetilde{\mathcal{M}}}}_{8,3}$, which descends to a lift ${{\widetilde{\mathcal{P}}}}\colon\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_6) \dashrightarrow {{\widetilde{\mathcal{M}}}}_{8,3}$ of $\mathcal{P}$ because ${\operatorname{Aut}}({{\mathbb{F}}}_6)$ acts trivially on $NS_{{{\mathbb{F}}}_6}$. We shall show that the $\tilde{p}$-fibers are ${\operatorname{Aut}}({{\mathbb{F}}}_6)$-orbits. If $\tilde{p}(C, v_{i\pm})=\tilde{p}(C', v_{i\pm}')$ for two $(C, v_{i\pm}), (C', v_{i\pm}') \in \widetilde{U}$, there exists a ${{\mathbb{Z}}}/3{{\mathbb{Z}}}$-equivariant Hodge isometry $\Phi \colon H^2(X', {{\mathbb{Z}}}) \to H^2(X, {{\mathbb{Z}}})$ with $\Phi\circ j'=j$ for the associated $((X, G), j)$ and $((X', G'), j')$. By Lemma \[ample\] and the Torelli theorem we obtain an isomorphism $\varphi\colon X\to X'$ with $\varphi^{\ast}=\Phi$. The last equality implies that $\varphi^{\ast}G'=G$, $\varphi(E_{i\pm})=E_{i\pm}'$, and $\varphi^{\ast}((f')^{\ast}L_{a,b})=f^{\ast}L_{a,b}$, where $f$, $E_{i\pm}$ (resp. $f'$, $E_{i\pm}'$) are the objects constructed from $(C, v_{i\pm})$ (resp. $(C', v_{i\pm}')$) as above. Then by Lemma \[recovery\] we obtain an automorphism $\psi$ of ${{\mathbb{F}}}_6$ with $f'\circ\varphi=\psi\circ f$. This shows that $\psi(v_{i\pm})=v_{i\pm}'$. 
We also have $\psi(C)=C'$ because $\psi$ maps the branch curve of $f$ to that of $f'$. This proves our assertion, and hence ${{\widetilde{\mathcal{P}}}}$ is generically injective. Since ${\dim}(U/{\operatorname{Aut}}({{\mathbb{F}}}_6))=6$, ${{\widetilde{\mathcal{P}}}}$ is actually birational. Finally, we compare the two projections $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_6)\dashrightarrow U/{\operatorname{Aut}}({{\mathbb{F}}}_6)$ and ${{\widetilde{\mathcal{M}}}}_{8,3}\dashrightarrow\mathcal{M}_{8,3}$. The latter has degree $|{\rm O}(A_L)|/2$, where $|{\rm O}(A_L)|=|{\rm GO}(3, 3)|=2^3\cdot3!$ by [@Atlas]. On the other hand, the stabilizer in ${\operatorname{Aut}}({{\mathbb{F}}}_6)$ of a general $C\in U$ is generated by its hyperelliptic involution $\iota_C$ defined in . It follows that $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_6)\dashrightarrow U/{\operatorname{Aut}}({{\mathbb{F}}}_6)$ has degree $|\frak{S}_3\ltimes(\frak{S}_2)^3|/2$. Therefore $\mathcal{P}$ is birational. \[ex2\] We consider curves on ${{\mathbb{F}}}_2$. Let $U\subset|L_{2,0}|\times|L_{0,2}|$ be the locus of pairs $(C, D)$ where $C$ and $D=D_1+D_2$ are smooth and transverse to each other. We consider the six-nodal $-\frac{3}{2}K_{{{\mathbb{F}}}_2}$-curves $C+D+\Sigma$ to obtain Eisenstein $K3$ surfaces of invariant $(g, k)=(1, 3)$. This defines a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_2) \dashrightarrow \mathcal{M}_{14,2}$. We prepare a reference lattice $L$ as follows. Let $M$ be the lattice $U(3)\oplus A_2^6$ with a natural basis $\{ u, v, e_{1+}, e_{1-},\cdots, e_{6-}\}$ defined in the same way as Example \[ex1\]. We define vectors $f_0,\cdots, f_3\in M^{\vee}$ by $3f_0 = 2(u+v)-\sum_{i=1}^{4}(2e_{i-}+e_{i+})$, $3f_1 = v-\sum_{i=1}^3(2e_{(2i-1)+}+e_{(2i-1)-})$, $3f_2 = v-\sum_{i=1}^3(2e_{(2i)+}+e_{(2i)-})$, and $3f_3 = u-v-\sum_{i=5}^{6}(2e_{i-}+e_{i+})$. 
Then the overlattice $L=\langle M, f_0,\cdots, f_3\rangle$ is even and 3-elementary of invariant $(r, a)=(14, 2)$. For the calculation of $\deg(\mathcal{P})$, we first distinguish the two components of $D$, and then the intersection points of each component with $C$. Specifically, we consider the locus $\widetilde{U}\subset U\times({{\mathbb{F}}}_2)^4$ of those $(C, D, p_1,\cdots, p_4)$ such that $\{ p_i\}_{i=1}^4=C\cap D$ and that $p_1, p_3$ lie on the same component of $D$. We accordingly denote by $D_1$ (resp. $D_2$) the component of $D$ through $p_1, p_3$ (resp. $p_2, p_4$). Thus the components of $D$ and the four nodes $C\cap D$ are labelled compatibly. The projection $\widetilde{U}\to U$ is an $\frak{S}_2\ltimes(\frak{S}_2)^2$-covering. The remaining data for $C+D+\Sigma$ are labelled automatically: we denote $p_5=D_1\cap\Sigma$; $p_6=D_2\cap\Sigma$; $v_{i+}$ the tangent of $D$ at $p_i$; and $v_{i-}$ the tangent of $C+\Sigma$ at $p_i$. In this way we obtain a complete labeling for $C+D+\Sigma$. Then let $(X, G)=\mathcal{P}(C, D)$ and $f\colon X\to{{\mathbb{F}}}_2$ be the natural projection. We denote by $E_{i\pm}\subset X$ the $(-2)$-curve lying over the infinitely near point $v_{i\pm}$ of $p_i$. The fixed curve for $(X, G)$ is decomposed as $F_0+\cdots+F_3$ such that $f(F_0)=C$, $f(F_i)=D_i$ for $i=1, 2$, and $f(F_3)=\Sigma$. As before, we have an isometry $j\colon L\to L(X, G)$ by $j(a(u+v)+bv)=f^{\ast}L_{a,b}$, $j(e_{i\pm})=[E_{i\pm}]$, and $j(f_i)=[F_i]$. Considering the period of $((X, G), j)$, we obtain a lift ${{\widetilde{\mathcal{P}}}}\colon\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_2) \dashrightarrow {{\widetilde{\mathcal{M}}}}_{14,2}$ of $\mathcal{P}$. By an argument similar to that in Example \[ex1\], we see that ${{\widetilde{\mathcal{P}}}}$ is generically injective. Since ${\dim}(U/{\operatorname{Aut}}({{\mathbb{F}}}_2))=3$, ${{\widetilde{\mathcal{P}}}}$ is then birational. 
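The stated invariant of $L$ can be double-checked by direct computation. The following sketch (sign conventions assumed: $U(3)$ with Gram matrix $\left(\begin{smallmatrix}0&3\\3&0\end{smallmatrix}\right)$ and $A_2$ negative definite, as usual for $K3$ lattices) verifies that $\det M=-3^8$, that the glue classes of $f_0,\cdots,f_3$ span a subgroup $({{\mathbb{Z}}}/3{{\mathbb{Z}}})^3$ of $A_M$ (one relation comes from $f_0+f_1+f_2+f_3\in M$), so that $|\det L|=3^8/3^6=3^2$, consistent with $L$ being 3-elementary with $(r,a)=(14,2)$, and that the $f_i$ have even square:

```python
from fractions import Fraction

# Gram matrix of M = U(3) + A_2^6 in the basis (u, v, e_{1+}, e_{1-}, ..., e_{6-}).
# Sign conventions assumed: U(3) = [[0,3],[3,0]], A_2 negative definite.
n = 14
G = [[0] * n for _ in range(n)]
G[0][1] = G[1][0] = 3                       # U(3) on (u, v)
for i in range(6):                          # A_2 on (e_{i+}, e_{i-})
    a, b = 2 + 2 * i, 3 + 2 * i
    G[a][a] = G[b][b] = -2
    G[a][b] = G[b][a] = 1

ep = lambda i: 2 + 2 * (i - 1)              # index of e_{i+}
em = lambda i: 3 + 2 * (i - 1)              # index of e_{i-}

def glue(pairs):                            # integer coordinates of 3*f_j
    v = [0] * n
    for idx, c in pairs:
        v[idx] += c
    return v

F = [
    glue([(0, 2), (1, 2)] + [(em(i), -2) for i in (1, 2, 3, 4)]
                          + [(ep(i), -1) for i in (1, 2, 3, 4)]),   # 3 f_0
    glue([(1, 1)] + [(ep(i), -2) for i in (1, 3, 5)]
                  + [(em(i), -1) for i in (1, 3, 5)]),              # 3 f_1
    glue([(1, 1)] + [(ep(i), -2) for i in (2, 4, 6)]
                  + [(em(i), -1) for i in (2, 4, 6)]),              # 3 f_2
    glue([(0, 1), (1, -1)] + [(em(i), -2) for i in (5, 6)]
                           + [(ep(i), -1) for i in (5, 6)]),        # 3 f_3
]

def det(mat):                               # determinant via Gaussian elimination
    m = [[Fraction(x) for x in row] for row in mat]
    d = Fraction(1)
    for c in range(len(m)):
        piv = next(r for r in range(c, len(m)) if m[r][c] != 0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, len(m)):
            f = m[r][c] / m[c][c]
            for k in range(c, len(m)):
                m[r][k] -= f * m[c][k]
    return d

def rank_mod3(vs):                          # rank over F_3 of the glue matrix
    rows = [[x % 3 for x in r] for r in vs]
    rank, col = 0, 0
    while rank < len(rows) and col < n:
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, 3)
        rows[rank] = [(x * inv) % 3 for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(x - f * y) % 3 for x, y in zip(rows[r], rows[rank])]
        rank += 1
        col += 1
    return rank

dM = det(G)                                 # -3^8 = -6561
k = rank_mod3(F)                            # [L : M] = 3^k with k = 3
dL = dM / Fraction(3 ** (2 * k))            # det L = det M / [L:M]^2 = -9
norms = [sum(F[j][a] * G[a][b] * F[j][b] for a in range(n) for b in range(n))
         for j in range(4)]                 # (3 f_j)^2 = 9 f_j^2
print(dM, k, dL, [q // 9 for q in norms])
```

The squares $f_0^2=0$ and $f_1^2=f_2^2=f_3^2=-2$ computed here also match the geometry: $F_0$ lies over the genus-$1$ curve $C$, while $F_1, F_2, F_3$ lie over the rational curves $D_1, D_2, \Sigma$.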
The projection ${{\widetilde{\mathcal{M}}}}_{14,2}\dashrightarrow\mathcal{M}_{14,2}$ has degree $|{\rm O}(A_L)|/2$. Since $L$ is isometric to $U\oplus E_8\oplus A_2^2$, we have $|{\rm O}(A_L)|=2^3$ by a direct calculation. On the other hand, a general $(C, D)\in U$ has no nontrivial stabilizer in ${\operatorname{Aut}}({{\mathbb{F}}}_2)$ other than the hyperelliptic involution $\iota_C$ of $C$. Hence the projection $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_2) \dashrightarrow U/{\operatorname{Aut}}({{\mathbb{F}}}_2)$ has degree $4$, and so the map $\mathcal{P}$ is birational. \[ex3\] Our recipe for $-\frac{3}{2}K_{{{\mathbb{F}}}_n}$-curves may also be utilized for some general mixed branches, via a birational transformation. As an illustrative example, let $U\subset|{{\mathcal{O}_{{\mathbb P}^{2}}}}(4)|\times|{{\mathcal{O}_{{\mathbb P}^{2}}}}(1)|$ be the open set of pairs $(C, L)$ such that $C$ is a smooth quartic transverse to the line $L$. We regard $(C, L)$ as a mixed branch $C+\frac{1}{2}L$ on ${{{\mathbb P}}}^2$. By the resolution of $C+\frac{1}{2}L$, we obtain an Eisenstein $K3$ surface of invariant $(g, k)=(3, 0)$. This defines a period map $\mathcal{P}\colon U/{{{\rm PGL}}}_3\to\mathcal{M}_{4,3}$. To calculate $\deg(\mathcal{P})$, let $\widetilde{U}$ be the locus in $U\times({{{\mathbb P}}}^2)^4$ of those $(C, L, p_1,\cdots, p_4)$ such that $C\cap L=\{ p_i\}_{i=1}^4$. The space $\widetilde{U}$ is an $\frak{S}_4$-cover of $U$ parametrizing mixed branches $C+\frac{1}{2}L$ endowed with labelings of the four intersection points $C\cap L$. We want to show that $\mathcal{P}$ lifts to a birational map $\widetilde{U}/{{{\rm PGL}}}_3\to{{\widetilde{\mathcal{M}}}}_{4,3}$. For that we blow-up $p_1, p_2$ and then blow-down (the strict transform of) $L$. This transforms $C+\frac{1}{2}L$ to a one-nodal curve $C^{\dag}$ of bidegree $(3, 3)$ on $Q={{{\mathbb P}}}^1\times{{{\mathbb P}}}^1$. 
The two branches of $C^{\dag}$ at its node are distinguished by the labeling $(p_3, p_4)$, and the two rulings on $Q$ are distinguished by the labeling $(p_1, p_2)$. Specifically, we assign the $i$-th projection $Q\to{{{\mathbb P}}}^1$ to the pencil of lines through $p_i$. Conversely, given a general one-nodal $C^{\dag}\in|\mathcal{O}_Q(3, 3)|$, we blow-up $Q$ at $p={\rm Sing}(C^{\dag})$ and then blow-down the two ruling fibers $F_1, F_2$ through $p$ to obtain a smooth plane quartic $C$. Let $L\subset{{{\mathbb P}}}^2$ be the image of the $(-1)$-curve over $p$. Among the four points $C\cap L$, two correspond to the two branches of $C^{\dag}$ at $p$, and the remaining two are given by $F_i\cap C^{\dag}\backslash p$. Hence the four points $C\cap L$ are labelled once one distinguishes the two branches of $C^{\dag}$ and the two rulings on $Q$. Summing up, if $V\subset|{{\mathcal{O}}}_Q(3, 3)|$ is the locus of one-nodal curves and $\widetilde{V}\to V$ is the double cover labeling the branches at nodes, we have a natural birational identification $\widetilde{U}/{{{\rm PGL}}}_3\sim\widetilde{V}/({{{\rm PGL}}}_2)^2$. Here $({{{\rm PGL}}}_2)^2$ is the identity component of ${\operatorname{Aut}}(Q)$ preserving the two rulings. Now we may apply our recipe to $\widetilde{V}$ to obtain a birational map $\widetilde{V}/({{{\rm PGL}}}_2)^2\dashrightarrow{{\widetilde{\mathcal{M}}}}_{4,3}$. This gives the desired lift $\widetilde{U}/{{{\rm PGL}}}_3\to{{\widetilde{\mathcal{M}}}}_{4,3}$ of $\mathcal{P}$. The quotient $\widetilde{U}/{{{\rm PGL}}}_3$ is an $\frak{S}_4$-cover of $U/{{{\rm PGL}}}_3$, while the Galois group of ${{\widetilde{\mathcal{M}}}}_{4,3}\to\mathcal{M}_{4,3}$ is ${\rm O}(A_L)/\pm1$ for the lattice $L=U(3)\oplus A_2$. We have $|{\rm O}(A_L)|=|{\rm GO}(3, 3)|=2\cdot4!$ by [@Atlas], so that $|{\rm O}(A_L)/\pm1|=4!=|\frak{S}_4|$. Therefore $\mathcal{P}$ is birational. \[variant recipe\] In Example \[ex3\], we could also apply a variant of the recipe *directly* to the mixed branches $C+\frac{1}{2}L$. 
Indeed, a labeling of the four points $C\cap L$ defines a marking of the blown-up invariant lattice $L(\hat{X}, G)$, which induces that of $L(X, G)$. The lattice $L(X, G)$ encodes all the relevant geometric information: (i) the $G$-invariant rational map $f\colon X\dashrightarrow {{{\mathbb P}}}^2$ can be recovered from the line bundle $f^{\ast}{{\mathcal{O}_{{\mathbb P}^{2}}}}(1)$, which is free of degree $4$; and (ii) every point of $C\cap L$ is the image by $f$ of a $(-2)$-curve on $X$ preserved by $G$. A similar recipe is proposed in the involution case [@Ma2] for the degree calculation for the double cover construction. It utilizes geometric labelings for the branch curves as well, but does not require labeling the branches at double points. This is the main difference from the present recipe. The case $g=5$ {#sec:g=5} ============== We begin the proof of Theorem \[main\]. We first study the case $g=5$ using curves on the Hirzebruch surface ${{\mathbb{F}}}_6$. Let $U\subset|L_{2,0}|$ be the open set of smooth curves. By Lemma \[linear system\] (3), $U/{\operatorname{Aut}}({{\mathbb{F}}}_6)$ is identified with the moduli space $\mathcal{H}_5$ of hyperelliptic curves of genus $5$. For $C\in U$ we take the triple cover $X\to{{\mathbb{F}}}_6$ branched over the $-\frac{3}{2}K_{{{\mathbb{F}}}_6}$-curve $C+\Sigma$. This defines the period map $\mathcal{P}\colon\mathcal{H}_5\to\mathcal{M}_{2,0}$. Then $\mathcal{P}$ is injective because the fixed curve map for $\mathcal{M}_{2,0}$ gives a left inverse. Since ${\dim}\mathcal{H}_5={\dim}\mathcal{M}_{2,0}$, $\mathcal{P}$ is dominant (in fact an isomorphism). Katsylo [@Ka1] proved that $\mathcal{H}_5$ is rational. Summing up, \[rational (2,0)\] The space $\mathcal{M}_{2,0}$ is naturally birational to $\mathcal{H}_5$ and thus is rational. The case $g=4$ {#sec:g=4} ============== In this section we study the case $g=4$. 
Kondō [@Ko] proved that $\mathcal{M}_{2,2}$ is birational to the moduli space of genus $4$ curves, which was proved to be rational by Shepherd-Barron [@SB]. Here we study the space $\mathcal{M}_{4,1}$. We consider curves on ${{\mathbb{F}}}_6$. Let $U\subset|L_{2,0}|$ be the locus of irreducible one-nodal curves. For $C\in U$ we take the triple cover of ${{\mathbb{F}}}_6$ branched over the nodal $-\frac{3}{2}K_{{{\mathbb{F}}}_6}$-curve $C+\Sigma$. This defines a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_6)\dashrightarrow\mathcal{M}_{4,1}$. \[period map (4,1)\] The map $\mathcal{P}$ is birational. Let $\widetilde{U}\subset U\times({{{\mathbb P}}}T{{\mathbb{F}}}_6)^2$ be the locus of $(C, v_1, v_2)$ such that $\{ v_1, v_2\}$ are the tangents of $C$ at its node. The space $\widetilde{U}$ is a double cover of $U$ labelling the branches at the node of $C$. As in Example \[ex1\], we will see that $\mathcal{P}$ lifts to a birational map $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_6)\dashrightarrow{{\widetilde{\mathcal{M}}}}_{4,1}$. Since ${\rm O}(A_L)=\{\pm1\}$ for the invariant lattice $L=U\oplus A_2$, we actually have ${{\widetilde{\mathcal{M}}}}_{4,1}=\mathcal{M}_{4,1}$. On the other hand, we have $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_6)=U/{\operatorname{Aut}}({{\mathbb{F}}}_6)$ because the stabilizer in ${\operatorname{Aut}}({{\mathbb{F}}}_6)$ of every $C\in U$ contains its hyperelliptic involution $\iota_C$ defined in , which exchanges the two branches of $C$ at its node. Therefore $\mathcal{P}$ has degree $1$. \[rational (4,1)\] The quotient $U/{\operatorname{Aut}}({{\mathbb{F}}}_6)$ is rational. Therefore $\mathcal{M}_{4,1}$ is rational. We perform the elementary transformation at the node of $C\in U$, which transforms $C$ to a smooth curve $C^{\dag}\in|L_{2,0}|$ on ${{\mathbb{F}}}_5$. 
This induces the birational equivalence $$\label{ele trans (4,1)} U/{\operatorname{Aut}}({{\mathbb{F}}}_6) \sim (|L_{2,0}|\times|L_{0,1}|)/{\operatorname{Aut}}({{\mathbb{F}}}_5).$$ By the slice method (cf. [@Do]), the right side is birational to $|L_{2,0}|/G$ where $G\subset{\operatorname{Aut}}({{\mathbb{F}}}_5)$ is the stabilizer of a point of $|L_{0,1}|\simeq\Sigma$. Then $G$ is connected and solvable by Lemma \[linear system\] (1), and our assertion follows from Miyata’s theorem [@Miy]. By and Lemma \[linear system\] (3), we see that the fixed curve map for $\mathcal{M}_{4,1}$ is a dominant morphism onto the hyperelliptic locus $\mathcal{H}_4$ whose general fibers are birationally identified with the hyperelliptic pencils. The case $g=3$ {#sec:g=3} ============== The rationality of $\mathcal{M}_{4,3}$ -------------------------------------- Let $U\subset|{{\mathcal{O}_{{\mathbb P}^{2}}}}(4)|\times|{{\mathcal{O}_{{\mathbb P}^{2}}}}(1)|$ be the open set of pairs $(C, L)$ such that $C$ is smooth and transverse to $L$. We use the ${{\mathbb{Q}}}$-divisors $C+\frac{1}{2}L$ as mixed branches. The associated Eisenstein $K3$ surfaces have invariant $(g, k)=(3, 0)$. In Example \[ex3\] we showed that the induced period map $U/{{{\rm PGL}}}_3\dashrightarrow\mathcal{M}_{4,3}$ is birational. \[rational (4,3)\] The quotient $U/{{{\rm PGL}}}_3$ is rational. Therefore $\mathcal{M}_{4,3}$ is rational. Using the no-name lemma (cf. [@Do]) for the projection $|{{\mathcal{O}_{{\mathbb P}^{2}}}}(4)|\times|{{\mathcal{O}_{{\mathbb P}^{2}}}}(1)|\to|{{\mathcal{O}_{{\mathbb P}^{2}}}}(4)|$, we have $U/{{{\rm PGL}}}_3\sim{{{\mathbb P}}}^2\times(|{{\mathcal{O}_{{\mathbb P}^{2}}}}(4)|/{{{\rm PGL}}}_3)$. The quotient $|{{\mathcal{O}_{{\mathbb P}^{2}}}}(4)|/{{{\rm PGL}}}_3$ is rational by Katsylo [@Ka2]. 
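As a sanity check, the dimensions agree on both sides of the period maps considered so far. Assuming the standard count for the eigenperiod domain (the anti-invariant lattice of an Eisenstein $K3$ surface has signature $(2, 20-r)$, so that $\mathcal{M}_{r,a}$ is a ball quotient of dimension $(20-r)/2$; we use this only as a consistency check), we have for instance

```latex
\dim\mathcal{M}_{2,0}=9=\dim\mathcal{H}_5,\qquad
\dim\mathcal{M}_{4,3}=8
=\dim\bigl(|{{\mathcal{O}_{{\mathbb P}^{2}}}}(4)|\times|{{\mathcal{O}_{{\mathbb P}^{2}}}}(1)|\bigr)-\dim{{{\rm PGL}}}_3
=14+2-8,
```

and the dimensions $6$ and $3$ of the parameter spaces in Examples \[ex1\] and \[ex2\] equal $\dim\mathcal{M}_{8,3}$ and $\dim\mathcal{M}_{14,2}$ respectively.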
Since $|{{\mathcal{O}_{{\mathbb P}^{2}}}}(4)|/{{{\rm PGL}}}_3$ is canonically birational to the moduli space $\mathcal{M}_3$ of genus $3$ curves, the fixed curve map $\mathcal{M}_{4,3}\to\mathcal{M}_3$ is dominant with general fibers birationally identified with the canonical systems. The rationality of $\mathcal{M}_{6,2}$ -------------------------------------- We consider curves on ${{\mathbb{F}}}_6$. Let $U\subset|L_{2,0}|$ be the locus of irreducible two-nodal curves $C$. Taking the triple covers of ${{\mathbb{F}}}_6$ branched over $C+\Sigma$, we obtain a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_6)\dashrightarrow\mathcal{M}_{6,2}$. \[period map (6,2)\] The map $\mathcal{P}$ is birational. Let $\widetilde{U}\subset U\times({{{\mathbb P}}}T{{\mathbb{F}}}_6)^4$ be the locus of $(C, v_{11}, v_{12}, v_{21}, v_{22})$ such that $\{ v_{ij}\}_{i,j}$ are the tangents of $C$ at its nodes and that $v_{11}, v_{12}$ have the same base point. Via $\widetilde{U}$, the nodes and the branches at them are labelled compatibly. The projection $\widetilde{U}\to U$ is an $\frak{S}_2\ltimes(\frak{S}_2)^2$-covering. As in Example \[ex1\], $\mathcal{P}$ lifts to a birational map $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_6)\dashrightarrow{{\widetilde{\mathcal{M}}}}_{6,2}$. Since the invariant lattice $L$ is isometric to $U\oplus A_2^2$, we have $|{\rm O}(A_L)/\pm1|=4$. On the other hand, a general $C\in U$ has no stabilizer other than its hyperelliptic involution $\iota_C$, which exchanges the two tangents at each node. Thus the projection $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_6)\to U/{\operatorname{Aut}}({{\mathbb{F}}}_6)$ has degree $2^{-1}\cdot2^3=4$. Therefore $\mathcal{P}$ is birational. \[rational (6,2)\] The quotient $U/{\operatorname{Aut}}({{\mathbb{F}}}_6)$ is rational. Therefore $\mathcal{M}_{6,2}$ is rational. As in the proof of Proposition \[rational (4,1)\], we perform the elementary transformations at the nodes of $C\in U$. 
This induces the birational equivalence $$\label{ele trans (6,2)} U/{\operatorname{Aut}}({{\mathbb{F}}}_6) \sim (|L_{2,0}|\times|L_{0,2}|)/{\operatorname{Aut}}({{\mathbb{F}}}_4).$$ We consider the ${\operatorname{Aut}}({{\mathbb{F}}}_4)$-equivariant map $$\psi = (\varphi, {\rm id}) : |L_{2,0}|\times|L_{0,2}| \dashrightarrow |L_{1,0}|\times|L_{0,2}|, \quad (C, F_1+F_2) \mapsto (H, F_1+F_2),$$ where $\varphi$ is as defined in . By Lemma \[linear system\] (2), ${\operatorname{Aut}}({{\mathbb{F}}}_4)$ acts on $|L_{1,0}|\times|L_{0,2}|$ almost transitively. We normalize $H$ to be $H_0$ in §\[ssec: Hirze\], and $F_i$ to be $\{ x_i=0\}$. Then the stabilizer of $(H_0, F_1+F_2)$ is given by $$\label{stabilizer section+two fibers} G = \{ g_{\alpha,0}\}_{\alpha\in{{\mathbb{C}}}^{\times}} \times (\langle\iota\rangle\ltimes\{ h_{\beta}\}_{\beta\in {{\mathbb{C}}}^{\times}}) \simeq {{\mathbb{C}}}^{\times}\times(\frak{S}_2\ltimes{{\mathbb{C}}}^{\times}),$$ where $g_{\alpha,0}$, $\iota$, $h_{\beta}$ are as defined in –. On the other hand, we identify $H^0(L_{2,0})$ with the linear space $\{ \sum_{i=0}^2 f_i(x_3)y_3^{2-i}\}$ as in . Then, as explained at the end of §\[ssec: Hirze\], the fiber $\psi^{-1}(H_0, F_1+F_2)=\varphi^{-1}(H_0)$ is an open set of the linear subspace ${{{\mathbb P}}}V\subset|L_{2,0}|$ defined by $f_1\equiv0$. By the slice method for $\psi$, we have $$(|L_{2,0}|\times|L_{0,2}|)/{\operatorname{Aut}}({{\mathbb{F}}}_4) \sim {{{\mathbb P}}}V/G.$$ We expand the polynomial $f_2$ as $f_2(x_3)=\sum_{j=0}^{8}a_jx_3^j$. 
The generators $g_{\alpha,0}, \iota, h_{\beta}$ of $G$ act on $V$ by $$\label{action g_alpha} g_{\alpha,0} : y_3^2 \mapsto \alpha^{-2}y_3^2, \quad x_3^j \mapsto x_3^j,$$ $$\label{action h_beta} h_{\beta} : y_3^2 \mapsto \beta^{4}y_3^2, \quad x_3^j \mapsto \beta^{4-j}x_3^j,$$ $$\label{action iota} \iota : y_3^2 \mapsto y_3^2, \quad x_3^j \mapsto x_3^{8-j}.$$ Thus the $G$-representation $V$ is decomposed as $$V= {{\mathbb{C}}}y_3^2 \oplus \mathop{\bigoplus}_{i=0}^{4} W_i, \qquad W_i={{\mathbb{C}}}\langle x_3^{4-i}, x_3^{4+i} \rangle.$$ If we consider the subrepresentation $W=\oplus_{i=0}^{4}W_i$ and the subgroup $H=\langle\iota\rangle\ltimes\{ h_{\beta}\}_{\beta\in {{\mathbb{C}}}^{\times}}$, then ${{{\mathbb P}}}V/G$ is birational to ${{{\mathbb P}}}W/H$. We set $W'=W_1\oplus W_2$ and $W''=W_0\oplus W_3\oplus W_4$. The projection ${{{\mathbb P}}}W-{{{\mathbb P}}}W''\to{{{\mathbb P}}}W'$ from $W''$ is an $H$-linearized vector bundle. Since $H$ acts on ${{{\mathbb P}}}W'$ almost freely, we have ${{{\mathbb P}}}W/H \sim{{\mathbb{C}}}^5\times({{{\mathbb P}}}W'/H)$ by the no-name lemma. Then ${{{\mathbb P}}}W'/H$ is rational because it is $2$-dimensional. By and Lemma \[linear system\] (3), the fixed curve map for $\mathcal{M}_{6,2}$ is a dominant morphism to the hyperelliptic locus $\mathcal{H}_3$ whose general fibers are birationally identified with the canonical systems. The rationality of $\mathcal{M}_{8,1}$ -------------------------------------- We consider curves on ${{\mathbb{F}}}_4$. Let $U\subset|L_{2,0}|\times|L_{0,1}|$ be the open set of those $(C, F)$ such that $C$ is smooth and transverse to $F$. For $(C, F)\in U$ we take the triple cover of ${{\mathbb{F}}}_4$ branched over the nodal $-\frac{3}{2}K_{{{\mathbb{F}}}_4}$-curve $C+F+\Sigma$. This defines a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_4)\dashrightarrow\mathcal{M}_{8,1}$. \[period map (8,1)\] The map $\mathcal{P}$ is birational. 
We consider a double cover $\widetilde{U}\to U$ whose fiber over $(C, F)\in U$ corresponds to the labelings of the two nodes $C\cap F$ of $C+F+\Sigma$. The remaining node $F\cap\Sigma$ and the two tangents at each node are respectively distinguished by the irreducible decomposition of $C+F+\Sigma$. Thus we will obtain a birational lift $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_4)\dashrightarrow{{\widetilde{\mathcal{M}}}}_{8,1}$ of $\mathcal{P}$. Since ${{{\rm O}}}(A_L)=\{\pm1\}$ for the invariant lattice $L=U\oplus E_6$, we actually have ${{\widetilde{\mathcal{M}}}}_{8,1}=\mathcal{M}_{8,1}$. We also have $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_4)=U/{\operatorname{Aut}}({{\mathbb{F}}}_4)$ because the hyperelliptic involutions of $C$ give the covering transformation of $\widetilde{U}\to U$. \[rational (8,1)\] The quotient $U/{\operatorname{Aut}}({{\mathbb{F}}}_4)$ is rational. Therefore $\mathcal{M}_{8,1}$ is rational. This is a consequence of the slice method for the projection $|L_{2,0}|\times|L_{0,1}|\to|L_{0,1}|$, Lemma \[linear system\] $(1)$, and Miyata’s theorem [@Miy]. Via the fixed curve map, $\mathcal{M}_{8,1}$ becomes birationally a fibration over $\mathcal{H}_3$ whose general fibers are the hyperelliptic pencils. The latter can also be identified with the moduli of pointed hyperelliptic curves of genus $3$. The degeneration relation to the fixed curve map for $\mathcal{M}_{6,2}$ is visible by regarding the hyperelliptic pencils as natural conics in the canonical systems. The rationality of $\mathcal{M}_{10,0}$ --------------------------------------- We consider curves on ${{\mathbb{F}}}_4$. Let $U\subset|L_{2,0}|\times|L_{0,1}|$ be the locus of those $(C, F)$ such that $C$ is smooth and tangent to $F$. Taking the triple covers of ${{\mathbb{F}}}_4$ branched over $C+F+\Sigma$, we obtain a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_4)\dashrightarrow\mathcal{M}_{10,0}$. 
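In passing, the dimension count on ${{\mathbb{F}}}_4$ is consistent with both birationality statements. If $\dim|L_{2,0}|=(1+5+9)-1=14$ on ${{\mathbb{F}}}_4$ (as follows from the description $H^0(L_{2,0})=\{\sum_{i=0}^2 f_i(x_3)y_3^{2-i}\}$ with $\deg f_i=4i$) and $\dim{\operatorname{Aut}}({{\mathbb{F}}}_4)=9$, and if $\dim\mathcal{M}_{r,a}=(20-r)/2$ for the period domain (both standard facts, used here only as assumptions of this check), then

```latex
\dim U/{\operatorname{Aut}}({{\mathbb{F}}}_4)=14+1-9=6=\dim\mathcal{M}_{8,1}
\quad\text{(transverse case)},\qquad
14+1-1-9=5=\dim\mathcal{M}_{10,0}
\quad\text{(tangent case)},
```

where the extra $-1$ in the second count is the tangency condition on $(C, F)$.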
\[period map (10,0)\] The map $\mathcal{P}$ is birational. The singularities of $C+F+\Sigma$ are the node $F\cap\Sigma$ and the tacnode $F\cap C$, which are obviously distinguished. Also the two branches at each double point are distinguished by the irreducible decomposition of $C+F+\Sigma$. Thus we need no additional marking to obtain a birational lift $U/{\operatorname{Aut}}({{\mathbb{F}}}_4)\dashrightarrow{{\widetilde{\mathcal{M}}}}_{10,0}$ of $\mathcal{P}$. Since the invariant lattice $L\simeq U\oplus E_8$ is unimodular, we have ${{\widetilde{\mathcal{M}}}}_{10,0}=\mathcal{M}_{10,0}$. \[rational (10,0)\] The quotient $U/{\operatorname{Aut}}({{\mathbb{F}}}_4)$ is rational. Therefore $\mathcal{M}_{10,0}$ is rational. We have the ${\operatorname{Aut}}({{\mathbb{F}}}_4)$-equivariant morphism $\psi\colon U\to{{\mathbb{F}}}_4$, $(C, F)\mapsto C\cap F$. The $\psi$-fibers are open sets of sub-linear systems of $|L_{2,0}|$. Then our assertion follows from the slice method for $\psi$, Lemma \[stab of (pt, \*)\], and Miyata’s theorem. By Proposition \[period map (10,0)\], $\mathcal{M}_{10,0}$ is birational to the divisor of Weierstrass points in the moduli of pointed genus $3$ hyperelliptic curves, via the fixed curve map. The case $g=2$ {#sec:g=2} ============== The rationality of $\mathcal{M}_{6,4}$ -------------------------------------- We consider curves on ${{\mathbb{F}}}_3$. Let $U\subset|L_{2,0}|\times|L_{1,0}|$ be the open set of pairs $(C, H)$ such that $C$ and $H$ are smooth and transverse to each other. For $(C, H)\in U$ we associate the ${{\mathbb{Q}}}$-divisor $C+\frac{1}{2}(H+\Sigma)$ as a mixed branch. The associated Eisenstein $K3$ surface has invariant $(g, k)=(2, 0)$, and we obtain a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_3)\dashrightarrow\mathcal{M}_{6,4}$. \[period map (6,4)\] The map $\mathcal{P}$ is birational. We argue as in Example \[ex3\]. 
Let $\widetilde{U}\subset U\times({{\mathbb{F}}}_3)^6$ be the locus of $(C, H, p_1,\cdots, p_6)$ such that $C\cap H=\{ p_i\}_{i=1}^6$. The space $\widetilde{U}$ is an $\frak{S}_6$-cover of $U$ endowing $C+H+\Sigma$ with labelings of its six nodes. For $(C, \cdots, p_6)\in\widetilde{U}$ we make the following birational transformations successively: $(1)$ blow-up $p_1+p_2+p_3+p_4$; $(2)$ blow-down the strict transforms of the $\pi$-fibers through $p_3+p_4$; and $(3)$ blow-down the strict transforms of $H+\Sigma$. Then $C$ is transformed to a bidegree $(3, 3)$ curve $C^{\dag}$ on $Q={{{\mathbb P}}}^1\times{{{\mathbb P}}}^1$ having two nodes, say $q_1$ and $q_2$, which are respectively the blown-down points of $H$ and $\Sigma$. The $(-1)$-curves over $p_1$ and $p_2$ turn to complementary ruling fibers of $Q$, the $\pi$-fibers through $p_3$ and $p_4$ turn to the tangents of $C^{\dag}$ at $q_2$, and the points $p_5$ and $p_6$ turn to the tangents of $C^{\dag}$ at $q_1$. Thus $C^{\dag}$ is naturally endowed with a labeling of the nodes and tangents at them, and the two rulings of $Q$ are also distinguished (by $p_1$ and $p_2$). Remembering such labellings, one may reverse this construction. Therefore, if we denote by $\widetilde{V}$ the space of two-nodal curves of bidegree $(3, 3)$ on $Q$ endowed with suitable labelings of the nodes and tangents there, we have a natural birational equivalence $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_3)\sim\widetilde{V}/({{{\rm PGL}}}_2)^2$. Using the recipe in §\[ssec: recipe\], we then see that $\mathcal{P}$ lifts to a birational map $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_3)\dashrightarrow{{\widetilde{\mathcal{M}}}}_{6,4}$. Since ${\operatorname{Aut}}({{\mathbb{F}}}_3)$ acts on $U$ almost freely, $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_3)$ is an $\frak{S}_6$-cover of $U/{\operatorname{Aut}}({{\mathbb{F}}}_3)$. 
On the other hand, we have $|{\rm O}(A_L)|=|{\rm GO}^-(4, 3)|=2\cdot6!$ for the invariant lattice $L=U(3)\oplus A_2^2$. Hence the projection ${{\widetilde{\mathcal{M}}}}_{6,4}\to\mathcal{M}_{6,4}$ also has degree $6!$. \[rational (6,4)\] The quotient $U/{\operatorname{Aut}}({{\mathbb{F}}}_3)$ is rational. Therefore $\mathcal{M}_{6,4}$ is rational. We consider the ${\operatorname{Aut}}({{\mathbb{F}}}_3)$-equivariant map $$\psi : U \to |L_{1,0}|\times|L_{1,0}|, \qquad (C, H)\mapsto (H', H),$$ where $H'=\varphi(C)$ is as defined in . By Lemma \[linear system\] $(2)$, the group ${\operatorname{Aut}}({{\mathbb{F}}}_3)$ acts on $|L_{1,0}|\times|L_{1,0}|$ almost transitively, and the stabilizer $G$ of a general point $(H', H)$ is the permutation group of the three points $H\cap H'$. The fiber $\psi^{-1}(H', H)$ is an open set of a linear system ${{{\mathbb P}}}V\subset|L_{2,0}|$ as before, with $G$ acting on $V$ linearly. Hence we have $U/{\operatorname{Aut}}({{\mathbb{F}}}_3)\sim {{{\mathbb P}}}V/G$ by the slice method. It is well-known that ${{{\mathbb P}}}V'/\frak{S}_3$ is rational for any $\frak{S}_3$-representation $V'$. (Apply the no-name lemma [@Do] to the irreducible decomposition of $V'$.) The restriction of $|L_{1,0}|$ to a smooth $L_{2,0}$-curve $C$ gives $|3K_C|$. Thus the fixed curve map makes $\mathcal{M}_{6,4}$ birationally a fibration over $\mathcal{M}_2$ whose general fibers are the quotients of the tri-canonical systems by the hyperelliptic involutions. The rationality of $\mathcal{M}_{8,3}$ -------------------------------------- We consider curves on ${{\mathbb{F}}}_6$. Let $U\subset|L_{2,0}|$ be the locus of irreducible three-nodal curves. Associating to $C\in U$ the triple cover of ${{\mathbb{F}}}_6$ branched over $C+\Sigma$, we obtain a period map $U/{\operatorname{Aut}}({{\mathbb{F}}}_6)\dashrightarrow\mathcal{M}_{8,3}$. In Example \[ex1\] we proved that this map is birational. 
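The orders of orthogonal groups over ${{\mathbb{F}}}_3$ quoted from [@Atlas] in the degree computations above can be cross-checked by directly counting isometries of explicit Gram matrices. In the following sketch, that the chosen forms represent the relevant isometry classes (the odd-dimensional class for ${\rm GO}(3,3)$ and the minus type for ${\rm GO}^-(4,3)$) is our assumption:

```python
from itertools import product

def count_isometries(B, p=3):
    """Count matrices g over F_p with g^T B g = B (p odd, B symmetric
    nondegenerate), by choosing the images of the basis vectors in turn."""
    n = len(B)
    vecs = list(product(range(p), repeat=n))

    def inner(x, y):
        return sum(x[i] * B[i][j] * y[j] for i in range(n) for j in range(n)) % p

    def extend(chosen):
        i = len(chosen)
        if i == n:
            return 1  # the Gram relations + nondegeneracy force invertibility
        return sum(extend(chosen + [v]) for v in vecs
                   if inner(v, v) == B[i][i] % p
                   and all(inner(v, w) == B[i][j] % p
                           for j, w in enumerate(chosen)))

    return extend([])

# polar form of x^2 + y^2 + z^2 on F_3^3
B3 = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]
# polar form of the minus-type form x1*x2 + x3^2 + x4^2 on F_3^4
# (x3^2 + x4^2 is anisotropic over F_3 since -1 is a nonsquare)
B4 = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 2, 0], [0, 0, 0, 2]]

n3 = count_isometries(B3)   # |O(3,3)|   = 2^3 * 3! = 2 * 4! = 48
n4 = count_isometries(B4)   # |O^-(4,3)| = 2 * 6!           = 1440
print(n3, n4)
```

In particular $2^3\cdot3!=2\cdot4!=48$ reconciles the two ways the same order is written in Examples \[ex1\] and \[ex3\], and $|{\rm GO}^-(4,3)|/2=720=|\frak{S}_6|$ matches the two covering degrees in the proof above.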
\[rational (8,3)\] The quotient $U/{\operatorname{Aut}}({{\mathbb{F}}}_6)$ is rational. Therefore $\mathcal{M}_{8,3}$ is rational. We argue as in the proof of Proposition \[rational (6,2)\]. First we have a birational equivalence $$\label{ele trans (8,3)} U/{\operatorname{Aut}}({{\mathbb{F}}}_6) \sim (|L_{2,0}|\times|L_{0,3}|)/{\operatorname{Aut}}({{\mathbb{F}}}_3)$$ via the elementary transformations at the nodes of $C\in U$. Next we apply the slice method to the ${\operatorname{Aut}}({{\mathbb{F}}}_3)$-equivariant map $$\psi = (\varphi, {\rm id}) : |L_{2,0}|\times|L_{0,3}| \dashrightarrow |L_{1,0}|\times|L_{0,3}|,$$ where $\varphi$ is as defined in . By Lemma \[linear system\] (2), ${\operatorname{Aut}}({{\mathbb{F}}}_3)$ acts on $|L_{1,0}|\times|L_{0,3}|$ almost transitively. If we normalize $H\in|L_{1,0}|$ to be $H_0$ in §\[ssec: Hirze\], the stabilizer $G$ of $(H_0, \sum_iF_i)\in|L_{1,0}|\times|L_{0,3}|$ with $\sum_iF_i$ general is given by $$1 \to \{ g_{\alpha,0}\}_{\alpha\in{{\mathbb{C}}}^{\times}} \to G \to \frak{S}_3 \to 1,$$ where $g_{\alpha,0}$ is as defined in , and $\frak{S}_3$ is the stabilizer in ${\operatorname{Aut}}(\Sigma)$ of the three points $\sum_iF_i|_{\Sigma}$. On the other hand, we identify $H^0(L_{2,0})$ with the linear space $\{ \sum_{i=0}^2 f_i(x_3)y_3^{2-i}\}$ as in . Then the fiber $\psi^{-1}(H_0, \sum_iF_i)$ is an open set of the linear subspace ${{{\mathbb P}}}V\subset|L_{2,0}|$ defined by $f_1\equiv0$. Therefore we have $$(|L_{2,0}|\times|L_{0,3}|)/{\operatorname{Aut}}({{\mathbb{F}}}_3) \sim {{{\mathbb P}}}V/G.$$ The elements $g_{\alpha,0}\in G$ act on $V$ by the same equation as . Thus, if we consider the hyperplane $W=\{ f_0=0\}$ of $V$, we have the $G$-decomposition $V={{\mathbb{C}}}y_3^2\oplus W$, and hence ${{{\mathbb P}}}V/G\sim{{{\mathbb P}}}W/\frak{S}_3$. Since $\frak{S}_3$ acts on $W$ linearly, ${{{\mathbb P}}}W/\frak{S}_3$ is rational as is well-known. 
By , the general fibers of the fixed curve map $\mathcal{M}_{8,3}\to\mathcal{M}_2$ are birationally identified with the third symmetric products of the hyperelliptic pencils. The rationality of $\mathcal{M}_{10,2}$ --------------------------------------- We consider curves on ${{\mathbb{F}}}_4$. Let $U\subset|L_{2,0}|\times|L_{0,1}|$ be the locus of pairs $(C, F)$ such that $C$ is irreducible and one-nodal, and $F$ is transverse to $C$. Considering the $-\frac{3}{2}K_{{{\mathbb{F}}}_4}$-curves $C+F+\Sigma$, we obtain a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_4)\dashrightarrow\mathcal{M}_{10,2}$. \[period map (10,2)\] The map $\mathcal{P}$ is birational. We label the two tangents of $C$ at the node and the two points $C\cap F$ independently: this is realized by an $\frak{S}_2\times\frak{S}_2$-cover $\widetilde{U}\to U$. The two tangents at each point of $F\cap (C+\Sigma)$ are distinguished by the irreducible decomposition of $C+F+\Sigma$. Therefore we have a birational lift $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_4)\dashrightarrow{{\widetilde{\mathcal{M}}}}_{10,2}$ of $\mathcal{P}$ as before. Since the invariant lattice $L$ is isometric to $U\oplus E_6\oplus A_2$, we have ${\rm O}(A_L)\simeq({{\mathbb{Z}}}/2{{\mathbb{Z}}})^2$ so that ${{\widetilde{\mathcal{M}}}}_{10,2}$ is a double cover of $\mathcal{M}_{10,2}$. On the other hand, the hyperelliptic involution $\iota_C$ defined in exchanges the two tangents of $C$ and the two points $C\cap F$ simultaneously. Therefore $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_4)\dashrightarrow U/{\operatorname{Aut}}({{\mathbb{F}}}_4)$ is also a double covering. \[rational (10,2)\] The quotient $U/{\operatorname{Aut}}({{\mathbb{F}}}_4)$ is rational. Hence $\mathcal{M}_{10,2}$ is rational. 
We apply the slice method to the ${\operatorname{Aut}}({{\mathbb{F}}}_4)$-equivariant map $$U\to{{\mathbb{F}}}_4\times|L_{0,1}|, \qquad (C, F)\mapsto({\rm Sing}(C), F),$$ whose general fiber is an open set of a sub-linear system of $|L_{2,0}|$. Then we may use Lemma \[stab of (pt, \*)\] and Miyata’s theorem. Let $\mathcal{X}_2$ be the moduli space of pointed genus $2$ curves (whose general fibers over $\mathcal{M}_2$ are the hyperelliptic pencils). As before, we see that the fixed curve map makes $\mathcal{M}_{10,2}$ birational to the fibration $\mathcal{X}_2\times_{\mathcal{M}_2}\mathcal{X}_2$ over $\mathcal{M}_2$. The rationality of $\mathcal{M}_{12,1}$ --------------------------------------- We consider curves on ${{\mathbb{F}}}_4$. Let $U\subset|L_{2,0}|\times|L_{0,1}|$ be the locus of those $(C, F)$ such that $C$ is irreducible and one-nodal, and $F$ is tangent to $C$ at a smooth point. By considering the triple covers of ${{\mathbb{F}}}_4$ branched over $C+F+\Sigma$, we obtain a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_4)\dashrightarrow\mathcal{M}_{12,1}$. \[period map (12,1)\] The map $\mathcal{P}$ is birational. As before, we consider a double cover $\widetilde{U}\to U$ whose fiber over $(C, F)\in U$ corresponds to the labelings of the two branches of $C$ at the node. The remaining singularities of $C+F+\Sigma$ are the node $F\cap\Sigma$ and the tacnode $F\cap C$, where the branches of $C+F+\Sigma$ are distinguished by the irreducible decomposition of $C+F+\Sigma$. Following the recipe in §\[ssec: recipe\], we will obtain a birational lift $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_4)\dashrightarrow{{\widetilde{\mathcal{M}}}}_{12,1}$ of $\mathcal{P}$. Since the invariant lattice $L$ is isometric to $U\oplus E_8\oplus A_2$, we have ${\rm O}(A_L)\simeq\{\pm1\}$ so that ${{\widetilde{\mathcal{M}}}}_{12,1}=\mathcal{M}_{12,1}$. 
We also have $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_4)=U/{\operatorname{Aut}}({{\mathbb{F}}}_4)$ because the hyperelliptic involutions give the covering transformation of $\widetilde{U}\to U$. \[rational (12,1)\] The quotient $U/{\operatorname{Aut}}({{\mathbb{F}}}_4)$ is rational. Hence $\mathcal{M}_{12,1}$ is rational. Consider the ${\operatorname{Aut}}({{\mathbb{F}}}_4)$-equivariant map $$\psi : U\to{{\mathbb{F}}}_4\times{{\mathbb{F}}}_4, \qquad (C, F)\mapsto({\rm Sing}(C), C\cap F).$$ The $\psi$-fiber over a general $(p, q)$ is an open set of the linear system in $|L_{2,0}|$ of curves singular at $p$ and branched at $q$ over $\Sigma$. Then we apply the slice method for $\psi$, and use Lemma \[stab of (pt, \*)\] and Miyata’s theorem. Let $\mathcal{W}\subset\mathcal{X}_2$ be the divisor of Weierstrass points. Then the fixed curve map identifies $\mathcal{M}_{12,1}$ birationally with the fibration $\mathcal{X}_2\times_{\mathcal{M}_2}\mathcal{W}$ over $\mathcal{M}_2$. The case $g=1$ {#sec:g=1} ============== In this section we study the case $g=1$. The cases $k=0, 1$ are beyond the previous method, and we have to analyze the symmetry by the Weyl groups $W(E_6)$ and $W(F_4)$ respectively. When $k\geq4$, we have ${\dim}{{\mathcal{M}_{r,a}}}\leq2$ so that it is enough to give a unirational parameter space that dominates ${{\mathcal{M}_{r,a}}}$. But for future reference, we shall make the extra effort to present degree $1$ period maps. The rationality of $\mathcal{M}_{8,5}$ {#ssec:(8,5)} -------------------------------------- Let us first recall a few basic facts about cubic surfaces. Let $Y\subset{{{\mathbb P}}}^3$ be a smooth cubic surface. For each point $p\in Y$, the tangent plane section of $Y$ at $p$ gives the unique $-K_Y$-curve $C_p$ singular at $p$. When $C_p$ is irreducible, it is cuspidal at $p$ if and only if $p$ lies on the intersection of $Y$ with its Hessian quartic; otherwise $C_p$ is nodal at $p$. 
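For concreteness, the Hessian criterion can be illustrated on the Fermat cubic (which we use purely as an example):

```latex
F=x_0^3+x_1^3+x_2^3+x_3^3,\qquad
{\rm Hess}(F)=\det\Bigl(\frac{\partial^2F}{\partial x_i\partial x_j}\Bigr)
=\det\,{\rm diag}(6x_0,\ldots,6x_3)=1296\,x_0x_1x_2x_3,
```

so for $Y=\{F=0\}$ the points with cuspidal tangent plane section lie on $Y\cap\{x_0x_1x_2x_3=0\}$. The irreducibility assumption on $C_p$ is not vacuous: at the Eckardt point $p=(1:-1:0:0)$ the tangent plane is $\{x_0+x_1=0\}$ and the section degenerates to the line triple $\{x_2^3+x_3^3=0\}$.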
A *marking* of $Y$ is an isometry $I_{1,6}=\langle1\rangle\oplus\langle-1\rangle^6 \to NS_Y$ of lattices which maps $3h-\sum_{i=1}^{6}e_i$ to $-K_Y$, where $h, e_1,\cdots, e_6$ is a natural orthogonal basis of $I_{1,6}$. Such a marking realizes $Y$ as the blow-up of ${{{\mathbb P}}}^2$ at six general points $p_1,\cdots, p_6$, for which the pullback of ${{\mathcal{O}_{{\mathbb P}^{2}}}}(1)$ corresponds to $h$ and the $(-1)$-curve over $p_i$ corresponds to $e_i$. By that blow-down $Y\to{{{\mathbb P}}}^2$, the $-K_Y$-curves are mapped to plane cubics through $p_1,\cdots, p_6$. The stabilizer in ${{{\rm O}}}(I_{1,6})$ of the vector $3h-\sum_ie_i$ is the Weyl group $W(E_6)$. It acts transitively on the set of markings of $Y$. Equivalently, $W(E_6)$ transforms the ordered point set $(p_1,\cdots, p_6)$ to another one up to ${{{\rm PGL}}}_3$. To sum up, the moduli space ${{\widetilde{\mathcal{M}}_{{\rm cub}}}}$ of marked cubic surfaces is identified with the configuration space of six general points in ${{{\mathbb P}}}^2$, on which $W(E_6)$ acts with the quotient the moduli space ${{\mathcal{M}_{{\rm cub}}}}$ of smooth cubic surfaces. Now we consider the parameter space $U\subset|{{\mathcal{O}_{{\mathbb P}^{3}}}}(3)|\times{{{\mathbb P}}}^3\times|{{\mathcal{O}_{{\mathbb P}^{3}}}}(1)|$ of triplets $(Y, p, H)$ such that $({\rm i})$ $Y$ is a smooth cubic surface, $({\rm ii})$ $p\in Y$, $({\rm iii})$ the $-K_Y$-curve $C_p$ is irreducible and cuspidal at $p$, and $({\rm iv})$ the $-K_Y$-curve $C=H|_Y$ is smooth and tangent to $C_p$ at $p$. Note that $C$ and $C_p$ do not intersect outside $p$. To such a triplet $(Y, p, H)$ we associate the mixed branch $C+\frac{1}{2}C_p$ on $Y$. (Strictly speaking, this does not satisfy the singularity conditions for a mixed branch. But we can resolve $C+\frac{1}{2}C_p$ following the process to pass to a smooth mixed branch. Thus we shall abuse the terminology.)
By associating Eisenstein $K3$ surfaces as explained in §\[ssec:mixed branch\], we obtain a period map $\mathcal{P}\colon U/{{{\rm PGL}}}_4 \dashrightarrow \mathcal{M}_{8,5}$. \[birat (8,5)\] The period map $\mathcal{P}$ is birational. To calculate the degree of $\mathcal{P}$, we make use of markings of $Y$ in an auxiliary way (cf. [@Ma2] §12). Let $\mu$ be a marking of $Y$ and $\pi\colon Y\to{{{\mathbb P}}}^2$ the corresponding blow-down. The pair $(C, C_p)$ of $-K_Y$-curves is mapped to the pair $(B_1, B_2)=(\pi(C), \pi(C_p))$ of irreducible plane cubics such that $({\rm i})$ $B_2$ is cuspidal and $({\rm ii})$ $B_1$ is smooth, tangent to $B_2$ at its cusp, and transverse to $B_2$ elsewhere. The six intersection points $B_1\cap B_2\backslash{\rm Sing}(B_2)$ are the blown-up points of $\pi$ and hence ordered by $\mu$. This leads us to consider the space $\widetilde{U}\subset|{{\mathcal{O}_{{\mathbb P}^{2}}}}(3)|^2\times({{{\mathbb P}}}^2)^6$ of those $(B_1, B_2, p_1,\cdots, p_6)$ such that the cubics $B_1, B_2$ satisfy the conditions (i), (ii) above and that $B_1\cap B_2\backslash{\rm Sing}(B_2)=\{ p_1,\cdots, p_6\}$. Regarding ${{\widetilde{\mathcal{M}}_{{\rm cub}}}}$ as the configuration space of six points in ${{{\mathbb P}}}^2$, we may identify $\widetilde{U}/{{{\rm PGL}}}_3$ birationally with the moduli space of marked cubic surfaces $(Y, \mu)$ with mixed branches $C+\frac{1}{2}C_p$ such that $(Y, p, C)\in U$. Thus we have a quotient map $\widetilde{U}/{{{\rm PGL}}}_3 \to U/{{{\rm PGL}}}_4$ by $W(E_6)$, where $W(E_6)$ acts on $\widetilde{U}/{{{\rm PGL}}}_3$ by the Cremona transformations. The point is that the period map $\mathcal{P}$ lifts to a birational map ${{\widetilde{\mathcal{P}}}}\colon \widetilde{U}/{{{\rm PGL}}}_3 \dashrightarrow {{\widetilde{\mathcal{M}}}}_{8,5}$. 
Indeed, we may view $\widetilde{U}$ as parametrizing mixed branches $B_1+\frac{1}{2}B_2$ on ${{{\mathbb P}}}^2$ endowed with labelings of the six points $B_1\cap B_2\backslash{\rm Sing}(B_2)$. The composition $$\label{eqn: W(E_6)-symmetry} \widetilde{U}/{{{\rm PGL}}}_3 \to U/{{{\rm PGL}}}_4 \stackrel{\mathcal{P}}{\to} \mathcal{M}_{8,5}$$ associates Eisenstein $K3$ surfaces to those labelled mixed branches in the way of §\[ssec:mixed branch\]. Then we can follow the idea in Remark \[variant recipe\]. The ordering of the six points induces a marking of $L(X, G)$; conversely, from this marking we can recover the labelled mixed branch by looking at the $(-2)$-curves over the six points and the pullback of ${{\mathcal{O}_{{\mathbb P}^{2}}}}(1)$ to $X$ or $\hat{X}$. This enables us to construct a lift[^5] of the composition to ${{\widetilde{\mathcal{M}}}}_{8,5}$ and show that it has degree $1$. The Galois group of ${{\widetilde{\mathcal{M}}}}_{8,5}\to\mathcal{M}_{8,5}$ is a subgroup of ${\rm O}(A_E)/\pm1$. By [@Atlas] we have $|{\rm O}(A_E)|=|{\rm GO}(5, 3)|=2\cdot|W(E_6)|$. Comparing the two coverings ${{\widetilde{\mathcal{M}}}}_{8,5}\to\mathcal{M}_{8,5}$ and $\widetilde{U}/{{{\rm PGL}}}_3 \to U/{{{\rm PGL}}}_4$, we conclude that the Galois group is actually the whole ${\rm O}(A_E)/\pm1$ and that $\mathcal{P}$ has degree $1$. \[rational (8,5)\] The quotient $U/{{{\rm PGL}}}_4$ is rational. Therefore $\mathcal{M}_{8,5}$ is rational. Let $V\subset|{{\mathcal{O}_{{\mathbb P}^{3}}}}(3)|\times{{{\mathbb P}}}^3$ be the locus of pairs $(Y, p)$ such that $p$ lies on the intersection of $Y$ with its Hessian quartic. We have a natural projection $U\to V$, whose fiber over $(Y, p)$ is an open set of the pencil in $|{{\mathcal{O}_{{\mathbb P}^{3}}}}(1)|$ of planes that contain the tangent line of $C_p$ at $p$. Therefore $U$ is birationally the projectivization of an ${{{\rm SL}}}_4$-linearized vector bundle $\mathcal{E}$ over $V$.
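As a side remark on the order comparison in the proof above, the value of $|{\rm GO}(5,3)|$ can be checked against the classical order formula $|{\rm GO}_{2m+1}(q)|=2q^{m^2}\prod_{i=1}^{m}(q^{2i}-1)$ for odd orthogonal groups over finite fields; this numerical aside is not needed for the argument.

```latex
% Odd orthogonal group over F_q:  |GO_{2m+1}(q)| = 2 q^{m^2} \prod_{i=1}^{m}(q^{2i}-1)
% Here m = 2, q = 3, and |W(E_6)| = 51840.
\begin{align*}
|{\rm GO}(5,3)| &= 2\cdot 3^{4}\cdot(3^{2}-1)(3^{4}-1) = 2\cdot 81\cdot 8\cdot 80 = 103680,\\
2\cdot|W(E_6)| &= 2\cdot 51840 = 103680.
\end{align*}
```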
The element $\sqrt{-1}\in{{{\rm SL}}}_4$ acts on $\mathcal{E}$ by the scalar multiplication by $\sqrt{-1}$. We tensor $\mathcal{E}$ with the pullback $\mathcal{L}$ of the hyperplane bundle on $|{{\mathcal{O}_{{\mathbb P}^{3}}}}(3)|$, on which $\sqrt{-1}\in{{{\rm SL}}}_4$ acts by the multiplication by $-\sqrt{-1}$. Then $\mathcal{E}\otimes\mathcal{L}$ is ${{{\rm PGL}}}_4$-linearized, and ${{{\mathbb P}}}(\mathcal{E}\otimes\mathcal{L})$ is canonically identified with ${{{\mathbb P}}}\mathcal{E}$. Since ${{{\rm PGL}}}_4$ acts on $V$ almost freely, we may use the no-name lemma for $\mathcal{E}\otimes\mathcal{L}$ to obtain $$U/{{{\rm PGL}}}_4 \sim {{{\mathbb P}}}(\mathcal{E}\otimes\mathcal{L})/{{{\rm PGL}}}_4 \sim {{{\mathbb P}}}^1\times(V/{{{\rm PGL}}}_4).$$ Next let $W$ be the space of flags $p\in l\subset P\subset{{{\mathbb P}}}^3$, where $l$ is a line and $P$ is a plane. We have the ${{{\rm PGL}}}_4$-equivariant map $$\label{eqn: flag} \varphi : V\to W, \qquad (Y, p)\mapsto(p, T_pC_p, T_pY),$$ whose fiber is a linear subspace of $|{{\mathcal{O}_{{\mathbb P}^{3}}}}(3)|$. The group ${{{\rm SL}}}_4$ acts on $W$ transitively with a connected and solvable stabilizer. Therefore we may apply the slice method to $\varphi$ and then use Miyata’s theorem to see that $V/{{{\rm PGL}}}_4$ is rational. We can also use $C+C_p$ as $-2K_Y$-curves to obtain 2-elementary $K3$ surfaces with $(r, a, \delta)=(14, 6, 0)$ (cf.  [@Ma2]). This turns out to be a canonical construction for general members of their moduli space $\mathcal{M}_{14,6,0}$. Thus we have a geometric birational map $\mathcal{M}_{8,5}\dashrightarrow\mathcal{M}_{14,6,0}$ via $U/{{{\rm PGL}}}_4$. Since $\mathcal{M}_{14,6,0}$ is proven to be rational in [@Ma2] by another method, this offers a second proof of the rationality of $\mathcal{M}_{8,5}$. The rationality of $\mathcal{M}_{10,4}$ {#ssec:(10,4)} --------------------------------------- We study $\mathcal{M}_{10,4}$ using cubic surfaces with Eckardt points. 
In addition to the anti-canonical model and the blown-up ${{{\mathbb P}}}^2$ model as used in §\[ssec:(8,5)\], we will also use the Sylvester form of (general) smooth cubic surfaces $Y$: $$\label{Sylvester} \sum_{i=0}^{4} \lambda_iX_i^3 = \sum_{i=0}^{4} X_i = 0, \qquad \lambda_i\in{{\mathbb{C}}},$$ where $[X_0,\cdots,X_4]$ are the homogeneous coordinates of ${{{\mathbb P}}}^4$. This expression of $Y$ is unique up to the permutations of $\lambda_0,\cdots, \lambda_4$ and the scalar multiplications on $(\lambda_0,\cdots, \lambda_4)$. For details about Eckardt points, we refer to [@Se], [@Na] and [@D-G-K]. Let $Y\subset{{{\mathbb P}}}^3$ be a smooth cubic surface. A point $p\in Y$ is called an *Eckardt point* if the tangent plane section $C_p=T_pY|_Y$ is a union of three lines meeting at $p$. In the Sylvester form , $Y$ has such a point if and only if $\prod_{i<j}(\lambda_i-\lambda_j)=0$. For simplicity, we may assume $\lambda_3=\lambda_4$. Then $p=[0,0,0,1,-1]$ is an Eckardt point of $Y$. The surface $Y$ has an involution $\iota$, called *harmonic homology*, given by $X_3\leftrightarrow X_4$ and $X_i\mapsto X_i$ for $i\leq2$. If $Y$ is general in the locus $\{ \lambda_3=\lambda_4 \}$, it has no other nontrivial automorphism. \[harmonic homology\] The harmonic homology $\iota$ acts trivially on the linear space of anti-canonical forms vanishing at $p$. Let $H=\sum_{i=0}^{4}X_i$. Since $Y\subset\{ H=0\}$ is anti-canonically embedded, we may identify $H^0(-K_Y)$ with $H^0(\mathcal{O}_{{{{\mathbb P}}}^4}(1))/{{\mathbb{C}}}H$. If we express linear forms on ${{{\mathbb P}}}^4$ as $\sum_i\alpha_iX_i$, the space in question is identified with the hyperplane $\{ \alpha_3=\alpha_4\}\subset H^0(-K_Y)$. The assertion is then apparent.
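As a quick aside, one can at least verify directly that $p=[0,0,0,1,-1]$ lies on $Y$ when $\lambda_3=\lambda_4$, by substituting $p$ into both defining equations of the Sylvester form:

```latex
% Both equations of the Sylvester form vanish at p = [0,0,0,1,-1]:
\sum_{i=0}^{4} X_i\Big|_{p} = 1+(-1) = 0, \qquad
\sum_{i=0}^{4} \lambda_i X_i^{3}\Big|_{p}
  = \lambda_3\cdot 1^{3}+\lambda_4\cdot(-1)^{3} = \lambda_3-\lambda_4 = 0.
```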
Now we consider the locus $U\subset|{{\mathcal{O}_{{\mathbb P}^{3}}}}(3)|\times{{{\mathbb P}}}^3\times|{{\mathcal{O}_{{\mathbb P}^{3}}}}(1)|$ of triplets $(Y, p, H)$ such that (i) $Y$ is smooth, (ii) $p$ is an Eckardt point of $Y$, and (iii) the $-K_Y$-curve $C=H|_Y$ is smooth and passes through $p$. By using $C+\frac{1}{2}C_p$ as mixed branches, we obtain Eisenstein $K3$ surfaces with $(g, k)=(1, 1)$. We thus have a period map $\mathcal{P}\colon U/{{{\rm PGL}}}_4\to\mathcal{M}_{10,4}$. In order to show that $\mathcal{P}$ is birational, we describe $U/{{{\rm PGL}}}_4$ in a different way. Let ${{\mathcal{M}_{{\rm cub}}}}$, ${{\widetilde{\mathcal{M}}_{{\rm cub}}}}$ be the moduli spaces defined in §\[ssec:(8,5)\], and $\pi\colon{{\widetilde{\mathcal{M}}_{{\rm cub}}}}\to{{\mathcal{M}_{{\rm cub}}}}$ be the quotient map by the Weyl group $W(E_6)$. We have a universal family $f\colon\mathcal{Y}\to{{\widetilde{\mathcal{M}}_{{\rm cub}}}}$ of marked cubic surfaces, on which $W(E_6)$ acts equivariantly (cf. [@Na] §1, [@Ma2] §12.1). Let $\mathcal{E}\subset{{\mathcal{M}_{{\rm cub}}}}$ be the codimension $1$ locus of cubic surfaces having exactly one Eckardt point. Then $\pi^{-1}(\mathcal{E})$ has $45$ irreducible components which are permuted transitively by $W(E_6)$. Let $\widetilde{\mathcal{E}}\subset\pi^{-1}(\mathcal{E})$ be any one of these components and $G\subset W(E_6)$ the stabilizer of $\widetilde{\mathcal{E}}$. ($G$ is the Weyl group $W(F_4)$.) The center of $G$ is ${{\mathbb{Z}}}/2{{\mathbb{Z}}}$, which acts on $\widetilde{\mathcal{E}}$ trivially and on the restricted family $$f' = f|_{f^{-1}(\widetilde{\mathcal{E}})} : f^{-1}(\widetilde{\mathcal{E}}) \to \widetilde{\mathcal{E}}$$ by the harmonic homologies. We consider the sub-vector bundle $\mathcal{F}\subset f'_{\ast}K_{f'}^{-1}$ whose fibers are the linear spaces of anti-canonical forms vanishing at the Eckardt points. Note that $\mathcal{F}$ is $G$-linearized because $f_{\ast}K_{f}^{-1}$ is $W(E_6)$-linearized.
Forgetting the markings of cubic surfaces, we see that $U/{{{\rm PGL}}}_4$ is birationally identified with ${{{\mathbb P}}}\mathcal{F}/G$. Now we can prove \[birat (10,4)\] The period map $\mathcal{P}\colon {{{\mathbb P}}}\mathcal{F}/G\to\mathcal{M}_{10,4}$ is birational. We show that $\mathcal{P}$ lifts to a birational map ${{{\mathbb P}}}\mathcal{F}\to{{\widetilde{\mathcal{M}}}}_{10,4}$. Let $V\subset({{{\mathbb P}}}^2)^6$ be the locus of six distinct points $(p_1,\cdots,p_6)$ such that the three lines $L_i=\overline{p_ip_{i+3}}$ ($1\leq i\leq3$) intersect at one point, say $p$. Regarding ${{\widetilde{\mathcal{M}}_{{\rm cub}}}}$ as the configuration space of six points in ${{{\mathbb P}}}^2$, we have a natural birational identification $V/{{{\rm PGL}}}_3\sim \widetilde{\mathcal{E}}$. Therefore, if $\widetilde{U}\subset V\times|{{\mathcal{O}_{{\mathbb P}^{2}}}}(3)|$ is the locus of those $(p_1,\cdots,p_6, C)$ such that $C$ is smooth and passes through $p_1,\cdots,p_6, p$, then ${{{\mathbb P}}}\mathcal{F}$ is birationally identified with $\widetilde{U}/{{{\rm PGL}}}_3$. We may regard $\widetilde{U}$ as parametrizing mixed branches $C+\frac{1}{2}\sum_iL_i$ endowed with labelings of the six intersection points $C\cap\sum_iL_i\backslash p$ that are compatible with the irreducible decomposition of $\sum_iL_i$. Then the composition $$\widetilde{U}/{{{\rm PGL}}}_3 \sim {{{\mathbb P}}}\mathcal{F} \to {{{\mathbb P}}}\mathcal{F}/G \stackrel{\mathcal{P}}{\to} \mathcal{M}_{10,4}$$ maps such a labelled mixed branch $C+\frac{1}{2}\sum_iL_i$ to the Eisenstein $K3$ surface associated as in §\[ssec:mixed branch\]. Hence by arguing as in the proof of Proposition \[birat (8,5)\], we will obtain a desired birational lift $\widetilde{U}/{{{\rm PGL}}}_3\to{{\widetilde{\mathcal{M}}}}_{10,4}$. The degree of ${{\widetilde{\mathcal{M}}}}_{10,4}\to\mathcal{M}_{10,4}$ divides $|{{{\rm O}}}(A_E)|/2=|{\rm GO}^+(4, 3)|/2=(4!)^2$ for the Eisenstein lattice $E=U^2\oplus A_2^4$. 
On the other hand, ${{{\mathbb P}}}\mathcal{F}\to{{{\mathbb P}}}\mathcal{F}/G$ has degree $|G|/2=|W(E_6)|/90=(4!)^2$ because the center of $G$ acts on ${{{\mathbb P}}}\mathcal{F}$ trivially by Lemma \[harmonic homology\]. Comparing the two projections ${{\widetilde{\mathcal{M}}}}_{10,4}\to\mathcal{M}_{10,4}$ and ${{{\mathbb P}}}\mathcal{F}\to{{{\mathbb P}}}\mathcal{F}/G$, we find that $\mathcal{P}$ has degree $1$ and that the Galois group of the former is ${{{\rm O}}}(A_E)/\pm1$. \[rational (10,4)\] The quotient ${{{\mathbb P}}}\mathcal{F}/G$ is rational. Therefore $\mathcal{M}_{10,4}$ is rational. By Lemma \[harmonic homology\], the center of $G$ acts on $\mathcal{F}$ trivially. Replacing $G$ by its central quotient and applying the no-name lemma to the $G$-linearized vector bundle $\mathcal{F}\to\widetilde{\mathcal{E}}$, we have $${{{\mathbb P}}}\mathcal{F}/G \sim {{{\mathbb P}}}^2\times(\widetilde{\mathcal{E}}/G) \sim {{{\mathbb P}}}^2\times\mathcal{E}.$$ By the Sylvester form , the Eckardt locus $\mathcal{E}$ is birational to ${{{\mathbb P}}}W/\frak{S}_3$ where $W=\{ \lambda_3=\lambda_4\}\subset{{\mathbb{C}}}^5$ and $\frak{S}_3$ acts on $W$ by the permutations of $(\lambda_0, \lambda_1, \lambda_2)$. Therefore $\mathcal{E}$ is rational. The rationality of $\mathcal{M}_{12,3}$ --------------------------------------- We consider curves on ${{\mathbb{F}}}_1$. Let $V\subset|L_{2,2}|$ be the locus of curves $C$ which have a cusp at $C\cap\Sigma$ and are smooth elsewhere. ($C$ is the blow-up of a plane quartic with a ramphoid cusp.) Let $U\subset V\times|L_{1,0}|$ be the open set of pairs $(C, H)$ such that $H$ is smooth and transverse to $C$. For $(C, H)\in U$ we consider the mixed branch $C+\frac{1}{2}(H+\Sigma)$. The associated Eisenstein $K3$ surface has invariant $(g, k)=(1, 2)$. Hence we obtain a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_1)\dashrightarrow\mathcal{M}_{12,3}$.
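As a numerical aside to the degree comparison in the proof of Proposition \[birat (10,4)\] above: both counts indeed equal $(4!)^2=576$, as one checks from the classical order formula $|{\rm GO}^{+}_{2m}(q)|=2q^{m(m-1)}(q^{m}-1)\prod_{i=1}^{m-1}(q^{2i}-1)$ for plus-type even orthogonal groups.

```latex
% Plus-type even orthogonal group:
%   |GO^+_{2m}(q)| = 2 q^{m(m-1)} (q^m - 1) \prod_{i=1}^{m-1}(q^{2i}-1),  here m = 2, q = 3.
\begin{align*}
\tfrac{1}{2}|{\rm GO}^{+}(4,3)| &= \tfrac{1}{2}\cdot 2\cdot 3^{2}\cdot(3^{2}-1)(3^{2}-1)
  = 9\cdot 8\cdot 8 = 576 = (4!)^2,\\
|W(E_6)|/90 &= 51840/90 = 576.
\end{align*}
```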
\[period map (12,3)\] The period map $\mathcal{P}$ is birational. This is analogous to Example \[ex3\] and Proposition \[period map (6,4)\]: we label the four nodes $C\cap H$ by an $\frak{S}_4$-cover $\widetilde{U}\to U$. By blowing-up the “first” and “second” nodes and then blowing-down the strict transforms of $H$ and $\Sigma$, the curve $C$ is transformed to a bidegree $(3, 3)$ curve $C^{\dag}$ on ${{{\mathbb P}}}^1\times{{{\mathbb P}}}^1$ which has a node and a ramphoid cusp. The given labeling of $C\cap H$ induces that of the tangents of $C^{\dag}$ at the node, and of the two rulings of ${{{\mathbb P}}}^1\times{{{\mathbb P}}}^1$. Then we see as in Example \[ex3\] that $\mathcal{P}$ lifts to a birational map $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_1)\dashrightarrow{{\widetilde{\mathcal{M}}}}_{12,3}$. The group ${\operatorname{Aut}}({{\mathbb{F}}}_1)$ acts on $U$ almost freely, so that $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_1)$ is an $\frak{S}_4$-cover of $U/{\operatorname{Aut}}({{\mathbb{F}}}_1)$. On the other hand, we have ${\rm O}(A_L)\simeq{\rm GO}(3, 3)$ for the invariant lattice $L=U\oplus E_6\oplus A_2^2$. Then $|{\rm O}(A_L)|=2\cdot4!$ by [@Atlas], and hence $\mathcal{P}$ has degree $1$. \[rational (12,3)\] The quotient $U/{\operatorname{Aut}}({{\mathbb{F}}}_1)$ is rational. Therefore $\mathcal{M}_{12,3}$ is rational. We first apply the slice method to the ${\operatorname{Aut}}({{\mathbb{F}}}_1)$-equivariant map $$\psi : U \to \Sigma \times |L_{1,0}|, \qquad (C, H)\mapsto({\rm Sing}(C), H).$$ By Lemma \[linear system\] $(2)$, ${\operatorname{Aut}}({{\mathbb{F}}}_1)$ acts on $\Sigma \times |L_{1,0}|$ almost transitively.
If we normalize $H$ to be $H_0$, and ${\rm Sing}(C)$ to be the point $p_0=(0, 0)$ in $U_1$, then the stabilizer $G_1$ of $(p_0, H_0)\in\Sigma \times |L_{1,0}|$ is $$G_1 = \{ g_{\alpha,0}\}_{\alpha\in{{\mathbb{C}}}^{\times}} \times ( \{ h_{\beta}\}_{\beta\in{{\mathbb{C}}}^{\times}}\ltimes\{ i_{\lambda}\}_{\lambda\in{{\mathbb{C}}}}) \simeq {{\mathbb{C}}}^{\times} \times ({{\mathbb{C}}}^{\times}\ltimes{{\mathbb{C}}}),$$ where $g_{\alpha,0}$, $h_{\beta}$, $i_{\lambda}$ are as defined in –. The fiber $\psi^{-1}(p_0, H_0)$ is regarded as a (nonlinear) sublocus of $|L_{2,2}|$. Then we have $U/{\operatorname{Aut}}({{\mathbb{F}}}_1)\sim \psi^{-1}(p_0, H_0)/G_1$. Next we apply the slice method to the $G_1$-equivariant map $$\phi : \psi^{-1}(p_0, H_0)\to{{{\mathbb P}}}T_{p_0}{{\mathbb{F}}}_1, \quad C\mapsto T_{p_0}C,$$ where $T_{p_0}C$ denotes the unique tangent of $C$ at $p_0$. A general $\phi$-fiber is an open set of a *linear* system ${{{\mathbb P}}}V\subset|L_{2,2}|$. Since $G_1$ acts on ${{{\mathbb P}}}T_{p_0}{{\mathbb{F}}}_1$ almost transitively, we have $\psi^{-1}(p_0, H_0)/G_1 \sim {{{\mathbb P}}}V/G_2$ for the stabilizer $G_2\subset G_1$ of a general point of ${{{\mathbb P}}}T_{p_0}{{\mathbb{F}}}_1$. If we use $y_1^{-1}x_1$ as the inhomogeneous coordinate of ${{{\mathbb P}}}T_{p_0}{{\mathbb{F}}}_1$, then $g_{\alpha,0}$ acts on ${{{\mathbb P}}}T_{p_0}{{\mathbb{F}}}_1$ by $\alpha$, $h_{\beta}$ by $\beta$, and $i_{\lambda}$ trivially. This shows that $G_2$ is isomorphic to ${{\mathbb{C}}}^{\times}\ltimes{{\mathbb{C}}}$. Hence ${{{\mathbb P}}}V/G_2$ is rational by Miyata’s theorem. The rationality of $\mathcal{M}_{14,2}$ --------------------------------------- We consider curves on ${{\mathbb{F}}}_2$. Let $U\subset|L_{2,0}|\times|L_{0,2}|$ be the open set of pairs $(C, F_1+F_2)$ such that $C$ and $F_1+F_2$ are smooth and transverse to each other.
We associate the $-\frac{3}{2}K_{{{\mathbb{F}}}_2}$-branch $C+F_1+F_2+\Sigma$ to obtain a period map $U/{\operatorname{Aut}}({{\mathbb{F}}}_2)\dashrightarrow\mathcal{M}_{14,2}$. In Example \[ex2\] we proved that this map is birational. \[period map (14,2)\] The quotient $U/{\operatorname{Aut}}({{\mathbb{F}}}_2)$ is rational. Hence $\mathcal{M}_{14,2}$ is rational. As in the proof of Proposition \[rational (6,2)\], we apply the slice method to the ${\operatorname{Aut}}({{\mathbb{F}}}_2)$-equivariant map $$\psi = (\varphi, {\rm id}) : U \to |L_{1,0}|\times|L_{0,2}|, \quad (C, F_1+F_2)\mapsto(H, F_1+F_2),$$ where $\varphi$ is as defined in . By Lemma \[linear system\] (2), ${\operatorname{Aut}}({{\mathbb{F}}}_2)$ acts on $|L_{1,0}|\times|L_{0,2}|$ almost transitively. If we normalize $H=H_0$ and $F_i=\{ x_i=0\}$, the stabilizer $G$ of $(H_0, F_1+F_2)$ is described by the same equation as . On the other hand, if we identify $H^0(L_{2,0})$ with the linear space $\{ \sum_{i=0}^2 f_i(x_3)y_3^{2-i}\}$ as in , the fiber $\psi^{-1}(H_0, F_1+F_2)$ is an open set of the linear subspace ${{{\mathbb P}}}V\subset|L_{2,0}|$ defined by $f_1\equiv0$. Therefore we have $$U/{\operatorname{Aut}}({{\mathbb{F}}}_2) \sim {{{\mathbb P}}}V/G.$$ Let $W\subset V$ be the hyperplane $\{ f_0=0\}$. As in the proof of Proposition \[rational (6,2)\], we see that the $G$-representation $V$ decomposes as $V={{\mathbb{C}}}y_3^2\oplus W$. If we consider the $G$-representation $W'=({{\mathbb{C}}}y_3^2)^{\vee}\otimes W$, then ${{{\mathbb P}}}V/G$ is birational to $W'/G$. Since $W'/G\sim{{\mathbb{C}}}^{\times}\times({{{\mathbb P}}}W'/G)$ and ${{{\mathbb P}}}W'/G$ is $2$-dimensional, $W'/G$ is rational. The rationality of $\mathcal{M}_{16,1}$ --------------------------------------- We consider curves on ${{\mathbb{F}}}_2$. Let $U\subset|L_{2,0}|\times|L_{0,1}|^2$ be the locus of triplets $(C, F_1, F_2)$ such that $C$ is smooth, $F_1$ is transverse to $C$, and $F_2$ is tangent to $C$. 
Considering the $-\frac{3}{2}K_{{{\mathbb{F}}}_2}$-branches $C+F_1+F_2+\Sigma$, we have a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_2)\dashrightarrow\mathcal{M}_{16,1}$. \[period map (16,1)\] The map $\mathcal{P}$ is birational. We consider a double cover $\widetilde{U}\to U$ to label the two points $C\cap F_1$. The remaining data for $C+F_1+F_2+\Sigma$ are a priori labelled: $F_1$ and $F_2$ are distinguished by their intersection with $C$, and the two branches at each (tac)node of $C+F_1+F_2+\Sigma$ are distinguished by the irreducible decomposition of $C+F_1+F_2+\Sigma$. Thus we will obtain a birational lift $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_2)\dashrightarrow{{\widetilde{\mathcal{M}}}}_{16,1}$ of $\mathcal{P}$. We have $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_2)=U/{\operatorname{Aut}}({{\mathbb{F}}}_2)$ due to the hyperelliptic involutions of $C$. We also have ${{\widetilde{\mathcal{M}}}}_{16,1}=\mathcal{M}_{16,1}$ because ${{{\rm O}}}(A_L)=\{ \pm1\}$ for the invariant lattice $L=U\oplus E_6\oplus E_8$. Since $U$ is rational and $\mathcal{M}_{16,1}$ has dimension $2$, we see that \[rational (16,1)\] The space $\mathcal{M}_{16,1}$ is rational. By associating to $(C, F_1, F_2)$ the elliptic curve $(C, F_2\cap C)$ with a point $p\in F_1\cap C$, we obtain a birational map from $\mathcal{M}_{16,1}$ to the Kummer modular surface for ${{{\rm SL}}}_2({{\mathbb{Z}}})$, whose projection to the modular curve gives the fixed curve map. The rationality of $\mathcal{M}_{18,0}$ --------------------------------------- We consider curves on ${{\mathbb{F}}}_2$. Let $U\subset|L_{2,0}|\times|L_{0,2}|$ be the locus of pairs $(C, F_1+F_2)$ such that $C$ is smooth, $F_1\ne F_2$, and both $F_i$ are tangent to $C$. We obtain a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_2)\dashrightarrow\mathcal{M}_{18,0}$ by considering the $-\frac{3}{2}K_{{{\mathbb{F}}}_2}$-branches $C+F_1+F_2+\Sigma$.
\[period map (18,0)\] The map $\mathcal{P}$ is birational. As before, we distinguish $F_1$ and $F_2$ by a double cover $\widetilde{U}\to U$ to obtain a birational lift $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_2)\dashrightarrow{{\widetilde{\mathcal{M}}}}_{18,0}$ of $\mathcal{P}$. Since the invariant lattice $L=U\oplus E_8^2$ is unimodular, ${{\widetilde{\mathcal{M}}}}_{18,0}$ coincides with $\mathcal{M}_{18,0}$. On the other hand, for each $(C, F_1+F_2)\in U$, we have an automorphism of ${{\mathbb{F}}}_2$ preserving $C$ and exchanging $F_1$ and $F_2$ (which is an extension of a translation automorphism of $C$). Hence we also have $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_2)=U/{\operatorname{Aut}}({{\mathbb{F}}}_2)$. Since $U$ is rational and ${\dim}\mathcal{M}_{18,0}=1$, we have \[rational (18,0)\] The space $\mathcal{M}_{18,0}$ is rational. The two points $p_1=F_1\cap C$, $p_2=F_2\cap C$ on the elliptic curve $C$ satisfy $2(p_1-p_2)\sim0$. This shows that $\mathcal{M}_{18,0}$ is naturally birational to the elliptic modular curve for $\Gamma_0(2)$ through the fixed curve map. The case $g=0$ {#sec:g=0} ============== In this section we study the case $g=0$. The space $\mathcal{M}_{8,7}$ is unirational by the constructions in [@A-S] and [@A-S-T], where a complete intersection model and an elliptic fibration model for the generic member are given respectively. Similarly, $\mathcal{M}_{10,6}$ is unirational by the quartic model given in [@A-S]. Here we shall present another triple cover construction for those two. The space $\mathcal{M}_{12,5}$ is birational to the moduli space of cubic surfaces ([@A-C-T], [@D-G-K]), which is rational as is well-known. Below we (re)prove that ${{\mathcal{M}_{r,a}}}$ is unirational for $k\leq0$, and rational for $k\geq2$. As in §\[sec:g=1\], even when ${\dim}{{\mathcal{M}_{r,a}}}\leq2$, we make a detour to present birational period maps.
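For later use in this section's degree computations (e.g. the count $|{\rm GO}^{-}(4,3)|/2=6!$ appearing in the proof for $\mathcal{M}_{14,4}$ below), we note as a numerical aside that this order follows from the classical formula $|{\rm GO}^{-}_{2m}(q)|=2q^{m(m-1)}(q^{m}+1)\prod_{i=1}^{m-1}(q^{2i}-1)$ for minus-type even orthogonal groups.

```latex
% Minus-type even orthogonal group:
%   |GO^-_{2m}(q)| = 2 q^{m(m-1)} (q^m + 1) \prod_{i=1}^{m-1}(q^{2i}-1),  here m = 2, q = 3.
\tfrac{1}{2}|{\rm GO}^{-}(4,3)|
  = \tfrac{1}{2}\cdot 2\cdot 3^{2}\cdot(3^{2}+1)(3^{2}-1)
  = 9\cdot 10\cdot 8 = 720 = 6!.
```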
The unirationality of $\mathcal{M}_{8,7}$ {#ssec:(8,7)} ----------------------------------------- We construct general members of $\mathcal{M}_{8,7}$ using certain triangles of anti-canonical curves on quadric del Pezzo surfaces. To begin with, let $U\subset|{{\mathcal{O}_{{\mathbb P}^{2}}}}(4)|\times({{{\mathbb P}}}^2)^3$ be the locus of quadruplets $(C, p_1, p_2, p_3)$ such that $({\rm i})$ $C$ is a smooth quartic, $({\rm ii})$ $p_i\in C$, and $({\rm iii})$ if $L_i$ is the tangent line of $C$ at $p_i$, then $L_1$ (resp. $L_2$, $L_3$) passes through $p_2$ (resp. $p_3$, $p_1$). The space $U$ is rational of dimension $14$. Indeed, if we use the homogeneous coordinate of ${{{\mathbb P}}}^2$ to normalize $p_1=[0,0,1]$, $p_2=[0,1,0]$, $p_3=[1,0,0]$ and express quartic forms as $\sum_{i,j,k}a_{ijk}X^iY^jZ^k$ with $i+j+k=4$, then the conditions $({\rm ii})$ and $({\rm iii})$ are given by $$a_{400}=a_{301}=0, \quad a_{040}=a_{130}=0, \quad a_{004}=a_{013}=0.$$ For $(C, p_1, p_2, p_3)\in U$, the double cover $\pi\colon Y\to{{{\mathbb P}}}^2$ branched over $C$ is a quadric del Pezzo surface. The curves $D_i=\pi^{\ast}L_i$ are nodal $-K_Y$-curves such that $$D_1\cap D_2={\rm Sing}(D_2), \quad D_2\cap D_3={\rm Sing}(D_3), \quad D_3\cap D_1={\rm Sing}(D_1).$$ Then the curve $B=D_1+D_2+D_3$ has ordinary triple points at the nodes of $D_i$. We consider $\frac{1}{2}B$ as a mixed branch on $Y$ with all components shadow. The associated Eisenstein $K3$ surface has three isolated fixed points and no fixed curve. Thus we obtain a period map $\mathcal{P}\colon U/{{{\rm PGL}}}_3\dashrightarrow \mathcal{M}_{8,7}$. The map $\mathcal{P}$ is dominant. Since ${\dim}(U/{{{\rm PGL}}}_3)={\dim}\mathcal{M}_{8,7}$, it suffices to show that $\mathcal{P}$ has countable fibers. 
The natural projection $g\colon\hat{Y}\to Y\to{{{\mathbb P}}}^2$ is recovered from the degree $2$ line bundle $H=g^{\ast}{{\mathcal{O}_{{\mathbb P}^{2}}}}(1)$ as the associated projective morphism $\phi_H\colon\hat{Y}\to|H|^{\vee}$. Hence we have surjective maps onto the $\mathcal{P}$-fibers from subsets of ${{{\rm Pic}}}(\hat{Y})\simeq{{\mathbb{Z}}}^{11}$. In this way, we obtain a proof of The space $\mathcal{M}_{8,7}$ is unirational. The unirationality of $\mathcal{M}_{10,6}$ ------------------------------------------ We consider a degeneration of our model for $\mathcal{M}_{8,5}$. Let $U\subset|{{\mathcal{O}_{{\mathbb P}^{3}}}}(3)|\times({{{\mathbb P}}}^3)^2$ be the locus of triplets $(Y, p, q)$ such that (i) $Y$ is a smooth cubic surface containing $p$ and $q$, (ii) the $-K_Y$-curve $C_p=T_pY|_Y$ is irreducible and cuspidal, and (iii) the $-K_Y$-curve $C_q=T_qY|_Y$ is irreducible, nodal, and tangent to $C_p$ at $p$. Considering the mixed branches $C_q+\frac{1}{2}C_p$, we obtain Eisenstein $K3$ surfaces in $\mathcal{M}_{10,6}$. As before, one checks that the induced period map $U/{{{\rm PGL}}}_4\to\mathcal{M}_{10,6}$ is dominant. Since $U$ is rational, we have \[unirat (10,6)\] The space $\mathcal{M}_{10,6}$ is unirational. Using $C_q+C_p$ as $-2K_Y$-branches will give a canonical construction of general 2-elementary $K3$ surfaces of type $(15, 7, 1)$. Thus, via $U/{{{\rm PGL}}}_4$ we have a natural birational map from an intermediate cover of ${{\widetilde{\mathcal{M}}}}_{10,6}\to\mathcal{M}_{10,6}$ to the orthogonal modular variety $\mathcal{M}_{15,7,1}$. The rationality of $\mathcal{M}_{14,4}$ --------------------------------------- We consider curves on ${{\mathbb{F}}}_6$. Let $U\subset|L_{2,0}|$ be the locus of reducible curves $H_1+H_2$ such that $H_1, H_2$ are smooth members of $|L_{1,0}|$ transverse to each other.
We associate the $-\frac{3}{2}K_{{{\mathbb{F}}}_6}$-curves $H_1+H_2+\Sigma$ to obtain a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_6)\dashrightarrow\mathcal{M}_{14,4}$. \[period map (14,4)\] The map $\mathcal{P}$ is birational. We label independently the two curves $H_1, H_2$ and the six points $H_1\cap H_2$. This is realized by an $\frak{S}_2\times\frak{S}_6$-cover $\widetilde{U}\to U$. The two branches of $H_1+H_2+\Sigma$ at each of $H_1\cap H_2$ are distinguished by the given distinction of $H_1$ and $H_2$. Hence $\mathcal{P}$ lifts to a birational map $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_6)\dashrightarrow{{\widetilde{\mathcal{M}}}}_{14,4}$ as before. Since ${\rm O}(A_L)\simeq {\rm GO}^-(4, 3)$ for the invariant lattice $L\simeq U\oplus E_6\oplus A_2^3$, the projection ${{\widetilde{\mathcal{M}}}}_{14,4}\to\mathcal{M}_{14,4}$ has degree $|{\rm GO}^-(4, 3)|/2=6!$ by [@Atlas]. On the other hand, the hyperelliptic involutions of $H_1+H_2$ exchange $H_1$ and $H_2$, so that the projection $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_6)\to U/{\operatorname{Aut}}({{\mathbb{F}}}_6)$ is an $\frak{S}_6$-covering. Therefore $\mathcal{P}$ has degree $1$. \[rational (14,4)\] The quotient $U/{\operatorname{Aut}}({{\mathbb{F}}}_6)$ is rational. Therefore $\mathcal{M}_{14,4}$ is rational. We consider the ${\operatorname{Aut}}({{\mathbb{F}}}_6)$-equivariant map $\varphi\colon U\dashrightarrow |L_{1,0}|$ defined in . By Lemma \[linear system\] $(2)$, we may apply the slice method for $\varphi$ to see that $$U/{\operatorname{Aut}}({{\mathbb{F}}}_6)\sim \varphi^{-1}(H)/G,$$ where $H\in|L_{1,0}|$ is a smooth member and $G\simeq {{\mathbb{C}}}^{\times}\times{{{\rm PGL}}}_2$ is the stabilizer of $H$ in ${\operatorname{Aut}}({{\mathbb{F}}}_6)$. Let $\iota_H$ be the involution of ${{\mathbb{F}}}_6$ which on each $\pi$-fiber $F$ fixes the two points $H|_F$, $\Sigma|_F$. 
Then $\varphi^{-1}(H)$ is an open set of the locus $\{ H'+\iota_H(H'), H'\in|L_{1,0}|\}$ in $|L_{2,0}|$. Thus $\varphi^{-1}(H)/G$ is birational to $(|L_{1,0}|/\iota_H)/G \sim |L_{1,0}|/G$. It is straightforward to see that the natural map $|L_{1,0}|\dashrightarrow|{{\mathcal{O}}}_H(6)|$, $H'\mapsto H'|_H$, makes $\varphi^{-1}(H)/G$ birational to $|{{\mathcal{O}}}_H(6)|/{\operatorname{Aut}}(H)$. Then $|{{\mathcal{O}}}_H(6)|/{\operatorname{Aut}}(H)$ is birational to the moduli space $\mathcal{M}_2$ of genus $2$ curves, which is rational by Igusa [@Ig]. By the proof, we have a natural birational map $\mathcal{M}_{14,4}\dashrightarrow\mathcal{M}_2$. This might be related to the Janus example in [@H-W] Main Theorem (i). The rationality of $\mathcal{M}_{16,3}$ --------------------------------------- We consider curves on ${{\mathbb{F}}}_4$. Let $U\subset|L_{2,0}|\times|L_{0,1}|$ be the locus of pairs $(H_1+H_2, F)$ such that $H_1, H_2$ are smooth members of $|L_{1,0}|$ transverse to each other, and $F$ is transverse to $H_1+H_2$. Considering the nodal $-\frac{3}{2}K_{{{\mathbb{F}}}_4}$-curves $H_1+H_2+F+\Sigma$, we obtain a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_4)\dashrightarrow\mathcal{M}_{16,3}$. \[period map (16,3)\] The map $\mathcal{P}$ is birational. As in the proof of Proposition \[period map (14,4)\], we distinguish the two sections $H_1, H_2$ and the four points $H_1\cap H_2$ independently. This defines an $\frak{S}_2\times\frak{S}_4$-cover $\widetilde{U}\to U$. The remaining data for $H_1+H_2+F+\Sigma$ are then automatically labelled, and $\mathcal{P}$ will lift to a birational map $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_4)\dashrightarrow{{\widetilde{\mathcal{M}}}}_{16,3}$.
The projection $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_4)\to U/{\operatorname{Aut}}({{\mathbb{F}}}_4)$ is an $\frak{S}_4$-covering as before, while ${{\widetilde{\mathcal{M}}}}_{16,3}\to\mathcal{M}_{16,3}$ has degree $|{\rm O}(A_L)|/2$ for the invariant lattice $L=U\oplus E_8\oplus A_2^3$. It is straightforward to calculate that ${\rm O}(A_L)\simeq\frak{S}_3\ltimes({{\mathbb{Z}}}/2{{\mathbb{Z}}})^3$. Since $U$ is unirational and ${\dim}\mathcal{M}_{16,3}=2$, we have \[rational (16,3)\] The space $\mathcal{M}_{16,3}$ is rational. Arguing as in the proof of Proposition \[rational (14,4)\], one will see that $U/{\operatorname{Aut}}({{\mathbb{F}}}_4)$ is naturally birational to the Kummer modular surface for ${{{\rm SL}}}_2({{\mathbb{Z}}})$. The rationality of $\mathcal{M}_{18,2}$ --------------------------------------- We consider curves on ${{\mathbb{F}}}_2$. Let $U\subset|L_{2,0}|\times|L_{0,2}|$ be the locus of pairs $(H_1+H_2, F_1+F_2)$ such that $H_1, H_2\in|L_{1,0}|$ are smooth and transverse to each other, and $F_1, F_2\in|L_{0,1}|$ are distinct and transverse to $H_1+H_2$. We associate the nodal $-\frac{3}{2}K_{{{\mathbb{F}}}_2}$-curves $H_1+H_2+F_1+F_2+\Sigma$ to obtain a period map $\mathcal{P}\colon U/{\operatorname{Aut}}({{\mathbb{F}}}_2)\dashrightarrow\mathcal{M}_{18,2}$. \[period map (18,2)\] The map $\mathcal{P}$ is birational. We distinguish independently the two sections $H_1, H_2$, the two fibers $F_1, F_2$, and the two points $H_1\cap H_2$. This is realized by an $(\frak{S}_2)^3$-cover $\widetilde{U}\to U$. As before, we see that these labelings induce a birational lift $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_2)\dashrightarrow{{\widetilde{\mathcal{M}}}}_{18,2}$ of $\mathcal{P}$. Then ${{\widetilde{\mathcal{M}}}}_{18,2}$ is a double cover of $\mathcal{M}_{18,2}$ because we have ${\rm O}(A_L)\simeq({{\mathbb{Z}}}/2{{\mathbb{Z}}})^2$ for the invariant lattice $L=U(3)\oplus E_8^2$. 
On the other hand, the stabilizer in ${\operatorname{Aut}}({{\mathbb{F}}}_2)$ of a general $(\sum_iH_i, \sum_iF_i)\in U$ is $({{\mathbb{Z}}}/2{{\mathbb{Z}}})^2$ generated by the hyperelliptic involution of $H_1+H_2$ and by an element exchanging the two points $H_1\cap H_2$ and the two fibers $F_1, F_2$ respectively. Thus $\widetilde{U}/{\operatorname{Aut}}({{\mathbb{F}}}_2)\dashrightarrow U/{\operatorname{Aut}}({{\mathbb{F}}}_2)$ is also a double covering. Since $U$ is rational and ${\dim}\mathcal{M}_{18,2}=1$, we have \[rational (18,2)\] The space $\mathcal{M}_{18,2}$ is rational. Let $H=\varphi(H_1+H_2)$ be the section defined by . As in the proof of Proposition \[rational (14,4)\], considering the configuration of $2+2$ points $H_1\cap H_2$, $F_1+F_2|_H$ on $H$ makes $U/{\operatorname{Aut}}({{\mathbb{F}}}_2)$ birational to the elliptic modular curve for $\Gamma_0(2)$. For completeness, we finish the article with a comment on $\mathcal{M}_{20,1}$, which consists of one point. Its unique member is obtained from the curve $\sum_{i=1}^{3}F_{i+}+\sum_{i=1}^{3}F_{i-}$ on ${{{\mathbb P}}}^1\times{{{\mathbb P}}}^1$, where $F_{i+}, F_{i-}$ are ruling fibers of bidegree $(1, 0), (0, 1)$ respectively. [99]{} Alexeev, V.; Nikulin, V. V. *Del Pezzo and K3 surfaces.* MSJ Memoirs, **15**. Math. Soc. Japan, 2006. Allcock, D.; Carlson, J. A.; Toledo, D. *The complex hyperbolic geometry of the moduli space of cubic surfaces.* J. Algebraic Geom. **11** (2002), no. 4, 659–724. Artebani, M.; Sarti, A. *Non-symplectic automorphisms of order 3 on K3 surfaces.* Math. Ann. **342** (2008), no. 4, 903–921. Artebani, M.; Sarti, A.; Taki, S. *$K3$ surfaces with non-symplectic automorphisms of prime order.* Math. Z. **268** (2011), no. 1-2, 507–533. Barth, W. P.; Hulek, K.; Peters, C. A. M.; Van de Ven, A. *Compact complex surfaces.* Springer-Verlag, 2004. Birkenhake, C.; Lange, H. *Complex abelian varieties.* (Second edition) Springer-Verlag, 2004. Borel, A. 
*Some metric properties of arithmetic quotients of symmetric spaces and an extension theorem.* J. Diff. Geom. **6** (1972), 543–560. Conway, J. H.; Curtis, R. T.; Norton, S. P.; Parker, R. A.; Wilson, R. A. *ATLAS of finite groups.* Oxford University Press, 1985. Dolgachev, I. V. *Rationality of fields of invariants.* Algebraic geometry, Bowdoin, 1985, 3–16, Proc. Symp. Pure Math., **46**, Part 2, Amer. Math. Soc., Providence, 1987. Dolgachev, I.; van Geemen, B.; Kondō, S. *A complex ball uniformization of the moduli space of cubic surfaces via periods of K3 surfaces.* J. Reine Angew. Math. **588** (2005), 99–148. Dolgachev, I.; Kondō, S. *Moduli of K3 surfaces and complex ball quotients.* Arithmetic and geometry around hypergeometric functions, 43–100, Progr. Math., **260**, Birkhäuser, 2007. Dolgachev, I.; Kondō, S. *The rationality of the moduli spaces of Coble surfaces and of nodal Enriques surfaces.* Izvestiya Math **77** (3), (2013) 509–524. Hunt, B.; Weintraub, S. H. *Janus-like algebraic varieties.* J. Diff. Geom. **39** (1994), no. 3, 509–557. Igusa, J. *Arithmetic variety of moduli for genus two.* Ann. of Math. (2) **72** (1960) 612–649. Katsylo, P. I. *Rationality of the moduli spaces of hyperelliptic curves.* Izv. Akad. Nauk SSSR. **48** (1984), 705–710. Katsylo, P. I. *Rationality of the moduli variety of curves of genus $3$.* Comment. Math. Helv. **71** (1996), no. 4, 507–524. Kitaoka, Y. *Arithmetic of quadratic forms.* Cambridge University Press, 1993. Kondō, S. *The rationality of the moduli space of Enriques surfaces.* Compositio Math. **91** (1994), 159–173. Kondō, S. *The moduli space of curves of genus 4 and Deligne-Mostow’s complex reflection groups.* Algebraic geometry 2000, Azumino, 383–400, Adv. Stud. Pure Math., 36, Math. Soc. Japan, 2002. Ma, S. *Rationality of the moduli spaces of $2$-elementary $K3$ surfaces.* arXiv:1110.5110, to appear in J. Alg. Geom. Miranda, R.; Morrison, D. R. *The number of embeddings of integral quadratic forms. 
II.* Proc. Japan Acad. Ser. A Math. Sci. **62** (1986), no. 1, 29–32. Miyata, T. *Invariants of certain groups. I.* Nagoya Math. J. **41** (1971), 69–73. Naruki, I. *Cross ratio variety as a moduli space of cubic surfaces.* Proc. London Math. Soc. **45** (1982), no. 1, 1–30. Nikulin, V.V. *Integral symmetric bilinear forms and some of their applications.* Math. USSR Izv., **14** (1980), 103–167. Nikulin, V.V. *Factor groups of groups of automorphisms of hyperbolic forms with respect to subgroups generated by 2-reflections.* J. Soviet Math. **22** (1983), 1401–1476. Ohashi, H.; Taki, S. *$K3$ surfaces and log del Pezzo surfaces of index three.* Manuscripta Math. **139** (2012), no. 3-4, 443–471. Segre, B. *The Non-singular Cubic Surfaces.* Oxford University Press, 1942. Shepherd-Barron, N. I. *The rationality of certain spaces associated to trigonal curves.* Algebraic geometry, Bowdoin, 1985, 165–171, Proc. Symp. Pure Math., 46, Part 1, Amer. Math. Soc., Providence, RI, 1987. Taki, S. *Classification of non-symplectic automorphisms of order 3 on $K3$ surfaces.* Math. Nachr. **284** (2011), no. 1, 124–135. [^1]: S. M. was supported by Grant-in-Aid for JSPS fellows \[21-978\] and Grant-in-Aid for Scientific Research (S), No 22224001. [^2]: H. O. was supported by Grant-in-Aid for Scientific Research (S), No 22224001 and for Young Scientists (B) 23740010. [^3]: For §\[ssec:(8,5)\] and §\[ssec:(10,4)\]: If the surjectivity of ${\rm U}(E) \to {\rm O}(A_E)$ is yet uncertain at this moment, one should consider only those $((X, G), j)$ such that $j$ *can be* ${{\mathbb{Z}}}/3{{\mathbb{Z}}}$-equivariantly extended to $\Lambda_{K3}\to H^2(X, {{\mathbb{Z}}})$. In this case, the Galois group of ${{\widetilde{\mathcal{M}}_{r,a}}}\dashrightarrow{{\mathcal{M}_{r,a}}}$ is a priori just a subgroup of ${\rm O}(A_E)/\pm1$ (but in fact the whole ${\rm O}(A_E)/\pm1$). [^4]: One might draw $\frac{1}{2}B_2$ as a half-transparent curve. 
[^5]: Since the surjectivity of ${\rm U}(E)\to{{{\rm O}}}(A_E)$ for the Eisenstein lattice $E=U^2\oplus A_2^5$ is yet uncertain at this moment, here we should narrow the moduli interpretation of ${{\widetilde{\mathcal{M}}}}_{8,5}$ as indicated in the footnote in p.. The lattice-markings induced from our labelled mixed branches do meet the requirement there, because, e.g., the connectivity of $\widetilde{U}$ ensures that the Eisenstein $K3$ surfaces can be deformed to each other preserving the markings.
--- abstract: 'Quantum process tomography—a primitive in many quantum information processing tasks—can be cast within the framework of the theory of design of experiments (DoE), a branch of classical statistics that deals with the relationship between inputs and outputs of an experimental setup. Such a link potentially gives access to the many ideas of the rich subject of classical DoE for use in quantum problems. The classical techniques from DoE cannot, however, be directly applied to quantum process tomography due to the basic structural differences between the classical and quantum estimation problems. Here, we properly formulate quantum process tomography as a DoE problem, and examine several examples to illustrate the link and the methods. In particular, we discuss the common issue of nuisance parameters, and point out interesting features in the quantum problem absent in the usual classical setting.' author: - Yonatan Gazit - Hui Khoon Ng - Jun Suzuki title: Quantum process tomography via optimal design of experiments --- Introduction {#sec1} ============ Design of experiments (DoE) is a branch of mathematical statistics that examines efficient methods to understand the relationship between inputs and outputs for an experimental setup. The field was founded by R. A. Fisher in the early $20^{\text{th}}$ century, with further developments made by several mathematical statisticians such as Wald, Kiefer, Chernoff, and Fedorov, to name a few. One of the celebrated results is the so-called equivalence theorem put forth by Kiefer and Wolfowitz [@kw60], which established equivalence among different optimal designs. The problem of quantum process (or channel) tomography—a primitive in many quantum information processing tasks—can be considered as an optimal design problem. The goal is to estimate the quantum process/channel, considered here as a black box that takes in an input quantum state and puts out a modified output state.
The experimenter chooses the input probe states and decides on how to measure the outputs to obtain a description of the inner workings of the black box. One can naturally formulate the problem of estimating a parametric family of quantum channels based on the theory of [optimal DoE]{}, permitting the application of the established machinery of classical statistics to finding optimal quantum tomography strategies. It turns out, however, that many of the classical techniques from DoE cannot be directly applied to the quantum problem in a straightforward manner. A major obstacle lies in the differences in the structure of the state and measurement spaces of quantum estimation problems compared to the classical case. Moreover, many of the previous [optimal DoE]{} studies in statistics were carried out for the linear regression model and its variants, inapplicable to tomography problems in quantum systems. Here, we extend methodologies for non-linear models studied in Refs. [@fedorov; @pukelsheim; @fh97; @fl14; @pp13] to the more general formulation of [optimal DoE]{}, which is applicable to any probabilistic model. There have been previous attempts to apply the theory of [optimal DoE]{} to quantum tomography. This was first done in Ref. [@kwr04], and following that study, several more papers on the subject appeared over the past decade [@nunn10; @bh10; @bh11; @bhp12; @rvh12]. These studies, however, dealt only with limited cases, e.g., discrete design problems, or the optimization of the relative frequencies for different experimental settings. The former setting is practically important, yet it is in general hard to get the solution even numerically; the latter setting was analyzed under the assumption of given tomographic measurement settings. Here, we formulate the problem of quantum process tomography in the most general setting, including the discrete design problem as a special case, and also address the common issue of nuisance parameters. In Sec.
\[sec2\], we provide a review of the relevant concepts in the classical theory of [optimal DoE]{}, necessary to familiarize the reader with the basic ideas. We reformulate those ideas in the quantum setting in Sec. \[sec3\]. In Sec. \[sec4\], we elucidate different features of the problem through examples for qubit models, and illustrate the usefulness of the [optimal DoE]{} framework in quantum tomography problems. Supplemental materials about the classical theory of [optimal DoE]{} are given in the Appendix. Note that we focus only on tomography strategies that do not require expensive resources such as entangled states or the ability to perform joint measurements [@fujiwara01; @fi03; @glm11]. Of course, the use of such additional resources gives higher performance in general, but practically, entangled resources and joint operations remain difficult to achieve in the lab today. The [optimal DoE]{} approach requires only control over the input probe states and output measurements. The formulation presented in this paper can easily be extended to more general settings that make use of these quantum mechanical resources. Classical theory of [optimal DoE]{} {#sec2} =================================== In this section, we provide a brief summary of the classical theory of [optimal DoE]{} developed along the lines of Refs. [@fedorov; @pukelsheim; @fh97; @fl14; @pp13]. To simplify matters, we focus on point estimation problems for parametric models and probability distributions on discrete sets. Other statistical inference problems, such as hypothesis testing, model discrimination, and so on, can be formulated in a similar manner. Local optimal design {#sec2-1} -------------------- An $n$-parameter coordinate system is denoted by ${{\theta_{}}}=({\theta_{1}},{\theta_{2}},\dots,{\theta_{n}})$ to describe the object of interest. The parameter ${{\theta_{}}}=({\theta_{i}})$, the [*model parameter*]{}, takes values in ${\Theta}$, an open subset of $\bbr^n$.
Let us introduce a [*design*]{} $e$ describing a particular experimental setup, and let $\cE$ be the set of all possible designs. A [*model function*]{} $f$ is a mapping from ${\Theta}\times\cE$ to a set of probability distributions on $\cX$ (denoted $\PsetX$), that is, $f:\, ({\theta_{}},e)\mapsto p_{{\theta_{}}}(\cdot|e)\in\PsetX$, where, $\forall x\in\cX$, $ p_{{\theta_{}}}(x|e)\ge0$ and $\sum_{x\in\cX}p_{{\theta_{}}}(x|e)=1$. Note that the concept of a model function does not appear in the classical theory of [optimal DoE]{}, but it is essential for extending the formalism of linear regression models to the more general probabilistic models. We assume that the model set ${\Theta}$ is continuous. The design set $\cE$, on the other hand, can be arbitrary, and is determined by the given experimental configuration or constraints. The element $e\in\cE$ can be a vector, a matrix, or a more general object (see concrete examples below). Given an unknown object smoothly parametrized by ${\theta_{}}$, we choose a proper design $e$ that gives a particular statistical model, $$\label{eq:stat.model} M(e)= \{p_{{\theta_{}}}(\cdot|e)\,|\, {\theta_{}}\in{\Theta}\},$$ according to a known model function $f$. The experimental data $X$ is a random variable distributed according to $p_{{\theta_{}}}(\cdot|e)$. The value of $ {\theta_{}} $ is inferred from some data $x\in\cX$ by using an estimator ${{\hat{\theta}_{}}:\cX\to{\Theta}}$, ${{\hat{\theta}_{}}=({\hat{\theta}_{1}},\dots,{\hat{\theta}_{n}})}$. We use the mean-square error (MSE) matrix, a non-negative $n\times n$ real matrix, as a measure of an estimator’s error. Let ${E_{{\theta_{}}}[X|e]= \sum_{x\in\cX} x p_{{\theta_{}}}(x|e) }$ be the expectation value of a random variable $X$ with respect to $p_{{\theta_{}}}(\cdot|e)$.
The MSE matrix is defined by $$\label{eq:mse.matrix} V_{{\theta_{}}}[{\hat{\theta}_{}}|e]=\Big[ E_{{\theta_{}}}[({\hat{\theta}_{i}}-{\theta_{i}}) ({\hat{\theta}_{j}}-{\theta_{j}})|e] \Big]_{i,j}.$$ When reconstructing the value of $ {\theta_{}} $ from the data, the goal is to find the estimator that minimizes the MSE matrix for a design $e\in\cE$. As is well known, there cannot in general be a universally optimal estimator that minimizes the MSE matrix for all ${\theta_{}}\in{\Theta}$ [@rao73; @kiefer87; @lc98]. We thus look for an optimal estimator within a subclass of estimators. In this paper, we consider only locally unbiased estimators, defined as follows: An estimator ${\hat{\theta}_{}}$ is said to be [*locally unbiased at ${\theta_{0}}$*]{} for a design $e\in\cE$ if $E_{{\theta_{0}}}[{\hat{\theta}_{i}}|e]={\theta_{i}}$ and $\frac{\del}{\del{\theta_{j}}}E_{{\theta_{0}}}[{\hat{\theta}_{i}}|e]\big|_{{\theta_{}} = {\theta_{0}}}=\delta_{i,j}$ are satisfied for all $i,j$ at the particular point ${\theta_{0}}$. We can now make use of the well-known Cramér-Rao (CR) theorem [@rao73; @kiefer87; @lc98] for a fixed design $e$, assuming that the model $M(e)$ satisfies the usual regularity conditions. The CR theorem states that the MSE matrices for all locally unbiased estimators are bounded by $$\label{eq:crb} V_{{\theta_{}}}[{\hat{\theta}_{}}|e]\ge \Big( J_{{\theta_{}}}[e]\Big)^{-1}.$$ Here $J_{{\theta_{}}}[e]$ is the Fisher information matrix about the statistical model $M(e)$ for the design $e$, defined as $$\label{eq:cl.fish} J_{{\theta_{}}}[e]= \Big[ E_{{\theta_{}}}[ \frac{\del \ell_{{\theta_{}}}(X|e)}{\del{\theta_{i}}} \frac{\del\ell_{{\theta_{}}}(X|e)}{\del{\theta_{j}}} \Big| e] \Big]_{i,j},$$ where $\ell_{{\theta_{}}}(x|e)=\log p_{{\theta_{}}}(x|e) $ is the logarithmic likelihood function. Importantly, the above CR inequality can be saturated asymptotically (i.e., in the sample size $N\to\infty$ limit).
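As a concrete numerical sketch of the Fisher information matrix of Eq. (\[eq:cl.fish\]) (our own illustration; the three-outcome model below is invented, not taken from the text), one can compute the scores by finite differences of the log-likelihood and take the expectation over outcomes:

```python
import numpy as np

def probs(theta):
    """Invented three-outcome model for a single design e:
    p_theta(.|e) = (t1, t2, 1 - t1 - t2)."""
    t1, t2 = theta
    return np.array([t1, t2, 1.0 - t1 - t2])

def fisher(theta, h=1e-6):
    """J_ij = E[ (d_i log p)(d_j log p) ], scores via central differences."""
    p = probs(theta)
    n = len(theta)
    scores = np.empty((n, p.size))
    for i in range(n):
        d = np.zeros(n)
        d[i] = h
        scores[i] = (np.log(probs(theta + d)) - np.log(probs(theta - d))) / (2 * h)
    return (scores * p) @ scores.T  # expectation over outcomes x

theta0 = np.array([0.3, 0.5])
J = fisher(theta0)

# closed form for this model: J = [[1/p1 + 1/p3, 1/p3], [1/p3, 1/p2 + 1/p3]]
p1, p2, p3 = probs(theta0)
J_exact = np.array([[1/p1 + 1/p3, 1/p3], [1/p3, 1/p2 + 1/p3]])

# the Cramer-Rao lower bound of Eq. (eq:crb) for locally unbiased estimators
crb = np.linalg.inv(J)
```

The matrix `crb` is the lower bound that any locally unbiased estimator's MSE matrix must respect at $\theta_0$ for this design.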
An optimal design $e$, therefore, maximizes the Fisher information matrix $J_{{\theta_{}}}[e]$. However, it is usually impossible to carry out this maximization in the sense of the matrix inequality, because the matrix (Löwner) ordering is only a partial order on the set of information matrices. In such cases, one has to adopt some other suitably chosen optimality criteria. These optimality criteria can be expressed in terms of an [*optimality function*]{} $ \Psi $, a function of non-negative matrices such that $\Psi(A)\ge0 $ for all $A\ge0$. We can then formulate the optimization problem in terms of the chosen optimality function $ \Psi $: $$\begin{aligned} \Psi_*&=\min_{e\in\cE} \Psi\Big(J_{{\theta_{}}}[e]\Big),\\ e_*&=\mathrm{arg}\min_{e\in\cE} \Psi\Big(J_{{\theta_{}}}[e]\Big). \end{aligned}$$ The optimal design $e_*$ is said to be [*$\Psi$-optimal*]{}. In the theory of [optimal DoE]{}, there are various optimality criteria commonly used to define the best design. We list below some standard criteria by which to define an optimality function $ \Psi $ (see Appendix Sec. \[sec-app\_supp1\] for supplemental material and Refs. [@pukelsheim; @fh97; @fl14; @pp13] for more details). - $e_*$ is Löwner optimal $\DEF$ $\exists e_*\in\cE$ such that\ $\forall e\in\cE\, J_{{\theta_{}}}[e]\le J_{{\theta_{}}}[e_*]$ and $\exists e',\,J_{{\theta_{}}}[e']< J_{{\theta_{}}}[e_*]$. - $e_*$ is $A$-optimal $\DEF$ $e_*=\arg\min {\mathrm{Tr}\Big\{J_{{\theta_{}}}[e]^{-1}\Big\}}$. - $e_*$ is $D$-optimal $\DEF$ $e_*=\arg\min {\mathrm{Det}\{J_{{\theta_{}}}[e]^{-1}\}} $\ $\Lra$ $e_*=\arg\max {\mathrm{Det}\{J_{{\theta_{}}}[e]\}} $. - $e_*$ is $E$-optimal $\DEF$ $e_*=\arg\min \lambda_{\max}({J_{{\theta_{}}}[e]^{-1}}) $\ $\Lra$ $e_*=\arg\max \lambda_{\min}({J_{{\theta_{}}}[e]}) $,\ where $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ are the largest and smallest eigenvalues, respectively, of a symmetric matrix $A$.
- $e_*$ is $c$-optimal $\DEF$ $\ e_*=\arg\min c^\mathrm{T}J_{{\theta_{}}}[e]^{-1} c$,\ where $c\in\bbr^n$ is a given column vector. - $e_*$ is $\gamma$-optimal ($\gamma\in(0,\infty)$)\ $\DEF$ $e_*=\arg\min \left(\frac1n{\mathrm{Tr}\Big\{J_{{\theta_{}}}[e]^{-\gamma}\Big\}}\right)^{1/\gamma}$,\ where $n$ is the dimension of the parameter set ${\Theta}$. The $A$-, $D$-, and $E$-optimal designs are also known as the average optimal design, the determinant-optimal design, and the extremal-eigenvalue-optimal design, respectively. We list some terminology concerning designs below. If an optimal design $e_*$ is a function of the unknown parameter(s) ${\theta_{}}$, it is called a [*local optimal design*]{} in the sense that it is optimal at a specific point ${\theta_{0}}$. Without [*a priori*]{} knowledge about ${\theta_{}}$, it is impossible to immediately perform this optimal design $e_*$, but there exist various sequential algorithms realizing $e_*$ in the sample size $N\to\infty$ limit. On the other hand, when $e_*$ is ${\theta_{}}$-independent, it is called a [*globally optimal design*]{}. A well-known example of a globally optimal design occurs for the linear regression model, where the optimal design is always ${\theta_{}}$-independent. Alternatively, one can look for an averaged optimal design, a Bayesian optimal design, or a min-max optimal design to avoid ${\theta_{}}$ dependence in $e_*$; see Refs. [@fedorov; @pukelsheim; @fh97; @fl14; @pp13]. In this paper, we mainly focus on local optimal designs. Another piece of terminology concerns singular behavior of designs. When the Fisher information matrix $J_{{\theta_{}}}[e]$ is not full rank for a design $e$, we say $e$ is a [*singular design*]{}. The singular design problem is discussed in Appendix Sec. \[sec-app\_sing\]. A few remarks about the optimality criteria are in order.
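To make the criteria concrete, here is a small numerical comparison of the $A$-, $D$-, $E$-, and $\gamma$-criteria (our own sketch; the two information matrices below are invented for illustration, not taken from the text):

```python
import numpy as np

# two hypothetical designs, summarized by their Fisher information matrices
J_designs = {
    "e1": np.array([[4.0, 0.0], [0.0, 1.0]]),   # very informative about theta_1
    "e2": np.array([[2.0, 0.5], [0.5, 2.0]]),   # balanced between the parameters
}

def A_opt(J):  # Tr{J^{-1}}
    return np.trace(np.linalg.inv(J))

def D_opt(J):  # Det{J^{-1}} = 1/Det{J}
    return np.linalg.det(np.linalg.inv(J))

def E_opt(J):  # lambda_max(J^{-1}) = 1/lambda_min(J)
    return np.linalg.eigvalsh(np.linalg.inv(J)).max()

def gamma_opt(J, gamma):  # ( Tr{J^{-gamma}} / n )^{1/gamma}
    lam = np.linalg.eigvalsh(J)
    return np.mean(lam ** (-gamma)) ** (1.0 / gamma)

best = {name: min(J_designs, key=lambda e: crit(J_designs[e]))
        for name, crit in [("A", A_opt), ("D", D_opt), ("E", E_opt)]}
```

The criteria need not agree: for these two matrices the $D$-criterion prefers `e1` while the $A$- and $E$-criteria prefer `e2`, which is precisely why the choice of $\Psi$ matters.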
First, $A$-optimality can be generalized to minimizing ${\mathrm{Tr}\Big\{WJ_{{\theta_{}}}[e]^{-1}\Big\}}$, where $W\ge0$ is a non-negative matrix, called a weight matrix, utility matrix, or loss matrix. The introduction of an appropriate $ W $ allows one to focus on the parameters of interest, and this formulation is often adopted in parameter estimation of quantum states. The Löwner optimal design is the strongest criterion in the sense that if there exists a Löwner optimal design $e_*$, then all other optimality criteria are automatically satisfied. However, this occurs only for very special models. We elaborate on this point in Appendix Sec. \[sec-app\_lowner\]. The $\gamma$-optimality criterion contains the $A$-optimal ($\gamma=1$), $D$-optimal ($\gamma\to0$), and $E$-optimal ($\gamma\to\infty$) criteria as special cases. A closed expression for the $\gamma$-optimal design is, however, in general hard to obtain. (See also Appendix Sec. \[sec-app\_supp1\].) Discrete and continuous design problems {#sec2-5} --------------------------------------- In this subsection, we extend our discussion to multiple design problems. When considering a situation of $N$ repetitions of an experiment, there are two distinct strategies to choose from:\ i) [*The i.i.d. strategy*]{}. Repeat the same design $e$ for a total of $N$ times. Let us refer to the design of this strategy as $e^N\in\cE^N$. The probability distribution for the model becomes an independently and identically distributed (i.i.d.) one, $$p_{{\theta_{}}}(x^N|e^N)=\prod _{t=1}^N p_{{\theta_{}}}(x_t|e),$$ and the Fisher information matrix is additive, $J_{{\theta_{}}}[e^N]=NJ_{{\theta_{}}}[e]$. The problem is thus solved by considering the $N=1$ case.\ ii) [*The mixed strategy*]{}. Let $N(m)$ be an $m$-partition of a positive integer $N$, i.e., $N(m)=(n_1,n_2,\dots,n_m)$ such that $\sum_{i=1}^mn_i=N$ and $n_i\ge0$. The mixed strategy involves repeating a design $e_1$ for $n_1$ times, $e_2$ for $n_2$ times, and so on, for all $m$ designs.
Let us refer to this strategy’s design as $e[N(m)]$. The probability distribution is then $$p_{{\theta_{}}}\bigl(x^N|e[N(m)]\bigr)=\prod _{i=1}^{m} p_{{\theta_{}}}(x^{n_i}|e_{i}^{n_i})=\prod _{i=1}^{m} \prod_{t_i=1}^{n_i}p_{{\theta_{}}}(x_{t_i}|e_{i}),$$ and the Fisher information matrix for $e[N(m)]$ is $$J_{{\theta_{}}}\big[e[N(m)]\big]=\sum_{i=1}^m n_i J_{{\theta_{}}}[e_i].$$ When $N$ is fixed, the optimization corresponds to finding the partition $N(m)$ that minimizes the optimality function $\Psi\bigl(J_{{\theta_{}}}\big[e[N(m)]\big]\bigr)$. This optimization is known as a [*discrete design*]{} or [*exact design*]{} problem. In the very special situation where a Löwner optimal solution exists, an optimal mixed strategy corresponds to the i.i.d. strategy. Although the combinatorial optimization of a discrete design problem is practically important, it is in general hard to find an optimal solution, even numerically. The standard approach to finding an approximate optimal solution is to consider instead a [*continuous design*]{} problem (also known as an [*approximate design*]{} problem). Taking the $N\to\infty$ limit, the normalized proportions become relative frequencies, $\nu_i=\lim_{N\to\infty} (n_i/N)$. The goal is then to find the optimal relative frequencies $\v{\nu}=(\nu_i)\in\cP(m)$ and the set of designs $\v{e}=(e_i)\in\cE^m$ that minimize $\Psi\bigl(J_{{\theta_{}}}\big[e(m)\big]\bigr)$. Here, we denote the design of this continuous design problem by $$\begin{aligned} e(m)&=(\v{\nu},\v{e})\in\cP(m)\times\cE^m\\ &=\Big((\nu_1,\dots,\nu_m),\,(e_1,\dots,e_m) \Big).\end{aligned}$$ The Fisher information matrix for the design $e(m)$ is then $$J_{{\theta_{}}}[e(m)]=\sum_{i=1}^m \nu_iJ_{{\theta_{}}}[e_i].$$ This is equivalent to the Fisher information of the joint distribution $p_{{\theta_{}}}(i,x)=\nu_ip_{{\theta_{}}}(x|e_i)$ of the design label $i$ and the outcome $x$.
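The continuous design problem just described can be sketched numerically for $m=2$ fixed designs and the $A$-optimality function (a minimal sketch of our own; the two Fisher information matrices are invented for illustration):

```python
import numpy as np

# Fisher information matrices of two fixed hypothetical designs e_1, e_2:
# each is informative about one parameter and poor on the other
J1 = np.array([[4.0, 0.0], [0.0, 0.5]])
J2 = np.array([[0.5, 0.0], [0.0, 4.0]])

def Psi_A(nu):
    """A-optimality function of the mixed design e(2) = ((nu, 1-nu), (e1, e2))."""
    J = nu * J1 + (1.0 - nu) * J2   # J[e(m)] = sum_i nu_i J[e_i]
    return np.trace(np.linalg.inv(J))

# brute-force search over the relative frequency nu_1 = nu
grid = np.linspace(0.01, 0.99, 981)
nu_star = grid[np.argmin([Psi_A(nu) for nu in grid])]
```

Either pure strategy is suboptimal here: devoting all trials to $e_1$ or $e_2$ gives $\Psi_A=2.25$, while the symmetric split $\nu_*=1/2$ achieves $\Psi_A\approx0.89$, illustrating why mixed strategies matter.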
To phrase it differently, the mixed strategy amounts to maximizing the Fisher information for the statistical model $$M\left(e(m)\right)={\left\{\sum_{i=1}^m \nu_ip_{{\theta_{}}}(\cdot|e_i)\,|\,{\theta_{}}\in{\Theta}\right\}}.$$ The continuous design problem can be summarized as follows: Given an optimality function $ \Psi $ and a positive integer $m$, one must find an optimal design $e_*(m)=(\v{\nu}_*,\v{e}_*)$ defined by $$\label{eq:opt.design} e_*(m)=\arg\hspace{-5mm}\min_{e(m)\in\cP(m)\times\cE^m} \Psi\bigg(\sum_{i=1}^m \nu_iJ_{{\theta_{}}}[e_i]\bigg).$$ We plan to find an optimal value for $ m $ by sequentially finding the optimal design for different values of $ m $. That is, for some fixed $ m $ we find $ e_{*}(m), e_{*}(m+1), e_{*}(m+2), $ and so on. By comparing the optimal designs of various $m$ values, we can search for the optimal $e_*(m_*)$ over all possible designs. The general theorem (Carathéodory’s theorem) guarantees that an optimal design can be found by using no more than $n(n+1)/2$ designs, where $n$ is the number of parameters to be estimated [@fedorov; @pukelsheim; @fh97; @fl14; @pp13]. In the presence of $\ell$ independent constraints on the design $e$, this upper bound becomes $n(n+1)/2+\ell$ [@fh97; @fl14]. Before closing this section, we have one remark. From the expression in Eq. (\[eq:opt.design\]), it is clear that a closed expression for the optimal continuous design cannot be obtained except in special cases. Therefore, we often have to use numerical search instead to find the optimal design. This has also been an area of active research in the field of [optimal DoE]{} [@pukelsheim; @fh97; @fl14; @pp13]. Nuisance Parameters {#sec2-3} ------------------- For an $n$-parameter object, often only $k<n$ parameters are of interest. The parameters not of interest are called [*nuisance parameters*]{} in statistics. The nuisance parameter problem is very important in many areas of statistics and has been studied since Fisher’s work in 1935 [@fisher35].
In classical statistics, there are various methods to eliminate nuisance parameters and find a good estimator for the parameters of interest; see, for example, textbooks [@amari85; @lc98; @bnc94; @an00] and Refs. [@basu77; @rc87; @ak88; @bs94; @zr94]. We can formulate the nuisance parameter problem by dividing an $n$-parameter object into two groups ${\theta_{}}=({\theta_{I}},{\theta_{N}})$. ${\theta_{I}}=({\theta_{1}},{\theta_{2}},\dots,{\theta_{k}})$ are the parameters of interest and ${\theta_{N}}=({\theta_{k+1}},{\theta_{k+2}},\dots,{\theta_{n}})$ are the nuisance parameters. Our aim is then to find a good design $e$ and to construct a good estimator ${\hat{\theta}_{I}}=({\hat{\theta}_{1}},{\hat{\theta}_{2}},\dots,{\hat{\theta}_{k}})$ for the parameters of interest. Let us decompose $J_{{\theta_{}}}$ and $J_{{\theta_{}}}^{-1}$ into block matrices according to the parameter group ${\theta_{}}=({\theta_{I}},{\theta_{N}})$: $$\begin{aligned} J_{{\theta_{}}}&=\left(\begin{array}{cc}J_{{\theta_{}},II} & J_{{\theta_{}},IN} \\ J_{{\theta_{}},NI} & J_{{\theta_{}},NN}\end{array}\right),\\ J_{{\theta_{}}}^{-1}&=\left(\begin{array}{cc}J_{{\theta_{}}}^{II} & J_{{\theta_{}}}^{IN} \\[0.5ex] J_{{\theta_{}}}^{NI} & J_{{\theta_{}}}^{NN}\end{array}\right),\end{aligned}$$ where we have dropped the $ e $-dependence in the notation. $(J_{{\theta_{}}}^{II}[e])^{-1}$ is called the [*partial Fisher information*]{} about the model [@zr94], as its inverse $J_{{\theta_{}}}^{II}[e]$ provides a bound for the estimation error about the parameters of interest. 
The MSE matrix $V_{{\theta_{I}}}$ for the parameters of interest is a $k\times k$ real symmetric matrix and is bounded from below by the inverse of the partial Fisher information, $$\label{eq:CRnuisance} V_{{\theta_{I}}}[{\hat{\theta}_{I}}|e]\ge J_{{\theta_{}}}^{II}[e].$$ Using standard matrix analysis, the partial Fisher information can be expressed as $$(J_{{\theta_{}}}^{II}[e])^{-1} = J_{{\theta_{}},II} - J_{{\theta_{}},IN}J_{{\theta_{}},NN}^{-1}J_{{\theta_{}},NI}.$$ Therefore, $(J_{{\theta_{}}}^{II}[e])^{-1}\le J_{{\theta_{}},II}$, with equality if and only if $J_{{\theta_{}},IN}=0$. If all nuisance parameters ${\theta_{N}}$ are known, then the problem is reduced to a $k$-parameter estimation problem and the CR inequality for ${\theta_{I}}$ becomes $$\label{eq:CRnuisance2} V_{{\theta_{I}}}[{\hat{\theta}_{I}}|e]\ge (J_{{\theta_{}},II}[e])^{-1}.$$ Comparing the CR inequalities in Eqs. (\[eq:CRnuisance\]) and (\[eq:CRnuisance2\]), we can conclude that the two lower bounds are the same if and only if $J_{{\theta_{}},IN}=0$. Otherwise, we cannot ignore the effect of nuisance parameters in estimating the parameters of interest. The presence of nuisance parameters leads to a larger lower bound for the error. Here, we will only concern ourselves with the $A$-optimal design when it comes to nuisance parameter estimation problems. We modify the optimality function to be $\Psi_{W}(J)={\mathrm{Tr}\Big\{WJ^{-1}\Big\}}$ and set the weight matrix as $$W=\left(\begin{array}{cc}W_{I} & 0 \\ 0& 0\end{array}\right),$$ where $W_{I}$ is a $k\times k$ positive matrix. When $W_{I}=I_k$ (the identity matrix on the $k\times k$ sub-block), minimizing $\Psi_{W}(J) $ is equivalent to minimizing ${\mathrm{Tr}\Big\{J_{{\theta_{}}}^{II}[e]\Big\}}$, the $A$-optimality function for the parameters of interest. Similar extensions can be done to define other optimal designs in the presence of nuisance parameters.
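The partial Fisher information above is a Schur complement, and the two facts just stated — that it equals the inverse of the interest block of $J^{-1}$, and that it never exceeds $J_{II}$ — are easy to verify numerically (a sketch with an invented $3\times3$ information matrix; one parameter of interest, $k=1$, and two nuisance parameters):

```python
import numpy as np

# invented Fisher information for theta = (theta_I, theta_N1, theta_N2)
J = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 1.5]])

J_II, J_IN = J[:1, :1], J[:1, 1:]
J_NI, J_NN = J[1:, :1], J[1:, 1:]

# partial Fisher information: Schur complement of the nuisance block
partial = J_II - J_IN @ np.linalg.inv(J_NN) @ J_NI

# it equals the inverse of the interest block of J^{-1} ...
J_upper_II = np.linalg.inv(J)[:1, :1]
assert np.allclose(np.linalg.inv(partial), J_upper_II)

# ... and is no larger than J_II: unknown nuisance parameters can only
# increase the lower bound on the error of the parameter of interest
assert partial[0, 0] <= J_II[0, 0]
```

With $J_{IN}\neq0$, as here, the inequality is strict, reproducing the conclusion that the nuisance parameters cannot be ignored.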
Quantum channel parameter estimation {#sec3} ==================================== Building upon the last section, we now connect the theory of [optimal DoE]{} to quantum process tomography. More specifically, we discuss [optimal DoE]{} for estimating the parameters of a given family of quantum processes, also known as quantum channels. We first list several definitions (axioms) of quantum systems (see, for example, Refs. [@NC; @petz] for details) before formulating [optimal DoE]{} in a quantum setting. Definitions ----------- Q1) A [*quantum system*]{} is represented by a $d$-dimensional complex vector space $\bbc^d$. With the standard inner product, it becomes a Hilbert space denoted by $\cH=\bbc^d$. When the dimension of the system is two, we speak of a “qubit”, the simplest quantum system. To simplify our discussion, we only consider quantum systems with a fixed dimension $d<\infty$.\ Q2) A [*quantum state*]{} is represented by a non-negative matrix $\rho$ on $\cH$ with unit trace. The set of all quantum states on $\cH$ is denoted by $\sofh=\{\rho\,|\,\rho\ge0,{\mathrm{Tr}\{\rho\}}=1 \}$. When we analyze the qubit problem, a convenient representation of qubit states is as follows. Define a bijective map from a qubit state $\rho$, a $2\times2$ Hermitian matrix $\rho=\rho^\dagger$ with unit trace (where $A^\dagger$ denotes the Hermitian conjugate of a complex matrix $A$), to a three-dimensional real vector ${\mathbf{s}}=(s_i)$ via $s_i={\mathrm{Tr}\{\rho\sigma_i\}}$, where $\sigma_i$ ($i=1,2,3$) are the Pauli matrices [@pauli]. $\rho$ is a physical quantum state if and only if $|{\mathbf{s}}|\le1$. This real vector is referred to as the [*Bloch vector representation*]{} of the state $\rho$. A state with a Bloch vector of unit length, i.e., $|{\mathbf{s}}|=1$, is referred to as a [*pure state*]{}, corresponding to the situation where the quantum system is in a definite state. Pure states are the extremal points of the convex state space $\sofh$.\ Q3)
A [*measurement*]{} $\Pi$ on a given quantum state $\rho$ is described as a set of non-negative matrices $\Pi=\{\Pi_x\}_{x\in\cX}$ such that $\sum_{x\in\cX}\Pi_x=I_d$, where $\cX$ is the index set of the measurement outcomes. The probability of observing the outcome $x$ when measuring $\Pi$ on a state $\rho$ is given by [*Born’s rule*]{}, which defines the model function, $$p_\rho(x|\Pi)={\mathrm{Tr}\{\rho\Pi_x\}}.$$ $\Pi$ is often called a [*positive operator-valued measure*]{} (POVM) in the literature. We denote the set of all possible POVMs on $\cH$ by $\cM(\cH)$.\ Q4) A [*quantum channel*]{} (also known as a [*quantum process*]{}) $\cT$ is a linear map from the input quantum state space $\sofh$ to the output state space ${\cal S}({\cal H}')$. We only consider cases where $\cH'=\cH$. Axiomatically, a channel is defined as a completely positive and trace-preserving map [@NC; @petz]. A convenient representation of a quantum channel is the [*Kraus representation*]{}, defined as $$\cT(\rho)=\sum_{k=1}^K E_k \rho E_k^{\dagger},$$ where the Kraus operators $E_k\in\bbc^{d\times d}$ satisfy the trace-preserving condition: $\sum_{k=1}^KE_k^{\dagger} E_k=I_d$. Formulation of the problem -------------------------- We can now formulate the problem of quantum channel parameter estimation in the framework of [optimal DoE]{}. We start with a family of $n$-parameter quantum channels $$M^Q=\{\cT_{{\theta_{}}}\,|\,{\theta_{}}\in{\Theta}\subset\bbr^n\},$$ assuming that ${\theta_{}}\mapsto\cT_{{\theta_{}}}$ is a one-to-one and smooth mapping. The design $e$ is a pair of an input quantum state $\rho\in\sofh$ and a POVM $\Pi\in\cM(\cH)$ on the output quantum state $\cT_{{\theta_{}}}(\rho)$, i.e., $e=(\rho,\Pi)$. The design space is $\cE=\sofh\times\cM(\cH)$.
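The Kraus representation and Born's rule can be combined into a short numerical sketch; the phase-flip channel and the $\sigma_x$-basis measurement below are assumptions chosen purely for illustration:

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def apply_channel(kraus_ops, rho):
    """T(rho) = sum_k E_k rho E_k^dagger (Kraus representation)."""
    return sum(E @ rho @ E.conj().T for E in kraus_ops)

def born_probabilities(rho, povm):
    """Born's rule: p(x) = Tr{rho Pi_x}."""
    return np.array([np.trace(rho @ Pi).real for Pi in povm])

# Illustrative example: a phase-flip channel with flip probability p,
# probed with |+> and measured in the sigma_x basis.
p = 0.2
kraus = [np.sqrt(1 - p) * I2, np.sqrt(p) * sigma_z]
# Trace-preserving condition: sum_k E_k^dagger E_k = I
assert np.allclose(sum(E.conj().T @ E for E in kraus), I2)

plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)    # |+><+|
povm = [0.5 * (I2 + sigma_x), 0.5 * (I2 - sigma_x)]       # PVM along x
probs = born_probabilities(apply_channel(kraus, plus), povm)
assert np.allclose(probs, [1 - p, p])   # the flip shows up directly
```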
The model function $ f $ is given by Born’s rule and the resulting probability distributions are $$p_{{\theta_{}}}(x|e)={\mathrm{Tr}\{\cT_{{\theta_{}}}(\rho) \Pi_x\}},$$ for a given quantum channel and a chosen design $e=(\rho,\Pi)$. Thus, the statistical model is $$M(e)=\{p_{{\theta_{}}}(\cdot|e)\,|\,{\theta_{}}\in{\Theta}\}.$$ We wish to find an optimal design $e_*=e_*(m_*)=(\v{\nu},\v{e})\in\cP(m_*)\times\cE^{m_*}$ that minimizes a properly chosen optimality criterion, i.e., $$\begin{aligned} e_*(m)&=\arg\min_{e(m)}\Psi(J_{{\theta_{}}}[e(m)] ),\nonumber\\ m_*&=\arg\min_{m\in\bbn}\Psi\left(J_{{\theta_{}}}[e_*(m)] \right). \end{aligned}$$ Solving the optimization problem, however, can be difficult because the design is composed of two distinct parts: a state $\rho$ and a measurement $\Pi$. This difficulty can be partially assuaged by introducing the quantum extension of the Cramér-Rao bound [@helstrom; @holevo; @petz], $$\label{eq:qm.crb} V_{{\theta_{}}}[{\hat{\theta}_{}}|e]\ge \Big( J_{{\theta_{}}}[e]\Big)^{-1} \ge \Big( J_{{\theta_{}}}^{QM}[\rho] \Big)^{-1}$$ Here, $J_{{\theta_{}}}^{QM}[\rho]$ is the quantum Fisher information (QFI), which depends only on the input state $\rho$. Just as its classical counterpart is a measure of how much information about a parameter can be extracted from a statistical model, the quantum Fisher information is a measure of how much information about parameters $ {\theta_{}} $ can be extracted from a quantum state. 
We consider only the Symmetric Logarithmic Derivative (SLD) QFI $J_{{\theta_{}}}^{QM}={J_{{\theta_{}}}^{SLD}} $, defined as $$\label{eq:qfi.def} [{J_{{\theta_{}}}^{SLD}}]_{ij} = \tfrac{1}{2}{\mathrm{Tr}\Big\{\cT_{{\theta_{}}}(\rho)(\mathcal{L}_{{\theta_{}},i}\mathcal{L}_{{\theta_{}},j} + \mathcal{L}_{{\theta_{}},j} \mathcal{L}_{{\theta_{}},i})\Big\}},$$ where the quantum score functions $ \mathcal{L}_{{\theta_{}},i} $ are solutions to the equation $$\label{eq:sld.score} \frac{\partial \cT_{{\theta_{}}}(\rho)}{\partial {\theta_{i}}} =\frac12 \mathcal{L}_{{\theta_{}},i} \cT_{{\theta_{}}}(\rho) + \frac12\cT_{{\theta_{}}}(\rho) \mathcal{L}_{{\theta_{}},i}.$$ The matrix inequality in Eq. (\[eq:qm.crb\]) follows from the monotonicity of the SLD QFI under further action of a quantum channel (in this case, that of the POVM, considered as a quantum channel) [@petz96]. The second inequality in Eq. (\[eq:qm.crb\]) cannot, in general, be saturated, but it is useful for deriving a bound for a given optimality function as $$\label{eq:qfi.bound} \Psi( J_{{\theta_{}}}[e(m)])\ge \Psi(J_{{\theta_{}}}^{QM}[(\v{\nu},\v{\rho})])\ge\Psi(J_{{\theta_{}}}^{QM}[(\v{\nu}_*,\v{\rho}_*)]),$$ where $$J_{{\theta_{}}}^{QM}[(\v{\nu},\v{\rho})]\equiv \sum_{i=1}^m\nu_iJ_{{\theta_{}}}^{QM}[\rho_i]$$ and $(\v{\nu}_*,\v{\rho}_*)=\arg\min_{\v{\nu},\v{\rho}} \Psi(J_{{\theta_{}}}^{QM}[(\v{\nu},\v{\rho})])$ optimizes the given optimality criterion. Since this optimization involves input states $\v{\rho}=(\rho_1,\dots,\rho_m)$ only, it is much easier to handle. It is clear that all the previously reviewed methods of finding an optimal design using the Fisher information are also applicable to this optimization problem for the quantum Fisher information.
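The SLD equation above can be solved numerically in the eigenbasis of the output state, where it reduces to $L_{jk}=2(\partial\rho)_{jk}/(p_j+p_k)$. A minimal sketch for a single-parameter qubit family $\rho_\theta=\tfrac12(I+\theta\sigma_z)$, whose SLD QFI is known to be $1/(1-\theta^2)$:

```python
import numpy as np

def sld_operator(rho, drho):
    """Solve drho = (L rho + rho L)/2 for the symmetric logarithmic
    derivative L, working in the eigenbasis of rho."""
    p, U = np.linalg.eigh(rho)
    d = U.conj().T @ drho @ U            # derivative in the eigenbasis
    L = np.zeros_like(d)
    for j in range(len(p)):
        for k in range(len(p)):
            if p[j] + p[k] > 1e-12:
                L[j, k] = 2 * d[j, k] / (p[j] + p[k])
    return U @ L @ U.conj().T

def sld_qfi(rho, drhos):
    """SLD quantum Fisher information matrix from state derivatives."""
    Ls = [sld_operator(rho, d) for d in drhos]
    n = len(Ls)
    J = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            J[i, j] = 0.5 * np.trace(rho @ (Ls[i] @ Ls[j] + Ls[j] @ Ls[i])).real
    return J

# Check against the known answer 1/(1 - theta^2).
theta = 0.6
sigma_z = np.diag([1.0, -1.0]).astype(complex)
rho = 0.5 * (np.eye(2, dtype=complex) + theta * sigma_z)
J = sld_qfi(rho, [0.5 * sigma_z])        # d(rho)/d(theta) = sigma_z / 2
assert abs(J[0, 0] - 1 / (1 - theta**2)) < 1e-10
```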
Note that if the lower bound set by the QFI is saturated by the classical Fisher information, i.e., $J_{{\theta_{}}}[e] = {J_{{\theta_{}}}^{SLD}}[\rho] $, the CR bound can then likewise be saturated, with the MSE matrix attaining the inverse of the Fisher information [@young; @nagaoka87; @bc94]. As in the classical case, the above optimization problem generally yields a *local* optimal design. The solution then depends on the unknown parameter ${\theta_{}}$ in general. We stress again that ${\theta_{}}$-dependent optimal estimation strategies, in particular, ${\theta_{}}$-dependent optimal measurements, are generic in the theory of optimal DoE. In the context of quantum state estimation problems, a number of authors proposed and analyzed adaptive methods to implement such ${\theta_{}}$-dependent POVMs; see for example Refs. [@nagaoka89-2; @HM98; @BNG00; @fujiwara06; @stm12]. Experimental realizations of these adaptive estimation methods have also been an active subject over the last decade; see Refs. [@oioyift12; @mrdfbks13; @ksrhhk13; @hzxlg16; @ooyft17] and also the review article [@zlwjn17] on the subject. Discussions and extensions -------------------------- Several remarks are in order about the extension of [optimal DoE]{} to the quantum setting. First, usually not all input states are realizable in experiments and $\rho$ can only come from a subset of $\sofh$, say ${\cal S}_0$. Similar practical constraints also often apply to the measurement space, and one can consider only a subset of all measurements, $\cM_0\subset\cM(\cH)$. A common restriction is to take $ \cM_0 $ as the set of projective measurements, or projection-valued measures (PVMs), denoted by $\cM_{PVM}$. The design space is then $\cE={\cal S}_0\times \cM_{PVM}$. We also list three variants of possible design spaces that may arise from experimental constraints. 1. If the input state is fixed such that ${\cal S}_0=\{\rho_0\}$, the problem is reduced to that of quantum state estimation.
In this case, we optimize over only the POVM $\v{e}=(\Pi(i))$ and relative frequencies $\v{\nu}=(\nu_i)$. 2. When the measurement $\Pi$ is fixed, on the other hand, we see that the problem becomes one of finding the best set of input states and relative frequencies $\v{\nu}$. One of us (J. Suzuki) has already reported on this problem for the channel-parameter estimation problem in classical information theory [@js16sita]. A general formula for the optimal design for a binary-input two-parameter case is given in the next subsection. Let us briefly go over this problem. We let $J_{{\theta_{}}}[\rho]$ be the Fisher information matrix for an input state $\rho$ with fixed measurement $\Pi$. The Fisher information matrix for the design $e(m)=(\v{\nu},\v{e})$, where $\v{e}=(\rho_1,\rho_2,\dots,\rho_m)$ is now a set of input states, is $$J_{{\theta_{}}}[e(m)]=\sum_i \nu_iJ_{{\theta_{}}}[\rho_i].$$ Since the Fisher information matrix is convex with respect to the input state, i.e., $$\hspace*{1cm} J_{{\theta_{}}}[p \rho_1 + (1 - p)\rho_2]\le p J_{{\theta_{}}}[\rho_1]+(1-p)J_{{\theta_{}}}[\rho_2],$$ $\forall p \in[0,1]$, the optimal input states are pure states. This point is important when dealing with the general optimization case. Since this statement is true for any POVM, optimal input states are always pure states [@fujiwara01]. In other words, we can always restrict to optimal pure input states and then optimize over the POVM. 3. If both the input set ${\cal S}_0\subset\sofh$ and the POVM set $\cM_0\subset\cM(\cH)$ are fixed, we optimize over only the relative frequencies $\v{\nu}=(\nu_1,\nu_2,\dots,\nu_m)$. We will use the standard process tomography setting as an example, where one adopts the design $e_i=(\psi_i,\Pi_i)$. Here, $\psi_i$ are pure states and $\Pi_i$ are the corresponding PVMs, such that $\{e_i\}$ comprises an informationally complete estimation strategy for the quantum channel space. This class of optimal design problems was discussed in Refs. 
[@kwr04; @nunn10]. A convex structure for the design space $\cE$ can be introduced as follows. For the input states $\rho_1,\rho_2$, the convex sum of two states is defined as $\rho_p=p\rho_1+(1-p)\rho_2$, for $p\in[0,1]$, which is still in the set $\sofh$. For two measurements $\Pi(1)=\{\Pi_1,\dots,\Pi_{k_1}\}$ and $\Pi(2)=\{\Pi'_1,\dots,\Pi'_{k_2}\}$, we can define a convex sum as $\Pi_p=p\Pi(1)\bigcup(1-p)\Pi(2)=\{p\Pi_1,\dots,p\Pi_{k_1},(1-p)\Pi'_1,\dots,(1-p)\Pi'_{k_2}\}$. Statistically, this convex sum is equivalent to performing measurement $\Pi(1)$ with probability $p$ and measurement $\Pi(2)$ with probability $1-p$. Such a measurement is called a [*randomized measurement*]{} since it can be realized with (pseudo-)random numbers [@dpp05; @fujiwara06; @yamagata11]. Here we see that the theory of optimal design of experiments unifies previously studied optimization problems in a systematic way. Lastly, we briefly mention possible extensions of the above estimation strategy to other estimation settings. There are several distinct strategies for utilizing quantum resources, such as entangled states, ancilla states, or joint measurements on output states for quantum estimation problems. It is known that these extended estimation strategies can, in general, lower the estimation errors. We have so far not mentioned these methods, but they can also be formulated as optimal experimental design problems. As an example, let us consider an ancilla-assisted estimation strategy. Let $\cH_A$ be the Hilbert space of the ancilla states and $id_{A}$ be the identity map on it. The family of quantum channels to be estimated is expressed as $\{\cT_{{\theta_{}}}\otimes id_A|{\theta_{}}\in{\Theta}\}$ and the input state space is extended to ${\cal S}(\cH\otimes\cH_A)$. Likewise, the measurement space can be extended. Then, the optimization problem takes the same form as before except that the design space is extended.
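The randomized-measurement construction mentioned above can be sketched directly; the check below verifies that the convex sum of two PVMs is again a valid POVM:

```python
import numpy as np

def randomized_povm(pvm1, pvm2, p):
    """Convex sum p*Pi(1) U (1-p)*Pi(2): perform Pi(1) with probability p
    and Pi(2) with probability 1-p."""
    return [p * Pi for Pi in pvm1] + [(1 - p) * Pi for Pi in pvm2]

I2 = np.eye(2, dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
pvm_x = [0.5 * (I2 + sigma_x), 0.5 * (I2 - sigma_x)]
pvm_z = [0.5 * (I2 + sigma_z), 0.5 * (I2 - sigma_z)]

povm = randomized_povm(pvm_x, pvm_z, p=0.5)
# Still a valid POVM: elements are non-negative and sum to the identity.
assert np.allclose(sum(povm), I2)
assert all(min(np.linalg.eigvalsh(Pi)) >= -1e-12 for Pi in povm)
```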
Analytical results ------------------ A closed-form expression for an optimal design cannot usually be obtained analytically except in very special cases. In the following, we briefly discuss some of these special cases. ### Löwner optimal design {#sec-loevner} As mentioned before, the existence of the Löwner optimal design is a special case where a closed-form expression can be derived. Suppose there exists a design $e_*$ that is Löwner optimal and that its expression has been obtained. We show below that mixed strategies do not give any advantage over the i. i. d. strategy for most popular optimality criteria. Consider the i. i. d. strategy using $e_*$ repeatedly. This design dominates any design $e(m)$ in the Löwner order, since any mixed strategy $e_p=\big((p, 1-p), (e_1,e_2)\big)$ for $m=2$ with $e_1,e_2\in\cE$ obeys the inequality $$\begin{aligned} J_{{\theta_{}}}[e_p]&=p J_{{\theta_{}}}[e_1]+(1-p)J_{{\theta_{}}}[e_2]\\ &\le p J_{{\theta_{}}}[e_*]+(1-p)J_{{\theta_{}}}[e_*]=J_{{\theta_{}}}[e_*]. \end{aligned}$$ Therefore, an optimal design for $m=2$ is $e_*(2)=\big((p, 1-p), (e_*,e_*)\big)$. We can repeat this argument to show that $e_*(m)$ has a similar structure, and conclude that an optimal design is the i. i. d. strategy. Next, remember that when a Löwner optimal design exists, it is also optimal for other optimality criteria. Consider an optimality function $\Psi$ satisfying the isotonicity property discussed in Appendix Sec. \[sec-app\_supp2\]. This, together with the argument that the i. i. d. strategy is optimal in the L[ö]{}wner optimal case, tells us that any mixed strategy cannot further minimize the function $\Psi(J_{{\theta_{}}}[e(m)])$. ### Single-parameter family of quantum channels {#sec-1para} When considering a single-parameter family of quantum channels, we can find an optimal solution analytically in the language of the theory of [optimal DoE]{} (see also Refs. [@sm06; @js16pra]).
Let $M^Q=\{\cT_{{\theta_{}}}|{\theta_{}}\in{\Theta}\subset\bbr\}$ be a one-real-parameter family of quantum channels. The design space is $\cE = \sofh\times \cM(\cH)$. Eq. (\[eq:qm.crb\]) then gives $$\begin{aligned} \label{1paraopt2}J_{{\theta_{}}}[(\rho,\Pi)]&\le J^{\sld}_{{\theta_{}}}[\rho]\\ &\le J^{\sld}_{{\theta_{}}}[\rho_*]. \label{1paraopt} \end{aligned}$$ An optimal measurement that attains the first equality \[Eq. (\[1paraopt2\])\] is known [@young; @nagaoka87; @bc94]. The second inequality \[Eq. (\[1paraopt\])\] follows from the maximization of the SLD quantum Fisher information over all input states, $$\rho_*=\arg\max_{\rho\in\sofh}J^{\sld}_{{\theta_{}}}[\rho].$$ Note that the optimizer $\rho_*$ is not unique in general and it can always be a pure state as argued earlier. Hence, we can bound all possible classical Fisher information by the optimal one in Eq. (\[1paraopt\]). This then must be the Löwner optimal design, and it is optimal among all possible designs, including mixed strategies. ### Two-parameter binary-design problem {#sec:twoBin} Let us consider a generic two-parameter binary-design problem, where $M^Q=\{\cT_{{\theta_{}}}\,|\,{\theta_{}}=({\theta_{1}},{\theta_{2}})\in{\Theta}\}$ and the design space has only two elements $\cE=\{e_1,e_2\}$. In the quantum setting, this is equivalent to setting ${\cal S}_0=\{\rho_1,\rho_2\}$ and fixing the POVMs for each output state $\cT_{{\theta_{}}}(\rho_i)$ as $\Pi_i$ ($i=1,2$). We assume that the corresponding statistical model $$M(e_i)=\{p_{{\theta_{}}}(\cdot|e_i)\,|\,{\theta_{}}\in{\Theta}\},$$ is regular. We first analyze the conditions for the existence of the Löwner optimal design here. Let us introduce some notation for our convenience.
Let $J_{i}$ be the Fisher information matrices for the $i$th model $M(e_i)$, i.e., $J_1=J_{{\theta_{}}}[e_1]$ and $J_2=J_{{\theta_{}}}[e_2]$, and define $$\begin{aligned} &T_1={\mathrm{Tr}\{J_1\}},\quad T_2={\mathrm{Tr}\{J_2\}},\\ &D_1={\mathrm{Det}\{J_1\}},\ D_2={\mathrm{Det}\{J_2\}},\ D_{\pm}={\mathrm{Det}\{J_1\pm J_2\}}. \end{aligned}$$ We assume that $J_1, J_2$ are positive definite, i.e., the designs $e_1,e_2$ are regular. A Löwner optimal design exists when a matrix ordering is possible, i.e., $J_1\ge J_2$ (if $J_1\le J_2$, we swap the labeling $1\leftrightarrow 2$). For a symmetric $2\times2$ matrix $A$, which is not equal to the zero matrix, neither $A\ge 0$ nor $A\le 0$ holds if and only if $A$ has two eigenvalues with opposite signs; this is equivalent to ${\mathrm{Det}\{A\}}<0$. Therefore, the following case is the generic one to be analyzed, $$\label{2paracond} D_-={\mathrm{Det}\{J_1-J_2\}} <0,$$ which, if satisfied, indicates that there is no Löwner optimal design. The $A$- and $D$-optimal designs for this two-parameter binary-design problem can be found analytically. [*$D$-optimal design*]{}. For two given designs, the optimization problem is equivalent to finding the optimal relative frequency $\v{\nu}=(\nu_1,\nu_2)$ such that ${\mathrm{Det}\{\nu_1J_1+\nu_2J_2\}}$ is maximized. When the optimal $\v{\nu}_*$ is located at extremal points, i.e., either $(1,0)$ or $(0,1)$, an optimal design is the i. i. d. strategy. This is because using any mixed strategy cannot further maximize the function ${\mathrm{Det}\{\nu_1J_1+\nu_2J_2\}}$. We parametrize $\nu_1=\frac{1}{2}(1+\lambda)$ and $\nu_2=\frac{1}{2}(1-\lambda)$, with $\lambda\in[-1,1]$, and define the function $$\begin{aligned} \gamma_{{\theta_{}}}(\lambda)&=4{\mathrm{Det}\{\nu_1J_1+\nu_2J_2\}}\\ &={\mathrm{Det}\{J_1+J_2+\lambda (J_1-J_2)\}}\\ &=D_-\lambda^2+2 (D_1-D_2)\lambda +D_+.
\end{aligned}$$ Since we are considering the case where there is no Löwner optimal design, condition (\[2paracond\]) needs to be imposed. The $D$-optimal design is then found by maximizing the quadratic function $ \gamma_{{\theta_{}}}(\lambda) $, yielding $$\max_{\lambda\in[-1,1]}\gamma_{{\theta_{}}}(\lambda)= \begin{cases} \gamma_{{\theta_{}}}(\lambda^*)\qquad (\textrm{if }|D_1-D_2|<-D_-),\\[1ex] \max\{ \gamma_{{\theta_{}}}(1),\gamma_{{\theta_{}}}(-1)\}\quad (\textrm{otherwise}), \end{cases}$$ with $\lambda^*=-(D_1-D_2)/D_-$. The optimal design is then $\v{\nu}_*=\big(\frac{1}{2}(1+\lambda^*),\frac{1}{2}(1-\lambda^*)\big)$ when $|D_1-D_2|<-D_-$ is satisfied; otherwise, the optimal design is extremal, $\v{\nu}=(1,0)$ or $(0,1)$, depending on $\arg\max\{ \gamma_{{\theta_{}}}(1),\gamma_{{\theta_{}}}(-1)\}$. In the latter case, an optimal design is the i. i. d. strategy as mentioned earlier. However, the relation $ \gamma_{{\theta_{}}}(1)\ge\gamma_{{\theta_{}}}(-1)\Leftrightarrow D_1\ge D_2\Leftrightarrow {\mathrm{Det}\{J_{{\theta_{}}}[e_1]\}}\ge{\mathrm{Det}\{J_{{\theta_{}}}[e_2]\}}$ indicates that an optimal design at $\theta$ may depend on the unknown value of $\theta$ in general. This is the typical behavior of local optimal designs. Note that the case where both $e_1$ and $e_2$ are singular designs can also be treated similarly. In that case, $D_1=D_2=0$, which simplifies $ \gamma_{{\theta_{}}}(\lambda) $, giving $\gamma_{{\theta_{}}}(\lambda)=D_-\lambda^2+D_+$. The optimal design in this case is then $\v{\nu}_*=(1/2,1/2)$. [*$A$-optimal design*]{}. We now consider $A$-optimal designs with a weight matrix $W>0$.
We now define the function (of $\lambda$) $ \gamma_{{\theta_{}}}[W](\lambda) $, dependent on both $W$ and ${\theta_{}}$, as $$\label{eq:gamfun.defn} \gamma_{{\theta_{}}}[W](\lambda)={\mathrm{Tr}\Big\{W{\left[\tfrac{1}{2}(1+\lambda)J_1+\tfrac{1}{2}(1-\lambda)J_2\right]}^{-1}\Big\}}.$$ We can set a lower bound for the $A$-optimal design as $$\Psi_A(e(2))\ge \gamma_{{\theta_{}}}^*[W]= \min_{\lambda\in[-1,1]}\gamma_{{\theta_{}}}[W](\lambda).$$ For $2\times2$ matrices, the trace of the inverse can be written over the determinant, $$\gamma_{{\theta_{}}}[W](\lambda)=\frac{4N(\lambda)}{\gamma_{{\theta_{}}}(\lambda)},$$ where the numerator $N(\lambda)$ is linear in $\lambda$ and satisfies $N(1)=D_1\gamma_{{\theta_{}}}[W](1)$ and $N(-1)=D_2\gamma_{{\theta_{}}}[W](-1)$. The following result is our contribution: for a two-parameter binary-design problem, when condition (\[2paracond\]) holds, the bound for the $A$-optimal design is given by (from straightforward, though lengthy, calculations) $$\label{2pararesult} \gamma^*_{{\theta_{}}}[W]= \begin{cases} \min\{\gamma_{{\theta_{}}}[W](1),\gamma_{{\theta_{}}}[W](-1)\} & \big(\textrm{if } N(1)=N(-1) \textrm{ and } |D_1-D_2|-|D_-|>0\big),\\[1ex] \dfrac{4N(1)}{\gamma_{{\theta_{}}}(\lambda^*)} & \big(\textrm{if } N(1)=N(-1) \textrm{ and } |D_1-D_2|-|D_-|\le0\big),\\[1ex] \gamma_{{\theta_{}}}[W](\lambda_*) & \big(\textrm{if } N(1)\neq N(-1) \textrm{ and } |\lambda_*|\le1\big),\\[1ex] \min\{\gamma_{{\theta_{}}}[W](1),\gamma_{{\theta_{}}}[W](-1)\} & (\textrm{otherwise}), \end{cases}$$ with $\lambda^*=-(D_1-D_2)/D_-$ as in the $D$-optimal case. Here, $\lambda_\pm$ ($\lambda_+\ge\lambda_-$) are the roots of the quadratic equation $$D_-\lambda^2+2(D_1-D_2)\lambda+D_+=0,$$ at which $N(\lambda_\pm)\ge0$, and $\lambda_*$, characterizing the optimal mixture, is given by $$\label{eq:opt.lambda} \lambda_*=\frac{\sqrt{N(\lambda_-)}\,\lambda_{+}+\sqrt{N(\lambda_+)}\,\lambda_{-} }{\sqrt{N(\lambda_+)}+\sqrt{N(\lambda_-)}}.$$ Note that, in Eq. (\[2pararesult\]), the first two cases are special ones, since $N(1)=N(-1)$, i.e., $D_1 \gamma_{{\theta_{}}}[W](1)=D_2\gamma_{{\theta_{}}}[W](-1)$, holds only for specific choices of the weight matrix; the numerator is then constant in $\lambda$ and the minimization reduces to the $D$-optimal maximization of $\gamma_{{\theta_{}}}(\lambda)$. The third case is when the mixed estimation strategy brings the estimation error below that of the i. i. d. strategy. The last case is when an optimal design is located at extremal points. Examples {#sec4} ======== In this section, we analyze families of qubit channels as examples to illustrate our findings and point out special features.
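Before turning to the examples, the binary-design recipes of the previous subsection can be checked numerically; the $2\times2$ Fisher matrices below are made up purely for illustration:

```python
import numpy as np

def det_gamma(lam, J1, J2):
    """gamma_theta(lambda) = 4 Det{nu1 J1 + nu2 J2}, nu_{1,2} = (1±lambda)/2."""
    return 4 * np.linalg.det(0.5 * (1 + lam) * J1 + 0.5 * (1 - lam) * J2)

def trace_gamma(lam, J1, J2, W):
    """gamma_theta[W](lambda) = Tr{W [nu1 J1 + nu2 J2]^{-1}}."""
    return np.trace(W @ np.linalg.inv(0.5 * (1 + lam) * J1
                                      + 0.5 * (1 - lam) * J2))

# Made-up Fisher matrices with Det{J1 - J2} < 0 (no Loewner optimal design):
J1 = np.array([[3.0, 0.0], [0.0, 0.5]])
J2 = np.array([[0.5, 0.0], [0.0, 1.0]])
assert np.linalg.det(J1 - J2) < 0

# D-optimal mixture: interior maximum at lambda* = -(D1 - D2)/D-.
D1, D2 = np.linalg.det(J1), np.linalg.det(J2)
Dm = np.linalg.det(J1 - J2)
lam_D = -(D1 - D2) / Dm
assert abs(lam_D) <= 1                       # |D1 - D2| < -D- here
grid = np.linspace(-1, 1, 20001)
assert np.isclose(lam_D,
                  grid[np.argmax([det_gamma(l, J1, J2) for l in grid])],
                  atol=1e-3)

# A-optimal mixture: minimize trace_gamma numerically over lambda.
W = np.eye(2)
lam_A = grid[np.argmin([trace_gamma(l, J1, J2, W) for l in grid])]
# The optimal mixture beats both i.i.d. strategies:
assert trace_gamma(lam_A, J1, J2, W) < min(trace_gamma(1, J1, J2, W),
                                           trace_gamma(-1, J1, J2, W))
```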
As a benchmark, we compare the [optimal DoE]{} strategies with that of a simple and commonly used quantum process tomography scheme built upon the Pauli operators $\{\sigma_i\}_{i=1}^3$ (we denote this as Pauli-QPT): For each $i=1,2,3$, we send, as input to the channel, the eigenstates of $\sigma_i$, $\rho_\pm(i)$ with $\pm1$ eigenvalues, and then perform the projective measurement $\Pi(i)=\{ (I_d\pm\sigma_i)/2\}$ on the output state, choosing each setting with uniform probability. For the channels discussed in this section, it happens that $\rho_\pm(i)$ for each $i$ give the exact same Fisher information for the projective measurement. Thus, we only consider the $+1$ eigenstates, each used with probability $1/3$. Linear scaling channel {#sec4-1} ---------------------- Let us start with the simplest example in which we consider a three-parameter family of qubit channels specified as $$\label{eq:scaling.chan} \cT_{{\theta_{}}}(\rho) \leftrightarrow {\mathbf{s}}_{{\theta_{}}}= \begin{pmatrix} {\theta_{1}}s_1 \\ {\theta_{2}}s_2 \\ {\theta_{3}}s_3 \end{pmatrix}.$$ Here, ${\mathbf{s}}_{{\theta_{}}}$ is the Bloch vector of the output state $\cT_{{\theta_{}}}(\rho)$, ${\mathbf{s}}=(s_1,s_2,s_3)^\mathrm{T}$ is that of the input state $\rho$, and $\theta\in\Theta=\{ \theta\in\bbr^3| \sum_{i=1}^3 |\theta_i|^2\le 1\}$. Each member of this family of channels linearly scales the Bloch components of the input state. Some authors refer to this channel as a generalized Pauli channel [@po09; @bh10; @bh11]. Note that, following the procedure in Appendix Sec. \[sec-app\_lowner\], we can show that there is no Löwner optimal design.
For this example, the SLD quantum Fisher information matrix for the input state $ {\mathbf{s}}=(s_1, s_2, s_3)^\mathrm{T}$ can be written as $$\label{eq:scaling.qfi} {J_{{\theta_{}}}^{SLD}}[\rho] = D(s)^{1/2}\left[ I + \frac{{\mathbf{s}}_{{\theta_{}}} {\mathbf{s}}_{{\theta_{}}}^{\hspace{0.6mm}T} }{1 - {\mathbf{s}}_{{\theta_{}}}^{\hspace{0.6mm}T}{\mathbf{s}}_{{\theta_{}}}} \right]D(s)^{1/2},$$ where $ D(s) $ is the positive-semidefinite matrix $\textrm{diag}(s_1^2,s_2^2, s_3^2) $. The convex structure of the problem means that an optimal design is composed of extremal points of the space of SLD quantum Fisher information matrices for all possible input states. These extremal points are rank-1 matrices. From Eq. (\[eq:scaling.qfi\]), it is clear that a rank-1 $ {J_{{\theta_{}}}^{SLD}} $ must have a rank-1 $ D(s) $. This corresponds to having $ {\mathbf{s}} $ in the 1, 2, or 3 direction, i.e., along the Bloch vectors of the eigenstates of the Pauli operators. For such input states, projective measurements allow for designs that saturate the (singular, rank-1) lower bound set by the SLD quantum Fisher information, i.e., $ J_{{\theta_{}}}[e_i] = {J_{{\theta_{}}}^{SLD}}[e_i] $ for $ i = 1, 2, 3 $. Such projective measurements correspond to measurement operators defined by $\Pi_{i,\pm} = \frac{1}{2}( I_d \pm \sigma_i)$; the eigenstate and the projective Pauli measurement together make the design $ e_i $. An optimal design is hence composed as a mixture of the three Pauli settings $\bigl(e_i=(\rho(i),\Pi(i))\bigr)$ with relative frequencies $\v{\nu}=(\nu_1,\nu_2,\nu_3)$, giving the Fisher information matrix $$J_{{\theta_{}}}[e(3)]=\mathrm{diag.}{\left(\frac{\nu_1}{1-{\theta_{1}}^2},\frac{\nu_2}{1-{\theta_{2}}^2},\frac{\nu_3}{1-{\theta_{3}}^2}\right)}.$$ By optimizing the $\v{\nu}$ degree of freedom, one can find the best estimation strategy, according to the desired optimality criterion.
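Given the diagonal Fisher information above, optimizing the frequencies $\v{\nu}$ for, say, $A$-optimality can be sketched by direct search over the probability simplex; the channel parameters below are made up, and the comparison value $\bigl(\sum_i\sqrt{1-\theta_i^2}\bigr)^2$ anticipates the closed-form optimum of ${\mathrm{Tr}\{J^{-1}\}}$ derived in the following:

```python
import numpy as np

def avar(theta, nu):
    """Tr{J^{-1}} for J = diag(nu_i / (1 - theta_i^2))."""
    return float(np.sum((1 - np.asarray(theta)**2) / np.asarray(nu)))

theta = [0.1, 0.5, 0.8]                 # made-up channel parameters
rng = np.random.default_rng(0)
best_nu = min(rng.dirichlet(np.ones(3), size=20000),
              key=lambda nu: avar(theta, nu))
# The best frequencies found beat the uniform (Pauli-QPT) choice ...
assert avar(theta, best_nu) < avar(theta, [1/3, 1/3, 1/3])
# ... and approach the analytical optimum (sum_i sqrt(1-theta_i^2))^2:
opt = np.sum(np.sqrt(1 - np.asarray(theta)**2))**2
assert opt <= avar(theta, best_nu) < 1.02 * opt
```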
Observe that the Pauli-QPT design $e_{PT}$ is the case where $\v{\nu}=(1/3,1/3,1/3)$, so that $$J_{{\theta_{}}}[e_{PT}]=\frac13 \mathrm{diag.}{\left(\frac{1}{1-{\theta_{1}}^2},\frac{1}{1-{\theta_{2}}^2},\frac{1}{1-{\theta_{3}}^2}\right)}.$$ As an example, let us find the $\gamma$-optimal design for $\gamma>0$. The application of Jensen’s inequality and the concavity of $x^{1/(1+\gamma)}$ for $\gamma>0$ gives $$\begin{aligned} &\min_{p\in\cP(m)}\sum_{i=1}^m{\left(\frac{a_i}{p_i}\right)}^\gamma= \biggl(\sum_{i=1}^ma_i^{\frac{\gamma}{1+\gamma}}\biggr)^{1+\gamma} ,\\[1ex] \textrm{attained at}\quad &(p_*)_i= a_i^{\frac{\gamma}{1+\gamma}}\Big/\sum_j a_j^{\frac{\gamma}{1+\gamma}} , \end{aligned}$$ for any positive $m$-dimensional vector $a=(a_i)\in\bbr_+^m$. Here, $\cP(m)$ denotes the set of $m$-event (positive) probability distributions. This immediately solves the optimization problem at hand and yields the $\gamma$-optimal solution: $$\min_{\nu\in\cP(3)} \left\{\frac13{\mathrm{Tr}\Big\{J_{{\theta_{}}}[e(3)]^{-\gamma}\Big\}} \right\}^{1/\gamma} =\Big[\sum_i (\frac{1-{\theta_{i}}^2}{3})^{\frac{\gamma}{1+\gamma}}\Big]^{\frac{1+\gamma}{\gamma}}.$$ As an example, the $A$-optimal design ($\gamma=1$) $e_*(3)$ has $$\nu_i=\frac{\sqrt{1-{\theta_{i}}^2}}{\sum_{j} \sqrt{1-{\theta_{j}}^2}}.$$ Observe that ${\mathrm{Tr}\Big\{J_{{\theta_{}}}[e_{PT}]^{-1}\Big\}}\ge{\mathrm{Tr}\Big\{J_{{\theta_{}}}[e_{*}(3)]^{-1}\Big\}}$ always holds, with equality if and only if ${\theta_{1}}={\theta_{2}}={\theta_{3}}$, the case of an isotropic channel. Pauli channel ------------- The [*Pauli channel*]{} for the qubit is defined by $$\label{eq:PauliChannel} \cT_{{\theta_{}}}(\rho)=(1-\sum_{i}{\theta_{i}}) \rho+\sum_{i=1,2,3}{\theta_{i}}\sigma_i\rho\sigma_i,$$ where the channel parameters ${\theta_{}}=({\theta_{1}},{\theta_{2}},{\theta_{3}})$ are all positive and their sum is less than one ($1-\sum_{i}{\theta_{i}}>0$).
In the Bloch vector representation, a state ${\mathbf{s}}$ is transformed as $$\cT_{{\theta_{}}}:\, {\mathbf{s}}\mapsto {\mathbf{s}}_{{\theta_{}}}= (\xi_1({\theta_{}})s_1,\xi_2({\theta_{}})s_2,\xi_3({\theta_{}})s_3) ^\mathrm{T},$$ where $\xi_i({\theta_{}})=1+2{\theta_{i}}-2\sum_{j}{\theta_{j}}$. Thus, the Pauli channel can be regarded as a different coordinate system representation (under an affine coordinate transformation) of the linear scaling channel. Therefore, a Löwner optimal design for this Pauli channel problem cannot exist, as was the case for the linear scaling channel. It is nevertheless interesting to see how the optimal design depends on the parameterization of the channel family. Following the same reasoning as before, an optimal design $e_*(3)$ is again a mixture of the Pauli settings $e_i=(\rho(i),\Pi(i))$ for $i=1,2,3$, with relative frequencies $\v{\nu}=(\nu_i)$. Its Fisher information matrix is given by $$J_{{\theta_{}}}[e(3)]=\sum_{i=1,2,3}\frac{4\nu_i}{1-(\xi_i)^2}{\mathbf{u}}_i {\mathbf{u}}_i^\mathrm{T},$$ with ${\mathbf{u}}_1=(0,1,1)^\mathrm{T}, {\mathbf{u}}_2=(1,0,1)^\mathrm{T}$, and ${\mathbf{u}}_3=(1,1,0)^\mathrm{T}$, three non-orthogonal vectors. Unlike the linear scaling example, the analytical formula for $\gamma$-optimality here cannot be expressed as a closed-form solution in general. 
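In the absence of a general closed form, one can fall back on direct numerical search over the frequencies. A sketch using the Fisher information expression above (the channel parameters are made up for illustration):

```python
import numpy as np

# Fisher information J = sum_i 4 nu_i/(1 - xi_i^2) u_i u_i^T from above.
U = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])

def fisher(xi, nu):
    return sum(4 * nu[i] / (1 - xi[i]**2) * np.outer(U[i], U[i])
               for i in range(3))

theta = np.array([0.02, 0.05, 0.35])      # made-up channel parameters
xi = 1 + 2 * theta - 2 * theta.sum()

rng = np.random.default_rng(0)
samples = rng.dirichlet(np.ones(3), size=8000)
uniform = np.array([1/3, 1/3, 1/3])       # the Pauli-QPT frequencies

# A-optimality: minimize Tr{J^{-1}} by direct search; the uniform
# frequencies are suboptimal for an anisotropic channel.
nu_A = min(samples, key=lambda nu: np.trace(np.linalg.inv(fisher(xi, nu))))
assert np.trace(np.linalg.inv(fisher(xi, nu_A))) \
    < np.trace(np.linalg.inv(fisher(xi, uniform)))

# D-optimality: maximize Det{J}; here the uniform mixture is the optimum.
nu_D = max(samples, key=lambda nu: np.linalg.det(fisher(xi, nu)))
assert np.linalg.det(fisher(xi, nu_D)) <= np.linalg.det(fisher(xi, uniform)) + 1e-9
assert np.linalg.det(fisher(xi, nu_D)) >= 0.98 * np.linalg.det(fisher(xi, uniform))
```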
The $A$-optimal ($\gamma=1$) solution, however, can be found as $$\min_{\v{\nu}}{\mathrm{Tr}\Big\{J_{{\theta_{}}}[e(3)]^{-1}\Big\}}=\frac{3}{16}{\biggl[\sum_i\sqrt{1-(\xi_i)^2}\biggr]}^2.$$ The corresponding $A$-optimal design is given by the mixture of the Pauli settings with the optimal relative frequencies $$\nu_i=\frac{\sqrt{1-(\xi_i)^2}}{\sum_{j} \sqrt{1-(\xi_j)^2}}.$$ The $D$-optimal ($\gamma\rightarrow 0$) solution, on the other hand, is given by $$\begin{aligned} \min_{\v{\nu}}{\mathrm{Det}\{J_{{\theta_{}}}[e(3)]^{-1}\}} &=\min_{\v{\nu}}2^{-8}\prod_i\frac{1-(\xi_i)^2}{\nu_i}\\ &= 2^{-8}3^{3} \prod_i{\left(1-(\xi_i)^2\right)}. \end{aligned}$$ This $D$-optimal design coincides with the Pauli-QPT design, with $\v{\nu}_*=(1/3,1/3,1/3)$. This simple example shows that different optimality criteria result in different optimal designs. Detecting noise asymmetry {#sec5-3} ------------------------- We now turn to an example where a nuisance parameter arises naturally. We consider a two-parameter family of Pauli channels as in Eq. (\[eq:PauliChannel\]), all with ${\theta_{3}}=0$. It is convenient to use a different parameterization to describe the family, $$\label{eq:noise.params} \begin{aligned} {\vartheta_{1}} &\equiv {\theta_{1}} - {\theta_{2}}, \\ {\vartheta_{2}} &\equiv 1-({\theta_{1}} + {\theta_{2}}). \end{aligned}$$ Here, ${\vartheta_{1}}\in[-1,1]$ and ${\vartheta_{2}}\in[0,1]$, with ${\vartheta_{2}}\leq 1-|{\vartheta_{1}}|$ [@footnote2]. ${\vartheta_{1}}$ is the *asymmetry* of the channel, characterizing the imbalance between the strength of the $\sigma_1$ and $\sigma_2$ Kraus operators; $1-{\vartheta_{2}}={\theta_{1}}+{\theta_{2}}$ describes the deviation of the channel from the identity operation.
Denoting the channel as $\cT_{{\vartheta_{}}}$, its action on the state Bloch vector is $$\cT_{{\vartheta_{}}}(\rho)\leftrightarrow {\mathbf{s}}_{{\vartheta_{}}}= {\left( \begin{array}{c} ({\vartheta_{1}}+{\vartheta_{2}})s_1\\ (-{\vartheta_{1}}+{\vartheta_{2}})s_2\\ (2{\vartheta_{2}}-1)s_3 \end{array} \right)}.$$ Viewing the Pauli channel as noise, $1-{\vartheta_{2}}$ is the noise strength, and ${\vartheta_{1}}$ is the noise asymmetry. $ {\vartheta_{1}} $ is a practically useful quantity in the control of noise in quantum information processing. Noise with a large asymmetry can be mitigated more efficiently by first reducing the noise asymmetry with a small error-correcting code, before a more resource-intensive code that does not pay attention to the asymmetry is used to reduce the overall noise strength. Within this context, we are interested in estimating the asymmetry ${\vartheta_{1}}$; ${\vartheta_{2}}$ is treated as a nuisance parameter. The task is to discover the optimal design for estimating ${\vartheta_{1}}$. ### Optimal design problem {#sec5-3-1} Following the notation from Sec. \[sec2-3\], our two-parameter problem ${\vartheta_{}}=({\vartheta_{1}},{\vartheta_{2}})$ is split into the parameter of interest, ${\vartheta_{I}}= {\vartheta_{1}}$, and the nuisance parameter ${\vartheta_{N}}={\vartheta_{2}}$. The presence of a nuisance parameter complicates the formal solution of the optimal design problem, compared to a full channel characterization. For a mixed strategy with $m$ partitions of the total input states, Carathéodory’s theorem mentioned in Sec. \[sec2-5\] ensures that an optimal design can be found for $m\le 7$. For a given $m$, we simplify the search for an optimal design $e_\mathrm{opt}=(\rho_\mathrm{opt},\Pi_\mathrm{opt})$ with a two-step approach. We first optimize the SLD Fisher information over $\rho$, which fixes $\rho_\mathrm{opt}$, and then optimize the classical Fisher information over $\Pi$ for the chosen $\rho_\mathrm{opt}$. 
Once we have found the optimal designs for each $ m $, we then compare them to find the true optimal design for estimating the noise asymmetry. Focusing on closed analytical forms for the optimal design, we will work up to $m=2$ only. For $m=3$, we analyze the specific case of Pauli-QPT, and compare its performance with that of the $m=2$ optimal design, as an indication of how much benefit one might expect from increasing $m$. We make one further simplifying assumption, that $s_3=0$ for $\rho_\mathrm{opt}$. This is reasonable, given that the transformation of $s_3$ under $\cT_{{\vartheta_{}}}$ does not involve the parameter of interest ${\vartheta_{1}}$. This significantly simplifies our analysis here. With $s_3=0$, straightforward algebra gives $${J_{{\vartheta_{}}}^{SLD}}=\frac{\left(s_1^2{\mathbf{v}}_1{\mathbf{v}}_1^\mathrm{T}+s_2^2{\mathbf{v}}_2{\mathbf{v}}_2^\mathrm{T}-2s_1^2s_2^2{\mathbf{u}}{\mathbf{u}}^\mathrm{T}\right)}{g(s_1,s_2)}\,,$$ where ${{\mathbf{v}}_1=(1,1)^\mathrm{T}/\sqrt{2}}$, ${{\mathbf{v}}_2=(1,-1)^\mathrm{T}/\sqrt{2}}$, ${{\mathbf{u}}=(-{\vartheta_{2}},{\vartheta_{1}})^\mathrm{T}}$, and ${g(s_1,s_2)= \frac{1}{2}(1-|{\mathbf{s}}_{{\vartheta_{}}}|^2)}$. This SLD quantum Fisher information has determinant $$\mathrm{Det}({J_{{\vartheta_{}}}^{SLD}}[\rho])=\frac{2s_1^2s_2^2}{g(s_1,s_2)},$$ which vanishes whenever $s_1s_2=0$. For the $m=1$ case, the local optimal design can be found by looking for $e(1)$ that satisfies $$\label{eq:loc_sing_opt} \min_{s_1,s_2:\,\sum_is_i^2=1} {\left({J_{{\vartheta_{}}}^{SLD}}[e(1)]^{-1}\right)}_{11}=\min \{ f_1^2\,,\,f_2^2 \} ,$$ where we define ${ f_{1,2} = \frac{1}{2}\sqrt{1-({\vartheta_{1}}\pm{\vartheta_{2}})^2}}$, and the inverse is understood as the generalized inverse. The solution is the singular design $(\rho_\pm(1),\Pi(1))$ if $f_1^2< f_2^2 \Leftrightarrow {\vartheta_{1}}>0$; it is $(\rho_\pm(2),\Pi(2))$ otherwise.
Note that which design is optimal depends on the sign of the noise asymmetry ${\vartheta_{1}}$, which is unknown in advance. Furthermore, in either situation, the optimal design cannot extract the actual value of ${\vartheta_{1}}$ since the resulting probability distribution depends only on ${\vartheta_{1}}+{\vartheta_{2}}$ in the case of $(\rho_\pm(1),\Pi(1))$, or ${\vartheta_{1}}-{\vartheta_{2}}$ in the case of $(\rho_\pm(2),\Pi(2))$. Therefore, we exclude this singular case. We next analyze the case for regular designs. With $\mathrm{Det}({J_{{\vartheta_{}}}^{SLD}}[\rho])\neq 0$, the inverse exists, and $$\label{eq:asymm1} {\left({J_{{\vartheta_{}}}^{SLD}}[e(1)]^{-1}\right)}_{11}=\frac{1}{4}{\left(\frac{1}{s_1^2}+\frac{1}{s_2^2}\right)}-{\vartheta_{1}}^2.$$ Eq. is minimized under the constraint $s_1^2+s_2^2\leq 1$ when $s_1^2=s_2^2=\frac{1}{2}$, taking the value $1-{\vartheta_{1}}^2$. This SLD CR bound can be saturated by the projective measurement along the directions perpendicular to the optimal input Bloch vector $s_1^2=s_2^2=\frac{1}{2}$ [@footnote3]. Moving on to the $ m = 2 $ case, we look to build a convex structure from rank-1 Fisher information matrices as before. Taking $ {\mathbf{s}} $ to be either in the 1 or 2 direction ensures that $ \mathrm{Det}({J_{{\vartheta_{}}}^{SLD}}[\rho])=0 $, i.e., each matrix is rank-1. This suggests a mixed strategy with the two possible input states $ {\mathbf{s}}^{(1)} = (s_1, 0, 0)^\mathrm{T} $ and $ {\mathbf{s}}^{(2)}= (0, s_2,0)^\mathrm{T}$. The SLD quantum Fisher information of the mixture is the convex sum of the individual Fisher information matrices, $$\label{eq:asymm.convex.qfi} {J_{{\vartheta_{}}}^{SLD}}[e(2)] = \nu_1 \frac{2{s_1}^2{\mathbf{v}}_1 {\mathbf{v}}_1^\mathrm{T}}{1-{s_1}^2({\vartheta_{1}} + {\vartheta_{2}})^2} + \nu_2 \frac{2{s_2}^2{\mathbf{v}}_2{\mathbf{v}}_2^\mathrm{T}}{1-{s_2}^2({\vartheta_{1}} - {\vartheta_{2}})^2},$$ where $ \nu_1 + \nu_2 = 1 $.
Then, $$\begin{aligned} &{\left({J_{{\vartheta_{}}}^{SLD}}[e(2)]^{-1}\right)}_{11}\\ =& \frac{1}{4s_1^2\nu_1}{\left[1-s_1^2({\vartheta_{1}}+{\vartheta_{2}})^2\right]}+\frac{1}{4s_2^2\nu_2}{\left[1-s_2^2({\vartheta_{1}}-{\vartheta_{2}})^2\right]} \nonumber, \end{aligned}$$ which is minimized when $s^2_1=s^2_2=1$, i.e., ${\mathbf{s}}^{(1)}$ and ${\mathbf{s}}^{(2)}$ correspond to pure states. We then have, for these pure-state choices of ${\mathbf{s}}^{(1)}$ and ${\mathbf{s}}^{(2)}$, $$\begin{aligned} \left({J_{{\vartheta_{}}}^{SLD}}[e(2)]^{-1}\right)_{11} &= \frac{f_1^2}{\nu_1} + \frac{f_2^2}{\nu_2}. \label{eq:asymm.bin.qfi}\end{aligned}$$ As the inverse of the SLD quantum Fisher information matrix provides a lower bound for the MSE, which is attainable in our setting, we can already compare the lower bounds set by the $ m = 1$ and $m= 2$ designs. Taking the difference between Eqs.  (with the optimal setting of $s_1^2=s_2^2=\frac{1}{2}$) and , and setting $\nu_{1,2}= \frac{1}{2}(1\pm\lambda)$ for $\lambda\in[-1,1]$, we have $$\begin{aligned} \Delta{{J_{{\vartheta_{}}}^{SLD}}}^{-1} &= \left({J_{{\vartheta_{}}}^{SLD}}[e(1)]^{-1}\right)_{11} - \left({J_{{\vartheta_{}}}^{SLD}}[e(2)]^{-1} \right)_{11}\nonumber \\ &=1-{\vartheta_{1}}^2-{\left[\frac{f_1^2}{\nu_1}+\frac{f_2^2}{\nu_2}\right]}\nonumber\\ &=\frac{1}{4\nu_1\nu_2}{\left[{\vartheta_{2}}^2 - 2\lambda{\vartheta_{1}}{\vartheta_{2}}-\lambda^2(1-{\vartheta_{1}}^2)\right]}. \end{aligned}$$ For $ \lambda = 0 $, i.e., $\nu_1=\nu_2=\frac{1}{2}$, we get $\Delta{{J_{{\vartheta_{}}}^{SLD}}}^{-1} ={\vartheta_{2}}^2\geq 0$, regardless of the values of ${\vartheta_{1}}$ and ${\vartheta_{2}}$. This means that for all values of ${\vartheta_{1}}$ and ${\vartheta_{2}}$, there exists some $\nu_1$ and $\nu_2$ such that $\Delta{{J_{{\vartheta_{}}}^{SLD}}}^{-1} \geq 0$ (e.g., $\nu_1=\nu_2=\frac{1}{2}$).
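The algebra in this step can be verified directly. The self-contained sketch below (our own; the parameter and weight values are illustrative) builds the $m=2$ information matrix as a convex sum of the two single-direction rank-1 matrices obtained from the closed form of ${J_{{\vartheta_{}}}^{SLD}}$, and checks both the quoted $(J^{-1})_{11}$ expression and the final closed form of $\Delta{{J_{{\vartheta_{}}}^{SLD}}}^{-1}$:

```python
import numpy as np

def sld_info(th1, th2, s1, s2):
    # Closed-form SLD information for input Bloch vector (s1, s2, 0)
    v1 = np.array([1.0, 1.0]) / np.sqrt(2.0)
    v2 = np.array([1.0, -1.0]) / np.sqrt(2.0)
    u = np.array([-th2, th1])
    g = 0.5 * (1.0 - ((th1 + th2) * s1)**2 - ((-th1 + th2) * s2)**2)
    return (s1**2 * np.outer(v1, v1) + s2**2 * np.outer(v2, v2)
            - 2 * s1**2 * s2**2 * np.outer(u, u)) / g

th1, th2 = 0.1, 0.3          # sample (assumed) channel parameters
nu1, nu2 = 0.7, 0.3
f1sq = 0.25 * (1 - (th1 + th2)**2)
f2sq = 0.25 * (1 - (th1 - th2)**2)

# m=2 design: mix the two pure single-direction input states
J2 = nu1 * sld_info(th1, th2, 1.0, 0.0) + nu2 * sld_info(th1, th2, 0.0, 1.0)
print(np.linalg.inv(J2)[0, 0], f1sq / nu1 + f2sq / nu2)   # both 1.1

# difference from the best m=1 design: direct evaluation vs closed form
lam = nu1 - nu2
delta = (1 - th1**2) - (f1sq / nu1 + f2sq / nu2)
closed = (th2**2 - 2 * lam * th1 * th2 - lam**2 * (1 - th1**2)) / (4 * nu1 * nu2)
print(delta, closed)                                      # both -0.11
```
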
From this, we can conclude that it suffices to rule out the $m=1$ case for the optimal design, as $\left({J_{{\vartheta_{}}}^{SLD}}[e(2)]^{-1}\right)_{11}$ can always be made equal to or smaller than $ \left({J_{{\vartheta_{}}}^{SLD}}[e(1)]^{-1}\right)_{11} $ from the $m=1$ strategy. ![(Color online.) Contour plot of the difference $\Delta J^{-1}[e_{PT}\textrm{--}e(2)]\equiv \left(J_{{\vartheta_{}}}[e_{PT}]^{-1}\right)_{11} - \left({J_{{\vartheta_{}}}^{SLD}}[e(2)]^{-1}\right)_{11} $ for all possible values of $ {\vartheta_{1}}, {\vartheta_{2}} $.[]{data-label="fig:e3-e2plotlabeled"}](Figure1.pdf){width="0.95\columnwidth"} This does not necessarily mean that $ m =2 $ gives the optimal design, as a larger value of $ m $ could yield an even better design. To test its optimality further, we compare $ e(2) $ with a specific $ m = 3 $ case, the Pauli-QPT $ (e_{PT}) $ from Section \[sec4-1\]. The full calculation of $ \left(J_{{\vartheta_{}}}[e_{PT}]^{-1}\right)_{11} $ is rather lengthy, so here we only include the final expression, $$\left(J_{{\vartheta_{}}}[e_{PT}]^{-1}\right)_{11} =3\, \frac{4 f_1^2f_2^2 + f_0^2(f_1^2+f_2^2)}{f_1^2 + f_2^2 + f_0^2},$$ where $f_0=\sqrt{(1-{\vartheta_{2}}) {\vartheta_{2}}} $. We compare this against the $e(2)$ value of $ \left({J_{{\vartheta_{}}}^{SLD}}[e(2)]^{-1}\right)_{11} $ with $ \nu_1 = \nu_2 = 1/2$ by plotting the difference between them; see Fig. \[fig:e3-e2plotlabeled\]. Pauli-QPT outperforms the $m=2$ design only in two narrow slivers of the ${\vartheta_{1}}$–${\vartheta_{2}}$ domain along the upper edges of the colored triangle in Fig. \[fig:e3-e2plotlabeled\]. These are the regions of extreme asymmetry within the range allowed by the value of ${\vartheta_{2}}$.
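To get a rough feel for how small the winning region of Pauli-QPT is, one can scan both bounds over the parameter domain. The sketch below is ours; in particular, the sampled triangle $|{\vartheta_{1}}|\le\min({\vartheta_{2}},1-{\vartheta_{2}})$ is only an assumed stand-in for the exact physical region shown in the figure:

```python
import numpy as np

def bounds(th1, th2):
    """(e(2) bound with nu1 = nu2 = 1/2, Pauli-QPT bound) at (th1, th2)."""
    f1sq = 0.25 * (1 - (th1 + th2)**2)
    f2sq = 0.25 * (1 - (th1 - th2)**2)
    f0sq = (1 - th2) * th2
    e2 = 2 * (f1sq + f2sq)
    pt = 3 * (4 * f1sq * f2sq + f0sq * (f1sq + f2sq)) / (f1sq + f2sq + f0sq)
    return e2, pt

wins = total = 0
for th2 in np.arange(0.01, 1.0, 0.01):
    for th1 in np.arange(-1.0, 1.0, 0.01):
        if abs(th1) <= min(th2, 1 - th2):      # assumed physical triangle
            e2, pt = bounds(th1, th2)
            total += 1
            wins += pt < e2
print(wins / total)   # fraction of the sampled domain where Pauli-QPT beats e(2)
```

At the symmetric point $({\vartheta_{1}},{\vartheta_{2}})=(0,\tfrac12)$ the bounds are $0.75$ for $e(2)$ and $1.125$ for Pauli-QPT, so $e(2)$ wins there, consistent with the figure.
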
Since these regions are so small, and it is unlikely that we will have such strong prior information as to expect the ${\vartheta_{1}}$ and ${\vartheta_{2}}$ values to fall only in those small regions, it is reasonable for us to still consider the $m=2$ design as one that works well, compared with Pauli-QPT. This does not, of course, preclude another $m=3$ design from having a larger advantage over the $m=2$ case, or some larger-$m$ design from being better, as allowed by the Carath[é]{}odory argument. We leave this as an open question for further study. As was mentioned in the example for the linear scaling channel (Sec. \[sec4-1\]), the inequality relating the quantum and the classical Fisher information in the Cramér-Rao bound can be saturated by projective measurements along the respective states, which here correspond to the designs $ e_1 $ and $ e_2 $ of Sec. \[sec4-1\]. Note that the mixture of $e_1$ and $e_2$ is capable of yielding both channel parameters ${\vartheta_{1}}$ and ${\vartheta_{2}}$. We thus have a curious case where, even in the presence of a nuisance parameter, a strategy that *fully* characterizes the channel is still the optimal estimation strategy for the noise asymmetry $ {\vartheta_{1}}$ alone.

### Optimal binary design

To complete the analysis for a continuous design following from the previous section, we must calculate the optimal relative frequencies $ \nu_1, \nu_2 $ for the $m=2$ strategy. Here, we consider the $A$-optimality criterion with the weight matrix $W=\textrm{diag}(1,0)$. With the knowledge that the optimal design is a mixture of $e_1$ and $e_2$, the problem is now equivalent to the binary design problem discussed in Sec. \[sec:twoBin\]. As before, we write $ \nu_1 = \frac{1}{2}(1 + \lambda) $, $ \nu_2 = \frac{1}{2}(1 - \lambda)$, and we can make use of the formula given in Eq. (\[eq:opt.lambda\]).
To have a positive weight matrix, we regularize $W$ by setting $ W_\epsilon = \textrm{diag}(1, \epsilon) $ for $\epsilon>0$, and then taking the limit as $\epsilon\rightarrow 0^+$. The function $ \gamma_{{\vartheta_{}}}[W](\lambda) $ \[see Eq. \] is given by $$\gamma_{{\vartheta_{}}}(\lambda) = \begin{cases} f_1^2 & (\textrm{if }\lambda = 1),\\ f_2^2 & (\textrm{if }\lambda = -1), \\ 2\left( \frac{f_1^2}{1 +\lambda} + \frac{f_2^2}{1 - \lambda} \right) & \bigl(\textrm{if }\lambda \in (-1, 1)\bigr). \end{cases}$$ The optimal partition can be found by a derivative test or by using Eq. (\[eq:opt.lambda\]), where $ \lambda_{\pm} = \pm 1 $. The optimal $ \lambda $-value is then $$\label{eq:optL} \lambda^{*}=\frac{f_1-f_2}{f_1+f_2}.$$ Let $e_*(2)$ be the local optimal design with this choice of frequency $\v{\nu}_*=(\frac{1+\lambda^*}{2},\frac{1-\lambda^*}{2})$. The minimum value of the inverse SLD quantum Fisher information, which is also the attainable MSE for a locally unbiased estimator, is then $$(J_{{\vartheta_{}}}[e_*(2)]^{-1})_{11}={\left(f_1+f_2\right)}^2.$$ From the above derivation, we again confirm that, formally, the singular design remains the local optimal design, since ${\left(f_1+f_2\right)}^2\ge \min\{f_1^2,f_2^2\}$ always holds. The optimal value $\lambda^*$ depends on the unknown parameters ${\vartheta_{1}}$ and ${\vartheta_{2}}$. Without *a priori* knowledge of ${\vartheta_{1}}$ and ${\vartheta_{2}}$, one cannot implement the optimal design. However, observe in Fig. \[fig:optlam\], a contour plot of $\lambda^*$ in the ${\vartheta_{1}}$–${\vartheta_{2}}$ domain, that the magnitude $|\lambda^*|$ is relatively small and flat over a large central region and rises sharply only in the high asymmetry regions.
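The derivative-test result can be confirmed by brute force: minimizing $\gamma_{{\vartheta_{}}}(\lambda)$ on a fine grid recovers both $\lambda^*$ and the minimum value $(f_1+f_2)^2$. A minimal sketch (the channel values are assumed for illustration):

```python
import numpy as np

th1, th2 = 0.4, 0.5                      # sample (assumed) channel parameters
f1 = 0.5 * np.sqrt(1 - (th1 + th2)**2)
f2 = 0.5 * np.sqrt(1 - (th1 - th2)**2)

lam = np.linspace(-0.999, 0.999, 200001)
gamma = 2 * (f1**2 / (1 + lam) + f2**2 / (1 - lam))   # gamma(lambda) on (-1, 1)

lam_star = (f1 - f2) / (f1 + f2)
print(lam[np.argmin(gamma)], lam_star)   # numerical and analytic minimizers agree
print(gamma.min(), (f1 + f2)**2)         # minimum value equals (f1 + f2)^2
```
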
This hints at the possibility of an adaptive approach for better performance: We can start with $\lambda=0$, or, equivalently, with equal weights on $e_1$ and $e_2$, and then adapt the relative weights towards the optimal $\lambda^*$ as we gather information about the actual values of ${\vartheta_{1}}$ and ${\vartheta_{2}}$. We expect this adaptation to be particularly important for the high asymmetry regions. To understand how much benefit we can gain, we examine this adaptive strategy in the next section. We will, however, use a discrete, rather than continuous, design, so that one can look at the performance with finite data, instead of the mathematical asymptotic limit. To make this transition to a discrete design, we consider a strategy $ e(N,\lambda) $ where a fixed number $N_1\equiv \frac{1}{2}(1+\lambda) N$ of uses of the channel is for the $e_1$ design and $N_2\equiv \frac{1}{2}(1-\lambda)N$ is for the $e_2$ one, for a total $N=N_1+N_2$ uses. The parameter $\lambda$ is now regarded as one that characterizes the *fixed fraction*, rather than the probability, of the $N$ uses of the channel that employs design $e_1$ or $e_2$, and $ \lambda^{*} $ is the optimal fixed fraction. We will assume here that $N_1,N_2\geq 1$. As before, we are interested only in estimating ${\vartheta_{1}}$. Let $n_i$ denote the number of counts entering the detector for $\Pi_{i,+}$, out of $N_i$ counts that used design $e_i$, for $i=1,2$. We construct the estimator ${\hat{\vartheta}_{1}}$ for ${\vartheta_{1}}$ as $${\hat{\vartheta}_{1}}\equiv \frac{n_1}{N_1} - \frac{n_2}{N_2}.$$ For given $N_1$ and $N_2=N-N_1$, the MSE is simply, from its definition \[see Eq. \], $$V_{{\vartheta_{}}}[{\hat{\vartheta}_{1}}|e(N,\lambda)] = \frac{f_1^2}{N_1}+\frac{f_2^2}{N_2}.$$ Compared to the inverse SLD quantum Fisher information of Eq. , there is an additional factor of $\frac{1}{N}$ to account for the $N$ uses of the channel.
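This MSE formula can be checked by a short Monte Carlo run. The sketch below is ours; the click probabilities $p_i=\frac{1}{2}(1\pm{\vartheta_{1}}+{\vartheta_{2}})$ are our reading of the sign convention, fixed by $f_i^2=p_i(1-p_i)$ and unbiasedness of ${\hat{\vartheta}_{1}}$:

```python
import numpy as np

rng = np.random.default_rng(1)

th1, th2 = 0.2, 0.4            # assumed true channel parameters
p1 = 0.5 * (1 + th1 + th2)     # click probability for Pi_{1,+}
p2 = 0.5 * (1 - th1 + th2)     # click probability for Pi_{2,+}
N1, N2 = 120, 80

trials = 40000
n1 = rng.binomial(N1, p1, size=trials)
n2 = rng.binomial(N2, p2, size=trials)
est = n1 / N1 - n2 / N2        # the estimator \hat{vartheta}_1

f1sq = 0.25 * (1 - (th1 + th2)**2)
f2sq = 0.25 * (1 - (th1 - th2)**2)
mse_theory = f1sq / N1 + f2sq / N2
mse_mc = np.mean((est - th1)**2)
print(mse_mc, mse_theory)      # agree to Monte Carlo accuracy
```
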
### Adaptive discrete design

As in the continuous design case, one can expect the optimal discrete design $e(N,\lambda^*)$ to depend \[see Eq. \] on the values of $f_1$ and $f_2$. These in turn depend on the unknown values of ${\vartheta_{1}}$ and ${\vartheta_{2}}$. As argued above, one expects an adaptive strategy to be helpful in such a situation. We implement one such strategy by dividing the total available uses of the channel $N$ into $K$ adaptive steps. In each step we decide on the relative proportion of $e_1$ and $e_2$ using estimates of $f_1$ and $f_2$ from the data gathered so far, up to the last completed step. Specifically, we let $N=M_1+M_2+\ldots+M_K$, where $M_k$, for $k=1,\ldots, K$, is the number of uses of the channel in the $k$th adaptive step. Let $\lambda_k$ be the $\lambda$ parameter for the $k$th step, i.e., we devote $N_{k,1}\equiv \frac{1}{2}(1+\lambda_k)M_k$ uses to the $e_1$ design and $N_{k,2}\equiv\frac{1}{2}(1-\lambda_k)M_k$ to $e_2$. We denote the total number $\sum_{\ell=1}^kN_{\ell,i}$ of uses for $e_i$ so far, up to and including the $k$th step, by $N_{1:k,i}$, for $i=1,2$. We further define $n_{k,i}$ to be the number of detector clicks for $\Pi_{i,+}$ in the $k$th step. Analogously, $n_{1:k,i}$ denotes the number of detector clicks for $\Pi_{i,+}$ so far, up to the $k$th step. After $k$ adaptive steps, we estimate $f_i$, for $i=1,2$, by $$\widehat f_{k,i}= {\left[\frac{n_{1:k,i}}{N_{1:k,i}}{\left(1-\frac{n_{1:k,i}}{N_{1:k,i}}\right)}\right]}^{1/2}.$$ We use these estimates of $f_1$ and $f_2$ to determine the optimal $\lambda_{k+1}$ for the next adaptive step. Eq. 
suggests that the optimal $\lambda$ for the *total* number of uses of the channel for all $k+1$ steps is $$\lambda_{1:k+1}= \frac{\widehat f_{k,1}-\widehat f_{k,2}}{\widehat f_{k,1}+\widehat f_{k,2}}.$$ From this, we see that the optimal choice for $N_{k+1,1}$, say, in the next adaptive step, is given by $$N_{k+1,1}={\left[\tfrac{1}{2}(1+\lambda_{1:k+1})M_{1:k+1}-N_{1:k,1}\right]}_+,$$ where the notation $y=[x]_+$ is shorthand for $y=x$ when $x\geq 0$, and $y=0$ when $x<0$. Straightforward algebra gives $$\lambda_{k+1}=2{\left[\frac{\widehat f_{k,1}+\widehat f_{k,1}\frac{N_{1:k,2}}{M_{k+1}}-\widehat f_{k,2}\frac{N_{1:k,1}}{M_{k+1}}}{\widehat f_{k,1}+\widehat f_{k,2}}\right]}_+-1.$$ To start the adaptive sequence, we need to decide on the initial estimates of $f_1$ and $f_2$. With no prior information, reasonable initial guesses for ${\vartheta_{1}}$ and ${\vartheta_{2}}$ are $0$ and $\frac{1}{2}$, respectively, the midpoints of the allowed ranges of ${\vartheta_{1}}$ and ${\vartheta_{2}}$. This corresponds to initial estimates $\widehat f_{0,1}=\widehat f_{0,2}=\tfrac{\sqrt 3}{4}$, and a vanishing starting value of $\lambda_1$, i.e., $M_1$ is divided equally between $e_1$ and $e_2$. We compare the MSE of an adaptive scheme with that of a static, i.e., non-adaptive, strategy where the $N$ uses of the channel are shared equally between $e_1$ and $e_2$. Let $\lambda_\mathrm{eff}\equiv \frac{2N_{1:K,1}}{N}-1$ be the effective $\lambda$ parameter for the adaptive scheme. The relative performance of the two schemes, for the same channel (i.e., fixed values of $f_1$ and $f_2$), can be written as $$\label{eq:Vratio} \frac{V_\mathrm{static}}{V_\mathrm{adapt}}=(1-\lambda_\mathrm{eff}^2){\left[-\lambda_\mathrm{eff}\frac{f_1^2-f_2^2}{f_1^2+f_2^2}+1\right]}^{-1}.$$ ![\[fig:magnitude.split\] (Color online.)
A plot of the ratio $V_\mathrm{static}/V_\mathrm{adapt}$ against $\log_{10}({\theta_{1}}/{\theta_{2}})$ for numerical simulation of the adaptive procedure for the two-parameter family of Pauli channels, over a uniform grid of ${\theta_{1}}$ and ${\theta_{2}}$ values, with a step size of $0.01$. Here, $N=200$, and $K=10$. The colors show the dependence of the MSE ratio on the noise strength $1-{\vartheta_{2}}$; a plot of only the data points for the low-noise regime of $1- {\vartheta_{2}} \leq 0.5$ is given in Fig. \[fig:runway.comparison\](a) below. That the adaptive strategy shows a clear benefit in the regime of high asymmetry is evidenced by the points that lie above the $V_\mathrm{static}/V_\mathrm{adapt}=1$ horizontal line, which occur only when ${\theta_{1}}/{\theta_{2}}$ is far from 1.](Figure3.pdf){width="\columnwidth"} ![image](Figure4.pdf){width="90.00000%"} To examine the performance of the adaptive procedure, we carried out numerical experiments for an estimation scheme with $N=200$, and $K=10$ for the two-parameter family of Pauli channels. The simulations were run over a uniform grid of all possible $ {\theta_{1}}$ and ${\theta_{2}}$ values \[recall the relationship between ${\vartheta_{i}}$s and ${\theta_{i}}$s, as given in Eq. \], with a step size of $0.01$. The results are given in Fig. \[fig:magnitude.split\], which plots the ratio $V_\mathrm{static}/V_\mathrm{adapt}$ against $\log_{10}({\theta_{1}}/{\theta_{2}})$, a quantity we found useful in organizing the data. The MSE ratios in Fig. \[fig:magnitude.split\] show a dependence on the value of noise strength $1-{\vartheta_{2}}$, as indicated by the colors in the plot. Clearly visible in the plot is the strong benefit of using the adaptive strategy in a high asymmetry situation: The ratio $V_\mathrm{static}/V_\mathrm{adapt}$ is large when ${\theta_{1}}/{\theta_{2}}$ is very different from 1, i.e., when $|\log_{10}({\theta_{1}}/{\theta_{2}})|$ is large.
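The adaptive procedure is straightforward to simulate. The following is our own minimal implementation sketch (the clipping of each step's allocation to $[0,M_k]$ and the guard on degenerate frequency estimates are implementation choices not fixed by the text); the static scheme is recovered as the $K=1$ case:

```python
import numpy as np

rng = np.random.default_rng(42)

def run_scheme(th1, th2, N=200, K=10, trials=4000):
    """Monte Carlo MSE of the K-step adaptive scheme; K=1 is the static
    50/50 split. Click probabilities follow f_i^2 = p_i(1 - p_i)."""
    p = (0.5 * (1 + th1 + th2), 0.5 * (1 - th1 + th2))
    M = N // K                              # uses per adaptive step
    sq_err = 0.0
    for _ in range(trials):
        Ncum, ncum = [0, 0], [0, 0]
        for _k in range(K):
            # frequency estimates; sqrt(3)/4 is the no-data initial guess
            fh = [np.sqrt(ncum[i] / Ncum[i] * (1 - ncum[i] / Ncum[i]))
                  if Ncum[i] else np.sqrt(3) / 4 for i in range(2)]
            fh = [max(x, 1e-3) for x in fh]  # guard against all-0/all-1 data
            tot = Ncum[0] + Ncum[1] + M
            # target cumulative share of design e_1, clipped to this step
            N1 = int(np.clip(round(fh[0] * tot / (fh[0] + fh[1]) - Ncum[0]), 0, M))
            for i, Ni in enumerate((N1, M - N1)):
                ncum[i] += rng.binomial(Ni, p[i])
                Ncum[i] += Ni
        est = ncum[0] / max(Ncum[0], 1) - ncum[1] / max(Ncum[1], 1)
        sq_err += (est - th1)**2
    return sq_err / trials

th1, th2 = 0.45, 0.5                 # a high-asymmetry channel (assumed values)
v_static = run_scheme(th1, th2, K=1)
v_adapt = run_scheme(th1, th2, K=10)
print(v_static / v_adapt)            # ratio > 1 means adaptation helped
```
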
Away from the high asymmetry region, however, the adaptive scheme in fact does more poorly than the static scheme, except for the rare case that $ {\vartheta_{1}}={\vartheta_{2}} = 0$. It is not difficult to guess why this might be the case. In this region, the optimal value of $\lambda^*$ should stay close to $0$, as seen from Fig. \[fig:optlam\]. However, in the early phase of the experiment when one does not have a lot of data, statistical fluctuations can easily cause the adaptive scheme to opt for $\lambda$ values that are away from the optimal $0$ value. The adaptive scheme hence may meander around initially before we gather enough data to have good guidance in the adaptation, while the static scheme is already using a near-optimal $ \lambda $ value. To mitigate the effects of this initial meandering, we can modify our adaptive scheme to include an initial “runway", where an initial number of measurements are made without any adaptation, using a fixed equal weight between the $e_1$ and $e_2$ designs. The adaptation kicks in only when we have gathered enough data. The effect of a runway is shown in the numerical simulation data given in Fig. \[fig:runway.comparison\]. For quantum information processing applications, one is usually interested only in the low-noise regime, say, where $1- {\vartheta_{2}} \leq 0.5$, so we focus only on the data points in this regime. As before, we have a total of $N=200$ uses of the channel, and only the $1-{\vartheta_{2}}\leq 0.5$ data are shown in the plots. The MSE ratios are shown for four situations: (a) $K=10$, no runway (this is the same data as that of Fig. \[fig:magnitude.split\], but restricted to the regime of $1- {\vartheta_{2}} \leq 0.5$); (b) $K=10$, a runway with $N/2=100$ uses of the channel, followed by the remaining $N/2=100$ uses for adaptation (in the adaptive scheme); (c) as in (a) but now with $K=5$; (d) as in (b), but now with $K=5$.
The runway clearly helps reduce the loss in accuracy due to the adaptation for ${\theta_{1}}/{\theta_{2}}\approx 1$, compared to the static scheme. However, it also reduces the edge of the adaptation over the static case in the high asymmetry regime, as can be expected given the fewer channel uses available for adaptation. Another feature visible in Fig. \[fig:runway.comparison\] is that the region (of ${\theta_{1}}/{\theta_{2}}$ values) where adaptation helps shrinks slightly when $K$ is reduced from 10 to 5, as does the range of the MSE ratio values. A full exploration of the adaptation strategy should also investigate the optimal values for $K$ and the length of the runway. The optimal choice of runway length will no doubt depend on the available prior information about the noise asymmetry in the channel. Note that, in our numerical simulations, we did not find a combination of runway length and $K$ values that shrank the region where $V_\mathrm{static}/V_\mathrm{adapt}<1$ to zero. Let us make a final remark about the noise asymmetry example. The noise asymmetry of the *full* Pauli channel of Example B can also be described via a suitable parameterization. In that case, two asymmetry parameters capture the relative strengths of the three original parameters, and a third, nuisance, parameter is needed to fully characterize the Pauli channel. We can define new parameters $ {\vartheta_{1}} = {\theta_{1}} - {\theta_{2}} $ and $ {\vartheta_{2}} = {\theta_{3}} - {\theta_{2}} $, with ${\vartheta_{3}}=1 - ({\theta_{1}} + {\theta_{2}} + {\theta_{3}}) $. The parameters of interest here are the asymmetry parameters ${\vartheta_{1}}$ and ${\vartheta_{2}}$; the nuisance parameter is $ {\vartheta_{3}}$.
Preliminary numerical simulations of the full Pauli channel situation indicate a similar conclusion: the full characterization of the three-parameter Pauli channel is near optimal even in the presence of the nuisance parameter, i.e., the design $ J_{{\vartheta_{}}}[e(3)] = \sum_{i = 1,2, 3} \nu_i J_{{\vartheta_{}}}[e_i] $ is near optimal.

Summary and outlook
===================

In summary, we formulated the problem of quantum process tomography within the framework of optimal design of experiments (DoE). This allows us to adapt the many techniques developed in classical statistics to quantum tomography problems, as demonstrated in the examples discussed in this paper. Here, we worked out simple examples to get analytical results for clearer illustration; more generally, the question of finding an optimal design can be solved efficiently as a convex optimization problem. One of the well-known issues in standard optimal DoE problems is that one often finds a local optimal design. This local optimal design normally depends on the values of unknown parameters, including the very ones we are trying to estimate. Such a design hence cannot be realized exactly in practice. This point was demonstrated by several examples of qubit noise channels. The standard remedy against this local optimal design problem is to utilize an appropriate adaptive scheme, a well-established strategy in the community. In the example of detecting noise asymmetry for the Pauli channel (Sec. \[sec5-3\]), we applied a particular adaptive scheme by splitting $N$ uses of the channel into $K$ steps of adaptation together with the use of a runway stage to acquire some information about the unknown parameters. From our numerical simulations, we observed a gain from this particular adaptive scheme in a high asymmetry regime. In the low asymmetry regime, however, the adaptive scheme did worse than the static one.
This is partly due to the fact that the static design (with $\lambda=0$) is near optimal for wide ranges of parameters (see Fig. \[fig:optlam\]). This conclusion is, of course, highly dependent on the specifics of our particular example, but this raises an important question about the effectiveness of adaptation for implementing local optimal designs, worthy of further investigation. The same noise asymmetry example of Sec. \[sec5-3\] also raises another important point about nuisance parameters and singular designs. It is often stated in the DoE literature that the generalized inverse sets the bound for the mean square error of estimates. While this is a correct mathematical statement, the actual realization of such a singular design has to be carefully examined before one can claim its optimality. This is particularly important when dealing with problems in the presence of nuisance parameters. In our example, the singular design simply cannot be used to estimate the noise asymmetry, the quantity of interest. Instead, the static design that actually estimates the noise asymmetry turns out to be one that can estimate all the parameters of the problem, i.e., both the quantity of interest and the nuisance parameter. Of course, since we have only investigated strategies up to $m=2$, we cannot claim the nonexistence of a strategy with larger $m$ capable of efficiently estimating only the parameter of interest. Yet, a small-$m$ strategy is of the most interest in practical implementations. In this work, we have but scratched the surface of the many possibilities of adopting ideas from the rich classical theory of DoE. We expect further exploration in this direction to yield useful and interesting results for the quantum problem of process tomography. This work is supported in part by the Ministry of Education, Singapore (through grant number MOE2016-T2-1-130).
The Centre for Quantum Technologies is a Research Centre of Excellence funded by the Ministry of Education and the National Research Foundation of Singapore. JS is partly supported by JSPS KAKENHI Grant Number JP17K05571. HKN is also supported by Yale-NUS College (through a start-up grant).

Supplemental material
=====================

Positive-definite matrix
------------------------

We denote the set of all real positive-definite and positive-semidefinite matrices of size $n$ by $\pd$ and $\nnd$, respectively. It is known that the following conditions are equivalent (see, for example, Ref. [@bhatia]), $$\begin{aligned} \label{eq:pdcond} A \in \pd&\DEF \forall v\in\bbr^n,\, v\neq\v{0}\Rightarrow v^\mathrm{T}Av>0\\ \nonumber &\Lra \forall B\in \nnd,\, B\neq\v{0}\Rightarrow{\mathrm{Tr}\Big\{AB\Big\}}>0,\\ \label{eq:nndcond} A \in \nnd&\DEF \forall v\in\bbr^n,\, v^\mathrm{T}Av\ge0\\ \nonumber &\Lra \forall B\in \nnd,\, {\mathrm{Tr}\Big\{AB\Big\}}\ge0.\end{aligned}$$

Optimality function $\Psi$ {#sec-app_supp1}
--------------------------

Mathematically, any function $\Psi:\nnd\to\bbr$ from the positive-semidefinite matrices to the positive reals can be used as the optimality function. From the statistical and information-theoretic viewpoint, we normally impose the following properties on $\Psi$:

1. Isotonicity (operator monotone function):\ For $J_1,J_2\in\nnd$, if $J_1\ge J_2$, then $\Psi(J_1)\le\Psi(J_2)$.

2. Homogeneity: There exists a function $\psi$ such that $\Psi(a J)=\psi(a) \Psi(J)$ holds for any constant $a>0$ and all $J\in\nnd$.

3. Convexity: $\Psi(pJ_1+(1-p)J_2)\le p \Psi(J_1)+(1-p) \Psi(J_2)$, for $\forall p\in[0,1]$ and $\forall J_1,J_2\in\nnd$.

Note that the popular optimality criteria discussed in the main text are expressed as follows.
$A$-optimality: $\Psi_A(J)={\mathrm{Tr}\Big\{J^{-1}\Big\}}$; $D$-optimality: $\Psi_D(J)={\mathrm{Det}\{J^{-1}\}}$; $E$-optimality: $\Psi_E=\max_{c\in\bbr^n}c^\mathrm{T}J^{-1}c/|c|^2$; $c$-optimality: $\Psi_c=c^\mathrm{T}J^{-1}c$. It is easy to check that $A$- and $E$-optimality satisfy the above three conditions. $D$-optimality violates the third condition, however, and the standard remedy is to instead optimize $\log {\mathrm{Det}\{J_{{\theta_{}}}[e]^{-1}\}}=- \log {\mathrm{Det}\{J_{{\theta_{}}}[e]\}}$, which is a convex function. As noted in the main text, the $\gamma$-optimality criterion contains the $A$-optimal ($\gamma=1$), $D$-optimal ($\gamma\to0$), and $E$-optimal ($\gamma\to\infty$) criteria as special cases up to appropriate constant factors. In this sense, the $\gamma$-optimality is a generalization of the standard optimality criteria. There is a non-trivial inequality relation among the $A$-, $D$-, and $E$-optimality functions [@fh97; @fl14]. The well-known Liapunov inequality in probability theory gives $$\label{eq:Liapunov.ineq} ({\mathrm{Det}\{J^{-1}\}})^{1/n}\le\frac1n{\mathrm{Tr}\Big\{J^{-1}\Big\}}\le\lambda_{\max}(J^{-1}),$$ where $n$ is the dimension of the parameter set ${\Theta}$. These inequality relationships, however, do not provide a general hierarchy among these criteria. Two optimality functions (or, more generally, many optimality functions) can be combined to define a [*compound optimality function*]{} $\Psi_p=p \Psi_1+(1-p) \Psi_2$ with $p \in [0, 1]$. $\Psi_{p}$ represents a trade-off relation between two different optimal designs defined by $\Psi_1$ and $\Psi_2$. Careful consideration is needed to compare different optimality functions. The value of $\Psi[e_*]$ is a relative quantity, and we cannot conclude that an $e_*$ for a particular $\Psi$ is also a good design according to another optimality function $\Psi'$ simply by looking at the value of $\Psi'[e_*]$.
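The three standard criteria, and the Liapunov chain above, are easy to verify numerically. A minimal sketch (the random positive-definite matrices are only test inputs of our choosing):

```python
import numpy as np

rng = np.random.default_rng(3)

def psi_A(J): return np.trace(np.linalg.inv(J))                  # A-optimality
def psi_D(J): return np.linalg.det(np.linalg.inv(J))             # D-optimality
def psi_E(J): return np.linalg.eigvalsh(np.linalg.inv(J)).max()  # E-optimality

for _ in range(200):
    n = int(rng.integers(2, 6))
    B = rng.standard_normal((n, n))
    J = B @ B.T + 0.1 * np.eye(n)        # random positive-definite test matrix
    # Liapunov chain: (Det J^{-1})^{1/n} <= Tr(J^{-1})/n <= lambda_max(J^{-1})
    assert psi_D(J)**(1.0 / n) <= psi_A(J) / n * (1 + 1e-9)
    assert psi_A(J) / n <= psi_E(J) * (1 + 1e-9)
```
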
To compare the performance of different optimality criteria, one can consider a function $ \eta $ dependent on the optimal value $\Psi[e_*]$, ${\eta_\Psi[e]=\Psi[e_*]/\Psi[e]}$. The normalized function satisfies $0\le\eta_\Psi[e]\le1$, and it can describe the efficiency of the design $e$ for the function $\Psi$. Applications of the above extended optimal designs were discussed in various statistical problems; see Refs. [@pukelsheim; @fh97; @fl14; @pp13].

Löwner optimal design {#sec-app_lowner}
---------------------

One way to check if a Löwner optimal design exists is to minimize a weighted $A$-optimal function $ \Psi_{A}(J) = {\mathrm{Tr}\Big\{ W J^{-1} \Big\}} $. If the optimal design $ e_* $ is $ W $-independent, then a Löwner optimal design exists. Otherwise, its existence is disproved. The reason behind this logic is the characterization of positive-semidefinite matrices given by . Alternatively, one can work out the $c$-optimal design problem and then check if the optimal design is independent of $c$. It is pointed out in the main text that the Löwner optimality is the strongest criterion. That is, if there exists a Löwner optimal design $e_*$, then $e_*$ is also optimal for all the other optimality criteria. To show this, let us assume that the optimality function $\Psi$ satisfies the three conditions discussed in Sec. \[sec-app\_supp1\]. In particular, when $\Psi$ is isotonic, then for $J_{{\theta_{}}}[e_*]\ge J_{{\theta_{}}}[e]$ we have $$\forall e\in\cE,\ \Psi(J_{{\theta_{}}}[e_*])\le \Psi(J_{{\theta_{}}}[e]).$$ This shows that the Löwner optimal design $e_*$ is indeed an optimal design for the optimality criterion defined by $\Psi$.

Convex optimal structures {#sec-app_supp2}
-------------------------

A common approach to finding an optimal design $ e_* $ is to introduce a convex structure to the design problem. This allows for a systematic search over the entire parameter space for an optimal design.
Two separate ingredients are needed to create this convexity. The first is a convex structure on the design set $\cE$: A convex sum of two designs $e_1,e_2\in\cE$ is defined as ${e_p=p e_1+(1-p)e_2}$ for $p\in[0,1]$. It is easy to show that this convex structure preserves the local unbiasedness of an estimator. With this convex structure on the design space $\cE$ and the convexity of the function $\Psi$, we can formulate our problem as a convex optimization problem. An important consequence of this formulation is that such a convex problem has optimal designs $e_*$ at the extremal points of the convex set $\cE$. The necessary and sufficient condition for such an optimal design can also be derived; see, for example, Refs. [@fedorov; @pukelsheim; @fh97; @fl14; @pp13]. Remember that the continuous design problem aims to find an optimal design $e_*(m)=(\v{\nu}_*,\v{e}_*)$ defined by $$\label{eq:opt.design2} e_*(m)=\arg\hspace{-5mm}\min_{e(m)\in\cP(m)\times\cE^m} \Psi\bigg(\sum_{i=1}^m \nu_iJ_{{\theta_{}}}[e_i]\bigg).$$ A convex structure can also be constructed from two continuous designs $e(m)=(\v{\nu},\v{e})$ and $e'(m)=(\v{\nu}',\v{e}')$, $$p e(m)+(1-p) e'(m)=\big(p\v{\nu}+(1-p)\v{\nu}',\, p\v{e}+(1-p)\v{e}' \big),$$ where $p\v{e}+(1-p)\v{e}' =\big(pe_i+(1-p)e'_i \big)$ is a well-defined convex sum of two designs. Special consideration is needed to define a convex sum of two designs $e(m)$ and $e'(m')$ for $m\neq m'$. This is done by introducing an integration measure $\mu$ for the design space $\cE$, allowing one to consider an experimental design of the form $e_\mu=\int \mu(de) e$. This formalism is more general, since a discrete measure contains the above-mentioned continuous design problem as a special case. In the literature on the mathematical theory of optimal DoE, optimization of the measure $\mu$ is studied; see, for example, Refs. [@fedorov; @pukelsheim; @fh97; @fl14; @pp13].
Another important problem is characterizing the structure of the Fisher information matrices for all possible designs, i.e., finding a set of matrices defined by $$\cJ(m)= \{J=\sum_{i=1}^m \nu_iJ_{{\theta_{}}}[e_i]\,|\, \v{\nu}\in\cP(m),e_i\in\cE \},$$ or more generally, their unions $$\cJ=\bigcup_{m\in\bbn}\cJ(m).$$ Then, the optimization problem can be rephrased as a minimization over the convex set, $$\begin{aligned} \nonumber \Psi_*&=\min_{J\in\cJ} \Psi(J),\\ J_{{\theta_{}}}[e_*]&=\arg\min_{J\in\cJ} \Psi(J). \end{aligned}$$

Singular design {#sec-app_sing}
---------------

A design $e\in\cE$ is called a [*singular design*]{} if the Fisher information matrix $J_{{\theta_{}}}[e]$ is not full rank. When $J_{{\theta_{}}}[e]$ is invertible, $ e $ is a [*regular design*]{}. A typical example of an optimal singular design is the $c$-optimal design, which corresponds to finding an optimal design in a particular direction of the information matrix. Such an optimal design should be in the direction specified by the $c$-vector, giving a singular Fisher information matrix. The inverse of the Fisher information matrix does not exist for singular designs, so an appropriate remedy is needed for a properly defined optimal design. The most common approach is to use the generalized inverse of the Fisher information matrix. In particular, the Moore-Penrose inverse matrix is uniquely defined for singular matrices, and is used in the literature; see, for example, the standard textbook [@lc98] and also Ref. [@sm01] on this issue. A common alternative is the regularization method: Given a singular matrix $J$, we calculate $(J+\epsilon I)^{-1}$ with $\epsilon>0$, and then take the limit $\epsilon\to0$. These two methods, however, are not always practical, in which case more alternatives are needed.
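The behavior of these two remedies is easy to see on a toy rank-1 information matrix (our own sketch): the Moore-Penrose inverse and the $\epsilon$-regularized inverse agree on the range of $J$, while the regularized quantity diverges along the null space, which is exactly why a singular design cannot be used to estimate the corresponding parameter combination:

```python
import numpy as np

v = np.array([1.0, 2.0])
J = np.outer(v, v)                     # rank-1 Fisher information ("singular design")
J_pinv = np.linalg.pinv(J)             # Moore-Penrose generalized inverse

c_in = v / np.linalg.norm(v)                   # direction in range(J)
c_out = np.array([2.0, -1.0]) / np.sqrt(5.0)   # direction in the null space of J

for eps in (1e-2, 1e-4, 1e-6):
    R = np.linalg.inv(J + eps * np.eye(2))     # regularized inverse
    print(eps, c_in @ R @ c_in, c_out @ R @ c_out)
# Along c_in the regularized values converge to c_in @ J_pinv @ c_in = 1/|v|^2 = 0.2;
# along c_out they blow up as 1/eps, so no finite bound exists in that direction.
```
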
Yet another approach is to find an $A$-optimal design for the following positive-definite weight matrix, $$W_\epsilon=\left(\begin{array}{cc}W_{I} & 0 \\ 0& \epsilon I_N\end{array}\right),$$ where $I_N$ is the identity matrix for the $(n-k)\times(n-k)$ sub-block. Under mild regularity conditions, the optimal design ${e_* = \arg\min{\mathrm{Tr}\Big\{W_\epsilon J_{{\theta_{}}}[e]^{-1}\Big\}}}$ exists and is regular. Alternatively, one can optimize a [*regularized*]{} optimality function by creating a continuous design problem. Let $e_0$ be a design such that $J_{{\theta_{}}}[e_0]$ is regular. Then, we can regularize any optimality function by using the Fisher information matrix $(1-\epsilon)J_{{\theta_{}}}[e]+\epsilon J_{{\theta_{}}}[e_0]$ for any $e\in\cE$ and $\epsilon\in(0,1)$. Lastly, one can use a compound optimality criterion. Let $\Psi$ be the optimality function under consideration whose optimal design can be singular. We can consider another optimality function $\Psi'$ such that $e_*=\arg\min\Psi'(e)$ is always regular. The combined optimality function $\Psi_\epsilon=(1-\epsilon) \Psi+\epsilon \Psi'$ (discussed in Appendix Sec. \[sec-app\_supp1\]) can then be used to define a regular optimal design. [20]{} J. Kiefer and J. Wolfowitz, Canad. J. Math. [**12**]{}, 363 (1960). V. V. Fedorov, [*Theory of Optimal Experiments*]{}, Academic Press (1972). F. Pukelsheim, [*Optimal Design of Experiments*]{}, Wiley (1993). V. V. Fedorov and P. Hackl, [*Model-Oriented Design of Experiments*]{}, Springer (1997). V. V. Fedorov and S. L. Leonov, [*Optimal Design for Nonlinear Response Models*]{}, CRC Press (2014). L. Pronzato and A. Pázman, [*Design of Experiments in Nonlinear Models*]{}, Lecture Notes in Statistics [**212**]{}, Springer (2013). R. Kosut, I. Walmsley, and H. Rabitz, e-print arXiv: 0411093 (2004). J. Nunn [*et al.*]{}, Phys. Rev. A, [**81**]{}, 042109 (2010). G. Balló and K. M. Hangos, e-print arXiv: 1004.5209 (2010). G. Balló and K. M.
Hangos, e-print arXiv: 1107.0890 (2011). G. Balló, K. M. Hangos, and D. Petz, IEEE Trans. Autom. Control, [**57**]{}, 2056 (2012). L. Ruppert, D. Virosztek, and K. Hangos, J. Phys. A [**45**]{}, 265305 (2012). A. Fujiwara, Phys. Rev.  A **63**, 042304 (2001). A. Fujiwara and H. Imai, J. Phys. A: Math. Theor. [**36**]{}, 8093 (2003). V. Giovannetti, S. Lloyd, and L. Maccone, Nat. Phot. **5**, 222 (2011). C. R. Rao, [*Linear Statistical Inference and its Applications*]{}, 2nd ed, Wiley-Interscience (1973). J. C. Kiefer, [*Introduction to Statistical Inference*]{}, Springer-Verlag (1985). E. L. Lehmann and G. Casella, [*Theory of Point Estimation*]{}, 2nd ed, Springer (1998). R. A. Fisher, J. Roy. Statist. Soc. [**98**]{}, 39 (1935). S. Amari, [*Differential-Geometrical Methods in Statistics*]{}, Springer-Verlag (1985). O. E. Barndorff-Nielsen and D. R. Cox, [*Inference and Asymptotics*]{}, Monographs on Statistics & Applied Probability, Chapman & Hall/CRC (1994). S. Amari and H. Nagaoka, [*Methods of Information Geometry*]{}, AMS and Oxford University Press (2000). D. Basu, J. Amer. Statist. Assoc. [**72**]{}, 355 (1977). D. R. Cox and N. Reid, J. Roy. Statist. Soc. B [**49**]{}, 1 (1987). S. Amari and M. Kumon, Ann. Stat. [**16**]{}, 1044 (1988). V. P. Bhapkar and C. Srinivasan, Ann. Inst. Statist. Math. [**46**]{}, 593 (1994). Y. Zhu and N. Reid, Can. J. Stat. [**22**]{}, 111 (1994). P. Stoica and T. L. Marzetta, IEEE Trans. Signal Processing, [**49**]{}, 87 (2001). M. A. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information*, Cambridge University Press (2000). D. Petz, [*Quantum Information Theory and Quantum Statistics*]{}, Springer-Verlag, (2008). C. W. Helstrom, [*Quantum Detection and Estimation Theory*]{}, Academic Press, New York, (1976). A. S. Holevo, [*Probabilistic and Statistical Aspects of Quantum Theory*]{}, Edizioni della Normale, Pisa, 2nd ed (2011). D. Petz, Linear Algebr. Appl. [**244**]{}, 81 (1996). G. M. D’Ariano, P. L. 
Presti, and P. Perinotti, J. Phys. A: Math. Theor. [**38**]{}, 5979 (2005). A. Fujiwara, J. Phys. A: Math. Gen. [**39**]{}, 12489 (2006). K. Yamagata, Int. J. Quant. Inf., **9**, 1167 (2011). J. Suzuki, Proc. 39th Symp.  on Inform. Theory and its Appl. , 5.3.1, 283 (2016). T. Y. Young, Information Sciences, [**9**]{}, 25 (1975). H. Nagaoka, in [*Proc. 10th Symp.  on Inform. Theory and its Appl.*]{}, 241 (1987). English translation is available in [@hayashi]. S. L. Braunstein and C. M. Caves, Phys. Rev. Lett. [**72**]{}, 3439 (1994). M. Hayashi ed. [*Asymptotic Theory of Quantum Statistical Inference: Selected Papers*]{}, World Scientific, (2005). H. Nagaoka, Proc. 12th Symp.  on Inform. Theory and its Appl. pp 577 (1989). Reprinted in [@hayashi]. M. Hayashi and K. Matsumoto, “Statistical model with measurement degree of freedom and quantum physics,” Surikaiseki Kenkyusho Kokyuroku, [**1055**]{}, 96 (1998). English translation is available in [@hayashi]. O. E. Barndorff-Nielsen and R.D. Gill, J. Phys. A: Math. Gen. [**33**]{}, 4481 (2000). A. Fujiwara, J. Phys. A: Math. Gen. [**39**]{}, 12489 (2006). T. Sugiyama, P. S. Turner, and M. Murao, Phys. Rev. A **85**, 052107 (2012). R. Okamoto, M. Iefuji, S. Oyama,K. Yamagata, H. Imai, A. Fujiwara, and S. Takeuchi, Phys. Rev. Lett. **109**, 130404 (2012). D. H. Mahler, L. A. Rozema, A. Darabi, C. Ferrie, R. Blume-Kohout, and A. M. Steinberg, Phys. Rev. Lett. [**111**]{}, 183601 (2013). K. S. Kravtsov, S. S. Straupe, I. V. Radchenko, N. M. T. Houlsby, F. Huszár, and S. P. Kulik, Phys. Rev. A [**87**]{}, 062122 (2013). Z. Hou, H. Zhu, G.-Y. Xiang, C.-F.  Li, and G.-C. Guo, npj Quantum Information [**2**]{}, 16001 (2016). R. Okamoto, S. Oyama, K. Yamagata, A. Fujiwara, and S. Takeuchi, Phys. Rev. A [**96**]{}, 022124 (2017). J. Zhang, Y.-X. Liu, R.-B. Wu, K. Jacobs, and F. Nori, Phys. Rep. [**679**]{}, 1 (2017). M. Sarovar and G. J. Milburn, J. Phys. A: Math. Gen. [**39**]{}, 8487 (2006). J. Suzuki, Phys. Rev.  
A **94**, 042306 (2016). D. Petz and H. Ohno, Acta Math. Hungar [**124**]{}, 165 (2009). R. Bhatia, [*Positive Definite Matrices*]{}, (Princeton University Press, 2007). The standard representation of the Pauli matrices is [$\sigma_1=\left(\begin{array}{cc}0 & 1 \\1 & 0\end{array}\right)$, $\sigma_2=\left(\begin{array}{cc}0 & -{\mathrm{i}}\\ {\mathrm{i}}& 0\end{array}\right)$, $\sigma_3=\left(\begin{array}{cc}1 & 0 \\0 & -1\end{array}\right)$.]{} To see this, note that $|{\vartheta_{1}}|={\theta_{>}}-{\theta_{<}}$, where ${\theta_{>(<)}}$ is the larger(smaller) of ${\theta_{1}}$ and ${\theta_{2}}$. Then, ${\vartheta_{2}}={\theta_{>}}+{\theta_{<}}=|{\vartheta_{1}}|+2{\theta_{<}}\geq |{\vartheta_{1}}|$ since ${\theta_{<}}\geq 0$. If the input state is $\v{s}=\frac{1}{\sqrt{2}}( 1,1,0)^\mathrm{T}$ or $\frac{1}{\sqrt{2}}( -1,-1,0)^\mathrm{T}$, the corresponding projective measurement is $\Pi=\{ \frac{1}{2}[I_d\pm \frac{1}{\sqrt{2}}(\sigma_1-\sigma_2)]\}$. For the input state $\v{s}=\frac{1}{\sqrt{2}}( 1,-1,0)^\mathrm{T}$ or $\frac{1}{\sqrt{2}}( -1,1,0)^\mathrm{T}$, the corresponding projective measurement is $\Pi=\{ \frac{1}{2}[I_d\pm \frac{1}{\sqrt{2}}(\sigma_1+\sigma_2)]\}$.
ON THE RECONSTRUCTION OF THE MATRIX\ FROM ITS MINORS [**Mouftakhov A. V. (Ramat-Gan (Israel), BIU)**]{} [**Introduction.**]{} In this article we consider some algebraic methods which may be useful in certain inverse spectral problems \[1\]–\[3\]. Those papers consider the problem of recovering the coefficients of the boundary conditions. In \[1\]–\[3\] the boundary conditions of the inverse spectral problem are determined by a $2\times 4$ matrix $A$ (${\rm rank}\, A = 2$): $$A = \left[ \begin{array}{cccc} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \end{array} \right]$$ The boundary conditions determined by the $2\times 4$ matrix $A$ and those determined by the $2\times 4$ matrix $B$ (where ${\rm rank}\, A = {\rm rank}\, B = 2$) are equivalent if and only if the matrices $A$ and $B$ are linearly equivalent, i.e. there exists a non-singular ${2 \times 2}$ matrix $S$ such that $ A = SB$ (see \[3\]). Hence, the search for the boundary conditions is equivalent to finding the class of linearly equivalent $2\times 4$ matrices. Let $$A_{ij} = \left| \begin{array}{cc} a_{1i} & a_{1j} \\ a_{2i} & a_{2j} \end{array} \right| \quad (i=1,2,3,4;\ j=1,2,3,4)$$ be the minors of the matrix $A$; recall that $A_{ij}=-A_{ji}$. By the methods of \[1\]–\[3\], the minors $A_{12},A_{13},A_{14},A_{23},A_{24},A_{34}$ can be found up to a common constant factor. We want to find a matrix which has minors $A_{12}$, $A_{13}$, $A_{14}$, $A_{23}$, $A_{24}$, $A_{34}$; more precisely, we want to find the entries of such a matrix.
§ 1 [**Reconstruction of a matrix from its minors.**]{} [**Theorem 1.**]{} *Let $A$ and $B$ be $2\times 4$ matrices with ${\rm rank}\, A = {\rm rank}\, B = 2$.\ Let $ A_{ij} = \left| \begin{array}{cc} a_{1i} & a_{1j} \\ a_{2i} & a_{2j} \end{array} \right| $ $(i=1,2,3,4;\ j=1,2,3,4)$ be the minors of the matrix $A$,\ and $ B_{ij} = \left| \begin{array}{cc} b_{1i} & b_{1j} \\ b_{2i} & b_{2j} \end{array} \right| $ $(i=1,2,3,4;\ j=1,2,3,4)$ be the minors of the matrix $B$.* Let $\left< {\bf a}_1,{\bf a}_2 \right>$ denote the span of the row-vectors ${\bf a}_1= (a_{11}, a_{12}, a_{13}, a_{14})$ and ${\bf a}_2= (a_{21}, a_{22}, a_{23}, a_{24})$, and $\left< {\bf b}_1,{\bf b}_2 \right>$ the span of the row-vectors ${\bf b}_1= (b_{11},b_{12}, b_{13}, b_{14})$ and ${\bf b}_2= (b_{21},b_{22}, b_{23}, b_{24})$. Then the following statements are equivalent: i\. There exists a number $t \ne 0$ such that $ A_{ij } = t \, B_{ij}$ $(i = 1, 2,3,4;\ j = 1, 2, 3, 4)$, where $t$ does not depend on the suffixes $ i, j$ chosen. ii\. $\left< {\bf a}_1,{\bf a}_2 \right> = \left< {\bf b}_1,{\bf b}_2 \right>.$ iii\. The matrices $A$ and $B$ are linearly equivalent, i.e. there exists a non-singular ${2 \times 2}$ matrix $S$ such that $ B = S \, A$. [**Proof.**]{} [**i $\Rightarrow$ ii.**]{} A vector $(x_1,x_2,x_3,x_4)$ lies in the linear envelope $\left< {\bf a}_1,{\bf a}_2 \right>$ if and only if the matrix $$\left[ \begin{array}{cccc} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ x_1 & x_2 & x_3 & x_4 \end{array} \right]$$ is of rank $2$. In this case the determinant of every $3\times 3$ submatrix must be zero. Expanding these determinants in terms of the last row, we get $$\left\{ \begin{array}{ccccccccc} A_{23}x_1&+&A_{31}x_2&+&A_{12}x_3& & & = & 0 \\ A_{24}x_1&+&A_{41}x_2& & &+&A_{12}x_4 & = & 0 \\ A_{34}x_1&+& & & A_{41}x_3&+&A_{13}x_4& = & 0 \\ & & A_{34}x_2&+&A_{42}x_3&+&A_{23}x_4& = & 0 \end{array} \right.
\eqno (1)$$ Similarly, $(x_1,x_2,x_3,x_4)$ lies in the linear envelope $\left< {\bf b}_1,{\bf b}_2 \right>$ if and only if $$\left\{ \begin{array}{ccccccccc} B_{23}x_1&+&B_{31}x_2&+&B_{12}x_3& & & = & 0 \\ B_{24}x_1&+&B_{41}x_2& & &+&B_{12}x_4 & = & 0 \\ B_{34}x_1&+& & & B_{41}x_3&+&B_{13}x_4& = & 0 \\ & & B_{34}x_2&+&B_{42}x_3&+&B_{23}x_4& = & 0 \end{array} \right.\eqno (2)$$ By assumption, $A_{ij } = t \, B_{ij}$ $(i = 1, 2,3,4;\ j = 1, 2, 3, 4)$, where $t\ne 0$ does not depend on the suffixes $ i, j$ chosen. Hence the systems of linear equations (1) and (2) are equivalent. Therefore $\left< {\bf a}_1,{\bf a}_2 \right> = \left< {\bf b}_1,{\bf b}_2 \right>.$ [**ii $\Rightarrow$ iii.**]{} Let ${\bf L}=\left< {\bf a}_1,{\bf a}_2 \right> = \left< {\bf b}_1,{\bf b}_2 \right>.$ Since ${\rm rank}\, A = {\rm rank}\, B = 2$, the pair ${\bf a}_1,{\bf a}_2$ is a basis of ${\bf L}$, and ${\bf b}_1, {\bf b}_2$ is a basis of ${\bf L}$ too. Therefore there exists a non-singular ${2 \times 2}$ matrix $S$ of basis transformation such that ${\bf b}_i=s_{i1}\, {\bf a}_1 + s_{i2}\, {\bf a}_2, (i = 1, 2).$ Hence, $ B = S \, A$. [**iii $\Rightarrow$ i.**]{} Suppose that $ B = S \, A$, where $S$ is a non-singular ${2 \times 2}$ matrix. We consider the ${2 \times 2}$ submatrix of $A$ obtained by selecting the columns with suffixes $i, j$ and the corresponding submatrix of $B$. We have $$\left[ \begin{array}{cc} b_{1i} & b_{1j} \\ b_{2i} & b_{2j} \end{array} \right] = \left[ \begin{array}{cc} s_{11} & s_{12} \\ s_{21} & s_{22} \end{array} \right] \, \left[ \begin{array}{cc} a_{1i} & a_{1j} \\ a_{2i} & a_{2j} \end{array} \right]$$ Taking determinants of both sides, we find that $ B_{ij} = (\det S)\, A_{ij}$, i.e. $ A_{ij } = t \, B_{ij}$ with $t = (\det S)^{-1} \ne 0$, which does not depend on the suffixes $ i, j$. The theorem is proved. If we know the minors $ A_{ij }$ $(i = 1, 2,3,4;\ j = 1, 2, 3, 4)$ of the matrix $A$, then by (1) we can find a matrix which is linearly equivalent to $A$.
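The implication iii $\Rightarrow$ i is easy to confirm numerically. The sketch below (Python with NumPy; the particular matrices are arbitrary illustrative choices) multiplies a rank-2 matrix $A$ by a non-singular $S$ and checks that every $2\times2$ minor is scaled by the same factor $\det S$.

```python
import numpy as np

def minors(M):
    # All 2x2 minors M_ij of a 2x4 matrix, keyed by the column pair (i, j).
    return {(i, j): M[0, i] * M[1, j] - M[0, j] * M[1, i]
            for i in range(4) for j in range(i + 1, 4)}

A = np.array([[1.0, 0.0, 2.0, -1.0],
              [0.0, 1.0, 3.0,  4.0]])      # rank 2
S = np.array([[2.0, 0.0],
              [1.0, 3.0]])                 # non-singular, det S = 6
B = S @ A
mA, mB = minors(A), minors(B)
t = np.linalg.det(S)
# Every minor of B equals det(S) times the corresponding minor of A.
ok = all(abs(mB[ij] - t * mA[ij]) < 1e-9 for ij in mA)
```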
§ 2 [**Examples.**]{} Since the minors are determined only up to a common constant factor, in each example below we use this freedom to normalize the distinguished nonzero minor to $1$ (e.g. $A_{12}=1$ in Example 1). [**Example 1.**]{} Let $A_{12}\ne 0$; then (1) is equivalent to $$\left\{ \begin{array}{ccccccccc} A_{23}x_1&+&A_{31}x_2&+&A_{12}x_3& & & = & 0 \\ A_{24}x_1&+&A_{41}x_2& & &+&A_{12}x_4 & = & 0 \end{array} \right.$$ and the matrix $A$ is linearly equivalent to the matrix $$\left[ \begin{array}{cccc} 1 & 0 & -A_{23} & -A_{24} \\ 0 & 1 & A_{13} & A_{14} \end{array} \right].$$ [**Example 2.**]{} Let $A_{13}\ne 0$; then (1) is equivalent to $$\left\{ \begin{array}{ccccccccc} A_{23}x_1&+&A_{31}x_2&+&A_{12}x_3& & & = & 0 \\ A_{34}x_1&+& & & A_{41}x_3&+&A_{13}x_4& = & 0 \end{array} \right.$$ and the matrix $A$ is linearly equivalent to the matrix $$\left[ \begin{array}{cccc} 1 & A_{23} & 0 & -A_{34} \\ 0 & A_{12} & 1 & A_{14} \end{array} \right].$$ [**Example 3.**]{} Let $A_{14}\ne 0$; then (1) is equivalent to $$\left\{ \begin{array}{ccccccccc} A_{24}x_1&+&A_{41}x_2& & &+&A_{12}x_4 & = & 0 \\ A_{34}x_1&+& & & A_{41}x_3&+&A_{13}x_4& = & 0 \end{array} \right.$$ and the matrix $A$ is linearly equivalent to the matrix $$\left[ \begin{array}{cccc} 1 & A_{24} & A_{34}& 0 \\ 0 & A_{12} & A_{13}& 1 \end{array} \right].$$ [**Example 4.**]{} Let $A_{23}\ne 0$; then (1) is equivalent to $$\left\{ \begin{array}{ccccccccc} A_{23}x_1&+&A_{31}x_2&+&A_{12}x_3& & & = & 0 \\ & & A_{34}x_2&+&A_{42}x_3&+&A_{23}x_4& = & 0 \end{array} \right.$$ and the matrix $A$ is linearly equivalent to the matrix $$\left[ \begin{array}{cccc} A_{13}&1&0 & -A_{34} \\ - A_{12} &0&1& A_{24} \end{array} \right].$$ [**Example 5.**]{} Let $A_{24}\ne 0$; then (1) is equivalent to $$\left\{ \begin{array}{ccccccccc} A_{24}x_1&+&A_{41}x_2& & &+&A_{12}x_4 & = & 0 \\ & & A_{34}x_2&+&A_{42}x_3&+&A_{23}x_4& = & 0 \end{array} \right.$$ and the matrix $A$ is linearly equivalent to the matrix $$\left[ \begin{array}{cccc} A_{14}&1&A_{34}&0 \\ - A_{12} &0& A_{23}&1 \end{array} \right].$$ [**Example 6.**]{} Let $A_{34}\ne 0$, then (1) is
equivalent to $$\left\{ \begin{array}{ccccccccc} A_{34}x_1&+& & & A_{41}x_3&+&A_{13}x_4& = & 0 \\ & & A_{34}x_2&+&A_{42}x_3&+&A_{23}x_4& = & 0 \end{array} \right.$$ and the matrix $A$ is linearly equivalent to the matrix $$\left[ \begin{array}{cccc} A_{14}&A_{24}&1&0 \\ - A_{13} &- A_{23}&0&1 \end{array} \right].$$ § 3 [**The Plücker relation.**]{} [**Theorem 2.**]{} *Let $A_{12}, A_{13}, A_{14}, A_{23}, A_{24}, A_{34}$ be numbers, not all equal to zero. Then the following statements are equivalent:* i\. There exists a $2\times 4$ matrix $A$ such that $A_{12}, A_{13}, A_{14}, A_{23}, A_{24}, A_{34}$ are the minors of $A$ and ${\rm rank} \, A =2$. ii\. The following condition is satisfied: $$A_{12} \, A_{34} - A_{13} \, A_{24} + A_{14} \, A_{23} = 0. \eqno (3)$$ This condition is called the [Plücker relation]{} (see \[4\], \[5\]). [**Proof.**]{} [**i $\Rightarrow$ ii.**]{} Since $$\left| \begin{array}{cccc} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \end{array} \right| =0,$$ by Laplace’s expansion of this determinant we get $$2 \, A_{12} \, A_{34} - 2 \, A_{13} \, A_{24} + 2 \, A_{14} \, A_{23} = 0.$$ Hence, the Plücker condition (3) is satisfied. [**ii $\Rightarrow$ i.**]{} Since the numbers are not all zero, for at least one pair of suffixes $i,j$, $A_{ij}$ is not equal to zero. Let us assume, for definiteness, that $A_{12} \ne 0$. Then by the Plücker condition (3) we get $$A_{34}= \frac{ A_{13} \, A_{24} - A_{14} \, A_{23}}{A_{12}} .$$ Consider the following $2\times 4$ matrix $$\left[ \begin{array}{cccc} A_{12} & 0 & -A_{23} & -A_{24} \\ 0 & 1 & \frac{ A_{13}}{A_{12}} & \frac {A_{14}} {A_{12}} \end{array} \right].$$ It is easy to check that $A_{12}, A_{13}, A_{14}, A_{23}, A_{24}, A_{34}$ are the minors of this matrix. The theorem is proved.
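Both directions of Theorem 2 can be exercised in a few lines. The sketch below (NumPy; the sample matrix is an arbitrary illustrative choice) computes the six minors of a rank-2 matrix, verifies the Plücker relation (3), and checks that the matrix built in the proof reproduces exactly the same minors.

```python
import numpy as np

def minor(M, i, j):
    # 2x2 minor of a 2x4 matrix formed from columns i and j.
    return M[0, i] * M[1, j] - M[0, j] * M[1, i]

A = np.array([[2.0, 1.0, 0.0,  3.0],
              [1.0, 2.0, 1.0, -1.0]])   # rank 2
A12, A13, A14 = minor(A, 0, 1), minor(A, 0, 2), minor(A, 0, 3)
A23, A24, A34 = minor(A, 1, 2), minor(A, 1, 3), minor(A, 2, 3)

# Direction i => ii: genuine minors satisfy the Plücker relation (3).
pluecker = A12 * A34 - A13 * A24 + A14 * A23

# Direction ii => i: the matrix from the proof (case A12 != 0) has the
# same six minors.
R = np.array([[A12, 0.0, -A23,      -A24],
              [0.0, 1.0, A13 / A12, A14 / A12]])
same = all(abs(minor(R, i, j) - minor(A, i, j)) < 1e-12
           for i in range(4) for j in range(i + 1, 4))
```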
§ 4 [**Approximation by a method of orthogonal projection.**]{} Measurements and calculations inevitably involve small errors. Hence it is possible that the numbers $ \widetilde{A}_{12}$, $ \widetilde{A}_{13} $, $\widetilde{A}_{14}$, $\widetilde{A}_{23}$, $ \widetilde{A}_{24}$, $ \widetilde{A}_{34}$ found by the methods of \[1\]–\[3\] do not satisfy the Plücker relation (3); in that case they are not the minors of any matrix. We must therefore find numbers $A_{12}$, $A_{13}$, $A_{14}$, $A_{23}$, $A_{24}$, $A_{34}$ which are close to the values $ \widetilde{A}_{12}$, $ \widetilde{A}_{13} $, $\widetilde{A}_{14} $, $\widetilde{A}_{23}$, $ \widetilde{A}_{24}$, $ \widetilde{A}_{34}$ and which satisfy the Plücker relation (3). We have $$A_{12} \, A_{34} - A_{13} \, A_{24} + A_{14} \, A_{23} = 0.$$ By definition, put $ \, x_1 = A_{12}, \, x_2 = A_{34}, \, x_3 = A_{13}, \, x_4 = -A_{24}, \, x_5 = A_{14}, \, x_6 = A_{23}. $ Using this definition, the Plücker relation becomes $$x_1\, x_2 + x_3\, x_4+ x_5\, x_6 = 0, \eqno (4)$$ which characterizes a surface $F$ in 6-dimensional space. By definition, put $ \, y_1 = \widetilde{A}_{12}, \, y_2 = \widetilde{A}_{34}, \, y_3 = \widetilde{A}_{13}, \, y_4 = -\widetilde{A}_{24}, \, y_5 = \widetilde{A}_{14}, \, y_6 = \widetilde{A}_{23}, $ and also put $$\begin{array}{c} X = (x_1, x_2 , x_3, x_4, x_5, x_6) ,\quad Y = (y_1, y_2, y_3, y_4, y_5, y_6), \\ X^*=(x_2, x_1, x_4, x_3, x_6, x_5),\quad Y^*=(y_2, y_1, y_4, y_3, y_6, y_5), \\ (\vec X,\vec Y)=x_1\, y_1 + x_2\, y_2+x_3\, y_3 + x_4\, y_4+x_5\, y_5 + x_6\, y_6, \end{array}$$ where $\quad \vec X=\overrightarrow{OX}$, $\vec Y=\overrightarrow{OY}$, and $O$ is the origin. Let $X$ be the orthogonal projection of $Y$ onto surface (4). The vector $\vec X^*$ is normal to the surface (4) at the point $X$. This is equivalent to the following equations: $$\vec Y = \vec X + p\,\vec X^*,\eqno (5)$$ $$(\vec X, \vec X^*) = 0, \eqno (6)$$ where $p$ is a real number.
Solving the system of linear equations (5) for the unknowns $x_1, x_2 , x_3, x_4, x_5, x_6$, we obtain $$\vec X = \frac1{1-p^2}\, (\vec Y - p\, \vec Y^*). \eqno (7)$$ From (7) it is easy to obtain $$\vec X^* = \frac1{1-p^2}\, (\vec Y^* - p\, \vec Y). \eqno (8)$$ Substituting (7) for $\vec X$ and (8) for $\vec X^*$ in (6), we obtain $$(\vec Y - p\, \vec Y^*,\, \vec Y^* - p\, \vec Y) = 0.$$ Notice that $$(\vec Y,\vec Y^*)\ne 0,\quad (\vec Y^*,\vec Y^*)=(\vec Y,\vec Y), \quad (\vec Y^*,\vec Y)=(\vec Y,\vec Y^*).$$ Therefore, $$p^2 - 2\, p\, \frac {(\vec Y,\vec Y)}{(\vec Y,\vec Y^*)} + 1 = 0.$$ This quadratic equation has two roots $$p= \frac {(\vec Y,\vec Y)\mp\sqrt{(\vec Y,\vec Y)^2 - (\vec Y,\vec Y^*)^2}}{(\vec Y,\vec Y^*)}.$$ If $X$ is close to $Y$, then $|p|\ll 1$, and thus we take $$p= \frac {(\vec Y,\vec Y)-\sqrt{(\vec Y,\vec Y)^2 - (\vec Y,\vec Y^*)^2}}{(\vec Y,\vec Y ^*)}. \eqno (9)$$ The vector $\vec X$ can be found by using (7) and (9). The coordinates $A_{12}$, $A_{13}$, $A_{14}$, $A_{23}$, $A_{24}$, $A_{34}$ of $X$ are then the minors of a matrix. This matrix can be found by using (1). REFERENCES 1. [*Akhatov I. Sh., Akhtyamov A. M.*]{} Determination of the form of attachment of a rod using the natural frequencies of its flexural oscillations.  // J. Appl. Math. Mech., vol. 65 (2001), no. 2, pp. 283–290. 2. [*Akhtyamov A. M.*]{} On uniqueness of the reconstruction of the boundary conditions of a spectral problem from its spectrum (Russian).  // Fundam. Prikl. Mat., vol. 6 (2000), no. 4, pp. 995–1006. 3. [*Akhtyamov A. M., Mouftakhov A. V.*]{} Identification of boundary conditions using natural frequencies.  // Inverse Problems in Engineering, vol. 22, no. 3, pp. 393–408 (2004). 4. [*Hodge W. V. D., Pedoe D.*]{} Methods of algebraic geometry. Vol. 1. University Press, Cambridge, 1994. 5. [*Postnikov M. M.*]{} Linear Algebra and Differential Geometry. MIR, Moscow, 1982.
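The projection method of § 4 can be sketched directly from formulas (5)–(9). In the code below (NumPy; the perturbed values are arbitrary illustrative numbers), a vector of noisy "minors" that violates the Plücker relation is projected back onto the surface (4), after which the relation holds to machine precision.

```python
import numpy as np

def project_onto_pluecker(Y):
    # Coordinates paired as (x1,x2), (x3,x4), (x5,x6); * swaps each pair.
    Ys = Y[[1, 0, 3, 2, 5, 4]]
    yy, yys = Y @ Y, Y @ Ys          # (Y,Y) and (Y,Y*); requires (Y,Y*) != 0
    p = (yy - np.sqrt(yy**2 - yys**2)) / yys   # root (9) with small |p|
    return (Y - p * Ys) / (1.0 - p**2)         # formula (7)

# Noisy values (A12, A34, A13, -A24, A14, A23) violating relation (4):
Y = np.array([1.0, 11.2, 3.1, 0.9, 4.0, -2.05])
X = project_onto_pluecker(Y)
residual = X[0]*X[1] + X[2]*X[3] + X[4]*X[5]   # vanishes on the surface (4)
```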
--- author: - 'Clément Dombry[^1] Paul Jung[^2]' title: 'Approximation of stable random measures and applications to linear fractional stable integrals.' --- [**Key words:**]{} stable random measure, moving average, fractional stable motion, Lindeberg-Feller. [**AMS Subject classification:**]{} 60G22, 60G52, 60G57, 60H05 Introduction ============ Stable integration is an important tool in the theory of $\aa$-stable processes. Similar to the theory for Gaussian processes, it is known ([@samorodnitsky1994stable Sec. 13.2]) that all stable processes satisfying mild conditions can be constructed from integrals of the form $$\label{stableprocess} X_t=\int_E f_t(x)\, M_\alpha(dx),\qquad t\in T,$$ where $M_\alpha$ is an independently scattered $\alpha$-stable random measure on the measurable space $(E,{{\mathcal E}})$ with control measure $m$ and $(f_t)_{t\in T}$ is a kernel such that $f_t\in L^\alpha(E,{{\mathcal E}},m)$ for all $t\in T$. If $T=\R$ and $f_t=1_{[0,t]}$ then $X_t$ is an $\aa$-stable Lévy motion having independent and stationary increments (the symmetric case is the stable analog of Brownian motion). In this work, we approximate the finite-dimensional distributions of such stable integrals using a Riemann sum-type scheme. These approximations are useful for the dual purposes of intuition and simulation of stable processes. The weak convergence of our scheme is facilitated by a Lindeberg-Feller type stable limit theorem, which we have not previously seen in the literature. A couple of different discrete approximations of stable processes have appeared previously in the literature. One approach is LePage’s series, which was improved upon in a series of papers by J. Rosinski (see [@rosinski2001series] and the references therein). In the present paper, we use a lattice approximation of stable integrals which extends, to $f\in L^\aa(\R^d)$, the “moving-average" discrete approximations of L-FSMs in [@davydov; @maejima1983class; @astrauskas1983limit; @davis1985limit] corresponding to the case $f_t=1_{[0,t]}$.
The work of [@kasahara1988weighted] improved upon these earlier papers to obtain discrete approximations of slightly more general stable processes, while [@avram1992weak] showed that tightness of discretized L-FSMs cannot be achieved in the $J_1$-Skorokhod topology. In [@kokoszka1995fractional], it was shown that discretized L-FSMs satisfy the fractional ARIMA equations, and a closer look at issues concerning absolute convergence was taken. A secondary purpose of this work is to generalize certain Gaussian integrals to the $\aa$-stable case and, as in [@kasahara1988weighted], we then approximate such integrals with the scheme just described. In the past fifteen years or so, there has been an effort to develop stochastic integrals with respect to a broader class of Gaussian processes than just Brownian motion. In particular, consider Gaussian processes with stationary increments, but replace the independent increments condition with the weaker condition of self-similarity. Normalizing the variance at $t=1$ to unity, one gets the single parameter family of fractional Brownian motions (FBM) with Hurst self-similarity parameter $0<H<1$. The theory of integration with respect to FBM is difficult because FBM is not a semi-martingale. Nevertheless, rapid progress has been made using several different approaches (with significant overlap between them). Roughly speaking, they can be categorized into four approaches which use, respectively, fractional derivatives and integrals, Malliavin calculus, fractional white noise theory, and path-wise integration (see [@biagini2008]). In Section \[sec:lfsm\], we consider a generalization of the FBM integral based on fractional integro-differentiation to $\aa$-stable analogs of FBM called the linear fractional stable motions[^3] (L-FSMs). By “$\aa$-stable analog", we mean that an L-FSM is a self-similar, symmetric stable process with stationary increments. Any process with these properties is called an FSM.
In contrast to the Gaussian picture, for each admissible $(\aa,H)$ pair, there is not a unique (normalized) FSM, up to finite-dimensional distributions. Moreover, for each $(\aa,H)$ pair with $0<H<1$ and $H\neq 1/\aa$, there are infinitely many L-FSMs. These L-FSMs are represented by the stable integral above, where $E=\R$ is equipped with Lebesgue measure, $M_\aa$ is symmetric, and $$\label{LFSMkernel} f^{a,b}_t(x):= a\big[(t-x)_+^{H-1/\aa} - (-x)_+^{H-1/\aa}\big] + b\big[(t-x)_-^{H-1/\aa} - (-x)_-^{H-1/\aa}\big]$$ for properly normalized ordered pairs $(a,b)$ where $a,b\ge 0$ (see [@samorodnitsky1994stable Sec. 7.4] for more details). Here $x_-=|x|$ if $x<0$ and $0$ otherwise (similarly for $x_+$). The family of L-FSMs were the first FSMs to be constructed and studied, and much is known about them. Our motivation comes partly from [@pipiras2000integration] which handles the $\aa=2$ case. As in their work, we restrict ourselves to deterministic integrands, but [@pipiras2000integration] shows that even in the $\aa=2$ case, the theory for deterministic integrands is not completely trivial. An integral with respect to L-FSM will be defined as an integral with respect to a linear fractional stable random measure which we define for $\aa>1$ and all permissible Hurst parameters $0<H<1$. We have recently learned that when $H>1/\aa$, [@maejima2008limit] has developed similar integrals and also discrete approximations for them. However, the convergence results for their approximations concern a strictly smaller class of integrands. In particular, they require bounded integrands which are piece-wise continuous (we require no continuity or boundedness) and which must satisfy a faster tail decay than ours. The rest of the paper is organized as follows. In Section 2, we review the notion of stable random measures and present our result concerning the convergence of discretizations of stable random measures. In Section 3, integrals with respect to L-FSMs are defined, and their approximation by moving averages of i.i.d.
random variables are discussed. Section 4 is devoted to the proofs. Discrete approximations of random measures =========================================== A useful viewpoint is that a random measure is a stochastic process: Let $(E,{{\mathcal E}})$ be a measurable space and $V$ be a vector space of measurable functions $f:E\to{\mathbb{R}}$. A random measure on $(E,{{\mathcal E}})$ is a stochastic process $(M[f])_{f\in V}$ satisfying the linearity property: for all $a_1,a_2\in{\mathbb{R}}$ and $f_1,f_2\in V$, $$\label{eq:lin} M[a_1f_1+a_2f_2]=a_1M[f_1]+a_2M[f_2] \quad {\rm almost\ surely.}$$ Let us make a few comments concerning this definition. First of all, the linearity property ensures that the finite-dimensional distributions of the process $(M[f])_{f\in V}$ are determined by its one-dimensional distributions. If ${\mathbf{1}}_A\in V$ for $A\in{{\mathcal E}}$, we write $M(A)=M[{\mathbf{1}}_A]$, which is thought of as the random measure of the set $A$. If $M(A_i)$ are independent for disjoint sets $A_1,\ldots, A_k$, then $M$ is said to be [*independently scattered*]{}. For general $f\in V$, to emphasize the analogy with usual integration, the notation $M[f]=\int_E f(x)M(dx)$ is often used. Finally, if one so pleases, one may also view the random measure $M$ as a random linear functional on the linear space $V$ (see for example [@dudley1969random]). Let ${{\mathcal S}}_\aa(\sigma)$ be the symmetric $\alpha$-stable (S$\aa$S) law of index $\alpha\in (0,2]$ with $\sigma \geq 0$ being the scale parameter[^4]. We denote the characteristic function of ${{\mathcal S}}_\aa(\sigma)$ by $$\label{eq:fcstable1} \lambda_\alpha(\theta)=\exp\left(-|\sigma\theta|^\alpha \right), \quad \theta\in{\mathbb{R}}.$$ To reduce notation, when $\sigma=1$ we simply write ${{\mathcal S}}_\alpha={{\mathcal S}}_\aa(1)$. We now consider the class of independently scattered random measures, i.e. those where $M[f]$ is S$\aa$S for all $f\in V$.
Suppose that $(E,{{\mathcal E}},m)$ is a measure space where $m$ is a $\sigma$-finite measure and ${{\mathcal E}}_0$ is the class of measurable sets with finite $m$-measure. Following [@samorodnitsky1994stable Sec. 3.3], we say that the independently scattered random measure $M_\alpha$ has [*control measure*]{} $m$ if $M_\alpha(A)$ has distribution ${{\mathcal S}}_\alpha(m(A)^{1/\aa})$ for all $A\in{{\mathcal E}}_0$. For such random measures, it can be shown that $V=L^\aa(E)$ (see [@samorodnitsky1994stable Ch. 3]) and that the distributions $M_\aa(A), A\in{{\mathcal E}}_0$ uniquely determine the characteristic functions $$\E\exp\big\{i\theta M_\aa[f]\big\}= \exp\Big(-|\theta|^\aa\int_E |f(x)|^\aa\, m(dx)\Big).$$ In the Gaussian case $\alpha=2$, this is just the usual Wiener integral. In the rest of this section we develop a discrete approximation of $M_\aa$ when $E={\mathbb{R}}^d$ with Lebesgue control measure. We begin by recalling that the domain of attraction of ${{\mathcal S}}_\aa $ consists of random variables $\xi$ such that $$\label{eq:xi1} a_n^{-1}\Big(\sum_{k=1}^n \xi_k -b_n\Big) \Longrightarrow {{\mathcal S}}_\aa \quad \mathrm{as}\ n\to\infty,$$ where $a_n>0$ and $b_n\in{\mathbb{R}}$ are normalization constants and the $\xi_k$’s are i.i.d. copies of $\xi$. In the sequel, we will assume $\eta\sim{{\mathcal S}}_\aa$, and that $\xi$ is not only in the domain of attraction of $\eta$, but also that the normalization constants are precisely $$\label{eq:xi2} a_n=n^{1/\alpha}\quad \mbox{and}\quad b_n=0,\quad n\geq 1.$$ When $\aa<2$, such distributions are said to be in the domain of [*normal*]{} attraction of ${{\mathcal S}}_\aa$, which is not to be confused with the normal domain of attraction. We propose a discrete approximation of $M_\alpha$ based on the lattice $h{\mathbb{Z}}^d\subset {\mathbb{R}}^d$ with edge length $h$. Let $(\xi_k)_{k\in{\mathbb{Z}}^d}$ be a random field of i.i.d.
copies of $\xi$ satisfying the normalization conditions above, and formally define $$\label{eq:series} M_\aa^h[f]:=\sum_{k\in{\mathbb{Z}}^d} f^h(k)\xi_{k},$$ where for $I^d=[0,1)^d$, $f^h:\Z^d\mapsto \R$ is $$\begin{aligned} \label{def:superh} f^h(k)&:=&\int_{h(k+I^d)} f(x) \,dx, \quad f\in L^1_{\text{loc}}(\R^d).\end{aligned}$$ Note that we have implicitly fixed an enumeration $\{k_n, n\geq 1\}$ of $\Z^d$ and convergence of $\sum_{k\in{\mathbb{Z}}^d}a_k$ really means convergence of $\sum_{n=1}^\infty a_{k_n}$. The discrete random measures $M^h_\aa$ approximate $M_\aa$ in the following sense: \[maineq2prop\] Fix $\aa\in(0,2]$. If $\aa\in[1,2]$, let $f_t\in L^\aa(\R^d)$ for all $t$ in an index set $T$. If $\aa\in(0,1)$, for a fixed $\epsilon>0$ let $f_t\in L^{\aa-\epsilon}\cap L^1(\R^d)$ for all $t\in T$. Then as $h\to 0$, $$\label{maineq2} M_\aa^h[f_t]\ \stackrel{fdd}{\longrightarrow}\ M_\aa[f_t].$$ The notation $\stackrel{fdd}{\longrightarrow}$ denotes weak convergence of the finite dimensional distributions, i.e., convergence in distribution of $ M_\aa^h[f]$ for all linear combinations $f=\th_1f_{t_1}+\cdots +\th_nf_{t_n}$. When the functions are indexed by one-dimensional time, it was shown in [@avram1992weak] that even for the simple family $f_t=1_{[0,t]}\in L^\aa(\R)$, the above convergence does not hold in the $J_1$-Skorokhod topology[^5]. Theorem \[maineq2prop\] will follow from a Lindeberg-Feller type result for stable distributions which we state in Theorem \[theo:whitecase\] below. Let us make one more remark before stating Theorem \[theo:whitecase\]. One motivation for this scheme was to provide a means to simulate a process $X_t=\int_{{\mathbb{R}}^d}f_t(x)M_\alpha(dx)$. For such simulations, it is natural to let the $\xi_k$’s be i.i.d. copies of ${{\mathcal S}}_\aa $ (rather than only in the domain of normal attraction).
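For such simulations, symmetric $\alpha$-stable variates can be drawn with the classical Chambers-Mallows-Stuck method, and the stability property then predicts the scale of any weighted sum. The sketch below (NumPy; the weights, evaluation point, and sample size are arbitrary illustrative choices, and the sampler assumes $\alpha\neq1$) checks a finite weighted sum $\sum_k u_k\xi_k$ against its predicted ${{\mathcal S}}_\aa(\|u\|_{\ell^\aa})$ law via the empirical characteristic function.

```python
import numpy as np

rng = np.random.default_rng(0)

def sas(alpha, size):
    # Chambers-Mallows-Stuck sampler for S_alpha(1), valid for alpha != 1.
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

alpha = 1.5
u = np.array([0.5, -1.0, 2.0, 0.25])               # weights in ell^alpha
sigma = (np.abs(u) ** alpha).sum() ** (1 / alpha)  # predicted scale of the sum

n = 200_000
S = u @ sas(alpha, (len(u), n))     # n i.i.d. copies of sum_k u_k xi_k
theta = 0.7
emp = np.cos(theta * S).mean()      # empirical characteristic function
ref = np.exp(-(sigma * theta) ** alpha)   # char. function of S_alpha(sigma)
```

The empirical and theoretical characteristic functions agree to within Monte Carlo error, which is the one-dimensional content of the stability property used throughout this section.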
If one is concerned only with one-dimensional distributions (a single function $f$), then a better approximation is given by replacing $f^h(k)$ in the series above by $$\label{eq:one-dd} u_k:=\left(\int_{h(k+I^d)}f(x)^{<\aa>} \,dx\right)^{<1/\aa>}$$ where we have used the notation $x^{<\aa>}:=\mathrm{sign}(x)|x|^\alpha$. In fact, using $u_k$, one can check that the approximation is exact, and the right and left sides of the convergence are equal in distribution for every $h>0$. The reason we have not used this replacement for the general approximation scheme is that the resulting object is no longer a random measure, because the linearity property does not hold. The analysis of the finite-dimensional distributions then becomes much more difficult. \[theo:whitecase\] Suppose $(\xi_{k,j})_{k,j\in\N}$ is an i.i.d. array of random variables in the domain of normal attraction of ${{\mathcal S}}_\alpha$, $\alpha\in(0,2]$, and $(u^{(j)})_{j\in\N}$ is a sequence of vectors in $\ell^\alpha$, i.e. $u^{(j)}:=(u^{(j)}_k)_{k\in\N}\in \ell^\alpha$ for all $j\in\N$. If 1. $\lim_{j\to\ff} \|u^{(j)}\|_{\ell^\alpha}=\sigma$ and\ 2. $\lim_{j\to\ff} \|u^{(j)}\|_{\ell^\infty}=0$ then $ \sum_{k}u_k^{(j)}\xi_{k,j}<\ff $ a.s. for each $j\in\N$ and $$\sum_{k\in\N} u_k^{(j)}\xi_{k,j} \Longrightarrow {{\mathcal S}}_\alpha(\sigma) \quad \mathrm{as}\ j\to\infty.$$ [**Remarks:**]{} 1. The condition that the $\xi_{k,j}$ be identically distributed can be relaxed slightly to the condition that $\E[ \exp(i\theta \xi_{k,j}) ] = 1-|\theta|^\alpha + o(|\theta|^\alpha)$ holds uniformly in $k,j$ as $\theta\to 0$. For example, they may be chosen from a finite family of distributions in the domain of normal attraction of ${{\mathcal S}}_\aa$. 2. The a.s. convergence $\sum_{k\in\N} u_k^{(j)}\xi_{k,j}<\infty$ in fact occurs if and only if $u=(u_k)_{k\in\N}\in\ell^{\aa}$ as will be seen in Lemma \[lem1\]. 3.
Although the series $\sum_{k\in\N} u_k^{(j)}\xi_{k,j}$ may not converge absolutely, switching the order of summation does not change the convergence in distribution to ${{\mathcal S}}_\aa(\sigma)$. This will be apparent in the proof. 4. In the Gaussian case, the result can be seen as a variant of the usual Lindeberg-Feller Theorem by noticing that condition 2, concerning $\ell^\infty$, is equivalent to $$\lim_{j\to\ff}\sum_k {\mathbf{1}}_{\{|u^{(j)}_k|>\eps\}}=0$$ for all $\eps>0$. More generally when , the result is related to Theorem 3.3 of [@petrov1995limit] which gives necessary and sufficient conditions for convergence of sums of independent triangular arrays to a given infinitely divisible distribution. In particular, the conditions of Theorem \[theo:whitecase\] above imply the [*infinite smallness*]{} condition (cf. Eq. (3.2) in [@petrov1995limit]). However, it is unclear how to obtain Theorem \[theo:whitecase\] from [@petrov1995limit Thm 3.3] in a manner simpler than the proof of Theorem \[theo:whitecase\] provided below. Linear fractional stable random measures {#sec:lfsm} ======================================== To simplify matters, in this section we will restrict our attention to the one-dimensional case $E=\R^1$ equipped with Lebesgue measure. For higher dimensions, see the first remark following Corollary \[theo:fraccase\]. Also, in this section we assume that $1<\aa\le 2$. Fractional integro-differentiation and L-FSM integrals ------------------------------------------------------ In this subsection we define the stochastic integration of suitable functions with respect to different L-FSMs in terms of stable random measures which are not independently scattered. This is achieved using fractional integrals and derivatives. The intuition behind our definition is based on two facts. The first is that fractional integrals and derivatives can be realized using convolutions, and the second is that convolutions are moving averages.
The practice of using fractional integro-differentiation for analogous integrals with respect to FBM was initiated in [@decreusefond1999stochastic], and was subsequently used in [@pipiras2000integration]. We note that the $M$ operator, which is fundamental in the development of the so-called WIS integral ([@elliot2003]), is simply fractional integro-differentiation in disguise. Before we define our integral, let us review some preliminaries concerning fractional integro-differentiation. The Riemann-Liouville integrals are defined, for $ f \in L^p(\R), 1\le p<1/ \delta$ and $0< \delta<1$, by $$\begin{aligned} \label{fracintegral} (I^{\delta}_{+} f )(x)&:=& \frac{1}{\Gamma(\delta)}\int_{-\infty}^x \frac{f(t)}{(x-t)^{1-\delta}}\, dt\\ &=& \frac{1}{\Gamma(\delta)}\int_{\R} \frac{f(t)}{(x-t)_+^{1-\delta}}\, dt\nn\\ (I^{\delta}_{-} f )(x)&:=& \frac{1}{\Gamma(\delta)}\int_{\R} \frac{f(t)}{(t-x)_+^{1-\delta}}\, dt\nn\end{aligned}$$ Our notation is consistent with the standard reference on this topic, [@samko1987integrals Sec. 5.1], where some basic properties of the above can be found. For example, if $ f $ is in the Schwartz space and we allow for $ \delta\in\N$, then gives the usual integral, as can be seen by Cauchy’s formula for repeated integration: $$\int_{-\infty}^x\int_{-\infty}^{t_{n}}\cdots\int_{-\infty}^{t_2} f (t_1)\, dt_1 \cdots dt_{n-1}\,dt_n = \frac{1}{(n-1)!}\int_{-\infty}^x(x-t)^{n-1} f (t)\, dt.$$ Also, the above fractional integrals have the semigroup property for $ \delta,\gamma>0$ and $ \delta+\gamma<1$: $$I^{\delta}_{\pm}I^{\gamma}_{\pm} f = I^{\delta+\gamma}_{\pm} f .$$ For sufficiently nice $ f $, this semigroup property extends to all $ \delta,\gamma>0$. Suppose $f\in \CC^1 $ and $f'\in L^{1}$. These are sufficient conditions for the following Riemann-Liouville derivatives to exist: $$\begin{aligned} \label{fracderiv} (\DD^{\delta}_{+} f )(x)&:=& \frac{1}{\Gamma(1-\delta)}\frac{d}{dx}\int_{-\infty}^x \frac{f(t)}{(x-t)^{\delta}}\, dt\\ (\DD^{\delta}_{-} f )(x)&:=& -\frac{1}{\Gamma(1-\delta)}\frac{d}{dx}\int_x^{\infty} \frac{f(t)}{(t-x)^{\delta}}\, dt.\nn\end{aligned}$$ If $ f \in L^1$, it is known that the inversion $\DD_{\pm}^\bb I^\bb_\pm f = f $ holds. Bringing the derivative inside the integral in , the Riemann-Liouville integrals and derivatives of $ f $ can be seen as convolutions of $f$ and $ f'$ with the family $$\label{def:w} w_{a,b}(x)=w_{a,b}^{(\bb)}(x):=ax_-^{-\bb} + bx_+^{-\bb}, \qquad \bb\in(0,1),$$ where we have set $\beta=1-\delta$. \[maindef\] Fix $1<\aa\le 2$ and $a,b\ge 0$. 1.
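A quick numerical sanity check on these formulas (a sketch of our own; the quadrature integrates the singular kernel exactly over each cell) uses the fact that $e^x$ is a fixed point of $I^\delta_+$ for every $\delta\in(0,1)$, since $\frac{1}{\Gamma(\delta)}\int_0^\infty s^{\delta-1}e^{-s}\,ds=1$:

```python
import numpy as np
from math import gamma

def rl_integral_at(f, x, delta, h=0.005, T=40.0):
    """Left Riemann-Liouville integral (I^delta_+ f)(x)
       = (1/Gamma(delta)) * int_0^T s^(delta-1) f(x - s) ds  (T truncates the tail).
       The singular factor s^(delta-1) is integrated exactly over each cell."""
    j = np.arange(int(T / h))
    w = (((j + 1) * h) ** delta - (j * h) ** delta) / gamma(delta + 1.0)
    return float(np.sum(w * f(x - (j + 0.5) * h)))

for delta in (0.3, 0.7):
    print(delta, rl_integral_at(np.exp, 0.0, delta))   # both close to e^0 = 1

# since (e^x)' = e^x, the same routine with delta -> 1 - delta also evaluates
# the Riemann-Liouville derivative of e^x, which again returns e^x
print(rl_integral_at(np.exp, 0.0, 1.0 - 0.3))
```

The same run also illustrates the semigroup property: applying orders $0.3$ and $0.4$ in succession agrees with order $0.7$, since $e^x$ is returned unchanged at every step.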
If $\bb\in(1/\aa,1)$, let $f\in L^1\cap L^\aa $. The [*linear fractional random measure*]{} with [*long range dependence*]{} is defined by $$\label{posintegral} M_{\aa,H} [f]:= M_\aa\left[a\,I^{1-\bb}_- f+b\,I^{1-\bb}_+ f\right]=M_\aa\left[f\ast w_{a,b}^{(\bb)}\right]$$ where the Hurst parameter is given by $H=1+1/\aa-\bb$. 2. If $\bb\in(0,1/\aa)$, let $f\in \CC^1 $ and $f'\in L^1\cap L^\alpha $. The [*linear fractional random measure*]{} with [*anti-persistence*]{} is defined by $$\label{negintegral} M_{\aa,H} [f]:= M_\aa\left[a\,\DD^\bb_- f+b\,\DD^\bb_+ f\right]=M_\aa\left[f'\ast w_{a,b}^{(\bb)}\right]$$ where $H=1/\aa-\bb$. It is not hard to check that $I^{1-\bb}_\pm f$ and $\DD^\bb_\pm f$ are in $L^\aa$ so that and are well-defined: to see this, split $w_{a,b}^{(\bb)}$ into an $L^1$ and $L^\aa$ function using $1_{[-\epsilon,\epsilon]} +1_{[-\epsilon,\epsilon]^c}$ and apply Young’s convolution inequality, $$\label{Young} \|f\ast g\|_{r}\le \|f\|_{p}\, \|g\|_{q}, \qquad \frac1p+\frac1q=\frac1r+1.$$ In fact, one can slightly improve the condition for to $f\in L^{\alpha(1+\alpha(1-\bb)+\epsilon)^{-1}}\cap L^{\alpha(1+\alpha(1-\bb)-\epsilon)^{-1}}$ for some $\eps>0$, and a similar condition can be found for and $f'$. However, in the interest of simple notation, we will not utilize these meager improvements in the sequel. Let us remark that the fact that is well-defined coincides with Proposition 3.2 in [@pipiras2000integration] for the Gaussian case. By the linearity of convolutions, it follows that $M_{\alpha,H}$ is a random measure. Also note that $M_{\alpha,H} [f]$ can be interpreted as the integral of $f$ with respect to a L-FSM in which case we write $$M_{\aa,H} [f]\equiv\int_\R f \,dL_{\aa,H}.$$ To check consistency with , we see that $$\begin{aligned} &&M_\aa\left[1_{[0,t]}\ast w_{a,b}\right]\nn\\ &=&\int_\R\left(\int_\R 1_{[0,t]}(y)\left(a(x-y)_-^{-\bb}+b(x-y)_+^{-\bb}\right)dy\right)M_\aa(dx)\nn\\ &=&\int_\R\left(\int_\R 1_{[0,t]}(y)\left(a(y-x)_+^{-\bb}+b(y-x)_-^{-\bb}\right)dy\right)M_\aa(dx)\nn\\ &=& \int_\R f_t^{a,b}(x)\, M_\aa(dx)\nn\end{aligned}$$ and $$\begin{aligned} &&M_\aa\left[(1_{[0,t]}'\ast w_{a,b})(x)\right]\nn\\ &=&\int_\R\int_\R 1_{[0,t]}'(y)\left(a(x-y)_-^{-\bb}+b(x-y)_+^{-\bb}\right)dy\, M_\aa(dx)\nn\\ &=& \int_\R f_t^{a,b}(x)\, M_\aa(dx).\nn\end{aligned}$$
When $f\in \CC^1 $ and $f'\in L^{1}$, one can rewrite as $$\begin{aligned} && \frac{1}{\Gamma(1-\bb)}\int_{-\infty}^x \frac{f'(t)}{(x-t)^{\bb}}\, dt\nn\\ &=& \frac{\bb}{\Gamma(1-\bb)}\int_0^\infty f' (x-t) \int_t^\infty s^{-\bb-1}\, ds\, dt\nn\\ &=& \frac{\bb}{\Gamma(1-\bb)}\int_0^\infty \frac{f(x)-f(x-s)}{s^{1+\bb}}\, ds.\nn\end{aligned}$$ The right-hand side above is slightly more general than and is called the [*Marchaud derivative*]{}. This is the fractional derivative used in [@pipiras2000integration]; however, to keep a unified notation in our approximations of the next subsection, we will continue with the Riemann-Liouville derivative. Discrete approximations of linear fractional stable measures ------------------------------------------------------------ Let $1<\alpha\leq 2$, and consider the stationary moving average process $(\hat\xi_k)_{k\in{\mathbb{Z}}}$ obtained by “linearly filtering" an i.i.d. sequence $(\xi_l)_{l\in{\mathbb{Z}}}$ in the domain of normal attraction of ${{\mathcal S}}_\aa$: $$\label{eq:defhatxi} \hat\xi_k:=\sum_{l\in{\mathbb{Z}}}\B_{k-l}\xi_l.$$ Lemma \[lem1\] shows that if $\B\in\ell^\alpha$, the series converges almost surely. Recall the definition of $f^h$ from and denote the inversion of a sequence by $\check\B_k:=\B_{-k}$. A first stab at approximating a L-FSM integral of $f$, as defined in the previous subsection, might be to mimic and look at $\sum_{k\in{\mathbb{Z}}} f^h_k\hat\xi_{k}$ for appropriate filters $\B$ (which would also depend on $h$). This is, for example, the approach of [@kasahara1988weighted] and [@maejima2008limit]. Then formally, $$\begin{aligned} \label{convolution convergence} \sum_{k\in{\mathbb{Z}}} f^h_k\hat\xi_{k}&=&\sum_{k,l\in{\mathbb{Z}}} f^h_k\B_{k-l}\xi_l \\&=&\sum_{l\in{\mathbb{Z}}} \(f^h\ast\check\B\)_l\xi_l <\ff.\nn\end{aligned}$$ However, in view of the right-hand side above, it is easier and perhaps more natural to first convolve $f$ with $w_{a,b}=w_{a,b}^{(\beta)}$ and then approximate the convolution on a lattice with side-length $h$.
In particular, for $w_{a,b}$ corresponding to $H\in(1/\aa,1)$, define $$\label{eq:series2} M^h_{\alpha,H}[f]:=\sum_{k\in{\mathbb{Z}}} \(f\ast w_{a,b}\)^h_k \xi_{k},\quad \quad f\in L^1\cap L^{\aa}(\R)$$ where the sequence $\(f\ast w_{a,b}\)^h$ is defined according to . Alternatively, for $w_{a,b}$ corresponding to $H\in(0,1/\aa)$, define $$\label{eq:series2n} M^h_{\alpha,H}[f]:=\sum_{k\in{\mathbb{Z}}} \(f'\ast w_{a,b}\)^h_k\xi_{k},\quad \quad f\in \CC^1(\R), f'\in L^1\cap L^{\aa}(\R).$$ By and the remark above it, $f\ast w_{a,b}\in L^\aa(\R)$. Thus one obtains, from a direct application of Theorem \[maineq2prop\], the following corollary: \[theo:fraccase\] Fix $1<\alpha\leq 2$. Suppose that for all $t$ in an index set $T$, $f_t\in L^1\cap L^\aa$ when $H\in(1/\alpha,1)$, or $f_t\in \CC^1, f'_t\in L^1\cap L^\aa$ when $H\in(0, 1/\alpha)$. Then as $h\to 0$: $$\label{theo:fraccase equation} M^h_{\aa,H}[f_t]\ \stackrel{fdd}{\longrightarrow}\ M_{\aa,H} [f_t].$$ [**Remarks:**]{} 1. It is not hard to extend the $H>1/\aa$ case to stable random measures on $\R^d$ by generalizing the two fixed values $a,b\ge 0$ (representing the negative and positive directions) to a function on the unit sphere $S^{d-1}\subset \R^d$. However, one then has to specify what is meant by “stationary increments” as there are different possibilities for $d>1$. 2. Extending the $H<1/\aa$ case to higher dimensions is more difficult. One possibility is to consider the Marchaud derivative in place of the Riemann-Liouville derivative (see also the next remark). 3. When $H>1/\aa$, Eq. has been shown by various authors in the case where $f_t=1_{[0,t]}$ (see [@kasahara1988weighted] and its references). However, when $H<1/\aa$, to our knowledge, even the case $f_t=1_{[0,t]}$ has not appeared in the literature. It is, however, related to the normalization suggested in Theorem 5.2 of [@kasahara1988weighted] which can be thought of as a discrete Marchaud derivative in the case where $f_t=1_{[0,t]}$.
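To make the scheme concrete, the following sketch (our own illustration; the normalization of the lattice coefficients by $h^{1/\aa}$ and the truncation of the lattice are assumptions, not quotations) computes cell values of $1_{[0,t]}\ast w_{a,b}$ by quadrature, checks them against the closed form $b\,\frac{x^{1-\bb}-(x-t)^{1-\bb}}{1-\bb}$ valid for $a=0$ and $x>t$, and draws one realization of the lattice series using symmetric Pareto noise, which lies in the domain of normal attraction of an $\aa$-stable law:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_indicator_w(x, t, beta, a, b, n=4000):
    """(1_[0,t] * w_{a,b})(x) with w_{a,b}(u) = a*u_-^(-beta) + b*u_+^(-beta),
    computed by midpoint quadrature in y over [0, t]."""
    y = (np.arange(n) + 0.5) * (t / n)
    d = x - y
    w = np.zeros_like(d)
    w[d < 0] = a * (-d[d < 0]) ** -beta
    w[d > 0] = b * (d[d > 0]) ** -beta
    return float(np.sum(w)) * (t / n)

beta, t = 0.4, 1.0                        # H = 1 + 1/alpha - beta in (1/alpha, 1)
exact = (2.25 ** (1 - beta) - 1.25 ** (1 - beta)) / (1 - beta)
print(conv_indicator_w(2.25, t, beta, 0.0, 1.0), exact)  # quadrature vs closed form

# one realization of the lattice series with symmetric Pareto noise
alpha, h, K = 1.5, 0.05, 200
mids = (np.arange(-K, K) + 0.5) * h
coeffs = h ** (1 / alpha) * np.array(
    [conv_indicator_w(m, t, beta, 0.0, 1.0) for m in mids])
xi = rng.uniform(size=2 * K) ** (-1 / alpha) * rng.choice([-1.0, 1.0], 2 * K)
sample = float(np.sum(coeffs * xi))       # one draw approximating the series
print(sample)
```

Repeating the last two lines with fresh noise gives i.i.d. draws whose law approximates that of the linear fractional stable integral of $1_{[0,t]}$.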
Proofs {#sec:proof} ====== Before delving into the proofs, let us recall some facts about the domain of attraction of a stable distribution. We write $f(x)\sim g(x)$ as $x\to c$ if $\lim_{x\to c}f(x)/g(x)=1$. For $\alpha\in(0,2)$, the following statements are equivalent (see [@geluk1997stable Theorem 1] with $p=1/2$): - $\xi$ is in the domain of attraction of ${{\mathcal S}}_\alpha$ (i.e. Eq. holds); - the tail function $t\mapsto {\mathbb{P}}(|\xi|\geq t)$ is regularly varying at infinity with index $-\alpha$ and ${\mathbb{P}}(\xi\leq -t)\sim {\mathbb{P}}(\xi\geq t)$ as $t\to \infty$; - the characteristic function $\lambda(\theta)={\mathbb{E}}\left[e^{i\theta\xi}\right]$ satisfies - $\theta\mapsto 1- \mathrm{Re}(\lambda(\theta))$ is regularly varying at $0$ with index $\alpha$, - for all $x\neq 0$, $$\lim_{\theta\to 0} \frac{x\mathrm{Im}(\lambda(\theta x))-\mathrm{Im}(\lambda(\theta ))}{1- \mathrm{Re}(\lambda(\theta))}=0.$$ Moreover, if conditions (i)-(iii) hold, then $$1- \mathrm{Re}(\lambda(\theta))\sim c_\alpha {\mathbb{P}}(|\xi|\geq 1/\theta)\quad \mbox{with}\ c_\alpha=\int_0^\infty x^{-\alpha}\sin x\, dx$$ as $\theta\to 0$ and also $$\mathrm{Im}(\lambda(\theta))=\theta \int_0^{1/\theta}({\mathbb{P}}(\xi\geq s)-{\mathbb{P}}(\xi\leq-s))ds+ o({\mathbb{P}}(|\xi|\geq 1/\theta)).$$ Also, Remark 3 of [@geluk1997stable] shows that one may choose the normalization constants so that $$\lim_{n\to \infty} n\Big(1- \mathrm{Re}(\lambda(1/a_n))\Big) =1\quad \mbox{and}\quad b_n=n\mathrm{Im}(\lambda(1/a_n)).$$ Recall from that in the present framework, we have assumed $a_n=n^{1/\alpha}$ and $b_n=0$. Thus, $$\label{eq:heavy-tail} {\mathbb{P}}( \xi\geq t) \sim {\mathbb{P}}( \xi\leq -t)\sim \frac{1}{2c_\alpha} t^{-\alpha} \quad \mbox{as}\ t\to \infty$$ and $$\label{eq:fcstable2} \lambda(\theta)=1-|\theta|^\alpha+o(|\theta|^\alpha)=\lambda_\alpha(\theta)+o(|\theta|^\alpha)\quad \mbox{as}\ \theta\to 0.$$ where $\lambda_\alpha$ is defined in Eq. .
Furthermore, (\[eq:heavy-tail\]) implies that there exists $C>0$ such that for any $s>0$ $$\label{eq:stablestim2} {\rm Var}[\xi{\mathbf{1}}_{\{|\xi|\leq s\}}]\leq Cs^{2-\alpha}\quad {\rm and} \quad \E[|\xi|{\mathbf{1}}_{\{|\xi|\leq s\}}]\leq Cs^{1-\alpha}.$$ Proof of Theorem \[theo:whitecase\] ----------------------------------- We begin with a lemma which shows that $\ell^\alpha$ is the right space for the sequence $u$. \[lem1\] If $(\xi_k)_{k\in\N}$ is an i.i.d. sequence in the domain of normal attraction of ${{\mathcal S}}_\aa $, then $\sum_{k}u_k\xi_{k}<\ff$ if and only if $u\in\ell^\alpha$. The case $\alpha=2$ is standard and omitted. Consider $\alpha\in (0,2)$. Recall Kolmogorov’s Three-series Theorem: $\sum_{k} u_k\xi_k$ converges a.s. if and only if for any $s>0$, the following three series converge $$\sum_{k\in \N} {\mathbb{P}}\left[|u_k\xi_k |>s\right],\quad \sum_{k\in \N} {\rm Var}\left [u_k\xi_k {\mathbf{1}}_{\{|u_k\xi_k |\leq s\}}\right], \quad \sum_{k\in \N} \E\left[u_k\xi_k{\mathbf{1}}_{\{|u_k\xi_k |\leq s\}}\right] .$$ Eq. (\[eq:heavy-tail\]) implies $$\nn {\mathbb{P}}\left[|u_k\xi_k |>s\right]\sim C|u_k|^\alpha s^{-\alpha}$$ and hence the first series converges if and only if $u\in \ell^\alpha$. If $u\in\ell^\alpha$, then (\[eq:stablestim2\]) implies the convergence of the third series since $$|u_k| \E[|\xi_k| 1_{\{|\xi_k|<s/|u_k|\}}] \leq |u_k| C(s/|u_k|)^{1-\alpha} = Cs^{1-\alpha} |u_k|^\alpha.$$ The convergence of the second series is obvious. If $\eta_{k,j}$ are i.i.d.
${{\mathcal S}}_\aa$ random variables, then for any fixed $j$ the above lemma allows us to write $$\label{eq:fc-stableintegral} \E\exp\Big\{i\th \sum_{k\in\N} u^{(j)}_k\eta_{k,j}\Big\}=\prod_{k\in\N} \lambda_\aa\left( u^{(j)}_k\th\right)= \exp\left(-\|u^{(j)}\|_{\ell^\aa}^\aa|\th|^\aa\right)$$ and $$\label{eq:fc-noise} \E\exp\{i\theta \sum_{k\in\N} u^{(j)}_k\xi_{k,j}\}=\prod_{k\in\N} \lambda\left( u^{(j)}_k\th \right).$$ Note that since $\|u\|_{\ell_\aa}^\aa:=\sum_{k\in\N} |u_k|^\aa$ converges absolutely, the order in which the summation and products above are taken is irrelevant. It suffices to show that as $ j\to\ff$, $$\label{eq:diff} \prod_{k\in\N} \lambda\left( u^{(j)}_k\th \right)=\prod_{k\in\N} \lambda_\aa\left( u^{(j)}_k\th \right)+o(1).$$ We fix $j$ and estimate the difference of the above products using the following fact: if $(z_i)_{i\in I}$ and $(z_i')_{i\in I}$ are two families of complex numbers with moduli no greater than $1$ and such that the products $\prod_{i\in I}z_i$ and $\prod_{i\in I}z'_i$ converge, then $$\label{standardinequality} \Big|\prod_{i\in I}z'_i -\prod_{i\in I}z_i\Big|\ \le\ \sum_{i\in I} |z'_i - z_i|.$$
We therefore have $$\begin{aligned} \left|\prod_{k\in\N} \lambda\left( u^{(j)}_k\th \right)- \prod_{k\in\N} \lambda_\aa\left( u^{(j)}_k\th \right)\ \right| \leq \ \sum_{k\in\N} \ \left|\ \lambda \left(u^{(j)}_k\th \right)-\lambda_\alpha \left(u^{(j)}_k\th \right)\ \right|\label{eq:diff1}.\end{aligned}$$ Equation (\[eq:fcstable2\]) implies[^6] that the function $g$ defined by $g(0)=0$ and $$g(u)=|u|^{-\alpha}\left|\lambda(u)-\lambda_\aa(u)\right|\ \ ,\ \ u\neq 0,$$ is continuous and bounded and for any $k\in\N$, we have $$\left| \lambda \left( u^{(j)}_k\th \right)- \lambda_\aa \left( u^{(j)}_k\th \right) \right| = g( u^{(j)}_k\th )| u^{(j)}_k\th |^\alpha .$$ In order to obtain a uniform estimate on the above, define the function $\tilde g:\R^+\to\R^+$ by $$\tilde g(v) :=\sup_{|u|\leq v} |g(u)|.$$ Note that $\tilde g$ is continuous, bounded and vanishes at $0$, and that for any $k\in \N$ such that $| u^{(j)}_k\th |\leq \varepsilon$, $$\label{eq:diff2} \left| \lambda \left( u^{(j)}_k\th \right)- \lambda_\aa \left( u^{(j)}_k\th \right) \right|\leq \tilde g(\varepsilon)| u^{(j)}_k\th |^\alpha .$$ Let $\varepsilon>0$. Equations (\[eq:diff1\]) and (\[eq:diff2\]) together yield $$\begin{aligned} & &\left|\prod_{k\in\N} \lambda\left( u^{(j)}_k\th \right)- \prod_{k\in\N} \lambda_\aa\left( u^{(j)}_k\th \right)\ \right|\\ &\leq& \ \tilde g(\varepsilon)\sum_{k\in\N}\left| u^{(j)}_k\th \right|^\alpha{\mathbf{1}}_{\{| u^{(j)}_k\th |\leq\varepsilon\}} + 2\sum_{k\in\N}{\mathbf{1}}_{\{| u^{(j)}_k\th |>\varepsilon\}}.\end{aligned}$$ Now, by the continuity of $\tilde g$ at $0$, $\tilde g(\varepsilon)$ is small when $\varepsilon$ is small. Eq. follows since $\lim_{j\to\ff} \|u^{(j)}\|_{\ell^\infty}=0$ implies $\sum_{k\in\N}{\mathbf{1}}_{\{| u^{(j)}_k\th |>\varepsilon\}}\to 0$ as $j\to\ff$. Proof of Theorem \[maineq2prop\] -------------------------------- Let $\lfloor\cdot\rfloor$ denote the floor function applied to each coordinate of $\R^d$. 
Define $\fh:\R^d\mapsto\R$ to be a piece-wise constant function approximating $f\in L^1_{\text{loc}}(\R^d)$: $$\begin{aligned} \label{def:fh} \fh(x)&:=& \int_{h(\lfloor h^{-1}x\rfloor+I^d)}h^{-d} f(y)\, dy\\ &=&\int_{h(k+I^d)} h^{-d} f(y)\, dy, \qquad x\in h(k+I^d)\nn\\ &=& h^{-d/\aa}\, f^h(k), \qquad x\in h(k+I^d).\nn\end{aligned}$$ Note that $$\label{normsequal} \|f^h\|_{\ell^\aa}=\|\fh\|_{L^\aa}.$$ \[laa lemma\] For $\aa\in[1,2]$, suppose $f\in L^\aa(\R^d)$. Then as $h\to 0$, $$\lim_{h\to 0}\| \fh-f\|_{L^\aa}=0.$$ To reduce notation we assume $d=1$, but the proof holds for general $d$. Fix $k\in\Z$ and consider the sequence of $h$’s such that $h=2^{-j}$ for $j\in\N$. We will exploit the fact that $\fh1_{[k,k+1)}$ is a martingale (in time $j$) with respect to Lebesgue measure on $[k,k+1)$ and with respect to the $\sigma$-fields generated by the sets $2^{-j}[i,i+1), i\in\Z$. For $\aa\ge 1$, $|\fh|^\aa1_{[k,k+1)}$ is a submartingale which, by the martingale convergence theorem, converges a.s. to $|f|^\aa1_{[k,k+1)}$. Thus, Fatou’s lemma, together with the submartingale property, gives $$\label{eq:aag1} \lim_{h\to 0}\|\fh 1_{[k,k+1)}\|_{L^\aa}^\aa=\|f1_{[k,k+1)}\|_{L^\aa}^\aa.$$ Since $\fh1_{[k,k+1)}$ converges a.s. and the $L^\aa$-norms converge, we have convergence in $L^\aa(\R)$ of $\fh1_{[k,k+1)}$ and also for $\fh1_{[-N,N)}$ for any $N\in\N$. For $f\in L^\aa(\R)$ without compact support, simply choose $N$ so that $$\|f1_{[-N,N)^c}\|_{L^\aa}^\aa<\epsilon.$$ Since $|\fh|^\aa1_{[k,k+1)}$ is a submartingale, we also have uniformly in $h$. Finally, to extend the above to general $h\to 0$, note that all we really require is a sequence of lattices such that finer lattices are sublattices of prior ones and that the mesh size goes to zero. But any such sequence has the same limit in $L^\aa(\R)$, thus we conclude that the only real requirement is that the mesh size goes to zero.
By the Cramér-Wold device, we must show that for all $\theta_1,\ldots,\theta_n\in{\mathbb{R}}$ and $f_1,\ldots,f_n\in L^\aa(\R^d)$, $$\sum_{i=1}^n \theta_iM_\alpha^h[f_i]\Longrightarrow \sum_{i=1}^n \theta_iM_\alpha[f_i]\quad \mbox{as}\ h\to 0.$$ Our proof uses Theorem \[theo:whitecase\]. First note that the comment following shows that switching the order of summation in the series $M_\alpha^h[f_i]$ does not affect its distribution. This, together with the linearity of $M_\alpha$ and $M_\alpha^h$, allows us to reduce the above to verifying $$\nn M_\alpha^h[f]\Longrightarrow M_\alpha[f]\quad \mbox{as}\ h\to 0$$ for a single $f\in L^\aa(\R^d)$. This will follow from Theorem \[theo:whitecase\] provided we check the two conditions $$\label{eq:cond1} \lim_{h\to 0}\|f^h\|_{\ell^\alpha}=\|f\|_{L^\alpha}$$ and $$\label{eq:cond2} \lim_{h\to 0}\|f^h\|_{\ell^\infty}=0.$$ We consider $\aa\in[1,2]$ first. Condition easily follows from and Lemma \[laa lemma\]. For , note that convergence of the $L^1$ norms of $|\fh|^\aa$, coupled with a.e. convergence, shows that the family $\{|\fh|^\aa\}_{h}$ is uniformly integrable. For $\aa\in(0,1)$, we first consider the sequence of $h$’s such that $h=2^{-j}$ for $j\in\N$. By uniform integrability and the martingale convergence theorem (see the proof of Lemma \[laa lemma\]), we see that $\fh1_{[k,k+1)}$ converges in $L^1$ to $f1_{[k,k+1)}$. The final comment in the proof of Lemma \[laa lemma\] shows the convergence also holds for arbitrary $h\to 0$. Next, note that contains $L^\aa([k,k+1))$ and that the endomorphism on $L^1([k,k+1))$ which maps $$f1_{[k,k+1)}\ \mapsto \ |f|^\aa1_{[k,k+1)}$$ is continuous. Thus Eq. holds for $\aa\in(0,1)$. Since $f\in L^\aa$ we can choose $N_1$ so that $\|f1_{[-N_1,N_1)^c}\|_{L^\aa}^\aa$ is small. However, to uniformly bound the tails of the $\fh$, we will use the stronger condition of $f\in L^{\aa-\epsilon}$.
In particular, there exist $N_2>0$, $C>0$ and $\delta>\alpha^{-1}$ such that $|x|\ge N_2$ implies $|f(x)|\leq C|x|^{-\delta}$. We have for $|x|\geq N_2+{h}$ that $${h}(\lfloor{h}^{-1}x\rfloor+I)\subset (-N_2,N_2)^c$$ and $$\label{eq:maj-unif} |\fh(x)|^\alpha=\left|{h}^{-1}\int_{{h}(\lfloor{h}^{-1}x\rfloor+I)}f(y) \, dy \right|^\alpha\leq C^\aa{h}^{1-\alpha}(|x|-{h})^{-\alpha\delta}.$$ Since $\alpha\delta>1$, follows from . Finally, as before, we see that along with a.e. convergence gives for $\aa\in(0,1)$. Acknowledgements {#acknowledgements .unnumbered} ================ We are grateful to Gennady Samorodnitsky for helpful correspondence. [EVDH03]{} A. Astrauskas. Limit theorems for sums of linearly generated random variables. *Lithuanian Mathematical Journal*, 23(2):127–134, 1983. F. Avram and M.S. Taqqu. Weak convergence of sums of moving averages in the $\alpha$-stable domain of attraction. *The Annals of Probability*, pages 483–503, 1992. F. Biagini, Y. Hu, B. [Ø]{}ksendal, and T. Zhang. *Stochastic Calculus for Fractional Brownian Motion and Applications*. Springer Verlag, 2008. S. Cambanis and M. Maejima. Two classes of self-similar stable processes with stationary increments. *Stochastic Processes and their Applications*, 32(2):305–329, 1989. Y.A. Davydov. The invariance principle for stationary processes. *Theory of Probability and its Applications*, 15(3):498–509, 1970. R. Davis and S. Resnick. Limit theory for moving averages of random variables with regularly varying tail probabilities. *The Annals of Probability*, pages 179–195, 1985. L. Decreusefond and A.S. [Ü]{}st[ü]{}nel. Stochastic analysis of the fractional Brownian motion. *Potential Analysis*, 10(2):177–214, 1999. R.M. Dudley. Random linear functionals. *Transactions of the American Mathematical Society*, 136:1–24, 1969. R.J. Elliott and J. van der Hoek. A general fractional white noise theory and applications to finance. *Mathematical Finance*, 13(2):301–330, 2003. J.L. Geluk and L.F.M. de Haan. Stable probability distributions and their domains of attraction. Technical report, Tinbergen Institute, 1997. Y. Kasahara and M. Maejima. Weighted sums of i.i.d. random variables attracted to integrals of stable processes. *Probability Theory and Related Fields*, 78(1):75–96, 1988. P.S. Kokoszka and M.S. Taqqu. Fractional ARIMA with stable innovations. *Stochastic Processes and their Applications*, 60(1):19–47, 1995. M. Maejima. On a class of self-similar processes. *Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete*, 62(2):235–245, 1983. M. Maejima and S. Suzuki.
Limit theorems for weighted sums of infinite variance random variables attracted to integrals of linear fractional stable motions. , 31(2):259–271, 2008. V.V. Petrov. *Limit Theorems of Probability Theory: Sequences of Independent Random Variables*. Oxford Science Publications, 1995. V. Pipiras and M.S. Taqqu. Integration questions related to fractional Brownian motion. *Probability Theory and Related Fields*, 118(2):251–291, 2000. J. Rosinski. Series representations of Lévy processes from the perspective of point processes. In *Lévy Processes: Theory and Applications*, page 401, 2001. S.G. Samko, A.A. Kilbas, and O.I. Marichev. *Fractional Integrals and Derivatives: Theory and Applications*. London: Gordon and Breach, 1987. G. Samorodnitsky and M.S. Taqqu. *Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance*. Chapman & Hall/CRC, 1994. [^1]: Laboratoire LMA, CNRS UMR 7348, Université de Poitiers, Téléport 2, BP 30179, F-86962 Futuroscope-Chasseneuil cedex, France. Email: clement.dombry@math.univ-poitiers.fr [^2]: Department of Mathematics, University of Alabama Birmingham, USA. Email: pjung@uab.edu [^3]: The term [*linear fractional stable motion*]{} was introduced in [@cambanis1989two] due to its close relation to linear time series (moving average processes). [^4]: Our approximation in Thm \[maineq2prop\], as well as the Lindeberg-Feller result, can be extended to stable distributions with skewness $\nu\neq 0$; however, to simplify calculations and notation we have assumed symmetry. [^5]: In [@avram1992weak], it was also shown that under the right conditions, convergence does occur in Skorokhod’s $M_1$ topology. [^6]: We have assumed $\alpha\in(0,2)$ for Eq. , but for $\alpha=2$ it is well known.
--- abstract: 'We propose a model-based deep learning architecture for the reconstruction of highly accelerated diffusion magnetic resonance imaging (MRI) that enables high resolution imaging. The proposed reconstruction jointly recovers all the diffusion weighted images in a single step from a joint k-q under-sampled acquisition in a parallel MRI setting. We propose the novel use of a pre-trained denoiser as a regularizer in a model-based reconstruction for the recovery of highly under-sampled data. Specifically, we designed the denoiser based on a general diffusion MRI tissue microstructure model for multi-compartmental modeling. By using a wide range of biologically plausible parameter values for the multi-compartmental microstructure model, we simulated diffusion signals that span the entire microstructure parameter space. A neural network was trained in an unsupervised manner using an autoencoder to learn the diffusion MRI signal subspace. We employed the autoencoder in a model-based reconstruction and show that the autoencoder provides a strong denoising prior to recover the q-space signal. We show reconstruction results on a simulated brain dataset that demonstrate the high acceleration capability of the proposed method.' address: 'University of Iowa, Iowa, USA' bibliography: - 'Dmri\_HD.bib' title: 'Model-Based Deep Learning for Reconstruction of Joint k-q Under-sampled High Resolution Diffusion MRI' --- K-q space deep learning, diffusion MRI, autoencoder, model-based deep learning Introduction {#sec:intro} ============ Diffusion weighted magnetic resonance imaging (DWI) is a widely used neuroimaging technique, which can provide rich information about a variety of tissue microstructural properties, including brain connectivity and the density of neurons [@Novikov2019].
The acquisition of diffusion MRI (dMRI) at high spatial resolution and on a large number of q-space points is needed to probe the tissue microstructural information and resolve the ambiguities in the parameters related to tissue microstructure [@Sotiropoulos2013; @Jbabdi2015]. Conventional single-shot echo-planar imaging (EPI) techniques have limited ability to improve the spatial resolution of dMRI. The long readouts required for higher resolution often cause geometric distortions and blurring artifacts in the images. Several researchers have hence employed multi-shot EPI (msEPI) methods, where the k-space acquisition is segmented into multiple shots of shorter readout duration. However, a challenge with msEPI-based DWI acquisition schemes is the phase inconsistency between the shots. When the k-space data from the different shots are merged, these phase errors translate to ghosting artifacts in the images. Moreover, the multiple shots required to encode the images prolong the acquisition time. Several acceleration methods have been introduced in diffusion MRI to overcome the above challenges. These include (a) spatial (k-space) acceleration methods that rely on parallel MRI and compressed sensing [@Shi2015; @Liao2017], (b) q-space acceleration methods that acquire only a subset of the q-space data and rely on data priors to fill in the missing information [@Michailovich2011b; @Welsh2013], and (c) k-q acceleration methods that jointly under-sample both k- and q-spaces [@Mani2014; @Schwab2018]. While the joint k-q under-sampling schemes can afford higher acceleration factors, the main challenges include (i) the high computational complexity of such schemes, resulting from the need to perform joint optimization, and (ii) the inability to account for complex diffusion models that do not conform with sparsity-based models. We propose a deep-learning based joint reconstruction algorithm for multi-shot diffusion MRI.
The proposed scheme relies on a model-based reconstruction that simultaneously performs phase correction and jointly recovers artifact-free DWIs from highly under-sampled acquisitions. Specifically, a data fidelity term performs phase correction using the generalized SENSE reconstruction with known phase maps, while a deep-learned prior exploits the redundancy in the q-space data. To achieve this, we trained a denoising auto-encoder (DAE) using training data generated by a generalized diffusion model. The non-linear network is shown to learn a projection to the data-manifold, thus denoising the images. We propose to use the residual error of the network as a prior in a model-based reconstruction scheme. The reconstructed DWIs can then be used for further analysis to estimate the diffusion microstructure model parameters. The proposed scheme has significant differences from deep-learning based q-space acceleration techniques [@Golkov2016]. These schemes rely on supervised learning to learn the mapping from the diffusion signal to the parameters of a specific model (e.g. NODDI) from fully sampled q-space images. By contrast, our focus is to recover the DWI data with high spatial and q-space resolution, which allows the fitting of any desired diffusion model. Methods {#sec:format} ======= Standard Multi-compartmental Diffusion Model -------------------------------------------- The diffusion signal in the brain is often modeled by a multi-compartment model [@Novikov2019] that accounts for the intra- and extra-neurite tissue compartments in each voxel, in addition to an isotropic compartment. The signal model is given by $$ \rho(b, \mathbf g) = \rho_0\int_{\hat{ \bf{n}}} {\cal{P}}(\hat{ \bf{n}}) \circledast K(b,\hat{\bf{g}} \cdot \mathbf n) ~d\hat {\bf{n}} \label{model}$$ where $\mathcal P$ is the fiber orientation distribution function (ODF) and $\circledast$ denotes a spherical convolution operation with a kernel $K $.
The kernel is specified by $$\hspace{0em} K(b,\zeta) = f_1e^{-bD_a\zeta^2} + f_2e^{-bD_e^{\perp} -b\left(D_e^{||} - D_e^{\perp}\right)\zeta^2}+f_{\rm iso} e^{-bD_{\rm iso}}, \notag$$ where the $f_{i}$’s are the volume fractions, the $D$’s are the compartmental diffusivities, $b$ is the diffusion gradient strength, and $\rho(b, \mathbf g) $ and $\rho_0$ are the diffusion weighted and the reference non-diffusion weighted signals. The above diffusion signal model is very rich, with several free model parameters. It has been noted to be useful for detailed microstructural analysis, i.e., the estimation of several tissue microstructure parameters, when high-quality diffusion data are available. Image Formation for msDWI -------------------------- Let $\rho_{q}(\mathbf x),\ q=1,\ldots,Q$ represent the diffusion weighted image for the $q^{\rm th}$ location in q-space (the 3D space spanned by $b$-$\mathbf g$), where $\mathbf x$ represents the spatial coordinates. Then, the image acquisition model for an $S$-shot sampling in the presence of Gaussian noise $\mathbf n$ can be represented as: $$\label{eq:model1} \mathbf {\hat y}_s = \mathcal{A}_s({\rho_{q}}) + \mathbf n, ~~ s=1:S $$ where $\mathbf {\hat y}_s$ is the measured k-space data from shot $s$, and $\mathcal{A}_s=\mathcal{S}_s\circ \mathcal{F} \circ \mathcal {C} $. Here, $\mathcal{F}$, $\mathcal{S}_s$, and $\mathcal{C}$ denote the Fourier transform, the selection of the acquired k-space samples for a specific shot $s$, and weighting by the coil sensitivities, respectively. For the phase-compensated reconstruction of msDW data, we absorb the phase term into the coil sensitivity maps. In a fully sampled scenario, the sampling patterns for the different shots are complementary; the combination of the data from the different shots will result in a fully sampled k-space. However, such fully sampled acquisitions result in long acquisition times.
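Returning to the single-fiber response kernel $K(b,\zeta)$ above, it can be sketched as follows (a toy illustration; the parameter values are merely examples within the physiological ranges quoted later, with $b$ in ms/$\mu$m$^2$ and diffusivities in $\mu$m$^2$/ms):

```python
import numpy as np

def kernel_K(b, zeta, f1, f2, f_iso, Da, De_par, De_perp, D_iso):
    """Single-fiber kernel K(b, zeta) of the standard multi-compartment model;
    zeta is the cosine of the angle between the gradient g and the fiber n."""
    intra = f1 * np.exp(-b * Da * zeta ** 2)
    extra = f2 * np.exp(-b * De_perp - b * (De_par - De_perp) * zeta ** 2)
    iso = f_iso * np.exp(-b * D_iso)
    return intra + extra + iso

bvals = np.array([0.0, 1.0, 2.0])      # b = 0 plus two shells
sig = kernel_K(bvals, 0.5, f1=0.4, f2=0.45, f_iso=0.15,
               Da=2.2, De_par=1.8, De_perp=0.6, D_iso=3.0)
print(sig)    # sig[0] == 1 since the fractions sum to one; decays with b
```

Sweeping the fractions, diffusivities, and fiber directions over grids yields the kind of dictionary of normalized signal atoms described in the experimental section.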
To simultaneously achieve high spatial and angular resolution using multi-shot sequences in a reasonable scan time, we propose to under-sample the joint k-q space of dMRI. Figure \[fig:fig1\] represents the proposed joint k-q under-sampling that we pursue in the current work. This joint k-q acceleration scheme can be effectively achieved on MRI scanners by randomly under-sampling the shots for each of the q-space sampling points. We compactly denote the acquisition process as $$\label{eq:compact} \widehat {\mathbf Y} = \mathcal{A}\left(\mathbf P\right) + \mathbf N,$$ where $\widehat{\mathbf Y}$ is the Casorati matrix (of dimension $ N_1 \times N_2 \times Q$) of the data corresponding to the different q-space points. Model-based Joint Reconstruction Algorithm ------------------------------------------ At high acceleration factors, the k-q under-sampled data needs to be jointly reconstructed. Denoting the k-space measurement matrix for the joint reconstruction as $\widehat{\mathbf Y}$, we propose to recover $\mathbf P$ by solving: $$\label{recon} \mathbf P = \operatorname*{arg\,min}_{\mathbf P} \norm{\mathcal {A}({\mathbf P})-\widehat{\mathbf Y }}_2^2 + \lambda ~~ \mathcal{R}({\mathbf P}). $$ Here, the joint reconstruction enforces data consistency (DC) to the measured data using the generalized SENSE encoding operator $\mathcal{A}$ in the first term. The second term is an arbitrary regularization prior $\mathcal{R}$. Priors including total variation spatial regularization and sparsity have been introduced by other researchers [@Michailovich2011b; @Welsh2013; @Mani2014; @Schwab2018]. In our previous work [@Mani2014], we employed sparsity priors, assuming a ball-and-stick diffusion dictionary model similar to MR fingerprinting. However, the extension of this idea for the recovery of the parameters directly from the acquired data using fingerprinting-like recovery is complicated for diffusion models such as the model in Eq. .
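A minimal sketch of the per-shot encoding operator (our own toy: a random 2D Cartesian mask stands in for the shot's sampling pattern, and random maps stand in for calibrated coil sensitivities), together with the standard adjoint test used to validate such operators before plugging them into the data-consistency term:

```python
import numpy as np

rng = np.random.default_rng(1)
N, C = 32, 4                                   # image size N x N, C coils

coils = rng.standard_normal((C, N, N)) + 1j * rng.standard_normal((C, N, N))
mask = rng.uniform(size=(N, N)) < 0.3          # k-space locations kept by one shot

def A(rho):
    """One-shot encoding A_s = S_s o F o C: coil weighting, FFT, then sampling."""
    return mask * np.fft.fft2(coils * rho, norm="ortho")

def AH(y):
    """Adjoint of A: zero-fill, inverse FFT, then coil combination."""
    return np.sum(np.conj(coils) * np.fft.ifft2(mask * y, norm="ortho"), axis=0)

x = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
y = rng.standard_normal((C, N, N)) + 1j * rng.standard_normal((C, N, N))
lhs = np.vdot(A(x), y)      # <A x, y>
rhs = np.vdot(x, AH(y))     # <x, A^H y>
print(abs(lhs - rhs))       # ~ 0 up to floating point round-off
```

A correct adjoint pair is what makes gradient-based solution of the data-consistency subproblem well defined.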
Evidently, the main challenge is the large size of the dictionary, resulting from the large number of free parameters, as well as the high coherence between the atoms, which makes $\ell_{1}$ minimization challenging.

Denoising Autoencoder Prior
---------------------------

We introduce a self-learning dMRI framework based on denoising autoencoders (DAE). DAEs were introduced as unsupervised schemes to learn the data manifold. Theoretical results show that the DAE representation error is a measure of the derivative of the smoothed log density [@Vincent2008] of the data; the derivative is zero if the point is on the manifold, while it is high when the point moves away from the *data-manifold*. Instead of using a dictionary-based sparse prior, we propose to pre-learn a DAE from the dictionary $\mathbf Z$ such that: $$\label{daetraining} \Theta^* = \arg \min_{{\Theta}} \mathbb E_{I} \left(\mathbb E_{\mathbf S \sim \mathcal N(\mathbf 0,\sigma_i^2)}\|\mathcal D_{\Theta}\left(\mathbf Z+\mathbf S\right) - \mathbf Z\|_F^2\right)$$ Here, $\mathbb E$ denotes the expectation operator and $\mathbf S$ is a noise realization from a zero-mean complex Gaussian density with variance $\sigma_{i}^{2}$; the $\sigma_{i}$ are chosen from a set of variances indexed by the set $I$. Once the parameters $\Theta$ are learned, we use the trained denoiser as a regularizer in a plug-and-play framework [@zhang2017learning] as: $$\label{joint} \mathbf P^{*} = \arg \min_{\mathbf P} \|\mathcal A (\mathbf P)-\widehat{\mathbf Y}\|^{2}_{2}+ \lambda~ \|\mathbf P - \mathcal D_{\Theta}(\mathbf P)\|^{2},$$ where $\mathcal N_{\Theta}(\mathbf P) = \mathbf P-\mathcal D_{\Theta}(\mathbf P)$ is the DAE error.
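To build intuition for the residual $\mathcal N_{\Theta}$, the toy sketch below replaces the learned network $\mathcal D_{\Theta}$ with a linear projection onto the span of clean dictionary signals: the residual vanishes for signals on this (here, linear) manifold and grows away from it. This is purely an illustration of the regularizer's behavior, not the trained DAE used in our experiments:

```python
import numpy as np

def fit_projection_denoiser(Z, rank):
    """Toy stand-in for D_Theta: orthogonal projection onto the
    rank-dimensional subspace spanned by the dictionary columns of Z
    (each column is one q-space signal)."""
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    U = U[:, :rank]
    def denoise(p):
        return U @ (U.conj().T @ p)
    return denoise

def dae_residual(p, denoiser):
    """N_Theta(p) = p - D_Theta(p): zero on the signal manifold,
    large away from it."""
    return p - denoiser(p)
```

Penalizing $\|\mathbf P - \mathcal D_{\Theta}(\mathbf P)\|^{2}$ therefore pulls each voxel's q-space profile toward the learned signal manifold while the DC term keeps it consistent with the measured k-space data.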
We solve the proposed joint recovery optimization using the following alternating minimization steps: $$\begin{aligned} \label{jointsteps} \mathbf P_{n+1} &=& \arg \min_{\mathbf P} \|\mathcal A (\mathbf P)-\widehat{\mathbf Y}\|^{2}+ \lambda~ \|\mathbf P-\mathbf Q_n\|^2\\ \mathbf Q_{n+1} &=& \mathcal D_{\Theta}(\mathbf P_{n+1}).\end{aligned}$$

Experimental Setup
------------------

### Dictionary generation

To generate the dictionary $\mathbf Z$, we employ the DWI signal model in Eq  and generate the diffusion signal $S(b, \mathbf g)$ for a range of model parameters. The model has 7 free parameters, all of which were varied within their physiological ranges to generate a dictionary that is a small subset of all possible diffusion signals. Specifically, we used the ranges $f_{i} \in [0,1]$ and $D \in [0.1,3]$ [@Novikov2019]. The fiber direction $\hat{\bf{n}}$ was varied over 30 different unit vectors in 3D space, with crossing fibers simulated as linear combinations of these unit vectors. Since the reconstruction concerns the recovery of complex data, the generated signals were modulated with random phase terms, which counts as an additional parameter.

### DAE architecture and training

The generated diffusion signals were corrupted with noise at levels of $0\%$, $20\%$, $40\%$, and $60\%$, and were used for training. The training data was fed to an autoencoder neural network. In this preliminary work, we employed an architecture with three fully connected layers and ReLU activation functions. The dimension of the input layer matched the dimension of the q-space sampling; the bottleneck layer was constrained to one fourth of the input dimension.

### Testing data

To test the joint reconstruction, we used synthesized brain MRI data.
This ground truth data was generated as follows: high-quality brain diffusion data was collected on a human volunteer using a variable-density interleaved spiral acquisition with 22 spatial interleaves, achieving a high in-plane spatial resolution of 1.1 mm. The data was collected on a 3T MRI scanner with an 8-channel head coil. 60 DWIs were acquired using the fully sampled spiral acquisition and were independently reconstructed using a CG-NUFFT SENSE reconstruction. The fiber orientation distribution functions in each pixel of this data were estimated and stored; these fiber orientations were then used to generate the synthetic brain data. Figure \[fig:fsim\] shows one DWI from this synthetic ground truth data, which displays crossing fibers in several voxels. The ground truth data was retrospectively under-sampled to generate the joint k-q under-sampled data for testing. Here, we assumed a Cartesian acquisition, and the under-sampling was simulated using a multi-shot EPI scheme at different shot factors to study various acceleration factors; acceleration factors of R = 4, 6, and 8 were considered. Random phase values were added to each of the shot images to simulate the phase errors of multi-shot imaging.

Results {#sec:pagestyle}
========

The goal is to derive a regularization prior that can denoise the diffusion signal in the q-domain and that can be applied voxel-wise along the q-dimension during the joint reconstruction. Figure \[fig:denoise\] shows the successful learning of the q-space signal manifold by the DAE. The trained DAE was then used in the joint reconstruction in Eq  to recover all 60 DWIs simultaneously at various undersampling factors, following the alternating scheme discussed above. Figure \[fig:usfig\] shows the results of the proposed reconstruction for various acceleration factors. Here, the first row shows the 4-shot case, where only one shot per DWI was sampled; the shot was chosen randomly for each DWI.
Similarly, the second and third rows show the 6-shot and 8-shot cases. In all cases, only one randomly chosen shot per q-space point was sampled. The performance of the denoiser at the first iteration, as well as the DC updates at various stages of the reconstruction, are shown in Figure 4. The root-mean-square error (RMSE) and peak signal-to-noise ratio (PSNR) for the various acceleration factors are reported in Table 1. It is clear from Figure \[fig:usfig\] and Table 1 that the proposed DAE regularizer is an efficient recovery prior for the reconstruction of highly under-sampled data.

\[tab:time\]

  Acceleration     RMSE       PSNR
  -------------- ---------- ---------
  $R = 4$         $0.0176$   $35.04$
  $R = 6$         $0.0548$   $25.19$
  $R = 8$         $0.079$    $22.01$

  : Reconstruction error of the proposed scheme for various undersampling factors.

Discussion & Conclusion
=======================

We introduced a model-based deep learning framework for the joint recovery of DWIs from joint k-q under-sampled data. In this preliminary work, we showed the feasibility of employing a DAE to pre-learn the projection onto the q-space signal manifold. The pre-learning was performed on diffusion data simulated using a general diffusion model with several degrees of freedom. We note that the accuracy of the DAE is determined by the training data; specifically, simulating the training data over a wider range of parameters results in improved denoising. The need to account for multiple fiber orientations per voxel significantly inflates the parameter space. In the current study, we only considered 30 unique fiber directions, which may have contributed to reduced accuracy. In future work, we will explore a larger dictionary with more fiber directions. We also plan to extend this work to the recovery of multi-shell dMRI data.